
SYLLABUS

1. Introduction to Software: Introduction of Software, The Evolving Role of Software, Software Characteristic, Software Application, Software Crisis, Software Myths.
2. Software Engineering Approach: Software Engineering Approach, A Generic View of Software Engineering, Software Process, Software Process Models - Waterfall Model, Prototype Model, Incremental Model, Spiral Model, COCOMO Model.
3. Software Process and Project Metrics: Measures, Metrics and Indicators, Metrics in the Process and Project Domains, Software Measurement, Metrics for Software Quality, Integrating Metrics within the Software Engineering Process.
4. Software Project Planning: Project Planning Objectives, Software Scope, Resources, Software Project Estimation, Decomposition Techniques, Empirical Estimation Models, the Make-Buy Decision.
5. Risk Management: Software Risks, Risk Identification, Risk Projection, Risk Mitigation, Monitoring, and Management.
6. Software Requirement Definition: Software Requirement Specification, Formal Specification Techniques, Languages and Processors for Requirement Specification.
7. Software Design: Fundamental Design Concepts, Modules and Modularization Criteria, Design Notation, Design Techniques, Detailed Design Considerations.
8. Implementation Issues: Structured Coding Techniques, Coding Styles, Standards and Guidelines, Documentation Guidelines.
9. Verification and Validation: Quality Assurance, Walkthroughs and Inspections, Symbolic Execution, Unit Testing and Debugging, System Testing, Formal Verification.
10. Maintenance: Introduction, Enhancing Maintainability during Development, Configuration Management, Managerial Aspects of Software Maintenance, Source-Code Metrics, Other Maintenance Tools and Techniques.

References:
Software Engineering: A Practitioner's Approach - Roger S. Pressman
Software Engineering Concepts - Richard Fairley

TABLE OF CONTENTS

UNIT 1 INTRODUCTION TO SOFTWARE
1.1 Introduction to Software
1.2 The Evolving Role of Software
1.3 Software Characteristic
1.4 Software Application
1.5 Software Crisis
1.6 Software Myths

UNIT 2 SOFTWARE ENGINEERING APPROACH
2.1 Introduction to Software Engineering
2.2 Generic View of Software Engineering
2.3 The Software Process
2.4 Software Process Models
2.4.1 The Linear Sequential Model
2.4.2 The Prototype Model
2.4.3 The Incremental Model
2.4.4 The Spiral Model
2.4.5 The COCOMO Model

UNIT 3 SOFTWARE PROCESS AND PROJECT METRICS
3.1 Introduction
3.2 Measures, Metrics and Indicators
3.3 Metrics in the Process and Project Domains
3.3.1 Process Metrics and Software Process Improvement
3.3.2 Project Metrics
3.4 Software Measurement
3.4.1 Size-Oriented Metrics
3.4.2 Function-Oriented Metrics

3.4.3 Extended Function Point Metrics
3.5 Metrics for Software Quality
3.5.1 An Overview of Factors That Affect Quality
3.5.2 Measuring Quality
3.5.3 Defect Removal Efficiency
3.6 Integrating Metrics within the Software Engineering Process
3.6.1 Arguments for Software Metrics
3.6.2 Establishing a Baseline
3.6.3 Metrics Collection, Computation, and Evaluation

UNIT 4 SOFTWARE PROJECT PLANNING
4.1 Introduction
4.2 Project Planning Objectives
4.3 Software Scope
4.3.1 Obtaining Information Necessary for Scope
4.3.2 Feasibility
4.4 Resources
4.4.1 Human Resources
4.4.2 Reusable Software Resources
4.4.3 Environmental Resources
4.5 Software Project Estimation
4.6 Decomposition Techniques
4.6.1 Software Sizing
4.6.2 Problem-Based Estimation
4.6.3 Process-Based Estimation
4.7 Empirical Estimation Models
4.7.1 The Structure of Estimation Models
4.7.2 The COCOMO Model
4.7.3 The Software Equation
4.8 The Make/Buy Decision
4.8.1 Creating a Decision Tree
4.8.2 Outsourcing

UNIT 5 RISK MANAGEMENT
5.1 Software Risks
5.2 Risk Identification
5.2.1 Product Size Risk
5.2.2 Business Impact Risk
5.2.3 Customer Related Risks
5.2.4 Process Risk
5.2.5 Technology Risk
5.2.6 Development Environment Risk
5.2.7 Risk Associated with Staff Size and Experience
5.2.8 Risk Components and Drivers
5.3 Risk Projection
5.3.1 Developing a Risk Table
5.3.2 Assessing Risk Impact
5.3.3 Risk Assessment
5.4 Risk Mitigation, Monitoring and Management

UNIT 6 SOFTWARE REQUIREMENT DEFINITION
6.1 Software Requirement Specification
6.2 Formal Specification Techniques
6.2.1 Relational Notations
6.2.2 State-Oriented Notations
6.3 Languages and Processors for Requirement Specification
6.3.1 PSL/PSA
6.3.2 RSL/REVS
6.3.3 Structured Analysis and Design Technique (SADT)
6.3.4 Structured System Analysis (SSA)
6.3.5 Gist

UNIT 7 SOFTWARE DESIGN
7.1 Introduction to Software Design

7.2 Fundamental Design Concepts
7.2.1 Abstraction
7.2.2 Information Hiding
7.2.3 Structure
7.2.4 Modularity
7.2.5 Concurrency
7.2.6 Verification
7.3 Modules and Modularization Criteria
7.4 Design Notations
7.4.1 Data Flow Diagrams
7.4.2 Structure Charts
7.4.3 HIPO Diagrams
7.4.4 Procedure Templates
7.4.5 Pseudo code Design Notations
7.4.6 Structured Flowcharts
7.4.7 Structured English
7.4.8 Decision Tables
7.5 Design Techniques
7.5.1 Stepwise Refinement
7.5.2 Levels of Abstraction
7.5.3 Structured Design
7.5.4 Integrated Top-Down Development
7.5.5 Jackson Structured Programming
7.6 Detailed Design Considerations

UNIT 8 IMPLEMENTATION ISSUES
8.1 Introduction
8.2 Structured Coding Techniques
8.2.1 Single Entry, Single Exit Constructs
8.2.2 Efficiency Consideration
8.2.3 Violations of Single Entry, Single Exit
8.2.4 Data Encapsulation
8.2.5 The Go to Statement

8.2.6 Recursion
8.3 Coding Styles
8.4 Standards and Guidelines
8.5 Documentation Guidelines
8.5.1 Supporting Documents
8.5.2 Program Unit Notebooks
8.5.3 Internal Documentation

UNIT 9 VERIFICATION AND VALIDATION
9.1 Introduction
9.2 Quality Assurance
9.3 Walkthroughs and Inspections
9.3.1 Walkthroughs
9.3.2 Inspections
9.4 Symbolic Execution
9.5 Unit Testing and Debugging
9.5.1 Unit Testing
9.5.2 Debugging
9.6 System Testing
9.6.1 Integration Testing
9.6.2 Acceptance Testing
9.7 Formal Verification
9.7.1 Input-Output Assertions
9.7.2 Weakest Preconditions
9.7.3 Structural Induction

UNIT 10 MAINTENANCE
10.1 Introduction
10.2 Enhancing Maintainability during Development
10.2.1 Analysis Activities
10.2.2 Standards and Guidelines
10.2.3 Design Activities

10.2.4 Implementation Issues
10.2.5 Supporting Documents
10.3 Managerial Aspects of Software Maintenance
10.3.1 Change Control Board
10.3.2 Change Request Summaries
10.3.3 Quality Assurance Activities
10.3.4 Organizing Maintenance Programmers
10.4 Configuration Management
10.4.1 Configuration Management Databases
10.4.2 Version Control Libraries
10.5 Source-Code Metrics
10.5.1 Halstead's Effort Equation
10.5.2 McCabe's Cyclomatic Metric
10.6 Other Maintenance Tools and Techniques

UNIT 1 INTRODUCTION TO SOFTWARE

1.1 Introduction to Software
1.2 The Evolving Role of Software
1.3 Software Characteristic
1.4 Software Application
1.5 Software Crisis
1.6 Software Myths

1.1 Introduction to Software

Software is the collection of computer programs, procedures, rules, associated documentation and data which are collected for a specific purpose. Software is the various kinds of programs used to operate computers and related devices. A program is a sequence of instructions that tells a computer what operations to perform. Programs can be built into the hardware itself, or they may exist independently in a form known as software. Hardware describes the physical components of computers and related devices. According to the IEEE, software is "a collection of computer programs, procedures, rules and associated documentation and data pertaining to the operation of a computer system." Software is often divided into two categories: system software (or industrial-strength software) and application software (or simple software). Application software is a simple program that is usually designed, developed, used and maintained by the same person; no systematic approach is required for this type of software. System software is built using a systematic approach called programming systems. Different users on different platforms use such software, and its complexity is high. System software includes operating systems and any program that supports application software.

1.2 The Evolving Role of Software

The industry originated with the entrepreneurial computer software and services companies of the 1950s and 1960s, grew dramatically through the 1970s and 1980s to become a market force rivaling that of the computer hardware companies, and by the 1990s had become the supplier of technical know-how that transformed the way people worked, played and communicated every day of their lives. The following are the different eras of software engineering.

The Pioneering Era (1955-1965): The most important development was that new computers were coming out almost every year or two, rendering existing ones obsolete. Software people had to rewrite all their programs to run on these new machines. Jobs were run by signing up for machine time or by operational staff who put punched cards for input into the machine's card reader and waited for results to come back on the printer. The field was so new that the idea of management by schedule was non-existent; making predictions of a project's completion date was almost impossible. Computer hardware was application-specific, so scientific and business tasks needed different machines. Hardware vendors gave away systems software for free, since hardware could not be sold without software. A few companies sold the service of building custom software, but no software companies were selling packaged software.

The Stabilizing Era (1965-1980): The whole job-queue system had been institutionalized, so programmers no longer ran their own jobs except for peculiar applications like on-board computers. To handle the jobs, an enormous bureaucracy had grown up around the central computer center. The major problem resulting from this bureaucracy was turnaround time, the time between job submission and completion; at worst it was measured in days. Then came the IBM 360, which signaled the beginning of the stabilizing era. This was the largest software project to date. The 360 also combined scientific and business applications onto one machine. The job control language (JCL) raised a whole new class of problems: the programmer had to write the program in a whole new language to tell the computer and the OS what to do. JCL was the least popular feature of the 360. "Structured programming" burst on the scene in the middle of this era.

PL/I, introduced by IBM to merge all programming languages into one, failed. Most customized applications continued to be done in-house.

The Micro Era (1980-Present): The price of computing has dropped dramatically, making ubiquitous computing possible. Now every programmer can have a computer on his desk. The old JCL has been replaced by the user-friendly GUI. The part of the hardware architecture that the programmer must know about, such as the instruction set, has not changed much since the advent of the IBM mainframe and the first Intel chip. The most-used programming languages today are between 15 and 40 years old. The fourth-generation languages never achieved the dream of "programming without programmers," and the idea is pretty much limited to report generation from databases. There is, though, an increasing clamor for more and better software research.

1.3 Software Characteristic

For a better understanding of software, it is important to examine the characteristics of software that make it different from other things that human beings build. When hardware is built, the human creative process (analysis, design, construction, testing) is ultimately translated into a physical form. If we build a new computer, our initial sketches, formal design drawings, and breadboarded prototype evolve into a physical product (chips, circuit boards, power supplies, etc.). Since software is a purely logical rather than a physical system element, it has characteristics that are entirely different from those of hardware:

1. Software is developed or engineered; it is not manufactured in the classical sense: Although some similarities exist between software development and hardware manufacture, the two activities are fundamentally different. In both activities, high quality is achieved through good design, but the manufacturing phase for hardware can introduce quality problems that are nonexistent (or easily corrected) for software. Both activities depend on people, but the relationship between people applied and work accomplished is entirely different. Both activities require the construction of a product, but the approaches are different.

2. Software doesn't wear out: The figure below shows the failure rate as a function of time for hardware.


[Figure: hardware failure rate as a function of time (the "bathtub curve"), showing an infant mortality phase, a steady-state period, and a wear-out phase]

The relationship, often called the "bathtub curve," indicates that hardware exhibits relatively high failure rates early in its life (these failures are often attributable to design or manufacturing defects); defects are corrected and the failure rate drops to a steady-state level (ideally, quite low) for some period of time. As time passes, however, the failure rate rises again as hardware components suffer from the cumulative effects of dust, vibration, abuse, temperature extremes, and many other environmental changes. The hardware begins to wear out. Software is not susceptible to the environmental changes that cause hardware to wear out. Theoretically, the failure rate curve for software should take the form shown below.

[Figure: idealized software failure rate as a function of time - high early failure rate that drops and then continues at the same rate until obsolescence]

Undiscovered defects will cause high failure rates early in the life of a program. These are corrected and the curve flattens. The implication is that software doesn't wear out, but it does deteriorate.


Practically, during its life, software will undergo change. As changes are made, it is likely that some new defects will be introduced, causing the failure rate curve to spike as shown below.
[Figure: actual software failure rate curve versus the idealized curve - each change causes a spike (increased failure rate due to side effects), and the minimum failure rate slowly rises over time]

Before the curve can return to the original steady-state failure rate, another change is requested, causing the curve to spike again. Slowly, the minimum failure rate level begins to rise; the software is deteriorating due to change.

3. Most software is custom-built, rather than being assembled from existing components: Consider the manner in which the control hardware for a computer-based product is designed and built. The design engineer draws a simple schematic of the digital circuitry, does some fundamental analysis to ensure that proper function will be achieved, and then refers to a catalog of digital components. Each IC has a part number, a defined and validated function, a well-defined interface, and a set of integration guidelines. After each component is selected, the circuit is implemented. A software component should likewise be designed and implemented so that it can be reused in different programs, since this is a better approach in terms of cost and manpower. In the 1960s, we built scientific subroutine libraries that were reusable in a broad array of engineering and scientific applications. These subroutine libraries reused well-defined algorithms in an effective manner but had a limited domain of application. Today, we have extended our view of reuse to encompass not only algorithms but also data structures. Modern reusable components encapsulate both data and the processing applied to the data, enabling the software engineer to create new applications from reusable parts.


1.4 Software Application

Information determinacy refers to the predictability of the order and timing of information. An engineering analysis program accepts data that have a predefined order, executes the analysis algorithm(s) without interruption, and produces resultant data in report or graphical format. It is sometimes difficult to develop meaningful generic categories for software applications. As software complexity grows, neat compartmentalization disappears. The following software areas indicate the breadth of potential applications.

1. System Software: System software is a collection of programs written to service other programs. Some system software (compilers, editors) processes complex but well-defined information structures. Other system software (operating system components, drivers) processes largely undefined data. In both cases, system software is characterized by heavy interaction with computer hardware: heavy usage by multiple users, concurrent process scheduling, resource sharing, process management, etc.

2. Real-Time Software: Software that monitors/analyzes/controls real-world events as they occur is called real-time software. Real-time software includes a data gathering component that collects and formats information from an external environment, an analysis component that transforms information as required by the application, a control component that responds to the external environment, and a monitoring component that coordinates all other components so that real-time response can be maintained.

3. Business Software: Business information processing is the largest single software application area. Applications in this area (payroll, inventory) restructure existing data in a way that facilitates business operations or management decision making.

4. Engineering and Scientific Software: Engineering and scientific software has been characterized by number-crunching algorithms. System simulation, computer-aided design and other interactive applications are some examples.

5. Embedded Software: Embedded software resides in read-only memory and is used to control products and systems for consumer and industrial markets. Embedded software can perform limited functions (keypad control for a microwave oven) or provide significant function and control capability (digital functions in an automobile such as fuel control, dashboard displays, braking systems).

6. Personal Computer Software: The personal computer software market has grown rapidly over the past decade. Word processing, spreadsheets, computer graphics, multimedia, database management, and personal and business financial applications are some of the applications.

7. Artificial Intelligence Software: Artificial intelligence software makes use of non-numerical algorithms to solve complex problems that are not amenable to computation or straightforward analysis. An active area is expert systems, also known as knowledge-based systems. Other application areas include pattern recognition, game playing and, more recently, artificial neural networks.

1.5 Software Crisis

The term software crisis has been used frequently. The Oxford English Dictionary defines crisis as 'a decisive moment, a time of danger or great difficulty, a turning point', yet for the software industry this crisis has been with us for nearly 30 years, which is a contradiction. Authors have instead proposed the term chronic affliction [Pressman, 1994]. Specifically, the adjective chronic suggests longevity and recurrence: there are no miracle cures; rather, there are ways that we can reduce the pain as we try to find a cure. Regardless of whether we use the term software crisis or software affliction, the term alludes to a set of problems that are encountered in the development of computer software. The problems are not limited to software that doesn't function properly. Rather, the affliction encompasses problems associated with:
how we develop software;
how we maintain the growing volume of existing software;
how we keep pace with the growing demand for software;
how we manage the software development process.

1.6 Software Myths

Many causes of the software affliction can be traced to a mythology that arose during the early history of software development. Software myths propagate misinformation and confusion. Software myths have a number of attributes that make them insidious: they appear to be reasonable statements of fact, and they are often accepted by experienced practitioners. Today, most knowledgeable professionals recognize myths for what they are: misleading attitudes that have caused serious problems for managers and technical people. However, old attitudes and habits are difficult to modify, and traces of software myths are still believed.

Management Myths
Managers with software responsibility, like managers in most disciplines, are often under pressure to maintain budgets, keep schedules from slipping, and improve quality. Often the software manager believes a software myth because it lessens the pressure temporarily.

Myth: We already have a book that's full of standards and procedures for building software. Won't that provide my people with everything they need to know?
Reality: The book of standards may exist, but is it used? Are software developers aware that it exists? Does it reflect modern software development practice? Is it complete? In many cases the answer to these questions is no.


Myth: My people have the latest software development tools; after all, we buy them the latest computers.
Reality: It takes more than the latest computer to do high-quality software development. Computer-aided software engineering (CASE) tools are more important than hardware for achieving good quality and productivity, yet the majority of software developers still do not use them.

Myth: If we get behind schedule, we can add more programmers and catch up.
Reality: Software development is not a mechanistic process like manufacturing. In the words of Brooks: "Adding people to a late software project makes it later." As new people are added, those who were originally working on it must spend time educating the newcomers. People can be added, but only in a planned and well-coordinated manner.

Customer Myths
A customer requesting software might be a technical group within the same organization or an outside company requesting software under contract. In many cases, the customer believes myths about software because software managers and practitioners do little to correct the misinformation; the customer may not understand the nature of software development, and false expectations must be eliminated at the start. Myths lead to false expectations by the customer and, finally, dissatisfaction with the developer.

Myth: A general statement of objectives is sufficient to start writing programs - we can fill in the details later.
Reality: Poor up-front definition is the major cause of failed software efforts. A formal and detailed description of the information domain, function, performance, interfaces, design constraints and validation criteria is essential. These characteristics can be determined only after thorough communication between customer and developer.

Myth: Project requirements continually change, but change can be easily accommodated because software is flexible.
Reality: It is true that requirements do change, but the impact of change varies with the time at which it is introduced. If serious attention is given to up-front definition, early requests for change can be accommodated: the customer can review the work and recommend modifications with relatively little impact on cost. When changes are requested during the design phase, the cost impact grows rapidly: resources have been committed and a design framework has been established, so design modifications mean additional cost. Changes in function, performance, interfaces or other characteristics during implementation have a severe impact on cost.


Practitioner's Myths
Myths that are still believed by software practitioners have been encouraged by decades of programming culture. During the early days of software, programming was viewed as an art form, and old ways and attitudes die hard.

Myth: Once we write a program and get it to work, our job is done.
Reality: Someone once said, "The sooner you begin writing code, the longer it will take you to get done." Industry data indicate that the majority of effort expended on a program will occur after the program has been delivered to the customer for the first time.

Myth: Until I get the program running, I have no way of assessing its quality.
Reality: One of the most effective software quality assurance mechanisms is the formal technical review. These can be undertaken long before the program is running.

Myth: The only deliverable for a successful project is the working program.
Reality: A working program is only one element of the project. Other elements include project plans, requirements specifications, system designs, test specifications, support documentation, etc. Documentation forms the foundation for successful development and, more importantly, provides the foundation for the software maintenance task.


UNIT 2 SOFTWARE ENGINEERING APPROACH

2.1 Introduction to Software Engineering
2.2 Generic View of Software Engineering
2.3 The Software Process
2.4 Software Process Models
2.4.1 The Linear Sequential Model
2.4.2 The Prototype Model
2.4.3 The Incremental Model
2.4.4 The Spiral Model
2.4.5 The COCOMO Model

2.1 Introduction to Software Engineering

The need for systematic approaches to the development and maintenance of computer software products became apparent in the 1960s. During this decade, third-generation computing hardware was invented, and the software techniques of multiprogramming and time-sharing were developed. These capabilities provided the technology for the implementation of interactive, multi-user, on-line and real-time computing systems. As computing systems became larger and more complex, it became apparent that the demand for computer software was growing faster than our ability to produce and maintain it. A workshop was held in Garmisch, West Germany, in 1968 to consider the growing problems of software technology. This workshop and the subsequent one held in Rome, Italy, in 1969 stimulated widespread interest in the technical and managerial processes used to develop and maintain computer software. The term software engineering was first used in these workshops. Since 1968, the applications of digital computers have become increasingly diverse, complex, and critical to modern society. As a result, the field of software engineering has evolved into a technological discipline of considerable importance.

According to Boehm, software engineering involves the practical application of scientific knowledge to the design and construction of computer programs and the associated documentation required to develop, operate, and maintain them. The IEEE Standard Glossary of Software Engineering Terminology defines software engineering as "the systematic approach to the development, operation, maintenance, and retirement of software." Software engineering is thus the technological and managerial discipline concerned with the systematic production and maintenance of software products that are developed and modified on time and within cost estimates. The primary goals of software engineering are to improve the quality of software products and to increase the productivity and job satisfaction of software engineers.

2.2 Generic View of Software Engineering


The work associated with software engineering can be categorized into three generic phases, regardless of application area, project size, or complexity.

The definition phase focuses on what. During definition, the software developer attempts to identify what information is to be processed, what function and performance are desired, what system behavior can be expected, what interfaces are to be established, what design constraints exist, and what validation criteria are required to define a successful system. The key requirements of the system and the software are identified.

The development phase focuses on how. During development, a software engineer attempts to define how data are to be structured, how function is to be implemented, how procedural details are to be implemented, how interfaces are to be characterized, how the design will be translated into a programming language, and how testing will be performed.

The maintenance phase focuses on change, which is associated with error correction, adaptations required as the software's environment evolves, and enhancements brought about by changing customer requirements. The maintenance phase reapplies the steps of the definition and development phases. Four types of change are encountered during the maintenance phase:

Correction: Even with the best quality assurance activities, it is likely that the customer will uncover defects in the software. Corrective maintenance changes the software to correct defects.
Adaptation: Over time, the original environment (CPU, operating system, business rules, etc.) for which the software was developed is likely to change. Adaptive maintenance modifies the software to accommodate changes to its external environment.
Enhancement: As software is used, the customer will recognize additional functions that would provide benefit. Perfective maintenance extends the software beyond its original functional requirements.
Prevention: Computer software deteriorates due to change, and because of this, preventive maintenance, often called software reengineering, must be conducted to enable the software to serve the needs of its end users. Preventive maintenance makes changes to computer programs so that they can be more easily corrected, adapted and enhanced. Today, aging software is forcing many companies to pursue software reengineering strategies.


2.3 The Software Process


The software process is the total set of software engineering activities necessary to develop and maintain software products. It can be characterized as shown in the figure below. A common process framework is established by defining a small number of framework activities that are applicable to all software projects, regardless of their size and complexity. A number of task sets, each a collection of software engineering work tasks, project milestones, software work products and deliverables, and quality assurance points, enable the framework activities to be adapted to the characteristics of the software project and the requirements of the project team. Finally, umbrella activities, such as software quality assurance, software configuration management, and measurement, overlay the process model. Umbrella activities are independent of any single framework activity and occur throughout the process.

[Figure: the common process framework - framework activities contain task sets (tasks, milestones, deliverables, SQA points), with umbrella activities overlaying the entire process]
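The nesting just described can also be pictured as a simple data structure. The sketch below is only an illustration of that idea; it is not part of any standard, and every name in it is made up for this example.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskSet:
    # A collection of work tasks, milestones, deliverables and SQA points
    # that adapts a framework activity to a particular project.
    tasks: List[str] = field(default_factory=list)
    milestones: List[str] = field(default_factory=list)
    deliverables: List[str] = field(default_factory=list)
    sqa_points: List[str] = field(default_factory=list)

@dataclass
class FrameworkActivity:
    # One of the small number of activities applied to every project.
    name: str
    task_set: TaskSet = field(default_factory=TaskSet)

@dataclass
class CommonProcessFramework:
    activities: List[FrameworkActivity] = field(default_factory=list)
    # Umbrella activities (SQA, configuration management, measurement)
    # overlay every framework activity rather than belonging to one of them.
    umbrella_activities: List[str] = field(default_factory=list)

framework = CommonProcessFramework(
    activities=[FrameworkActivity("Communication"),
                FrameworkActivity("Planning"),
                FrameworkActivity("Construction")],
    umbrella_activities=["Software quality assurance",
                         "Software configuration management",
                         "Measurement"],
)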

2.4 Software Process Models


The development strategy encompassing all the activities required to define, develop, test, deliver, operate and maintain a software product is often referred to as a process model or software engineering paradigm. Different models emphasize different aspects of the life cycle, and no single model is appropriate for all software products. It is important to define a life-cycle model for each software project because the model provides a basis for controlling the various activities required to develop and maintain the software product. The selection of a process model is based on the nature of the project and application, the tools to be used, and the controls and deliverables that are required.

All software development can be characterized as a problem-solving loop in which four distinct stages are encountered: status quo, problem definition, technical development, and solution integration. Status quo represents the current state of the process; problem definition identifies the specific problem to be solved; technical development solves the problem through the application of some technology; and solution integration delivers the results to those who requested the solution in the first place. Regardless of the process model that is chosen for a software project, all of these stages (status quo, problem definition, technical development, and solution integration) exist simultaneously at some level of detail.

2.4.1 The Linear Sequential Model


Sometimes called the classic life cycle or the waterfall model, the linear sequential model suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing and maintenance.

[Figure: the linear sequential model - system/information engineering followed by analysis, design, code and test]

The linear sequential model involves the following activities.

System/information engineering: Because software is always part of a larger system, work begins by establishing requirements for all system elements and then allocating some subset of these requirements to software. This system view is essential when software must interface with other elements such as hardware, people, and databases. System engineering and analysis involves requirements gathering at the system level with a small amount of top-level analysis and design. Information engineering involves requirements gathering at the strategic business level and at the business area level.

Software requirement analysis: The requirements gathering process is intensified and focused specifically on software. To understand the nature of the program to be built, the software engineer must understand the information domain for the software as well as the required function, behavior, performance, and interfacing. Requirements for both the system and the software are documented and reviewed with the customer.

Design: Software design is a multi-step process that focuses on four distinct attributes of a program: data structure, software architecture, interface representations, and procedural detail. The design process translates requirements into a representation of the software that can be assessed for quality before code generation begins. Like the requirements, the design is documented and becomes part of the software configuration.


Code generation: The design must be translated into a machine-readable form. The code generation step performs this task. If design is performed in a detailed manner, code generation can be accomplished mechanistically.

Testing: Once code has been generated, program testing begins. The testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals; that is, conducting tests to uncover errors and ensure that defined input will produce actual results that agree with required results.

Maintenance: Software will undergo change after it is delivered to the customer. Change will occur because errors have been encountered, because the software must be adapted to accommodate changes in its external environment, or because the customer requires functional or performance enhancements. Software maintenance reapplies each of the preceding phases to an existing program rather than a new one.

The linear sequential model is the oldest and the most widely used model for software engineering. Some of the problems that may arise when the linear sequential model is applied are:
1. Real projects rarely follow the sequential flow that the model proposes. Although the linear model can accommodate iteration, it does so indirectly. As a result, changes can cause confusion as the project team proceeds.
2. It is often difficult for the customer to state all the requirements explicitly. The linear sequential model requires this and has difficulty accommodating the natural uncertainty that exists at the beginning of many projects.
3. The customer must have patience. A working version of the program will not be available until late in the project time-span. A major blunder, if undetected until the working program is reviewed, can be disastrous.

Each of these problems is real. Still, the classic life cycle model has a definite and important place in software engineering work. It provides a template into which methods for analysis, design, coding, testing, and maintenance can be placed. The classic life cycle remains the most widely used process model for software engineering.

2.4.2 The Prototyping Model


Often, a customer defines a set of general objectives for software but does not identify detailed input, processing, or output requirements. In other cases, the developer may be unsure of the efficiency of an algorithm, the adaptability of an operating system, or the form that human-machine interaction should take. In these and many other situations, a prototyping model may offer the best approach. The prototyping model, shown below, begins with requirements gathering.


[Figure: the prototyping paradigm - a loop of listening to the customer, building or revising a mock-up, and having the customer test-drive the mock-up]

The developer and customer meet and define the overall objectives for the software, identify whatever requirements are known, and outline the areas where further definition is required. The output is a quick design. A quick design focuses on a representation of those aspects of the software that will be visible to the customer. The quick design leads to the construction of a prototype. The prototype is evaluated by the customer and is used to refine the requirements for the software to be developed. Iteration occurs as the prototype is tuned to satisfy the needs of the customer, at the same time enabling the developer to better understand the needs of the customer. The prototype model serves as a mechanism for identifying software requirements. Both the developer and the customer like this model: the customer gets a feel for the actual system and the developer gets some idea about the requirements of the customer. Yet the prototype model does have disadvantages.
1. The customer sees what appears to be a working version of the software, unaware that, in the rush to get it working, software quality and long-term maintainability have not been considered. When informed that the product must be rebuilt so that a high level of quality can be maintained, the customer demands that the prototype be turned into a working product with only slight modification. Too often, software development management relents.
2. The developer often makes implementation compromises in order to get a prototype working quickly. An inappropriate operating system or programming language may be used simply because it is available and known; an inefficient algorithm may be implemented simply to demonstrate capability. After some time, the developer may become comfortable with these choices and forget all the reasons why they were inappropriate. The less-than-ideal choice has now become an integral part of the system.
Although problems can occur, prototyping can be an effective model for software engineering. The key is to define the rules at the beginning; that is, the customer and the developer must both agree that the prototype is built to serve as a mechanism for defining requirements. It is then discarded and the actual software is engineered with an eye toward quality and maintainability.


2.4.3 The Incremental Model


The incremental model combines elements of the linear sequential model with the iterative nature of prototyping. The incremental model applies the linear sequences in a staggered fashion as calendar time progresses, as shown in the figure below.

[Figure: the incremental model - system/information engineering followed by staggered linear sequences (analysis, design, code, test), producing the delivery of the 1st, 2nd and 3rd increments]

Each linear sequence produces a deliverable increment of the software. In the incremental model, the first increment is a core product in which the basic requirements are addressed. The core product is used by the customer. As a result of use and evaluation, a plan is developed for the next increment. The plan addresses the modification of the core product to better meet the needs of the customer and the delivery of additional features and functionality. This process is repeated until the complete product is produced. The incremental model, like prototyping, is iterative in nature. But unlike prototyping, the incremental model focuses on the delivery of an operational product with each increment. The early increments provide a platform for the user to evaluate the requirements. Incremental development is also useful when staffing is unavailable for a complete implementation by the deadline that has been established for the project. Early increments can be implemented with fewer people. If the core product is well received, additional staff can be added to implement the next increment.


2.4.4 The Spiral Model


The spiral model is an evolutionary software process model that couples the iterative nature of prototyping with the controlled and systematic aspects of the linear sequential model. In the spiral model, software is developed in a series of incremental releases. During early iterations, the incremental release might be a paper model or prototype. During later iterations, increasingly more complete versions of the engineered system are produced. The spiral model is divided into a number of framework activities, also called task regions. There may be between three and six task regions. The figure below depicts a spiral model that contains six task regions.

[Figure: the spiral model with six task regions - customer communication, planning, risk analysis, engineering, construction and release, and customer evaluation]

Customer communication: tasks required to establish effective communication between developer and customer.
Planning: tasks required to define resources, timelines, and other project-related information.
Risk analysis: tasks required to assess both technical and management risks.
Engineering: tasks required to build one or more representations of the application.
Construction and release: tasks required to construct, test, install and provide user support (documentation and training).
Customer evaluation: tasks required to obtain customer feedback based on evaluation of the software representations created during the engineering stage and implemented during the installation stage.


Each of the regions is populated by a series of work tasks that are adapted to the characteristics of the project to be undertaken. For small projects, the number of work tasks and their formality are low. For larger, more critical projects, each task region contains more work tasks, which are defined to achieve a higher level of formality. As this evolutionary process begins, the software engineering team moves around the spiral in a clockwise direction, beginning at the core. The first circuit around the spiral might result in the development of a product specification; subsequent passes around the spiral might be used to develop a prototype and then progressively more sophisticated versions of the software. Each pass through the planning region results in adjustments to the project plan. Cost and schedule are adjusted based on the feedback derived from customer evaluation. In addition, the project manager adjusts the planned number of iterations required to complete the software. The spiral model is a realistic approach to the development of large-scale systems and software. Because the software evolves as the process progresses, the developer and the customer better understand and react to risks at each evolutionary level. The spiral model maintains the systematic stepwise approach suggested by the classic life cycle but incorporates it into an iterative framework that more realistically reflects the real world. Like other models, however, the spiral model is not universally accepted. It may be difficult to convince the customer that the evolutionary approach is controllable. It requires considerable risk assessment expertise and relies completely on this expertise for success. If a major risk is not uncovered and managed, problems will occur. Finally, the model itself is relatively new and has not been used as widely as the sequential or prototyping models.

2.4.5 The COCOMO Model


The COnstructive COst MOdel (COCOMO) is designed by considering the cost of performing the various activities in a software project. The cost of conducting a software project is the sum of the costs incurred in conducting each phase of the project. Costs incurred within each phase include the cost of performing the processes and preparing the products for that phase, plus the cost of verifying that the products of the present phase are complete and consistent with respect to all previous phases. Modifications and corrections to the products of previous phases are necessary because the processes of the current phase will expose inaccuracies, inconsistencies, and incompleteness in those products. The cost of conducting a software project is shown in the figure below. The cost of producing the System Definition and the Project Plan is the cost of performing the planning function and preparing the documents, plus the cost of verifying that the System Definition states the customer's needs accurately and the cost of verifying that the Project Plan is feasible. The cost of preparing the Software Requirement Specification includes the cost of performing requirements definition and preparing the specification document, plus the cost of modifying and correcting the System Definition and the Project Plan, plus the cost of verifying that the Software Requirement Specification is complete and consistent with respect to the System Definition and the customer's needs.


[Figure: the cost of conducting a software project - each phase (plan, software requirements analysis, design, build, system test, operate and maintain) involves performing and verifying the phase, plus modifying and fixing the products of all earlier phases. Work products: SD = System Definition and Project Plan; SRS = Software Requirement Specification and Preliminary User's Manual; DD = Software Design Specification and Software Verification Plan; SC = Source Code and Acceptance Test Plan; SS = Source Code, User's Manual, and Principles of Operation]


The cost of design is the cost of performing the design activities and preparing the design specification and the test plan, plus the cost of modifying and correcting the System Definition, the Project Plan, and the Software Requirement Specification, plus the cost of verifying the design against the requirements, the System Definition and the Project Plan.

The cost of product implementation is the cost of implementing, documenting, debugging, and unit testing the source code, plus the cost of completing the User's Manual, the verification plan, the maintenance procedures, and the installation and training instructions, plus the cost of modifying and correcting the System Definition, the Project Plan, the Software Requirement Specification, the Design Specification, and the Verification Plan, plus the cost of verifying that the implementation is complete, consistent, and suitable with respect to the System Definition, the Software Requirement Specification, and the design documents.

The cost of system testing includes the cost of planning and conducting the tests, plus the cost of modifying and correcting the source code and the external documents during system testing, plus the cost of verifying that the tests adequately validate the product.

Finally, the cost of software maintenance is the sum of the costs of performing product enhancements, making adaptations to new processing requirements, and fixing bugs. Each of these activities may involve modifying and correcting any or all of the supporting documents and running a large number of test cases to verify the correctness of the modification.
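To make the accumulation concrete, the short sketch below simply adds up per-phase costs in the spirit of the description above. The phase names follow the figure, but the dollar figures are invented for illustration, and each figure is assumed to already bundle the cost of performing the phase, verifying its products, and modifying the products of earlier phases.

# Hypothetical per-phase costs, in thousands of dollars. Each entry is assumed
# to already include the cost of performing the phase, verifying its products,
# and modifying/fixing the products of all previous phases.
phase_costs = {
    "Planning (System Definition, Project Plan)": 40,
    "Requirements analysis (SRS)": 60,
    "Design (Design Specification, Verification Plan)": 90,
    "Implementation (Source Code, Acceptance Test Plan)": 150,
    "System testing": 70,
    "Operation and maintenance": 200,
}

for phase, cost in phase_costs.items():
    print(f"{phase:52s} ${cost}K")
print(f"{'Total project cost':52s} ${sum(phase_costs.values())}K")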


UNIT 3 SOFTWARE PROCESS AND PROJECT METRICS

3.1 Introduction
3.2 Measures, Metrics and Indicators
3.3 Metrics in the Process and Project Domains
3.3.1 Process Metrics and Software Process Improvement
3.3.2 Project Metrics
3.4 Software Measurement
3.4.1 Size-Oriented Metrics
3.4.2 Function-Oriented Metrics
3.4.3 Extended Function Point Metrics
3.5 Metrics for Software Quality
3.5.1 An Overview of Factors That Affect Quality
3.5.2 Measuring Quality
3.5.3 Defect Removal Efficiency
3.6 Integrating Metrics within the Software Engineering Process
3.6.1 Arguments for Software Metrics
3.6.2 Establishing a Baseline
3.6.3 Metrics Collection, Computation, and Evaluation

3.1 Introduction

Software metrics refers to a broad range of measurements for computer software. Measurement can be applied to the software process with the intent of improving it on a continuous basis. Measurement can be used throughout a software project to assist in estimation, quality control, productivity assessment, and project control. Measurement can also be used by software engineers to help assess the quality of technical work products and to assist in tactical decision making as a project proceeds.


3.2 Measures, Metrics and Indicators


Within the software engineering context, a measure provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process. Measurement is the act of determining a measure. The IEEE Standard Glossary of Software Engineering Terms defines a metric as "a quantitative measure of the degree to which a system, component, or process possesses a given attribute." When a single data point has been collected, a measure has been established; measurement occurs as the result of the collection of one or more data points. A software metric relates the individual measures in some way. An indicator is a metric or combination of metrics that provides insight into the software process, a software project, or the product itself. An indicator provides insight that enables the project manager or software engineers to adjust the process, the project, or the product to make things better.

3.3 Metrics in the Process and Project Domains

Measurement is commonplace in the engineering world. We measure power consumption, weight, physical dimensions, temperature, voltage, signal-to-noise ratio and so on. Metrics should be collected so that process and product indicators can be ascertained. Process indicators enable a software engineering organization to gain insight into the efficacy of an existing process (i.e., the paradigm, software engineering tasks, work products and milestones). They enable managers and practitioners to assess what works and what doesn't. Process metrics are collected across all projects and over long periods of time; their intent is to provide indicators that lead to long-term software process improvement. Project indicators enable a software project manager to assess the status of an ongoing project, track potential risks, uncover problem areas before they become critical, adjust work flow or tasks, and evaluate the project team's ability to control the quality of software work products.

In some cases, the same software metrics can be used to determine project and then process indicators. In fact, measures that are collected by a project team and converted into metrics for use during a project can also be transmitted to those with responsibility for software process improvement. For this reason, many of the same metrics are used in both the process and project domain.


3.3.1 Process Metrics and Software Process Improvement

The only rational way to improve any process is to measure specific attributes of the process, develop a set of meaningful metrics based on these attributes, and then use the metrics to provide indicators that will lead to a strategy for improvement.

[Figure: determinants of software quality and organizational effectiveness - the process sits at the centre of a triangle connecting people, product and technology, surrounded by the development environment, business conditions and customer characteristics]

Referring to the figure above, the process sits at the centre of a triangle connecting three factors that have a profound influence on software quality and organizational performance. The skill and motivation of people has been shown to be the single most influential factor in quality and performance. The complexity of the product can have a substantial impact on quality and team performance. The technology that populates the process also has an impact. In addition, the process triangle exists within a circle of environmental conditions that include the development environment, business conditions and customer characteristics.

We derive a set of metrics based on the outcomes that can be derived from the process. Outcomes include measures of errors uncovered before release of the software, defects delivered to and reported by end users, work products delivered (productivity), human effort expended, calendar time expended, schedule conformance, and other measures. We also derive process metrics by measuring the characteristics of specific software engineering tasks.

The personal software process (PSP) is a structured set of process descriptions, measurements, and methods that can help engineers to improve their personal performance. It provides the forms, scripts, and standards that help them estimate and plan their work. It shows them how to define processes and how to measure their quality and productivity. A fundamental PSP principle is that everyone is different and that a method that is effective for one engineer may not be suitable for another. The PSP thus helps engineers to measure and track their own work so they can find the methods that are best for them. Humphrey recognizes that software process improvement can and should begin at the individual level. Private process data can serve as an important driver as the individual software engineer works to improve.

Some process metrics are private to the software project team but public to all team members. Examples include defects reported for major software functions, errors found during formal technical reviews, and lines of code or function points per module and function.


Public metrics generally assimilate information that originally was private to individuals and teams. Project-level defect rates, effort, calendar times, and related data are collected and evaluated in an attempt to uncover indicators that can improve organizational process performance.

Software process metrics can provide significant benefit as an organization works to improve its overall level of process maturity. However, like all metrics, these can be misused, creating more problems than they solve. Grady suggests a software metrics etiquette that is appropriate for both managers and practitioners as they institute a process metrics program:
Use common sense and organizational sensitivity when interpreting metrics data.
Provide regular feedback to the individuals and teams who collect measures and metrics.
Don't use metrics to appraise individuals.
Work with practitioners and teams to set clear goals and the metrics that will be used to achieve them.
Never use metrics to threaten individuals or teams.
Metrics data that indicate a problem area should not be considered negative.

As an organization becomes more comfortable with the collection and use of process metrics, the derivation of simple indicators gives way to a more rigorous approach called statistical software process improvement (SSPI). In essence, SSPI uses software failure analysis to collect information about all errors and defects encountered as an application, system, or product is developed and used. Failure analysis works in the following manner (see the sketch after this list):
1. All errors and defects are categorized by origin.
2. The cost to correct each error and defect is recorded.
3. The number of errors and defects in each category is counted and ranked in descending order.
4. The overall cost of errors and defects in each category is computed.
5. Resultant data are analyzed to uncover the categories that result in the highest cost to the organization.
6. Plans are developed to modify the process with the intent of eliminating the class of errors and defects that is most costly.
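The bookkeeping behind steps 1 to 5 can be sketched in a few lines of code; the defect records and category names below are invented purely for illustration.

from collections import defaultdict

# Steps 1 and 2: each defect is categorized by origin and its correction
# cost is recorded. These records are invented for illustration.
defects = [
    {"origin": "incomplete requirements", "fix_cost": 12.0},
    {"origin": "logic error in design",   "fix_cost": 5.5},
    {"origin": "incomplete requirements", "fix_cost": 9.0},
    {"origin": "violation of standards",  "fix_cost": 1.2},
    {"origin": "logic error in design",   "fix_cost": 4.8},
]

counts = defaultdict(int)
costs = defaultdict(float)
for d in defects:
    counts[d["origin"]] += 1             # step 3: count per category
    costs[d["origin"]] += d["fix_cost"]  # step 4: overall cost per category

# Step 5: rank categories by overall cost to expose the costliest classes.
for origin, cost in sorted(costs.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{origin:26s} count={counts[origin]}  cost=${cost:.1f}K")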

3.3.2 Project Metrics


Software process metrics are used for strategic purposes, while software project measures are tactical. That is, project metrics and the indicators derived from them are used by a project manager and a software team to adapt project work flow and technical activities. The first application of project metrics on most software projects occurs during estimation. Metrics collected from past projects are used as a basis from which effort and time estimates are made for current software work. As a project proceeds, measures of effort and calendar time expended are compared to the original estimates. The project manager uses these data to monitor and control progress. Production rates, represented in terms of pages of documentation, review hours, function points, and delivered source lines, are measured. In addition, errors uncovered during each software engineering task are tracked. As the software evolves from specification into design, technical metrics are collected to assess design quality and to provide indicators that will influence the approach taken to code generation and testing. The intent of project metrics is twofold. First, these metrics are used to minimize the development schedule by making the adjustments necessary to avoid delays and mitigate potential problems and risks. Second, project metrics are used to assess product quality on an ongoing basis and, when necessary, modify the technical approach to improve quality. As quality improves, defects are minimized, and as the defect count goes down, the amount of rework required during the project is also reduced. This leads to a reduction in overall project cost.

3.4 Software Measurement


Measurements in the physical world can be categorized in two ways: direct measures and indirect measures. Direct measures of the software engineering process include cost and effort applied.Direct measures of the product include lines of code produced, execution speed, memory size, and defects reported over some set period of time. Indirect measures of the product include functionality, quality, complexity, efficiency, reliability, maintainability. The cost and effort required to build software, the number of lines of code produced and other direct measures are relatively easy to collect, as long as specific conventions for measurement are established in advance. However, the quality and functionality of software or its efficiency or maintainability are more difficult to assess and can be measured only indirectly.

3.4.1 Size-Oriented Metrics


Size-oriented software metrics are derived by normalizing quality and/or productivity measures by considering the size of the software that has been produced. If a software organization maintains simple records, a table of size-oriented measures can be created. The table lists each software development project that has been completed over the past few years and the corresponding measures for that project. In the table entry for project alpha, 12,100 lines of code (LOC) were developed with 24 person-months of effort at a cost of $168,000. It should be noted that the effort and cost recorded in the table represent all software engineering activities. Further information for project alpha indicates that 365 pages of documentation were developed, 134 errors were recorded before the software was released, and 29 defects were encountered after release to the customer within the first year of operation. Three people worked on the development of project alpha.


Project    LOC      Effort   $(000)   pp. doc   Errors   Defects   People
alpha      12,100   24       168      365       134      29        3
beta       27,200   62       440      1,224     321      86        5
gamma      20,200   43       314      1,050     256      64        6

In order to develop metrics that can be assimilated with similar metrics from other projects, we choose lines of code as our normalization value. From the rudimentary data contained in the table, a set of simple size-oriented metrics can be developed for each project: errors per KLOC (thousand lines of code), defects per KLOC, $ per LOC, and pages of documentation per KLOC. In addition, other interesting metrics can be computed: errors per person-month, LOC per person-month, and $ per page of documentation. Size-oriented metrics are not universally accepted as the best way to measure the process of software development. Most of the controversy swirls around the use of lines of code as a key measure.
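These normalizations can be derived directly from the table. The sketch below is a minimal illustration using the project alpha figures shown above; nothing beyond simple arithmetic is involved.

# Project alpha figures taken from the table above
loc = 12_100
effort_pm = 24        # person-months
cost = 168_000        # dollars
pages_doc = 365
errors = 134
defects = 29

kloc = loc / 1000
print(f"Errors per KLOC:          {errors / kloc:.2f}")
print(f"Defects per KLOC:         {defects / kloc:.2f}")
print(f"$ per LOC:                {cost / loc:.2f}")
print(f"Pages of doc. per KLOC:   {pages_doc / kloc:.2f}")
print(f"LOC per person-month:     {loc / effort_pm:.0f}")
print(f"Errors per person-month:  {errors / effort_pm:.2f}")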

3.4.2 Function-Oriented Metrics


Function-oriented software metrics use a measure of the functionality delivered by the application as a normalization value. Since functionality cannot be measured directly, it must be derived indirectly using other direct measures. Function-oriented metrics were first proposed by Albrecht, who suggested a measure called the function point. Function points are derived using an empirical relationship based on countable (direct) measures of the software's information domain and assessments of software complexity. Function points are computed by completing the table shown below: each information domain count is multiplied by the weighting factor that corresponds to its complexity, and the weighted values are summed to obtain the count total.

Measurement parameter            Count        Simple   Average   Complex
Number of user inputs            ___ X          3         4         6      = ___
Number of user outputs           ___ X          4         5         7      = ___
Number of user inquiries         ___ X          3         4         6      = ___
Number of files                  ___ X          7        10        15      = ___
Number of external interfaces    ___ X          5         7        10      = ___
Count total                                                                = ___


Five information domain characteristics are determined and counts are provided in the appropriate table location. Information domain values are defined in the following manner:
Number of user inputs. Each user input that provides distinct application-oriented data to the software is counted. Inputs should be distinguished from inquiries, which are counted separately.
Number of user outputs. Each user output that provides application-oriented information to the user is counted. In this context, output refers to reports, screens, error messages, and so on. Individual data items within a report are not counted separately.
Number of user inquiries. An inquiry is defined as an on-line input that results in the generation of some immediate software response in the form of an on-line output. Each distinct inquiry is counted.
Number of files. Each logical master file (i.e., a logical grouping of data that may be one part of a large database or a separate file) is counted.
Number of external interfaces. All machine-readable interfaces (e.g., data files on storage media) that are used to transmit information to another system are counted.
Once these data have been collected, a complexity value is associated with each count. Organizations that use function point methods develop criteria for determining whether a particular entry is simple, average, or complex. Nonetheless, the determination of complexity is somewhat subjective. To compute function points (FP), the following relationship is used:

FP = count total X [0.65 + 0.01 X sum(Fi)]

where count total is the sum of all weighted entries obtained from the table. The Fi (i = 1 to 14) are "complexity adjustment values" based on responses to the following questions:
1. Does the system require reliable backup and recovery?
2. Are data communications required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line data entry?
7. Does the on-line data entry require the input transaction to be built over multiple screens or operations?
8. Are the master files updated on-line?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use by the user?
Each of these questions is answered using a scale that ranges from 0 (not important or applicable) to 5 (absolutely essential). The constant values in the equation and the weighting factors that are applied to the information domain counts are determined empirically. Once function points have been calculated, they are used in a manner analogous to LOC as a way to normalize measures for software productivity, quality, and other attributes: errors per FP, defects per FP, $ per FP, pages of documentation per FP, and FP per person-month.
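A minimal sketch of the FP computation follows. The information domain counts and the answers to the fourteen adjustment questions are invented for illustration, and the average-complexity weights are taken from the table above.

# Average-complexity weights from the weighting-factor table above
weights = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

# Hypothetical information domain counts for a small application
counts = {"inputs": 20, "outputs": 12, "inquiries": 8, "files": 6, "interfaces": 2}

count_total = sum(counts[k] * weights[k] for k in counts)

# Hypothetical answers (0..5) to the 14 complexity adjustment questions
fi = [3, 4, 2, 5, 3, 4, 2, 3, 4, 3, 2, 3, 1, 4]

fp = count_total * (0.65 + 0.01 * sum(fi))
print(f"count total = {count_total}, FP = {fp:.1f}")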

3.4.3 Extended Function Point Metrics


The function point measure was originally designed to be applied to business information systems applications. To accommodate these applications, the data dimension was emphasized to the exclusion of the functional and behavioural dimensions. A function point extension called feature points is a superset of the function point measure that can be applied to systems and engineering software applications. The feature point measure accommodates applications in which algorithmic complexity is high. Real-time, process control, and embedded software applications tend to have high algorithmic complexity and are therefore amenable to the feature point.
Another extension, the 3D function point, measures an application along three dimensions: data, function, and control. Counts of retained data and external data are used along with measures of complexity to derive a data dimension count. The functional dimension is measured by considering the number of internal operations required to transform input to output data; for the purposes of 3D function point computation, a transformation is viewed as a series of processing steps that are constrained by a set of semantic statements. The control dimension is measured by counting the number of transitions between states. A state represents some externally observable mode of behaviour, and a transition occurs as a result of some event that causes the software or system to change its mode of behaviour. To compute 3D function points, the following relationship is used:

index = I + O + Q + F + E + T + R

where I, O, Q, F, E, T, and R represent complexity-weighted values for the elements discussed already: inputs, outputs, inquiries, internal data structures, external files, transformations, and transitions, respectively. Each complexity-weighted value is computed using the following relationship:

complexity weighted value = Nil X Wil + Nia X Wia + Nih X Wih

where Nil, Nia, and Nih represent the number of occurrences of element i (e.g., outputs) for each level of complexity (low, average, high), and Wil, Wia, and Wih are the corresponding weights. The overall computation for 3D function points is shown below.

Measurement element         Low        Average     High
Internal data structures    ___ X 7    ___ X 10    ___ X 15    = ___
External data               ___ X 5    ___ X 7     ___ X 10    = ___
Number of user inputs       ___ X 3    ___ X 4     ___ X 6     = ___
Number of user outputs      ___ X 4    ___ X 5     ___ X 7     = ___
Number of user inquiries    ___ X 3    ___ X 4     ___ X 6     = ___
Transformations             ___ X 7    ___ X 10    ___ X 15    = ___
Transitions                 n/a        n/a         n/a         = ___
3D function point index                                        = ___

It should be noted that function points, feature points, and 3D function points represent the same thing: "functionality" or "utility" delivered by software. In fact, each of these measures results in the same value if only the data dimension of an application is considered. For more complex real-time systems, the feature point count is often between 20 and 35 percent higher than the count determined using function points alone.
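The 3D function point index is simply the sum of the complexity-weighted element values plus the transition count. The sketch below illustrates the arithmetic under stated assumptions: the occurrence counts are invented, the weights follow the table above, and transitions are treated as an unweighted count.

# (low, average, high) complexity weights per element, per the table above;
# transitions are counted but carry no complexity weighting
weights = {
    "internal data structures": (7, 10, 15),
    "external data":            (5, 7, 10),
    "user inputs":              (3, 4, 6),
    "user outputs":             (4, 5, 7),
    "user inquiries":           (3, 4, 6),
    "transformations":          (7, 10, 15),
}

# Hypothetical (low, average, high) occurrence counts for each element
occurrences = {
    "internal data structures": (4, 2, 1),
    "external data":            (3, 1, 0),
    "user inputs":              (10, 6, 2),
    "user outputs":             (6, 4, 1),
    "user inquiries":           (5, 2, 0),
    "transformations":          (2, 3, 1),
}
transitions = 12  # unweighted count of state transitions

index = transitions + sum(
    sum(n * w for n, w in zip(occurrences[name], weights[name]))
    for name in weights
)
print(f"3D function point index = {index}")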

3.5 Metrics for Software Quality

The overriding goal of software engineering is to produce a high-quality system, application, or product. To achieve this goal, software engineers must apply effective methods coupled with modern tools within the context of a mature software process. In addition, a good software engineer must measure if high quality is to be realized. The quality of a system, application, or product is only as good as the requirements that describe the problem, the design that models the solution, the code that leads to an executable program, and the tests that exercise the software to uncover errors. A good software engineer uses measurement to assess the quality of the analysis and design models, the source code, and the test cases that have been created as the software is engineered. The project manager must also evaluate quality as the project progresses. Private metrics collected by individual software engineers are assimilated to provide project-level results. Although many quality measures can be collected, the primary thrust at the project level is to measure errors and defects. Metrics derived from these measures provide an indication of the effectiveness of individual and group software quality assurance and control activities. Metrics such as work product errors per function point, errors uncovered per review hour, and errors uncovered per testing hour provide insight into the efficacy of each of the activities implied by the metric. Error data can also be used to compute the defect removal efficiency (DRE) for each process framework activity.

3.5.1 An Overview of Factors That Affect Quality


Over 25 years ago, McCall and Cavano defined a set of quality factors that were a first step toward the development of metrics for software quality. These factors assess software from three distinct points of view: (1) product operation (using it), (2) product revision (changing it), and (3) product transition (modifying it to work in a different environment, i.e., "porting" it). In their work, the authors describe the relationship between these quality factors (what they call a framework) and other aspects of the software engineering process: First, the framework provides a mechanism for the project manager to identify what qualities are important. These qualities are attributes of the software in addition to its functional correctness and performance which have life cycle implications. Such factors as maintainability and portability have been shown in recent years to have significant life cycle cost impact . . . Secondly, the framework provides a means for quantitatively assessing how well the development is progressing relative to the quality goals established . . . Thirdly, the framework provides for more interaction of QA personnel throughout the development effort . . . Lastly, quality assurance personnel can use indications of poor quality to help identify [better] standards to be enforced in the future. If a software organization adopts a set of quality factors as a checklist for assessing software quality, it is likely that software built today will still exhibit quality well into the first few decades of this century.

3.5.2 Measuring Quality


Although there are many measures of software quality, correctness, maintainability, integrity, and usability provide useful indicators for the project team. Gilb suggests definitions and measures for each.
Correctness. A program must operate correctly or it provides little value to its users. Correctness is the degree to which the software performs its required function. When considering the overall quality of a software product, defects are those problems reported by a user of the program after the program has been released for general use. For quality assessment purposes, defects are counted over a standard period of time, typically one year.
Maintainability. Software maintenance accounts for more effort than any other software engineering activity. Maintainability is the ease with which a program can be corrected if an error is encountered, adapted if its environment changes, or enhanced if the customer desires a change in requirements. A simple time-oriented metric is mean-time-to-change (MTTC), the time it takes to analyze the change request, design an appropriate modification, implement the change, test it, and distribute the change to all users. On average, programs that are maintainable will have a lower MTTC (for equivalent types of changes) than programs that are not maintainable.
Integrity. To measure integrity, two additional attributes must be defined: threat and security. Threat is the probability that an attack of a specific type will occur within a given time. Security is the probability (which can be estimated or derived from empirical evidence) that an attack of a specific type will be repelled. The integrity of a system can then be defined as

integrity = summation [(1 - threat) X (1 - security)]

where threat and security are summed over each type of attack.
Usability. Usability is an attempt to quantify user-friendliness and can be measured in terms of four characteristics: (1) the physical and/or intellectual skill required to learn the system, (2) the time required to become moderately efficient in the use of the system, (3) the net increase in productivity (over the approach that the system replaces) measured when the system is used by someone who is moderately efficient, and (4) a subjective assessment (sometimes obtained through a questionnaire) of users' attitudes toward the system.
The four factors just described are only a sampling of those that have been proposed as measures for software quality.
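As an illustration of the integrity measure, the summation can be evaluated over a set of attack types. The sketch below follows the definition given above; the attack types and their threat and security probabilities are invented for demonstration.

# Hypothetical (threat, security) probabilities for each attack type
attacks = {
    "spoofing":          (0.25, 0.95),
    "denial of service": (0.10, 0.90),
    "data tampering":    (0.05, 0.99),
}

# integrity = summation of (1 - threat) X (1 - security) over attack types,
# per the definition in the text above
integrity = sum((1 - threat) * (1 - security) for threat, security in attacks.values())
print(f"integrity = {integrity:.4f}")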

3.5.3 Defect Removal Efficiency


A quality metric that provides benefit at both the project and process level is defect removal efficiency (DRE). In essence, DRE is a measure of the filtering ability of quality assurance and control activities as they are applied throughout all process framework activities. When considered for a project as a whole, DRE is defined in the following manner: DRE = E / (E + D) where E is the number of errors found before delivery of the software to the end-user and D is the number of defects found after delivery. The ideal value for DRE is 1. That is, no defects are found in the software. Realistically, D will be greater than 0, but the value of DRE can still approach 1. As E increases, the overall value of DRE begins to approach 1. In fact, as E increases, it is likely that the final value of D will decrease. If used as a metric that provides an indicator of the filtering ability of quality control and assurance activities, DRE encourages a software project team to institute techniques for finding as many errors as possible before delivery.
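The DRE computation itself is a one-line ratio. A minimal sketch follows, using invented error and defect counts for illustration.

def defect_removal_efficiency(errors_before_delivery: int, defects_after_delivery: int) -> float:
    """DRE = E / (E + D), as defined above."""
    e, d = errors_before_delivery, defects_after_delivery
    return e / (e + d)

# Hypothetical counts: 134 errors found before release, 29 defects reported after
print(f"DRE = {defect_removal_efficiency(134, 29):.3f}")  # roughly 0.82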

3.6 Integrating Metrics within the Software Engineering Process

The majority of software developers still do not measure, and sadly, most have little desire to begin. Attempting to collect measures where none had been collected in the past often precipitates resistance.


Realistically, though, establishing a successful company-wide software metrics program is hard work.

3.6.1 Arguments for Software Metrics


It is important to measure the process of software engineering and the product that it produces. By requesting and evaluating productivity and quality measures, senior management can establish meaningful goals for improvement of the software engineering process. If the process through which software is developed can be improved, a direct impact on the bottom line can result. But to establish goals for improvement, the current status of software development must be understood. Hence, measurement is used to establish a process baseline from which improvements can be assessed. The day-to-day rigors of software project work leave little time for strategic thinking. Software project managers are concerned with more mundane issues: developing meaningful project estimates, producing higher-quality systems, getting product out the door on time. By using measurement to establish a project baseline, each of these issues becomes more manageable. Additionally, the collection of quality metrics enables an organization to "tune" its software process to remove the "vital few" causes of defects that have the greatest impact on software development. At the project and technical levels, software metrics provide immediate benefit. As the software design is completed, most developers would be anxious to obtain answers to questions such as: Which user requirements are most likely to change? Which components in this system are most error prone? How much testing should be planned for each component? How many errors (of specific types) can I expect when testing commences?

3.6.2 Establishing a Baseline


By establishing a metrics baseline, benefits can be obtained at the process, project, and product levels. The metrics baseline consists of data collected from past software development projects and can be as simple as the table presented earlier or as complex as a comprehensive database containing dozens of project measures and the metrics derived from them. To be an effective aid in process improvement and/or cost and effort estimation, baseline data must have the following attributes: (1) data must be reasonably accurate; "guesstimates" about past projects are to be avoided; (2) data should be collected for as many projects as possible; (3) measures must be consistent; and (4) applications should be similar to the work that is to be estimated; it makes little sense to use a baseline for batch information systems work to estimate a real-time, embedded application.


3.6.3 Metrics Collection, Computation, and Evaluation

The process for establishing a baseline consists of three steps: metrics collection, computation, and evaluation. Ideally, the data needed to establish a baseline have been collected in an ongoing manner. Sadly, this is rarely the case. Therefore, data collection requires a historical investigation of past projects to reconstruct required data. Once measures have been collected (unquestionably the most difficult step), metrics computation is possible. Depending on the breadth of measures collected, metrics can span a broad range of LOC or FP metrics as well as other quality- and project-oriented metrics. Finally, metrics must be evaluated and applied during estimation, technical work, project control, and process improvement. Metrics evaluation focuses on the underlying reasons for the results obtained and produces a set of indicators that guide the project or process.


UNIT 4 SOFTWARE PROJECT PLANNING

4.1 Introduction
4.2 Project Planning Objectives
4.3 Software Scope
4.3.1 Obtaining Information Necessary for Scope
4.3.2 Feasibility
4.4 Resources
4.4.1 Human Resources
4.4.2 Reusable Software Resources
4.4.3 Environmental Resources
4.5 Software Project Estimation
4.6 Decomposition Techniques
4.6.1 Software Sizing
4.6.2 Problem-Based Estimation
4.6.3 Process-Based Estimation
4.7 Empirical Estimation Models
4.7.1 The Structure of Estimation Models
4.7.2 The COCOMO Model
4.7.3 The Software Equation
4.8 The Make / Buy Decision
4.8.1 Creating a Decision Tree
4.8.2 Outsourcing

4.1 Introduction

The software project management process begins with a set of activities that are collectively called Project Planning. The first of these activities is estimation.


Estimation needs to be conducted in a well-planned manner. Useful techniques for time and effort estimation do exist. Estimation lays a foundation for all other project planning activities, and project planning provides the road map for successful software engineering. Estimation of resources, cost, and schedule for a software development effort requires experience, access to good historical information, and the courage to commit to quantitative measures when qualitative data are all that exist. Project complexity has a strong effect on the uncertainty that is inherent in planning. Complexity is a relative measure that is affected by familiarity with past effort. A real-time application might be perceived as exceedingly complex to a software group that has previously developed only batch applications. Project size is another important factor that can affect the accuracy of estimates. As size increases, the interdependency among various elements of the software grows rapidly, and problem decomposition, an important approach to estimation, becomes more difficult. The degree of structural uncertainty also has an effect on estimation risk. Structure is the degree to which requirements have been solidified, the ease with which functions can be compartmentalized, and the hierarchical nature of the information that may be processed. The availability of historical information also determines estimation risk. By looking back, we can emulate things that worked and avoid areas where problems arose. When comprehensive software metrics are available for past projects, estimates can be made with greater assurance, schedules can be established to avoid past difficulties, and overall risk is reduced. Risk is measured by the degree of uncertainty in the quantitative estimates established for resources, cost, and schedule. If project scope is poorly understood or project requirements are subject to change, uncertainty and risk become high.

4.2 Project Planning Objectives


The objective of software project planning is to provide a framework that enables the manager to make reasonable estimates of resources, cost, and schedule. These estimates are made within a limited time frame at the beginning of a software project and should be updated regularly as the project progresses. In addition, estimates should attempt to define best case and worst case scenarios so that project outcomes can be bounded. The planning objective is achieved through a process of information discovery that leads to reasonable estimates.

4.3 Software Scope


The first activity in software project planning is the determination of software scope. Software scope describes the data and control to be processed, function, performance, constraints, interfaces, and reliability. Functions described in the statement of scope are evaluated and in some cases refined to provide more detail prior to the beginning of estimation. Because both cost and schedule estimates are functionally oriented, some degree of decomposition is often useful. Performance considerations encompass processing and response time requirements. Constraints identify limits placed on the software by external hardware, available memory, or other existing systems.

4.3.1 Obtaining Information Necessary for Scope


The most commonly used technique to bridge the communication gap between the customer and developer and to get the communication process started is to conduct a preliminary meeting or interview. The first meeting between the software engineer (the analyst) and the customer can be likened to the awkwardness of a first date between two adolescents. Gause and Weinberg suggest that the analyst start by asking context-free questions; that is, a set of questions that will lead to a basic understanding of the problem, the people who want a solution, the nature of the solution desired, and the effectiveness of the first encounter itself. The first set of context-free questions focuses on the customer, the overall goals and benefits. The next set of questions enables the analyst to gain a better understanding of the problem and the customer to voice any perceptions about a solution. The final set of questions focuses on the effectiveness of the meeting.

4.3.2 Feasibility
Once scope has been identified, it is reasonable to ask: Can we build software to meet this scope? Is the project feasible? All too often, software engineers rush past these questions (or are pushed past them by impatient managers or customers), only to become mired in a project that is doomed from the onset. Projects on the margins of the team's past experience are not so easy to assess. A team may have to spend several months discovering what the central, difficult-to-implement requirements of a new application actually are. The feasibility team ought to carry initial architecture and design of the high-risk requirements to the point at which it can answer these questions. In some cases, when the team gets negative answers, a reduction in requirements may be negotiated.

4.4 Resources

The second software planning task is estimation of the resources required to accomplish the software development effort. Figure below illustrates development resources as a pyramid.

(Figure: the resources pyramid, with people at the top, reusable software components in the middle, and hardware/software tools at the base.)

The development environment (hardware and software tools) sits at the foundation of the resources pyramid and provides the infrastructure to support the development effort. At a higher level, we encounter reusable software components: software building blocks that can dramatically reduce development costs and accelerate delivery. At the top of the pyramid is the primary resource: people. Each resource is specified with four characteristics: a description of the resource, a statement of availability, the time when the resource will be required, and the duration of time that the resource will be applied. The last two characteristics can be viewed as a time window. Availability of the resource for a specified window must be established at the earliest practical time.

4.4.1 Human Resources


The planner begins by evaluating scope and selecting the skills required to complete development. Both organizational position and specialty are specified. For relatively small projects (one person-year or less), a single individual may perform all software engineering tasks, consulting with specialists as required. The number of people required for a software project can be determined only after an estimate of development effort is made.

4.4.2 Reusable Software Resources


Component-based software engineering (CBSE) emphasizes reusability, that is, the creation and reuse of software building blocks [HOO91]. Such building blocks, often called components, must be catalogued for easy reference, standardized for easy application, and validated for easy integration. Bennatan suggests four software resource categories that should be considered as planning proceeds:
Off-the-shelf components. Existing software that can be acquired from a third party or that has been developed internally for a past project. COTS (commercial off-the-shelf) components are purchased from a third party, are ready for use on the current project, and have been fully validated.
Full-experience components. Existing specifications, designs, code, or test data developed for past projects that are similar to the software to be built for the current project. Members of the current software team have had full experience in the application area represented by these components. Therefore, modifications required for full-experience components will be relatively low-risk.
Partial-experience components. Existing specifications, designs, code, or test data developed for past projects that are related to the software to be built for the current project but will require substantial modification. Members of the current software team have only limited experience in the application area represented by these components. Therefore, modifications required for partial-experience components have a fair degree of risk.
New components. Software components that must be built by the software team specifically for the needs of the current project.
The following guidelines should be considered by the software planner when reusable components are specified as a resource:
1. If off-the-shelf components meet project requirements, acquire them. The cost for acquisition and integration of off-the-shelf components will almost always be less than the cost to develop equivalent software. In addition, risk is relatively low.
2. If full-experience components are available, the risks associated with modification and integration are generally acceptable. The project plan should reflect the use of these components.
3. If partial-experience components are available, their use for the current project must be analyzed. If extensive modification is required before the components can be properly integrated with other elements of the software, proceed carefully: risk is high. The cost to modify partial-experience components can sometimes be greater than the cost to develop new components.

4.4.3 Environmental Resources


The environment that supports the software project, often called the software engineering environment (SEE), incorporates hardware and software. Hardware provides a platform that supports the tools (software) required to produce the work products that are an outcome of good software engineering practice. Because most software organizations have multiple constituencies that require access to the SEE, a project planner must prescribe the time window required for hardware and software and verify that these resources will be available. When a computer-based system is to be engineered, the software team may require access to hardware elements being developed by other engineering teams. For example, software for a numerical control (NC) device used on a class of machine tools may require a specific machine tool as part of the validation test step; a software project for advanced page layout may need a digital typesetting system at some point during development. Each hardware element must be specified by the software project planner.

4.5 Software Project Estimation

Software cost and effort estimation will never be an exact science. Too many variables (human, technical, environmental, political) can affect the ultimate cost of software and the effort applied to develop it. However, software project estimation can be transformed from a black art to a series of systematic steps that provide estimates with acceptable risk. To achieve reliable cost and effort estimates, a number of options arise:
1. Delay estimation until late in the project.
2. Base estimates on similar projects that have already been completed.
3. Use relatively simple decomposition techniques to generate project cost and effort estimates.
4. Use one or more empirical models for software cost and effort estimation.
Unfortunately, the first option, however attractive, is not practical. Cost estimates must be provided "up front." However, we should recognize that the longer we wait, the more we know, and the more we know, the less likely we are to make serious errors in our estimates. The second option can work reasonably well if the current project is quite similar to past efforts and other project influences are equivalent. Unfortunately, past experience has not always been a good indicator of future results. The remaining options are viable approaches to software project estimation. Ideally, the techniques noted for each option should be applied in tandem, each used as a cross-check for the other.


Decomposition techniques take a "divide and conquer" approach to software project estimation. By decomposing a project into major functions and related software engineering activities, cost and effort estimation can be performed in a stepwise fashion. Empirical estimation models can be used to complement decomposition techniques and offer a potentially valuable estimation approach in their own right. A model is based on experience (historical data) and takes the form d = f(vi), where d is one of a number of estimated values and vi are selected independent parameters. Automated estimation tools implement one or more decomposition techniques or empirical models. When combined with a graphical user interface, automated tools provide an attractive option for estimating. In such systems, the characteristics of the development organization and the software to be developed are described, and cost and effort estimates are derived from these data.

4.6 Decomposition Techniques

Software project estimation is a form of problem solving, and in most cases, the problem to be solved is too complex to be considered in one piece. For this reason, we decompose the problem, recharacterizing it as a set of smaller problems. Decomposition can be applied in two ways, decomposition of the problem or decomposition of the process, and estimation uses one or both forms of partitioning. But before an estimate can be made, the project planner must understand the scope of the software to be built and generate an estimate of its size.

4.6.1 Software Sizing


The accuracy of a software project estimate is predicated on a number of things: (1) the degree to which the planner has properly estimated the size of the product to be built; (2) the ability to translate the size estimate into human effort, calendar time, and dollars; (3) the degree to which the project plan reflects the abilities of the software team; and (4) the stability of product requirements and the environment that supports the software engineering effort. Putnam and Myers [PUT92] suggest four different approaches to the sizing problem:
Fuzzy logic sizing. This approach uses the approximate reasoning techniques that are the cornerstone of fuzzy logic. To apply this approach, the planner must identify the type of application, establish its magnitude on a qualitative scale, and then refine the magnitude within the original range. Although personal experience can be used, the planner should also have access to a historical database of projects so that estimates can be compared to actual experience.
Function point sizing. The planner develops estimates of the information domain characteristics.
Standard component sizing. Software is composed of a number of different standard components that are generic to a particular application area.
Change sizing. This approach is used when a project encompasses the use of existing software that must be modified in some way as part of a project. The planner estimates the number and type of modifications that must be accomplished. Using an effort ratio for each type of change, the size of the change may be estimated.

4.6.2 Problem-Based Estimation


LOC and FP data are used in two ways during software project estimation: (1) as an estimation variable to "size" each element of the software and (2) as baseline metrics collected from past projects and used in conjunction with estimation variables to develop cost and effort projections. LOC and FP estimation are distinct estimation techniques, yet both have a number of characteristics in common. The project planner begins with a bounded statement of software scope and from this statement attempts to decompose the software into problem functions that can each be estimated individually. LOC or FP is then estimated for each function. Alternatively, the planner may choose another component for sizing, such as classes or objects, changes, or business processes affected. Baseline productivity metrics are then applied to the appropriate estimation variable, and cost or effort for the function is derived. Function estimates are combined to produce an overall estimate for the entire project. In general, LOC/pm or FP/pm averages should be computed by project domain. That is, projects should be grouped by team size, application area, complexity, and other relevant parameters. Local domain averages should then be computed. When a new project is estimated, it should first be allocated to a domain, and then the appropriate domain average for productivity should be used in generating the estimate. When LOC is used as the estimation variable, decomposition is absolutely essential and is often taken to considerable levels of detail. This decomposition approach assumes that all functions can be decomposed into subfunctions that will resemble entries in a historical database. If this is not the case, then another sizing approach must be applied. The greater the degree of partitioning, the more likely reasonably accurate estimates of LOC can be developed. For FP estimates, decomposition works differently. Rather than focusing on function, each of the information domain characteristics (inputs, outputs, data files, inquiries, and external interfaces) is estimated. The resultant estimates can then be used to derive an FP value that can be tied to past data and used to generate an estimate. A three-point or expected value can then be computed. The expected value for the estimation variable (size), S, can be computed as a weighted average of the optimistic (s_opt), most likely (s_m), and pessimistic (s_pess) estimates. For example, S = (s_opt + 4s_m + s_pess) / 6 gives heaviest credence to the most likely estimate and follows a beta probability distribution.
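The three-point computation is easy to apply once the optimistic, most likely, and pessimistic sizes have been gathered. The sketch below is a minimal illustration; the LOC estimates and the baseline productivity figure are invented for demonstration.

def expected_size(s_opt: float, s_m: float, s_pess: float) -> float:
    """Beta-weighted three-point estimate: S = (s_opt + 4*s_m + s_pess) / 6."""
    return (s_opt + 4 * s_m + s_pess) / 6

# Hypothetical LOC estimates for one decomposed function
s = expected_size(s_opt=4600, s_m=6900, s_pess=8600)

# Assumed baseline productivity for this domain (LOC per person-month)
productivity = 620
print(f"expected size = {s:.0f} LOC, effort = {s / productivity:.1f} person-months")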


4.6.3 Process-Based Estimation


The most common technique for estimating a project is to base the estimate on the process that will be used. That is, the process is decomposed into a relatively small set of tasks and the effort required to accomplish each task is estimated. Like the problem-based techniques, process-based estimation begins with a delineation of software functions obtained from the project scope. A series of software process activities must be performed for each function. Once problem functions and process activities are melded, the planner estimates the effort that will be required to accomplish each software process activity for each software function. These data constitute the central matrix of the estimation table. Average labour rates (i.e., cost per unit effort) are then applied to the effort estimated for each process activity. It is very likely that the labour rate will vary for each task. Senior staff members, who are heavily involved in early activities, are generally more expensive than junior staff members involved in later design tasks, code generation, and early testing. Costs and effort for each function and software process activity are computed as the last step. If process-based estimation is performed independently of LOC or FP estimation, we now have two or three estimates for cost and effort that may be compared and reconciled. If both sets of estimates show reasonable agreement, there is good reason to believe that the estimates are reliable. If, on the other hand, the results of these decomposition techniques show little agreement, further investigation and analysis must be conducted.
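The central matrix can be represented as effort (person-months) per function and activity, with cost obtained by applying a labour rate to each activity. The sketch below is a hypothetical illustration; the functions, activities, effort values, and labour rates are all invented.

# Hypothetical effort matrix: person-months per (function, activity)
activities = ["analysis", "design", "code", "test"]
effort = {
    "user interface":    [0.50, 2.50, 0.40, 5.00],
    "database handling": [0.75, 4.00, 0.60, 3.00],
    "report generation": [0.50, 3.00, 0.50, 1.50],
}

# Hypothetical average labour rate ($ per person-month) for each activity
rate = {"analysis": 8000, "design": 9000, "code": 6000, "test": 7000}

total_effort = sum(sum(row) for row in effort.values())
total_cost = sum(
    pm * rate[activity]
    for row in effort.values()
    for activity, pm in zip(activities, row)
)
print(f"total effort = {total_effort:.2f} person-months, total cost = ${total_cost:,.0f}")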

4.7 Empirical Estimation Models

An estimation model for computer software uses empirically derived formulas to predict effort as a function of LOC or FP. The empirical data that support most estimation models are derived from a limited sample of projects. For this reason, no estimation model is appropriate for all classes of software and in all development environments. Therefore, the results obtained from such models must be used judiciously.

4.7.1 The Structure of Estimation Models


A typical estimation model is derived using regression analysis on data collected from past software projects. The overall structure of such models takes the form

E = A + B X (ev)^C

where A, B, and C are empirically derived constants, E is effort in person-months, and ev is the estimation variable (either LOC or FP). In addition to this relationship, the majority of estimation models have some form of project adjustment component that enables E to be adjusted by other project characteristics. Among the many LOC-oriented estimation models proposed in the literature are:

E = 5.2 X (KLOC)^0.91                 Walston-Felix model
E = 5.5 + 0.73 X (KLOC)^1.16          Bailey-Basili model
E = 3.2 X (KLOC)^1.05                 Boehm simple model
E = 5.288 X (KLOC)^1.047              Doty model for KLOC > 9


FP-oriented models have also been proposed. These include:

E = -13.39 + 0.0545 FP                Albrecht and Gaffney model
E = 60.62 X 7.728 X 10^-8 FP^3        Kemerer model
E = 585.7 + 15.12 FP                  Matson, Barnett, and Mellichamp model

A quick examination of these models indicates that each will yield a different result for the same values of LOC or FP.
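This divergence is easy to demonstrate. The sketch below evaluates the LOC-oriented models listed above for a single hypothetical size of 33 KLOC; the size is an assumption chosen only to show how widely the results can spread.

# LOC-oriented estimation models from the list above (E in person-months)
models = {
    "Walston-Felix":   lambda kloc: 5.2 * kloc ** 0.91,
    "Bailey-Basili":   lambda kloc: 5.5 + 0.73 * kloc ** 1.16,
    "Boehm simple":    lambda kloc: 3.2 * kloc ** 1.05,
    "Doty (KLOC > 9)": lambda kloc: 5.288 * kloc ** 1.047,
}

kloc = 33.0  # hypothetical estimated size
for name, model in models.items():
    print(f"{name:16s} E = {model(kloc):6.1f} person-months")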

4.7.2 The COCOMO Model


Barry Boehm introduced a hierarchy of software estimation models bearing the name COCOMO, for COnstructive COst MOdel. The original COCOMO model became one of the most widely used and discussed software cost estimation models in the industry. It has evolved into a more comprehensive estimation model, called COCOMO II. Like its predecessor, COCOMO II is actually a hierarchy of estimation models that address the following areas:
Application composition model. Used during the early stages of software engineering, when prototyping of user interfaces, consideration of software and system interaction, assessment of performance, and evaluation of technology maturity are paramount.
Early design stage model. Used once requirements have been stabilized and basic software architecture has been established.
Post-architecture-stage model. Used during the construction of the software.
Like function points, the object point is an indirect software measure that is computed using counts of the number of (1) screens, (2) reports, and (3) components likely to be required to build the application. Each object instance is classified into one of three complexity levels. Once complexity is determined, the number of screens, reports, and components is weighted according to a standard weighting table. The object point count is then determined by multiplying the original number of object instances by the weighting factor and summing to obtain a total object point count. When component-based development or general software reuse is to be applied, the percent of reuse (%reuse) is estimated and the object point count is adjusted:

NOP = (object points) X [(100 - %reuse) / 100]

where NOP is defined as new object points. To derive an estimate of effort based on the computed NOP value, a productivity rate, PROD = NOP/person-month, is determined for different levels of developer experience and development environment maturity. Once the productivity rate has been determined, an estimate of project effort can be derived as

estimated effort = NOP / PROD

In more advanced COCOMO II models, a variety of scale factors, cost drivers, and adjustment procedures are required.
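The application composition arithmetic reduces to two formulas. The sketch below assumes the weighted object point count, reuse percentage, and productivity rate have already been determined; all three values are invented for illustration.

def cocomo2_composition_effort(object_points: float, pct_reuse: float, prod: float) -> float:
    """NOP = object points X (100 - %reuse) / 100; estimated effort = NOP / PROD."""
    nop = object_points * (100 - pct_reuse) / 100
    return nop / prod

# Hypothetical inputs: 120 weighted object points, 20% reuse,
# and an assumed mid-range productivity rate PROD = 13 NOP per person-month
print(f"estimated effort = {cocomo2_composition_effort(120, 20, 13):.1f} person-months")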


4.7.3 The Software Equation


The software equation is a dynamic multivariable model that assumes a specific distribution of effort over the life of a software development project. The model has been derived from productivity data collected for over 4000 contemporary software projects. Based on these data, an estimation model of the form

E = [LOC X B^0.333 / P]^3 X (1 / t^4)

has been proposed, where
E = effort in person-months or person-years,
t = project duration in months or years,
B = special skills factor, and
P = productivity parameter that reflects: overall process maturity and management practices; the extent to which good software engineering practices are used; the level of programming languages used; the state of the software environment; the skills and experience of the software team; and the complexity of the application.
The productivity parameter can be derived for local conditions using historical data collected from past development efforts. It is important to note that the software equation has two independent parameters: (1) an estimate of size (in LOC) and (2) an indication of project duration in calendar months or years. To simplify the estimation process and use a more common form for their estimation model, Putnam and Myers suggest a set of equations derived from the software equation. Minimum development time is defined as

tmin = 8.14 (LOC/P)^0.43      in months, for tmin > 6 months
E = 180 B t^3                 in person-months, for E >= 20 person-months

Note that t in the second equation is represented in years.
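The simplified Putnam-Myers forms can be evaluated directly once LOC, B, and P have been chosen. The sketch below is illustrative only: the size, special skills factor, and productivity parameter are assumed values, not figures from the text.

def minimum_development_time(loc: float, p: float) -> float:
    """t_min = 8.14 * (LOC / P)^0.43, in months (valid for t_min > 6 months)."""
    return 8.14 * (loc / p) ** 0.43

# Hypothetical inputs: 33,200 LOC, special skills factor B = 0.28,
# productivity parameter P = 12,000 (an assumed value for local conditions)
loc, b, p = 33_200, 0.28, 12_000

t_min_months = minimum_development_time(loc, p)
t_years = t_min_months / 12
effort_pm = 180 * b * t_years ** 3  # E = 180 * B * t^3, with t in years
print(f"t_min = {t_min_months:.1f} months, E = {effort_pm:.0f} person-months")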

4.8 The Make / Buy Decision

Software engineering managers are faced with a make/buy decision that can be further complicated by a number of acquisition options: (1) software may be purchased (or licensed) off-the-shelf, (2) full-experience or partial-experience software components may be acquired and then modified and integrated to meet specific needs, or (3) software may be custom built by an outside contractor to meet the purchaser's specifications.


The steps involved in the acquisition of software are defined by the criticality of the software to be purchased and the end cost. In some cases, it is less expensive to purchase and experiment than to conduct a lengthy evaluation of potential software packages. For more expensive software products, the following guidelines can be applied:
1. Develop specifications for function and performance of the desired software. Define measurable characteristics whenever possible.
2. Estimate the internal cost to develop and the delivery date.
3a. Select three or four candidate applications that best meet your specifications.
3b. Select reusable software components that will assist in constructing the required application.
4. Develop a comparison matrix that presents a head-to-head comparison of key functions. Alternatively, conduct benchmark tests to compare candidate software.
5. Evaluate each software package or component based on past product quality, vendor support, product direction, reputation, and the like.
6. Contact other users of the software and ask for opinions.
In the final analysis, the make/buy decision is made based on the following conditions: (1) Will the delivery date of the software product be sooner than that for internally developed software? (2) Will the cost of acquisition plus the cost of customization be less than the cost of developing the software internally? (3) Will the cost of outside support (e.g., a maintenance contract) be less than the cost of internal support?

4.8.1 Creating a Decision Tree


The steps just described can be augmented using statistical techniques such as decision tree analysis. In this case, the software engineering organization can (1) build system X from scratch, (2) reuse existing partial-experience components to construct the system, (3) buy an available software product and modify it to meet local needs, or (4) contract the software development to an outside vendor. The expected value for cost, computed along any branch of the decision tree, is

expected cost = sum of (path probability)i X (estimated path cost)i

where i is the decision tree path. It is important to note, however, that many criteria, not just cost, must be considered during the decision-making process. Availability, experience of the developer/vendor/contractor, conformance to requirements, local "politics," and the likelihood of change are but a few of the criteria that may affect the ultimate decision to build, reuse, buy, or contract.
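Expected cost along each option is simply the probability-weighted sum of its branch costs. The sketch below evaluates a hypothetical decision tree; every probability and cost is invented for illustration, and each option's branch probabilities sum to 1.

# Hypothetical decision tree: each option has (probability, estimated cost) branches
options = {
    "build":    [(0.30, 380_000), (0.70, 450_000)],  # simple vs. difficult
    "reuse":    [(0.40, 275_000), (0.60, 310_000)],  # minor vs. major changes
    "buy":      [(0.70, 210_000), (0.30, 400_000)],  # minor vs. major changes
    "contract": [(0.60, 350_000), (0.40, 500_000)],  # without vs. with contract change
}

for option, branches in options.items():
    expected_cost = sum(probability * cost for probability, cost in branches)
    print(f"{option:8s} expected cost = ${expected_cost:,.0f}")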


4.8.2 Outsourcing
In concept, outsourcing is extremely simple. Software engineering activities are contracted to a third party who does the work at lower cost and, hopefully, higher quality. Software work conducted within a company is reduced to a contract management activity. The decision to outsource can be either strategic or tactical. At the strategic level, business managers consider whether a significant portion of all software work can be contracted to others. At the tactical level, a project manager determines whether part or all of a project can be best accomplished by subcontracting the software work. Regardless of the breadth of focus, the outsourcing decision is often a financial one. On the positive side, cost savings can usually be achieved by reducing the number of software people and the facilities (e.g., computers, infrastructure) that support them. On the negative side, a company loses some control over the software that it needs. Since software is a technology that differentiates its systems, services, and products, a company runs the risk of putting the fate of its competitiveness into the hands of a third party.


UNIT 5 RISK MANAGEMENT

5.1 Software Risks
5.2 Risk Identification
5.2.1 Product Size Risk
5.2.2 Business Impact Risk
5.2.3 Customer Related Risks
5.2.4 Process Risk
5.2.5 Technology Risk
5.2.6 Development Environment Risk
5.2.7 Risk Associated with Staff Size and Experience
5.2.8 Risk Components and Drivers
5.3 Risk Projection
5.3.1 Developing a Risk Table
5.3.2 Assessing Risk Impact
5.3.3 Risk Assessment
5.4 Risk Mitigation, Monitoring and Management

5.1 Software Risks

Although there has been considerable debate about the proper definition for software risk, there is general agreement that risk always involves two characteristics:
Uncertainty: the risk may or may not happen; that is, there are no 100% probable risks.
Loss: if the risk becomes a reality, unwanted consequences or losses will occur.
When risks are analyzed, it is important to quantify the level of uncertainty and the degree of loss associated with each risk. To accomplish this, different categories of risks are considered. Project risks threaten the project plan. That is, if project risks become real, it is likely that the project schedule will slip and that costs will increase. Project risks identify potential budgetary, schedule, personnel, resource, customer, and requirements problems and their impact on a software project.


Project complexity, size, and the degree of structural uncertainty were also defined as project (and estimation) risk factors. Technical risks threaten the quality and timeliness of the software to be produced. If a technical risk becomes a reality, implementation may become difficult or impossible. Technical risks identify potential design, implementation, interface, verification, and maintenance problems. In addition, specification ambiguity, technical uncertainty, technical obsolescence, and "leading-edge" technology are also risk factors. Technical risks occur because the problem is harder to solve than we thought it would be. Business risks threaten the viability of the software to be built. Business risks often jeopardize the project or the product. Candidates for the top five business risks are (1) building an excellent product or system that no one really wants (market risk), (2) building a product that no longer fits into the overall business strategy for the company (strategic risk), (3) building a product that the sales force doesn't understand how to sell, (4) losing the support of senior management due to a change in focus or a change in people (management risk), and (5) losing budgetary or personnel commitment (budget risk). It is extremely important to note that simple categorization won't always work. Some risks are simply unpredictable in advance. Another general categorization of risks has been proposed by Charette. Known risks are those that can be uncovered after careful evaluation of the project plan, the business and technical environment in which the project is being developed, and other reliable information sources. Predictable risks are extrapolated from past project experience. Unpredictable risks are the joker in the deck. They can and do occur, but they are extremely difficult to identify in advance.

5.2 Risk Identification

Risk identification is a systematic attempt to specify threats to the project plan. By identifying known and predictable risks, the project manager takes a first step toward avoiding them when possible and controlling them when necessary. There are two distinct types of risks for each of the categories that have been presented: Generic risks are a potential threat to every software project. Product-specific risks can be identified only by those with a clear understanding of the technology, the people, and the environment that is specific to the project at hand. To identify product-specific risks, the project plan and the software statement of scope are examined.


One method for identifying risks is to create a risk item checklist. The checklist can be used for risk identification and focuses on some subset of known and predictable risks in the following generic subcategories:
Product size: risks associated with the overall size of the software to be built or modified.
Business impact: risks associated with constraints imposed by management or the marketplace.
Customer characteristics: risks associated with the sophistication of the customer and the developer's ability to communicate with the customer in a timely manner.
Process definition: risks associated with the degree to which the software process has been defined and is followed by the development organization.
Development environment: risks associated with the availability and quality of the tools to be used to build the product.
Technology to be built: risks associated with the complexity of the system to be built and the "newness" of the technology that is packaged by the system.
Staff size and experience: risks associated with the overall technical and project experience of the software engineers who will do the work.
The risk item checklist can be organized in different ways. Questions relevant to each of the topics can be answered for each software project. The answers to these questions allow the planner to estimate the impact of risk.

5.2.1 Product Size Risks


Project risk is directly proportional to product size. The following risk item checklist identifies generic risks associated with product size:
Estimated size of the product in lines of code (LOC) or function points (FP)
Degree of confidence in the estimated size
Estimated size of the product in number of programs, files, and transactions
Percentage deviation in size of the product from the average for previous products
Size of the database created or used by the product
Number of users of the product
Number of projected changes to the requirements for the product, before delivery and after delivery
Amount of reused software

In each case, the information for the product to be developed must be compared to past experience. If a large percentage deviation occurs, or if numbers are similar but past results were considerably less than satisfactory, risk is high.


5.2.2 Business Impact Risks


The marketing department is driven by business considerations, and business considerations sometimes come into direct conflict with technical realities. The following risk item checklist identifies generic risks associated with business impact:
Effect of this product on company revenue
Visibility of this product to senior management
Reasonableness of delivery deadline
Number of customers who will use this product and consistency of their needs relative to the product
Number of other products/systems with which this product must be interoperable
Sophistication of end users
Amount and quality of product documentation that must be produced and delivered to the customer
Governmental constraints on the construction of the product
Costs associated with late delivery
Costs associated with a defective product

Each response for the product to be developed must be compared to past experience. If a large percentage deviation occurs, or if numbers are similar but past results were considerably less than satisfactory, risk is high.

5.2.3 Customer Related Risks


All customers are not equal. Customers have different needs. Some know what they want; others know what they don't want. Some customers are willing to sweat the details, while others are satisfied with a vague promise. Customers have different personalities. Some enjoy being customers: the tension, the negotiations, the psychological rewards of a good product. Others would prefer not to be customers at all. Some will happily accept almost anything that is delivered and make the very best of a poor product. Others will complain bitterly when quality is lacking; some will show their appreciation when quality is good; a few will complain no matter what.


Customers also have varied associations with their suppliers. Some know the product and producer well; others may be faceless, communicating with the producer only by written correspondence and a few telephone calls. Customers are often contradictory. They want everything for free, and often the producer is caught among the customer's own contradictions. A bad customer can have a profound impact on a software team's ability to complete a project on time and within budget. A bad customer represents a significant threat to the project plan and a substantial risk for the project manager. The following risk item checklist identifies generic risks associated with different customers:
Have you worked with the customer in the past?
Does the customer have a solid idea of what is required? Has the customer spent the time to write it down?
Will the customer agree to spend time in formal requirements gathering meetings to identify project scope?
Is the customer willing to establish rapid communication links with the developer?
Is the customer willing to participate in reviews?
Is the customer technically sophisticated in the product area?
Is the customer willing to let your people do their job, that is, will the customer resist looking over your shoulder during technically detailed work?
Does the customer understand the software process?

If the answer to any of these questions is no, further investigation should be undertaken to assess risk potential.

5.2.4 Process Risk


If the software process is ill-defined; if analysis, design, and testing are conducted in an ad hoc fashion; if quality is a concept that everyone agrees is important but no one acts to achieve it in any tangible way, then the project is at risk. The following questions are extracted from a workshop on the assessment of software engineering practice developed by R.S. Pressman & Associates, Inc.

Process Issues
Does your senior management support a written policy statement that emphasizes the importance of standard process for software development?


Has your organization developed a written description of the software process to be used on this project?
Are staff members signed up to the software process as it is documented and willing to use it?
Is the software process used for other projects?
Has your organization developed or acquired a series of software engineering training courses for managers and technical staff?
Are published software engineering standards provided for every software developer and software manager?
Have document outlines and examples been developed for all deliverables defined as part of the software process?
Are formal technical reviews of test procedures and test cases conducted regularly?
Are the results of each formal technical review documented, including errors found and resources used?
Is there some mechanism for ensuring that work conducted on a project conforms to software engineering standards?
Is configuration management used to maintain consistency among system/software requirements, design, code, and test cases?
Is a mechanism used for controlling changes to customer requirements that impact the software?
Is there a documented statement of work, a software requirements specification, and a software development plan for each subcontract?
Is a procedure followed for tracking and reviewing the performance of subcontracts?

Technical Issues
Are facilitated application specification techniques used to aid in communication between the customer and developer?
Are specific methods used for software analysis?
Do you use a specific method for data and architectural design?
Is more than 90 percent of your code written in a high-order language?


Are specific conventions for code documentation defined and used?
Do you use specific methods for test case design?
Are software tools used to support planning and tracking activities?
Are configuration management software tools used to control and track change activities throughout the software process?
Are software tools used to support the software analysis and design process?
Are tools used to create software prototypes?
Are software tools used to support the testing process?
Are software tools used to support the production and management of documentation?
Are quality metrics collected for all software projects?
Are productivity metrics collected for all software projects?

If a majority of the above questions are answered no, the software process is weak and the risk is high.

5.2.5 Technology Risk


Pushing the limits of the technology is challenging and exciting. It is the dream of almost every technical person, because it forces a practitioner to use his or her skills to the fullest. But it is also very risky. The following risk item checklist identifies generic risks associated with the technology to be built:
Is the technology to be built new to your organization?
Do the customer's requirements demand the creation of new algorithms or input or output technology?
Does the software interface with new or unproven hardware?
Does the software to be built interface with vendor-supplied software products that are unproven?
Does the software to be built interface with a database system whose function and performance have not been proven in this application area?
Is a specialized user interface demanded by product requirements?


Do requirements for the product demand the creation of program components that are unlike any previously developed by your organization?
Do requirements demand the use of new analysis, design, or testing methods?
Do requirements demand the use of unconventional software development methods, such as formal methods, AI-based approaches, and artificial neural networks?
Do requirements put excessive performance constraints on the product?
Is the customer uncertain that the functionality requested is doable?

If the answer to any of these questions is yes, further investigation should be undertaken to assess risk potential.

5.2.6 Development Environment Risks


Inappropriate or ineffective tools can blunt the efforts of even a skilled practitioner. The software engineering environment supports the project team, the process, and the product. But if the environment is flawed, it can be the source of significant risk. The following risk item checklist identifies generic risks associated with the development environment:
Is a software project management tool available?
Is a software process management tool available?
Are tools for analysis and design available?
Do analysis and design tools deliver methods that are appropriate for the product to be built?
Are compilers or code generators available and appropriate for the product to be built?
Are testing tools available and appropriate for the product to be built?
Are software configuration management tools available?
Does the environment make use of a database or repository?
Are all software tools integrated with one another?
Have members of the project team received training in each of the tools?
Are local experts available to answer questions about the tools?
Is on-line help and documentation for the tools adequate?


If a majority of the above questions are answered no, the software development environment is weak and the risk is high.

5.2.7 Risks Associated with Staff Size and Experience


Boehm suggests the following questions to assess risks associated with staff size and experience:
Are the best people available?
Do the people have the right combination of skills?
Are enough people available?
Is the staff committed for the entire duration of the project?
Will some project staff be working only part time on this project?
Does the staff have the right expectations about the job at hand?
Has the staff received the necessary training?
Will turnover among staff members be low enough to allow continuity?

If the answer to any of these questions is no, further investigation should be undertaken to assess risk potential.

5.2.8 Risk Components and Drivers


The U.S. Air Force has written a pamphlet that contains excellent guidelines for software risk identification and abatement. The Air Force approach requires that the project manager identify the risk drivers that affect the software risk components: performance, cost, support, and schedule. In the context of this discussion, the risk components are defined in the following manner:
Performance risk: the degree of uncertainty that the product will meet its requirements and be fit for its intended use.
Cost risk: the degree of uncertainty that the project budget will be maintained.
Support risk: the degree of uncertainty that the resultant software will be easy to correct, adapt, and enhance.
Schedule risk: the degree of uncertainty that the project schedule will be maintained and that the product will be delivered on time.
The impact of each risk driver on the risk component is divided into one of four impact categories: negligible, marginal, critical, or catastrophic.


5.3 Risk Projection

Risk projection, also called risk estimation, attempts to rate each risk in two ways: the likelihood or probability that the risk is real, and the consequences of the problems associated with the risk, should it occur. The project planner, along with other managers and technical staff, performs four risk projection activities: (1) establish a scale that reflects the perceived likelihood of a risk, (2) delineate the consequences of the risk, (3) estimate the impact of the risk on the project and the product, and (4) note the overall accuracy of the risk projection so that there will be no misunderstandings.

5.3.1 Developing a Risk Table


A risk table provides a project manager with a simple technique for risk projection. A project team begins by listing all risks (no matter how remote) in the first column of the table. This can be accomplished with the help of the risk item checklists referenced earlier. Each risk is categorized in the second column. The probability of occurrence of each risk is entered in the next column of the table. The probability value for each risk can be estimated by team members individually; individual team members are polled in round-robin fashion until their assessments of risk probability begin to converge.

Once the first four columns of the risk table have been completed, the table is sorted by probability and by impact. High-probability, high-impact risks percolate to the top of the table, and low-probability risks drop to the bottom. This accomplishes first-order risk prioritization. The project manager then studies the resultant sorted table and defines a cut-off line. The cut-off line implies that only risks that lie above the line will be given further attention. Risks that fall below the line are re-evaluated to accomplish second-order prioritization.

Risk impact and probability have a distinct influence on management concern. A risk factor that has a high impact but a very low probability of occurrence should not absorb a significant amount of management time. However, high-impact risks with moderate to high probability, and low-impact risks with high probability, should be carried forward into the risk analysis steps that follow. All risks that lie above the cut-off line must be managed. The column labelled RMMM contains a pointer into the Risk Mitigation, Monitoring and Management Plan or, alternatively, a collection of risk information sheets developed for all risks that lie above the cut-off.

Risk probability can be determined by making individual estimates and then developing a single consensus value. Although that approach is workable, more sophisticated techniques for determining risk probability have been developed. Risk drivers can be assessed on a qualitative probability scale that has the following values: impossible, improbable, probable, and frequent.

5.3.2 Assessing Risk Impact


Three factors affect the consequences that are likely if a risk does occur: its nature, its scope, and its timing. The nature of the risk indicates the problems that are likely if it occurs. The scope of a risk combines the severity (just how serious is it?) with its overall distribution (how much of the project will be affected, or how many customers are harmed?). Finally, the timing of a risk considers when and for how long the impact will be felt. The following steps are recommended to determine the overall consequences of a risk:
1. Determine the average probability of occurrence value for each risk component.
2. Determine the impact for each component based on the criteria shown.
3. Complete the risk table and analyze the results as described in the preceding sections.
The overall risk exposure, RE, is determined using the following relationship:
RE = P x C
where P is the probability of occurrence for a risk, and C is the cost to the project should the risk occur.
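To make the projection step concrete, the following sketch builds a small risk table, computes RE = P x C for each entry, and sorts by exposure so that high-probability, high-impact risks rise above a cut-off line. It is a minimal Python illustration; the risk names, probabilities, costs, and the cut-off value are assumptions invented for the example, not data from any real project.

# Minimal risk-table sketch: compute risk exposure RE = P x C and apply a cut-off.
# All risks, probabilities, and costs below are hypothetical examples.

risks = [
    # (risk description, category, probability P, cost C in currency units)
    ("Size estimate may be significantly low", "Product size",    0.60, 45_000),
    ("Larger number of users than planned",    "Business impact", 0.30, 20_000),
    ("End users resist the system",            "Customer",        0.40, 30_000),
    ("Staff turnover will be high",            "Staff",           0.70, 60_000),
]

def risk_exposure(probability, cost):
    """Risk exposure RE = P x C."""
    return probability * cost

# First-order prioritization: sort by exposure, highest first.
table = sorted(
    ((desc, cat, p, c, risk_exposure(p, c)) for desc, cat, p, c in risks),
    key=lambda row: row[-1],
    reverse=True,
)

CUT_OFF = 15_000  # management attention threshold (assumed)

for desc, cat, p, c, re_value in table:
    flag = "MANAGE" if re_value >= CUT_OFF else "below cut-off"
    print(f"{desc:42s} {cat:16s} P={p:.2f} C={c:>7} RE={re_value:>9.0f} {flag}")

In practice the cut-off is a management judgment; the point of the sketch is only that sorting by RE gives the first-order prioritization described above.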

5.3.3 Risk Assessment


At this point in the risk management process, we have established a set of triplets of the form [ri, li, xi], where ri is the risk, li is the likelihood (probability) of the risk, and xi is the impact of the risk. During risk assessment, we further examine the accuracy of the estimates that were made during risk projection, attempt to rank the risks that have been uncovered, and begin thinking about ways to control and/or avert risks that are likely to occur.

For assessment to be useful, a risk referent level must be defined. For most software projects, the risk components discussed earlier (performance, cost, support, and schedule) also represent risk referent levels. That is, there is a level of performance degradation, cost overrun, support difficulty, or schedule slippage that will cause the project to be terminated. If a combination of risks creates problems that cause one or more of these referent levels to be exceeded, work will stop. In the context of software risk analysis, a risk referent level has a single point, called the referent point or break point, at which the decision to proceed with the project and the decision to terminate it (because the problems are just too great) are equally weighted. Therefore, during risk assessment, we perform the following steps:
1. Define the risk referent levels for the project.
2. Attempt to develop a relationship between each (ri, li, xi) and each of the referent levels.
3. Predict the set of referent points that define a region of termination, bounded by a curve or areas of uncertainty.
4. Try to predict how compound combinations of risks will affect a referent level.
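As a rough illustration of step 2, the sketch below relates each (ri, li, xi) triplet to a single referent level, here a cost referent, by summing the expected cost contribution of each risk and comparing the total to the break point. The referent value and the triplets are assumptions made up for the example; real projects would work with curves and regions of uncertainty rather than a single threshold.

# Illustrative sketch: relate risk triplets (r_i, l_i, x_i) to a cost referent level.
# The referent value and the triplets are assumptions invented for this example.

COST_REFERENT = 100_000  # cost overrun at which termination is considered (assumed)

# (risk description, likelihood l_i, cost impact x_i)
triplets = [
    ("Key subcontractor delivers late",  0.5, 80_000),
    ("Reused components need rework",    0.4, 50_000),
    ("Requirements grow after baseline", 0.6, 40_000),
]

expected_overrun = sum(l * x for _, l, x in triplets)

print(f"Expected cost overrun from identified risks: {expected_overrun:,.0f}")
if expected_overrun >= COST_REFERENT:
    print("Referent level exceeded: project viability must be re-examined.")
else:
    print("Below the cost referent level: continue, but keep monitoring the risks.")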

5.4 Risk Mitigation, Monitoring and Management

All of the risk analysis activities presented to this point have a single goal: to assist the project team in developing a strategy for dealing with risk. An effective strategy must consider three issues: risk avoidance, risk monitoring, and risk management and contingency planning.

If a software team adopts a proactive approach to risk, avoidance is always the best strategy. This is achieved by developing a plan for risk mitigation. Consider, for example, high staff turnover noted as a project risk. To mitigate this risk, project management must develop a strategy for reducing turnover. Among the possible steps to be taken are:
Meet with current staff to determine causes for turnover.
Mitigate those causes that are under our control before the project starts.
Once the project commences, assume turnover will occur and develop techniques to ensure continuity when people leave.
Organize project teams so that information about each development activity is widely dispersed.
Define documentation standards and establish mechanisms to ensure that documents are developed in a timely manner.
Conduct peer reviews of all work (so that more than one person is "up to speed").
Assign a backup staff member for every critical technologist.

As the project proceeds, risk monitoring activities commence. The project manager monitors factors that may provide an indication of whether the risk is becoming more or less likely. In the case of high staff turnover, the following factors can be monitored:
General attitude of team members based on project pressures.
The degree to which the team has jelled.
Interpersonal relationships among team members.
Potential problems with compensation and benefits.
The availability of jobs within the company and outside it.
In addition to monitoring these factors, the project manager should monitor the effectiveness of the risk mitigation steps. For example, a risk mitigation step noted above called for the definition of documentation standards and mechanisms to ensure that documents are developed in a timely manner. This is one mechanism for ensuring continuity, should a critical individual leave the project. The project manager should monitor documents carefully to ensure that each can stand on its own and that each imparts information that would be necessary if a newcomer were forced to join the software team somewhere in the middle of the project.

Risk management and contingency planning assume that mitigation efforts have failed and that the risk has become a reality. It is important to note that RMMM steps incur additional project cost. For example, spending the time to "back up" every critical technologist costs money. Part of risk management, therefore, is to evaluate when the benefits accrued by the RMMM steps are outweighed by the costs associated with implementing them. In essence, the project planner performs a classic cost/benefit analysis.


UNIT 6

SOFTWARE REQUIREMENT DEFINITION

6.1 Software Requirement Specification 6.2 Formal Specification Techniques 6.2.1 Relational Notations 6.2.2 State-Oriented Notations 6.3 Languages and Processors for Requirement Specification 6.3.1 PSL/PSA 6.3.2 RSL / REVS 6.3.3 Structured Analysis and Design Technique (SADT) 6.3.4 Structured System Analysis (SSA) 6.3.5 Gist

6.1 Software Requirement Specification

The analysis phase of software development involves planning and software requirement definition. The outcome of planning is recorded in the System Definition, the Project Plan, and the preliminary User's Manual. The Software Requirement Specification records the outcome of the software requirements definition activities. The software requirements specification is a technical specification of requirements for the software product. The goal of software requirements definition is to completely and consistently specify the technical requirements for the software product in a concise and unambiguous manner. The Software Requirement Specification is based on the System Definition. High-level requirements specified during initial planning are elaborated and made more specific in order to characterize the features that the software product will incorporate. The format of the Software Requirement document is presented as follows:
Section 1: Product Overview and Summary
Section 2: Development, Operating and Maintenance Environments
Section 3: External Interfaces and Data Flow
Section 4: Functional Requirements
Section 5: Performance Requirements


Section 6: Exception Handling
Section 7: Early Subsets and Implementation Priorities
Section 8: Foreseeable Modifications and Enhancements
Section 9: Acceptance Criteria
Section 10: Design Hints and Guidelines
Section 11: Cross-Reference Index
Section 12: Glossary of Terms

Section 1 and Section 2 of the requirements document present an overview of product features and summarize the processing environments for development, operation, and maintenance of the product. Section 3 specifies the externally observable characteristics of the software product. It includes user displays and report formats, a summary of user commands and options, data flow diagrams, and a data dictionary. Specifications for the user interface displays and reports are refinements of information contained in the System Definition and the preliminary User's Manual. High-level data flow diagrams and the data dictionary are delivered at the time this section is written. The data flow diagrams specify the data sources and data sinks, the data stores, the transformations to be performed on the data, and the flow of data between sources, sinks, transformations, and stores. A data store is a conceptual data structure, in the sense that physical implementation details are suppressed; only the logical characteristics of data are emphasized on a data flow diagram. Data flow diagrams can be depicted informally, as shown in the figure below.

[Figure: informal data flow diagram relating user commands (create, delete, analysis request) to a user interface process, design-entity processing (DLP), and a data management system (DMS), with reports and retrieved objects as outputs.]

The data flow diagrams can be written formally by using a special notation, where data sources and data sinks are depicted by shaded rectangles, transformations by ordinary rectangles, and data stores by open-ended rectangles. The arcs in a data flow diagram specify data flow; they are labeled with the names of data items whose characteristics are specified in the data dictionary. The following figure shows a formal data flow diagram.

[Figure: formal data flow diagram of an Order Processing System. Data sources and sinks: Customers, Vendors, Warehouse. Transformations (processing nodes): Verify Order, Assemble Orders. Data stores: Inventory, Pending Orders. Data flows include customer requests, credit status, inventory details, valid orders, and batch orders.]

A data dictionary entry is illustrated in the following table:
NAME:          Create
WHERE USED:    SDLP
PURPOSE:       Create passes a user-created design entry to the SDLP processor for verification of syntax.
DERIVED FROM:  User Interface Processor
SUBITEMS:      Name, Uses, Procedures, References
NOTES:         Create contains one complete user-created design entry.

Entries in the Data Dictionary include the name of the data item and attributes such as the data flow diagrams where it is used, its purpose, where it is derived from, its sub-items, and any notes that may be appropriate.


Section 4 of an SRS specifies the functional requirements for the software product. They are typically expressed in relational and state-oriented notations that specify relationships among inputs, outputs, and actions.

Section 5 specifies performance characteristics such as response times for various activities, processing times for various processes, throughput, primary and secondary memory constraints, and special items like extraordinary security constraints.

Section 6 specifies exception handling, including the actions to be taken and the messages to be displayed in response to undesired situations or events. A table of exception conditions and exception responses should be prepared.

Section 7 of the SRS specifies early subsets and implementation priorities for the system under development. The initial version may be a skeletal prototype that demonstrates basic user functions and provides a framework for evolution of the product. Each successive version can incorporate the capabilities of previous versions and provide additional processing functions.

Section 8 specifies the foreseeable modifications and enhancements that may be incorporated into the product following initial product release. Foreseeable product modifications might occur as a result of anticipated changes in project budget or project mission, as a result of new hardware acquisition, or as a result of experience gained from an initial release of the product.

Section 9 describes the acceptance criteria. It specifies the functional and performance tests that must be performed and the standards to be applied to source code, internal documentation, and external documents such as design specifications, test plans, the user's manual, operating principles, and the maintenance and installation procedures.

Section 10 of the SRS contains design hints and guidelines. The SRS is primarily concerned with the functional and performance aspects of a software product, and emphasis is placed on specifying product characteristics without implying how the product will provide those characteristics.

Section 11 relates product requirements to the sources of the information used in deriving the requirements. A cross-reference dictionary should be provided to index specific paragraph numbers in the SRS to specific paragraphs in the System Definition and the preliminary User's Manual, and to other sources of information.

Section 12 of the SRS provides the definitions of terms that may be unfamiliar to the customer and the product developers.

Desirable properties of the Software Requirement Specification are as follows:
1. Correct
2. Complete
3. Consistent
4. Unambiguous
5. Functional
6. Verifiable
7. Traceable
8. Easily changed


An incorrect or incomplete set of requirements can result in a software product that satisfies its requirements but does not satisfy customer needs. An inconsistent specification states contradictory requirements in different parts of the document, while an ambiguous requirement is subject to different interpretations by different people. Software requirements should be functional in nature, i.e. they should describe what is required without implying how the system will meet its requirements; this provides maximum flexibility for the product designers. Requirements must be verifiable from two points of view: it must be possible to verify that the requirements satisfy the customer's needs, and it must be possible to verify that the subsequent work products satisfy the requirements. Finally, the requirements should be indexed, segmented, and cross-referenced to permit easy use and easy modification. Ideally, every software requirement should be traceable to specific customer statements and to specific statements in the System Definition. Changes will occur, and project success often depends on the ability to incorporate changes without starting over. Cross-referencing can be accomplished by referring to the appropriate paragraph numbers in the appropriate documents.

6.2 Formal Specification Techniques

Specifying the functional characteristics of a software product is one of the most important activities to be accomplished during requirements analysis. Formal notations have the advantage of being concise and unambiguous; they support formal reasoning about the functional specifications and provide a basis for verification of the resulting software product. Both relational and state-oriented notations are used to specify the functional characteristics of software. Relational notations are based on the concepts of entities and attributes, whereas the state of the system is the information required to summarize the status of system entities at any particular point in time. Relational notations include implicit equations, recurrence relations, algebraic axioms, and regular expressions. State-oriented notations include decision tables, event tables, transition tables, finite-state mechanisms, and Petri nets. All these notations are formal, in the sense that they are concise and unambiguous.

6.2.1 Relational Notations


Implicit equations state the properties of a solution without stating a solution method. For example, the matrix inverse can be specified as
M x M' = I ± E
In the above equation, the matrix inverse is specified such that the matrix product of the original matrix M and its inverse M' yields the identity matrix I plus or minus the error matrix E, where E specifies the allowable computational errors. A complete specification of matrix inverse must include items such as matrix size, type of data elements, and degree of sparseness. An implicit specification of a square root function, SQRT, can be stated as


(for all real X) (0 <= X <= Y) [ABS(SQRT(X) ** 2 - X) < E]
The above equation states that for all real values of X in the closed range 0 to Y, computing the square root of X, squaring it, and subtracting X results in an error value that is within some permissible error range E.

Recurrence relations consist of an initial part called the basis and one or more recursive parts. The recursive part(s) describe the desired value of a function in terms of other values of the function. For example, successive Fibonacci numbers are formed as the sum of the previous two Fibonacci numbers, where the first one is defined as 0 and the second one as 1. The Fibonacci numbers can be defined by the recurrence:
F(0) = 0
F(1) = 1
F(N) = F(N - 1) + F(N - 2) for all N > 1
Recurrence relations are easily transformed into recursive programs.

Algebraic axioms specify fundamental properties of a system and provide a basis for deriving additional properties that are implied by the axioms; these additional properties are called theorems. In order to establish a valid mathematical system, the set of axioms must be complete and consistent, i.e. it must be possible to prove desired results using the axioms, and it must not be possible to prove contradictory results. Elegance of definition is achieved if the axioms are independent, but in practice independence of axioms is less important than completeness and consistency. An axiomatic approach can be used to specify functional properties of software systems; in particular, it can be used to specify abstract data types. The term abstract data type refers to the fact that permissible operations on the data objects are emphasized, while representation details of the data objects are suppressed. Specification of an abstract data type using algebraic axioms involves defining the syntax of the operations and specifying axiomatic relationships among the operations. The syntactic definition specifies the names, domains, and ranges of the operations to be performed on the data objects, and the axioms specify interactions among the operations.

The last-in-first-out (LIFO) property of STACK objects can be specified axiomatically as shown below. The type of data elements being manipulated, ITEM, is a parameter in the specification, and the specification is representation independent. Operations NEW, PUSH, POP, TOP, and EMPTY are named to suggest their purposes:
NEW creates a new stack.
PUSH adds a new item to the top of a stack.
TOP returns a copy of the top item.
POP removes the top item.
EMPTY tests for an empty stack.


SYNTAX:
OPERATION   DOMAIN           RANGE
NEW:        ( )           -> STACK
PUSH:       (STACK, ITEM) -> STACK
POP:        (STACK)       -> STACK
TOP:        (STACK)       -> ITEM
EMPTY:      (STACK)       -> BOOLEAN

AXIOMS: (stk is of type STACK, itm is of type ITEM)
1. EMPTY(NEW) = true
2. EMPTY(PUSH(stk, itm)) = false
3. POP(NEW) = error
4. TOP(NEW) = error
5. POP(PUSH(stk, itm)) = stk
6. TOP(PUSH(stk, itm)) = itm

The definitions do not depend on the particular names chosen for the operations. The axioms shown above can be stated in English as:
1. A new stack is empty.
2. A stack is not empty immediately after pushing an item onto it.
3. Attempting to pop a new stack results in an error.
4. There is no top item on a new stack.
5. Pushing an item onto a stack and immediately popping it leaves the stack unchanged.
6. Pushing an item onto a stack and immediately requesting the top item returns the item just pushed onto the stack.
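The axioms above can double as executable checks against a candidate implementation. The sketch below is a minimal Python illustration, assuming an immutable tuple-based representation of STACK; it is one possible realization that satisfies the axioms, not the only one.

# Minimal sketch: a tuple-based representation of STACK checked against the
# algebraic axioms above. Axioms 3 and 4 correspond to raised errors here.

def new():
    """NEW: ( ) -> STACK"""
    return ()

def push(stk, itm):
    """PUSH: (STACK, ITEM) -> STACK"""
    return stk + (itm,)

def pop(stk):
    """POP: (STACK) -> STACK; popping a new (empty) stack is an error (axiom 3)."""
    if not stk:
        raise ValueError("POP(NEW) = error")
    return stk[:-1]

def top(stk):
    """TOP: (STACK) -> ITEM; a new stack has no top item (axiom 4)."""
    if not stk:
        raise ValueError("TOP(NEW) = error")
    return stk[-1]

def empty(stk):
    """EMPTY: (STACK) -> BOOLEAN"""
    return len(stk) == 0

# Spot-check axioms 1, 2, 5, and 6 for a sample stack and item.
stk, itm = push(new(), "a"), "b"
assert empty(new()) is True                # axiom 1
assert empty(push(stk, itm)) is False      # axiom 2
assert pop(push(stk, itm)) == stk          # axiom 5
assert top(push(stk, itm)) == itm          # axiom 6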


Regular expressions can be used to specify the syntactic structure of symbol strings. They provide a powerful and widely used notation in software engineering. The rules for forming regular expressions are quite simple:
1. Atoms: the basic symbols in the alphabet of interest form regular expressions.
2. Alternation: if R1 and R2 are regular expressions, then (R1 | R2) is a regular expression.
3. Composition: if R1 and R2 are regular expressions, then (R1 R2) is a regular expression.
4. Closure: if R1 is a regular expression, then (R1)* is a regular expression.
5. Completeness: nothing else is a regular expression.
An alphabet of basic symbols provides the atoms; the alphabet is made up of whatever symbols are of interest in the particular application. Alternation, (R1 | R2), denotes the union of the languages specified by R1 and R2, and composition, (R1 R2), denotes the language formed by concatenating the strings from R2 onto strings from R1. Closure, (R1)*, denotes the language formed by concatenating zero or more strings from R1. Examples of regular expressions follow:
1. Given atoms a and b, then a denotes the set {a} and b denotes the set {b}.
2. Given atoms a and b, then (a | b) denotes the set {a, b}.
3. Given atoms a, b and c, then (a | b | c) denotes the set {a, b, c}.
4. Given atoms a and b, then (a b) denotes the set {ab} containing the one element ab.
5. Given atoms a, b and c, then ((a b) c) denotes the set {abc} containing the one element abc.
6. Given atom a, then (a)* denotes the set {e, a, aa, aaa, ...}, where e denotes the empty string.
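Modern languages provide library support for testing whether a string belongs to the language denoted by a regular expression. The short Python sketch below checks membership for the expression (a | b)(c)*, built from the atoms a, b, and c used in the examples above; the sample strings are chosen only for illustration.

# Membership tests for the regular expression (a | b)(c)*: an 'a' or a 'b'
# followed by zero or more 'c' symbols. re.fullmatch requires the whole
# string to belong to the language.
import re

pattern = re.compile(r"(a|b)c*")

for s in ["a", "bccc", "ac", "", "cab", "abc"]:
    verdict = "in" if pattern.fullmatch(s) else "not in"
    print(f"{s!r:7} is {verdict} the language of (a | b)(c)*")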

6.2.2 State-Oriented Notations


Decision tables provide a mechanism for recording complex decision logic. They are widely used in data processing applications and have an extensively developed literature. The following figure shows the basic elements of a decision table:

                      Decision Rules
                 Rule 1   Rule 2   Rule 3   Rule 4
(condition stub) |        (condition entries)
(action stub)    |        (action entries)

A decision table is segmented into four quadrants: condition stub, condition entry, action stub, and action entry. The condition stub contains all of the conditions being examined. Condition entries are used to combine conditions into decision rules. The action stub describes the actions to be taken in response to decision rules, and the action entry quadrant relates decision rules to actions. The following table illustrates the format of a limited-entry decision table; in a limited-entry decision table, Y denotes yes, N denotes no, - denotes don't care, and X denotes perform action.

[Table: limited-entry decision table with conditions C1-C3 in the condition stub, actions A1-A3 in the action stub, and four decision rules whose condition entries are combinations of Y, N, and -, with X marking the actions to be performed.]


According to the table, orders are approved if the credit limit is not exceeded, or if the credit limit is exceeded but past experience is good, or if a special arrangement has been made. If none of these conditions holds, the order is rejected. The (Y, N, -) entries in each column of the condition entry quadrant form a decision rule. If more than one decision rule has identical (Y, N, -) entries, the table is said to be ambiguous. Ambiguous pairs of decision rules that specify identical actions are said to be redundant, and those specifying different actions are contradictory.

Event tables specify the actions to be taken when events occur under different sets of conditions. A two-dimensional event table relates actions to two variables: f(M, E) = A, where M denotes the current set of operating conditions, E is the event of interest, and A is the action to be taken.

[Table: event table relating the current mode (start-up, steady, shut-down, alarm) to events E13, E37, and E45; entries include single actions such as A16, sequential actions such as A14; A32, concurrent actions such as A6, A2, an X for an impossible configuration, and dashes where no action is required.]

The above table illustrates an event table in which the actions to be taken are related to the current mode of operation and the events that may occur within modes. Thus, if the system is in start-up mode SU and event E13 occurs, action A16 is to be taken, i.e. f(SU, E13) = A16. Special notations can be invented to suit particular situations. For example, actions separated by semicolons (A14; A32) might denote A14 followed sequentially by A32, while actions separated by commas (A6, A2) might denote concurrent activation of A6 and A2. Similarly, a dash (-) might indicate that no action is required, while an X might indicate an impossible system configuration.

A transition table is used to specify changes in the state of a system as a function of driving forces. The state of the system summarizes the status of all entities in the system at a particular time. Given the current state and the current conditions, the next state results: when in state Si, condition Cj results in a transition to state Sk, i.e. f(Si, Cj) = Sk. The table below illustrates the format of a simple transition table.

                    Current state
Current input       S0      S1
a                   S0      S1
b                   S1      S0
                    (next state)

Given current state S1 and current input b, the system will go to state S0, i.e. f(S1, b) = S0. Transition diagrams are alternative representations for transition tables. In a transition diagram, states become nodes in a directed graph and transitions are represented as arcs between nodes; arcs are labeled with the conditions that cause the transitions. Transition diagrams and transition tables are representations of finite state automata.
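A transition table translates almost directly into a lookup structure. The minimal Python sketch below encodes the two-state table above as a dictionary keyed by (current state, input); the input sequence is an arbitrary example.

# The simple transition table above as a dictionary: f(state, input) -> next state.
transition = {
    ("S0", "a"): "S0",
    ("S0", "b"): "S1",
    ("S1", "a"): "S1",
    ("S1", "b"): "S0",
}

state = "S0"
for symbol in "abba":
    state = transition[(state, symbol)]   # e.g. f(S1, b) = S0 on the third step
print(state)                              # final state for the input "abba" is S0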


[Figure: transition diagram corresponding to the transition table above. Nodes S0 and S1; an arc labeled a loops on each state, and arcs labeled b connect S0 to S1 and S1 to S0.]

The above figure illustrates the transition diagram corresponding to the transition table shown above. The theory of finite state automata is rich, complex, and highly developed. Decision tables, event tables, and transition tables are notations for specifying actions as functions of the conditions that initiate those actions.

Finite state mechanisms: data flow diagrams, regular expressions, and transition tables can be combined to provide a powerful finite state mechanism for functional specification of software systems.

[Figure: data flow diagram of process SPLIT. The input stream has the form Ds (D11)* (D12)* De; SPLIT routes D11 messages to file F6 and D12 messages to file F7.]

Present state   Input   Actions               Outputs     Next state
S0              Ds      Open F6, Open F7                  S1
S1              D11     Write F6              D11 : F6    S1
S1              D12     Close F6, Write F7    D12 : F7    S2
S1              De      Close F6, Close F7                S0
S2              D12     Write F7              D12 : F7    S2
S2              De      Close F7                          S0

The figure above specifies a system for which the incoming data stream consists of a start marker Ds, followed by zero or more D11 messages, followed by zero or more D12 messages, followed by an end-of-data marker De. The purpose of process SPLIT is to route D11 messages to file F6 and D12 messages to file F7. The transition table above specifies the action of SPLIT. Process SPLIT starts in initial state S0 and waits for the input Ds; any other input in state S0 is ignored. Arrival of input Ds in state S0 triggers the opening of files F6 and F7 and a transition to state S1. In S1, D11 messages are written to F6 until either a D12 message or a De message is received. On receipt of a D12 message, SPLIT closes F6, writes the message to F7, and goes to state S2. In state S2, SPLIT writes D12 messages to F7; then, on receipt of the end-of-data marker De, it closes F7 and returns to state S0 to await the next transmission.
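The behavior of SPLIT can be sketched directly from the transition table. The Python fragment below is a minimal illustration in which file operations are replaced by print statements so that the state logic stays visible; it is a sketch of the mechanism, not a production implementation.

# Finite-state sketch of process SPLIT, driven by the transition table above.
# Real file handling is replaced by prints so the state logic stays visible.

def split(messages):
    state = "S0"
    for msg in messages:
        if state == "S0" and msg == "Ds":
            print("open F6, open F7"); state = "S1"
        elif state == "S1" and msg == "D11":
            print("write D11 to F6")             # remain in S1
        elif state == "S1" and msg == "D12":
            print("close F6, write D12 to F7"); state = "S2"
        elif state == "S1" and msg == "De":
            print("close F6, close F7"); state = "S0"
        elif state == "S2" and msg == "D12":
            print("write D12 to F7")             # remain in S2
        elif state == "S2" and msg == "De":
            print("close F7"); state = "S0"
        # any other (state, input) pair is ignored, as in state S0 before Ds arrives

split(["Ds", "D11", "D11", "D12", "D12", "De"])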


One drawback of finite-state mechanisms is the so-called state explosion phenomenon. Complex systems may have large numbers of states and many combinations of input data, and specifying the behavior of a software system for all combinations of current state and current input can become unwieldy.

Petri nets provide a graphical representation technique. A Petri net is represented as a bipartite graph; the two types of nodes in the net are called places and transitions, and places are marked by tokens. The firing rule has two aspects: a transition is enabled if every input place has at least one token, and an enabled transition can fire; when a transition fires, each input place of that transition loses one token and each output place of that transition gains one token. A marked Petri net is formally defined as a quadruple consisting of a set of places P, a set of transitions T, a set of arcs A, and a marking M:
C = (P, T, A, M), where
P = {p1, p2, p3, ..., pm}
T = {t1, t2, t3, ..., tn}
A is a subset of (P x T) U (T x P)
M assigns a non-negative number of tokens to each place
The following figure illustrates a Petri net model of concurrent processes.

[Figure: Petri net model of concurrent processes. Transition t0 is followed by a co-begin that starts tasks t1, t2, and t3 in parallel; a co-end waits for all three before t4. Places p0 through p8 hold the tokens that synchronize the tasks; initially p0 holds one token.]


In the figure, the Petri net models the indicated computation; each transition in the net corresponds to a task activation, and the places are used to synchronize processing. Completion of t0 removes the initial token from p0 and places a token on p1, which enables the co-begin. Firing of the co-begin removes the token from p1 and places a token on each of p2, p3, and p4. This enables t1, t2, and t3; they can fire simultaneously or in any order, which corresponds to concurrent execution of tasks t1, t2, and t3. When each of tasks t1, t2, and t3 completes its processing, a token is placed on the corresponding output place p5, p6, or p7. The co-end is not enabled until all three tasks have completed their processing. Firing of the co-end removes the tokens from p5, p6, and p7 and places a token on p8, enabling the next transition to fire.
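The firing rule described above is easy to animate. The sketch below is a small, generic Python illustration of enabling and firing for a two-transition net; the place and transition names and the initial marking are invented for the example and are not the ones in the figure.

# Generic sketch of the Petri net firing rule: a transition is enabled when every
# input place holds at least one token; firing moves one token along each arc.

marking = {"p1": 1, "p2": 1, "p3": 0}          # tokens per place (example values)
transitions = {
    "t1": {"inputs": ["p1", "p2"], "outputs": ["p3"]},
    "t2": {"inputs": ["p3"],       "outputs": ["p1"]},
}

def enabled(name):
    return all(marking[p] >= 1 for p in transitions[name]["inputs"])

def fire(name):
    assert enabled(name), f"{name} is not enabled"
    for p in transitions[name]["inputs"]:
        marking[p] -= 1
    for p in transitions[name]["outputs"]:
        marking[p] += 1

fire("t1")                 # consumes tokens from p1 and p2, produces one on p3
print(marking)             # {'p1': 0, 'p2': 0, 'p3': 1}
print(enabled("t2"))       # True: p3 now carries a token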

6.3 Languages and Processors for Requirement Specification


A number of special-purpose languages and processors have been developed to permit the concise statement and automated analysis of requirements specifications for software. Some specification languages are graphic in nature, while others are textual, but all are relational. Some specification languages are manually applied and others have automated processors.

6.3.1 PSL/PSA
The Problem Statement Language (PSL) and Problem Statement Analyzer (PSA) were developed as components of the ISDOS project; the PSL/PSA system is also sometimes referred to as the ISDOS system. The objective of PSL is to permit expression of much of the information that commonly appears in an SRS. In PSL, a system description can be divided into eight major aspects:
1. System input/output flow
2. System structure
3. Data structure
4. Data derivation
5. System size and volume
6. System dynamics
7. System properties
8. Project management

PSL contains a number of types of objects and relationships to permit description of these eight aspects. The system input/output flow aspect deals with the interaction between a system and its environment. System structure is concerned with the hierarchies among objects in a system. The data structure aspect includes all the relationships that exist among data used and/or manipulated by a system, as seen by the users of the system. The data derivation aspect of the system description specifies which data objects are involved in particular processes in the system. The system size and volume aspect is concerned with the size of the system and those factors that influence the volume of processing required. The system dynamics aspect of a system description presents the manner in which the system behaves over time. The project management aspect requires that project-related information, as well as product-related information, be provided; this involves identification of the people involved, their responsibilities, schedules, cost estimates, etc.


The PSA is an automated analyzer for processing requirements stated in PSL. PSA operates on a database of information collected from a PSL description. The PSA system can provide reports in four categories: Database Modification Reports, Reference Reports, Summary Reports, and Analysis Reports. The following figure shows the structure of PSA.

[Figure: structure of PSL/PSA. Statements in the Problem Statement Language (PSL) and commands in the command language are fed to the Problem Statement Analyzer (PSA), which runs on the operating system, maintains the analyzer database, and produces reports and messages.]

Database modification reports list the changes that have been made since the last report, together with diagnostic and warning messages; these reports provide a record of changes for error correction and recovery. Reference reports include the Name List Report, which lists all the objects in the database with their types and dates of last change. Summary reports present information collected from several relationships: the Database Summary Report provides project management information by listing the total number of objects of various types and how much has been said about them; the Structure Report shows complete and partial hierarchies; and the External Picture Report depicts data flow in graphical form. Analysis reports include the Contents Comparison Report, which compares the similarity of inputs and outputs; the Data Processing Interaction Report, which can be used to detect gaps in information flow and unused data objects; and the Processing Chain Report, which shows the dynamic behavior of the system.

PSL/PSA is a useful tool for documenting and communicating software requirements; it not only supports analysis, it also supports design. It is sometimes criticized because it does not incorporate any specific problem-solving techniques. PSA does, however, provide the capability to define new PSL constructs and new report formats, which allows the system to be tailored to specific problem domains and specific solution methods.

6.3.2 RSL / REVS


The Requirement Engineering Validation System (REVS) processes and analyzes the Requirement Statement Language (RSL) statements. Both RSL and REVS are components of the


Software Requirements Engineering Methodology (SREM). Many concepts in RSL are based on PSL. The fundamental characteristic of RSL is the flow-oriented approach used to describe real-time systems. RSL models the stimulus-response nature of process-control systems: each flow originates with a stimulus and continues to the final response. The flow approach also provides for direct testability of requirements. Flows are specified in RSL as requirement networks (R-NETS), which have both graphical and textual representations. REVS operates on RSL statements. It consists of three major components:
1. A translator for RSL.
2. A centralized database, the Abstract System Semantic Model (ASSM).
3. A set of automated tools for processing information in the ASSM.

[Figure: schematic of REVS. RSL inputs pass through the translator into the Abstract System Semantic Model (ASSM); tools operating on the ASSM include a consistency checker, an inquiry capability, a simulator builder, and a graphics package, producing listings, error reports, and simulator execution results on the computer system.]

A schematic diagram of REVS is presented above. The ASSM is a relational database similar in concept to the PSL/PSA database. Automated tools for processing information in the ASSM include an interactive graphics package to aid in specifying flow paths, static checkers that check for completeness and consistency of the information used throughout the system, and an automated simulation package that generates and executes simulation models of the system. REVS is a large, complex software tool; its use is cost effective only for the specification of large, complex real-time systems.

6.3.3 Structured Analysis and Design Technique (SADT)


SADT incorporates a graphical language and a set of methods and management guidelines for using the language. The SADT language is called the Language of Structured Analysis (SA). An SADT model consists of an ordered set of SA diagrams. Each diagram is drawn on a single page, and each diagram must contain three to six nodes plus interconnecting arcs. The two basic types of SA diagrams are the activity diagram (actigram) and the data diagram (datagram).


On an actigram the nodes denote activities and the arcs specify data flow between activities, whereas on a datagram the nodes specify data objects and the arcs specify activities. Actigrams and datagrams are thus duals. In practice, activity diagrams are used more frequently than data diagrams. The data diagrams are important for at least two reasons: to indicate all activities affected by a given data object, and to check the completeness and consistency of an SADT model by constructing data diagrams from a set of activity diagrams.

[Figure: activity diagram (actigram) components. Input data enters the left of an activity node, output data leaves the right, control data enters the top, and the processor (mechanism) enters the bottom. Example: a Source Program Interpreter activity with a source program as input, computed results as output, and an interpreter as the processor.]

[Figure: data diagram (datagram) components. A generating activity produces the data, a using activity consumes it, a control activity enters the top, and the storage device enters the bottom. Example: printer spool data generated by an interpreter and consumed by an output activity, with the operating system as the control activity and a disk as the storage device.]

The two figures above illustrate the formats of actigram and datagram nodes. It is important to note that four distinct types of arcs can be attached to each node: arcs coming into the left side of a node carry inputs and arcs leaving the right side convey outputs, while arcs entering the top of a node convey control and arcs entering the bottom specify mechanism. The concepts of input, output, control, and mechanism bound the context of each node in an SA diagram. The SA language has a large number of features for indicating aspects such as bounded context, necessity, dominance, relevance, exclusion, interfaces to parent diagrams, unique decomposition, shared decomposition, and cooperation. The SADT methodology provides a notation and a set of techniques for understanding and recording complex requirements in a clear and concise manner. Major strengths of SADT include the top-down methodology inherent in decomposing high-level nodes into subordinate diagrams, and the distinction between input, output, control, and mechanism for each node. It is mainly used for large and complex projects.

6.3.4 Structured System Analysis (SSA)


SSA is used primarily in traditional data processing environments. It uses a graphical language to build models of the system. There are four basic features in SSA: data flow diagrams, data dictionaries, procedure logic representations, and data store structuring techniques. SSA data flow diagrams are similar to SADT diagrams, but they do not indicate mechanism and control, and an additional notation is used to show data stores. A data dictionary is used to define and record data elements, and processing logic representations, such as decision tables and structured English, are used to specify algorithmic processing details.

6.3.5 Gist
Gist is a textual language based on a relational model of objects and attributes. A Gist specification is a formal description of the valid behaviors of a system. A specification is composed of three parts:
1. A specification of object types and the relationships among them; this determines a set of possible states.
2. A specification of actions and demons, which define transitions between possible states.
3. A specification of constraints on states and state transitions.
Preparing a Gist specification involves several steps:
1. Identify the collection of object types to describe the objects manipulated by the process.
2. Identify individual objects within types.
3. Identify the relationships to specify the ways in which objects can be related to one another.
4. Identify types and relationships that can be defined in terms of other types and relations.
5. Identify static constraints on the types and relationships.
6. Identify the actions that can change the state of the process in some way.
7. Identify the dynamic constraints.
8. Identify the active participants in the process being specified and group them into classes.


UNIT 7 SOFTWARE DESIGN

7.1 Introduction to Software Design 7.2 Fundamental Design Concepts 7.2.1 Abstraction 7.2.2 Information Hiding 7.2.3 Structure 7.2.4 Modularity 7.2.5 Concurrency 7.2.6 Verification 7.3 Modules and Modularization Criteria 7.4 Design Notations 7.4.1 Data Flow Diagrams 7.4.2 Structure Charts 7.4.3 HIPO Diagrams 7.4.4 Procedure Templates 7.4.5 Pseudo code Design Notations 7.4.6 Structured Flowcharts 7.4.7 Structured English 7.4.8 Decision Tables 7.5 Design Techniques 7.5.1 Stepwise Refinement 7.5.2 Levels of Abstraction 7.5.3 Structured Design 7.5.4 Integrated Top-Down Development 7.5.5 Jackson Structured Programming 7.6 Detailed Design Considerations


7.1 Introduction to Software Design


The process of design involves conceiving or planning out in the mind and making a drawing, pattern, or sketch of something. In software design there are three distinct types of activities: external design, architectural design, and detailed design. Architectural and detailed design are collectively called internal design.

External design of software involves conceiving, planning out, and specifying the externally observable characteristics of a software product. These characteristics include user displays and report formats, external data sources and data sinks, the functional characteristics, performance requirements, and the high-level process structure for the product. External design begins during the analysis phase and continues into the design phase. In practice, it is not possible to perform requirements definition without doing some preliminary design. Requirements definition is concerned with specifying the external, functional, and performance requirements for a system; external design is concerned with refining those requirements and establishing the high-level structural view of the system.

Internal design involves conceiving, planning out, and specifying the internal structure and processing details of the software product. The main goals of internal design are to specify internal structure and processing details, to record design decisions and indicate why certain alternatives were chosen, to elaborate the test plan, and to provide a blueprint for implementation, testing, and maintenance activities. The work products of internal design include a specification of architectural structure, the details of algorithms and data structures, and the test plan.

Architectural design is concerned with refining the conceptual view of the system, identifying internal processing functions, decomposing high-level functions into subfunctions, defining internal data streams and data stores, and establishing the relationships and interconnections among functions, data streams, and data stores. The main goals of detailed design are to specify the algorithms that implement the functions, the concrete data structures that implement the data stores, and the actual interconnections among functions and data structures.

The test plan describes the objectives of testing, the test completion criteria, the integration plan, the particular tools and techniques to be used, and the actual test cases and expected results. Functional tests and performance tests are developed during requirements analysis and are refined during the design phase. Tests that examine the internal structure of the software product and tests that attempt to break the system are developed during detailed design and implementation.

External design and architectural design span the period from the Software Requirements Review to the Preliminary Design Review. Detailed design spans the period from the Preliminary Design Review to the Critical Design Review. This is shown in the figure below.


Phases:      Analysis                           | Design                              | Implementation
Activities:  Planning, Requirements Definition  | External, Architectural, Detailed   | Coding, Debugging, Testing
Reviews:     SRR (after analysis)               | PDR (after architectural design)    | CDR (after detailed design)

SRR - Software Requirements Review
PDR - Preliminary Design Review
CDR - Critical Design Review

7.2 Fundamental Design Concepts


Fundamental concepts of software design include Abstraction, Structure, Information hiding, Modularity, Concurrency, Verification and Design aesthetics.

7.2.1 Abstraction
Abstraction is the tool that allows us to deal with concepts apart from particular instances of those concepts. During requirements definition and design, abstraction permits separation of the conceptual aspects of a system from the implementation details. During software design, abstraction allows us to organize and channel our thought processes by postponing structural considerations and detailed algorithmic considerations until the functional characteristics, data streams, and data stores have been established. Structural considerations are then addressed prior to consideration of algorithmic details. This approach reduces the amount of complexity that must be dealt with at any particular point in the design process. Architectural design specifications are models of software in which the functional and structural attributes of the system are emphasized. During detailed design, the architectural structure is refined into implementation details. Design is thus a process of proceeding from abstract considerations to concrete representations.

Three widely used abstraction mechanisms in software design are functional abstraction, data abstraction, and control abstraction. These mechanisms allow us to control the complexity of the design process by systematically proceeding from the abstract to the concrete.

Functional abstraction involves the use of parameterized subprograms. The ability to parameterize a subprogram and to bind different parameter values on different invocations of the subprogram is a powerful abstraction mechanism. Functional abstraction can be generalized to collections of subprograms, herein called groups. Within a group, certain routines have the visible property, which allows them to be used by routines in other groups. Routines without the visible property are hidden from other groups and can only be used within the containing group. A group thus provides a functional abstraction in which the visible routines communicate with other groups and the hidden routines exist to support the visible ones.

Data abstraction involves specifying a data type or a data object by specifying the legal operations on objects; representation and manipulation details are suppressed. Several modern programming languages, including CLU and Ada, provide abstract data types. In those languages, objects of type stack, for instance, can be created and manipulated by the operations specified in the type definition. The term data encapsulation is used to denote a single instance of a data object defined in terms of the operations that can be performed on it; the term abstract data type is used to denote a declaration of a data type from which numerous instances can be created. Abstract data types are abstract in the sense that representation details of the data items and implementation details of the functions that manipulate the data items are hidden within the group that implements the abstract type. Other groups that use the abstraction do not have access to the internal details of abstract objects; objects of abstract type are thus known only by the functions that can be performed on them. During architectural design, the visible functions that operate on abstract objects are specified; specification of representation details for data objects and algorithmic details for the functions is postponed until detailed design.

Control abstraction is the third commonly used abstraction mechanism in software design. Control abstraction is used to state a desired effect without stating the exact mechanism of control. IF and WHILE statements in modern programming languages are abstractions of machine code implementations that involve conditional jump instructions. At the architectural design level, control abstraction permits specification of sequential subprograms, exception handlers, coroutines, and concurrent program units without concern for the exact details of implementation.

7.2.2 Information Hiding


Information hiding is a fundamental design concept for software. The principle of information hiding was formulated by Parnas (PAR71). When a software system is designed using the information hiding approach, each module in the system hides the internal details of its processing activities, and modules communicate only through well-defined interfaces. According to Parnas, design should begin with a list of difficult design decisions and design decisions that are likely to change. Each module is then designed to hide such a decision from the other modules. In addition to difficult and changeable design decisions, other candidates for information hiding are:
1. A data structure, its internal linkage, and the implementation details of the procedures that manipulate it.
2. The format of control blocks, such as those for queues in an operating system.
3. Character codes, ordering of character sets, and other implementation details.
4. Shifting, masking, and other machine-dependent details.


Information hiding can be used as the principal design technique for the architectural design of a system, or as a modularization criterion in conjunction with other design techniques.
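A minimal sketch of the idea in C, using an invented example of candidate 4 above: the shifting and byte-ordering details needed to read a 32-bit value from an externally defined byte stream are hidden inside one small module, so other modules depend only on its interface.

   /* byteorder.h -- the interface seen by the rest of the system                   */
   unsigned long read_u32(const unsigned char *p);   /* read a 32-bit value from 4 bytes */

   /* byteorder.c -- the machine- and format-dependent decision is hidden here;
      only this file changes if the external byte order or host conventions change. */
   unsigned long read_u32(const unsigned char *p)
   {
       /* assemble the value explicitly so the result does not depend on host endianness */
       return ((unsigned long)p[0] << 24) |
              ((unsigned long)p[1] << 16) |
              ((unsigned long)p[2] <<  8) |
               (unsigned long)p[3];
   }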

7.2.3 Structure
Structure is a fundamental characteristic of computer software. The use of structuring permits decomposition of a large system into smaller, more manageable units with well-defined relationships to the other units in the system. The most general form of system structure is the network. A computing network can be represented as a directed graph consisting of nodes and arcs. The nodes can represent processing elements that transform data, and the arcs can be used to represent data links between nodes. Alternatively, the nodes can represent data stores and the arcs data transformations. In its simplest form, a network might specify data flow and processing steps within a single subprogram, or the data flow among a collection of sequential subprograms. The most complex form of computing network is a distributed computing system in which each node represents a geographically distinct processor with private memory. The structure inside a complex processing node might consist of concurrent processes executing in parallel and communicating through some combination of shared variables and synchronous and asynchronous message passing. Inside each process, one might find functional abstraction groups, data abstraction groups, and control abstraction groups. Each group might consist of a visible specification part and a hidden body. The visible portion would provide attributes such as procedure interfaces, data types, and data objects available for use by other groups. The entire network might be a complex abstraction that provides an information utility and in turn forms a node in a more complex structure of people and machines. The process network view of software structure is shown in the figure below.

(Figure: the process network view of software structure — a network of nodes connected by links; each node contains an environment of group templates, shared objects, and processes; processing groups and utility groups are instantiated from group templates, and each group consists of a visible part, a static area, and a hidden part.)

The relationship "uses" and the complementary relationship "is used by" provide the basis for hierarchical ordering of abstractions in a software system. The hierarchical ordering relation can be represented as an acyclic directed graph with a distinguished node that represents the root entity. The root uses other entities but is not used by any entity. Subsidiary nodes in the hierarchical structure represent subordinate entities that are used by their superordinates and in turn make use of their subordinates. A hierarchical structure isolates software components and promotes ease of understanding, implementation, debugging, testing, integration, and modification of a system. In addition, programmers can be assigned, either individually or in teams, to implement the various subsystems in a hierarchical structure. In this sense, the structure of the product prescribes the structure of the implementation team. It has been suggested that the structure of a software product tends to resemble the structure of the team that develops it.


7.2.4 Modularity
There are many definitions of the term module, ranging from "a module is a function in C" to "a module is a work assignment for an individual programmer." Modular systems incorporate collections of abstractions in which each functional abstraction, each data abstraction, and each control abstraction handles a local aspect of the problem being solved. Modular systems consist of well-defined, manageable units with well-defined interfaces among the units. Some of the desirable properties of a modular system include:
1. Each processing abstraction is a well-defined subsystem that is potentially useful in other applications.
2. Each function in each abstraction has a single, well-defined purpose.
3. Each function manipulates no more than one major data structure.
4. Functions share global data selectively. It is easy to identify all routines that share a major data structure.
5. Functions that manipulate instances of abstract data types are encapsulated with the data structure being manipulated.

7.2.5 Concurrency
Software systems can be categorized as sequential and concurrent. In a sequential system, only one portion of the system is active at any given time. Concurrent systems have independent processes that can be activated simultaneously if multiple processors are available. On a single processor, concurrent processes can be interleaved in execution time. This permits implementation of time-shared, multiprogrammed and real-time systems. Problems unique to concurrent systems include deadlocks, mutual exclusion, and synchronization of processes. Deadlock is an undesirable situation that occurs when all processes in a computing system are waiting for other processes to complete some actions so that each can proceed. Mutual exclusion is necessary to ensure that multiple processes do not attempt to update the same components of the shared processing state at the same time. Synchronization is required so that concurrent processes operating at differing execution speeds can communicate at the appropriate points in their execution histories. In the past only those software engineers involved with operating systems, data-base systems, and real-time applications needed to be concerned with concurrency issues. But as the hardware becomes more sophisticated, and as computing systems become more complex, it becomes increasingly necessary for every software engineer to understand the issues and techniques of concurrent software design.
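The C fragment below (using POSIX threads; the shared counter is an invented example, not something from the text) illustrates two of these issues: the mutex provides mutual exclusion for updates to shared state, and the joins provide synchronization between the two workers and the main program.

   #include <pthread.h>
   #include <stdio.h>

   static long counter = 0;                                 /* shared processing state  */
   static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

   static void *worker(void *arg)
   {
       (void)arg;
       for (int i = 0; i < 100000; i++) {
           pthread_mutex_lock(&lock);      /* enter the critical section               */
           counter++;                      /* without the lock, updates could be lost  */
           pthread_mutex_unlock(&lock);    /* leave the critical section               */
       }
       return NULL;
   }

   int main(void)
   {
       pthread_t t1, t2;
       pthread_create(&t1, NULL, worker, NULL);
       pthread_create(&t2, NULL, worker, NULL);
       pthread_join(t1, NULL);             /* synchronization: wait for both workers   */
       pthread_join(t2, NULL);
       printf("%ld\n", counter);           /* prints 200000 when the lock is used      */
       return 0;
   }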


7.2.6 Verification
Verification is a fundamental concept in software design. Design is the bridge between customer requirements and the implementation that satisfies those requirements. A design is verifiable if it can be demonstrated that the design will result in an implementation that satisfies the customer's requirements. This is done in two steps:
1. Verification that the software requirements definition satisfies the customer's needs (verification of the requirements).
2. Verification that the design satisfies the requirements definition (verification of the design).

7.2.7 Aesthetics
Aesthetic considerations are fundamental to design. Simplicity, elegance, and clarity of purpose distinguish products of outstanding quality.

7.3 Modules and Modularization Criteria


Architectural design has the goal of producing well-structured, modular software systems. A software module is a named entity having the following characteristics:
1. Modules contain instructions, processing logic, and data structures.
2. Modules can be separately compiled and stored in a library.
3. Modules can be included in a program.
4. Module segments can be used by invoking a name and some parameters.
5. Modules can use other modules.
Examples of modules include procedures, subroutines, and functions; functional groups of related procedures, subroutines, and functions; data abstraction groups; and concurrent processes. Modularization allows the designer to decompose a system into functional units, to impose hierarchical ordering on function usage, to implement data abstractions, and to develop independently useful subsystems. There are numerous criteria that can be used to guide the modularization of a system; depending on the criteria used, different system structures may result. Modularization criteria include the conventional criterion, in which each module and its submodules correspond to a processing step in the execution sequence; the information hiding criterion, in which each module hides a difficult or changeable design decision from the other modules; the data abstraction criterion, in which each module hides the representation details of a major data structure behind functions that access and modify the data structure; levels of abstraction, in which modules and collections of modules provide a hierarchical set of increasingly complex services; and coupling-cohesion, in which a system is structured to maximize the cohesion of the elements in each module and to minimize the coupling between modules.


A software system can be modularized using a single design criterion or several criteria. In any case, structures of higher quality are produced when the designer uses well-defined modularization criteria to guide the design process.

7.3.1 Coupling and Cohesion


A fundamental goal of software design is to structure the software product so that the number and complexity of interconnections between modules is minimized. This goal is pursued through the concepts of coupling and cohesion, which were first described by Stevens, Constantine, and Myers as part of the structured design approach to software design. The strength of coupling between two modules is influenced by the complexity of the interface, the type of connection, and the type of communication. Communication between modules involves passing of data, passing of elements of control, and modification of one module's code by another module. The degree of coupling is lowest for data communication, higher for control communication, and highest for modules that modify other modules. Coupling between modules can be ranked on a scale from strongest (least desirable) to weakest (most desirable) as follows:
1. Content coupling
2. Common coupling
3. Control coupling
4. Stamp coupling
5. Data coupling
Content coupling occurs when one module modifies local data values or instructions in another module. Content coupling can occur in assembly language programs. In common coupling, modules are bound together by global data structures; common coupling results when all routines in a FORTRAN program reference a single common data block. Control coupling involves passing control flags between modules so that one module controls the sequence of processing steps in another module. Stamp coupling is similar to common coupling, except that global data items are shared selectively among the routines that require the data; named common blocks in FORTRAN and Ada support stamp coupling. Data coupling involves the use of parameter lists to pass data items between routines. The most desirable form of coupling between modules is a combination of stamp and data coupling.

The internal cohesion of a module is measured in terms of the strength of binding of elements within the module. Cohesion of elements occurs on a scale from weakest (least desirable) to strongest (most desirable) in the following order:


1. Coincidental cohesion
2. Logical cohesion
3. Temporal cohesion
4. Communicational cohesion
5. Sequential cohesion
6. Functional cohesion
7. Informational cohesion

Coincidental cohesion occurs when the elements within a module have no apparent relationship to one another. This results when a large, monolithic program is modularized by arbitrarily segmenting the program into several small modules, or when a module is created from a group of unrelated instructions that appear several times in other modules. Logical cohesion implies some relationship among the elements of the module, as in a module that performs all input and output operations or a module that edits all data. A logically bound module often combines several related functions in a complex and interrelated fashion; this results in the passing of control parameters, and in shared and tricky code that is difficult to understand and modify. Math library routines often exhibit logical cohesion. Modules with temporal cohesion exhibit many of the same disadvantages as logically bound modules; however, they are higher on the scale of binding because all elements are executed at one time, and no parameters or logic are required to determine which elements to execute. An example of temporal cohesion is a module that performs program initialization.

The elements of a module possessing communicational cohesion refer to the same set of input and/or output data. Communicational binding is higher on the binding scale than temporal binding because the elements are executed at one time and also refer to the same data. Sequential cohesion of elements occurs when the output of one element is the input for the next element. Sequential cohesion is high on the binding scale because the module structure usually bears a close resemblance to the problem structure. Functional cohesion is a strong, and hence desirable, type of binding of elements in a module because all elements are related to the performance of a single function; examples of functionally bound modules are "compute square root" and "obtain random number." Informational cohesion of elements in a module occurs when the module contains a complex data structure and several routines to manipulate the data structure; each routine in the module exhibits functional binding. Informational cohesion is the concrete realization of data abstraction. Informational cohesion is similar to communicational cohesion in that both refer to a single data entity; they differ in that communicational cohesion implies that all code in the module is executed on each invocation of the module, whereas informational cohesion requires that only one functionally cohesive segment of the module be executed on each invocation.
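The contrast between the two scales can be made concrete with a small, invented C example: the first routine is only logically cohesive and forces control coupling on its callers (a mode flag steers its internal logic), while the second pair of routines is functionally cohesive and data-coupled, communicating through parameters alone.

   /* Control coupling and logical cohesion: the caller steers the logic with a flag. */
   double report_value(const double data[], int n, int mode)
   {
       double sum = 0.0;
       for (int i = 0; i < n; i++)
           sum += data[i];
       return (mode == 0) ? sum                      /* mode 0: return the total   */
                          : (n > 0 ? sum / n : 0.0); /* mode 1: return the average */
   }

   /* Data coupling and functional cohesion: each routine performs one function. */
   double total(const double data[], int n)
   {
       double sum = 0.0;
       for (int i = 0; i < n; i++)
           sum += data[i];
       return sum;
   }

   double mean(const double data[], int n)
   {
       return (n > 0) ? total(data, n) / n : 0.0;
   }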


7.4 Design Notations
In software design, the representation schemes used are of fundamental importance. Good notations can clarify the interrelationships and interactions of interest, while poor notations can complicate and interfere with good design practice. At least three levels of design specifications exist: external design specifications, which describe the external characteristics of a software system; architectural design specifications which describe the structure of the system; and detailed design specifications, which describe control flow, data representation, and other algorithmic details within the modules. Notations used to specify the external characteristics, architectural structure and processing details of a software system include data flow diagrams, structure charts, HIPO diagrams, procedure specifications, pseudo code, structured English, and structured flowcharts.

7.4.1 Data Flow Diagrams


Data flow diagrams are directed graphs in which the nodes specify processing activities and the arcs specify data items transmitted between processing nodes. Data flow diagrams can be used at any desired level of abstraction. A data flow diagram might represent data flow between individual statements or blocks of statements in a routine, data flow between sequential routines, data flow between concurrent processes, or data flow in a distributed computing system where each node represents a geographically remote processing unit. Data flow diagrams can be expressed using informal notation, as shown in the figure below.

(Figure: an informal data flow diagram — inputs A and B pass through transforms A and B, are combined with reference data X and operator inputs Y and Z by computations C, D, and E, and produce outputs D and E.)

Or special symbols can be used to denote processing nodes, data sources, data sinks, and data stores as shown below.


(Figure: an order-processing data flow diagram drawn with special symbols — customers act as the data source for customer requests; a Verify Order processing node consults data stores for inventory details and credit status; valid orders flow to an Assemble Orders node, which uses a pending-orders data store and sends batched orders to the warehouse, the data sink.)

Data flow diagrams are excellent mechanisms for communicating with customers during requirement analysis; they are also widely used for representation of external and top-level internal design specifications.

7.4.2 Structure Charts


Structure charts are used during architectural design to document the hierarchical structure, parameters, and interconnections in a system. A structure chart differs from a flowchart in two ways: a structure chart has no decision boxes, and the sequential ordering of tasks inherent in a flowchart can be suppressed in a structure chart. The structure of a hierarchical system can be specified using a structure chart, as shown below. The chart can be augmented with a module-by-module specification of the input and output parameters, as well as the input and output parameter attributes. During architectural design the parameter attributes are abstract; they are refined into concrete representations during detailed design.


(Figure: a structure chart for a hierarchy of modules A through E connected by numbered interfaces 1 through 8, with an accompanying table listing the input data and output data for each numbered interface.)

7.4.3 HIPO Diagrams


HIPO (Hierarchy plus Input-Process-Output) diagrams were developed at IBM as a design representation scheme for top-down software development and as external documentation aids for released products. A set of HIPO diagrams contains a visual table of contents, a set of overview diagrams, and a set of detail diagrams. The visual table of contents is a directory to the set of diagrams in the package; it consists of a tree-structured directory, a summary of the contents of each overview diagram, and a legend of symbol definitions, as shown in the figure below. The visual table of contents is a stylized structure chart.


(Figure: a visual table of contents for a set of HIPO diagrams — a legend of symbol definitions, a tree-structured directory that identifies each diagram by content and diagram number, and a contents list summarizing diagrams 1 through 12.)

7.4.4 Procedure Templates


The format of a procedure interface specification is shown below.

Level 1:   PROCEDURE NAME:
           PART OF: (subsystem name and number)
           CALLED BY:
           PURPOSE:
           DESIGNER/DATE:

Level 2:   PARAMETERS: (names, modes, attributes, purposes)
           INPUT ASSERTION: (preconditions)
           OUTPUT ASSERTION: (postconditions)
           GLOBALS: (names, modes, attributes, purposes, shared with)
           SIDE EFFECTS:

Level 3:   LOCAL DATA STRUCTURES: (names, attributes, purposes)
           EXCEPTIONS: (conditions, responses)
           TIMING CONSTRAINTS:
           OTHER LIMITATIONS:

Level 4:   PROCEDURE BODY: (pseudocode, structured English, structured flowchart, or decision table)

In the early stages of architectural design, only the information at level 1 need be supplied. As design progresses, the information at levels 2, 3, and 4 can be included in successive steps. The term side effect in this template means any effect a procedure can exert on the processing environment that is not evident from the procedure name and parameters.


Modifications to global variables, reading or writing a file, opening or closing a file, or calling a procedure that in turn exhibits side effects are all examples of side effects. Only the information at level 1 is provided during initial architectural design, because detailed specification of side effects, exception handling, processing algorithms, and concrete data representations would sidetrack the designer into inappropriate levels of detail too soon. During detailed design, the processing algorithms and data structures can be specified using structured flowcharts, pseudocode, or structured English. The syntax used to specify procedure interfaces during detailed design may, in fact, be the implementation language; in this case, the procedure interface and pseudocode procedure body can be expanded directly into source code during product implementation. Procedure interface specifications are effective notations for architectural design when used in combination with structure charts and data flow diagrams. They also provide a natural transition from architectural to detailed design, and from detailed design to implementation.

7.4.5 Pseudo code


Pseudocode notation can be used in both the architectural and detailed design phases. Pseudocode can be used at any desired level of abstraction. Using pseudocode, the designer describes system characteristics using short, concise, English-language phrases that are structured by keywords such as If-Then-Else, While-Do, and End. Keywords and indentation describe the flow of control, while the English phrases describe processing actions. Using the top-down design strategy, each English phrase is expanded into more detailed pseudocode until the design specification reaches the level of detail of the implementation language. The use of pseudocode for detailed design specification is illustrated below.

   INITIALIZE tables and counters; OPEN files
   READ the first text record
   WHILE there are more text records DO
      WHILE there are more words in the text record DO
         EXTRACT the next word
         SEARCH the word table for the extracted word
         IF the extracted word is found THEN
            INCREMENT the extracted word's occurrence count
         ELSE
            INSERT the extracted word into the word table
         ENDIF
         INCREMENT the words-processed counter
      ENDWHILE at the end of the text record
   ENDWHILE when all text records have been processed
   PRINT the word table and the words-processed counter
   CLOSE files
   TERMINATE the program
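To show the last step of that expansion, a rough C rendering of the same design follows; it is only a sketch, assuming whitespace-delimited words, fixed-size tables, and input taken from standard input rather than an explicitly opened file.

   #include <stdio.h>
   #include <string.h>

   #define MAX_WORDS 1000
   #define MAX_LEN   64

   int main(void)
   {
       char table[MAX_WORDS][MAX_LEN];           /* the word table                 */
       int  count[MAX_WORDS];                    /* occurrence counts              */
       int  entries = 0, processed = 0;
       char word[MAX_LEN];

       while (scanf("%63s", word) == 1) {        /* EXTRACT the next word          */
           int i;
           for (i = 0; i < entries; i++)         /* SEARCH the word table          */
               if (strcmp(table[i], word) == 0)
                   break;
           if (i < entries)
               count[i]++;                       /* INCREMENT the occurrence count */
           else if (entries < MAX_WORDS) {
               strcpy(table[entries], word);     /* INSERT into the word table     */
               count[entries++] = 1;
           }
           processed++;                          /* INCREMENT words processed      */
       }
       for (int i = 0; i < entries; i++)         /* PRINT the word table           */
           printf("%-20s %d\n", table[i], count[i]);
       printf("words processed: %d\n", processed);
       return 0;
   }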


7.4.6 Structured Flowcharts


Flowcharts are a traditional means for specifying and documenting algorithmic details in a software system. Flowcharts incorporate rectangular boxes for actions, diamond-shaped boxes for decisions, directed arcs for specifying interconnections between boxes, and a variety of specially shaped symbols to denote input, output, data stores, and so on. Structured flowcharts differ from traditional flowcharts in that structured flowcharts are restricted to compositions of certain basic forms. A typical set of basic forms is shown below.

(Figure: the basic structured flowchart forms — sequencing (S1; S2;), selection (IF P THEN S1 ELSE S2;), pretest iteration (WHILE P DO S;), and posttest iteration (REPEAT S UNTIL P;), each drawn with a single entry at the top and a single exit at the bottom.)

The basic forms are characterized by single entry into and single exit from the form. Thus, forms can be nested within forms to any arbitrary depth, and in any arbitrary fashion, so long as the single entry, single exit property is preserved.


7.4.7 Structured English


Structured English can be used to provide a step-by-step specification for an algorithm. Like pseudo code, structured English can be used at any desired level of detail. An example of using structured English for software specification is provided in the figure below.

(Figure: an example of structured English used for software specification, covering processing steps S1 through S8 and decisions T1 through T4.)


7.4.8 Decision Tables


Decision tables can be used to specify complex decision logic in a high-level software specification. They are also useful for specifying algorithmic logic during detailed design. At this level of usage, decision tables can be specified and translated into source code logic.
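One common translation is table-driven code, sketched below in C with an invented two-condition decision table: each row corresponds to one column of the decision table (a combination of condition entries and the action to perform), and the decision logic reduces to a scan of the table.

   #include <stdio.h>
   #include <stdbool.h>

   /* One column of the decision table: condition entries plus the selected action. */
   struct rule {
       bool order_valid;
       bool credit_ok;
       void (*action)(void);
   };

   static void ship(void)   { puts("ship the order"); }
   static void hold(void)   { puts("hold pending credit review"); }
   static void reject(void) { puts("reject the order"); }

   static const struct rule table[] = {
       { true,  true,  ship   },
       { true,  false, hold   },
       { false, true,  reject },
       { false, false, reject },
   };

   void decide(bool order_valid, bool credit_ok)
   {
       for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
           if (table[i].order_valid == order_valid && table[i].credit_ok == credit_ok) {
               table[i].action();        /* perform the action of the matching column */
               return;
           }
   }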

7.5 Design Techniques
The design process involves developing a conceptual view of the system, establishing system structure, identifying data streams and data stores, decomposing high-level functions into subfunctions, establishing relationships and interconnections among components, developing concrete data representations, and specifying algorithmic details. During the past few years, several techniques have been developed for software design. These techniques include stepwise refinement, levels of abstraction, structured design, integrated top-down development, and the Jackson design method; they provide guidelines and viewpoints for the design process. Design techniques are typically based on the top-down and/or bottom-up design strategies. Using the top-down approach, attention is first focused on global aspects of the overall system. As the design progresses, the system is decomposed into subsystems and more consideration is given to specific issues; backtracking is fundamental to top-down design. In the bottom-up approach to software design, the designer first attempts to identify a set of primitive objects, actions, and relationships that will provide a basis for problem solution. Higher-level concepts are then formulated in terms of the primitives. The bottom-up strategy requires the designer to combine features provided by the implementation language into more sophisticated entities. These entities are combined in turn until a set of functions, data structures, and interconnections has been constructed to solve the problem using elements available in the actual programming environment.

7.5.1 Stepwise Refinement


Stepwise refinement is a top-down technique for decomposing a system from high-level specifications into more elementary levels. Stepwise refinement is also known as stepwise program development and successive refinement. Stepwise refinement involves the following activities:
1. Decomposing design decisions to elementary levels.
2. Isolating design aspects that are not truly interdependent.
3. Postponing decisions concerning representation details as long as possible.
4. Carefully demonstrating that each successive step in the refinement process is a faithful expansion of previous steps.


Incremental addition of detail at each step in the refinement process postpones design decisions as long as possible and allows the designer to argue convincingly that the resulting software product is consistent with the design specifications. Stepwise refinement begins with the specifications derived during requirements analysis and external design. The problem is first decomposed into a few major processing steps that will demonstrably solve the problem. The process is then repeated for each part of the system until it is decomposed in sufficient detail that implementation in an executable programming language is straightforward. No explicit representation technique is prescribed for stepwise refinement; however, use of structure charts, procedure specifications, and pseudocode is consistent with successive refinement. The early stages of refinement are typically stated in an informal pseudocode that becomes more precise as the refinement proceeds, and the resulting design is quite close to, and may actually incorporate statements from, the implementation language. Successive refinement can also be used to perform detailed design of the individual modules in a software product. The major benefits of stepwise refinement as a design technique are:
1. Top-down decomposition.
2. Incremental addition of detail.
3. Postponement of design decisions.
4. Continual verification of consistency.
Using stepwise refinement, a problem is segmented into small, manageable pieces, and the amount of detail that must be dealt with at any particular time is minimized. In this manner, the designer's thought processes are channeled to the proper concerns at the proper time. However, successive refinement is not so much a design technique as a general approach to problem solving. Success with this method is highly dependent on having a clear conceptual understanding of the desired solution and on the ability of the designer.
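A very small, invented illustration of the idea: the top-level step is stated first, refined into pseudocode, and only the final refinement is committed to C.

   /* Level 1:  produce the sales report
      Level 2:  read the records; compute the total; print the summary
      Level 3:  "compute the total" is now concrete enough to express directly in C  */
   double compute_total(const double amounts[], int n)
   {
       double total = 0.0;
       for (int i = 0; i < n; i++)    /* each record contributes one amount */
           total += amounts[i];
       return total;
   }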

7.5.2 Levels of Abstraction


Levels of abstraction is a bottom-up design technique, first demonstrated in the design of an operating system as a layering of hierarchical levels starting at level 0 (processor allocation, real-time clock interrupts) and building up to the level of processing-independent user programs. Each level of abstraction is composed of a group of related functions, some of which are externally visible and some of which are internal to the level. Internal functions are hidden from other levels; they can be invoked only by functions on the same level. The internal functions are used to perform tasks common to the work being performed on that level of abstraction. Functions on higher levels cannot be used by functions on lower levels; function usage establishes the levels of abstraction. Each level of abstraction performs a set of services for the functions on the next higher level of abstraction. Thus, a file manipulation system might be layered as a set of routines to manipulate fields (bit vectors, on level 0), a set of routines to manipulate records (sets of fields, on level 1), and a set of routines to manipulate files (sets of records, on level 2). Each level of abstraction has exclusive use of certain resources that other levels are not permitted to access. Higher-level functions can invoke functions on lower levels, but lower-level functions cannot invoke or in any way make use of higher-level functions.
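A sketch of such a layering in C, with invented routine names: level 1 (records) is built only from level-0 (field) services, and level 2 (files) is built only from level-1 services; no level calls upward.

   /* Level 0: field manipulation (bit vectors)                                   */
   unsigned get_field(unsigned word, int pos, int width)
   {
       return (word >> pos) & ((1u << width) - 1);
   }

   /* Level 1: record manipulation, built only from level-0 services              */
   struct record { unsigned words[4]; };

   unsigned record_key(const struct record *r)
   {
       return get_field(r->words[0], 0, 16);        /* the key is the low 16 bits  */
   }

   /* Level 2: file manipulation, built only from level-1 services                */
   int find_record(const struct record file[], int n, unsigned key)
   {
       for (int i = 0; i < n; i++)
           if (record_key(&file[i]) == key)
               return i;                            /* index of the matching record */
       return -1;                                   /* not found                    */
   }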

7.5.3 Structured Design


Structured design is a top-down technique for the architectural design of software systems. The basic approach in structured design is the systematic conversion of data flow diagrams into structure charts, with design heuristics such as coupling and cohesion used to guide the design process. The first step in structured design is review and refinement of the data flow diagrams developed during requirements definition and external design. The second step is to determine whether the system is transform-centered or transaction-driven, and to derive a high-level structure chart based on this determination. In a transform-centered system, the data flow diagram contains Input, Processing, and Output segments that are converted into Input, Processing, and Output subsystems in the structure chart. The boundaries between the three major subsystems in a transform-centered system are identified by determining the point of most abstract input data and the point of most abstract output data in the data flow diagram: the point of most abstract input data is the point where the input stream can no longer be identified, and the point of most abstract output data is the point where components of the output data stream can first be identified. The third step in structured design is decomposition of each subsystem using guidelines such as coupling, cohesion, information hiding, levels of abstraction, data abstraction, and other decomposition criteria. Decomposition of processing functions into modules should be continued until each module contains no subset of elements that can be used alone, and until each module is small enough that its entire implementation can be grasped at once. In the initial design phase it is better to subdivide too finely, because small modules can easily be recombined at a later time, whereas functions that are useful in several places may not be identified if the subdivision is too coarse. Detailed design of the individual modules in a system can be accomplished using the successive refinement technique and detailed design notations such as HIPO diagrams, procedure templates, pseudocode, and structured English. A transform-centered system is characterized by similar processing steps for each data item processed by the Input, Process, and Output subsystems. In a transaction-driven system, one of several possible paths through the data flow diagram is traversed by each transaction; the path traversed is typically determined by user input commands.


The primary benefits of structured design are:
1. The use of data flow diagrams focuses attention on the problem structure and follows naturally from requirements analysis and external design.
2. The method of translating data flow diagrams into structure charts provides a means of initiating architectural design in a systematic manner.
3. Data dictionaries can be used in conjunction with structure charts to specify data attributes and data relationships.
4. Design heuristics such as coupling and cohesion, and scope of effect and scope of control, provide criteria for systematic development of architectural structure and for comparison of alternative design structures.
5. Detailed design techniques and notations such as successive refinement, HIPO diagrams, procedure specification forms, and pseudocode can be used to perform detailed design of the individual modules.
The primary strength of structured design is the provision of a systematic method for converting data flow diagrams into top-level structure charts. However, the method does not provide much guidance for decomposing top-level structure charts into detailed structures. The primary disadvantage of structured design is that the technique produces systems structured as sequences of processing steps; decomposing a system into processing steps is inconsistent with the design criterion of information hiding, and disregard for information hiding and separation of concerns may result in a system that is difficult to modify.

7.5.4 Integrated Top-Down Development


Integrated top-down development integrates design, implementation, and testing. Using integrated top-down development, design proceeds top-down from the highest-level routines, which have the primary function of coordinating and sequencing the lower-level routines. Lower-level routines may be implementations of elementary functions, or they may in turn invoke more primitive routines. There is thus a hierarchical structure to a top-down system in which routines can invoke lower-level routines but cannot invoke routines on a higher level. The integrated top-down design technique provides an orderly and systematic framework for software development: design and coding are integrated, test cases are developed systematically, and each routine is tested in the actual operating environment. A further advantage of the integrated top-down approach is the distribution of system integration across the project; the interfaces are established, coded, and tested as the design progresses. The primary disadvantage of the integrated top-down approach is that early high-level design decisions may have to be reconsidered when the design progresses to lower levels, which may require design backtracking and considerable rewriting of code. There are other disadvantages as well: the system may become a very expensive test harness for newly added procedures, and it may not be possible to find high-level test data that exercises newly added procedures in the desired manner, so it may be necessary to first write and test some low-level procedures before proceeding with top-down development.


7.5.5 Jackson Structured Programming


Jackson Structured Programming was developed by Michael Jackson as a systematic technique for mapping the structure of a problem into a program structure to solve the problem. The mapping is accomplished in three steps:
1. The problem is modeled by specifying the input and output data structures using tree-structured diagrams.
2. The input-output model is converted into a structural model for the program by identifying points of correspondence between nodes in the input and output trees.
3. The structural model of the program is expanded into a detailed design model that contains the operations needed to solve the problem.
Input and output structures are specified using a graphical notation for data hierarchy, sequences of data, repetition of data items, and alternate data items; the formats of input and output data structures can also be specified using regular grammars. The second step of the Jackson method converts the input and output structures into a structural model of the program. This is accomplished by identifying points of commonality in the input and output structures and combining the two structures into a program structure that maps inputs into outputs. Labels on the data items in the resulting structure are converted to the names of processes that perform the required processing of the data items. The third step expands the structural model of the program into a detailed design model containing the operations needed to solve the problem. This step is performed in three substeps:
1. A list of the operations required to perform the processing steps is developed.
2. The operations are associated with the program structure.
3. The program structure and operations are expressed in a notation called schematic logic, which is a stylized pseudocode; control flow for selection and iteration is specified in this step.
Difficulties encountered in applying the Jackson method include structure clashes and the need for look-ahead. Structure clashes occur when points of commonality between the input and output data structures cannot be identified. Structure clashes can be resolved by a technique called program inversion, in which a consumer routine calls on a producer routine to deliver the next required data item; the producer delivers values to the consumer on demand, so that, to the consumer, it appears as if the values are being produced from a sequential file of values. Implementation of program inversion can be accomplished using a coroutine structure for the producer and consumer.

7.6 Detailed Design Consideration


Detailed design is concerned with specifying algorithmic details, concrete data representations, interconnections among functions and data structures, and packaging of the software product. Detailed design is strongly influenced by the implementation language, but it is not the same as implementation; detailed design is more concerned with semantic issues and less concerned with syntactic details than is implementation. The starting point for detailed design is an architectural structure for which algorithmic details and concrete data representations are to be provided. While there is a strong temptation to proceed directly from architectural structure to implementation, there are a number of advantages to be gained from the intermediate level of detail provided by detailed design. Implementation addresses issues of programming language syntax, coding style, internal documentation, and insertion of testing and debugging probes into the code. The difficulties encountered during implementation are almost always due to the fact that the implementer is simultaneously performing analysis, design, and coding activities while attempting to express the final result in an implementation language. Detailed design permits design of algorithms and data representations at a higher level of abstraction and notation than the implementation language provides. Detailed design separates the activity of low-level design from implementation, just as the analysis and design activities isolate consideration of what is desired from the structure that will achieve the desired result. Detailed design activities also expose flaws in the architectural structure, and the ensuing modifications are eased by having fewer details to manipulate than would be present in the implementation language. Detailed design also provides a vehicle for design inspections, structured walkthroughs, and the Critical Design Review. Notations for detailed design include HIPO diagrams, pseudocode, structured English, structured flowcharts, data structure diagrams, and physical layouts for data representations. The detailed design representation may utilize keywords from the implementation language to specify control flow, and declaration statements from the implementation language to specify data representation. Product packaging is an important aspect of detailed design. Packaging is concerned with the manner in which global data items are selectively shared among program units, specification of static data areas, packaging of program units as functions and subroutines, specification of parameter-passing mechanisms, file structures and file access techniques, and the structure of compilation units and load modules. Detailed design should be carried to a level where each statement in the design notation will result in a few statements in the implementation language. Given the architectural and detailed design specifications, any programmer familiar with the implementation language should be able to implement the software product.


UNIT 8

IMPLEMENTATION ISSUES

8.1 Introduction
8.2 Structured Coding Techniques
    8.2.1 Single Entry, Single Exit Constructs
    8.2.2 Efficiency Consideration
    8.2.3 Violations of Single Entry, Single Exit
    8.2.4 Data Encapsulation
    8.2.5 The Go to Statement
    8.2.6 Recursion
8.3 Coding Styles
8.4 Standard and Guidelines
8.5 Documentation Guidelines
    8.5.1 Supporting Documents
    8.5.2 Program Unit Notebooks
    8.5.3 Internal Documentation

8.1 Introduction

The implementation phase of software development is concerned with translating design specifications into source code. The primary goal of implementation is to write source code and internal documentation so that conformance of the code to its specification can be easily verified, and so that debugging, testing, and modification are eased. This goal can be achieved by making the source code as clear and straightforward as possible. Source-code clarity is enhanced by structured coding techniques, by good coding style, by appropriate supporting documents, by good internal comments, and by features provided in modern programming languages. Production of high-quality software requires that the programming team have a designated leader, a well-defined organizational structure, and thorough understanding of the duties and responsibilities of each team member.


The implementation team should be provided with a well-defined set of software requirements, an architectural design specification, and a detailed design description. Also each team member must understand the objective of implementation.

8.2 Structured Coding Techniques

The goal of structured coding is to linearize the flow of control through a program so that the execution sequence follows the sequence in which the code is written. The dynamic structure of a program as it executes then resembles the static structure of the written text. This enhances the readability of code, which eases understanding, debugging, testing, documentation, and modification of programs. It also facilitates formal verification of programs. Linear flow of control can be achieved by restricting the set of allowed program constructs to single entry, single exit constructs. Implementation issues are discussed in the following subsections.

8.2.1 Single Entry, Single Exit Constructs


Sequencing, selection among alternative actions, and iteration form a sufficient set of constructs for describing the control flow of every conceivable algorithm. A sufficient set of single entry, single exit constructs for specifying control flow in algorithms is:

   Sequencing:  S1; S2; S3;
   Selection:   if B then S1 else S2;
   Iteration:   while B do S;

The single entry and single exit nature of these constructs is illustrated in the figure below:

(Figure: flowchart forms of the three constructs — a sequence of boxes S1, S2, S3; a selection in which test B chooses between S1 and S2; and an iteration in which test B controls repeated execution of S.)

The single entry, single exit property permits nesting of constructs within one another in any desired fashion; each statement Si might be an assignment statement, a procedure call, an if_then_else, or a while_do. The most important aspect of the single entry, single exit property is that linearity of control flow is retained, even with arbitrarily deep nesting of constructs.


The set of structured constructs selected for use in any particular application is primarily a matter of notational convenience; however, the selected construct should be conceptually simple and widely applicable in practice.
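A short, invented C fragment showing the constructs nested within one another while control still enters each construct at the top and leaves it at the bottom:

   #include <stdio.h>

   int main(void)
   {
       int value, total = 0;
       while (scanf("%d", &value) == 1) {          /* iteration: one entry, one exit   */
           if (value >= 0) {                       /* selection nested inside the loop */
               total += value;                     /* sequencing inside the selection  */
               printf("total so far: %d\n", total);
           } else {
               printf("negative value ignored\n");
           }
       }                                           /* the only exit from the loop      */
       return 0;
   }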

8.2.2 Efficiency Consideration


A recurring criticism of strict adherence to single entry, single exit constructs is that it results in inefficient use of memory space and execution time; the need for auxiliary variables, repeated segments of code, and excessive subprogram calls is often cited. A strategy for using single entry, single exit constructs while coping with efficiency considerations is presented below. The following example illustrates the need for auxiliary variables in a search loop on a linked list:

   P := L;
   while (P /= null) and (P.VAL /= X) loop
      P := P.LINK;
   end loop;
   if (P /= null) then
      -- code for X in the list
   else
      -- code for X not in the list
   end if;

The purpose of the loop is to search a linked list L for the value X.

(Figure: the list structure — L points to a chain of cells, each containing a VAL field and a LINK field, with the last LINK equal to null; an empty list is L = null.)

On loop exit, P will contain the address of the memory cell that contains X, provided X is in the list. If X is not in the list, the outcome depends on the programming language being used, and often on the local implementation of the language. Many language translators evaluate all relational expressions in a Boolean expression before performing the Boolean operation. In this example, if X is not in the list, P is assigned the null value (P := P.LINK) when the end of the list is encountered. Control is then transferred to the start of the loop, and the loop test evaluates the relational expressions (P /= null) and (P.VAL /= X) before applying the AND operation; in this situation P.VAL is undefined because P is null. Some language translators avoid this problem by evaluating relational expressions in left-to-right fashion: if the leftmost operand of an AND is false, the entire AND must be false, and the rightmost operand need not be evaluated. If the language translator evaluates relational expressions in left-to-right fashion, the example will execute correctly; otherwise an exception will occur. If a language standard does not specify the way in which Boolean expressions are to be evaluated, different translators handle the situation according to the viewpoints of different language implementers.

Some important points about structured coding should be noted. Structured coding practices give no guidance in the design of data representations or of the methods used to access data; structured coding is a technique for linearizing control flow. Careful thought about data representation and data access techniques will ease awkward situations when they arise, although it may not be possible to design a data structure that will satisfy all requirements in an optimal manner. A major efficiency concern in structured coding is the need either to repeat segments of code or to place a repeated code segment in a subprogram and call it repeatedly; otherwise, the single entry, single exit rule must be violated. It is sometimes true that efficiency considerations will require a code segment to be implemented in a convoluted manner. In order to address efficiency considerations when using single entry, single exit constructs, the following strategy is advised: first, implement the algorithm using single entry, single exit constructs, placing repeated code segments in subprograms and invoking them as needed. When the system or a major subsystem becomes operational, measure the percentage of execution time spent in various regions of the code and identify the few code segments that are candidates for performance enhancement. Within those code segments, it may be necessary to convolute the code; in that case, include an equivalent structured form of the code in the internal documentation to aid understanding of an otherwise obscure code segment. In this manner, critical regions of code can be collapsed together for efficiency purposes, and the remaining code will retain a clear and understandable structure.
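Returning to the linked-list search above: in languages that guarantee short-circuit evaluation the compound test is safe, as the C sketch below shows (the list type is invented for the example) — && evaluates its left operand first, so p->val is never touched when p is NULL.

   #include <stddef.h>

   struct node { int val; struct node *link; };

   /* Search list l for the value x; return the cell containing x, or NULL. */
   struct node *find(struct node *l, int x)
   {
       struct node *p = l;
       while (p != NULL && p->val != x)   /* && short-circuits: p->val is not evaluated */
           p = p->link;                   /* when p is NULL                             */
       return p;
   }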

8.2.3 Violations of Single Entry, Single Exit


The goal of structured coding is to improve the clarity and readability of source program. Clarity is greatly enhanced by use of single entry, single exit constructs. There are commonly occurring situations which warrant violations of strict adherence to single entry, single exit and yet are consistent with the goal of improved clarity. Three commonly occurring situations are multiple loop exits, error handling and data encapsulation. It is not unusual to have several different conditions for termination of a loop. The best procedure for dealing with multiple loop exits is to redesign the algorithm so that multiple loop exits are not required. If this is not possible, one technique for achieving multiple loop exits is to introduce a Boolean flag variable for each condition and test the flags on each iteration of the


loop. There are three difficulties with this approach: first, the need for additional variables; second, the additional tests required on loop exit to determine which condition caused the exit; and third, the requirement that the loop body be terminated immediately on occurrence of a given condition and not at the start of the next iteration, as would occur on testing a loop flag.
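Both styles are sketched below in C on an invented search problem — find the index of the first negative element: the first version keeps a single exit at the cost of a flag and extra tests, while the second uses break, a controlled violation that exits at the point of discovery.

   #include <stdbool.h>

   /* Single exit, using a flag: the extra variable and re-test are the cost. */
   int first_negative_flag(const int a[], int n)
   {
       bool found = false;
       int  i = 0, result = -1;
       while (i < n && !found) {
           if (a[i] < 0) {
               found = true;
               result = i;
           }
           i++;
       }
       return result;
   }

   /* Controlled violation of single exit: break leaves the loop immediately. */
   int first_negative_break(const int a[], int n)
   {
       int result = -1;
       for (int i = 0; i < n; i++)
           if (a[i] < 0) {
               result = i;
               break;          /* exit at the point of discovery; no flag needed */
           }
       return result;
   }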

8.2.4 Data Encapsulation


Data encapsulation involves packaging a data structure and its access routines in a single module. The data structure is manipulated only by the access routines; other routines use the data structure by calling the appropriate access routines. A data structure is thus defined by the operations that can be performed on it, and routines that use the structure need not know the details of data representation or data manipulation. Data encapsulation refers to a single instance of an abstract data object; data abstraction is used to describe a design principle that incorporates both data encapsulation and abstract data types. Five basic features that must be provided by a data encapsulation facility are:
1. Multiple entry and return points
2. Local, hidden source code and data representation
3. Static data objects
4. An initialization section
5. Exception handling
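A C sketch of data encapsulation in this sense — a single hidden instance with static data, an initialization entry, and several access entries; error handling is reduced to a return code, since C has no built-in exception mechanism, and all of the names are invented for the example.

   /* id_pool.c -- one encapsulated data object: a pool of identifier numbers         */
   #include <stdbool.h>

   #define POOL_SIZE 100

   static int  next_id;                  /* static, hidden data: retained between calls */
   static bool initialized = false;

   void pool_init(void)                  /* initialization section                      */
   {
       next_id = 1;
       initialized = true;
   }

   int pool_allocate(void)               /* access routine; -1 signals an error         */
   {
       if (!initialized || next_id > POOL_SIZE)
           return -1;                    /* exception handling via return code          */
       return next_id++;
   }

   int pool_remaining(void)              /* a second entry point into the hidden object */
   {
       return initialized ? POOL_SIZE - next_id + 1 : 0;
   }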

8.2.5 The Goto Statement


The goto statement provides unconditional transfer of control and thus allows violation of the single entry, single exit condition of structured coding. Goto statements are sometimes used to achieve a desired result in a manner that violates single entry, single exit and yet maintains locality and linearity of control flow. The goto statement can also be used to simulate single entry, single exit constructs in primitive programming languages, as illustrated below for if-then-else and while-do.

   If B then S1 else S2;

         IF (.NOT. (B)) GO TO 10
         S1
         GO TO 20
   10    S2
   20    CONTINUE

   While B do S;

   10    IF (.NOT. (B)) GO TO 20
         S
         GO TO 10
   20    CONTINUE


Although the goto statement can often be used with positive effect on program clarity, it is an extremely low-level construct and is easily misused. A serious problem in using goto statements is the insidious mind-set that results from thinking in terms of gotos. For instance, code for a four-way selection might be implemented using gotos in the following manner:

   If (condition1) then goto L1;
   If (condition2) then goto L2;
   If (condition3) then goto L3;
   Goto L4;
   L1: --- code for condition1
       Goto L5;
   L2: --- code for condition2
       Goto L5;
   L3: --- code for condition3
       Goto L5;
   L4: --- code for all other conditions
   L5: continue

In primitive programming languages, this goto version may be the best structure that can be achieved. A preferable structure is the more compact form:

   If (condition1) then
      --- code for condition1;
   Else if (condition2) then
      --- code for condition2;
   Else if (condition3) then
      --- code for condition3;
   Else
      --- code for condition4;
   End if;

The goto statement can be a valuable mechanism when used in a disciplined and stylistic manner. Goto statements that transfer control to remote regions of the code, or that jump in and out of code segments in clever ways, destroy the linearity and locality of control flow; they are to be avoided.


8.2.6 Recursion
A recursive subprogram is one that calls itself, either directly or indirectly. Recursion is a powerful and elegant programming technique. When properly used, recursive subprograms are easy to understand and more efficient than iterative implementations of inherently recursive algorithms. Recursive algorithms arise naturally in conjunction with recursive data structures such as linked lists and trees, when the exact topology and number of elements in the structure are not known, and in situations that require backtracking algorithms. For example, an algorithm for in-order traversal of a binary tree can be expressed recursively as:

   Procedure IN_ORDER (T: POINTER) is
   Begin
      if (T /= null) then
         IN_ORDER (T.LEFT);
         P (T);
         IN_ORDER (T.RIGHT);
      end if;
   End IN_ORDER;

In this example, P(T) is a procedure that is invoked to process the nodes of the tree as they are encountered in in-order fashion. The distinguishing characteristic of a recursive subprogram is the use of a system-provided stack to hold the values of local variables, parameters, and return points during recursive calls. Recursion is used inappropriately to implement algorithms that are inherently iterative; such algorithms do not require a stack or any other backtracking mechanism. Recursion is a powerful specification technique, and in appropriate circumstances a powerful implementation technique; however, a recursive specification does not necessarily imply a recursive implementation. Functional specifications state what is required, not how to achieve it, and the algorithmic form of a recursive specification is often best expressed in an iterative manner. Conversely, an iterative implementation of an inherently recursive algorithm will result in a user-maintained stack to simulate recursion, or in clever manipulation of pointers in the data structure to set up backtracking links; this is sometimes less efficient and is always less understandable than a recursive implementation.

8.3 Coding Style

Style is the consistent pattern of choices made among alternative ways of achieving a desired effect. Coding style is manifest in the patterns used by the programmer to express a desired action or outcome. Programmers who work together soon come to recognize the coding style of their colleagues. It has been recognized that good coding style can overcome many of the deficiencies of a primitive programming language, while poor style can defeat the intent of an excellent language. The goal of good coding style is to provide easily understood, straightforward, elegant code.


There is no single set of rules that can be applied in every situation; however, there are general guidelines that are widely applicable. A number of those guidelines are listed below as the dos and don'ts of good coding style.

Dos of Good Coding Style:
1. Use a few standard, agreed-upon control constructs.
2. Use gotos in a disciplined way.
3. Introduce user-defined data types to model entities in the problem domain.
4. Hide data structures behind access functions.
5. Isolate machine dependencies in a few routines.
6. Provide standard documentation prologues for each subprogram and compilation unit.
7. Carefully examine routines having fewer than 5 or more than 25 executable statements.
8. Use indentation, parentheses, spaces, blank lines, and borders around comment blocks.

Don'ts of Good Coding Style:
1. Don't be too clever.
2. Avoid null then statements.
3. Avoid then_if statements.
4. Don't nest too deeply.
5. Avoid obscure side effects.
6. Don't suboptimize.
7. Carefully examine routines having more than 5 formal parameters.
8. Don't use an identifier for multiple purposes.

Dos of Good Coding Style


Use a few standard, agreed-upon control constructs: There is no standard set of constructs for structured coding. Nesting of statement sequences, selection among alternatives, and a mechanism for iteration are sufficient. If implementation is to done in a primitive programming language, structured constructs can be achieved by using conditional and unconditional branching in stylistic patterns to achieve the effect of if-then-else, while-do, repeat-until, case, etc. Pre-processors are sometimes used to translate structured constructs into primitive languages. In assembly language, a macro facility is often used to define templates for structured constructs and to expand structured constructs into stylistic patterns of assembler code. If the implementation language does not provide structured coding constructs, a few stylistic patterns should be used by the programmers who implement the system. This will make coding

111

style more uniform among various programmers, with the result that programs will be easier to read, easier to understand, and easier to modify. Use gotos in a disciplined way: In primitive languages, the goto statement can be used in conjunction with the if statement to achieve the format and effect of structured coding construct. The goto statement is also valuable in achieving multiple and premature loop exits, and for transferring control to error handling code. In all these cases, the goto is used in a stylistic, disciplined manner to achieve a desired result. In particular, the acceptable uses of the goto statement are almost always forward transfers of control within a local region of code. Using a goto to achieve backward transfer of control or to perform a forward jump to a remote region of a program is seldom warranted. The purpose of the goto statement in modern languages is to permit construction of stylistic patterns that are not provided by the language; the goto statement is not provided to encourage violation of structured coding practices. The goto statement, coupled with the if statement, permits user definition of needed control flow mechanism in the same way that user-defined data types permits user definition of unavailable data representations. Introduce user-defined data types to model entities in the problem domain: Use of distinct data types make it possible for humans and computer systems to distinguish between entities from the problem domain. For e.g. Enumeration data types can be used to enumerate the elements of an indexing sequence, subtypes can be used to place constraints on objects of a given type; and derived types permit segregation of objects that have similar representation. The enumeration type declaration type COLOR = (RED, GREEN, YELLOW, BLUE); Permits array indices and loop variables of the form For I in GREEN BLUE loop X (I):= 0; End loop; X (RED):= 0; Subtype of the form type SMALL_INTEGER is INTEGER (110); place range constraints on objects of type SMALL_INTEGER, Derived types of the form type TEMP_IN_DEG_KELVIN is FLOAT: type VELOCITY_IN_FPS is FLAOT; permit differentiation between temperature objects and velocity objects. In primitive programming languages there are limited number of predefined data types like logical, integer, real, complex and arrays, and no mechanism for user-defined types. All entities in the problem domain must be mapped into objects of predefined type. Integers are typically


used for indexing, and the programmer must maintain a mental correspondence between integer values and index values. Range constraints must be explicitly tested in the program. User-defined data types that segment the problem domain in a logical manner can greatly improve the clarity and readability of a source program. Under strong type-checking rules, user-defined data types provide increased data security. Programs written in languages that do not support user-defined data types cannot be made as clear or as secure as programs written in a modern language; however, naming conventions for object declarations can be adopted to improve the quality of the source code.

Hide data structures behind access functions; isolate machine dependencies in a few routines: These guidelines are manifestations of the principle of information hiding. In a software system designed using information hiding, each module makes visible only those features required by other modules; all other aspects of the module are hidden from external view. Hiding data structures behind access functions is the approach taken in data encapsulation, wherein a data structure and its access routines are encapsulated in a single module. In a language having global scope, disciplined programming style can be used to ensure that major data structures are manipulated in only a few regions of the source code. All other code segments that use a data structure should do so by invoking the appropriate access code. This approach improves the modularity of a source program, and new access functions can be provided as needed. Changes in the details of data representation require only local changes to, and only local recompilation of, the access code. If the specification part of an access routine is changed, all routines that call the modified access routine must be modified and recompiled, but the modification required is very simple: only the call statements to the access routines, and perhaps the ways in which the calling parameters are used, need be modified. Call statements needing modification can be identified using a call graph listing. (A small sketch of this data-encapsulation approach appears at the end of this section.) Similar reasoning applies to the isolation of machine dependencies in a few separate routines. If the nature of the data representation must be changed, or if the program is moved to a different machine, the details of the machine dependencies are localized and can be modified accordingly.

Provide standard documentation prologues for each subprogram and compilation unit: A documentation prologue contains information about a subprogram or compilation unit that is not obvious from reading the source text of the subprogram or compilation unit. The exact form and content of a prologue depend on the nature of the implementation language. Information to be provided through a combination of the documentation prologue and the self-documenting features of the implementation language is listed below:

Name of the author
Date of compilation
Functions performed
Algorithms used
Author/date/purpose of modifications
Parameters and modes
Input assertion
Output assertion


Global variables
Side effects
Major data structures
Calling routines
Called routines
Timing constraints
Exception handling
Assumptions

Use of a pre-programmed editor can reduce the tedium of providing this information for every routine and/or compilation unit. If this information is developed in top-down fashion during detailed design, it can be incorporated directly into the implementation of the routine.

Carefully examine routines having fewer than 5 or more than 25 executable statements: A subprogram consists of a specification part, a documentation prologue, declarations, executable statements and, in some languages, exception handlers. The executable portion of a subprogram should implement a well-defined function that can be described by a simple sentence having an active verb. Routines having 5 to 25 executable statements usually satisfy these criteria. A subprogram that has more than 25 executable statements probably contains one or more well-defined subfunctions that can be packaged as distinct subprograms and invoked when needed. It has also been suggested that 30 executable statements is the upper limit of comprehension for a single reading of a subprogram written in a procedural programming language. A subprogram that has fewer than 5 statements is usually too small to provide a well-defined function; it is usually preferable to insert five or fewer statements in-line as needed rather than to invoke a subprogram. A routine having fewer than 5, or more than 25, executable statements should therefore be closely examined for in-line placement in the calling routines, or for identification of well-defined subfunctions that can be placed in separate routines and called as needed. Often, the well-defined subfunctions that are placed in separate routines provide stand-alone utilities that can be used in other contexts.

Use indentation, parentheses, spaces, blank lines and borders around comment blocks: As illustrated below, a pleasing format of presentation greatly enhances the readability of a source program.

Poor source code formatting:

while B1 loop if B2 then S1; else
for I in 1..N loop COUNT(I) := 0;
end loop; end if; end loop;

Good source code formatting:

while B1 loop
   if B2 then
      S1;
   else
      for I in 1 .. N loop
         COUNT(I) := 0;
      end loop;
   end if;
end loop;


Formatting the source text can be done manually by the programmer, or by a pretty-print routine that automatically indents code to emphasize the syntactic structure of the programming language. A formatting routine can also insert blank lines and borders around comment statements. If formatting of source text is done by the programmers, standard conventions should be agreed to and adhered to by all programmers. If formatting is done by a program, all source code should be processed by the formatter before it is retained in a permanent file. When using a primitive programming language, source text formatting is sometimes accomplished in conjunction with pre-processing or macro expansion of structured coding constructs. Translators for modern programming languages sometimes provide a pretty-print option as part of the parsing operation. If none of these options is available, a special-purpose formatting program can be written. In any case, a source text formatting program is a valuable tool.
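The data-encapsulation guideline above can be made concrete with a small sketch. The following C fragment is illustrative only: the module name, the symbol-table representation, and the access functions are invented for the example, and a project would substitute its own data structure and conventions.

/* symtab.h -- only the access functions are visible to other modules. */
#ifndef SYMTAB_H
#define SYMTAB_H

int symtab_insert(const char *name, int value);   /* returns 0 on success */
int symtab_lookup(const char *name, int *value);  /* returns 0 if found   */

#endif

/* symtab.c -- the representation is hidden here; callers never touch it. */
#include <string.h>
#include "symtab.h"

#define MAX_SYMBOLS 100

static struct { char name[32]; int value; } table[MAX_SYMBOLS];
static int count = 0;

int symtab_insert(const char *name, int value)
{
    if (count >= MAX_SYMBOLS)
        return -1;                                  /* table full */
    strncpy(table[count].name, name, sizeof table[count].name - 1);
    table[count].name[sizeof table[count].name - 1] = '\0';
    table[count].value = value;
    count++;
    return 0;
}

int symtab_lookup(const char *name, int *value)
{
    for (int i = 0; i < count; i++) {
        if (strcmp(table[i].name, name) == 0) {
            *value = table[i].value;
            return 0;
        }
    }
    return -1;                                      /* not found */
}

Because callers manipulate the table only through symtab_insert and symtab_lookup, replacing the array with, say, a hash table requires changing and recompiling only this one module.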

Don'ts of Good Coding Style


Don't be too clever: This point is illustrated by the following segment of FORTRAN code:

      DO 10 I = 1, N
      DO 10 J = 1, N
   10 A(I,J) = (I/J) * (J/I)

Not only is the code segment obscure (it relies on truncating integer division to build an identity matrix), it is also inefficient. The following code segment achieves the same result in a clearer and more efficient manner:

      DO 20 I = 1, N
         DO 10 J = 1, N
            A(I,J) = 0.0
   10    CONTINUE
         A(I,I) = 1.0
   20 CONTINUE

Avoid null then statements: A null then statement is of the form

   if B then ;
   else S;

which is equivalent to

   if (not B) then S;

Avoid then-if statements: A then-if statement is of the form

   if (A > B) then
      if (X > Y) then
         A := X;
      else
         A := B;
      end if;
   else
      A := B;
   end if;

Then-if statements tend to obscure the conditions under which the various actions are performed. A then-if statement can always be rewritten in the more obvious else-if form, as shown below:

   if (A <= B) then
      A := B;
   elsif (X > Y) then
      A := X;
   else
      A := B;
   end if;

Don't nest too deeply: The major advantage of single-entry, single-exit constructs is the ability to nest constructs within one another to any desired depth while maintaining linearity of control flow. If the nesting becomes too deep, as shown below, it becomes difficult to determine the conditions under which statement S2 will be executed, and the clarity of the code is obscured.

   while B1 loop
      if B2 then
         repeat
            S1
            while B3 loop
               if B4 then
                  S2

Excessive nesting is also an indication of fuzzy thinking and poor design. As a general guideline, nesting of program constructs to depths greater than three or four levels should be avoided. In addition, procedures should only be nested one level deep within one another.

Avoid obscure side effects: A side effect of a subprogram invocation is any change to the computational state that occurs as a result of calling the invoked routine. Side effects include


modification of parameters passed by reference or by value-result, modification of global variables, I/O operations, and other items. An obscure side effect is one that is not obvious from the invocation of the called routine. An invocation of the form

   GET(X);

that reads a value from a default input file and stores the value in location X exerts an obvious, self-documenting side effect. If, in addition, the GET routine sorts a file, rewinds a tape, and launches a guided missile, it exerts obscure side effects. Most programming languages do not provide any special notation to describe the side effects of called routines. In the statement

   SORT(X, Y, Z);

the transmission modes of X, Y and Z, and the global variables modified by SORT, can only be determined by examining the SORT routine. The SIDE_EFFECTS section of the documentation prologue for SORT is thus essential for understanding the effect of a call to SORT(X, Y, Z).

Don't suboptimize: Suboptimization occurs when one devotes inordinate effort to refining a situation that has little effect on the overall outcome. There are two aspects to suboptimization of software. First, it is usually not possible to determine what portion of the code will occupy the majority of execution time until the execution characteristics of the running program are measured. Second, unless one has intimate knowledge of the language translator, operating system, and hardware, it usually is not clear how various situations will be handled by the system. Thus, the programmer may waste time and effort worrying about situations that are handled in nonintuitive ways by the system and that have little effect on overall performance. One should, of course, choose the optimal approach when it is obvious.

Carefully examine routines having more than 5 formal parameters: Parameters and global variables are the mechanisms used to communicate data values among routines. Parameters bind different arguments to a routine on different invocations of the routine, and global variables communicate arguments whose bindings do not change between calls to the routine. Parameters and global variables should be few in number. Long, involved parameter lists result in excessively complex routines that are difficult to understand and difficult to use; they result from inadequate decomposition of a software system. Global variables should be shared among routines on a selective, as-needed basis. Selection of the number five as the suggested upper bound on the number of parameters in a subprogram is not an entirely arbitrary choice; fewer parameters and fewer global variables improve the clarity and simplicity of subprograms, and in this regard five is a very lenient upper bound.

Don't use an identifier for multiple purposes: Programmers sometimes use one identifier to denote several different entities. The rationale is memory efficiency: three variables require three or more memory cells, whereas one variable used in three different ways requires only one-third as many memory cells. There are several things wrong with this practice. First, each


identifier in a program should be given a descriptive name that suggests its purpose; this is not possible if the identifier is used for multiple purposes. Second, the use of a single identifier for multiple purposes indicates multiple regions of code that have low cohesion. Third, using an identifier for multiple purposes is a dangerous practice, because it makes the source code very sensitive to future modifications. Finally, use of an identifier for multiple purposes can be extremely confusing to the reader of a program. If a system is so tightly constrained in memory space that single identifiers must be used for multiple purposes, the system should be segmented into several smaller routines with an overlay structure. If this violates execution-time restrictions, the system should be redesigned.
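The side-effect guideline can also be sketched in C. In this hypothetical fragment (the routine names are invented for illustration), the first routine quietly rewinds a shared file in addition to returning a reading, an effect that is invisible at the call site; the second makes every effect explicit in its interface.

#include <stdio.h>

static FILE *sensor_log;        /* shared state used by the obscure version */

/* Obscure: besides returning a reading, this silently rewinds the log file.
   Nothing at the call site  x = get_reading();  hints at that effect.      */
double get_reading(void)
{
    double r = 0.0;
    fscanf(sensor_log, "%lf", &r);
    rewind(sensor_log);         /* hidden side effect */
    return r;
}

/* Explicit: the caller passes the stream and decides whether to rewind it,
   so every effect of the call is visible in the invocation itself.         */
int read_value(FILE *log, double *r)
{
    return fscanf(log, "%lf", r) == 1 ? 0 : -1;
}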

8.4 Standards and Guidelines


Coding standards are specifications for a preferred coding style. Given a choice of ways to achieve an effect, a preferred way is specified. Coding standards are often viewed by programmers as mechanisms to constrain and devalue the programmer's creative problem-solving skills. In fact, creativity always occurs within a basic framework of standards; without a framework of standards to guide and channel an activity, creativity becomes meaningless chaos. It is therefore desirable that all programmers on a software project adopt similar coding styles, so that code of uniform quality is produced. This does not mean that all programmers must think alike, or that they must implement all algorithms in exactly the same manner. Indeed, the individual style of each programmer on a project is identifiable even when rigid adherence to standards of programming style is observed.

A programming standard might specify items such as:
1. Goto statements will not be used.
2. The nesting depth of program constructs will not exceed five levels.
3. Subroutine length will not exceed 30 lines.

A guideline rephrases these specifications in the following manner:
1. The use of goto statements should be avoided in normal circumstances.
2. The nesting depth of program constructs should be five or less in normal circumstances.
3. The number of executable statements in a subprogram should not exceed 30 in normal circumstances.
4. Departure from normal circumstances requires approval by the project leader.

Several conditions are necessary to obtain voluntary adherence to programming guidelines, and to ensure that the guidelines are in fact followed. First, programmers must understand the value of programming guidelines. Second, programmers must have the opportunity to participate in establishing the guidelines. Third, the guidelines must be subject to review and revision when they become burdensome. Fourth, there must be a mechanism for allowing violations of the guidelines in special circumstances. Fifth, automated tools must be used to check adherence to the guidelines. With proper tact and diplomacy, programmers will view programming guidelines as positive aspects of the work environment.


8.5 Documentation Guidelines

Computer software includes the source code for a system and all the supporting documents generated during analysis, design, implementation, testing and maintenance of the system. Internal documentation includes standard prologues for compilation units and subprograms, the self-documenting aspects of the source code, and the internal comments embedded in the source code. Program unit notebooks provide a mechanism for organizing the work activities and documentation efforts of individual programmers. Guidelines for supporting documents, program unit notebooks and internal documentation of source code are discussed in the subsections that follow.

8.5.1 Supporting Documents


Requirements specifications, design documents, test plans, user's manuals, installation instructions and maintenance reports are examples of supporting documents. These documents are the products that result from systematic development and maintenance of computer software. A systematic approach to software development ensures that supporting documents evolve in an orderly manner and that the documents are available when needed. Supporting documents of substandard quality, or documents that are not available when needed, are a strong indication of problems with the processes being used to develop and maintain the software. These documents should evolve as a natural by-product of the development process: customer needs and constraints are recorded in the requirements specification; the requirements provide the framework for architectural design; detailed design evolves from the architectural design; and source code is developed from the architectural and detailed designs. Test plans, user manuals, training programs, installation instructions, and maintenance procedures evolve throughout the development cycle. The quality, quantity, timeliness, and utility of the supporting documents are primary measures of the health and well-being of a software project.

8.5.2 Program Unit Notebooks


A program unit is a unit of source code that is developed and maintained by one person; that person is responsible for the unit. In a well-designed system, a program unit is a subprogram or group of subprograms that provides a well-defined function or forms a well-defined subsystem. A program unit is also small enough, and modular enough, that it can be thoroughly tested in isolation by the programmer who develops or modifies it. Program unit notebooks are used by individual programmers to organize their work activities and to maintain the documentation for their program units. A program unit notebook consists of a cover sheet and several sections. The cover sheet is the table of contents and the sign-off sheet for the various milestones associated with the program unit. Maintaining the notebook is the responsibility of the programmer currently assigned to the program unit. The notebook stays with the program unit throughout the lifetime of the unit and passes from programmer to programmer as responsibility for the unit shifts. The sections in the program unit notebook correspond to the various phases of the unit's life cycle. The cover sheet is used to record projected and actual milestone dates, and the sign-off by


the reviewer indicates satisfactory completion of a life-cycle phase. The collection of cover sheets for all program units in a system provides a detailed summary of the current status of the project at any given point in time. The requirements section of a program unit notebook contains only the specifications that relate to that particular program unit. Both the initial version, as agreed to by the customer, and copies of subsequent modifications to the requirements are maintained in the notebook. The architectural and detailed design section of the notebook contains working papers and the final design specifications. Working papers should be organized to indicate why various alternatives were chosen and others rejected. The unit test plan contains a description of the approach to be used in unit testing, the testing configuration, the actual test cases, the expected outcomes for each test case, and the unit test completion criteria. The unit source code section contains a listing of the current version of the unit and listings of significant previous versions. It is not expected that all syntactic and debugging runs will be retained, but listings of earlier versions that existed prior to significant modifications should be retained. The test results section contains results from the test runs and an analysis of those results in terms of the expected results. Not all debugging runs will be retained, but significant test runs should be retained. Problem reports are filed after the system comes under configuration control. Problem reports are the mechanism typically used to initiate a change; a problem report may describe an error situation or a desired modification. A copy of each problem report and its disposition should be retained in the program unit notebook of each affected program unit.

8.5.3 Internal Documentation


Internal documentation consists of a standard prologue for each program unit and compilation unit, the self-documenting aspects of the source code, and the internal comments embedded in the executable portion of the code. A typical format for prologues is as follows:

Name of author
Date of compilation
Functions performed
Algorithms used
Author/date/purpose of modifications
Parameters and modes
Input assertions / output assertions
Global variables
Side effects
Major data structures
Calling routines / called routines
Timing constraints
Exception handling
Assumptions
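As an illustration only, a prologue of this kind might be written as a C comment block such as the following; the routine name, dates and details are hypothetical.

/*----------------------------------------------------------------------
 * Routine:        sort_table
 * Author:         J. Programmer            Date: 1998-03-12
 * Function:       Sorts the symbol table into ascending order by name.
 * Algorithm:      Insertion sort; adequate because the table is small.
 * Modifications:  1998-04-02  J. Programmer  added duplicate-name check
 * Parameters:     table (in/out) -- array of entries to be sorted
 *                 count (in)     -- number of valid entries in table
 * Input assertion:  0 <= count <= MAX_SYMBOLS
 * Output assertion: table[0..count-1] is sorted; no entries are lost
 * Globals / side effects: none
 * Calling routines: build_symbol_table     Called routines: strcmp
 * Timing constraints: none   Exception handling: none   Assumptions: none
 *--------------------------------------------------------------------*/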


UNIT 9

VERIFICATION AND VALIDATION

9.1 Introduction
9.2 Quality Assurance
9.3 Walkthroughs and Inspections
9.3.1 Walkthroughs
9.3.2 Inspections
9.4 Symbolic Execution
9.5 Unit Testing and Debugging
9.5.1 Unit Testing
9.5.2 Debugging
9.6 System Testing
9.6.1 Integration testing
9.6.2 Acceptance testing
9.7 Formal Verification
9.7.1 Input-Output Assertions
9.7.2 Weakest Preconditions
9.7.3 Structural Induction

9.1 Introduction

The goals of verification and validation activities are to assess and improve the quality of the work products generated during development and modification of the software. Quality attributes include correctness, completeness, consistency, reliability, usefulness, efficiency, conformance to standards, and overall cost effectiveness. There are two types of verification: life-cycle verification and formal verification. Life-cycle verification is the process of determining the degree to which the work products of a given phase of the development cycle fulfill the specifications established during prior phases.


Formal verification is a rigorous mathematical demonstration that the source code conforms to its specifications. Validation is the process of evaluating software at the end of the software development process to determine compliance with the requirements. Verification and validation involve assessment of work products to determine conformance to specifications. Specifications include the requirements specification, the design documentation, various stylistic guidelines, implementation language standards, project standards, organizational standards, and user expectations, as well as the meta-specifications for the formats and notations used in the various product specifications. The requirements must be examined for conformance to user needs, environmental constraints, and notational standards. The design documentation must be verified with respect to the requirements and notational conventions, and the source code must be examined for conformance to the requirements, the design documentation, user expectations, and various implementation and documentation standards. Errors occur when any aspect of the software product is incomplete, inconsistent, or incorrect. The quality of the work products generated during analysis and design can be assessed and improved by using systematic quality assurance procedures, by walkthroughs and inspections, and by automated checks for consistency and completeness.

9.2 Quality Assurance


Quality assurance is a planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements. The purpose of a software quality assurance group is to provide assurance that the procedures, tools and techniques used during product development and modification are adequate to provide the desired level of confidence in the work products. Often, software quality assurance personnel are organizationally distinct from the software development group. This adds a degree of impartiality to quality assurance activities and allows quality assurance personnel to become specialists in their discipline. In some organisations, software quality assurance personnel function in an advisory capacity, while in others the quality assurance group actively develops standards, tools and techniques and examines all work products for conformance to specifications. Preparation of a Software Quality Assurance Plan for each software project is a primary responsibility of the software quality assurance group. Topics in a software quality assurance plan include:
1. Purpose and scope of the plan.
2. Documents referenced in the plan.
3. Organizational structure, tasks to be performed, and specific responsibilities as they relate to product quality.
4. Documents to be prepared and checks to be made for adequacy of the documentation.


5. Standards, practices and conventions to be used.
6. Reviews and audits to be conducted.
7. A configuration management plan that identifies software product items, controls and implements changes, and records and reports change status.
8. Practices and procedures to be followed in reporting, tracking and resolving software problems.
9. Specific tools and techniques to be used to support quality assurance activities.
10. Methods and facilities to be used to maintain and store controlled versions of identified software.
11. Methods and facilities to be used to protect computer program physical media.
12. Provisions for ensuring the quality of vendor-provided and subcontractor-developed software.
13. Methods and facilities to be used in collecting, maintaining, and retaining quality assurance records.

Other duties performed by quality assurance personnel include:
1. Development of standard policies, practices, and procedures.
2. Development of testing tools and other quality assurance aids.
3. Performance of the quality assurance functions described in the Software Quality Assurance Plan for each project.
4. Performance and documentation of final product acceptance tests for each software product.

A software quality assurance group may perform the following functions:
1. During analysis and design, a Software Verification Plan and an Acceptance Test Plan are prepared. The Verification Plan describes the methods to be used in verifying that the requirements are satisfied by the design documents and that the source code is consistent with the requirements specification and design documentation. The Acceptance Test Plan includes test cases, expected outcomes, and the capabilities demonstrated by each test case. Following completion of the verification plan and the acceptance test plan, a Software Verification Review is held to evaluate the adequacy of the plans.
2. During product evolution, In-Process Audits are conducted to verify consistency and completeness of the work products. Items to be audited for consistency include interface specifications for hardware, software, and people; internal design versus functional specifications; source code versus design documentation; and functional requirements versus test descriptions. In practice, only certain critical portions of the system may be subjected to intensive audits.
3. Prior to product delivery, a Functional Audit and a Physical Audit are performed. The Functional Audit reconfirms that all requirements have been met. The Physical Audit verifies that the source code and all associated documents are complete, internally consistent, consistent with one another, and ready for delivery. A Software Verification Summary is


prepared to describe the results of all reviews, audits, and tests conducted by quality assurance personnel throughout the development cycle. Quality assurance personnel are sometimes in charge of arrangements for walkthroughs, inspections and major milestone reviews. In addition, quality assurance personnel often conduct the project post-mortem, write the project legacy document and provide long-term retention of project records. The quality assurance organisation can also serve as a focal point for collection, analysis and dissemination of quantitative data concerning cost distribution, schedule slippage, error rates and other factors that influence quality and productivity. Typically, the quality assurance group works with the development group to derive the Source Code Test Plan. A test plan for the source code specifies the objectives of testing, the test completion criteria, the system integration plan, methods to be used on particular modules, and particular test inputs and expected outcomes. There are four types of tests that the source code must satisfy: function tests, performance tests, stress tests and structure tests. Function tests and performance tests are based on the requirements specifications; they are designed to demonstrate that the system satisfies its requirements. Therefore, the test plan can be only as good as the requirements. Functional test cases specify typical operating conditions, typical input values, and typical expected results. Performance tests are designed to verify response time under varying loads, percent of execution time spent in various segments of the program, throughput, primary and secondary memory utilization, and traffic rates on data channels and communication links. Stress tests are designed to overload a system in various ways. Structure tests are concerned with examining the internal processing logic of a software system; the particular routines called and the logical paths traversed through the routines are the objects of interest.

9.3 Walkthroughs and Inspections

Walkthroughs and Inspections can be used to systematically examine work products throughout the software life cycle.

9.3.1 Walkthroughs
A walkthrough team usually consists of a reviewee and three to five reviewers. Members of a walkthrough team may include the project leader, other members of the project team, a representative from the quality assurance group, a technical writer, and other technical personnel who have an interest in the project. Customers and users should be included in walkthroughs during the requirements and preliminary design phases. Walkthrough sessions should be held in an open, nondefensive atmosphere. The goal of a walkthrough is to discover and make note of problem areas. Problems are not resolved during the walkthrough session; they are resolved by the reviewee following the


walkthrough session. A follow-up meeting or follow-up memo should be used to inform reviewers of the actions taken. Several guidelines should be observed to gain maximum benefit from walkthroughs:
1. Everyone's work should be reviewed on a scheduled basis. This approach ensures that all work products are reviewed, provides a means of communication among team members, and lessens the threat to individual reviewees.
2. Emphasis should be placed on detecting errors. A walkthrough session should not be used to correct errors; errors should be noted for subsequent resolution by the reviewee.
3. Major issues should be addressed. Although it is sometimes difficult to distinguish between major and minor issues, walkthrough sessions should not degenerate into detailed discussions of minor problems.
4. Walkthrough sessions should be limited to two hours. A definite time limit ensures that the meeting will not drag on for several hours. It also helps to limit the scope of material to be examined, reinforces the emphasis on major issues, and provides incentive for active participation by the reviewers.
Successful use of walkthroughs is strongly dependent on establishing a positive, non-threatening atmosphere for the walkthrough sessions. Sufficient time for walkthroughs must be allotted in the project schedule, and walkthrough sessions must be regarded as part of each participant's normal workload rather than as an overload commitment. Walkthroughs must not be used as vehicles for employee evaluation; over a period of time, the project leader will in any case observe the strengths and weaknesses of each team member.

9.3.2 Inspections
Inspections can be used throughout the software life cycle to assess and improve the quality of the various work products. An inspection team consists of one to four members who are trained for their tasks. Inspections are conducted in a manner similar to walkthroughs, but more structure is imposed on the sessions and each participant has a definite role to play. A four-member inspection team plays the roles of moderator, designer, implementer and tester. Items to be inspected in a design inspection might include completeness of the design with respect to the user and functional requirements, internal completeness and consistency in definition and usage of terminology, and correctness of the interfaces between modules. Items to be inspected in a code inspection might include subprogram interfaces, decision logic, data referencing, computational expressions, I/O statements, comments, data flow and memory usage. Actual items examined under the category of subprogram interfaces might include:
1. Are the numbers of actual parameters and formal parameters in agreement?
2. Do the type attributes of actual and formal parameters match?
3. Do the dimensional units of actual and formal parameters match?
4. Are the numbers, attributes and ordering of arguments to built-in functions correct?
5. Are constants passed as modifiable arguments?
6. Are global variable definitions and usage consistent among modules?


The actual items in the checklist will depend on the implementation language. In a language that enforces separate compilation of interfaces, many of the error conditions listed above will be checked automatically across the compilation interfaces. In an initial experiment with inspections, the inspection team conducted two 2-hour sessions per day and expended 25 hours per person. Of the errors found during development, 67 percent were found before any unit testing was performed. During the first 7 months of operation, 38 percent fewer errors were found in the inspected program than in a similar program produced by a control group using informal walkthroughs. An interesting aspect of inspection is the observation that different programmers tend to make different kinds of characteristic errors. Errors found during an inspection can be cycled back to the programmers who made them; with this feedback, programmers tend to eliminate those types of errors from their subsequent work, thus improving quality and productivity.
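A small, hypothetical C fragment shows the kind of question the interface checklist is meant to raise; the routine, its caller, and the units involved are invented for illustration.

#include <stdio.h>

/* Formal parameters: an array of readings in metres and its length. */
double average_metres(const double readings[], int count)
{
    double total = 0.0;
    for (int i = 0; i < count; i++)
        total += readings[i];
    return count > 0 ? total / count : 0.0;
}

int main(void)
{
    double depth_in_feet[3] = {3.0, 6.5, 9.0};

    /* Checklist items 1 and 2: the number and types of the actual
       parameters agree with the formal parameters, so the compiler
       is satisfied.  Checklist item 3: the dimensional units do NOT
       agree -- feet are passed where metres are expected.  Only an
       inspection (or an agreed units convention) catches this.      */
    printf("average = %f\n", average_metres(depth_in_feet, 3));
    return 0;
}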

9.4 Symbolic Execution


Symbolic execution is a validation technique in which the input variables of a program unit are assigned symbolic values rather than literal values. A program is analyzed by propagating the symbolic values of the inputs into the operands in expressions. The resulting symbolic expressions are simplified at each step in the computation so that all intermediate computations and decisions are always expressed in terms of the symbolic inputs. The following program segment and its symbolic execution illustrate the technique; symbolic predicates from the IF statements can be conjoined to form path conditions that describe the constraints under which various segments of code will be executed.

Program segment:                              Symbolic execution:
   READ (B, C);                                  B: b;  C: c
   A := B + C;                                   A: b + c
   X := A * C;                                   X: (b + c) * c
   IF (A <= X) THEN ... ELSE ...                 predicate: (b + c) <= (b + c) * c
   IF (B >= 1) OR (B <= 1) THEN ... ELSE ...     predicate: (b >= 1) or (b <= 1)

Symbolic execution can be used to derive path conditions that can be solved to find input data values that will drive a program along a particular execution path, provided all predicates in the corresponding path conditions are linear functions of the symbolic input values. When the predicates are nonlinear in the input values, the path condition may or may not be solvable, because systems of nonlinear inequalities are in general unsolvable. In addition to their use in deriving test data, path conditions can be used to show that operations such as division, array referencing and pointer operations are secure or insecure in particular regions of a program.


Program loops can be analyzed using symbolic execution trees. The function below sums elements 1 .. N of an integer array A, provided N is in the range of the array bounds; the elements of array A are given the symbolic values a1, a2, a3, ..., an.

   function SUM (A : INT_ARRAY; N : NATURAL) return INTEGER is
      S : INTEGER := 0;
      I : NATURAL := 1;
   begin                            -- symbolic values:
      while I <= N loop             --   A(1): a1
         S := S + A(I);             --   A(2): a2
         I := I + 1;                --   A(3): a3
      end loop;                     --   ...
      return (S);                   --   A(n): an
   end SUM;

Symbolic execution of the function results in the symbolic execution tree illustrated below.

S = 0, I = 1 1<=N S = 0 + a1, I = 2 2<=N S = a1 + a2, I = 3 3<=N S = a1 + a2 + a3 , I = 4 4<=N Return a1 + a2 + a3 4> N Return a1 + a2 3> N Return a1 2> N Return 0 1> N
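Reading one branch of the tree gives a path condition. For example, the path that returns a1 + a2 requires

   \[ (1 \le N) \land (2 \le N) \land (3 > N) \;\Rightarrow\; N = 2 \]

so any test case with N = 2 drives SUM along that path and should return A(1) + A(2).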

Symbolic execution can be used to generate intermediate expressions and output expressions as functions of the input variables, to derive loop invariants, to detect infeasible paths through a program, and to demonstrate the potential for semantic errors in certain regions of the code. Symbolic execution can be performed manually or automatically, and symbolic execution systems have been developed for various programming languages. It is an important conceptual tool that


forms the bridge from testing to formal verification. The bridge between testing and symbolic execution is demonstrated in connection with the discussion of domain testing theory, where the intermediate computations must be expressed in terms of the input variables. The bridge to testing is also apparent when one considers the possibility of supplying some program inputs in literal form and some in symbolic form. If all inputs are literal, the program is being tested. If all inputs are symbolic, the program is being analyzed using symbolic execution techniques. It is thus possible to vary the degree of symbolic execution from none, in the pure testing case, to all, in the pure symbolic case. Often it is possible to determine the sensitivity of a program to a particular input variable by using symbolic values for the variable(s) of interest and literal values for all other inputs.

9.5 Unit Testing and Debugging


Static Analysis is used to investigate the structural properties of source code. Dynamic test cases are used to investigate the behaviour of source code by executing the program on the test data.

9.5.1 Unit Testing


Unit testing comprises the set of tests performed by an individual programmer prior to integration of the unit into a larger system. The situation is illustrated as follows:

   Coding & debugging -> Unit testing -> Integration testing

A program unit is usually small enough that the programmer who developed it can test it in great detail. There are four categories of tests that a programmer will typically perform on a program unit:
1. Functional test cases involve exercising the code with nominal input values for which the expected results are known, as well as boundary values (such as minimum values, maximum values, and values on and just outside the functional boundaries) and special values (such as logically related inputs, the identity matrix, files of identical elements, and empty files); a minimal sketch in C follows this list.
2. Performance tests determine the amount of execution time spent in various parts of the unit, program throughput, response time, and device utilization by the program unit. A certain amount of performance tuning may be done during unit testing; however, caution must be exercised to avoid expending too much effort on fine tuning of a program unit that contributes little to the overall performance of the entire system. Performance testing is most productive at the subsystem and system levels.
3. Stress tests are tests designed to intentionally break the unit. A great deal can be learned about the strengths and limitations of a program by examining the manner in which a program unit breaks.
4. Structure tests are concerned with exercising the internal logic of a program and traversing particular execution paths. The major activities in structural testing are deciding which paths to exercise, deriving test data to exercise those paths, determining the test coverage criterion to be used, executing the test cases, and measuring the test coverage achieved when the test cases are exercised.
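As a sketch of the first category, suppose the unit under test is a hypothetical routine clamp(x, lo, hi) that limits x to the range lo..hi; the routine and the test values below are illustrative only.

#include <assert.h>

/* Unit under test: limit x to the closed range [lo, hi]. */
static int clamp(int x, int lo, int hi)
{
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

int main(void)
{
    assert(clamp(5, 0, 10) == 5);     /* nominal value              */
    assert(clamp(0, 0, 10) == 0);     /* on the lower boundary      */
    assert(clamp(11, 0, 10) == 10);   /* just outside the boundary  */
    assert(clamp(7, 7, 7) == 7);      /* special value: lo == hi    */
    return 0;
}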


A test coverage criterion must be established for unit testing, because program units usually contain too many paths to permit exhaustive testing. This can be seen from the table below, which shows how the number of distinct execution paths P through a loop structure grows with the number of loop iterations N: loops introduce combinatorial numbers of execution paths and make exhaustive testing impossible.

   N      P
   0      2
   1      4
   2      8
   10     2048

   P = 2^(N+1)

Establishing a test completion criterion is another difficulty encountered in unit testing of real programs. There are three commonly used measures of test coverage in unit testing: statement coverage, branch coverage and logical path coverage. These measures can be illustrated on a small flow graph of five statements (numbered 1 through 5) with two branch points:

   Statement coverage: execute statements 1, 2, 3, 4, 5
   Branch coverage:    the paths 1,2,3,4,5 and 1,3,5
   Path coverage:      the paths 1,2,3,4,5; 1,3,5; 1,2,3,5; 1,3,4,5

Using statement coverage as the test completion criterion, the programmer attempts to find a set of test cases that will execute each statement in the program at least once. Using branch coverage as the test completion criterion, the programmer attempts to find a set of test cases that will execute each branching statement in each direction at least once. Logical path coverage acknowledges that the order in which branches are executed during a test is an important factor in determining the test outcome. It may not be possible to achieve 100 percent branch coverage because of the difficulties involved in finding input data that will exercise a nested branching statement in a particular way. A technique often used during unit testing is to measure the coverage achieved by the functional, performance, and stress tests, and to add additional test cases until the desired level of branch coverage is achieved.
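The difference between statement and branch coverage can be sketched with a small, hypothetical C routine:

/* Hypothetical routine with two independent branch points. */
int classify(int a, int b)
{
    int result = 0;
    if (a < 0)
        result = -1;       /* taken only when a < 0  */
    if (b != 0)
        result += 2;       /* taken only when b != 0 */
    return result;
}

A single test case with a < 0 and b != 0 executes every statement, so it achieves statement coverage; branch coverage additionally requires a case with a >= 0 and a case with b == 0, so that each branching statement is exercised in both directions.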


Test completion criteria based on considerations other than path coverage are sometimes used. These criteria typically involve termination of testing when a predetermined rate of error discovery is achieved, or when a predetermined number of errors have been discovered and corrected. Techniques for estimating the number of errors remaining in a program include predictive models, rules of thumb, error seeding, and trend plotting.

9.5.2 Debugging

Debugging is the process of isolating and correcting the causes of known errors. Commonly used debugging methods include induction, deduction and backtracking. Debugging by induction involves the following steps:
1. Collect the available information. Enumerate known facts about the observed failures and known facts concerning successful test cases. What are the observed symptoms? When did the error occur? Under what conditions did it occur? How does the failure case differ from successful cases?
2. Look for patterns. Examine the collected information for conditions that differentiate the failure case from the successful cases.
3. Form one or more hypotheses. Derive one or more hypotheses from the observed relationships. If no hypotheses are apparent, re-examine the available information and collect additional information, perhaps by running more test cases. If several hypotheses emerge, rank them in order from most likely to least likely.
4. Prove or disprove each hypothesis. Re-examine the available information to determine whether the hypotheses explain all aspects of the observed problem. Do not overlook the possibility that there may be multiple errors. Do not proceed to step 5 until step 4 is completed.
5. Implement the appropriate corrections. Make the corrections indicated by your evaluation of the various hypotheses. Make the corrections to a back-up copy of the code, in case the modifications are not correct.
6. Verify the correction. Rerun the failure case to be sure that the fix corrects the observed symptoms. Run additional test cases to increase your confidence in the fix. Rerun the previously successful test cases to be sure that your fix has not created new problems. If the fix is successful, make the fixed-up copy the primary version of the code and delete the old copy. If the fix is unsuccessful, go to step 1.
Debugging by deduction proceeds as follows:
1. List possible causes for the observed failure.
2. Use the available information to eliminate various hypotheses.
3. Elaborate the remaining hypotheses.
4. Prove or disprove each hypothesis.
5. Determine the appropriate corrections.
6. Verify the correction.


Debugging by backtracking involves working backward in the source code from the point where the error was observed, in an attempt to identify the exact point where the error occurred. It may be necessary to run additional test cases in order to collect more information. Traditional debugging techniques utilize diagnostic output statements, snapshot dumps, selective traces of data values and control flow, and instruction-dependent breakpoints. Modern debugging tools utilize assertion-controlled breakpoints and execution histories. Diagnostic output statements can be embedded in the source code as specially formatted comment statements that are activated using a special translator option. Diagnostic output from these statements provides snapshots of selected components of the program state, from which the programmer attempts to infer program behaviour. The program state includes the values associated with all currently accessible symbols and any additional information, such as the program stack and program counter, needed to continue execution from a particular point in the execution sequence of the program. A snapshot dump is a machine-level representation of the partial or total program state at a particular point in the execution sequence. A structural snapshot dump is a source-level representation of the partial or total state of the program; the names and current values of symbols accessible at the time of the snapshot are provided. A trace facility lists the changes in selected state components. In its simplest form, a trace will print all changes in data values for all variables and all changes in control flow. A selective trace will trace specific variables and control flow in specific regions of the source text. The traditional breakpoint facility interrupts program execution and transfers control to the programmer's terminal when execution reaches a specified break instruction in the source code. The programmer can typically examine the program state, change values of state components, set new breakpoints, and resume execution in either normal mode or single-step mode, perhaps at another instruction location.
Assertion-driven debugging: Assertions are logical predicates written at the source-code level to describe the relationships among components of the current program state and the relationships between program states. An assertion violation can alter the execution sequence, cause recording of state-component information for later processing, or trap control to the programmer's terminal. Assertion violations that transfer control to the programmer's terminal are called conditional breakpoints; they become unconditional breakpoints under assertions such as 0 = 1 or FALSE. Conditional breakpoints are state-dependent, while unconditional breakpoints are instruction-dependent. Conditional breakpoints allow the programmer to focus on the nature of the error itself, rather than on the instruction locations where the error might have occurred. An assertion-driven debugging and unit testing tool called ALADDIN illustrates the approach. ALADDIN is an acronym for Assembly Language Assertion Driven Debugging Interpreter. It is an interactive conditional breakpoint facility for debugging and testing of assembly language programs. If an assertion becomes false during execution of an object program, a breakpoint is executed and control is passed to the user's terminal. In the ALADDIN system, assertions can be


used either for debugging (finding the cause of a known error) or for testing (determining whether certain errors are present). A debugging assertion takes a form such as

   $ PC > 377 $

which states that the program counter is always greater than 377 (octal). The assertion is violated, and a breakpoint is executed, if the machine attempts to execute an instruction in page zero of the machine. In ALADDIN, testing assertions take the form

   $ X < 10; Y >= 0 $

which states that X < 10 or Y >= 0 is true. A breakpoint is executed on any instruction cycle in which the assertion becomes false. In debugging mode, an assertion facility frees the user from the customary trial-and-error guessing required to locate the source of an error, because the program is interrupted on the instruction cycle in which the error occurs. In testing mode, only departures from asserted behaviour are reported; the programmer does not have to examine dumps and traces to ascertain correct program behaviour. The goal of assertion-driven testing is to provide a sufficient set of assertions so that the absence of an assertion violation assures proper functioning of the software being tested.
Execution histories: An execution history is a record of execution events collected from an executing program. The history is typically stored in a database for post-mortem examination after the program has terminated execution. A traceback facility uses the execution history to trace control flow and data flow both forward and backward in execution time. Control-flow and data-flow dependencies can be traced either forward or backward in execution time from the current position in the execution history. Interpreting the execution history in reverse order provides the illusion that the program is executing backward in execution time; this permits analysis of how a particular computation was influenced by previous events. This capability is extremely valuable in debugging and testing of non-repeatable error conditions such as those that arise in real-time processing of external events. The Extendable Debugging and Monitoring System (EXDAMS) and the Interactive Semantic Modelling System (ISMS) are two examples of debugging and testing facilities that incorporate an execution-history collection and display capability. In an execution-history system, the user is unable to interact directly with the executing program, but instead examines program behaviour by interrogating the post-mortem database. This inability to interact with the executing program characterizes the difference between the two fundamental approaches to debugging and unit testing: interpretation and history collection.
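The standard C assert macro offers a crude analogue of such assertion-controlled breakpoints: execution halts, with a file and line report, on the first cycle at which the asserted relationship fails. This is an illustrative analogy only, not the ALADDIN facility itself, and the routine shown is hypothetical.

#include <assert.h>
#include <stddef.h>

/* Hypothetical routine being debugged: each assert states a relationship
   that must hold at that point.  A violation stops the run at the exact
   statement where the error first becomes visible, instead of allowing
   the program to fail silently much later.                              */
int sum_first(const int *a, int n)
{
    assert(a != NULL && n >= 0);   /* precondition on the inputs      */
    int s = 0;
    for (int i = 0; i < n; i++) {
        assert(i < n);             /* index stays within bounds       */
        s += a[i];
    }
    return s;
}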

9.6 System Testing

System testing involves two kinds of activities: Integration testing and Acceptance testing. Strategies for integrating software components into a functioning product include the bottom-up strategy, the top-down strategy, and the sandwich strategy. Careful planning and scheduling are required to ensure that modules will be available for integration into the evolving software


product when needed. The integration strategy dictates the order in which modules must be available, and exerts a strong influence on the order in which the modules are written, debugged, and unit tested. Acceptance testing involves planning and execution of functional tests, performance tests, and stress tests to verify that the implemented system satisfies its requirements. Acceptance tests are performed by the quality assurance or customer organizations.

9.6.1 Integration testing


Bottom-up integration is the traditional strategy used to integrate the components of a software system into a functioning whole. It consists of unit testing, followed by subsystem testing, followed by testing of the entire system. Unit testing has the goal of discovering errors in the individual modules of the system. Modules are tested in isolation from one another in an artificial environment known as a test harness, which consists of the driver programs and data necessary to exercise the modules. A subsystem consists of several modules that communicate with each other through well-defined interfaces. A subsystem implements a major segment of the total system. The primary goal of subsystem testing is to verify operation of the interfaces between modules in the subsystem; both control and data interfaces must be tested. Large software systems may require several levels of subsystem testing: lower-level subsystems are successively combined to form higher-level subsystems. System testing is concerned with subtleties in the interfaces, decision logic, control flow, recovery procedures, and the throughput, capacity and timing characteristics of the entire system. Careful test planning is required to determine the extent and nature of system testing to be performed and to establish criteria by which the results will be evaluated. Disadvantages of bottom-up testing include the necessity to write and debug test harnesses for the modules and subsystems, and the level of complexity that results from combining modules and subsystems into larger and larger units. The extreme case of complexity results when each module is unit tested in isolation and all modules are then linked and executed in one single integration run. Top-down integration starts with the main routine and one or two immediately subordinate routines in the system structure. After this top-level skeleton has been thoroughly tested, it becomes the test harness for its immediately subordinate routines. Top-down integration requires the use of program stubs to simulate the effect of lower-level routines that are called by those being tested. While it may appear that top-down integration is always preferable, there are many situations in which it is not possible to adhere to a strict top-down coding and integration strategy. For example, it may be difficult to find top-level input data that will exercise a lower-level module in a particular desired manner. Also, the evolving system may be expensive to run as a test harness for new routines; it may not be cost effective to re-link and re-execute a system with many routines each time a new routine is added. Significant amounts of machine time can often be saved by testing subsystems in isolation before inserting them into the evolving top-down structure. Sandwich integration is predominantly top-down, but bottom-up techniques are used on some modules and subsystems. This mix alleviates many of the problems encountered in pure top-


down testing and retains the advantages of top-down integration at the subsystem and system level. Automated tools used in integration testing include module drivers, test data generators, environment simulators and a library management facility to allow easy configuration and reconfiguration of the system elements. Automated module drivers permit specification of test cases in a descriptive language. The driver tool then calls the routine using specified test cases, compares actual results with expected results, and reports discrepancies. Some module drivers also provide program stubs for top-down testing. Test cases are written for the stub, and when the stub is invoked by the routine being tested, the driver examines the input parameters to the stub and returns the corresponding outputs to the routine.
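The stub-and-driver idea can be sketched in C. Everything here is hypothetical: read_sensor stands in for a lower-level routine that has not yet been written, and main plays the role of an automated module driver for the top-level routine under test.

#include <stdio.h>

static int stub_calls = 0;

/* Stub for the not-yet-written lower-level routine.  It records that it
   was called and returns a canned value so the top-level logic can be
   integration-tested now; the real version replaces it later.          */
int read_sensor(int channel)
{
    (void)channel;
    stub_calls++;
    return 42;                     /* canned test value */
}

/* Top-level routine under test: it is exercised against the stub. */
int highest_reading(int channels)
{
    int max = 0;
    for (int c = 0; c < channels; c++) {
        int r = read_sensor(c);
        if (r > max)
            max = r;
    }
    return max;
}

/* Driver: calls the routine, compares actual with expected results. */
int main(void)
{
    int result = highest_reading(3);
    printf("expected 42, got %d (stub called %d times)\n", result, stub_calls);
    return (result == 42 && stub_calls == 3) ? 0 : 1;
}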

9.6.2 Acceptance testing


Acceptance testing involves planning and execution of functional tests, performance tests and stress tests in order to demonstrate that the implemented system satisfies its requirements. It is not unusual for two sets of acceptance tests to be run: those developed by the quality assurance group and those developed by the customer. In addition to functional and performance tests, stress tests are performed to determine the limitations of the system. For example, a compiler might be tested to determine the effect of symbol table overflow, or a real-time system might be tested to determine the effect of simultaneous arrival of numerous high-priority interrupts. Acceptance tests will incorporate test cases developed during unit testing and integration testing. Additional test cases are added to achieve the desired level of functional, performance and stress testing of the entire system. Tools of special importance during acceptance testing include a test coverage analyzer, a timing analyzer and a coding standards checker. A test coverage analyzer records the control paths followed for each test case; the cumulative record is used to establish the extent of test coverage obtained during acceptance testing. A timing analyzer reports the time spent in various regions of the source code under different test cases. It is not unusual for a program to spend 80 to 90 percent of its execution time in 20 percent or less of the code; these regions of the code are the areas on which to concentrate to improve system performance. Certain coding standards are often stated in the product requirements; for example, no gotos or no recursive routines. Coding standards may be project-related, customer-related, developer-related, or language-related. Static analyzers and standards checkers can be used to inspect code for departures from standards and guidelines.

9.7 Formal Verification

Formal verification involves the use of rigorous, mathematical techniques to demonstrate that computer programs have certain desired properties. The methods of input-output assertion, weakest preconditions, and structural induction are three commonly used techniques.


9.7.1 Input-Output Assertions


The method of input-output assertions was introduced by Floyd and refined by Hoare and Dijkstra. Using input-output assertions, predicates (assertions) are associated with the entry point, the exit point, and various intermediate points in the source code. The predicates, or verification conditions, must be true whenever the associated code is executed. The notation (P) S (R) is used to mean that if predicate P is true prior to executing code segment S, predicate R will be true following execution of S. The method of input-output assertions states that if the verification conditions along an execution path from the input assertion to the output assertion all hold, if the input assertion is satisfied by the input conditions, and if the program terminates after following the execution path of interest, then the output assertion will be true on termination of the program.
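As a standard illustrative instance of the notation (not taken from the text), a single assignment statement can be annotated as

   \[ (x = a)\;\; x := x + 1 \;\;(x = a + 1) \]

meaning that if x equals some value a before the assignment and the statement terminates, then x equals a + 1 afterwards.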

9.7.2 Weakest Preconditions


Given a proposition of the form (P) S (R), P is the weakest precondition for S if it is the weakest condition that guarantees the truth of R following execution of S. The weakest precondition is written

P = wp(S, R)

In practice, P is found by working backwards from R. If S is an assignment statement of the form X := E, the weakest precondition is obtained by substituting expression E for every occurrence of X in predicate R:

wp(X := E, R) = R(E -> X)

that is, R with E substituted for X.
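For example, wp(x := x + 1, x > 0) = (x + 1 > 0), which simplifies to x > -1. The toy sketch below performs the substitution textually; it is purely illustrative, since real verification tools manipulate predicates symbolically rather than as strings.

# Weakest precondition of an assignment X := E by textual substitution of E for X.
def wp_assign(var, expr, postcondition):
    return postcondition.replace(var, "(" + expr + ")")

print(wp_assign("x", "x + 1", "x > 0"))   # (x + 1) > 0, i.e. x > -1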

9.7.3 Structural Induction


Structural induction is a formal verification technique based on the general principle of mathematical induction. Given a set S having the necessary ordering properties and a proposition P to be proved, mathematical induction proceeds as follows:

1. Show P to be true for the minimal element(s) of S.
2. Assume P to be true for each element of S that has an ordinal number less than or equal to N, and show P to be true for the (N + 1)st element of S.
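For example, to prove P(N): 1 + 2 + ... + N = N(N + 1)/2 over the positive integers, the minimal element is N = 1, where 1 = 1(2)/2, so P(1) holds. Assuming P(N), the sum 1 + 2 + ... + N + (N + 1) equals N(N + 1)/2 + (N + 1) = (N + 1)(N + 2)/2, which is exactly P(N + 1); hence P holds for all positive integers.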


UNIT 10

SOFTWARE MAINTENANCE

10.1 Introduction
10.2 Enhancing Maintainability during Development
10.2.1 Analysis Activities
10.2.2 Standards and Guidelines
10.2.3 Design Activities
10.2.4 Implementation Issues
10.2.5 Supporting Documents
10.3 Managerial Aspects of Software Maintenance
10.3.1 Change Control Board
10.3.2 Change Request Summaries
10.3.3 Quality Assurance Activities
10.3.4 Organizing Maintenance Programmers
10.4 Configuration Management
10.4.1 Configuration Management Databases
10.4.2 Version Control Libraries
10.5 Source-code Metrics
10.5.1 Halstead's Effort Equation
10.5.2 McCabe's Cyclomatic Metric
10.6 Other Maintenance Tools and Techniques

10.1 Introduction
The term software maintenance is used to describe the software engineering activities that occur following delivery of a software product to the customer. The maintenance phase of the software life cycle is the time period in which a software product performs useful work. While the development cycle for a software product typically spans one to two years, the maintenance phase often spans five to ten years.


Maintenance activities involve making enhancements to software products, adapting products to new environments, and correcting problems. Software product enhancement may involve providing new functional capabilities, improving user displays and modes of interaction, upgrading external documents and internal documentation, or upgrading the performance characteristics of a system. It is well established that maintenance activities consume a large portion of the total life-cycle budget. As a general rule of thumb, roughly 60 percent of the maintenance budget goes to enhancement and 20 percent each to adaptation and correction.

Maintainability, like all high-level quality attributes, can be expressed in terms of attributes that are built into the product. The primary product attributes that contribute to software maintainability are clarity, modularity, and good internal documentation of the source code, as well as appropriate supporting documents.

Maintenance is an important part of the software life cycle. Enhancement and adaptation of the software reinitiate development in the analysis phase, while correction of a software problem may reinitiate the development cycle in the analysis phase, the design phase, or the implementation phase. Thus, all of the tools and techniques used to develop software are potentially useful for software maintenance. Analysis activities during maintenance involve understanding the scope and effect of a desired change as well as the constraints on making the change. Design during maintenance involves redesigning the product to incorporate the desired changes. The changes must then be implemented, internal documentation of the code must be updated, and new test cases must be designed to assess the adequacy of the modification. Also, the supporting documents must be updated to reflect the changes. Updated versions of the software must be distributed to the various customer sites, and configuration control records for each site must be updated. All of these tasks must be accomplished using a systematic, orderly approach to tracking and analysis of change requests, and careful redesign, reimplementation, revalidation, and redocumentation of the changes.

10.2 Enhancing Maintainability during Development


Many activities performed during software development enhance the maintainability of a software product. Some of these activities are listed below.

1. Analysis Activities
   - Develop standards and guidelines
   - Set milestones for the supporting documents
   - Specify quality assurance procedures
   - Identify likely product enhancements
   - Determine resources required for maintenance
   - Estimate maintenance costs

2. Architectural Design Activities
   - Emphasize clarity and modularity as design criteria
   - Design to ease likely enhancements
   - Use standardized notations to document data flow, functions, structure, and interconnections
   - Observe the principles of information hiding, data abstraction, and top-down hierarchical decomposition

3. Detailed Design Activities
   - Use standardized notations to specify algorithms, data structures, and procedure interface specifications
   - Specify side effects and exception handling for each routine
   - Provide cross-reference directories

4. Implementation Activities
   - Use single entry, single exit constructs
   - Use standard indentation of constructs
   - Use a simple, clear coding style
   - Use symbolic constants to parameterize routines
   - Provide margins on resources
   - Provide standard documentation prologues for each routine
   - Follow standard internal commenting guidelines

5. Other Activities
   - Develop a maintenance guide
   - Develop a test suite
   - Provide test suite documentation

10.2.1 Analysis Activities


The analysis phase of software development is concerned with determining customer requirements and constraints and with establishing the feasibility of the product. From the maintenance viewpoint, the most important activities that occur during analysis are establishing standards and guidelines for the project and the work products to ensure uniformity of the products; setting milestones to ensure that the work products are produced on schedule; specifying quality assurance procedures to ensure development of high-quality documents; identifying product enhancements that are likely to occur following initial delivery of the system; and estimating the resources required to perform maintenance activities.

The customer must be given an estimate of the resources required and of the likely costs to be incurred in maintaining the system. These estimates may exert a strong influence on the feasibility of the system requirements, and may result in modifications to the requirements. An estimate of the resources required for maintenance allows planning for the procurement of the necessary maintenance facilities and personnel during the development cycle, and it minimizes unpleasant surprises for the customer.

10.2.2 Standards and Guidelines


Various types of standards and guidelines can be developed to enhance the maintainability of software. Standard formats for requirements documents and design specifications, structured coding conventions, and standardized formats for supporting documents such as the test plan, the principles of operation, the installation manual, and the user's manual all contribute to the understandability, and hence the maintainability, of software. The quality assurance group can be given responsibility for developing and enforcing the various standards and guidelines during software development.

10.2.3 Design Activities


Architectural design is concerned with developing the functional components, conceptual data structures, and interconnections of a software system. The most important activity for enhancing maintainability during architectural design is to emphasize clarity, modularity, and ease of modification as the primary design criteria. Given alternative ways of structuring a system, designers will choose a particular structure on the basis of design criteria that may be explicitly stated or implicitly understood. The criteria may include coupling and cohesion of modules, efficiency considerations, interfaces to existing software, features of the machine architecture, and other factors. Explicit emphasis on clarity, modularity, and ease of modification will usually result in a system that is easier to maintain than one designed with execution-time efficiency and minimization of memory space as the primary design criteria. Design concepts such as information hiding, data abstraction, and top-down hierarchical decomposition are appropriate mechanisms for achieving a clearly understandable, modular, and easily modified system structure. For ease of understanding, and for ease of verifying completeness and consistency of the design, standardized notations such as data flow diagrams, structure charts, and/or HIPO diagrams should be used. These forms of design documentation aid the software maintainer, who must understand the product well enough to modify it and revalidate it.

Detailed design is concerned with specifying algorithmic details, concrete data representations, and details of the interfaces among routines and data structures. Standardized notations should be used to describe algorithms, data structures, and interfaces. Procedure interface specifications should describe the modes and problem-domain attributes of parameters and global variables used by each routine. In addition, selectively shared data areas, global variables, side effects, and exception-handling mechanisms should be documented for each routine that incorporates those features. A call graph and a cross-reference directory should be prepared to indicate the scope of effect of each routine; call graphs and directories provide the information needed to determine which routines and data structures are affected by modifications to other routines.
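As a small sketch of the cross-reference information described above, the following code uses Python's ast module to extract a call graph (which routines call which) from source text; the example routines and names are hypothetical, and a real tool would handle methods, imports, and indirect calls as well.

# Build a simple call graph: map each function to the names it calls directly.
import ast

def call_graph(source):
    graph = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            callees = {inner.func.id for inner in ast.walk(node)
                       if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name)}
            graph[node.name] = sorted(callees)
    return graph

example = """
def update(record):
    validate(record)
    save(record)

def validate(record):
    pass

def save(record):
    pass
"""
print(call_graph(example))
# {'update': ['save', 'validate'], 'validate': [], 'save': []}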


10.2.4 Implementation Activities
Implementation should have the primary goal of producing software that is easy to understand and easy to modify. Single entry, single exit coding constructs should be used, standard indentation of constructs should be observed, and a straightforward coding style should be adopted. Ease of maintenance is also enhanced by the use of symbolic constants to parameterize the software, by data encapsulation techniques, and by adequate margins on resources such as table sizes and overflow tracks on disks. In addition, a standard prologue for each routine should provide the author's name, the date of development, the name of the maintenance programmer, and the date and purpose of each modification. Input and output assertions, side effects, and exceptions and exception-handling actions should also be documented in the prologue of each routine.
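The sketch below shows one possible standard prologue of the kind described above, written as a routine header comment; the field names, dates, and the routine itself are illustrative, not a prescribed format.

def compute_interest(principal, rate, periods):
    """
    Author:        J. Developer (illustrative)
    Written:       2015-03-10
    Maintained by: maintenance group
    Modified:      2016-01-05, corrected rounding of the final period
    Purpose:       return compound interest earned on 'principal'
    Inputs:        principal >= 0, rate >= 0, periods >= 0   (input assertion)
    Outputs:       result >= 0                               (output assertion)
    Side effects:  none
    Exceptions:    ValueError if any argument is negative
    """
    if principal < 0 or rate < 0 or periods < 0:
        raise ValueError("arguments must be non-negative")
    return principal * ((1 + rate) ** periods - 1)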

10.2.5 Supporting Documents
There are two important supporting documents that should be prepared during the software development cycle in order to ease maintenance activities: the maintenance guide and the test suite description. The maintenance guide provides a technical description of the operational capabilities of the entire system, together with hierarchy diagrams, call graphs, and cross-reference directories for the system. An external description of each module, including its purpose, input and output assertions, side effects, global data structures accessed, and exceptions and exception-handling actions, should be specified in the maintenance guide.

Every delivered software product should be accompanied by a test suite. A test suite is a file of test cases developed during system integration testing and customer acceptance testing. The test suite should contain a set of test data and the actual results obtained from those tests. When the software is modified, test cases are added to the test suite to validate the modifications, and the entire suite is rerun to verify that the modifications have not introduced any unexpected side effects. Execution of a test suite following software modification is referred to as regression testing. Documentation for the test suite should specify the system configuration, the assumptions and conditions for each test case, the rationale for each test case, the actual input data for each test, and a description of the expected results of each test. During product development, the quality assurance group is often given responsibility for preparing the acceptance-test and maintenance test suites.

10.3 Managerial Aspects of Software Maintenance

Successful software maintenance, like all software engineering activities, requires a combination of managerial skills and technical expertise. One of the most important aspects of software maintenance involves tracking and control of maintenance activities. Maintenance activity for a software product usually occurs in response to a change request filed by a user of the product. A change request may entail enhancement, adaptation, or error correction. Major enhancements and major adaptations may require extensive analysis and negotiation with the customer; they are often handled as new development projects rather than as routine maintenance activities.

A change request is first reviewed by an analyst. In some cases, the request may report a user problem that is not caused by the software being maintained. In this situation, the analyst notifies the user and, with the concurrence of the user, closes the change request. Otherwise, the analyst submits to the change control board the change request, a proposed fix, and an estimate of the resources required to satisfy the request.

10.3.1 Change Control Board


The change control board reviews and approves all change requests. The board may deny a request, recommend a modified version of the change, or approve the change as submitted. The analyst provides liaison between the change control board and the request initiator. Approved changes are forwarded to the maintenance programmers for action in accordance with the priorities and constraints established by the change control board. The software is modified, revalidated, and submitted to the change control board for approval. If the control board approves, the master tapes and external documents are updated to reflect the changes, and the modified software is distributed to user sites as specified by the control board. In the event that the change control board does not approve the modifications performed by the maintenance programmers, the board's objections are addressed by the analysts and programmers, and the update is again submitted to the board for approval.

The composition of a change control board, and the formality with which it functions, depend on the nature of the software, the organizational structure, and the particular people involved. A change control board may consist of several high-level managers from both software and non-software areas of the maintenance and user organizations.

10.3.2 Change Request Summaries


The status of change requests and software maintenance activities should be summarized on a weekly or monthly basis. The summary should report emergency problems and temporary fixes put in effect since the last report; new change requests received and their probable disposition; old open requests, along with the status of progress and a probable closing date for each; and change requests that have been closed since the last summary, including a description of each closed request and its disposition.

A maintenance trends summary should be included in each change request summary. A trends summary is a graph showing the number of new requests and the total number of open requests as a function of time. The change request summary and the maintenance trends summary are valuable sources of information: new change requests can be checked against the change request summary to determine whether similar requests have been received and, if so, the current status of those requests, including suggested temporary fixes.
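As a small sketch of how the trend numbers might be produced, the following code counts new and still-open change requests per month from a hypothetical request log; the record format and dates are illustrative only.

# Count new requests and total open requests per month.
requests = [                      # (id, month opened, month closed or None)
    (1, "2024-01", "2024-02"),
    (2, "2024-01", None),
    (3, "2024-02", None),
]

months = sorted({m for _, opened, closed in requests for m in (opened, closed) if m})
for month in months:
    new = sum(1 for _, opened, _ in requests if opened == month)
    still_open = sum(1 for _, opened, closed in requests
                     if opened <= month and (closed is None or closed > month))
    print(month, "new:", new, "open:", still_open)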

10.3.3 Quality Assurance Activities


The primary function of a quality assurance group during software maintenance is to ensure that software quality does not degrade as a result of maintenance activities. In particular, the quality assurance group should conduct audits and spot checks to determine that external documents are properly updated to reflect modifications, that the structure and internal documentation of the source code are not being destroyed by quick fixes, that temporary fixes are followed up by change requests and permanent modifications, that test suites are updated to reflect modifications, that physical protection of master tapes and test suites is adequate, that software change requests are resolved in a timely manner, and that the software configuration management plan is being enforced. A software configuration management plan itemizes the software products to be controlled, specifies the mechanisms of change control, reports the change status of the software products, and inventories the versions of the software products that are distributed to the various user sites.

In many organizations, the quality assurance group monitors change requests, prepares change summaries, performs regression testing of software modifications, provides configuration management, and retains and protects the physical media for software products. The quality assurance group should be represented on the change control board and should have sign-off authority for new releases of modified software products.

10.3.4 Organizing Maintenance Programmers


Software maintenance can be performed by the development team or by members of a separate organization. Members of the development team will be intimately familiar with the product; they will understand the design philosophy of the system and why it functions as it does. If members of the development team know they will be responsible for maintaining the product, it is likely that they will take great care to design and implement the system to enhance maintainability. On the other hand, because they already know the product well, they may be less careful in preparing the supporting documentation. The developers will also most likely be assigned to a new development project while retaining responsibility for maintenance of the released product; maintenance activities may then divert the developers from their new project, resulting in slippage of the new project schedule.

Maintenance by a separate group forces more attention to standards and to high-quality documentation, and it frees the development team to pursue other activities. In addition, a dedicated maintenance team can become highly expert on the various details of the product because its members devote their full attention to it.

10.4 Configuration Management


Configuration management is concerned with tracking and controlling the work products that constitute a software product. During product development, the various milestone reviews result in formal acceptance of work products such as the project plan, the requirements specifications, the test plan, the user's manual, the design documentation, and the source code. When a work product passes a milestone review, it is placed under configuration control, and any subsequent change to the work product can be made only with the formal agreement of both the customer and the developer organizations. During software maintenance, a configuration management plan and configuration management tools are required to track and control the various versions of the work products. Tracking and controlling multiple versions of a software product is a significant issue in software maintenance.


10.4.1 Configuration Management Databases


Software tools to support configuration management include configuration management databases and version control library systems. A configuration management database can provide information concerning product structure, current revision number, current status, and the change request history for each product version. The following are questions that a configuration management database should be able to answer:

1. How many versions of each product exist?
2. How do the versions differ?
3. Which versions of which products are distributed to which sites?
4. What documents are available for each version of each product?
5. What is the revision history of each component of each version of each product?
6. When will the next revision of a given component of a given version of a product be available?
7. What hardware configuration is required to operate a specific version of the product?
8. Which revisions constitute a specific version of the product?
9. Which versions are affected by a given component revision?
10. Which versions are affected by a specific error report?
11. Which errors were corrected in a specific revision?
12. How many errors were reported/fixed in the past month?
13. What are the causes of reported product failures?
14. Which components are functionally similar in which versions?
15. Is a given old version still used by some users?
16. What was the configuration of a given version on a given date?

10.4.2 Version Control Libraries

A version control library may be part of a configuration management database, or it may be used as a stand-alone tool. A version control library controls the various files that constitute the various versions of a software product. Entities in a version control library may include source code, relocatable object code, job control commands, data files, and supporting documents. Each entity in the library must carry an identity stamp that includes a version number, date, time, and programmer identity. Operations to be performed on a library include creation of the library, addition and deletion of components, preparation of back-up copies, editing of files, listing of summary statistics, and compilation/assembly of specified versions of the system. Effective configuration management systems integrate support tools such as configuration management databases and version control library systems within the managerial framework of change control.
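As a minimal sketch (with hypothetical version numbers, component names, and site names), the records below illustrate the kind of information such tools keep and answer two of the questions listed in 10.4.1: which sites run a given version, and which versions are affected by a given component revision.

# Toy configuration records: per version, its component revisions and distribution sites.
versions = {
    "2.1": {"components": {"parser": 7, "ui": 3}, "sites": ["site-a", "site-c"]},
    "2.2": {"components": {"parser": 8, "ui": 3}, "sites": ["site-b"]},
}

def sites_running(version):                       # question 3
    return versions[version]["sites"]

def versions_with_revision(component, revision):  # question 9
    return [v for v, info in versions.items()
            if info["components"].get(component) == revision]

print(sites_running("2.1"))                # ['site-a', 'site-c']
print(versions_with_revision("ui", 3))     # ['2.1', '2.2']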


10.5 Source-code Metrics


During the past few years, a great deal of effort has been expended on developing metrics to measure the complexity of source code. Most of these metrics incorporate easily computed properties of the source code, such as the number of operators and operands, the complexity of the control flow graph, the number of parameters and global variables in routines, and the number of levels and manner of interconnection of the call graph. The approach taken is to compute a number, or a set of numbers, that measures the complexity of the code.

10.5.1 Halstead's Effort Equation

Halstead developed a number of metrics that are computed from easily obtained properties of the source code. These properties include the total number of operators in the program, N1; the total number of operands in the program, N2; the number of unique operators in the program, n1; and the number of unique operands, n2. Halstead defines several quantities in terms of these numbers:

1. Halstead's estimator of program length: N = n1 log2 n1 + n2 log2 n2
2. Program volume: V = (N1 + N2) log2 (n1 + n2)
3. Program level: L = (2 * n2) / (n1 * N2)
4. Program effort: E = V / L = (n1 * N2 * (N1 + N2) * log2 (n1 + n2)) / (2 * n2)
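A minimal computational sketch of these quantities follows, assuming the operators and operands of a routine have already been tokenized and classified; the token lists below are made up for illustration.

import math

def halstead(operators, operands):
    N1, N2 = len(operators), len(operands)              # total occurrences
    n1, n2 = len(set(operators)), len(set(operands))    # unique symbols
    length = n1 * math.log2(n1) + n2 * math.log2(n2)    # estimated program length N
    volume = (N1 + N2) * math.log2(n1 + n2)             # program volume V
    level = (2 * n2) / (n1 * N2)                        # program level L
    effort = volume / level                             # program effort E = V / L
    return length, volume, level, effort

print(halstead(operators=["=", "+", "=", "*"], operands=["a", "b", "c", "a", "2"]))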

10.5.2 McCabe's Cyclomatic Metric


McCabe has observed that the difficulty of understanding a program is largely determined by the complexity of the control flow graph for that program. The cyclomatic number V of a connected graph G is the number of linearly independent paths in the graph. V(G) is computed as follows:

V(G) = E - n + 2p

where E is the number of edges, n is the number of nodes, and p is the number of connected components. McCabe observes that, for a structured program built from single entry, single exit constructs, V equals the number of predicates plus one. Also, provided G is planar, V is equal to the number of regions in G.
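A minimal sketch: given a control-flow graph as an edge list (the graph below, for an if-then-else followed by a while loop, is illustrative), the cyclomatic number follows directly from the formula.

# V(G) = E - n + 2p for a flow graph given as an edge list.
def cyclomatic(edges, components=1):
    nodes = {v for edge in edges for v in edge}
    return len(edges) - len(nodes) + 2 * components

edges = [("entry", "if"), ("if", "then"), ("if", "else"),
         ("then", "while"), ("else", "while"),
         ("while", "body"), ("body", "while"), ("while", "exit")]
print(cyclomatic(edges))   # 3 = number of predicates (if, while) plus one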


10.6 Other Maintenance Tools and Techniques


Because software maintenance is a microcosm of software development, the maintainer may analyze change requests; negotiate with users; formulate recommended solutions for the change control board, including cost estimates and priorities; debug software; provide temporary fixes and emergency repairs; redesign, modify, and revalidate software; update external documentation and master tapes; retain and protect physical media; perform configuration control; and provide liaison and user training for software product users.

Automated tools to support software maintenance include technical support tools and managerial support tools. Tools to support the technical aspects of software maintenance span the spectrum from analysis and design tools to implementation tools to debugging and testing tools. Automated tools for software maintenance include:

- Text editors
- Debugging aids
- Cross-reference generators
- Linkage editors
- Comparators
- Complexity metric calculators
- Version control systems
- Configuration management databases

