
Software Development Process 1. Introduction

Computers are becoming a key element in our daily lives. Slowly and surely they are taking over many of the functions that affect our lives critically. They now control all forms of monetary transactions, manufacturing, transportation, communication, defence systems, process control systems, and so on. In the near future, they will be found in our homes, controlling all forms of appliances. Left to themselves, they are harmless pieces of hardware. Load the right kind of software, and they can take you to the moon, both literally and figuratively. It is the software that gives life to them.

When computers play such a crucial role, one small flaw in either the hardware or the software can lead to catastrophic consequences. The sad part is that while there are well defined processes based on theoretical foundations to ensure the reliability of hardware, the same cannot be said about software. There is no theory for software development as yet. At the same time, it is mandatory that software always behaves in a predictable manner, even in unforeseen circumstances. Hence there is a need to control its development through a well defined and systematic process. The old fashioned 'code & test' approach will not do any more. It may be good enough for 'toy' problems, but in real life, software is expected to solve enormously complex problems. Some of the aspects of real life software projects are:

Team effort: Any large development effort requires the services of a team of specialists. For example, the team could consist of domain experts, software design experts, coding specialists, testing experts, hardware specialists, etc. Each group concentrates on a specific aspect of the problem and designs suitable solutions. However, no group can work in isolation; there will be constant interaction among team members.

Methodology: Broadly there are two types of methodologies, namely 'procedure oriented methodologies' and 'object oriented methodologies'. Though theoretically either of them could be used in any given problem situation, one of them should be chosen in advance.

Documentation: Clear and unambiguous documentation of the artifacts of the development process is critical for the success of a software project. Oral communication and 'back-of-the-envelope designs' are not sufficient. For example, documentation is necessary if client signoff is required at various stages of the process. Once developed, the software lives for a long time. During its life, it has to undergo a lot of changes. Without clear design specifications and well documented code, it will be impossible to make changes.

Planning: Since the development takes place against a client's requirements, it is imperative that the whole effort is well planned to meet the schedule and cost constraints.

Quality assurance: Clients expect value for money. In addition to meeting the client's requirements, the software should also meet additional quality constraints, for example in terms of performance, security, etc.

Lay user: Most of the time, these software packages will be used by non-computer-savvy users. Hence the software has to be highly robust.

Software tools: Documentation is important for the success of a software project, but it is a cumbersome task and many software practitioners balk at the prospect of documentation. Computer Aided Software Engineering (CASE) tools simplify the process of documentation.

Conformance to standards: We need to follow certain standards to ensure clear and unambiguous documentation, for example, the IEEE standards for requirements specifications, design, etc. Sometimes, clients may specify the standards to be used.

Reuse: The development effort can be optimised by reusing well-tested components, for example, mathematical libraries, graphical user interface toolkits, EJBs, etc.

Non-developer maintenance: Software lives for a long time. The development team may not be available to maintain the package. Some other team will have to ensure that the software continues to provide services.

Change management: Whenever a change has to be made, it is necessary to analyse its impact on various parts of the software. Imagine modifying the value of a global variable: every function that accesses the variable will be affected (see the sketch at the end of this list). Unless care is taken to minimise the impact, the software may not behave as expected.

Version control: Once changes are made to the software, it is important that the user gets the right version of the software. In case of failures, it should be possible to roll back to previous versions.

Subject to risks: Any large effort is subject to risks. The risks could be in terms of non-availability of skills, technology, inadequate resources, etc. It is necessary to constantly evaluate the risks and put in place risk mitigation measures.
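To see why the global-variable example is troublesome, consider the following small Python sketch (all names are invented for illustration). Three functions silently depend on one module-level setting, so changing that single line forces every one of them, and all their callers, to be re-analysed:

# change_impact.py - a minimal sketch of the change-management problem
TAX_RATE = 0.30  # a global setting; changing this one line affects everything below

def net_price(gross):
    # depends on TAX_RATE directly
    return gross * (1 + TAX_RATE)

def invoice_total(prices):
    # depends on TAX_RATE indirectly, through net_price()
    return sum(net_price(p) for p in prices)

def tax_payable(gross):
    # depends on TAX_RATE directly
    return gross * TAX_RATE

# Any change to TAX_RATE must be impact-analysed against all three functions.
print(invoice_total([100.0, 250.0]), tax_payable(100.0))

Change management tools and disciplined impact analysis exist precisely to make such ripple effects visible before the change is made.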

Software Development Process 2. Software Quality

The goal of any software development process is to produce high quality software. What is software quality? It has been variously defined as:

Fitness for purpose
Zero defects
Conformability & dependability
The ability of the software to meet the customer's stated and implied needs

Some of the important attributes that can be used to measure software quality are:

Correctness: The software should meet the customer's needs
Robustness: The software must always behave in an expected manner, even when unexpected inputs are given
Usability: Ease of use; software with a graphical user interface is considered more user-friendly than software without one
Portability: The ease with which software can be moved from one platform to another
Efficiency: Optimal resource (memory & execution time) utilization
Maintainability: The ease with which software can be modified

Reliability: The probability of the software giving consistent results over a period of time
Flexibility: The ease with which software can be adapted for use in different contexts
Security: Prevention of unauthorised access
Interoperability: The ability of the software to integrate with existing systems
Performance: The ability of the software to deliver outputs within given constraints like time, accuracy, and memory usage

Correctness is the most important attribute: every piece of software must be correct. The other attributes may be present in varying degrees. For example, it is an expensive proposition to make software 100% reliable, and that is not required in all contexts. If the software is going to be used in life-critical situations, then 100% reliability is mandatory. But, say, in a weather monitoring system, a little less reliability may be acceptable. However, the final decision lies with the client.

One should keep in mind that some of the above attributes conflict with each other. For example, portability and efficiency could conflict: to improve efficiency, one may resort to using system-dependent features, but that will affect portability. In the days when DOS ruled the world, it was possible to access the internal architecture directly to improve performance; porting such a program to any other platform would require enormous changes. So in practice there will always be a trade-off.
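These attributes become concrete once they are expressed as executable checks. As a small, hypothetical illustration of robustness, the routine below keeps behaving predictably on unexpected input instead of crashing:

# robustness_demo.py - a sketch of the 'robustness' quality attribute
def read_age(text):
    # Parse an age field; never crash, always return (value, error).
    try:
        age = int(str(text).strip())
    except (ValueError, TypeError):
        return None, "age must be a whole number"
    if not 0 <= age <= 130:
        return None, "age out of plausible range"
    return age, None

for raw in ["42", "forty-two", "", None, "-7"]:
    value, error = read_age(raw)
    print(repr(raw), "->", value if error is None else error)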

Software Development Process 3. What is a Process

3.2 Process: Definition and Phases

A process is a series of definable, repeatable, and measurable tasks leading to a useful result. The benefits of a well defined process are numerous:

It provides visibility into a project; visibility in turn aids timely mid-course corrections
It helps developers to weed out faults at the point of introduction, which avoids cascading of faults into later phases
It helps to organize workflow and outputs to maximize resource utilization
It defines everybody's roles and responsibilities clearly; individual productivity increases due to specialization, and at the same time the team's productivity increases due to coordination of activities

A good software development process should:

View software development as a value added business activity and not merely as a technical activity
Ensure that every product is checked to see if value addition has indeed taken place
Safeguard against loss of value once the product is complete
Provide management information for in-situ control of the process

To define such a process, the following steps need to be followed:

Identify the phases of development and the tasks to be carried out in each phase
Model the intra- and inter-phase transitions
Use techniques to carry out the tasks
Verify and validate each task and the results
Exercise process and project management skills

The words 'verify' and 'validate' need some clarification. Verify means to check if the task has been executed correctly, while validate means to check if the correct task has been executed. In the context of software, checking that an algorithm has been implemented correctly is verification, while checking that the result of the algorithm execution is the solution to the desired problem is validation.

The generic phases that are normally used in a software development process are:

Analysis: In this phase user needs are gathered and converted into software requirements. For example, if the user need is to generate the trajectory of a missile, the software requirement is to solve the governing equations. This phase should answer the question: what is to be done to meet user needs?

Design: This phase answers the question: how are the user needs to be met? With respect to the above example, design consists of deciding the algorithm to be used to solve the governing equations. The choice of the algorithm depends on design objectives like execution time, accuracy, etc. In this phase we determine the organisation of the various modules in the software system.

Construction: Coding is the main activity in this phase.

Testing: There are three categories of testing: unit testing, integration testing, and system testing. There are also two types of testing: black box testing and white box testing. Black box testing focuses on generating test cases based on the requirements; white box testing focuses on generating test cases based on the internal logic of the various modules.

Maintenance: Maintenance is the last stage of the software life cycle. After the product has been released, the maintenance phase keeps the software up to date with environment changes and changing user requirements. The earlier phases should be done in such a way that the product is easily maintainable: the design phase should plan a structure that can be easily altered, and the code should be written so that it is easily read, understood, and changed. Maintenance can only happen efficiently if the earlier phases are done properly.
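The distinction between black box and white box testing is easier to see in code. In the hypothetical sketch below, the first pair of test cases is derived purely from the stated requirement (black box), while the second pair is derived from the internal branching of the implementation (white box):

# testing_demo.py - black-box vs white-box test cases for one function

def classify_triangle(a, b, c):
    # Requirement: report whether three side lengths form an equilateral,
    # isosceles, or scalene triangle, or no triangle at all.
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box cases: derived from the requirement alone.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 4, 5) == "scalene"

# White-box cases: derived from the internal logic, one per branch,
# including the inequality test a requirements-only reader might miss.
assert classify_triangle(1, 2, 3) == "not a triangle"   # a + b == c branch
assert classify_triangle(5, 5, 8) == "isosceles"        # a == b branch
print("all test cases pass")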

Software Development Process 4. Software Life Cycle Models

In practice, two types of software life cycle models are used: the sequential model and the iterative model.

4.1 Waterfall Model

The sequential model, also known as the waterfall model, represents the development process as a sequence of steps (phases).

It requires that a phase be complete before the next phase is started. Because of the explicit recognition of phases and sequencing, it helps in contract finalisation with reference to delivery and payment schedules. In practice it is difficult to use this model as it is, because of the uncertainty in the software requirements. It is often difficult to envisage all the requirements a priori. If a mistake in understanding the requirements gets detected during the coding phase, then the whole process has to be started all over again. A working version of the software will not be available until late in the project life cycle. So, iteration both within a phase and across phases is a necessity.

Software Development Process 4. Software Life Cycle Models

4.2 Prototyping

Prototyping is discussed in the literature as a separate approach to software development. Prototyping, as the name suggests, requires that a working version of the software be built early in the project life. There are two types of prototyping models, namely the throw-away prototype and the evolutionary prototype.

The objective of the throw-away prototyping model is to understand the requirements and solution methodologies better. The essence is speed; hence an ad hoc and quick development approach, with no thought to quality, is resorted to. It is akin to 'code and test'. However, once the objective is met, the code is discarded and fresh development is started, ensuring that quality standards are met. Since the requirements are now well understood, one could use the sequential approach. This model suffers from wastage of effort, in the sense that the developed code is discarded because it does not meet the quality standards.

Evolutionary prototyping takes a different approach. The requirements are prioritised and the code is developed for the most important requirements first, always with an eye on quality. The software is continuously refined and expanded with feedback from the client. The chief advantage of prototyping is that the client gets a feel of the product early in the project life cycle. As can be seen, evolutionary prototyping is an iterative model. Such a model can be characterised as doing a little analysis, design, coding, and testing, and repeating the process till the product is complete.

Software Development Process 4. Software Life Cycle Models

4.3 Spiral Model

Barry Boehm has suggested another iterative model called the spiral model. It is more in the nature of a framework, which needs to be adapted to specific projects. It allows the best mix of other approaches and focuses on eliminating errors and unattractive alternatives early. An important feature of this model is its stress on risk analysis. Once the objectives, alternatives, and constraints for a phase are identified, the risks involved in carrying out the phase are evaluated, which is expected to result in a 'go, no go' decision. For evaluation purposes, one could use prototyping, simulations, etc. This model is best suited for projects which involve new technology development; risk analysis expertise is most critical for such projects.

Software Development Process 4. Software Life Cycle Models

4.4 ETVX Model

IBM introduced the ETVX model during the 1980s to document their processes. 'E' stands for the entry criteria which must be satisfied before a set of tasks can be performed, 'T' is the set of tasks to be performed, 'V' stands for the verification & validation process to ensure that the right tasks are performed, and 'X' stands for the exit criteria, i.e. the outputs of the tasks. If an activity fails the validation check, either corrective action is taken or rework is ordered. The model can be used in any development process: each phase in the process can be considered an activity and structured using the ETVX model. If required, the tasks can be subdivided and each subtask further structured using the ETVX model.
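The ETVX structure can be captured in a few lines of code. The following is only an illustrative sketch (all names are invented): an activity refuses to start until its entry criteria hold, and loops through rework until verification & validation pass.

# etvx_sketch.py - an illustrative model of one ETVX-structured activity

def run_etvx(entry_criteria, tasks, validate, rework, exit_outputs):
    if not all(check() for check in entry_criteria):      # E: entry criteria
        raise RuntimeError("entry criteria not satisfied")
    for task in tasks:                                    # T: the tasks
        task()
    while not validate():                                 # V: verification & validation
        rework()                                          # corrective action / rework
    return exit_outputs()                                 # X: exit criteria / outputs

# Example: a 'coding' phase structured with ETVX (all names invented).
done = {"design_approved": True, "code_written": False, "reviewed": False}
result = run_etvx(
    entry_criteria=[lambda: done["design_approved"]],
    tasks=[lambda: done.update(code_written=True)],
    validate=lambda: done["reviewed"],
    rework=lambda: done.update(reviewed=True),   # stands in for a code review
    exit_outputs=lambda: "reviewed source code",
)
print(result)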

Software Development Process 4. Software Life Cycle Models

4.5 Rational Unified Process Model

Among the modern process models, the Rational Unified Process (RUP) developed by Rational Corporation is noteworthy. It is an iterative model and captures many of the best practices of modern software development. RUP is explained more fully in the module OOAD with UML.

The Rational Unified Process is an iterative software development process framework created by the Rational Software Corporation, a division of IBM since 2003. RUP is not a single concrete prescriptive process, but rather an adaptable process framework, intended to be tailored by the development organizations and software project teams that will select the elements of the process that are appropriate for their needs. It is a specific implementation of the Unified Process.

Four Project Life Cycle Phases

RUP defines a project life cycle consisting of four phases. These phases allow the process to be presented at a high level in a similar way to how a 'waterfall'-styled project might be presented, although in essence the key to the process lies in the iterations of development that lie within all of the phases. Each phase has one key objective and a milestone at the end that denotes the objective being accomplished. The visualization of RUP phases and disciplines over time is referred to as the RUP hump chart.

I. Inception Phase

The primary objective is to scope the system adequately as a basis for validating initial costing and budgets. In this phase the business case is established, including the business context, success factors (expected revenue, market recognition, etc.), and financial forecast. To complement the business case, a basic use case model, project plan, initial risk assessment and project description (the core project requirements, constraints and key features) are generated. After these are completed, the project is checked against the following criteria:

1. Stakeholder concurrence on scope definition and cost/schedule estimates.
2. Requirements understanding as evidenced by the fidelity of the primary use cases.

3. Credibility of the cost/schedule estimates, priorities, risks, and development process.
4. Depth and breadth of any architectural prototype that was developed.
5. Establishment of a baseline by which to compare actual expenditures versus planned expenditures.

If the project does not pass this milestone, called the Lifecycle Objective Milestone, it can either be cancelled or repeated after being redesigned to better meet the criteria.

II. Elaboration Phase

The primary objective is to mitigate the key risk items identified by analysis up to the end of this phase. The elaboration phase is where the project starts to take shape. In this phase the problem domain analysis is made and the architecture of the project gets its basic form. The outcome of the elaboration phase is:

1. A use-case model in which the use cases and the actors have been identified and most of the use-case descriptions are developed. The use-case model should be 80% complete.
2. A description of the software architecture in a software system development process.
3. An executable architecture that realizes architecturally significant use cases.
4. A revised business case and risk list, and a development plan for the overall project.
5. Prototypes that demonstrably mitigate each identified technical risk.
6. A preliminary user manual (optional).

This phase must pass the Lifecycle Architecture Milestone criteria by answering the following questions:

* Is the vision of the product stable?
* Is the architecture stable?
* Does the executable demonstration indicate that major risk elements are addressed and resolved?
* Is the construction phase plan sufficiently detailed and accurate?
* Do all stakeholders agree that the current vision can be achieved using the current plan in the context of the current architecture?
* Is the actual vs. planned resource expenditure acceptable?

If the project cannot pass this milestone, there is still time for it to be cancelled or redesigned. After leaving this phase, however, the project transitions into a high-risk operation where changes are much more difficult and detrimental when made. The key domain analysis for the elaboration phase is the system architecture.

III. Construction Phase

The primary objective is to build the software system. In this phase, the main focus is on the development of components and other features of the system. This is the phase when the bulk of the coding takes place. In larger projects, several construction iterations may be developed in an effort to divide the use cases into manageable segments that produce demonstrable prototypes. This phase produces the first external release of the software. Its conclusion is marked by the Initial Operational Capability Milestone.

IV. Transition Phase

The primary objective is to 'transition' the system from development into production, making it available to and understood by the end user. The activities of this phase include training the end users and maintainers and beta testing the system to validate it against the end users' expectations. The product is also checked against the quality level set in the Inception phase. If all objectives are met, the Product Release Milestone is reached and the development cycle is finished.

Six Best Practices

The Six Best Practices described in the Rational Unified Process form a paradigm in software engineering that lists six ideas to follow when designing any software project to minimize faults and increase productivity.

I. Develop iteratively: It is best to know all requirements in advance; however, this is often not the case. Several software development processes exist that deal with providing solutions on how to minimize cost in terms of development phases.

II. Manage requirements: Always keep in mind the requirements set by the users.

III. Use components: Breaking down an advanced project is not only suggested but in fact unavoidable. This promotes the ability to test individual components before they are integrated into a larger system. Also, code reuse is a big plus and can be accomplished more easily through the use of object-oriented programming.

IV. Model visually: Use diagrams to represent all major components, users, and their interaction. UML, short for Unified Modeling Language, is one tool that can be used to make this task more feasible.

V. Verify quality: Always make testing a major part of the project at every point of time. Testing becomes heavier as the project progresses but should be a constant factor in any software product creation.

VI. Control changes: Many projects are created by many teams, sometimes in various locations, and different platforms may be used. As a result it is essential to make sure that changes made to a system are synchronized and verified constantly.

Software Development Process 4. Software Life Cycle Models

4.6 Agile Methodologies

All the methodologies described before are based on the premise that any software development process should be predictable and repeatable. One of the criticisms of these methodologies is that they place more emphasis on following procedures and preparing documentation; they are considered heavyweight or rigorous. They are also criticised for their excessive emphasis on structure. There is a movement called the Agile Software Movement questioning this premise. The proponents argue that, software development being essentially a human activity, there will always be variations in processes and inputs, and the model should be flexible enough to handle the variations. For example, the entire set of software requirements cannot be known at the beginning of the project, nor do they remain static. If the model cannot handle this dynamism, then there can be a lot of wasted effort, or the final product may not meet the customer's needs.

Hence the agile methodologies advocate the principle "build short, build often". That is, the given project is broken up into subprojects, and each subproject is developed and integrated into the already delivered system. This way the customer gets continuous delivery of useful and usable systems. The subprojects are chosen so that they have short delivery cycles, usually of the order of 3 to 4 weeks. The development team also gets continuous feedback.

A number of agile methodologies have been proposed. The more popular among them are SCRUM, Dynamic Systems Development Method (DSDM), Crystal Methods, Feature-Driven Development, Lean Development (LD), and Extreme Programming (XP). A short description of each of these methods follows:

SCRUM: It is a project management framework. It divides the development into short cycles called sprint cycles, in each of which a specified set of features is delivered. It advocates daily team meetings for coordination and integration.

DYNAMIC SYSTEMS DEVELOPMENT METHOD (DSDM): It is characterised by nine principles:

1. Active user involvement
2. Team empowerment
3. Frequent delivery of products
4. Fitness for business purpose
5. Iterative and incremental development
6. All changes during development are reversible
7. Baselining of requirements at a high level
8. Integrated testing
9. Collaboration and cooperation between stakeholders

CRYSTAL METHODOLOGIES: These are a set of configurable methodologies that focus on the people aspects of development. The configuration is carried out based on project size, criticality, and objectives. Some of the names used for the methodologies are Clear, Yellow, Orange, Orange Web, Red, etc.

FEATURE DRIVEN DEVELOPMENT (FDD): It is a short-iteration framework for software development. It focuses on building an object model, building a feature list, planning by feature, designing by feature, and building by feature.

LEAN DEVELOPMENT (LD): This methodology is derived from the principles of lean production, the restructuring of the Japanese automobile manufacturing industry that occurred in the 1980s. It is based on the following principles of lean thinking: eliminate waste, amplify learning, decide as late as possible, deliver as fast as possible, empower the team, build integrity in, and see the whole.

EXTREME PROGRAMMING (XP): This methodology is probably the most popular among the agile methodologies. It is based on three important principles, viz., test first, continuous refactoring, and pair programming. One of the important concepts popularised by XP is pair programming: code is always developed in pairs, and while one person is keying in the code, the other reviews it. The paper by Laurie Williams et al. demonstrates the efficacy of pair programming. The site agilealliance.com is dedicated to promoting agile software development methodologies.

Software Development Process 5. How to Choose a Process

Among the plethora of available processes, how can we choose one? There is no single answer to this question. Probably the best way to attack this problem is to look at the software requirements. If they are stable and well understood, then the waterfall model may be sufficient. If they are stable but not clear, then throw-away prototyping can be used. Where the requirements are changing, evolutionary prototyping is better. If the requirements are coupled with underlying business processes that are themselves going through change, then a model based on Boehm's spiral model, like the Rational Unified Process, should be used. In these days of dynamic business environments, where 'time to market' is critical and project size is relatively small, an agile process should be chosen. These are only guidelines; many organisations choose a model and adapt it to their business requirements. For example, some organisations use the waterfall model modified to include iterations within phases.

Software Development Process 6. Conclusions

The most important takeaway from this module is that software development should follow a disciplined process. The choice of the process should depend upon the stability of the requirements, the completeness of the requirements, the underlying business processes, the organisational structure, and the prevailing business environment.

2. Analysis

Course Contents
1. Introduction
2. Understanding Requirements
2.1 Functional and Non-Functional Requirements
2.2 Other Classifications

Analysis 1. Introduction

The objectives of this module are:

To establish the importance / relevance of requirement specifications in software development
To bring out the problems involved in specifying requirements
To illustrate the use of modelling techniques to minimise problems in specifying requirements

Requirements can be defined as follows:

A condition or capability needed by a user to solve a problem or achieve an objective.


A condition or capability that must be met or possessed by a system to satisfy a contract, standard, specification, or other formally imposed document.

At a high level, requirements can be classified as user/client requirements and software requirements. Client requirements are usually stated in terms of business needs; software requirements specify what the software must do to meet the business needs. For example, a stores manager might state his requirements in terms of efficiency in stores management, while a bank manager might state his requirements in terms of the time taken to service his customers. It is the analyst's job to understand these requirements and provide an appropriate solution. To be able to do this, the analyst must understand the client's business domain: who the stakeholders are, how they affect the system, what the constraints are, what the alterables are, etc. The analyst should not blindly assume that only a software solution will solve the client's problem; he should have a broader vision. Sometimes re-engineering of the business processes may be required to improve efficiency, and that may be all that is required. After all this, if it is found that a software solution will add value, then a detailed statement of what the software must do to meet the client's needs should be prepared. This document is called the Software Requirements Specification (SRS) document.

Stating and understanding requirements is not an easy task. Let us look at a few examples:

"The counter value is picked up from the last record"

In the above statement, the word 'last' is ambiguous. It could mean the last accessed record, which could be anywhere in a random access file, or it could be physically the last record in the file.
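The two readings of 'last' lead to two different programs, as this hypothetical sketch shows:

# ambiguity_demo.py - the two readings of "the last record" (data invented)
records = [{"id": 1, "counter": 10}, {"id": 2, "counter": 20}, {"id": 3, "counter": 30}]
last_accessed_id = 2   # suppose record 2 was the most recently accessed

# Reading 1: the last record accessed (could be anywhere in the file).
counter_a = next(r for r in records if r["id"] == last_accessed_id)["counter"]

# Reading 2: physically the last record in the file.
counter_b = records[-1]["counter"]

print(counter_a, counter_b)   # 20 vs 30 - same requirement, different results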

"Calculate the inverse of a square matrix 'M' of size 'n' such that LM=ML=In where 'L' is the inverse matrix and 'In' is the identity matrix of size 'n' " This statement though appears to be complete, is missing on the type of the matrix elements. Are they integers, real numbers, or complex numbers. Depending on the answer to this question, the algorithm will be different. "The software should be highly user friendly" How does one determine, whether this requirement is satisfied or not. "The output of the program shall usually be given within 10 seconds" What are the exceptions to the 'usual 10 seconds' requirement? The statement of requirements or SRS should possess the following properties: All requirements must be correct. There should be no factual errors All requirements should have one interpretation only. We have seen a few examples of ambiguous statements above. The SRS should be complete in all respects. It is difficult to achieve this objective. Many times clients change the requirements as the development progresses or new requirements are added. The Agile development methodologies are specifically designed to take this factor in to account. They partition the requirements in to subsets called scenarios and each scenario is implemented separately. However, each scenario should be complete. All requirements must be verifiable, that is, it should be possible to verify if a requirement is met or not. Words like 'highly', 'usually', should not be used. All requirements must be consistent and non-conflicting As we have stated earlier, requirements do change. So the format of the SRS should be such that the changes can be easily incorporated. Analysis 2. Understanding Requirements Course Contents | Prev : Next

Analysis 2. Understanding Requirements

2.1 Functional and Non-Functional Requirements

Requirements can be classified into two types, namely functional requirements and non-functional requirements. Functional requirements specify what the system should do. Examples are:

Calculate the compound interest at the rate of 14% per annum on a fixed deposit for a period of three years
Calculate tax at the rate of 30% on an annual income equal to and above Rs.2,00,000 but less than Rs.3,00,000
Invert a square matrix of real numbers (maximum size 100 x 100)
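Functional requirements of this kind translate directly into testable code. A sketch of the first two follows; note that it has to assume details the statements leave open (annual compounding, and that the 30% rate applies to the entire income in the slab), which is itself a lesson in requirements writing:

# functional_reqs.py - executable form of two functional requirements.
# Assumptions (the SRS would have to confirm them): interest is compounded
# annually, and the 30% rate applies to the entire annual income in the slab.

def compound_interest(principal, rate=0.14, years=3):
    # Interest earned on a fixed deposit, compounded annually.
    return principal * ((1 + rate) ** years - 1)

def tax(income):
    # 30% tax for 200,000 <= income < 300,000; other slabs out of scope here.
    if 200_000 <= income < 300_000:
        return 0.30 * income
    raise ValueError("income outside the slab covered by this requirement")

assert abs(compound_interest(10_000) - 4815.44) < 0.01
assert tax(250_000) == 75_000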

Non-functional requirements specify the overall quality attributes the system must satisfy. The following is a sample list of quality attributes:

Portability
Reliability
Performance
Testability
Modifiability
Security
Presentation
Reusability
Understandability
Acceptance criteria
Interoperability

Some examples of non-functional requirements are:

The number of significant digits to which accuracy should be maintained in all numerical calculations is 10
The response time of the system should always be less than 5 seconds
The software should be developed using the C language on a UNIX based system
A book can be deleted from the Library Management System by the Database Administrator only
The matrix diagonalisation routine should zero out all off-diagonal elements which are equal to or less than 10^-3
Experienced officers should be able to use all the system functions after a total training of two hours. After this training, the average number of errors made by experienced officers should not exceed two per day
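Quantified non-functional requirements can be checked mechanically too. A sketch of measuring the response-time requirement (the operation shown is only a stand-in for a real system function):

# nfr_check.py - measuring the 'response time < 5 seconds' requirement
import time

def respond():                      # stand-in for a real system function
    return sum(i * i for i in range(1_000_000))

start = time.perf_counter()
respond()
elapsed = time.perf_counter() - start
assert elapsed < 5.0, f"response-time requirement violated: {elapsed:.2f}s"
print(f"responded in {elapsed:.3f}s (limit 5s)")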

Analysis 2. Understanding Requirements 2.2 Other Classifications

Requirements can also be classified into the following categories:

Satisfiability
Criticality
Stability
User categories

Satisfiability: There are three types of satisfiability, namely normal, expected, and exciting. Normal requirements are specific statements of user needs. The user satisfaction level is directly proportional to the extent to which these requirements are satisfied by the system. Expected requirements may not be stated by the users, but the developer is expected to meet them. If these requirements are met, the user satisfaction level may not increase, but if they are not met, users may be thoroughly dissatisfied. They are very important from the developer's point of view. Exciting requirements are not only unstated by the users, the users do not even expect them; but if the developer provides for them in the system, the user satisfaction level will be very high. The trend over the years has been that exciting requirements often become normal requirements, and some normal requirements become expected requirements. For example, as the story goes, the on-line help feature was first introduced in the UNIX system in the form of man pages. At that time, it was an exciting feature. Later, other users started demanding it as part of their systems. Nowadays, users do not ask for it, but the developer is expected to provide it.

Criticality: This is a form of prioritising the requirements. They can be classified as mandatory, desirable, and non-essential. This classification should be done in consultation with the users and helps in determining the focus in an iterative development model.

Stability: Requirements can also be categorised as stable and non-stable. Stable requirements don't change often, or at least the time period of change is very long. Some requirements may change often. For example, if business process re-engineering is going on alongside the development, then the corresponding requirements may change till the process stabilises.

User categories: As was stated in the introduction, there will be many stakeholders in a system. Broadly they are of two kinds: those who dictate the policies of the system and those who utilise its services. All of them use the system. There can be further subdivisions among these classes depending on the information needs and services required. It is important that all stakeholders are identified and their requirements are captured.

3. Design

Course Contents
1. Introduction to Design
1.1 Introduction
1.2 Qualities of a good design
1.3 Design Constraints
1.4 Popular Design Methods
2. High Level Design Activities
2.1 Architectural Design
2.2 Interface Design

Design 1. Introduction to Design 1.1 Introduction

Design is an iterative process of transforming the requirements specification into a design specification. Consider an example where Mrs. & Mr. XYZ want a new house. Their requirements include:

A room for two children to play and sleep

A room for Mrs. & Mr. XYZ to sleep
A room for cooking
A room for dining
A room for general activities

and so on. An architect takes these requirements and designs a house. The architectural design specifies a particular solution; in fact, the architect may produce several designs to meet the requirements. For example, one design may maximize the children's room, while another minimizes it to allow a larger living room. In addition, the style of the proposed houses may differ: traditional, modern, two-storied. All of the proposed designs solve the problem, and there may not be a "best" design.

Software design can be viewed in the same way. We use the requirements specification to define the problem and transform this into a solution that satisfies all the requirements in the specification.

Some definitions of design:

"Devising artifacts to attain goals" [H.A. Simon, 1981].
"The process of defining the architecture, components, interfaces and other characteristics of a system or component" [IEEE 610.12].
"The process of applying various techniques and principles for the purpose of defining a device, a process or a system in sufficient detail to permit its physical realization" [Webster Dictionary].

Without design, a system will be:

Unmanageable, since there is no concrete output until coding; therefore it is difficult to monitor and control
Inflexible, since planning for long term changes was not given due emphasis
Unmaintainable, since standards and guidelines for design and construction are not used; there is no reusability consideration; poor design may result in tightly coupled modules with low cohesion, and loss of data integrity may also result
Inefficient, due to possible data redundancy and untuned code
Not portable to various hardware / software platforms

Design is different from programming. Design brings out a representation for the program, not the program or any component of it. The difference is tabulated below.

Design: abstractions of operations and data ("what to do"); establishes interfaces; chooses between design alternatives; makes trade-offs with respect to constraints, etc.
Programming: devises algorithms and data representations; considers run-time environments; chooses functions and the syntax of the language; devises the representation of the program; constructs the program.

Design 1. Introduction to Design 1.2 Qualities of a Good Design

Functional: This is a very basic quality attribute. Any design solution should work and should be constructable.

Efficiency: This can be measured through run time (time taken to complete the whole processing task or transaction), response time (time taken to respond to a request for information), throughput (number of transactions per unit time), memory usage, size of the executable, size of the source, etc.

Flexibility: Another basic and important attribute. The very purpose of doing design activities is to build systems that are modifiable in the event of any changes in the requirements.

Portability & Security: These are to be addressed during design, so that such needs are not "hard-coded" later.

Reliability: This tells the goodness of the design - how successfully the system works. It is more important for real-time, mission-critical, and on-line systems.

Economy: This can be achieved by identifying re-usable components.

Usability: Usability is in terms of how the interfaces are designed (clarity, aesthetics, directness, forgiveness, user control, ergonomics, etc.) and how much time it takes to master the system.

Design 1. Introduction to Design 1.3 Design Constraints

Typical design constraints are:

Budget
Time
Integration with other systems
Skills
Standards
Hardware and software platforms

Budget and time cannot be changed. The problems with respect to integrating with other systems (typically, the client may ask for a proprietary database that he is already using) have to be studied and solutions found. Skills are alterable (for example, by arranging appropriate training for the team). Mutually agreed standards have to be adhered to. Hardware and software platforms may remain a constraint.

The designer tries to answer the "how" part of the "what" raised during the requirements phase. As such, the solution proposed should be contemporary, and to that extent a designer should know what is happening in technology. Large central computer systems with proprietary architectures are being replaced by distributed networks of low cost computers in an open systems environment. We are moving away from conventional software development based on hand generation of code (COBOL, C) to integrated programming environments. Typical applications today are internet based.

Design 1. Introduction to Design 1.4 Popular Design Methods

Popular design methods (Wasserman, 1995) include:

1. Modular decomposition
Based on assigning functions to components. It starts from the functions that are to be implemented and explains how each component will be organized and related to other components.

2. Event-oriented decomposition
Based on the events that the system must handle. It starts with cataloging the various states and then describes how transformations take place.

3. Object-oriented design
Based on objects and their interrelationships. It starts with object types and then explores object attributes and actions.

Structured Design uses modular decomposition.
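As a toy illustration of modular decomposition (all module and function names invented), a 'print payslips' function might be assigned to components like this, each with a single responsibility:

# payroll_sketch.py - modular decomposition of a toy 'print payslips' function

def read_employees():
    # Input module: would normally read a file or database.
    return [{"name": "A", "hours": 160, "rate": 20.0},
            {"name": "B", "hours": 150, "rate": 22.5}]

def compute_pay(employee):
    # Processing module: pure calculation, easy to test in isolation.
    return employee["hours"] * employee["rate"]

def format_payslip(employee, pay):
    # Output module: presentation only.
    return f"{employee['name']}: {pay:.2f}"

def main():
    # Control module: wires the components together.
    for e in read_employees():
        print(format_payslip(e, compute_pay(e)))

if __name__ == "__main__":
    main()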

Design 2. High Level Design Activities

Broadly, high level design includes architectural design, interface design, and data design.

2.1 Architectural Design

Shaw and Garlan (1996) suggest that software architecture is the first step in producing a software design. Architectural design associates the system capabilities with the system components (like modules) that will implement them. The architecture of a system is a comprehensive framework that describes its form and structure - its components and how they interact together. Generally, a complete architecture plan addresses the functions that the system provides, the hardware and network that are used to develop and operate it, and the software that is used to develop and operate it. An architectural style involves its components, connectors, and constraints on combining components. Shaw and Garlan (1996) describe seven architectural styles. Commonly used styles include:

Pipes and filters
Call-and-return systems (main program / subprogram architecture)
Object-oriented systems
Layered systems
Data-centered systems
Distributed systems (client/server architecture)

In pipes and filters, each component (filter) reads streams of data on its inputs and produces streams of data on its output. Pipes are the connectors that transmit output from one filter to another, e.g. programs written in the Unix shell (a sketch follows below).

In call-and-return systems, the program structure decomposes function into a control hierarchy where a "main" program invokes (via procedure calls) a number of program components, which in turn may invoke still other components. A structure chart, for example, is a hierarchical representation of the main program and subprograms.

In object-oriented systems, a component is an encapsulation of data and the operations that must be applied to manipulate the data. Communication and coordination between components is accomplished via message calls.
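Here is the promised pipes-and-filters sketch. Python generators act as the filters and generator chaining plays the role of the pipes, mirroring a Unix pipeline (the data is invented):

# pipes_and_filters.py - the pipes-and-filters style with Python generators.
# Each filter reads a stream on its input and produces a stream on its output;
# chaining the generators plays the role of the pipes.

lines = ["INFO start", "ERROR disk full", "INFO retry", "ERROR timeout"]

def grep(pattern, stream):          # filter 1: keep matching lines
    return (line for line in stream if pattern in line)

def upper(stream):                  # filter 2: transform each line
    return (line.upper() for line in stream)

def count(stream):                  # sink: consume the stream
    return sum(1 for _ in stream)

# Roughly equivalent to: cat lines | grep ERROR | tr a-z A-Z | wc -l
print(count(upper(grep("ERROR", lines))))   # -> 2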

In layered systems, each layer provides service to the one outside it and acts as a client to the layer inside it. The layers are arranged like an "onion ring", e.g. the ISO OSI reference model.

Data-centered systems use repositories. A repository includes a central data structure representing the current state, and a collection of independent components that operate on the central data store. In a traditional database, the transactions, in the form of an input stream, trigger process execution.

A popular form of distributed system architecture is client/server, where a server system responds to requests for actions / services made by client systems. Clients access the server by remote procedure call.

The following issues are also addressed during architecture design:

Security
Data processing: centralized / distributed / stand-alone
Audit trails
Restart / recovery
User interface
Other software interfaces

Design 2. High Level Design Activities 2.2 User Interface Design

The design of user interfaces draws heavily on the experience of the designer. Pressman (refer Chapter 15) presents a set of Human-Computer Interaction (HCI) design guidelines that will result in a "friendly," efficient interface. The three categories of HCI design guidelines are:

1. General interaction
2. Information display
3. Data entry

1. General Interaction

Guidelines for general interaction often cross the boundary into information display, data entry and overall system control. They are, therefore, all-encompassing and are ignored at great risk. The following guidelines focus on general interaction.

Be consistent. Use a consistent format for menu selection, command input, data display and the myriad other functions that occur in an HCI.

Offer meaningful feedback. Provide the user with visual and auditory feedback to ensure that two-way communication (between user and interface) is established.

Ask for verification of any nontrivial destructive action. If a user requests the deletion of a file, indicates that substantial information is to be overwritten, or asks for the termination of a program, an "Are you sure ..." message should appear.

Permit easy reversal of most actions. UNDO or REVERSE functions have saved tens of thousands of end users from millions of hours of frustration. Reversal should be available in every interactive application.

Reduce the amount of information that must be memorized between actions. The user should not be expected to remember a list of numbers or names so that he or she can re-use them in a subsequent function. Memory load should be minimized.

Seek efficiency in dialog, motion, and thought. Keystrokes should be minimized, the distance a mouse must travel between picks should be considered in designing screen layout, and the user should rarely encounter a situation where he or she asks, "Now what does this mean?"

Forgive mistakes. The system should protect itself from errors that might cause it to fail (defensive programming).

Categorize activities by function and organize screen geography accordingly. One of the key benefits of the pull-down menu is the ability to organize commands by type. In essence, the designer should strive for "cohesive" placement of commands and actions.

Provide help facilities that are context sensitive.

Use simple action verbs or short verb phrases to name commands. A lengthy command name is more difficult to recognize and recall. It may also take up unnecessary space in menu lists.

2. Information Display

If information presented by the HCI is incomplete, ambiguous or unintelligible, the application will fail to satisfy the needs of the user. Information is "displayed" in many different ways: with text, pictures and sound; by placement, motion and size; using color, resolution, and even omission. The following guidelines focus on information display.

Display only information that is relevant to the current context.

The user should not have to wade through extraneous data, menus and graphics to obtain information relevant to a specific system function.

Don't bury the user with data; use a presentation format that enables rapid assimilation of information. Graphs or charts should replace voluminous tables.

Use consistent labels, standard abbreviations, and predictable colors. The meaning of a display should be obvious without reference to some outside source of information.

Allow the user to maintain visual context. If computer graphics displays are scaled up and down, the original image should be displayed constantly (in reduced form at the corner of the display) so that the user understands the relative location of the portion of the image that is currently being viewed.

Produce meaningful error messages.

Use upper and lower case, indentation, and text grouping to aid in understanding. Much of the information imparted by an HCI is textual, yet the layout and form of the text have a significant impact on the ease with which information is assimilated by the user.

Use windows to compartmentalize different types of information. Windows enable the user to "keep" many different types of information within easy reach.

Use "analog" displays to represent information that is more easily assimilated with this form of representation. For example, a display of holding tank pressure in an oil refinery would have little impact if a numeric representation were used. However, if a thermometer-like display were used, vertical motion and color changes could indicate dangerous pressure conditions. This would provide the user with both absolute and relative information.

Consider the available geography of the display screen and use it efficiently. When multiple windows are to be used, space should be available to show at least some portion of each. In addition, screen size (a system engineering issue) should be selected to accommodate the type of application that is to be implemented.

3. Data Input

Much of the user's time is spent picking commands, typing data and otherwise providing system input. In many applications, the keyboard remains the primary input medium, but the mouse, digitizer and even voice recognition systems are rapidly becoming effective alternatives. The following guidelines focus on data input.

Minimize the number of input actions required of the user.

Reduce the amount of typing that is required. This can be accomplished by using the mouse to select from predefined sets of input; using a "sliding scale" to specify input data across a range of values; and using "macros" that enable a single keystroke to be transformed into a more complex collection of input data.

Maintain consistency between information display and data input. The visual characteristics of the display (e.g., text size, color, placement) should be carried over to the input domain.

Allow the user to customize the input. An expert user might decide to create custom commands or dispense with some types of warning messages and action verification. The HCI should allow this.

Interaction should be flexible but also tuned to the user's preferred mode of input. The user model will assist in determining which mode of input is preferred. A clerical worker might be very happy with keyboard input, while a manager might be more comfortable using a point-and-pick device such as a mouse.

Deactivate commands that are inappropriate in the context of current actions. This protects the user from attempting some action that could result in an error.

Let the user control the interactive flow. The user should be able to skip unnecessary actions, change the order of required actions (when possible in the context of an application), and recover from error conditions without exiting from the program.

Provide help to assist with all input actions.

Eliminate "Mickey Mouse" input. Do not make the user specify units for engineering input (unless there may be ambiguity). Do not make the user type .00 for whole-number dollar amounts; provide default values whenever possible; and never make the user enter information that can be acquired automatically or computed within the program.

4. Reviews, Walkthroughs and Inspections

Course Contents
1. Formal Definitions
2. Importance of Static Testing
3. Reviews
4. Walkthrough
5. Inspection
6. Comparison
7. Advantages

8. Metrics for Inspection
9. Common Issues

Reviews, Walkthroughs & Inspections 1. Formal Definitions

Quality Control (QC): A set of techniques designed to verify and validate the quality of work products and observe whether requirements are met.

Software Element: Every deliverable or in-process document produced or acquired during the Software Development Life Cycle (SDLC) is a software element.

Verification and validation techniques:
Verification: Is the task done correctly?
Validation: Is the correct task done?

Static Testing: V&V performed on a software element without executing it.
Dynamic Testing: V&V performed by executing the software with pre-defined test cases.

Reviews, Walkthroughs & Inspections 2. Importance of Static Testing

Why static testing? The benefit is clear once you think about it: if you can find a problem in the requirements before it turns into a problem in the system, that will save time and money. The following statistics are mind-boggling:

M.E. Fagan, "Design and Code Inspections to Reduce Errors in Program Development", IBM Systems Journal, March 1976.
Systems product: 67% of the total defects found during development were found by inspection.
Applications product: 82% of all defects were found during inspection of design and code.

A.F. Ackerman, L. Buchwald, and F. Lewski, "Software Inspections: An Effective Verification Process", IEEE Software, May 1989.
Operating system: Inspection decreased the cost of detecting a fault by 85%.

Marilyn Bush, "Improving Software Quality: The Use of Formal Inspections at the Jet Propulsion Laboratory", Proceedings of the 12th International Conference on Software Engineering, pages 196-199, IEEE Computer Society Press, Nice, France, March 1990.
Jet Propulsion Laboratory project: Every two-hour inspection session resulted, on average, in a saving of $25,000.

The following three 'stories' should communicate the importance of static testing:

1. When my daughter called me a 'cheat'
2. My failure as a programmer
3. Loss of the "Mars Climate Orbiter"

A Few More Software Failures - Lessons for Others

The diagram of Fagan (Advances in Inspections, IEEE Transactions on Software Engineering) captures the importance of static testing. The lesson learned can be summarized in one sentence: spend a little extra earlier, or spend much more later.

The statistics, the above stories, and Fagan's diagram all emphasize the need for static testing. It is appropriate to state that not all static testing involves people sitting at a table looking at a document; sometimes automated tools can help. For C programmers, the lint program can help find potential bugs in programs. Java programmers can use tools like the JTest product to check their programs against a coding standard.

When should static testing start? To get value from static testing, we have to start at the right time. For example, reviewing the requirements after the programmers have finished coding the entire system may help testers design test cases; however, the significant return on the static testing investment is no longer available, as testers can't prevent bugs in code that's already written. For optimal returns, static testing should happen as soon as possible after the item to be tested has been created, while the assumptions and inspirations remain fresh in the creator's mind and none of the errors in the item have caused negative consequences in downstream processes.

Effective reviews involve the right people. Business domain experts must attend requirements reviews, system architects must attend design reviews, and expert programmers must attend code reviews. As testers, we can also be valuable participants, because we're good at spotting inconsistencies, vagueness, missing details, and the like. However, testers who attend review meetings do need to bring sufficient knowledge of the business domain, system architecture, and programming to each review. And everyone who attends a review, walkthrough or inspection should understand the basic ground rules of such events. The diagram of Sommerville (Software Engineering, 6th Edition) shows where static testing starts.

Reviews, Walkthroughs & Inspections 3. Reviews

IEEE classifies static testing under three broad categories:

Reviews
Walkthroughs
Inspections

What is a review? A meeting at which a software element is presented to project personnel, managers, users, customers or other interested parties for comment or approval. The software element can be a project plan, URS, SRS, design document, code, test plan, or user manual.

What are the objectives of reviews? To ensure that:

The software element conforms to its specifications
The development of the software element is being done as per the plans, standards, and guidelines applicable for the project
Changes to the software element are properly implemented and affect only those system areas identified by the change specification

Reviews - Input:

A statement of objectives for the technical review
The software element being examined
The software project management plan
The current anomalies or issues list for the software product
Documented review procedures
Earlier review reports, where applicable
A checklist of defects
Review team members should receive the review materials in advance and come prepared for the meeting

Reviews - Meeting:

Examine the software element for adherence to specifications and standards
Check that changes to the software element are properly implemented and affect only the specified areas
Record all deviations
Assign responsibility for getting the issues resolved
Review sessions are not expected to find solutions to the deviations
The areas of major concern, the status of previous feedback, and the review days utilized are also recorded
The review leader shall verify, later, that the action items assigned in the meeting are closed

Reviews - Outputs:
- List of review findings
- List of resolved and unresolved issues found during the later re-verification

Reviews, Walkthroughs & Inspections 4. Walkthrough

Walkthrough - Definition: A technique in which a designer or programmer leads the members of the development team and other interested parties through a segment of documentation or code, and the participants ask questions and make comments about possible errors, violations of standards and other problems.

Walkthrough - Objectives:
- To find defects
- To consider alternative implementations
- To ensure compliance to standards and specifications

Walkthrough - Input:
- A statement of objectives
- The software element for examination
- Standards for the development of the software
- Distribution of materials to the team members before the meeting
- Checklist for defects
Team members shall examine the materials and come prepared for the meeting.

Walkthrough - Meeting:
- The author presents the software element
- Members ask questions and raise issues regarding deviations
- Concerns, perceived omissions or deviations from the specifications are discussed
- The discussions are documented; comments and decisions are recorded
The walkthrough leader shall verify, later, that the action items assigned in the meeting are closed.

Walkthrough - Outputs:
- List of walkthrough findings
- List of resolved and unresolved issues found during the later re-verification

Reviews, Walkthroughs & Inspections 5. Inspection

Inspection - Definition: A visual examination of a software element to detect errors, violations of development standards and other problems. An inspection is very rigorous and the inspectors are trained in inspection techniques. Determination of remedial or investigative action for a defect is a mandatory element of a software inspection, although the solution should not be determined in the inspection meeting.

Inspection - Objectives:
- To verify that the software element satisfies the specifications and conforms to applicable standards
- To identify deviations
- To collect software engineering data, such as defect and effort data
- To improve the checklists, as a spin-off

Inspection - Input:
- A statement of objectives for the inspection
- The software element, ready for inspection
- Documented inspection procedure
- Current defect list
- A checklist of possible defects
- All applicable standards and guidelines
- Distribution of materials to the team members before the meeting
Team members shall examine the materials and come prepared for the inspection.

Inspection - Meeting:
- The Moderator introduces the participants and describes their roles
- The software element is presented by a non-author
- Inspectors raise questions to expose the defects

- Defects are recorded: the defect list details the location, description and severity of each defect
- The defect list is reviewed, with specific questions to ensure completeness and accuracy
- An exit decision is made:
  - Accept, with no or minor rework and without further verification
  - Accept, with rework verification (by the inspection team leader or a designated member of the inspection team)
  - Re-inspect

Inspection - Output:
- Defect list, containing defect location, description and classification
- An estimate of the rework effort and rework completion date

Reviews, Walkthroughs & Inspections


6. Comparison of Reviews, Walk-Throughs and Inspections

Objectives:
- Reviews: Evaluate conformance to specifications; ensure change integrity
- Walkthroughs: Detect defects; examine alternatives; forum for learning
- Inspections: Detect and identify defects; verify resolution

Group Dynamics:
- Reviews: 3 or more persons; technical experts and peer mix
- Walkthroughs: 2 to 7 persons; technical experts and peer mix
- Inspections: 3 to 6 persons; documented attendance; formally trained inspectors

Decision Making & Change Control:
- Reviews: The review team requests the project team leadership or management to act on recommendations
- Walkthroughs: All decisions are made by the producer; change is the prerogative of the author
- Inspections: The team declares the exit decision - Accept; Rework & Verify; or Rework & Re-inspect

Material Volume:
- Reviews: Moderate to high
- Walkthroughs: Relatively low
- Inspections: Relatively low

Presenter:

- Reviews: Software element representative
- Walkthroughs: Author
- Inspections: Other than the author

Reviews, Walkthroughs & Inspections 7. Advantages

Advantages of Static Methods over Dynamic Methods:
- Early detection of software defects
- Static methods expose the defect itself, whereas dynamic methods show only the symptom of the defect
- Static methods expose a batch of defects, whereas dynamic methods usually expose them one by one
- Some defects can be found only by Static Testing:
  - Code redundancy (when logic is not affected)
  - Dead code
  - Violations of coding standards

Reviews, Walkthroughs & Inspections 8. Metrics for Inspections

Fault Density:
- Specification and Design: faults per page
- Code: faults per 1000 lines of code
Fault Detection Rate: faults detected per hour
Fault Detection Efficiency: faults detected per person-hour
Inspection Efficiency: (number of faults found during inspection) / (total number of faults found during development)
Maintenance vs Inspection: compare the "number of corrections during the first six months of the operational phase" with the "number of defects found in inspections" for different projects of comparable size.
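As a worked illustration of these metrics, with assumed numbers: suppose an inspection of a 20-page design document finds 30 faults in 5 person-hours of inspector effort. Then fault density = 30 / 20 = 1.5 faults per page, and fault detection efficiency = 30 / 5 = 6 faults per person-hour. If a further 10 faults in that document surface later during development, inspection efficiency = 30 / (30 + 10) = 75%.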

Reviews, Walkthroughs & Inspections

9. Common Issues for Reviews, Walk-throughs and Inspections

Responsibilities of Team Members:
- Leader: to conduct / moderate the review / walkthrough / inspection process
- Reader: to present the relevant material in a logical fashion
- Recorder: to document defects and deviations
- Other Members: to critically question the material being presented

Communication Factors: the discussion should be
- kept within bounds
- not extreme in opinion
- reasonable and calm
- not directed at a personal level
- concentrated on finding defects
- not bogged down in disagreement
- not about trivial issues
- not solution-hunting

The participants should be sensitive to each other, keeping the synergy of the meeting high. Being aware of, and correcting, any conditions, physical or emotional, that drain the participants' attention shall ensure that the meeting is fruitful, i.e. the maximum number of defects is found during the early stages of the software development life cycle.

5. Testing and Debugging

Course Contents
1. Introduction to Testing
1.1 A Self Assessment Test
1.2 Correctness of a software
1.3 Testing Approaches
2. Levels of Testing
2.1 Overview

2.2 Unit Testing
2.3 Integration Testing
2.4 System Testing
2.5 Acceptance Testing
3. Test Techniques
3.1 Black box testing
3.2 White box testing
4. When to stop testing
5. Debugging

Testing and Debugging 1. Introduction to Testing

1.1 A Self-Assessment Test [1]

Take the following test before starting your learning: "A program reads three integer values. The three values are interpreted as representing the lengths of the sides of a triangle. The program displays a message that states whether the given sides can make a scalene, isosceles or equilateral triangle (the Triangle Program)." On a sheet of paper, write a set of test cases that would adequately test this program. Evaluate the effectiveness of your test cases using this list of common errors.

Testing is an important, mandatory part of software development; it is a technique for evaluating product quality and also for indirectly improving it, by identifying defects and problems.

What is common between these disasters?
- Ariane 5 explosion, 1996
- AT&T long distance service fails for nine hours, 1990
- Airbus downing during the Iran conflict, 1988
- Shutdown of nuclear reactors, 1979
Answer: Software faults!!

Refer Prof. Thomas Huckle's site for a collection of software bugs: http://wwwzenger.informatik.tu-muenchen.de/persons/huckle/bugse.html and refer http://www.cs.tau.ac.il/~nachumd/verify/horror.html for software horror stories!
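Returning to the self-assessment exercise: as one hedged sketch of an answer (not the official solution), the Java harness below lists test cases that probe the classic weak spots of the Triangle Program - the three valid classifications plus degenerate and invalid inputs. The method name classify is an assumption for illustration; adapt it to the actual program under test.

    // A sketch of test cases for the Triangle Program.
    public class TriangleTests {
        static void check(int a, int b, int c, String expected) {
            // In a real harness this would call the program under test
            // (e.g. classify(a, b, c)) and compare its output to `expected`.
            System.out.printf("(%d, %d, %d) -> expect: %s%n", a, b, c, expected);
        }

        public static void main(String[] args) {
            check(3, 4, 5, "scalene");         // valid scalene
            check(3, 3, 4, "isosceles");       // valid isosceles (try all 3 orderings)
            check(5, 5, 5, "equilateral");     // valid equilateral
            check(1, 2, 3, "not a triangle");  // degenerate: a + b == c
            check(1, 2, 4, "not a triangle");  // violates the triangle inequality
            check(0, 0, 0, "not a triangle");  // zero-length sides
            check(-3, 4, 5, "not a triangle"); // negative side
        }
    }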

Testing and Debugging 1. Introduction to Testing

1.2 How do we decide the correctness of a software?

To answer this question, we need to first understand how a software can fail. Failure, in the software context, is nonconformance to requirements. Failure may be due to one or more faults:
- Error or incompleteness in the requirements
- Difficulty in implementing the specification in the target environment
- Faulty system or program design
- Defects in the code

From the variety of faults above, it is clear that testing cannot be seen as an activity that starts only after the coding phase. Software testing is an activity that encompasses the whole development life cycle. We need to test a program to demonstrate the existence of faults. Contrary to what the term may suggest, testing is not a demonstration that the program works properly; testing carries a negative connotation relative to our normal understanding.

Myers' classic (which is still regarded as the best fundamental book on testing), "The Art of Software Testing", lists the following as the most important testing principles:
- (Definition) Testing is the process of executing a program with the intent of finding errors
- A good test case is one that has a high probability of detecting an as-yet undiscovered error
- A successful test case is one that detects an as-yet undiscovered error

Myers discusses more testing principles:
- Test case definition includes expected output (a test oracle)
- Programmers should avoid testing their own programs (third party testing)
- Inspect the result of each test
- Include test cases for invalid and unexpected input conditions
- See whether the program does what it is not supposed to do ('error of commission')
- Avoid throw-away test cases (test cases serve as documentation for future maintenance)
- Do not assume that the program is bug-free (a non-developer, rather destructive, mindset is needed)
- The probability of more errors is proportional to the number of errors already found

Consider the following diagram:

If we abstract a software to be a mathematical function f, which is expected to produce various outputs for inputs from a domain (represented diagrammatically above), then it is clear from Myers' definition of testing that:
- Testing cannot prove the correctness of a program - it is just a series of experiments to find errors (f is usually a discrete function that maps the input domain to various outputs, which can be observed by executing the program)
- There is nothing like 100% error-free code: since it is not feasible to conduct exhaustive testing, proving 100% correctness of a program is not possible

We need to develop an attitude of 'egoless programming' and keep a goal of eliminating as many faults as possible. Statistics on review effectiveness, and common sense, say that prevention is better than cure. We need to have static testing in place as well, to capture an error before it becomes a defect in the software. Recent agile methodologies like Extreme Programming address these issues better, with practices like test-driven development and pair programming (to reduce the psychological pressure on individuals and to bring review into the coding activity).

Testing and Debugging 1. Introduction to Testing

1.3 Testing Approaches

There are two major approaches to testing:
- Black-box (or closed box or data-driven or input/output driven or behavioral) testing
- White-box (or clear box or glass box or logic-driven) testing

If the component being tested is viewed as a "black box", inputs have to be given to observe the behavior (output) of the program. In this case, it makes sense to give both valid and invalid inputs. The observed output is then matched with the expected result (from the specifications). The advantage of this approach is that the tester need not worry about the internal structure of the program. However, it is impossible to find all errors using this approach. For instance, if one tried three equilateral triangle test cases for the Triangle Program, we cannot be sure whether the program will detect all equilateral triangles. The program may contain a hard-coded display of 'scalene triangle' for the values (300, 300, 300). To exhaustively test the Triangle Program, we need to create test cases for all valid triangles up to the maximum integer size. This is an astronomical task - but still not exhaustive (Why?). To be sure of finding all possible

errors, we must test not only with valid inputs but with all possible inputs (including invalid ones like characters, floats, negative integers, etc.).

The "white-box" approach examines the internal structure of the program. This means logical analysis of the software element, where testing is done to trace all possible paths of control flow. This method of testing exposes both errors of omission (errors due to neglected specification) and errors of commission (something not defined by the specification). Let us look at the specification of a simple file handling program as given below:

"The program has to read employee details such as employee id, name, date of joining and department as input from the user, create a file of employees and display the records sorted in the order of employee ids."

Examples of errors of omission: omission of the Display module, display of records not in sorted order of employee ids, a file created with fewer fields, etc. Example of an error of commission: additional lines of code deleting some arbitrary records from the created file. (No amount of black box testing can expose such errors of commission, as the method uses the specification as the reference to prepare test cases.)

However, it is many a time practically impossible to do complete white box testing and trace all possible paths of control flow, as the number of paths can be astronomically large. For instance, consider a program segment which has 5 different control paths (4 nested if-then-else statements); if this segment is iterated 20 times, the number of unique paths would be 5^20 + 5^19 + ... + 5^1, which is about 10^14, or 100 trillion [1]. If we were able to complete one test case every 5 minutes, it would take approximately one billion years to test every unique path. Due to dependency of decisions, not all control paths may be feasible; hence, in practice, we may not be testing all these paths. Even if we manage to do an exhaustive test of all possible paths, it may not guarantee that the program works as per the specification: instead of sorting in descending order (required by the specification), the program may sort in ascending order. Exhaustive path testing will not address missing paths and data-sensitive errors. A black-box approach would capture these errors!

In conclusion:
- It is not feasible to do exhaustive testing in either the black or the white box approach.
- Neither approach is superior - one has to use both, as they really complement each other.
- Last, but not the least, static testing still plays a large role in software testing.

The challenge, then, lies in using the right mix of all these approaches and in identifying a subset of all possible test cases that has the highest probability of detecting most errors. The details of the various techniques under the black and white box approaches are covered in Test Techniques.

Testing and Debugging 2. Levels of Testing

2.1 Overview

In developing a large system, testing usually involves several stages (refer to the following figure [2]).

Unit Testing -> Integration Testing -> System Testing -> Acceptance Testing

Initially, each program component (module) is tested independently, verifying the component's functions with the types of input identified by studying the component's design. Such testing is called Unit Testing (or component or module testing). Unit testing is done in a controlled environment with a predetermined set of data fed into the component, to observe what output actions and data are produced. When collections of components have been unit-tested, the next step is to ensure that the interfaces among the components are defined and handled properly. This process of verifying the synergy of system components against the program Design Specification is called Integration Testing. Once the system is integrated, the overall functionality is tested against the Software Requirements Specification (SRS). Then the other, non-functional requirements, such as performance, are tested to ensure the system's readiness to work successfully in the customer's actual working environment. This step is called System Testing. The next step is the customer's validation of the system against the User Requirements Specification (URS). The customer, in their working environment, performs this exercise of Acceptance Testing, usually with assistance from the developers. Once the system is accepted, it will be installed and put to use.

Testing and Debugging 2. Levels of Testing 2.2 Unit Testing Pfleeger [2] advocates the following steps to address the goal of finding faults in modules (components): Examining the code: Typically the static testing methods like Reviews, Walkthroughs and Inspections are used (Refer RWI course)

Proving code correct: after the coding and review exercise, if we want to ascertain the correctness of the code, we can use formal methods. A program is correct if it implements its functions and data properly as indicated in the design, and if it interfaces properly with all other components. One way to investigate program correctness is to view the code as a statement of logical flow. Using mathematical logic, if we can formulate the program as a set of assertions and theorems, we can show that the truth of the theorems implies the correctness of the code. Use of this approach forces us to be more rigorous and precise in specification. Much work is involved in setting up and carrying out the proof. For example, the code for performing a bubble sort is much smaller than its logical description and proof.

Testing program components (modules): in the absence of simpler methods and automated tools, "proving code correct" will be an elusive goal for software engineers. Proving views programs in terms of classes of data and conditions, and the proof may not involve execution of the code. On the contrary, testing is a series of experiments to observe the behaviour of the program for various input conditions. While a proof tells us how a program will work in a hypothetical environment described by the design and requirements, testing gives us information about how a program works in its actual operating environment. To test a component (module), input data and conditions are chosen to demonstrate an observable behaviour of the code. A test case is a particular choice of input data to be used in testing a program. Test cases are generated by using either black-box or white-box approaches (refer Test Techniques).

Testing and Debugging 2. Levels of Testing

2.3 Integration Testing

Integration is the process of assembling unit-tested modules. We need to test the following aspects, which are not addressed while independently testing the modules:

Interfaces: to ensure "interface integrity", the transfer of data between modules is tested. When data is passed to another module by way of a call, there should not be any loss or corruption of data. Loss or corruption of data can happen due to a mismatch or differences in the number or order of the calling and receiving parameters. Module combinations may produce a different behaviour due to combinations of data that are not exercised during unit testing. Global data structures, if used, may reveal errors due to unintended usage in some module.

Integration Strategies: depending on the design approach, one of the following integration strategies can be adopted:
- Big Bang approach
- Incremental approach:
  - Top-down testing
  - Bottom-up testing
  - Sandwich testing

To illustrate, consider the following arrangement of modules:

The Big Bang approach consists of testing each module individually, and linking all these modules together only when every module in the system has been tested.

Though the Big Bang approach seems advantageous when we construct independent modules concurrently, it is quite challenging and risky, as we integrate all modules in a single step and test the resulting system. Locating interface errors, if any, becomes difficult here. The alternative strategy is the incremental approach, wherein modules of a system are consolidated with already tested components of the system. In this way, the software is built up gradually, spreading the integration testing load more evenly through the construction phase. The incremental approach can be implemented in two distinct ways: Top-down and Bottom-up. In Top-down testing, testing begins with the topmost module. A module is integrated into the system only when the module which calls it has already been integrated successfully. An example order of Top-down testing for the above illustration will be:

The testing starts with M1. To test M1 in isolation, communications to modules M2, M3 and M4 have to be simulated somehow by the tester, as these modules may not be ready yet. To simulate the responses of M2, M3 and M4 whenever they are invoked from M1, "stubs" are created. Simple applications may require stubs which simply return control to their superior modules. More complex situations demand stubs that simulate a full range of responses, including parameter passing. Stubs may be individually created by the tester (as programs in their own right) or they may be provided by a software testing harness, which is a piece of software specifically designed to provide a testing environment. In the above illustration, M1 would require stubs to simulate the activities of M2, M3 and M4. The integration of M3 would require a stub (or stubs?) for M5, and M4 would require stubs for M6 and M7. Elementary modules (those which call no subordinates) require no stubs. Bottom-up testing begins with elementary modules. If M5 is ready, we need to simulate the activities of its superior, M3. Such a "driver" for M5 would simulate the invocation activities of M3. As with the stub, the complexity of a driver would depend upon the application under test. The driver would be responsible for invoking the module under test; it could be responsible for passing test data (as parameters) and it might be responsible for receiving output data. Again, the driving function can be provided through a testing harness or may be created by the tester as a program. The following diagram shows the bottom-up testing approach for the above illustration:

For the above example, drivers must be provided for modules M2, M5, M6, M7, M3 and M4. There is no need for a driver for the topmost node, M1.

Myers lists the advantages and disadvantages of Top-down and Bottom-up testing:

Top-down testing
  Advantages:
  - Advantageous if major flaws occur toward the top of the program
  - An early skeletal program allows demonstrations and boosts morale
  Disadvantages:
  - Stub modules must be produced
  - Test conditions might be impossible, or very difficult, to create
  - Observation of test output is more difficult, as only simulated values will be used initially; for the same reason, program correctness can be misleading

Bottom-up testing
  Advantages:
  - Advantageous if major flaws occur toward the bottom of the program
  - Test conditions are easier to create
  - Observation of test results is easier (as live data is used from the beginning)
  Disadvantages:
  - Driver modules must be produced
  - The program as an entity does not exist until the last module is added

To overcome the limitations and to exploit the advantages of Top-down and Bottom-up testing, sandwich testing is used: the system is viewed as three layers - the target layer in the middle, the levels above the target, and the levels below the target. A top-down approach is used in the top layer and a bottom-up one in the lower layer. Testing converges on the target layer, which is chosen on the basis of system characteristics and the structure of the code. For example, if the bottom layer contains many general-purpose utility programs, the target layer (the one above) will be the components using the utilities. This approach allows bottom-up testing to verify the utilities' correctness at the beginning of testing.

Choosing an integration strategy depends not only on system characteristics, but also on customer expectations. For instance, the customer may want to see a working version as soon as possible, so we may adopt an integration schedule that produces a basic working system early in the testing process. In this way coding and testing can go on concurrently.
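Before moving on, here is a minimal Java sketch of the stub and driver ideas above, assuming a module M3 that calls a subordinate M5 (the names follow the illustration; all interfaces are invented for this sketch):

    // Minimal sketch of a stub and a driver, assuming M3 calls M5.
    interface M5 {                       // the subordinate module's interface
        int compute(int input);
    }

    // Top-down: a stub stands in for M5 while M3 is tested.
    class M5Stub implements M5 {
        public int compute(int input) {
            return 42;                   // canned response, just enough for M3's test
        }
    }

    class M3 {                           // the module under test (top-down)
        private final M5 subordinate;
        M3(M5 subordinate) { this.subordinate = subordinate; }
        int process(int x) { return subordinate.compute(x) + 1; }
    }

    // Bottom-up: a driver invokes the module under test with test data.
    public class M5Driver {
        public static void main(String[] args) {
            // Top-down test of M3, with the stub in place of M5:
            M3 m3 = new M3(new M5Stub());
            System.out.println("M3.process(7) with stub -> " + m3.process(7)); // expect 43

            // Bottom-up test of M5 via the driver (substitute the real M5 when ready):
            M5 m5 = new M5Stub();
            System.out.println("M5.compute(7) via driver -> " + m5.compute(7));
        }
    }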

Testing and Debugging 2. Levels of Testing

2.4 System Testing

The objective of unit and integration testing was to ensure that the code implements the design properly. In system testing, we need to ensure that the system does what the customer wants it to do. Initially, the functions (functional requirements) performed by the system are tested. A function test checks whether the integrated system performs its functions as specified in the requirements. After ensuring that the system performs the intended functions, the performance test is done; this covers non-functional requirements such as security, accuracy, speed and reliability.

System testing begins with function testing. Since the focus here is on functionality, a black-box approach is taken (refer Test Techniques). Function testing is performed in a controlled situation. Since function testing compares the system's actual behaviour with its requirements, test cases are developed from the requirements document (SRS). For example, a word processing system can be tested by examining the following functions: document creation, document modification and document deletion. To test document modification: adding a character, adding a word, adding a paragraph, deleting a character, deleting a word, deleting a paragraph, changing the font, changing the type size, changing the paragraph formatting, etc. are to be tested.

Performance testing addresses the non-functional requirements. System performance is measured against the performance objectives set by the customer. For example, function testing may have demonstrated how the system handles deposit or withdrawal transactions in a bank account package. Performance testing evaluates the speed with which calculations are made, the precision of the computation, the security precautions required, and the response time to user inquiry.

Types of Performance Tests:
1. Stress tests - evaluate the system when stressed to its limits. If the requirements state that a system is to handle up to a specified number of devices or users, a stress test evaluates system performance when all those devices or users are active simultaneously. This test brings out the performance during peak demand.
2. Volume tests - address the handling of large amounts of data in the system. This includes checking whether data structures have been defined large enough to handle all possible situations, checking the size of fields, records and files to see whether they can accommodate all expected data, and checking the system's reaction when data sets reach their maximum size.
3. Configuration tests - analyze the various software and hardware configurations specified in the requirements (e.g. a system to serve a variety of audiences).
4. Compatibility tests - are needed when a system interfaces with other systems (e.g. a system that retrieves information from a large database system).
5. Regression tests - are required when the system being tested replaces an existing system (always used during phased development, to ensure that the new system's performance is at least as good as that of the old).
6. Security tests - ensure the security requirements (testing characteristics related to availability, integrity and confidentiality of data and services).
7. Timing tests - include response time, transaction time, etc. Usually done together with stress tests, to see if the timing requirements are met even when the system is extremely active.
8. Environmental tests - look at the system's ability to perform at the installation site. If the requirements include tolerances to heat, humidity, motion, chemical presence, moisture, portability, electrical or magnetic fields, disruption of power, or any other environmental characteristics of the site, then our tests should ensure that the system performs under these conditions.
9. Quality tests - evaluate the system's reliability, maintainability and availability. These tests include calculation of mean time to failure and mean time to repair, as well as average time to find and fix a fault.

10. Recovery tests - address the response to loss of data, power, devices or services. The system is subjected to loss of system resources and tested to see if it recovers properly.
11. Maintenance tests - address the need for diagnostic tools and procedures to help in finding the source of problems, verifying the existence and functioning of aids like diagnostic programs, memory maps, traces of transactions, etc.
12. Documentation tests - ensure that documents like user guides, maintenance guides and technical documentation exist, and verify the consistency of the information in them.
13. Human factors (or usability) tests - investigate user interface related requirements. Display screens, messages, report formats and other aspects are examined for ease of use.

Testing and Debugging 2. Levels of Testing

2.5 Acceptance Testing

Acceptance testing is the customer's (and user's) evaluation of the system, primarily to determine whether the system meets their needs and expectations. Usually the acceptance test is done by the customer with assistance from the developers. Customers can evaluate the system either by conducting a benchmark test or by a pilot test. In a benchmark test, the system performance is evaluated against test cases that represent the typical conditions under which the system will operate when actually installed. A pilot test installs the system on an experimental basis, and the system is evaluated against everyday working use. Sometimes the system is piloted in-house before the customer runs the real pilot test. The in-house test, in such a case, is called an alpha test, and the customer's pilot is a beta test. This approach is common in the case of commercial software, where the system has to be released to a wide variety of customers. A third approach, parallel testing, is used when a new system is replacing an existing one or is part of a phased development. The new system is put to use in parallel with the previous version; this facilitates gradual transition of users, and allows the new system to be compared and contrasted with the old.

Testing and Debugging 3. Test Techniques

We shall discuss the Black Box and White Box approaches.

3.1 Black Box Approach

1. Equivalence Partitioning
2. Boundary Value Analysis
3. Cause Effect Analysis
4. Cause Effect Graphing
5. Error Guessing

1. Equivalence Partitioning:

Equivalence partitioning is partitioning the input domain of a system into a finite number of equivalence classes, in such a way that testing one representative from a class is equivalent to testing any other value from that class. To put this in simpler words: since it is practically infeasible to do exhaustive testing, the next best alternative is to check whether the program extends similar behaviour or treatment to certain groups of inputs. If such a group of values can be found in the input domain, treat them together as one equivalence class and test one representative from it. This can be explained with the following example.

Consider a program which takes "salary" as input, with values 12000...37000 in the valid range. The program calculates tax as follows:
- Salary up to Rs. 15000 - No Tax
- Salary between 15001 and 25000 - Tax is 18% of Salary
- Salary above 25000 - Tax is 20% of Salary

Here, the specification contains a clue that certain groups of values in the input domain are treated "equivalently" by the program. Accordingly, the valid input domain can be divided into three valid equivalence classes as below:
c1: values in the range 12000...15000
c2: values in the range 15001...25000
c3: values > 25000

However, it is not sufficient to test only valid cases. We need to test the program with invalid data also, as the users of the program may give invalid inputs, intentionally or unintentionally. It is easy to identify an invalid class "c4: values < 12000". If we assume some maximum limit (MAX) for the variable Salary, we can modify the class c3 above to "values in the range 25001...MAX" and identify an invalid class "c5: values > MAX". Depending on the system, MAX may be either defined by the specification or defined by the hardware or software constraints later, during the design phase.

If we further expand our discussion and assume that the user or tester of the program may give any value which can be typed in through the keyboard as input, we can form the equivalence classes as explained below. Since the input has to be "salary", it can be seen intuitively that numeric and non-numeric values are treated differently by the program. Hence we can form two classes:
- the class of non-numeric values
- the class of numeric values

Since all non-numeric values are treated as invalid by the program, that class need not be further subdivided. The class of numeric values needs further subdivision, as all elements of the class are not treated alike by the program. Within this class, if we look for groups of values meeting with similar treatment from the program, the following classes can be identified:
- values < 12000
- values in the range 12000...15000
- values in the range 15001...25000
- values in the range 25001...MAX
- values > MAX

None of these equivalence classes needs to be further subdivided, as the program should treat all values within each class in a similar manner. Thus the equivalence classes identified for the given specification, along with a set of sample test cases designed using these classes, are shown in the following table (the Actual/Observed Result and Remarks columns are filled in during test execution):

Class                                    Input Condition (Salary)                Expected Result (Tax amount)
c1 - class of non-numeric values         A non-numeric value                     Error Msg: "Invalid Input"
c2 - values < 12000                      A numeric value < 12000                 Error Msg: "Invalid Input"
c3 - values in the range 12000...15000   A numeric value >= 12000 and <= 15000   No Tax
c4 - values in the range 15001...25000   A numeric value >= 15001 and <= 25000   Tax = 18% of Salary
c5 - values in the range 25001...MAX     A numeric value >= 25001 and <= MAX     Tax = 20% of Salary
c6 - values > MAX                        A numeric value > MAX                   Error Msg: "Invalid Input"

We can summarise this discussion as follows. To design test cases using equivalence partitioning, for a range of valid input values, identify:
- one valid value within the range
- one invalid value below the range, and
- one invalid value above the range

Similarly, to design test cases for a specific set of values, identify:
- one valid case for each value belonging to the set
- one invalid value
E.g., test cases for Types of Account (Savings, Current) will be: Savings, Current (valid cases); Overdraft (invalid case).

It may be noted that we need fewer test cases if some test cases can cover more than one equivalence class.

2. Boundary Value Analysis:

Even though the definition of equivalence partitioning states that testing one value from a class is equivalent to testing any other value from that class, we need to look at the boundaries of equivalence classes more closely. This is because boundaries are more error-prone. To design test cases using boundary value analysis, for a range of values, take:
- two valid cases, one at each end of the range
- two invalid cases, just beyond the range limits

Consider the example discussed in the previous section. For the valid equivalence class "values in the range 12000...15000" of Salary, the test cases using boundary value analysis are:

Input Condition (Salary)   Expected Result (Tax amount)
11999                      Error Msg: "Invalid Input"
12000                      No Tax
15000                      No Tax
15001                      Tax = 18% of Salary

If we look closely at the Expected Result column, we can see that for each pair of adjacent values across a boundary (11999/12000 and 15000/15001) the expected results differ. We need to perform testing using boundary value analysis to ensure that this difference in behaviour is maintained exactly at the boundaries. The same guidelines need to be followed to check output boundaries also.
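As a concrete illustration, here is a minimal, self-contained Java sketch of the tax routine together with test cases drawn from the equivalence classes and boundaries above. The method name computeTax, the -1 "invalid" signal and the MAX value are assumptions for illustration; the non-numeric class (c1) would be exercised at the input-parsing layer and is omitted here.

    // Sketch of the salary/tax example; tax rules follow the specification above.
    public class TaxTests {
        static final int MAX = 99999;                        // assumed upper limit for Salary

        // Returns the tax, or -1 to signal "Invalid Input".
        static double computeTax(int salary) {
            if (salary < 12000 || salary > MAX) return -1;   // invalid classes c2, c6
            if (salary <= 15000) return 0;                   // class c3: no tax
            if (salary <= 25000) return 0.18 * salary;       // class c4: 18%
            return 0.20 * salary;                            // class c5: 20%
        }

        static void check(int salary, double expected) {
            double actual = computeTax(salary);
            System.out.printf("salary=%d expected=%.2f actual=%.2f %s%n",
                    salary, expected, actual, expected == actual ? "PASS" : "FAIL");
        }

        public static void main(String[] args) {
            // One representative per equivalence class:
            check(11000, -1);            // c2: below the valid range
            check(13000, 0);             // c3
            check(20000, 0.18 * 20000);  // c4
            check(30000, 0.20 * 30000);  // c5
            check(MAX + 1, -1);          // c6: above MAX
            // Boundary values around the class 12000...15000:
            check(11999, -1);
            check(12000, 0);
            check(15000, 0);
            check(15001, 0.18 * 15001);
        }
    }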

Other examples of test cases using boundary value analysis are:
- A compiler being tested with an empty source program
- Trigonometric functions like TAN being tested with values near π/2
- A function for deleting a record from a file being tested with an empty data file, or a data file with just one record in it

Though it may sound as if the method is too simple, boundary value analysis is one of the most effective methods for designing test cases that reveal common errors made in programming.

3. Cause Effect Analysis:

The main drawback of the previous two techniques is that they do not explore combinations of input conditions. Cause effect analysis is an approach for studying the specifications carefully, identifying the combinations of input conditions (causes) and their effects in the form of a table, and designing test cases from it. It is suitable for applications in which the combinations of input conditions are few and readily visible.

4. Cause Effect Graphing:

This is a rigorous approach, recommended for complex systems only. In such systems the number of inputs, and the number of equivalence classes for each input, could be many, and hence the number of input combinations is usually astronomical. We therefore need a systematic approach to select a subset of these input conditions.

Guidelines for graphing:
- Divide the specification into workable pieces, as it may be practically difficult to work on a large specification.
- Identify the causes and their effects. A cause is an input condition or an equivalence class of input conditions. An effect is an output condition or a system transformation.
- Link causes and effects in a Boolean graph, which is the cause-effect graph.
- Make decision tables based on the graph. This is done by having one row for each node in the graph; the number of columns depends on the number of different combinations of input conditions which can be made.
- Convert the columns in the decision table into test cases.

Consider the following specification: a program accepts a Transaction Code of 3 characters as input. For a valid input the following must be true:
1st character (denoting issue or receipt): '+' for issue, '-' for receipt
2nd character: a digit

3rd character: a digit

To carry out cause effect graphing, the cause-effect graph is constructed as below.

In the graph:

- (1) or (2) must be true ('V' in the graph is to be interpreted as OR)
- (3) and (4) must be true ('^' in the graph is to be interpreted as AND)

The Boolean graph is to be interpreted as follows:
- node (1) turns true if the 1st character is '+'
- node (2) turns true if the 1st character is '-' (both node (1) and node (2) cannot be true simultaneously)
- node (3) turns true if the 2nd character is a digit
- node (4) turns true if the 3rd character is a digit
- the intermediate node (5) turns true if (1) or (2) is true (i.e., if the 1st character is '+' or '-')
- the intermediate node (6) turns true if (3) and (4) are true (i.e., if the 2nd and 3rd characters are digits)
- the final node (7) turns true if (5) and (6) are true (i.e., if the 1st character is '+' or '-', and the 2nd and 3rd characters are digits)

The final node will be true for any valid input and false for any invalid input.
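As a sketch, the Boolean graph translates almost directly into code; the class and method names below are invented for illustration:

    // Direct translation of the cause-effect graph into code.
    public class TransactionCodeCheck {
        static boolean isValid(String code) {
            if (code == null || code.length() != 3) return false;
            boolean n1 = code.charAt(0) == '+';               // node (1)
            boolean n2 = code.charAt(0) == '-';               // node (2)
            boolean n3 = Character.isDigit(code.charAt(1));   // node (3)
            boolean n4 = Character.isDigit(code.charAt(2));   // node (4)
            boolean n5 = n1 || n2;                            // node (5) = (1) OR (2)
            boolean n6 = n3 && n4;                            // node (6) = (3) AND (4)
            return n5 && n6;                                  // node (7) = (5) AND (6)
        }

        public static void main(String[] args) {
            // The sample test cases from the decision table below;
            // only "+67" should be reported as valid.
            for (String t : new String[] {"$xy", "+ab", "+a4", "+2y", "@45", "+67"}) {
                System.out.println(t + " -> " + (isValid(t) ? "valid" : "invalid"));
            }
        }
    }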

A partial decision table corresponding to the above graph is shown below. Each column gives one possible combination of node states (1 = true, 0 = false), together with a sample test case that produces that combination:

Node                               Possible combinations of node states
(1)                                0     1     1     1     0     1
(2)                                0     0     0     0     0     0
(3)                                0     0     0     1     1     1
(4)                                0     0     1     0     1     1
(5)                                0     1     1     1     0     1
(6)                                0     0     0     0     1     1
(7)                                0     0     0     0     0     1
Sample test case for the column:   $xy   +ab   +a4   +2y   @45   +67

The sample test cases can be derived by giving values to the input characters such that the nodes turn true/false as given in the columns of the decision table.

5. Error Guessing:

Error guessing is a supplementary technique where test case design is based on the tester's intuition and experience. There is no formal procedure. However, a checklist of common errors could be helpful here.

Testing and Debugging 3. Test Techniques

3.2 White Box Approach

Basis Path Testing

Basis Path Testing is a white box testing method where we design test cases to cover every statement, every branch and every predicate (condition) in the code which has been written. Thus the method attempts statement coverage, decision coverage and condition coverage. To perform Basis Path Testing:
- Derive a logical complexity measure of the procedural design
- Break the module into blocks delimited by statements that affect the control flow (e.g. statements like return, exit, jump, etc., and conditions)
- Mark these out as nodes in a control flow graph

- Draw connectors (arcs) with arrowheads to mark the flow of logic
- Identify the number of regions (the Cyclomatic Number), which is equivalent to McCabe's number
- Define a basis set of execution paths
- Determine the independent paths
- Derive test cases to exercise (cover) the basis set

McCabe's Number (Cyclomatic Complexity):
- Gives a quantitative measure of the logical complexity of the module
- Defines the number of independent paths
- Provides an upper bound to the number of tests that must be conducted to ensure that all the statements are executed at least once

The complexity of a flow graph G, V(G), is computed in one of three ways:
- V(G) = number of regions of G
- V(G) = E - N + 2 (E: number of edges; N: number of nodes)
- V(G) = P + 1 (P: number of predicate nodes in G, i.e. the number of conditions in the code)

McCabe's Number = number of regions (count the mutually exclusive closed regions and also the whole outer space as one region) = 2 in the above graph.

The two other formulae give the same measure:
McCabe's Number = E - N + 2 (= 6 - 6 + 2 = 2 for the above graph)
McCabe's Number = P + 1 (= 1 + 1 = 2 for the above graph)

Please note that if there is more than one condition in a single control structure, each condition needs to be separately marked as a node. When McCabe's number is 2, it indicates that there are two linearly independent paths in the code, i.e., two different ways in which the graph can be traversed from the first node to the last node. The independent paths in the above graph are:
i) 1-2-3-5-6
ii) 1-2-4-5-6

The last step is to write test cases corresponding to the listed paths. This would mean giving the input conditions in such a way that the above paths are traced by the control of execution. The test cases for the paths listed here are shown in the following table:

Path   Input Condition                Expected Result
i)     value of 'a' > value of 'b'    Increment 'a' by 1
ii)    value of 'a' <= value of 'b'   Increment 'b' by 1
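A minimal Java sketch of code matching this graph follows; the variable names 'a' and 'b' come from the table above, while the method and class names are assumed for illustration:

    // A two-path fragment with cyclomatic complexity V(G) = 2:
    // one predicate node (a > b), so V(G) = P + 1 = 2.
    public class BasisPathExample {
        static int[] adjust(int a, int b) {
            if (a > b) {        // predicate node
                a = a + 1;      // path i)  : 1-2-3-5-6
            } else {
                b = b + 1;      // path ii) : 1-2-4-5-6
            }
            return new int[] {a, b};
        }

        public static void main(String[] args) {
            // One test case per independent path covers the basis set:
            int[] r1 = adjust(5, 3);   // exercises path i)  -> 'a' incremented
            int[] r2 = adjust(2, 7);   // exercises path ii) -> 'b' incremented
            System.out.println(r1[0] + "," + r1[1]); // prints 6,3
            System.out.println(r2[0] + "," + r2[1]); // prints 2,8
        }
    }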


Testing and Debugging 4. When to stop testing

The question arises because testing is never complete and we cannot scientifically prove that a software system does not contain any more errors.

Common criteria practiced:
- Stop when the scheduled time for testing expires
- Stop when all the test cases execute without detecting errors
Both are meaningless and counterproductive: the first can be satisfied by doing absolutely nothing, and the second is equally useless as it does not ensure the quality of the test cases.

Stop when all test cases, derived from equivalence partitioning, cause-effect analysis and boundary-value analysis, are executed without detecting errors. Drawbacks:
- Rather than defining a goal and allowing the tester to select the most appropriate way of achieving it, it does the opposite
- Defined methodologies are not suitable for all occasions
- There is no way to guarantee that the particular methodology is properly and rigorously used
- It depends on the abilities of the tester, and no quantification is attempted

Completion criterion based on the detection of a pre-defined number of errors: in this method the goal of testing is positively defined as finding errors, and hence this is a more goal-oriented approach. For example: testing of a module is not complete until 3 errors are discovered; for a system test, detection of 70 errors or an elapsed time of 3 months, whichever comes later.

How to determine the "number of pre-defined errors"?
- Predictive models: based on the history of usage / initial testing and the errors found
- Defect seeding models: based on initial testing and the ratio of detected seeded errors to detected unseeded errors (this depends very critically on the quality of the 'seeding')

Using this approach, as an example, we can say that testing is complete if 80% of the pre-defined number of errors are detected, or the scheduled four months of testing is over, whichever comes later.

Caution! The above condition may never be achieved, for the following reasons:
- Over-estimation of the pre-defined errors (the software is too good!)
- Inadequate test cases

Hence the best completion criterion may be a combination of all the methods discussed:
- Module test: defined test case design methodologies (such as boundary value analysis, ...)
- Function & system test: based on finding the pre-defined number of defects
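To make the seeding arithmetic concrete, here is a small sketch with assumed numbers. The estimator N ≈ n × S / s is the standard error-seeding formula (if testing finds the fraction s/S of the planted errors, assume it found roughly the same fraction of the real ones); it is not spelled out in the text above.

    // Defect seeding estimate, shown with assumed numbers for illustration.
    public class SeedingEstimate {
        public static void main(String[] args) {
            int seeded = 20;        // S: errors deliberately planted
            int seededFound = 15;   // s: seeded errors detected so far
            int realFound = 45;     // n: indigenous (unseeded) errors detected

            // Testing found 15/20 = 75% of the seeded errors, so estimate
            // the total indigenous errors as N = n * S / s.
            double estimatedTotal = (double) realFound * seeded / seededFound;
            System.out.printf("Estimated indigenous errors: %.0f%n", estimatedTotal);          // 60
            System.out.printf("Estimated still latent: %.0f%n", estimatedTotal - realFound);   // 15
        }
    }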

Testing and Debugging 5. Debugging

Debugging occurs as a consequence of successful testing. It is an exercise to connect the external manifestation of an error with its internal cause. Debugging techniques include the use of:

Breakpoints: a point in a computer program at which execution can be suspended to permit manual or automated monitoring of program performance or results.

Desk Checking: a technique in which code listings, test results or other documentation are visually examined, usually by the person who generated them, to identify errors, violations of development standards or other problems.

Dumps: (1) a display of some aspect of a computer program's execution state, usually the contents of internal storage or registers; (2) a display of the contents of a file or device.

Single-Step Operation: a debugging technique in which a single computer instruction is executed in response to an external signal.

Traces: a record of the execution of a computer program, showing the sequence of instructions executed, the names and values of variables, or both.

Software Maintenance 1. What is Software Maintenance

1.1 Introduction

Software maintenance is often considered to be (if it is considered at all) an unpleasant, time-consuming, expensive and unrewarding occupation - something that is carried out at the end of development only when absolutely necessary (and hopefully not very often). As such it is often considered to be the poor relation of software development: budgets often do not allow for it (or allow too little), and few programmers, given a choice, would choose to carry out maintenance over development work. This view, that software maintenance is the last resort, is largely born out of ignorance. Misconceptions, misunderstandings and myths concerning this crucial area of software engineering abound.

Software maintenance suffers from an image problem: although software has been maintained for years, relatively little is written about the topic. Little funding for research about software maintenance exists; thus, the academic community publishes relatively few papers on the subject. Maintenance organisations within business publish even less, because of the corporate fear of giving away the "competitive edge". Although there are some textbooks on software maintenance, they are relatively few and far between (examples are included in the bibliography). Periodicals address the topic infrequently, and few universities include software maintenance explicitly in their degree programmes. This lack of published information contributes to the misunderstandings and misconceptions that people have about software maintenance.

Part of the confusion about software maintenance relates to its definition: when it begins, when it ends and how it relates to development. Therefore it is necessary to first consider what is meant by the term software maintenance.

What is Software Maintenance? In order to define software maintenance we need to define exactly what is meant by software. It is a common misconception to believe that software is programs, and that maintenance activities are carried out exclusively on programs. This is because many software maintainers are more familiar with, or rather are more exposed to, programs than other components of a software system: usually it is the program code that attracts most attention.

Software Maintenance 1. What is Software Maintenance

1.2 A Definition of Software

A more comprehensive view of software is given by McDermid (1991), who states that it consists of the programs, documentation and operating procedures by which computers can be made useful to man. The table below depicts the components of a software system according to this view and includes some examples of each. McDermid's definition suggests that software is not only concerned with programs - source and object code - but also relates to documentation of any facet of the program, such as requirements analysis, specification, design, system and user manuals, and the procedures used to set up and operate the software system. McDermid's is not the only definition of a software system, but it is comprehensive and widely accepted.

Software Components     Examples
Program                 Source code; Object code
Documentation           Analysis/specification: (a) Formal specification (b) Context diagram (c) Data flow diagrams
                        Design: (a) Flowcharts (b) Entity-relationship charts
                        Implementation: (a) Source code listings (b) Cross-reference listing
                        Testing: (a) Test data (b) Test results
Operating Procedures    1. Instructions to set up and use the software system
                        2. Instructions on how to react to system failures

Software Maintenance 1. What is Software Maintenance

1.3 A Definition of Maintenance

The use of the word maintenance to describe activities undertaken on software systems after delivery has been considered a misnomer, due to its failure to capture the evolutionary tendency of software products. Maintenance has traditionally meant the upkeep of an artifact in response to the gradual deterioration of parts due to extended use - which is simply corrective maintenance. So, for example, one carries out maintenance on a car or a house usually to correct problems, e.g. replacing the brakes or fixing the leaking roof. If, however, we were to build an extension to the house or fit a sun roof to the car, those would usually be thought of as improvements (rather than maintenance activities). To apply the traditional definition of maintenance in the context of software would therefore mean that software maintenance is only concerned with correcting errors. However, correcting errors accounts for only part of the maintenance effort. Consequently, a number of authors have advanced alternative terms that are considered to be more inclusive and encompass most, if not all, of the activities undertaken on existing software to keep it operational and acceptable to the users. These include software evolution, post-delivery evolution and support. However, it can be argued that there is nothing wrong with using the word maintenance, provided software engineers are educated to accept its meaning within the software engineering context, regardless of what it means in non-software engineering disciplines. After all, any work that needs to be done to keep a software system at a level considered useful to its users will still have to be carried out, regardless of the name it is given.

Software Maintenance 1. What is Software Maintenance

1.5 Maintenance Image Problems

The inclusion of the word maintenance in the term software maintenance has been linked to the negative image associated with the area. Higgins (1988) describes the problem: "...programmers ... tend to think of program development as a form of puzzle solving, and it is reassuring to their ego when they manage to successfully complete a difficult section of code. Software maintenance, on the other hand, entails very little new creation and is therefore categorised as dull, unexciting detective work." Similarly, Schneidewind (1987) contends that to work in maintenance has been akin to having bad breath. Further, some authors argue that the general lack of consensus on software maintenance terminology has also contributed to the negative image associated with it.

Software Maintenance 2. Why is Software Maintenance necessary

In order to answer this question we need to consider what happens when the system is delivered to the users. The users operate the system and may find things wrong with it, or identify things they would like to see added to it. Via management feedback, the maintainer makes the approved corrections or improvements, and the improved system is delivered to the users. The cycle then repeats itself, thus perpetuating the loop of maintenance and extending the life of the product. In most cases the maintenance phase ends up being the longest process of the entire life cycle, and far outweighs the development phase in terms of time and cost. A figure (not reproduced here) shows the lifecycle of maintenance on a software product and why, theoretically, it may be never-ending.

Lehman's (1980) first two laws of software evolution help explain why the Operations and Maintenance phase can be the longest of the life-cycle processes. His first law is the Law of Continuing Change, which states that a system needs to change in order to be useful. The second law is the Law of Increasing Complexity, which states that the structure of a program deteriorates as it evolves. Over time, the structure of the code degrades until it becomes more cost-effective to rewrite the program.

Software Maintenance 3. Types of Software Maintenance

In order for a software system to remain useful in its environment, it may be necessary to carry out a wide range of maintenance activities upon it. Swanson (1976) was one of the first to examine what really happens during maintenance, and was able to identify three different categories of maintenance activity.

3.1 Corrective

Changes necessitated by actual errors (defects or residual "bugs") in a system are termed corrective maintenance. These defects manifest themselves when the system does not operate as it was designed or advertised to do. A defect or bug can result from design errors, logic errors and coding errors. Design errors occur when, for example, changes made to the software are incorrect, incomplete, wrongly communicated, or the change request is misunderstood. Logic errors result from invalid tests and conclusions, incorrect implementation of the design specification, faulty logic flow or incomplete test data. Coding errors are caused by incorrect implementation of the detailed logic design and incorrect use of the source code logic. Defects are also caused by data processing errors and system performance errors. All these errors, sometimes called residual errors or bugs, prevent the software from conforming to its agreed specification.

In the event of a system failure due to an error, actions are taken to restore the operation of the software system. The approach here is to locate the original specifications, in order to determine what the system was originally designed to do. However, due to pressure from management, maintenance personnel sometimes resort to emergency fixes known as patching. The ad hoc nature of this approach often gives rise to a range of problems, including increased program complexity and unforeseen ripple effects. Increased program complexity usually arises from degeneration of program structure, which makes the program increasingly difficult, if not impossible, to comprehend. This state of affairs can be referred to as the spaghetti syndrome or software fatigue. Unforeseen ripple effects imply that a change to one part of a program may affect other sections in an unpredictable fashion. This is often due to lack of time to carry out a thorough impact analysis before effecting the change. Corrective maintenance has been estimated to account for about 20% of all maintenance activities.

3.2 Adaptive

Any effort that is initiated as a result of changes in the environment in which a software system must operate is termed adaptive maintenance. Adaptive change is driven by the need to accommodate modifications in the environment of the software system, without which the system would become increasingly less useful until it became obsolete. The term environment in this context refers to all the conditions and influences which act from outside upon the system, for example business rules, government policies, work patterns, and software and hardware operating platforms. A change to the whole or part of this environment warrants a corresponding modification of the software. Unfortunately, with this type of maintenance the user does not see a direct change in the operation of the system, but the software maintainer must still expend resources to effect the change. This task is estimated to consume about 25% of the total maintenance activity.

3.3 Perfective

The third widely accepted task is that of perfective maintenance. This is actually the most common type of maintenance, encompassing enhancements both to the function and to the efficiency of the code; it includes all changes, insertions, deletions, modifications, extensions and enhancements made to a system to meet the evolving and/or expanding needs of the user. A successful piece of software tends to be subjected to a succession of changes, resulting in an increase in its requirements: as the software proves useful, users tend to experiment with new cases beyond the scope for which it was initially developed. Expansion in requirements can take the form of enhancement of existing system functionality or improvement in computational efficiency. As the program continues to grow with each enhancement, the system evolves from an average-sized program of average maintainability into a very large program that offers great resistance to modification. Perfective maintenance is by far the largest consumer of maintenance resources; estimates of around 50% are not uncommon.

The categories of maintenance above were further defined in the 1993 IEEE Standard on Software Maintenance (IEEE 1219 1993), which goes on to define a fourth category.
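Before turning to that fourth category, here is a small, invented sketch of a perfective change: the observable behaviour is preserved, but a working yet inefficient implementation is replaced by a faster equivalent. The function names and data are illustrative assumptions only.

```python
# Hypothetical perfective maintenance: same interface, same results,
# better computational efficiency.

def find_duplicates_v1(values):
    # Original code: correct but O(n^2); it became slow once users
    # pushed far more data through the system than was foreseen.
    dups = []
    for i, v in enumerate(values):
        if v in values[i + 1:] and v not in dups:
            dups.append(v)
    return sorted(dups)

def find_duplicates_v2(values):
    # Perfective rewrite: O(n) using sets; the result is unchanged.
    seen, dups = set(), set()
    for v in values:
        if v in seen:
            dups.add(v)
        seen.add(v)
    return sorted(dups)

if __name__ == "__main__":
    data = [3, 1, 2, 3, 2, 5]
    assert find_duplicates_v1(data) == find_duplicates_v2(data) == [2, 3]
```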

3.4 Preventive

The long-term effect of corrective, adaptive and perfective change is expressed in Lehman's law of increasing entropy: as a large program is continuously changed, its complexity, which reflects deteriorating structure, increases unless work is done to maintain or reduce it (Lehman 1985). The IEEE defined preventive maintenance as "maintenance performed for the purpose of preventing problems before they occur" (IEEE 1219 1993). This is the process of changing software to improve its future maintainability or to provide a better basis for future enhancements. Preventive change is usually initiated from within the maintenance organisation with the intention of making programs easier to understand and hence to facilitate future maintenance work. It does not usually give rise to a substantial increase in the baseline functionality.

Preventive maintenance is rare (only about 5% of maintenance effort), the reason being that other pressures tend to push it to the end of the queue. For instance, a demand may come to develop a new system that will improve the organisation's competitiveness in the market; this is likely to be seen as more desirable than spending time and money on a project that delivers no new function. Still, if one considers the probability of a software unit needing change, and the time pressures that are often present when a change is requested, it makes a lot of sense to anticipate change and to prepare accordingly.

The most comprehensive and authoritative study of software maintenance was conducted by B. P. Lientz and E. B. Swanson (1980). Figure 1.2 depicts the distribution of maintenance activities by category, by percentage of time, from the Lientz and Swanson study of some 487 software organisations. Clearly, corrective maintenance (that is, fixing problems and routine debugging) is a small percentage of overall maintenance costs; Martin and McClure (1983) provide similar data.
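As an invented illustration of preventive change, the sketch below alters no functionality at all: hard-coded, duplicated tariff logic is restructured into data so that future changes become one-line edits. All names and figures are hypothetical.

```python
# Hypothetical preventive maintenance: identical behaviour, but a
# structure that is cheaper to change in future.

def postage_before(weight_grams):
    # Thresholds and rates are hard-coded and duplicated across
    # branches, so every tariff change risks introducing a defect.
    if weight_grams <= 100:
        return 1.10
    elif weight_grams <= 250:
        return 1.10 + 0.55
    else:
        return 1.10 + 0.55 + 1.20

BANDS = [(100, 1.10), (250, 0.55)]   # (upper weight limit, surcharge)
HEAVY_SURCHARGE = 1.20               # applied above the last band

def postage_after(weight_grams):
    # The same tariff expressed as data: adding a band is one line,
    # not another branch - which is the point of preventive work.
    price = 0.0
    for limit, charge in BANDS:
        price += charge
        if weight_grams <= limit:
            return price
    return price + HEAVY_SURCHARGE

if __name__ == "__main__":
    for w in (50, 100, 101, 250, 900):
        assert abs(postage_before(w) - postage_after(w)) < 1e-9
```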

3.5 Maintenance as Ongoing Support

This category of maintenance work refers to the service provided to satisfy non-programming related work requests. Ongoing support, although not a change to the software in itself, is essential for successful communication of desired changes. The objectives of ongoing support include effective communication between maintenance and end-user personnel, training of end users, and providing business information to users and their organisations to aid decision making.

Effective communication is essential, as maintenance is probably the most customer-intensive part of the software life cycle: a greater proportion of maintenance effort is spent providing enhancements requested by customers than is spent on other types of system change. Good customer relations are important for several reasons; they can lead to a reduction in the misinterpretation of users' change requests, a better understanding of users' business needs, and increased user involvement in the maintenance process. Failure to achieve the required level of communication between the maintenance organisation and those affected by the software changes may eventually lead to software failure.

Training of end users: typical services provided by the maintenance organisation include manuals, telephone support, a help desk, on-site visits, informal short courses and user groups.

Business information: users need various types of timely and accurate business information (for example time, cost and resource estimates) to enable them to take strategic business decisions. Questions such as whether to enhance the existing system or replace it completely may need to be considered.

Software Maintenance

4. The Importance of Categorising Software Changes

In principle, software maintenance activities can be classified individually. In practice, however, they are often intertwined. For example, in the course of modifying a program due to the introduction of a new operating system (adaptive change), obscure 'bugs' may be introduced; these have to be traced and dealt with (corrective maintenance). Similarly, the introduction of a more efficient sorting algorithm into a data processing package (perfective maintenance) may require that the existing program code be restructured (preventive maintenance). Figure 1.3 depicts the potential relations between the different types of software change.

Despite the overlapping nature of these changes, there are several reasons why a good understanding of the distinction between them is important. Firstly, it allows management to set priorities for change requests; some changes require a faster response than others. Secondly, there are limitations to software change. Ideally changes are implemented as the need for them arises, but in practice this is not always possible, for several reasons:

Resource limitations: Some of the major hindrances to the quality and productivity of maintenance activities are the lack of skilled and trained maintenance programmers and of suitable tools and environments to support their work. Cost may also be an issue.

Quality of the existing system: In some 'old' systems this can be so poor that any change can lead to unpredictable ripple effects and a potential collapse of the system.

Organisational strategy: The desire to be on a par with other organisations, especially rivals, can be a great determinant of the size of a maintenance budget.

Inertia: The resistance to change by users may prevent modification to a software product, however important or potentially profitable such change may be.

Thirdly, software is often subject to incremental release, where changes to a software product are not all made together. The changes take place incrementally, with minor changes usually implemented while a system is in operation. Major enhancements are usually planned and incorporated, together with other minor changes, in a new release or upgrade. The change introduction mechanism also depends on whether the software package is bespoke or off-the-shelf. With bespoke software, change can often be effected as the need for it arises; for off-the-shelf packages, users normally have to wait for the next upgrade.
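To make the operating-system example above concrete, here is a minimal, invented sketch of an adaptive change: the program's logic is untouched, but a hidden platform assumption is removed so that the code survives a change of environment. The paths and function names are hypothetical.

```python
# Hypothetical adaptive maintenance: removing a platform assumption
# after the system moves from a single UNIX host to a mixed estate.
import os
import tempfile

def report_path_before():
    # Worked only on the original UNIX host: the hard-coded path is an
    # undocumented assumption about the operating environment.
    return "/tmp/nightly_report.txt"

def report_path_after():
    # Adaptive change: let the standard library locate the platform's
    # temporary directory, whatever the operating system.
    return os.path.join(tempfile.gettempdir(), "nightly_report.txt")

if __name__ == "__main__":
    # e.g. /tmp/nightly_report.txt on UNIX, C:\...\Temp\... on Windows
    print(report_path_after())
```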

Swanson's definitions allow the software maintenance practitioner to tell the user that a certain portion of a maintenance organisation's effort is devoted to user-driven or environment-driven requirements. The user requirements should not be buried with other types of maintenance. The point here is that these types of update are not corrective in nature - they are improvements - and no matter which definitions are used, it is imperative to discriminate between corrections and enhancements. Studying the types of maintenance activity above makes it clear that, regardless of which tools and development model are used, maintenance is needed, and that maintenance is more than fixing bugs. This view is supported by Jones (1991), who comments that "organisations lump enhancements and the fixing of bugs together". He goes on to say that this distorts both activities and leads to confusion and mistakes when estimating the time and budget needed to implement changes. Even worse, this "lumping" perpetuates the notion that maintenance is fixing bugs and mistakes. Because many maintainers do not use maintenance categories, there is confusion and misinformation about maintenance.

Software Maintenance

5. A Comparison between Development and Maintenance


Although maintenance could be regarded as a continuation of development, there is a fundamental difference between the two activities, and it arises from the constraints that the existing system imposes on maintenance. For example, in the course of designing an enhancement, the designer needs to investigate the current system to abstract the architectural and low-level designs. This information is then used to:

ascertain how the change can be accommodated

predict the potential ripple effect of the change

determine the skills and knowledge required to do the job

To explain the difference between new development and software maintenance further, Jones (1986) provides an interesting analogy: "The task of adding functional requirements to existing systems can be likened to the architectural work of adding a new room to an existing building. The design will be severely constrained by the existing structure, and both the architect and the builders must take care not to weaken the existing structure when additions are made. Although the costs of the new room usually will be lower than the costs of constructing an entirely new building, the costs per square foot may be much higher because of the need to remove existing walls, reroute plumbing and electrical circuits, and take special care to avoid disrupting the current site" (quoted in Corbi 1989).

Software Maintenance

6. The Cost of Maintenance

Although there is no real agreement on the actual costs, sufficient data exist to indicate that maintenance does consume a large portion of overall software lifecycle costs. Arthur (1988) states that only a quarter to a third of all lifecycle costs are attributable to software development, and that some 67% of lifecycle costs are expended in the operations and maintenance phase of the life cycle. Jones (1994) states that maintenance will continue to grow and become the primary work of the software industry. Table 1.2 (Arthur 1988) provides a sample of data compiled by various people and organisations regarding the percentage of lifecycle costs devoted to maintenance. These data were collected in the late 1970s, prior to all the software engineering innovations, methods and techniques that purport to decrease overall costs. Yet despite those innovations, recent literature suggests that maintenance is gaining more notoriety because of its increasing costs. A research marketing firm, the Gartner Group, estimated that U.S. corporations alone spend over $30 billion annually on software maintenance, and that in the 1990s 95% of lifecycle costs would go to maintenance (Moad 1990; Figure 1.4). Clearly, maintenance is costly, and the costs are increasing; the innovative software engineering efforts of the 1970s and 1980s have not reduced lifecycle costs.

Table 1.2 Maintenance Costs as a Percentage of Total Software Life-cycle Costs

Survey              Year    Maintenance (%)
Canning             1972    40-80
Boehm               1973    60-70
deRose/Nyman        1976    60
Mills               1976    75
Zelkowitz           1979    67
Cashman and Holt    1979    60-80

Today programmers' salaries consume the majority of the software budget, and most of their time is spent on maintenance, as it is a labour-intensive activity. As a result, organisations have seen the operations and maintenance phase of the software life cycle consume more and more resources over time. Others attribute rising maintenance costs to the age and lack of structure of the software. Osborne and Chikofsky (1990) state that much of today's software is ten to fifteen years old and was created without the benefit of the best design and coding techniques. The result is poorly designed structures, poor coding, poor logic and poor documentation for the systems that must be maintained. Over 75% of maintenance costs are for providing enhancements in the form of adaptive and perfective maintenance. These enhancements are significantly more expensive to complete than corrections, as they require major redesign work and considerably more coding than a corrective action. The result is that user-driven enhancements (improvements) dominate the costs over the life cycle.

Several later studies confirm that Lientz and Swanson's data from 1980 were still accurate in 1990. Table 1.3 summarises data from several researchers and shows that non-corrective work ranges from 78% to 84% of the overall effort; the majority of maintenance costs are therefore being spent on enhancements. Maintenance is expensive, in short, because requirements and environments change, and the majority of maintenance costs are driven by users.

Table 1.3 Percentage effort spent on Corrective and Non-corrective maintenance

Maintenance Category    Lientz & Swanson 1980    Ball 1987    Deklava 1990    Abran 1990
Corrective              22%                      17%          16%             21%
Non-corrective          78%                      83%          84%             79%

The situation at the turn of the millennium shows little sign of improvement.

The maintenance budget: Normally, after completing a lengthy and costly software development effort, organisations do not want to devote significant resources to post-delivery activities. Defining what is meant by "a significant resource" is in itself problematic: how much should maintenance cost? Underestimation of maintenance costs is partly human nature, as developers do not want to believe that maintenance for the new system will consume a significant portion of lifecycle costs. They hope that the new system will be the exception to the norm because modern software engineering techniques and methods were used, and that the maintenance phase will therefore not consume large amounts of money. Accordingly, sufficient money is often not allocated for maintenance, and with limited resources maintainers can only provide limited maintenance. The lack of financial resources for maintenance is due in large part to the lack of recognition that "maintenance" is primarily enhancing delivered systems rather than correcting bugs.

Another factor driving high maintenance costs is that needed items are often not included in the development phase, usually because of schedule or monetary constraints, but are deferred until the operations and maintenance phase. Maintainers therefore end up spending a large amount of their time coding the functions that were delayed until maintenance. As a result, development costs remain within budget but maintenance costs increase.

As can be seen from Table 1.3, the maintenance categories are particularly useful when trying to explain the real costs of maintenance. If organisations have this data, they will understand why maintenance is expensive and will be able to defend their estimates of the time and resources required to complete tasks.


Software Maintenance

7. A Software Maintenance Framework

7.1 Overview

To a large extent, the requirement for software systems to evolve in order to accommodate changing user needs contributes to the high cost of maintenance. However, there are other factors which contribute indirectly by hindering maintenance activities. A Software Maintenance Framework (a derivative of the framework proposed by Haworth et al. 1992) will be used to discuss some of the factors that contribute to the maintenance challenge. The elements of this framework are the users and their requirements, the organisational and operational environments, the maintenance process, the maintenance personnel, and the software product (Table 1.4).

Table 1.4 Components of the Software Maintenance Framework

Component                      Feature
1. Users & requirements        Requests for additional functionality, error correction and improved maintainability; requests for non-programming related support
2. Organisational environment  Competition in the market place; change in policies
3. Operational environment     Software innovations; hardware innovations
4. Maintenance process         Capturing requirements; variation in programming and working practices; paradigm shift; error detection and correction
5. Software product            Quality of documentation; malleability of programs; complexity of programs; program structure; inherent quality; maturity and difficulty of application domain
6. Maintenance personnel       Staff turnover; domain expertise

7.2 Users and their Requirements

Users often have little understanding of software maintenance and so can be unsupportive of the maintenance process. They may take the view that:

software maintenance is like hardware maintenance

changing software is easy

changes cost too much and take too long

Users may also be unaware that their request:

may involve major structural changes to the software which take time to implement

must be feasible, desirable, prioritised, scheduled and resourced

may conflict with other requests or with company policy, such that it is never implemented

7.3 Organisational and Operational Environment

An environmental factor is a factor which acts upon the product from outside and influences its form or operation. The two categories of environment are:

The organisational environment, e.g. business rules, government regulations, taxation policies, work patterns, and competition in the market place.

The operational environment: software systems (e.g. operating systems, database systems, compilers) and hardware systems (e.g. processor, memory, peripherals).

In this environment the scheduling of maintenance work can be problematic, as urgent fixes always go to the head of the queue, upsetting schedules, and unexpected, mandatory large-scale changes also demand urgent attention. Further problems can stem from the organisational environment in that the maintenance budget is often underfunded.

7.4 Maintenance Process

The term process here refers to any activity carried out or action taken, either by a machine or by maintenance personnel, during software maintenance. The facets of a maintenance process which affect the evolution of the software or contribute to maintenance costs include:

The difficulty of capturing change (and changing) requirements: requirements and user problems only become clearer when a system is in use. Users may also be unable to express their requirements in a form understandable to the analyst or programmer - the 'information gap'. The requirements and changes evolve, so the maintenance team is always playing "catch-up".

Variation in programming practice: this may present difficulties if there is no consistency, so standards or stylistic guidelines are often provided. Working practices affect the way a change is effected; "time to change" can be adversely affected by "clever code" (see the sketch after this list), undocumented assumptions, and undocumented design and implementation decisions. After some time, programmers find it difficult to understand even their own code.

Paradigm shift: older systems developed prior to the advent of structured programming techniques may be difficult to maintain. However, existing programs may be restructured or 'revamped' using techniques and tools such as structured programming, object orientation, hierarchical program decomposition, reformatters and pretty-printers.

Error detection and correction: error-free software is virtually non-existent, and software products tend to carry 'residual' errors. The later these errors are discovered, the more expensive they are to correct; the cost gets even higher if the errors are detected during the maintenance phase (Figure 1.5).
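The following invented fragment illustrates the "clever code" problem referred to above: the two functions are equivalent, but one hides its assumptions while the other documents them, and that difference is what determines the time to change. The record format shown is hypothetical.

```python
# Hypothetical illustration: 'clever code' versus maintainable code.

def parse_clever(s):
    # Terse, but the maintainer must reverse-engineer the intent:
    # why s[1:-1]? why int()? what input format is assumed?
    return {k: int(v) for k, v in (p.split("=") for p in s[1:-1].split(";"))}

def parse_documented(record):
    """Parse a record of the form '<a=1;b=2>' into {'a': 1, 'b': 2}.

    Documented assumption: records arrive wrapped in '<' and '>' and
    all values are integers. If that ever changes, edit here.
    """
    body = record[1:-1]           # strip the '<' and '>' wrapper
    fields = body.split(";")      # fields are ';'-separated
    pairs = (f.split("=") for f in fields)
    return {key: int(value) for key, value in pairs}

if __name__ == "__main__":
    assert parse_clever("<a=1;b=2>") == parse_documented("<a=1;b=2>") == {"a": 1, "b": 2}
```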


7.5 Software Product

Aspects of a software product that contribute to the maintenance challenge include:

Maturity and difficulty of the application domain: The requirements of applications that have been widely used and are well understood are less likely to undergo substantial modification than those still in their infancy.

Inherent difficulty of the original problem: For example, programs dealing with simple problems such as sorting are obviously easier to handle than those used for more complex computations such as weather forecasting.

Quality of the documentation: The lack of up-to-date system documentation affects maintenance productivity. Maintenance is difficult to perform because of the need to understand (or comprehend) the program code; program understanding is a labour-intensive activity that increases costs. IBM estimated that a programmer spends around 50% of his or her time on program analysis.

Malleability of the programs: The malleable or 'soft' nature of software products makes them more vulnerable to undesirable modification than hardware items. Inadvertent software changes may have unknown and even fatal repercussions; this is particularly true of 'safety-related' or 'safety-critical' systems.

Inherent quality: The tendency for a system to decay as more changes are undertaken implies that preventive maintenance is needed to restore order to the programs.

7.6 Maintenance Personnel

This refers to the individuals involved in the maintenance of a software product: maintenance managers, analysts, designers, programmers and testers. The aspects of personnel that affect maintenance activities include the following:

Staff turnover: Owing to high staff turnover, many systems end up being maintained by individuals who are not the original authors, so a substantial proportion of the maintenance effort is spent just understanding the code. Staff who leave take irreplaceable knowledge with them.

Domain expertise: Staff may end up working on a system for which they have neither the system domain knowledge nor the application domain knowledge, and may therefore inadvertently cause the "ripple effect". This problem may be worsened by documentation that is absent, out of date or inadequate. A contrary situation is one in which a programmer becomes 'enslaved' to a certain application because he or she is the only person who understands it.

Obviously the factors of product, environment, user and maintenance personnel do not exist in isolation, but interact with one another. Three major types of relation and interaction can be identified - product/environment, product/user and product/maintenance personnel - as shown in the figure below.

Relationship between product and environment: as the environment changes, so must the product in order to remain useful.

Relationship between product and user: in order for the system to stay useful and acceptable to its users, it has to change to accommodate their changing requirements.

Interaction between personnel and product: the maintenance personnel who implement changes also act as receptors of the changes; that is, they serve as the main avenue by which changes in the other factors - user requirements, maintenance process, organisational and operational environments - act upon the software product. The nature of the maintenance process used and the attributes of the maintenance personnel will affect the quality of the change.


Software Maintenance

8. Potential Solutions to the Maintenance Problem

A number of possible solutions to maintenance problems have been suggested. They include:

budget and effort reallocation

complete replacement of the system

maintenance of the existing system

8.1 Budget and Effort Reallocation

Based on the observation that software maintenance costs at least as much as new development, some authors have proposed that, rather than developing systems that are unmaintainable or difficult to maintain, more time and resources should be invested in the specification and design of more maintainable systems. However, this is difficult to pursue, and even if it were possible, it would not address the problem of legacy systems that are already in a maintenance crisis.

8.2 Complete Replacement of the System

One might be tempted to suggest that if maintaining an existing system costs as much as developing a new one, why not develop a new system from scratch? In practice, of course, it is not that simple, as there are costs and risks involved. The organisation may not be able to afford the capital outlay, and there is no guarantee that the new system will function any better than the old one. Additionally, the existing system represents a valuable knowledge base that could prove useful for the development of future systems, reducing the chance of re-inventing the wheel; it would be unrealistic for an organisation to part with such an asset.

8.3 Maintenance of the Existing System

In most maintenance situations budget and effort reallocation is not possible, and completely redesigning the whole software system is usually undesirable (though nevertheless forced in some situations). Given that these approaches are beset with problems, maintaining the existing system is often the only alternative. Maintenance can be carried out, albeit in an ad hoc fashion, by making the necessary modifications as 'patches' to the source code, often by hitting alien code 'head on'. This approach is fraught with difficulty and is usually adopted only because of time pressures; corrective maintenance is almost always done this way for the same reason. An alternative and cost-effective solution is to apply preventive maintenance, which may take several forms. It may:

be applied to those sections of the software that are most often the target for change (this approach may be very cost-effective, as it has been estimated that 80% of changes affect only 30% of the code)

involve updating inaccurate documentation

involve restructuring, reverse engineering or the re-use of existing software components

Unfortunately, as noted earlier, preventive maintenance remains rare; other pressures tend to push it to the end of the queue.
