
CS2301 SOFTWARE ENGINEERING IMPORTANT SIXTEEN MARKS QUESTIONS


UNIT I SOFTWARE PRODUCT AND PROCESS
Introduction - S/W Engineering Paradigm - Verification - Validation - Life Cycle Models - System Engineering - Computer Based System - Business Process Engineering Overview - Product Engineering Overview.

1. Explain the Water Fall Software Process Model with neat diagram.
The waterfall model is the classical model of software engineering. It is one of the oldest models and is widely used in government projects and in many major companies. Because this model emphasizes planning in the early stages, it helps catch design flaws before they develop. In addition, its intensive documentation and planning make it work well for projects in which quality control is a major concern. It is used when requirements are well understood at the beginning, and it is also called the classic life cycle. It is a systematic, sequential approach to software development that begins with customer specification of requirements and progresses through planning, modeling, construction and deployment. The model suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding and testing. The waterfall method does not prohibit returning to an earlier phase, for example returning from the design phase to the requirements phase, but this involves costly rework. Each completed phase requires formal review and extensive documentation. Thus, oversights made in the requirements phase are expensive to correct later.

Advantages:
1. Easy to understand and implement.
2. Widely used and known.
3. Reinforces good habits: define-before-design, design-before-code.
4. Works well on mature products and with weak teams.

Disadvantages:

1. Idealized: it does not match reality well.
2. Unrealistic to expect accurate requirements so early in the project.
3. Difficult to integrate risk management.

2. Explain the Spiral S/W Process Model with neat diagram.
The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation. A software project repeatedly passes through these phases in iterations. In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral. Requirements are gathered during the planning phase. In the risk analysis phase, a process is undertaken to identify risks and alternative solutions. A prototype is produced at the end of the risk analysis phase. Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral. In the spiral model, the angular component represents progress, and the radius of the spiral represents cost.

The task regions of the Spiral Model are:
Customer communication: tasks required to establish effective communication between the developer and the customer.
Planning: estimation, scheduling and risk analysis.
Risk analysis: tasks required to assess technical and management risks.
Engineering: analysis and design.
Construction and release: code and test.
Customer evaluation: delivery and feedback.

Advantages:
1. High amount of risk analysis.
2. Good for large and mission-critical projects.
3. Software is produced early in the software life cycle.

Disadvantages:
1. Can be a costly model to use.
2. Risk analysis requires highly specific expertise.
3. The project's success is highly dependent on the risk analysis phase.
4. Does not work well for smaller projects.

3. Explain System Engineering Process in detail.


The system engineering process is concerned with specifying, designing, implementing, validating, deploying and maintaining socio-technical systems. It is concerned with the services provided by the system, the constraints on its construction and operation, and the ways in which it is used. It usually follows a waterfall model because of the need for parallel development of different parts of the system, and there is little scope for iteration between phases because hardware changes are very expensive; software may have to compensate for hardware problems. The process inevitably involves engineers from different disciplines who must work together, so there is much scope for misunderstanding: different disciplines use different vocabulary, much negotiation is required, and engineers may have personal agendas to fulfil.

SYSTEM REQUIREMENTS DEFINITION: Three types of requirement are defined at this stage:
Abstract functional requirements: system functions are defined in an abstract way.
System properties: non-functional requirements for the system in general are defined.
Undesirable characteristics: unacceptable system behaviour is specified.
THE SYSTEM DESIGN PROCESS: Process steps:
Partition requirements: organise requirements into related groups.
Identify sub-systems: identify a set of sub-systems which collectively can meet the system requirements.
Assign requirements to sub-systems: causes particular problems when COTS components are integrated.
Specify sub-system functionality.
Define sub-system interfaces: a critical activity for parallel sub-system development.

SUB-SYSTEM DEVELOPMENT PROCESS: Typically parallel projects develop the hardware, software and communications, and this may involve some COTS (Commercial Off-The-Shelf) systems procurement. Problems include a lack of communication across implementation teams, and a bureaucratic and slow mechanism for proposing system changes, which means the development schedule may be extended because of the need for rework.
SYSTEM INTEGRATION: The process of putting hardware, software and people together to make a system is called system integration. It should be tackled incrementally so that sub-systems are integrated one at a time. Interface problems between sub-systems are usually found at this stage, and problems may arise with uncoordinated deliveries of system components.
SYSTEM INSTALLATION: System installation issues are:
Environmental assumptions may be incorrect.
There may be human resistance to the introduction of a new system.
The system may have to coexist with alternative systems for some period.
There may be physical installation problems (e.g. cabling problems).
Operator training has to be identified.

SYSTEM EVOLUTION: The lifetime of large systems is very long, so they must evolve to meet changing requirements. This evolution may be costly. Existing systems that must be maintained are sometimes called legacy systems.
SYSTEM DECOMMISSIONING: Taking the system out of service after its useful lifetime is called system decommissioning.

4. Write short notes on Business Process Engineering overview and Product Engineering overview?
BUSINESS PROCESS ENGINEERING: Business process engineering defines architectures that will enable a business to use information effectively. It involves the specification of the appropriate computing architecture and the development of the software architecture for the organization's computing resources. Three different architectures must be analyzed and designed within the context of business objectives and goals:

The data architecture provides a framework for the information needs of a business (e.g., an ERD).
The application architecture encompasses those elements of a system that transform objects within the data architecture for some business purpose.
The technology infrastructure provides the foundation for the data and application architectures. It includes the hardware and software that are used to support the applications and data.

PRODUCT ENGINEERING: Product engineering translates the customer's desire for a set of defined capabilities into a working product. It achieves this goal by establishing product architecture and a support infrastructure. Product architecture components consist of people, hardware, software, and data. Support infrastructure includes the technology required to tie the components together and the information to support the components. Requirements engineering elicits the requirements from the customer and allocates function and behavior to each of the four components. System component engineering happens next as a set of concurrent activities that address each of the components separately. Each component takes a domain-specific view but maintains communication with the other domains. The actual activities of the engineering discipline take on an element view. Analysis modeling allocates requirements into function, data, and behavior.

Design modeling maps the analysis model into data/class, architectural, interface, and component design.

UNIT II SOFTWARE REQUIREMENTS


Functional and Non-Functional - Software Document - Requirement Engineering Process - Feasibility Studies - Software Prototyping - Prototyping in the Software Process - Data, Functional and Behavioral Models - Structured Analysis and Data Dictionary.

1. What is Software Prototyping? Explain the prototyping approaches in software process?


SOFTWARE PROTOTYPING: Prototyping is the rapid development of a system. Its principal use is to help customers and developers understand the requirements for the system.
Requirements elicitation: users can experiment with a prototype to see how the system supports their work.

Requirements validation: the prototype can reveal errors and omissions in the requirements. Prototyping can therefore be considered a risk reduction activity.
PROTOTYPING APPROACHES IN SOFTWARE PROCESS:
[Figure: Prototyping approaches - outline requirements feed either evolutionary prototyping, which leads to a delivered system, or throw-away prototyping, which leads to an executable prototype plus a system specification.]

There are two approaches:
Evolutionary prototyping - in this approach the initial prototype is prepared and then refined through a number of stages to the final system.
Throw-away prototyping - in this approach a rough practical implementation of the system is produced so that requirement problems can be identified; it is then discarded, and the system is developed using some other engineering paradigm.
EVOLUTIONARY PROTOTYPING:
Objective: the principal objective of this model is to deliver a working system to the end-user. Example: AI systems. It is based on techniques which allow rapid system iterations. Verification is impossible as there is no specification; validation means demonstrating the adequacy of the system.

The specification, design and implementation are intertwined. The system is developed as a series of increments that are delivered to the customer. Techniques for rapid system development are used, such as CASE tools and 4GLs. User interfaces are usually developed using a GUI development toolkit.

Advantages:
Fast delivery of the working system.
The user is involved while the system is being developed.
A more useful system can be delivered.
Problems:
Management problems.
Maintenance problems.
Verification.
THROW-AWAY PROTOTYPING:
Objective: the principal objective of this model is to validate or derive the system requirements. It is developed to reduce requirement risks.

[Figure: Throw-away prototyping process - outline requirements -> develop prototype (drawing on reusable components) -> evaluate prototype -> specify system -> develop software -> validate system -> delivered software system.]

The prototype is developed from an initial specification, delivered for experimentation and then discarded.
Advantage: requirement risks are greatly reduced.
Problems:
The prototype can be undocumented.
Changes made during software development may degrade the system structure.
Organizational quality standards may not be strictly applied.
INCREMENTAL DEVELOPMENT: The system is developed and delivered in increments after establishing an overall architecture. Requirements and specifications for each increment may be developed. Users may experiment with delivered increments while others are being developed.

[Figure: Incremental development process - define system deliverables -> design system architecture -> specify system increment -> build system increment -> validate increment -> integrate increment -> validate system -> if the system is incomplete, specify the next increment; otherwise deliver the final system.]

Therefore, these increments serve as a form of prototype system. The main intention is to combine some of the advantages of prototyping with a more manageable process and better system structure.

2. With suitable examples and required diagrammatic representation explain the Functional Modeling in detail.
A functional model describes the computational structure of the system. A system, even though it may have many use cases, should have only one functional model, which may be composed of many functional diagrams. The activity of creating a functional model is commonly known as functional modeling. A functional model describes how the system changes: this is best viewed with an activity diagram, which shows how the system moves from one state to another depending on what actions are being performed or on the overall state of the system. The key features of a functional model are functions and flows. It can be represented with the help of functional diagrams, for example:
Activity Diagrams
Use Case Diagrams
Context Models
ACTIVITY DIAGRAM: The purpose of an activity diagram is to represent data and activity flows in an application. Within an activity diagram there are many key modeling concepts; the main ones are:

An activity node represents an action or a set of actions to be taken.
A control flow shows the sequence of execution.
The initial node marks the beginning of the set of actions (the start).
The final node stops all flow in an activity diagram.
A decision node represents a test condition, much like an IF statement.

USE CASE DIAGRAM: Use case diagrams aid the creation, visualization and documentation of various aspects of the software engineering process. Use cases come in pairs:

Use Case Diagram: an overview of the system.
Use Case Description: details about each function.

An actor is something that performs use cases upon a system. An actor is just an entity: it can be a human or another artifact that plays an external role in the system, as long as it either directly uses the system or is used by the system. For each use case we have to know the entry conditions (preconditions) and exit conditions (post-conditions): basically, what is true before the use case and what is true after the use case.
CONTEXT MODEL: Context models are used to illustrate the boundaries of a system. Social and organisational concerns may affect the decision on where to position system boundaries. Architectural models show the system and its relationship with other systems. An example of a context model is an ATM system:
[Figure: Context model of an auto-teller system - the auto-teller system is connected to a security system, branch accounting system, branch counter system, maintenance system, usage database and account database.]


3. With suitable examples and required diagrammatic representation explain the Behavioral Modeling in detail.
Behavioural models are used to describe the overall behaviour of a system. Two types of behavioural model are,

Data processing models, which show how data is processed as it moves through the system.
State machine models, which show the system's response to events.
Both of these models are required for a description of the system's behaviour.
DATA PROCESSING MODELS: Data flow diagrams are used to model the system's data processing. They show the processing steps as data flows through a system and are an intrinsic part of many analysis methods. They use a simple and intuitive notation that customers can understand, and they show end-to-end processing of data.
DATA FLOW DIAGRAMS: DFDs model the system from a functional perspective. Tracking and documenting how the data associated with a process moves through the system is helpful in developing an overall understanding of the system. Data flow diagrams may also be used to show the data exchange between a system and other systems in its environment.
[Figure: Data flow diagram of order processing - order details and a blank order form are used to complete the order form; the completed order form is validated, the signed order is recorded in the orders file, the available budget is adjusted in the budget file, and the checked and signed order plus an order notification is sent to the supplier.]

STATE MACHINE MODELS: These model the behaviour of the system in response to external and internal events. They show the system's responses to stimuli, so they are often used for modelling real-time systems. State machine models show system states as nodes and events as arcs between these nodes: when an event occurs, the system moves from one state to another. Statecharts are an integral part of the UML.


An example of a state machine model is the microwave oven model:


[Figure: State machine model of a simple microwave oven - states include Waiting (display time), Half power (set power = 300), Full power (set power = 600), Set time (get number, set time), Disabled (display 'Waiting'), Enabled (display 'Ready') and Operation (operate oven); transitions are triggered by events such as Half power, Full power, Number, Timer, Door open, Door closed, Start and Cancel.]
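To make the transition-table idea concrete, here is a minimal Python sketch (not part of the original notes; the transition table below is an assumption based on the figure) of how the oven's states and event-driven transitions could be encoded:

# Minimal table-driven state machine sketch for the microwave oven example.
# State and event names follow the figure; the transition table is an assumption.
TRANSITIONS = {
    ("Waiting", "Full power"):   "Full power",
    ("Waiting", "Half power"):   "Half power",
    ("Full power", "Timer"):     "Set time",
    ("Half power", "Timer"):     "Set time",
    ("Set time", "Door open"):   "Disabled",
    ("Set time", "Door closed"): "Enabled",
    ("Enabled", "Start"):        "Operation",
    ("Operation", "Cancel"):     "Waiting",
}

def step(state, event):
    """Return the next state for an event, or stay in the same state if unhandled."""
    return TRANSITIONS.get((state, event), state)

if __name__ == "__main__":
    state = "Waiting"
    for event in ["Full power", "Timer", "Door closed", "Start", "Cancel"]:
        state = step(state, event)
        print(event, "->", state)

A table-driven design like this keeps the state logic declarative, which mirrors how the statechart itself is read.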

4. Explain the various functionalities accomplished in the Requirement Engineering Process in detail.
OBJECTIVES:

To describe the principal requirements engineering activities and their relationships.
To introduce techniques for requirements elicitation and analysis.
To describe requirements validation and the role of requirements reviews.
To discuss the role of requirements management in support of other requirements engineering processes.
REQUIREMENTS ENGINEERING PROCESS: The processes used for requirements engineering vary widely depending on the application domain, the people involved and the organisation developing the requirements. However, there are a number of generic activities common to all processes:

Requirements Elicitation; Requirements Analysis; Requirements Validation; Requirements Management.


FEASIBILITY STUDIES: A feasibility study decides whether or not the proposed system is worthwhile. It is a short, focused study that checks:
if the system contributes to organisational objectives;
if the system can be engineered using current technology and within budget;
if the system can be integrated with other systems that are used.
ELICITATION AND ANALYSIS: Sometimes called requirements elicitation or requirements discovery. It involves technical staff working with customers to find out about the application domain, the services that the system should provide and the system's operational constraints. It may involve end-users, managers, engineers involved in maintenance, domain experts, trade unions, etc. These are called stakeholders.
REQUIREMENTS VALIDATION: Concerned with demonstrating that the requirements define the system that the customer really wants. Requirements error costs are high, so validation is very important: fixing a requirements error after delivery may cost up to 100 times the cost of fixing an implementation error. Requirements validation techniques:

Requirements reviews: systematic manual analysis of the requirements.
Prototyping: using an executable model of the system to check requirements.
Test-case generation: developing tests for requirements to check testability.

REQUIREMENTS MANAGEMENT: Requirements management is the process of managing changing requirements during the requirements engineering process and system development. Requirements are inevitably incomplete and inconsistent: new requirements emerge during the process as business needs change and a better understanding of


the system is developed; different viewpoints have different requirements and these are often contradictory. During the requirements engineering process, the following should be planned:
Requirements identification: how requirements are individually identified.
A change management process: the process followed when analysing a requirements change.
Traceability policies: the amount of information about requirements relationships that is maintained.
CASE tool support: the tool support required to help manage requirements change.
Traceability: traceability is concerned with the relationships between requirements, their sources and the system design.
Source traceability: links from requirements to the stakeholders who proposed them.
Requirements traceability: links between dependent requirements.
Design traceability: links from the requirements to the design.
CASE Tool Support:

Requirements storage: requirements should be maintained in a secure, managed data store.
Change management: the process of change management is a workflow process whose stages can be defined and the information flow between these stages partially automated.

Change Management:

It should apply to all proposed changes to the requirements. The principal stages are:
Problem analysis: discuss the requirements problem and propose a change.
Change analysis and costing: assess the effects of the change on other requirements.

Change implementation: modify the requirements document and other documents to reflect the change.


UNIT III ANALYSIS, DESIGN CONCEPTS AND PRINCIPLES


Systems Engineering - Analysis Concepts - Design Process and Concepts - Modular Design - Design Heuristic - Architectural Design - Data Design - User Interface Design - Real Time Software Design - System Design - Real Time Executives - Data Acquisition System - Monitoring and Control System.

1. Explain the Modular Design with necessary diagrams.


EFFECTIVE MODULAR DESIGN: Modularity has become an accepted approach in all engineering disciplines. A modular design reduces complexity, facilitates change, and results in easier implementation by encouraging parallel development of different parts of a system.

FUNCTIONAL INDEPENDENCE: The concept of functional independence is a direct outgrowth of modularity and the concepts of abstraction and information hiding. Functional independence is achieved by developing modules with "single-minded" function and an "aversion" to excessive interaction with other modules. Stated another way, we want to design software so that each module addresses a specific sub-function of requirements and has a simple interface when viewed from other parts of the program structure. It is fair to ask why independence is important. Software with effective modularity, that is, independent modules, is easier to develop because function may be compartmentalized and interfaces are simplified. Independent modules are easier to maintain because secondary effects caused by design or code modification are limited, error propagation is reduced, and reusable modules are possible. To summarize, functional independence is a key to good design, and design is the key to software quality. Independence is measured using two qualitative criteria: cohesion and coupling. Cohesion is a measure of the relative functional strength of a module. Coupling is a measure of the relative interdependence among modules.

COHESION: Cohesion is a natural extension of the information hiding concept. A cohesive module performs a single task within a software procedure, requiring little interaction with procedures being performed in other parts of a program. Stated simply, a cohesive module should do just one thing. Cohesion may be represented as a "spectrum." We always strive for high cohesion, although the mid-range of the spectrum is often acceptable. Low-end cohesiveness is much "worse" than middle range, which is nearly as "good" as high-end cohesion. In practice, a designer need not be concerned with categorizing cohesion in a specific module. Rather, the overall concept should be understood and low levels of cohesion should be avoided when modules are designed. At the low end of the spectrum, we encounter a module that performs a set of tasks that relate to each other loosely, if at all. Such modules are termed coincidentally cohesive. A module that performs tasks that are related logically is logically cohesive, e.g., one module may read all kinds of input. When a module contains tasks that are related by the fact that all must be executed


within the same span of time, the module exhibits temporal cohesion. As an example of low cohesion, consider a module that performs error processing for an engineering analysis package. The module is called when computed data exceed pre-specified bounds. It performs the following tasks: (1) computes supplementary data based on original computed data, (2) produces an error report on the user's workstation, (3) performs follow-up calculations requested by the user, (4) updates a database, and (5) enables menu selection for subsequent processing. Although the preceding tasks are loosely related, each is an independent functional entity that might best be performed as a separate module. Combining the functions into a single module can serve only to increase the likelihood of error propagation when a modification is made to one of its processing tasks. Moderate levels of cohesion are relatively close to one another in the degree of module independence. When processing elements of a module are related and must be executed in a specific order, procedural cohesion exists. When all processing elements concentrate on one area of a data structure, communicational cohesion is present. High cohesion is characterized by a module that performs one distinct procedural task.

COUPLING: Coupling is a measure of interconnection among modules in a software structure. Coupling depends on the interface complexity between modules, the point at which entry or reference is made to a module, and what data pass across the interface. In software design, we strive for the lowest possible coupling. Simple connectivity among modules results in software that is easier to understand and less prone to a "ripple effect", caused when errors occur at one location and propagate through a system.

Modules a and d are subordinate to different modules. Each is unrelated and therefore no direct coupling occurs. Module c is subordinate to module a and is accessed via a conventional argument list, through which data are passed. As long as a simple argument list is present, low coupling is exhibited in this portion of structure. A variation of data coupling, called stamp coupling is found when a portion of a data structure is passed via a module interface. This occurs between


modules b and a. At moderate levels, coupling is characterized by passage of control between modules. Control coupling is very common in most software designs and occurs when a control flag is passed between modules d and e. High coupling occurs when a number of modules reference a global data area (common coupling). Modules c, g, and k each access a data item in a global data area. Module c initializes the item. Later, module g recomputes and updates the item. Let's assume that an error occurs and g updates the item incorrectly. Much later in processing, module k reads the item, attempts to process it, and fails, causing the software to abort. The apparent cause of the abort is module k; the actual cause is module g. Diagnosing problems in structures with considerable common coupling is time consuming and difficult. However, this does not mean that the use of global data is necessarily "bad." It does mean that a software designer must be aware of the potential consequences of common coupling and take special care to guard against them. The highest degree of coupling, content coupling, occurs when one module makes use of data or control information maintained within the boundary of another module. Secondarily, content coupling occurs when branches are made into the middle of a module. This mode of coupling can and should be avoided.
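As an illustrative sketch (the module and variable names below are invented for illustration, not taken from the text), the difference between low data coupling and common coupling can be seen in a few lines of Python:

# Data coupling: modules communicate only through a simple argument list.
def compute_total(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)

# Common coupling: modules share a global data area, so an incorrect update
# in one module (update_rate) causes a failure that only shows up later in
# another module (report_total), which makes the fault hard to diagnose.
SHARED = {"tax_rate": 0.08, "prices": [10.0, 20.0]}

def update_rate(rate):
    SHARED["tax_rate"] = rate               # silently changes shared state

def report_total():
    return sum(SHARED["prices"]) * (1 + SHARED["tax_rate"])

if __name__ == "__main__":
    print(compute_total([10.0, 20.0], 0.08))  # low (data) coupling: testable in isolation
    update_rate(-1.5)                          # bad update made elsewhere...
    print(report_total())                      # ...surfaces as a wrong result here

The data-coupled function can be tested on its own, while the common-coupled pair fails in a way that points at the wrong module, just as described above.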

2. List out Design Heuristics for effective Modular Design


Evaluate the first iteration of the program structure to reduce coupling and improve cohesion.
Attempt to minimize structures with high fan-out; strive for fan-in as structure depth increases.
Keep the scope of effect of a module within the scope of control for that module.
Evaluate module interfaces to reduce complexity, reduce redundancy, and improve consistency.
Define modules whose function is predictable and not overly restrictive (e.g. a module that only implements a single sub-function).
Strive for controlled entry modules; avoid pathological connections (e.g. branches into the middle of another module).

Modular Design Evaluation Criteria:


Modular Decomposability - provides a systematic means for breaking a problem into sub-problems.
Modular Composability - supports reuse of existing modules in new systems.
Modular Understandability - a module can be understood as a stand-alone unit.
Modular Continuity - side-effects due to module changes are minimized.
Modular Protection - side-effects due to processing errors are minimized.

3. Write short notes on User Interface Design.


User interface design means designing effective interfaces for software systems.
OBJECTIVES:


To explain different interaction styles.
To introduce styles of information presentation.
To describe the user support which should be built in to user interfaces.
To introduce usability attributes and approaches to system evaluation.

USER INTERFACE DESIGN PRINCIPLES: UI design must take account of the needs, experience and capabilities of the system users. Designers should be aware of people's physical and mental limitations (e.g. limited short-term memory) and should recognise that people make mistakes. UI design principles underlie interface designs, although not all principles are applicable to all designs.
User familiarity: The interface should be based on user-oriented terms and concepts rather than computer concepts. For example, an office system should use concepts such as letters, documents and folders rather than directories, file identifiers, etc.
Consistency: The system should display an appropriate level of consistency. Commands and menus should have the same format, command punctuation should be similar, etc.
Minimal surprise: If a command operates in a known way, the user should be able to predict the operation of comparable commands.
Recoverability: The system should provide some resilience to user errors and allow the user to recover from errors. This might include an undo facility, confirmation of destructive actions, 'soft' deletes, etc.
User guidance: Some user guidance such as help systems, on-line manuals, etc. should be supplied.
User diversity: Interaction facilities for different types of user should be supported. For example, some users have seeing difficulties and so larger text should be available.
INTERFACE EVALUATION: Some evaluation of a user interface design should be carried out to assess its suitability. Full-scale evaluation is very expensive and impractical for most systems. Ideally, an interface should be evaluated against a usability specification; however, it is rare for such specifications to be produced.

4. Write short notes on Real Time Software Design.


Real-time software design means designing embedded software systems whose behaviour is subject to timing constraints.


OBJECTIVES:
To describe a design process for real-time systems.
To explain the role of a real-time executive.
To introduce generic architectures for monitoring and control and for data acquisition systems.
REAL TIME SYSTEMS: A real-time system is a system which monitors and controls its environment. It is inevitably associated with two kinds of hardware device:
Sensors: collect data from the system environment.
Actuators: change (in some way) the system's environment.
Time is critical: real-time systems MUST respond within specified times. A real-time system is a software system where the correct functioning of the system depends on both the results produced by the system and the time at which these results are produced. A soft real-time system is a system whose operation is degraded if results are not produced according to the specified timing requirements. A hard real-time system is a system whose operation is incorrect if results are not produced according to the timing specification.
[Figure: General model of a real-time system - a number of sensors feed stimuli to the real-time control system, which sends responses to a number of actuators.]

SYSTEM ELEMENTS:
Sensor control processes: collect information from sensors and may buffer information collected in response to a sensor stimulus.
Data processor: carries out processing of collected information and computes the system response.
Actuator control: generates control signals for the actuator.


[Figure: Sensor/actuator control processes - a stimulus from a sensor passes to the sensor control process, then to the data processor, which sends a response to the actuator control process driving the actuator.]

SYSTEM DESIGN: Design both the hardware and the software associated with the system, partitioning functions to either hardware or software. Design decisions should be made on the basis of non-functional system requirements. Hardware delivers better performance but potentially longer development and less scope for change.
MONITORING AND CONTROL SYSTEMS: This is an important class of real-time systems. They continuously check sensors and take actions depending on sensor values. Monitoring systems examine sensors and report their results; control systems take sensor values and control hardware actuators.
DATA ACQUISITION SYSTEMS: These collect data from sensors for subsequent processing and analysis. Data collection processes and processing processes may have different periods and deadlines, and data collection may be faster than processing, e.g. collecting information about an explosion. Circular or ring buffers are a mechanism for smoothing these speed differences.
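As a sketch of the ring-buffer idea (an assumed illustration, not a prescribed design), a fixed-size circular buffer lets a fast collection process keep writing while a slower processing process reads at its own pace, with the oldest samples overwritten when the buffer fills:

# Minimal circular (ring) buffer sketch for smoothing speed differences
# between data collection and data processing.
class RingBuffer:
    def __init__(self, capacity):
        self.data = [None] * capacity
        self.capacity = capacity
        self.head = 0          # next write position
        self.count = 0         # number of valid items

    def put(self, item):
        """Write a sample; overwrite the oldest one when the buffer is full."""
        self.data[self.head] = item
        self.head = (self.head + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def get(self):
        """Read the oldest unread sample, or None if the buffer is empty."""
        if self.count == 0:
            return None
        tail = (self.head - self.count) % self.capacity
        item = self.data[tail]
        self.count -= 1
        return item

if __name__ == "__main__":
    buf = RingBuffer(4)
    for sample in range(6):                  # fast producer writes 6 samples into 4 slots
        buf.put(sample)
    print([buf.get() for _ in range(4)])     # slower consumer sees the 4 newest: [2, 3, 4, 5]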

UNIT IV TESTING
Taxonomy of Software Testing - Types of S/W Test - Black Box Testing - Testing Boundary Conditions - Structural Testing - Test Coverage Criteria Based on Data Flow Mechanisms - Regression Testing - Unit Testing - Integration Testing - Validation Testing - System Testing and Debugging - Software Implementation Techniques
1. Explain Cyclomatic Complexity and its Calculation with example.
The number of tests needed to test all control statements equals the cyclomatic complexity. Cyclomatic complexity equals the number of simple conditions in a program plus 1 (equivalently, E - N + 2 for a flow graph with E edges and N nodes).


Useful if used with care. Does not imply adequacy of testing. Although all paths are executed, all combinations of paths are not executed.
[Figure: Flow graph of a binary search routine (nodes 1 to 9) - the decisions are the loop condition while bottom <= top, the test elemArray[mid] == key, and the test elemArray[mid] < key; the exit path (bottom > top) leads through nodes 8 and 9.]

Independent Paths:
1, 2, 3, 8, 9
1, 2, 3, 4, 6, 7, 2
1, 2, 3, 4, 5, 7, 2
1, 2, 3, 4, 6, 7, 2, 8, 9
Test cases should be derived so that all of these paths are executed. A dynamic program analyser may be used to check that the paths have been executed.
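For reference, a plain Python version of the binary search routine behind this flow graph is sketched below (an assumed reconstruction, not the original listing); it has three simple decisions, so V(G) = 3 + 1 = 4, matching the four independent paths listed above, and one test case per path exercises the basis set:

# Binary search with three decision points: the while condition, the == test
# and the < test.  Cyclomatic complexity V(G) = 3 decisions + 1 = 4, so four
# test cases are enough to cover the basis set of paths.
def binary_search(elem_array, key):
    bottom, top = 0, len(elem_array) - 1
    while bottom <= top:                 # decision 1
        mid = (bottom + top) // 2
        if elem_array[mid] == key:       # decision 2
            return mid                   # path: key found
        if elem_array[mid] < key:        # decision 3
            bottom = mid + 1             # path: search upper half
        else:
            top = mid - 1                # path: search lower half
    return -1                            # path: key not present

if __name__ == "__main__":
    data = [2, 5, 8, 12, 17, 23]
    # One illustrative test per independent path:
    print(binary_search([], 5))      # loop never entered
    print(binary_search(data, 12))   # key found
    print(binary_search(data, 23))   # search moves right, then found
    print(binary_search(data, 1))    # search moves left, key not present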


2. Explain the types of Black Box Testing in detail.


EQUIVALENCE PARTITIONING:

Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived. An ideal test case single-handedly uncovers a class of errors (e.g., incorrect processing of all character data) that might otherwise require many cases to be executed before the general error is observed. Equivalence partitioning strives to define a test case that uncovers classes of errors, thereby reducing the total number of test cases that must be developed. Test case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition. If a set of objects can be linked by relationships that are symmetric, transitive, and reflexive, an equivalence class is present. An equivalence class represents a set of valid or invalid states for input conditions. Typically, an input condition is a specific numeric value, a range of values, a set of related values, or a Boolean condition. Equivalence classes may be defined according to the following guidelines:
If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
If an input condition is Boolean, one valid and one invalid class are defined.
As an example, consider data maintained as part of an automated banking application. The user can access the bank using a personal computer, provide a six-digit password, and follow with a series of typed commands that trigger various banking functions. During the log-on sequence, the software supplied for the banking application accepts data in the form:
area code - blank or three-digit number
prefix - three-digit number not beginning with 0 or 1
suffix - four-digit number


password - six-digit alphanumeric string
commands - check, deposit, bill pay, and the like
The input conditions associated with each data element for the banking application can be specified as:
Area code: input condition, Boolean - the area code may or may not be present; input condition, range - values defined between 200 and 999, with specific exceptions.
Prefix: input condition, range - specified value > 200.
Suffix: input condition, value - four-digit length.
Password: input condition, Boolean - a password may or may not be present; input condition, value - six-character string.
Command: input condition, set - containing the commands noted previously.
Applying the guidelines for the derivation of equivalence classes, test cases for each input domain data item can be developed and executed. Test cases are selected so that the largest number of attributes of an equivalence class is exercised at once.
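A small illustrative sketch (the validity rule and test values below are assumptions chosen for the example, not taken from the notes) of picking one representative value per equivalence class for the password field:

# Illustrative equivalence classes for the six-character password field,
# with one representative test value chosen from each class.
import re

def password_valid(password):
    """Assumed rule for the example: exactly six alphanumeric characters."""
    return re.fullmatch(r"[A-Za-z0-9]{6}", password) is not None

equivalence_classes = {
    "valid: six alphanumeric chars":   "ab12cd",
    "invalid: too short":              "ab1",
    "invalid: too long":               "ab12cd34",
    "invalid: non-alphanumeric chars": "ab 2c!",
    "invalid: empty (not present)":    "",
}

if __name__ == "__main__":
    for name, value in equivalence_classes.items():
        print(f"{name:35s} -> {password_valid(value)}")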

BOUNDARY VALUE ANALYSIS:

For reasons that are not completely clear, a greater number of errors tends to occur at the boundaries of the input domain rather than in the "center." It is for this reason that Boundary Value Analysis (BVA) has been developed as a testing technique. Boundary value analysis leads to a selection of test cases that exercise bounding values. Boundary value analysis is a test case design technique that complements equivalence partitioning. Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the "edges" of the class. Rather than focusing solely on input conditions, BVA derives test cases from the output domain as well Guidelines for BVA are similar in many respects to those provided for Equivalence Partitioning: If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b and just above and just below a and b.


If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and below minimum and maximum are also tested. Apply guidelines 1 and 2 to output conditions. For example, assume that temperature vs. pressure table is required as output from an engineering analysis program. Test cases should be designed to create an output report that produces the maximum (and minimum) allowable number of table entries. If internal program data structures have prescribed boundaries (e.g., an array has a defined limit of 100 entries), be certain to design a test case to exercise the data structure at its boundary. Most software engineers intuitively perform BVA to some degree. By applying these guidelines, boundary testing will be more complete, thereby having a higher likelihood for error detection.
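Applying the first guideline to the area-code range of 200 to 999 from the earlier example gives a small, systematic set of boundary test values; the helper below is an assumed sketch for illustration, not a standard API:

# Boundary value analysis sketch: derive test values for a range [a, b].
def bva_values(a, b):
    """Return boundary test values: a, b, and the values just inside and just outside."""
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

def area_code_valid(code):
    return 200 <= code <= 999        # assumed validity rule for the example

if __name__ == "__main__":
    for value in bva_values(200, 999):
        print(value, area_code_valid(value))
    # Expected: 199 False, 200 True, 201 True, 998 True, 999 True, 1000 False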

3. Explain Unit Testing and Structural Testing in detail.


UNIT TESTING:

In unit testing the individual components are tested independently to ensure their quality. The focus is to uncover errors in design and implementation. The various tests that are conducted during unit testing are described below:
Module interfaces are tested for proper information flow into and out of the program.
Local data are examined to ensure that integrity is maintained.
Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit or restrict processing.
All the basis (independent) paths are tested to ensure that all statements in the module have been executed at least once.
All error handling paths should be tested.


a. Driver and stub software needs to be developed to test incomplete software. A driver is a program that accepts the test data and prints the relevant results, and a stub is a subprogram that uses the module's interfaces and performs minimal data manipulation if required.

b. Unit testing is simplified when a component with high cohesion (a single function) is designed. In such a design the number of test cases is smaller and errors can be predicted or uncovered more easily.
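A minimal Python sketch of the driver-and-stub arrangement (the module under test and all names below are hypothetical):

# Unit under test: computes a discounted price using a subordinate tax module.
def price_with_tax(amount, discount, tax_lookup):
    net = amount * (1 - discount)
    return net + tax_lookup(net)        # tax_lookup may be the real module or a stub

# Stub: minimal stand-in for the unfinished tax module (returns a fixed rate).
def tax_stub(net):
    return net * 0.10                   # assumed flat 10% purely for testing

# Driver: supplies test data, calls the unit under test and reports the results.
def run_driver():
    cases = [(100.0, 0.0), (100.0, 0.2), (0.0, 0.5)]
    for amount, discount in cases:
        result = price_with_tax(amount, discount, tax_stub)
        print(f"amount={amount}, discount={discount} -> {result:.2f}")

if __name__ == "__main__":
    run_driver()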


STRUCTURAL TESTING:

White-box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white-box testing methods, the software engineer can derive test cases that (1) guarantee that all independent paths within a module have been exercised at least once, (2) exercise all logical decisions on their true and false sides, (3) execute all loops at their boundaries and within their operational bounds, and (4) exercise internal data structures to ensure their validity. A reasonable question might be posed at this juncture: "Why spend time and energy worrying about (and testing) logical minutiae when we might better expend effort ensuring that program requirements have been met?" Stated another way, why don't we spend all of our energy on black-box tests? The answer lies in the nature of software defects. Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. Errors tend to creep into our work when we design and implement functions, conditions, or controls that are out of the mainstream. Everyday processing tends to be well understood, while "special case" processing tends to fall into the cracks. We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis. The logical flow of a program is sometimes counterintuitive, meaning that our unconscious assumptions about flow of control and data may lead us to make design errors that are uncovered only once path testing commences. Typographical errors are random. When a program is translated into programming language source code, it is likely that some typing errors will occur. Many will be uncovered by syntax and type checking mechanisms, but others may go undetected until testing begins. It is as likely that a typo will exist on an obscure logical path as on a mainstream path.

Basis Path Testing: Basis path testing is a white-box testing technique first proposed by Tom McCabe. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set


are guaranteed to execute every statement in the program at least one time during testing. Flow Graph Notation: Before the basis path method can be introduced, a simple notation for the representation of control flow, called a flow graph (or program graph) must be introduced. The flow graph depicts logical control flow using the notation illustrated in Figure. Each structured construct has a corresponding flow graph symbol. To illustrate the use of a flow graph, we consider the procedural design representation in Figure. Here, a flowchart is used to depict program control structure

Figure maps the flowchart into a corresponding flow graph (assuming that no compound conditions are contained in the decision diamonds of the flowchart). Referring to Figure, each circle, called a flow graph node, represents one or more procedural statements. A sequence of process boxes and a decision diamond can map into a single node. The arrows on the flow graph, called edges or links, represent flow of control and are analogous to flowchart arrows. An edge must terminate at a node, even if the node does not represent any procedural statements (e.g., see the symbol for the if-then-else construct). Areas bounded by edges and nodes are called regions. When counting regions, we include the area outside the graph as a region. When compound conditions are encountered in a procedural design, the generation of a flow graph becomes slightly more complicated. A compound condition occurs when one or more Boolean operators (logical OR, AND, NAND, NOR) is present in a conditional statement. Referring to Figure, the PDL segment translates into the flow graph shown. Note that a separate node is


created for each of the conditions a and b in the statement IF a OR b. Each node that contains a condition is called a predicate node and is characterized by two or more edges emanating from it.

4. Explain the Regression Testing and Integration Testing in detail.


REGRESSION TESTING:

Regression testing is testing done to check that a system update does not reintroduce errors that have been corrected earlier. All or almost all regression tests aim at checking:
Functionality - black box tests.
Architecture - grey box tests.
Since they are supposed to test all functionality and all previously made changes, regression test suites are usually large. Thus, regression testing needs automatic execution (no human intervention) and automatic checking; leaving the checking to the developers will not work. We face the same challenge when automating regression tests as we face when doing automatic test checking in general: which parts of the output should be checked against the oracle? This question gets more important as we need more versions of the same test due to system variability. Simple but annoying and sometimes expensive problems are, for example:
Use of the date in the test output.
Changes in the number of blanks or line shifts.
Other format changes.
Changes in lead texts.


Regression testing is a critical part of testing, but is often overlooked. Whenever a defect gets fixed, a new feature gets added, or code gets refactored or changed in any way, there is always a chance that the changes may break something that was previously working. Regression testing is the testing of features, functions, etc. that have been tested before, to make sure they still work after a change has been made to the software. Within a set of release cycles, the flow is typically as follows: the testers test the software and find several defects; the developers fix the defects, possibly add a few more features, and give the software back to be tested; the testers then not only test the new features, but test all of the old features to make sure they still work. Questions often arise as to how much regression testing needs to be done. Ideally, everything would be tested just as thoroughly as it was the first time, but this becomes impractical as time goes on and more and more features and, therefore, test cases get added. When deciding which tests to execute during regression testing, some compromises need to be made, focusing on what has changed. If a feature has been significantly added to or changed, then you will want to execute a lot of tests against this feature. If a defect has been fixed in a particular area, you will want to check that area to see that the fix didn't cause new defects. If, on the other hand, a feature has been working well for some time and hasn't been modified, only a quick test may need to be executed.
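A minimal sketch of an automated regression check along these lines (the normalisation rules are assumptions chosen to match the problems listed above, such as dates in the output and changes in blanks):

# Automated regression check: normalise the output, then compare with a stored oracle.
import re

def normalise(text):
    text = re.sub(r"\d{4}-\d{2}-\d{2}", "<DATE>", text)   # mask dates in the output
    text = re.sub(r"[ \t]+", " ", text)                    # collapse runs of blanks
    return text.strip()

def regression_check(current_output, oracle_output):
    """Return True if the new output still matches the stored expected output."""
    return normalise(current_output) == normalise(oracle_output)

if __name__ == "__main__":
    oracle = "Report generated 2023-01-05\nTotal:   42"
    output = "Report generated 2024-06-17\nTotal: 42"
    print(regression_check(output, oracle))   # True: only the date and spacing changed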

INTEGRATION TESTING:

Integration testing tests complete systems or subsystems composed of integrated components. Integration testing should be black-box testing, with tests derived from the specification.


The main difficulty is localising errors; incremental integration testing reduces this problem.

[Figure: Incremental integration testing - components A, B, C and D are integrated one at a time; test sequence 1 runs tests T1-T3 on A and B, sequence 2 adds C and test T4, and sequence 3 adds D and test T5.]
Approaches to Integration Testing: Top-down Testing:


Start with high-level system and integrate from the top-down replacing individual components by stubs where appropriate.

[Figure: Top-down testing - the level 1 components are tested first using level 2 stubs, then the level 2 components are integrated and tested using level 3 stubs, and so on down the hierarchy.]
Bottom-up Testing: Integrate individual components in levels until the complete system is created.


[Figure: Bottom-up testing - the level N components are tested first using test drivers, then integrated with and tested alongside the level N-1 components, working up the hierarchy.]

In practice, most integration involves a combination of these strategies.

UNIT V SOFTWARE PROJECT MANAGEMENT


Measures and Measurements - ZIPF's Law - Software Cost Estimation - Function Point Models - COCOMO Model - Delphi Method - Scheduling - Earned Value Analysis - Error Tracking - Software Configuration Management - Program Evolution Dynamics - Software Maintenance - Project Planning - Project Scheduling - Risk Management - CASE Tools

1. Explain Software Cost Estimation in detail?


Software productivity
Estimation techniques
Algorithmic cost modelling
Project duration and staffing
Fundamental estimation questions are:
How much effort is required to complete an activity?
How much calendar time is needed to complete an activity?
What is the total cost of an activity?
Project estimation and scheduling are interleaved management activities.


SOFTWARE COST COMPONENTS:
Hardware and software costs.
Travel and training costs.
Effort costs (the dominant factor in most projects):
o The salaries of engineers involved in the project;
o Social and insurance costs.
Effort costs must take overheads into account:
o Costs of building, heating, lighting.
o Costs of networking and communications.
o Costs of shared facilities (e.g. library, staff restaurant, etc.).

COSTING AND PRICING: Estimates are made to discover the cost, to the developer, of producing a software system. There is not a simple relationship between the development cost and the price charged to the customer. Broader organisational, economic, political and business considerations influence the price charged.

SOFTWARE PRODUCTIVITY: A measure of the rate at which individual engineers involved in software development produce software and associated documentation. Not quality-oriented although quality assurance is a factor in productivity assessment. Essentially, we want to measure useful functionality produced per time unit.

PRODUCTIVITY MEASURES:


Size-related measures are based on some output from the software process. This may be lines of delivered source code, object code instructions, etc.
Function-related measures are based on an estimate of the functionality of the delivered software. Function points are the best known measure of this type.

LINES OF CODE: What is a line of code?
o The measure was first proposed when programs were typed on cards with one line per card.
o How does this correspond to statements, as in Java, where a statement can span several lines or where there can be several statements on one line?
Which programs should be counted as part of the system? This model assumes that there is a linear relationship between system size and the volume of documentation.

ESTIMATION TECHNIQUES:

There is no simple way to make an accurate estimate of the effort required to develop a software system:
o Initial estimates are based on inadequate information in a user requirements definition;
o The software may run on unfamiliar computers or use new technology;
o The people in the project may be unknown.
Project cost estimates may be self-fulfilling: the estimate defines the budget and the product is adjusted to meet the budget. Estimation techniques include:
Algorithmic cost modelling.
Expert judgement.


Estimation by analogy.
Parkinson's Law.
Pricing to win.
PROJECT DURATION AND STAFFING: As well as effort estimation, managers must estimate the calendar time required to complete a project and when staff will be required. Calendar time can be estimated using a COCOMO 2 formula:
TDEV = 3 * (PM)^(0.33 + 0.2*(B - 1.01))
where PM is the effort computation and B is the exponent computed as discussed above (B is 1 for the early prototyping model). This computation predicts the nominal schedule for the project. The time required is independent of the number of people working on the project.
STAFFING REQUIREMENTS: Staff required cannot be computed by dividing the development time by the required schedule. The number of people working on a project varies depending on the phase of the project. The more people who work on the project, the more total effort is usually required. A very rapid build-up of people often correlates with schedule slippage.
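A quick illustration of the TDEV formula in Python (the effort of 50 person-months and exponent B = 1.1 are assumed example values, not from the notes):

# COCOMO 2 schedule estimate: TDEV = 3 * PM ** (0.33 + 0.2 * (B - 1.01))
def tdev(pm, b):
    return 3 * pm ** (0.33 + 0.2 * (b - 1.01))

if __name__ == "__main__":
    print(round(tdev(50, 1.1), 1))   # roughly 12 calendar months for the assumed inputs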

2. Explain Software Configuration Management in detail?


a) Configuration Management Planning
b) Change Management
c) Version and Release Management
d) System Building
e) CASE Tools for Configuration Management

CONFIGURATION MANAGEMENT PLANNING: All products of the software process may have to be managed:
a. Specifications;
b. Designs;
c. Programs;
d. Test data;
e. User manuals.
Thousands of separate documents may be generated for a large, complex software system. The Configuration Management Plan:
Defines the types of documents to be managed and a document naming scheme.
Defines who takes responsibility for the CM procedures and creation of baselines.
Defines policies for change control and version management.
Defines the CM records which must be maintained.
Describes the tools which should be used to assist the CM process and any limitations on their use.
Defines the process of tool use.
Defines the CM database used to record configuration information.
May include information such as the CM of external software, process auditing, etc.


Configuration Hierarchy:

[Figure: Example configuration hierarchy - the PCL-TOOLS configuration is broken down into components including COMPILE, BIND, EDIT, MAKEGEN, FORM, STRUCTURES, HELP, DISPLAY and QUERY; lower levels include FORM-SPECS, AST-INTERFACE, FORM-IO, OBJECTS, CODE and TESTS.]

The Configuration Database: All CM information should be maintained in a configuration database. This should allow queries about configurations to be answered:
o Who has a particular system version?
o What platform is required for a particular version?
o What versions are affected by a change to component X?
o How many reported faults in version T?
The CM database should preferably be linked to the software being managed. New versions of software systems are created as they change:
o For different machines/OS;
o Offering different functionality;
o Tailored for particular user requirements.
Configuration management is concerned with managing evolving software systems:
o System change is a team activity;
o CM aims to control the costs and effort involved in making changes to a system.


It involves the development and application of procedures and standards to manage an evolving software product. The CM may be seen as part of a more general quality management process. When released to CM, software systems are sometimes called baselines as they are a starting point for further development.

[Figure: System versions - an initial system gives rise to platform and product variants such as PC, Windows XP, Linux, HP and Sun versions, and desktop and server versions.]

CM Standards: CM should always be based on a set of standards which are applied within an organisation. Standards should define how items are identified; how changes are controlled and how new versions are managed. Standards may be based on external CM standards (e.g. IEEE standard for CM). Some existing standards are based on a waterfall process model - new CM standards are needed for evolutionary development.

CHANGE MANAGEMENT:

Change management is a procedural process, so it can be modelled and integrated with a version management system. Change management tools:
Form editor to support processing of the change request forms;
Workflow system to define who does what and to automate information transfer;
Change database that manages change proposals and is linked to a VM system;
Change reporting system that generates management reports on the status of change requests.

VERSION AND RELEASE MANAGEMENT:

Version and Release Identification: Systems assign identifiers automatically when a new version is submitted to the system.
Storage Management: The system stores the differences between versions rather than all the version code.
Change History Recording: Record the reasons for version creation.
Independent Development: Only one version at a time may be checked out for change; parallel working on different versions is supported.
Project Support: Manages groups of files associated with a project rather than just single files.

SYSTEM BUILDING: It is easier to find problems that stem from component interactions early in the process. This encourages thorough unit testing - developers are under pressure not to break the build. A stringent change management process is required to keep track of problems that have been discovered and repaired.

CASE TOOLS FOR CONFIGURATION MANAGEMENT: CM processes are standardised and involve applying pre-defined procedures. Large amounts of data must be managed. CASE tool support for CM is therefore essential. Mature CASE tools to support configuration management are available ranging from stand-alone tools to integrated CM workbenches


3. Explain COCOMO Model in detail?


COCOMO MODELS:

COCOMO has three different models that reflect the complexity:
The Basic Model
The Intermediate Model
The Detailed Model
The Development Modes - Project Characteristics:
Organic Mode:
o Relatively small, simple software projects.
o Small teams with good application experience work to a set of less than rigid requirements.
o Similar to previously developed projects.
o Relatively small and requires little innovation.
Semidetached Mode:
o Intermediate (in size and complexity) software projects in which teams with mixed experience levels must meet a mix of rigid and less than rigid requirements.
Embedded Mode:
o Software projects that must be developed within a set of tight hardware, software, and operational constraints.

Some Assumptions: Primary cost driver is the number of Delivered Source Instructions (DSI) / Delivered Line Of Code developed by the project


COCOMO estimates assume that the project will enjoy good management by both the developer and the customer.
It assumes the requirements specification is not substantially changed after the plans and requirements phase.
Basic COCOMO is good for quick, early, rough order-of-magnitude estimates of software costs.
It does not account for differences in hardware constraints, personnel quality and experience, use of modern tools and techniques, and other project attributes known to have a significant influence on software costs, which limits its accuracy.
BASIC COCOMO MODEL:

Formula:
E = ab * (KLOC or KDSI)^bb
D = cb * (E)^db
P = E / D
where E is the effort applied in person-months, D is the development time in chronological months, KLOC / KDSI is the estimated number of delivered lines of code for the project (expressed in thousands), and P is the number of people required. The coefficients ab, bb, cb and db are given below:
Software project    ab    bb     cb    db
Organic             2.4   1.05   2.5   0.38
Semi-detached       3.0   1.12   2.5   0.35
Embedded            3.6   1.20   2.5   0.32


Equation:
Mode            Effort                    Schedule
Organic         E = 2.4*(KDSI)^1.05       TDEV = 2.5*(E)^0.38
Semidetached    E = 3.0*(KDSI)^1.12       TDEV = 2.5*(E)^0.35
Embedded        E = 3.6*(KDSI)^1.20       TDEV = 2.5*(E)^0.32

Limitation: Its accuracy is necessarily limited because of its lack of factors which have a significant influence on software costs. The Basic COCOMO estimates are within a factor of 1.3 only 29% of the time, and within a factor of 2 only 60% of the time.
Example: We have determined our project fits the characteristics of Semi-Detached mode. We estimate our project will have 32,000 Delivered Source Instructions. Using the formulas, we can estimate:
Effort = 3.0*(32)^1.12 = 146 man-months
Schedule = 2.5*(146)^0.35 = 14 months
Productivity = 32,000 DSI / 146 MM = 219 DSI/MM
Average Staffing = 146 MM / 14 months = 10 FSP
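The worked example can be reproduced with a short Python sketch of the basic COCOMO equations using the coefficient table above (the function and variable names are illustrative):

# Basic COCOMO: effort E = a*(KDSI)^b person-months, schedule TDEV = c*E^d months.
COEFFICIENTS = {                 # mode: (a, b, c, d), from the table above
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kdsi, mode):
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kdsi ** b               # person-months
    schedule = c * effort ** d           # calendar months
    staffing = effort / schedule         # average full-time staff
    return effort, schedule, staffing

if __name__ == "__main__":
    effort, schedule, staffing = basic_cocomo(32, "semidetached")
    print(f"Effort       = {effort:.0f} person-months")              # ~146
    print(f"Schedule     = {schedule:.0f} months")                    # ~14
    print(f"Staffing     = {staffing:.0f} people")                    # ~10
    print(f"Productivity = {32000 / effort:.0f} DSI/person-month")    # ~219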

