
6) Requirements Engineering Process

a) Requirements Engineering: Includes all of the activities needed to create and maintain a system requirements
document.

b) Requirements Engineering Process Activities:

i) Feasibility Studies: A short and focused study; its objective is to answer the questions listed in (3) below.

(1) Input: Outline description of the system and how it will be used within an organisation.

(2) Outcomes: A Feasibility Report, which should recommend whether or not the system development should go ahead.

(3) Questions to be answered:

(a) Does the system contribute to the overall objectives of the organisation?
(b) Can the system be implemented using current technology and within the given cost and schedule constraints?
(c) Can the system be integrated with other systems which are already in place?

(4) Activities: Information Assessment, Information Collection and Report Writing.
(5) Sources of Information:
(a) Department managers where the system will be used;
(b) Software engineers who are familiar with the type of system that is proposed;
(c) Technology experts; and
(d) End-users of the system.

ii) Requirements Elicitation and Analysis: An iterative process which involves domain understanding, requirements collection, classification, structuring, prioritisation and validation.

(1) Stakeholders: Everyone who has some direct or indirect influence on the
system requirements

(2) It Is a Difficult Process Because:

(a) Stakeholders often do not really know what they want from the computer system. They often make unrealistic demands.
(b) Stakeholders in a system normally express requirements in their own terms and with their own knowledge of their own work.
(c) Stakeholders with different job descriptions normally have different requirements, and these may be expressed in several different ways.
(d) Political factors may influence the requirements of the system.
(e) The economic and business environment in which the analysis takes place keeps changing.

(3) Techniques of Requirement Elicitation and Analysis


(a) Scenarios: Descriptions of how a system is used in practice. People can relate to these more readily than to abstract statements of what they require from a system.
(i) A scenario may include:
1. A description of the system state at the start of the scenario.
2. A description of the normal flow of events that happen in the scenario.
3. A description of what can go wrong and how to handle it.
4. Information about other activities that could be happening at the same time.
5. A description of the state of the system at the end of the scenario.

(ii) Event Scenarios: Used to document the system behaviour when presented with specific events. They include a description of the data flows and the actions of the system, and document the exceptions.
(iii) Use-case: A scenario-based technique for requirements elicitation; it identifies the actors involved in an interaction and names the type of interaction.
(b) Ethnography: A technique of observation that can be used to understand social and organisational requirements.
(i) The system analyst involves himself in the working environment where the system will be used.
(ii) It helps discover hidden system requirements which reflect the actual process.
(iii) Two types of requirements that Ethnography is usually useful at discovering:
1. Requirements taken from the way people actually work rather than the way process definitions say they should work.
2. Requirements that are taken from cooperation and knowledge of other people's activities.

iii) Requirements Specification:
iv) Requirements Validation: Demonstrating that the requirements match the users' requests. Concerned with finding problems with the requirements.

(1) Validation is important because errors in a requirements document can lead to extensive modification
costs when they are later discovered during development or after the system is in service.
In Pastpaper Sep 2010

(2) Different Types of Requirement Validation Checks:


(a) Validity Checks
(b) Consistency Checks: Requirements in the document should not conflict with each other.
(c) Completeness Checks: The requirements document should include requirements that define all functions and constraints requested by the system user.
(d) Realism Checks: All requirements should be checked to make sure that they can be implemented using the existing technology, budget and schedule for the system development.
(e) Verifiability: To minimise the potential for disagreement between customer and contractor, system requirements should always be written so that they are verifiable.

(3) Requirement Validation Techniques:
In Sample Paper

(a) Requirements Reviews: Systematic manual analysis of the requirements.
(b) Prototyping: Using an executable model of the system to check requirements.
(c) Test-case Generation: Developing tests for requirements to check testability.
(d) Automated Consistency Analysis: Checking the consistency of a structured requirements description.

(4) Requirements Review: A manual process of checking the requirements document for anomalies and omissions, which involves the client and contractor staff.
(a) Informal Requirements Review: Just involves requirements discussions between the developers and as many system users as possible.
(b) Formal Requirements Review: The development team walks the client through the system requirements, explaining the implications of each requirement.

c) Requirements Management: The process of understanding and controlling changes to system requirements. Business, organisational and technical changes inevitably lead to changes to the requirements for a software system.
In Sample Paper

i) New requirements appear because:
(1) Different users have different requirements and priorities.
(2) The people who pay for a system and the users of a system are usually not the same people.
(3) The business and technical environment of the system changes frequently and these changes must be reflected in the system itself.

ii) Three principal stages of a change management process:
(1) Problem Analysis and Change Specification
(2) Change Analysis and Costing
(3) Change Implementation

7) User Interface Design


a) Graphical User Interface (GUI):
i) Advantages:
(1) Easy to learn and use.
(2) Can open several screens for system interaction.
(3) Possible to have fast and full-screen interaction with immediate access to anywhere on the screen.
ii) Characteristics:
(1) Windows: Multiple windows allow display of different information simultaneously.
(2) Icons: Represent different types of information; easy and fast to understand.
(3) Menus: Commands are selected from a menu rather than typed in a command language.
(4) Pointing: Selecting choices is fast with a pointing device such as a mouse.
(5) Graphics: Graphical elements can be mixed with text on the same display.
iii) User Interface Design Process:
(1) Exploratory development is considered the most effective design approach.

(2) This prototyping process can begin with simple paper-based interface test designs before starting to develop screen-based designs that simulate user interaction.

(3) A user-centred approach should be used, where the end-users of the system play an active part in the design process.

iv) Techniques used to understand the users' needs: Task analysis, Ethnographic studies, User interviews and Observations, or a mixture of all of these techniques.

b) User Interface Design Principles: (FC.MR.GD )


In Pastpaper Sep 2010, Sep 2008

i) User Familiarity Principle: States that users should not be forced to adapt to an interface just because it is convenient to implement.

(1) The interface should use terms and concepts which are drawn from the experience of the people who will
make most use of the system

ii) Consistency Principle: System commands and menus should have the same format, command punctuation
should be the same and parameters should be passed to all commands in the same way.

In Sample Paper

(1) The interface should be consistent in that, wherever possible, comparable operations should be activated
in the same way.

iii) Minimal Surprise Principle: Users can get irritated when a system behaves unexpectedly.
(1) Users should never be surprised by the behaviour of a system.
iv) Recoverability Principle: Users cannot avoid making mistakes when using a system.
(1) The interface should include mechanisms to allow users to recover from errors.
v) User Guidance (Assistance) Principle: States that interfaces should have built-in user help facilities.
(1) The interface should provide meaningful feedback when errors occur and provide context-sensitive user help facilities.

vi) User Diversity Principle: States that there are different types of users for many interactive systems.
(1) The interface should provide appropriate interaction facilities for different types of system user.
(2) There are two types of users:
(a) Casual Users: Users who interact occasionally with the system.
(b) Power Users: Users who use the system for several hours each day.

vii) Principle of acknowledging user diversity can conflict with the other interface design principles. The reason is
that some types of user may prefer to have very rapid interaction rather than user interface consistency.

c) User Interaction: Giving commands and data to the computer system.

i) Direct Manipulation: For example, to delete a file, a user can drag it to a trashcan on the screen.

ii) Menu Selection: A user selects a command from a list of choices.

iii) Form Fill-in: Filling the form fields.

iv) Command Language: Issuing a special command and related parameters to instruct the system what to do.

v) Natural Language: In order to delete a file, the user could type "delete the file named xxx".

d) Information Presentation: By separating the presentation system from the data, the representation on the user's screen can be changed without changing the basic computational system.

i) Model-View-Controller (MVC): First used in Smalltalk. It is a useful way to support multiple presentations of data, and users can interact with each presentation using a style that is most suitable to it. The data to be displayed is encapsulated in a model object. Each model object can have several separate view objects associated with it, where each view is a different display representation of the model (a small code sketch appears at the end of this subsection).

ii) Factors to be considered when deciding how to present information:
(1) User interest in specific information or in the relationships between different data values.
(2) The rate at which the information values change.
(3) User response to information change.
(4) User interaction with the displayed information.
(5) The type of information to be displayed, for example textual or numeric.
iii) Guidelines for Effective Colour Use in User Interfaces:
(1) Limit the colours used and be conservative in how they are used.
(2) Use colour change to show a change in system status.
(3) Use colour coding to support the task which users are trying to perform.
(4) Use colour coding in a thoughtful and consistent way.
(5) Be careful about colour pairings.
iv) Two most frequent errors made by designers when using colour in a user interface:
(1) Using too many colours in a display.
(2) Associating meanings with particular colours. Colour should not be used to represent meaning because:
(a) About 10% of men are colour-blind.
(b) Human colour perceptions differ, and there are different interpretations in different professions about the meaning of particular colours.
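A minimal sketch of the MVC idea from i) above, in Python. The class names and the temperature example are illustrative assumptions, not part of the original notes or of any particular framework:

```python
# Minimal MVC-style sketch: one model object, several view objects.

class TemperatureModel:
    """Encapsulates the data to be displayed."""
    def __init__(self, celsius=0.0):
        self._celsius = celsius
        self._views = []            # views registered with this model

    def attach(self, view):
        self._views.append(view)

    def set_celsius(self, value):
        self._celsius = value
        for view in self._views:    # each view re-renders the same data
            view.render(self._celsius)

class TextView:
    def render(self, celsius):
        print(f"Temperature: {celsius:.1f} C")

class BarView:
    def render(self, celsius):
        print("#" * max(0, int(celsius)))   # crude graphical representation

# Usage: two different presentations of the same model data.
model = TemperatureModel()
model.attach(TextView())
model.attach(BarView())
model.set_celsius(21.5)
```

Changing the presentation here only means adding or replacing a view class; the model (the "basic computational system") stays untouched.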

e) User Support: Help systems are one part of user interface design; they are used for user guidance.
i) Three areas covered by User Support Help Systems:
(1) The messages created by the system in reaction to user actions.
(2) The online help system.
(3) The documentation supplied with the system.

In Sample Paper

In Pastpaper Sep 2010, Jan 2009

ii) Factors that should be considered when designing help text or error messages:
(1) Context: The system should be aware of what the user is doing and adjust the output message to the current context.
(2) Experience: Should provide both long meaningful messages (for beginners) and short messages (for more experienced users), and allow the user to control message conciseness.

(3) Skill Level: Messages should be tailored to the user's skills as well as their experience (terminology etc.).
(4) Style: Messages should be positive rather than negative. They should use the active rather than the passive mode of address. They should never be insulting or try to be funny.

(5) Culture: Wherever possible, the designer of messages should be familiar with the culture of the country where the system is sold. A suitable message for one culture might be unacceptable in another.

iii) Error Messages: Very important, as they can form the user's first impression of a system. Error messages should always be polite, concise, consistent and constructive. A good error message should suggest how the error can be corrected and provide a link to a help system (a small illustrative sketch follows the help system design notes below).

iv) Help System Design: The structure of the help frame network is usually hierarchical with cross-links. General information is held at the top of the hierarchy while detailed information is located at the bottom.
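The error-message guidance above (context, experience, positive style, suggested correction) can be illustrated with a small sketch. The scenario, function name and message wording are assumptions made purely for the example:

```python
# Sketch: tailoring an error message to the user's experience level,
# phrased positively and suggesting how to correct the error.

def name_not_found_message(name, experienced_user):
    if experienced_user:
        # Short message for experienced users.
        return f"'{name}' not registered. Check spelling or register the name."
    # Longer, guided message for beginners, with a route to help.
    return (f"The name '{name}' was not found in the database.\n"
            "Please check that you have spelled the name correctly, or choose\n"
            "'Register' to add a new entry. Choose 'Help' for more information.")

print(name_not_found_message("J. Doe", experienced_user=False))
```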

In Pastpaper Jan 2009

f) User Documentation (System Manual): Is important to guide the users on how to use a particular system.

i) Functional Description: Describes, very briefly, the services which the system provides.

ii) Installation Document: Contains details of how to install the system.

iii) Introductory Manual: Presents an informal introduction to the system, describing its normal usage, how to get
started and how end-users might use the common system facilities.

iv) Reference Manual: Explains the system facilities and their usages, gives a list of error messages and possible
causes and explains how to recover from detected errors.

v) Administrator's Manual: Explains the messages generated when the system interacts with other systems and how to respond to these messages.

g) Interface Evaluation: The process of testing the usability of an interface and testing whether it meets user requirements.
i) Usability Attributes:
(1) Learnability: How long does it take a new user to become productive with the system?
(2) Speed of Operation: How well does the system response match the user's work practice?
(3) Robustness: How tolerant is the system of user error?
(4) Recoverability: How good is the system at recovering from user errors?
(5) Adaptability: How closely is the system tied to a single model of work?
ii) Simpler, Less Expensive Techniques of User Interface Evaluation:
(1) Questionnaires used to collect information about users' opinions of the interface.
(2) Observation of and interviews with users working with the system.
(3) Video recording of usual system use.
(4) The insertion of code that collects information about the most-used facilities and the most common errors.

8) Design with Re-Use


In Pastpaper Jan 2008

a) Design with Re-Use: Involves designing the software based on existing examples of good design, and using those software components where available and suitable.

b) Two types of Re-Use:

i) Opportunistic Re-Use: Used in programming when components are suitable for a requirement.

ii) Systematic Re-Use: Needs a design process that looks at how existing designs can be reused, and that uses the designs available in software components.

c) Reuse-based Software Engineering: An approach to software development that tries to maximise the reuse of existing software.

d) There are three main requirements for component Re-Use:
i) The components that are reusable need to be kept for future use.
ii) The people who reuse the components must have confidence that the components are reliable and functional.
iii) The reusable components must have related documentation to help the people who want to reuse these components to understand them and use them in a new application.
In Pastpaper Sep 2010, Jan 2008

e) Advantages of Software Re-Use:
i) Reduced Overall Development Costs: Fewer software components will be specified, designed, implemented and validated.

ii) Increased Reliability: Reused components that have been implemented in working systems should be more
reliable than new components.

iii) Reduced Process Risk: The uncertainty in the cost of reusing software component is less than developing a new
component.

iv) Effective Use of Specialists: The application specialists can develop reusable components that use their
knowledge instead of doing the same work on different projects.

v) Standard Compliance: The use of standard user interfaces enhances reliability as users make fewer mistakes
when given a familiar interface.

vi) Accelerated Development: Speed of system production is increased because time for both development and
validation should be reduced.
In Pastpaper Jan 2008

f) Disadvantages of Software Re-Use: i) Increased Maintenance Costs: Reused elements of the system may become incompatible with system changes,
if source code is not available.

ii) Lack of Tool Support: It is difficult to find a tool to support component reuse; e.g. CASE tools do not support it.
iii) Not-Invented-Here Syndrome: Writing new software is viewed as more challenging than reusing others' software.
iv) Finding and Adapting Reusable Components: Software components have to be searched for in a library or archive, understood and adapted to work in a new environment.

g) Generator-Based Re-Use: Reusable knowledge is included in a program generator system that can be programmed in a domain-oriented language.

i) Cost-effective.
ii) Depends on the identification of conventional domain abstractions.
iii) Used in business data processing, language processing and in command and control systems.
iv) Easier for end-users to develop programs.

h) Component-Based Development: Introduced because object-oriented development did not deliver sufficient reuse.
i) Stand-alone Providers: Means that when a system needs some service, it will call on a component to provide that service without any concern about where that component is executing or the programming language used to develop the component.

In Pastpaper Sep 2008

ii) Two Characteristics of a Reusable Component:
(1) The component is treated as an independent executable entity; its source code is not available.
(2) Components publish their interface and all interactions are made through that interface.
iii) Component Interfaces (see the sketch after the abstraction levels list below):
(1) Requires Interface: Indicates what services must be available from the system which is using the component.
(2) Provides Interface: Describes the services provided by the component.
iv) Five Different Levels of Abstraction:
(1) Functional Abstraction: The component implements a single function, such as a mathematical function.
(2) Casual Groupings: The component is a collection of loosely related entities, which could consist of data declarations, functions and more.

In Pastpaper Jan 2008

In Sample Paper

In Pastpaper Sep 2008

(3) Data Abstractions: The component represents a data abstraction or class in an object-oriented language.
(4) Cluster Abstractions: The component is a group of interrelated classes that work together. These groups of classes are sometimes known as frameworks.

(5) System Abstractions: The component is an entire self-contained system. Reusing system-level abstractions
is sometimes called COTS (Commercial-Off-The-Shelf) product reuse.
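A minimal sketch of the "provides" and "requires" interfaces from iii) above. The data-collector scenario and every name in it are hypothetical, chosen only to illustrate the two kinds of interface:

```python
# Sketch of a component's interfaces (illustrative names only).
# "Provides" interface: services the component offers to its clients.
# "Requires" interface: services the component needs from its environment.

from abc import ABC, abstractmethod

class SensorSource(ABC):
    """Requires interface: something in the system must supply read()."""
    @abstractmethod
    def read(self) -> float: ...

class DataCollector:
    """Provides interface: add_sensor_reading() and report() are offered to clients."""
    def __init__(self, source: SensorSource):
        self._source = source       # dependency satisfied through the requires interface
        self._readings = []

    def add_sensor_reading(self):
        self._readings.append(self._source.read())

    def report(self):
        return sum(self._readings) / len(self._readings) if self._readings else 0.0

# Usage: any object satisfying the requires interface can be plugged in.
class FakeSensor(SensorSource):
    def read(self) -> float:
        return 21.0

collector = DataCollector(FakeSensor())
collector.add_sensor_reading()
print(collector.report())
```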

v) Users of object-oriented development suggested that objects were the most appropriate abstraction for reuse.
vi) Frameworks (or application frameworks): A subsystem design that consists of a collection of abstract and concrete classes and the interfaces between them.

(1) Three classes of frameworks:


(a) System Infrastructure Frameworks: Support the development of system infrastructures like communications, user interfaces and compilers.
(b) Middleware Integration Frameworks: Consist of a set of standards and associated object classes that support component communication and information exchange. E.g. CORBA, DCOM, Java Beans.
(c) Enterprise Application Frameworks: Related to specific application domains, e.g. telecom systems.

(2) The primary problem with frameworks is their inherent complexity and the time it takes to learn how to use them, which results in a high cost of introduction.

vii) Commercial-Off-The-Shelf (COTS): Offered by a third-party vendor. Related to the reuse of large-scale, off-the-shelf systems. These provide a lot of functionality, and their reuse hugely reduces costs and development time.

(1) Advantages of COTS: COTS offers more functionality than specialised components.
(2) Four Problems with COTS System Integration:
(a) Lack of Control over Functionality and Performance
(b) Problems with COTS System Interoperability
(c) No Control over System Evolution
(d) Support from COTS Vendors

viii) Component Development for Reuse: (1) Implement-based process: Reusable components are constructed from existing components that have
been successfully reused

(2) Component characteristics that lead to reusability:


(a) Stable domain abstractions: The main concepts in the application domain, which change slowly.
(b) The component should not reveal the way its state is represented, and should provide operations that allow the state to be accessed and updated (a minimal sketch of this follows the list).
(c) The component should be as independent as possible (stand-alone).
(d) All exceptions must be part of the component interface.
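A minimal sketch of characteristic (b): the component hides how its state is represented and exposes operations to access and update it. The queue example is an assumption for illustration only:

```python
# Sketch: a reusable component that does not reveal its state representation.
# Clients only use enqueue()/dequeue()/size(); the internal list could later be
# replaced by another data structure without affecting reusers.

class MessageQueue:
    def __init__(self):
        self._items = []                 # hidden representation

    def enqueue(self, item):
        self._items.append(item)

    def dequeue(self):
        if not self._items:
            # The exception is declared behaviour, i.e. part of the interface.
            raise IndexError("dequeue from an empty MessageQueue")
        return self._items.pop(0)

    def size(self):
        return len(self._items)

q = MessageQueue()
q.enqueue("job-1")
print(q.dequeue(), q.size())
```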

i) Design Patterns: In software design, design patterns have been linked with object-oriented design. They usually rely on object characteristics like inheritance, but the idea is equally applicable to all approaches to software design.
In Pastpaper Jan 2008

i) Four Essential Elements of a Design Pattern:

(1) Name
(2) Description of the Problem Area
(3) Solution Description
(4) Statement of the Consequences (results and benefits) of using the pattern.
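As an illustration of the four elements, here is a compact sketch of one well-known pattern; the choice of pattern and the code are illustrative additions, not taken from the original notes. Name: Observer. Problem: several objects must stay consistent with the state of one object without being tightly coupled to it. Solution: the subject keeps a list of observers and notifies each of them whenever its state changes. Consequences: subject and observers remain loosely coupled, at the cost of extra update traffic.

```python
# Observer pattern sketch (illustrative).

class Subject:
    def __init__(self):
        self._observers = []
        self._state = None

    def attach(self, observer):
        self._observers.append(observer)

    def set_state(self, state):
        self._state = state
        for obs in self._observers:      # notify all registered observers
            obs.update(state)

class LoggingObserver:
    def update(self, state):
        print(f"state changed to {state}")

subject = Subject()
subject.attach(LoggingObserver())
subject.set_state(42)
```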

9) Verification & Validation


a) Verification and Validation: Process of looking for defects in a software system. Conducted throughout the whole
project lifecycle. It starts with requirements reviews and continues through design reviews and code inspections to product testing.

i) Verification: The process of checking the software to confirm that it meets its specification.
ii) Validation: A more general process. Ensures the software meets the expectations of the customer.
iii) Main goal of software verification and validation: To make sure that the software is good enough for its intended purpose.

iv) Should include activities like: (1) Drawing up standards and procedures for software inspections and testing, and (2) Establishing checklists to drive program inspections and defining the software test plan.
b) Debugging: The process of locating and correcting defects.
c) Regression Testing: Re-inspecting the program or repeating previous test runs, to check that new changes to a program do not introduce new errors into the system (a small test sketch follows d) below).
d) The required confidence in software depends on:

i) Software Function: The level of confidence depends on how important the software is to an organisation.
ii) User Expectations: Many users have low expectations of their software.
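A minimal sketch of regression testing as defined in c) above, assuming a hypothetical discount function and using Python's built-in unittest module; re-running this suite after every change to the program checks that existing behaviour has not been broken:

```python
# Regression test sketch (illustrative function and values).
import unittest

def discounted_price(price, rate):
    """Function under test: apply a percentage discount."""
    return round(price * (1 - rate), 2)

class DiscountRegressionTests(unittest.TestCase):
    # Previously passing tests; re-run after each change to the program.
    def test_typical_discount(self):
        self.assertEqual(discounted_price(100.0, 0.25), 75.0)

    def test_no_discount(self):
        self.assertEqual(discounted_price(80.0, 0.0), 80.0)

if __name__ == "__main__":
    unittest.main()
```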


e) System checking and analysis techniques that can be used for verification and validation process: i) Software Testing: Executing an implementation of the software with test data and examining the outputs, and
its operational behaviour, to find out if its performing as required.

(1) Two distinct types of testing that may be used at different stages in the software process are:
(a) Defect testing is aimed at finding inconsistencies between a program and its specification. These inconsistencies are normally caused by program faults or defects.
(b) Statistical testing is done to assess the program's performance and reliability and to test how it works under operational conditions. Tests are planned to reflect the actual user inputs and their frequency.

(2) Testing is very important for: reliability assessment, performance analysis and user interface validation
and to check whether the software requirements are the same as the user specifications.

(3) Major Components of a Test Plan:

(a) Testing Process: Description of the major phases of the testing process.
(b) Requirements Traceability: Plan testing so that all requirements are tested individually.
(c) Tested Items: The software process products which will be tested should be identified.
(d) Testing Schedule: Overall testing schedule, including resource allocation.
(e) Test Recording Procedure: Test results should be systematically recorded, for ease of auditing etc.
(f) Hardware & Software Requirements: Determines the software tools and estimated hardware required.
(g) Constraints that affect the testing process, like staff shortages.
In Pastpaper Jan 2008

ii) Software Inspections: To ensure that the software being developed does not contain errors. Involves the activities of analysing and checking system representations like the requirements document, design diagrams, code etc. Can be applied at all stages of the process.

(1) Inspection techniques consist of:


(a) Program inspections, (b) Automated source code analysis (c) Formal verification.
In Pastpaper Jan 2009, Jan 2008

(2) Two reasons why inspections are more effective than testing at discovering defects:
(a) During a single inspection session, a lot of different defects may be detected. Testing normally detects only one error per test, because a failure may mask other defects.
(b) Reviews and inspections reuse domain and programming language knowledge.

(3) Program Inspection: Widely used to detect program defects. The inspection process should be guided by a defect checklist. Program code is often thoroughly checked by a small team.
(a) Roles of team members in the Inspection Process:
(i) Author or Owner: Programmer or designer responsible for producing the program or document.
(ii) Inspector: Responsible for finding errors, omissions and inconsistencies in programs and documents.
(iii) Reader: Responsible for paraphrasing the code or document at an inspection meeting.
(iv) Scribe: Responsible for recording the results of the inspection meeting.
(v) Chairman or Moderator: Responsible for managing the process and facilitating the inspection.
(vi) Chief Moderator: Responsible for inspection process improvements, checklist updating, standards development and more.
(b) Among the responsibilities of the moderator are:
(i) Selecting an inspection team;
(ii) Organising a meeting room;
(iii) Making sure that the material to be inspected and its specifications are complete.
(c) Before a program inspection starts, it is important that:
(i) The inspection team prepares an accurate specification of the code to be inspected.
(ii) The members of the inspection team know the organisational standards very well.
(iii) The latest, syntactically correct version of the code is available.
(d) Inspection Process:
(i) The program is given to the inspection team during the overview stage, where the author describes what the program is supposed to do.
(ii) A period of individual preparation follows, where each member tries to find defects, anomalies and non-compliance with standards in the code, without suggesting how to correct these.
(iii) After the inspection completes, the programmer corrects the identified problems.
(iv) In the follow-up stage, the moderator decides whether re-inspection of the code is needed.
(v) Lastly, the document is approved by the moderator for release.
(e) Inspection Checks (also used as Automated Static Analysis Checks):
(i) Data Faults: Are variables initialised before their values are used? Etc.
(ii) Control Faults: Are condition statements correct? Is each loop certain to terminate? Etc.
(iii) Input/Output Faults: Are all input variables used? Are output variables assigned a value? Etc.
(iv) Interface Faults: Correct number of parameters for functions? Do parameter types match? Etc.
(v) Storage Management Faults: Have all links been correctly reassigned after modification? Etc.
(vi) Exception Management Faults: Have all possible error conditions been taken into account?

In Pastpaper Jan 2008

In Pastpaper Sep 2010

(4) Automated Static Analysis:
(a) Static Program Analysers: Software tools that scan the source text of a program and detect possible faults and anomalies.
(b) 5 Stages of Static Analysis:
(i) Control Flow Analysis: Looks for loops with multiple exit or entry points and unreachable code.
(ii) Data Use Analysis: Emphasises how variables in the program are used.
(iii) Interface Analysis: Inspects the consistency of routine and procedure declarations and their use.
(iv) Information Flow Analysis: Identifies the dependencies between input variables and output variables.
(v) Path Analysis: This semantic phase identifies all possible paths through the program and the statements executed in each path.
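As an illustration of the kinds of faults these stages report, here is a deliberately faulty, made-up fragment; the function and its faults are assumptions created for the example, not taken from the notes:

```python
# Deliberately faulty fragment to illustrate what static analysis would flag.

def average(values, verbose):
    total = 0
    for v in values:
        total += v
    if verbose:
        label = "avg"
    # Data-use fault: 'label' may be used before assignment when verbose is False.
    print(label, total / len(values))
    # Path fault: division by zero on the path where 'values' is empty.
    return total / len(values)
    print("finished")   # Control-flow fault: unreachable code after return.
```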

In Sample Paper; In Pastpaper Sep 2008, Jan 2008

f) Cleanroom Software Development: A software development philosophy that uses a rigorous inspection process to avoid software defects. Its purpose is to produce zero-defect software.

i) 5 Main Characteristics of Cleanroom SD:
(1) Formal Specification: The software that will be developed is formally specified.
(2) Incremental Development: The software is divided into increments, developed and validated separately.
(3) Structured Programming: Only a limited number of control and data abstraction constructs are used.
(4) Static Verification: The software being developed is statically verified using rigorous software inspections.
(5) Statistical Testing of the System: To determine the reliability of each integrated software increment, it is tested statistically using tests derived from an operational profile developed in line with the system specification.

ii) Cleanroom Process Teams:
(1) Specification Team: Responsible for developing and maintaining the system specifications.
(2) Development Team: Responsible for developing and verifying the software.
(3) Certification Team: Responsible for developing statistical tests to test the software as it is being developed.

10) Software Testing:


a) Defect Testing: Discovers hidden defects in the software system before it is delivered to the customer.

(1) A successful defect test is a test which causes the system to perform incorrectly and so reveals a defect.

(2) Test Cases: Specifications of the test inputs and expected system outputs, with a statement of what is being tested.
(3) Guidelines for Testing:
(a) All system functions that are linked through menus must be tested.
(b) Combinations of functions must be tested.
(c) If user input is required by the system, all functions must be tested with both correct and incorrect input.

In Pastpaper Jan 2009

ii) Exhaustive Testing: Testing every program execution sequence (this is impractical).
iii) Defect Testing Techniques:
(1) Black-Box (Functional) Testing: Tests are designed based on the program or component specification. The tester gives input to the component or the system and examines the corresponding outputs. If the outputs are not as expected, then a problem has been detected. It does not require access to the source code.

In Pastpaper Sep 2010, Jan 2009

(2) Equivalence Partitioning: A way of building test cases which depends on finding partitions in the input and output data sets and running the program with values from these partitions. Tests check that the program handles values from each partition, including invalid partitions, correctly (a test-case sketch appears after (4) below).
(a) Input Equivalence Partitions: Sets of data where all set members must be processed in an equivalent way.
(b) Output Equivalence Partitions: Program outputs which have common characteristics, so they can be considered as a separate class.

In Pastpaper Jan 2009

(3) Structural (White-Box, Glass-Box, Clear-Box) Testing: Tests are derived from knowledge of the software's structure and implementation, to make sure each independent program path is executed at least once.
(a) Applied to small program units: subroutines or operations associated with an object.
(b) Code Analysis: To find the number of test cases needed to ensure all statements in the program are executed at least once.

In Sample Paper

(4) Path Testing: A structural testing strategy whose objective is to exercise every independent execution path through a component or program.
(a) If every independent path is executed, then all statements in the component should have been executed at least once.
(b) Used at the unit testing and module testing stages of the testing process.
(c) Skeletal Model: The starting point for path testing is a program flow graph, which shows all paths through the program.
(d) The number of independent paths in a program can be found by computing the Cyclomatic Complexity (CC) of the connected flow graph G: CC(G) = Number of Edges - Number of Nodes + 2 (equivalently, the number of decision nodes + 1). A small computation sketch follows.
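A minimal sketch of computing cyclomatic complexity from a flow graph using the edges-and-nodes formula above; the graph is a made-up example corresponding to a loop that contains an if-else:

```python
# Cyclomatic complexity of a connected flow graph: CC = E - N + 2.
# Illustrative graph: node 1 = entry, 2 = loop test, 3 = if test,
# 4/5 = branches (each returning to the loop test), 6 = exit.

def cyclomatic_complexity(nodes, edges):
    return len(edges) - len(nodes) + 2

nodes = {1, 2, 3, 4, 5, 6}
edges = {(1, 2), (2, 3), (2, 6), (3, 4), (3, 5), (4, 2), (5, 2)}

cc = cyclomatic_complexity(nodes, edges)
print(cc)   # 7 - 6 + 2 = 3 -> at least 3 independent paths / test cases
```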
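Returning to (2) Equivalence Partitioning above, here is a sketch of deriving test cases from partitions. The function and its valid range (1 to 99) are assumptions made for the example; one value is chosen from each valid and invalid partition, plus the boundaries:

```python
# Equivalence partitioning sketch (illustrative function and partitions).
# Valid partition: 1..99.  Invalid partitions: values below 1 and above 99.

def accept_count(n):
    """Function under test: valid only for 1 <= n <= 99."""
    return 1 <= n <= 99

# One representative test value per partition, plus boundary values.
test_cases = {
    "below valid range": (0, False),
    "lower boundary":    (1, True),
    "mid valid range":   (50, True),
    "upper boundary":    (99, True),
    "above valid range": (100, False),
}

for name, (value, expected) in test_cases.items():
    assert accept_count(value) == expected, name
print("all partition tests passed")
```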

b) Integration Testing: A process that involves building the system and testing the
complete system for problems that are caused by component interactions. Integration tests should be built from the system specification.

(1) Main problem: Localising errors found during the process.
ii) Incremental Approach: Used to make it easier to locate errors; a minimal system configuration is integrated and tested first, then components are added to this minimal configuration and the system is tested after each added increment.
In Sample Paper

iii) Top-Down and Bottom-Up Testing:
(1) Top-down testing is an essential part of a top-down development process, where the development process begins with high-level components and works down the component hierarchy (a small sketch follows (2) below).

In Pastpaper Jan 2009

(2) Bottom-up testing involves integrating and testing the modules at the lower levels in the hierarchy, and then goes up the hierarchy of modules until the final module is tested.
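A minimal illustration of the top-down idea in (1): when a high-level component is tested before the lower-level components it depends on exist, the missing parts can be replaced by simple placeholder implementations. The module names and canned data are hypothetical:

```python
# Top-down integration sketch: the high-level ReportGenerator is exercised
# while the real database layer does not exist yet; a placeholder stands in.

class DatabasePlaceholder:
    """Stand-in for the not-yet-integrated lower-level component."""
    def fetch_orders(self, customer_id):
        return [{"id": 1, "total": 10.0}, {"id": 2, "total": 5.5}]   # canned data

class ReportGenerator:
    def __init__(self, database):
        self._db = database

    def total_spent(self, customer_id):
        return sum(order["total"] for order in self._db.fetch_orders(customer_id))

# The high-level component is tested before the real lower level exists.
report = ReportGenerator(DatabasePlaceholder())
assert report.total_spent(customer_id=42) == 15.5
print("top-level component behaves as expected with a placeholder lower level")
```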

In Pastpaper Sep 2008

iv) Interface Testing: To detect errors that could be introduced into the system because of interface errors or
invalid input to the interfaces. Very important for Object-Oriented development.

(1) Types of Interfaces:

(a) Parameter Interfaces: Data passed from one procedure to another.
(b) Shared Memory Interfaces: A block of memory is shared between procedures.
(c) Procedural Interfaces: A subsystem encapsulates a set of procedures to be called by other sub-systems.
(d) Message Passing Interfaces: Subsystems request services from other subsystems.

(2) Classes (Three Categories) of Interface Errors:

(a) Interface Misuse: A calling component calls another component and makes an error in its use of the interface (a small illustration follows this list).
(b) Interface Misunderstanding: The calling component misunderstands the interface specification of the called component, and makes wrong assumptions about the behaviour of the called component.
(c) Timing Errors: Occur in real-time systems that use a shared memory or message passing interface.
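A small illustration of interface misuse from the list above; the function and its parameters are assumptions made only for the example:

```python
# Interface error sketch (illustrative).

def transfer(amount, account_from, account_to):
    """Called component: expects (amount, source, destination) in this order."""
    print(f"moving {amount} from {account_from} to {account_to}")

# Interface misuse: the calling component passes parameters in the wrong order,
# so the amount and the source account end up swapped.
transfer("ACC-1", 100, "ACC-2")      # wrong use of the interface

# Correct call, matching the interface specification.
transfer(100, "ACC-1", "ACC-2")
```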
In Sample Paper

v) Stress Testing: Planning a series of tests where the load is steadily increased until the system performance becomes unacceptable.


c) Testing Workbench Tools:
i) Test Manager: Used to manage the execution of program tests.
ii) Test Data Generator: Used to generate test data for the program to be tested.
iii) Oracle: Used to generate predictions of expected test results.
iv) File Comparator: Used to compare the results of program tests with previous test results and report differences between them.
v) Report Generator: Used to provide report definition and generation facilities for test results.
vi) Dynamic Analyser: Used to add code to a program to count how many times each statement has been executed.
vii) Simulator:
(1) Target simulators simulate the machine where the program will be executed.
(2) User interface simulators are script-driven programs that simulate multiple simultaneous user interactions.

11) Process Improvement and Software Quality Assurance:


a) Process improvement: includes process analysis, standardisation, measurement and change.

b) Process Characteristics:
i) Understandability: To what extent is the process explicitly defined, and how easy is it to understand the process definition?
ii) Visibility: Do the process activities culminate in clear results, so that the progress of the process is externally visible?
iii) Supportability: To what extent can the process activities be supported by CASE tools?
iv) Acceptability: Is the defined process acceptable to and usable by the engineers producing the software?
v) Reliability: Is the process designed so that process errors are avoided or trapped before they result in product errors?
vi) Robustness: Can the process continue in spite of unexpected problems?
vii) Maintainability: Can the process evolve to reflect changing organisational requirements?
viii) Rapidity: How fast can the process of delivering a system from a given specification be completed?

In Pastpaper Jan 2008

c) Process Improvement Procedure: Stages:

i) Process Analysis: Includes the activities of examining existing processes and producing a process model in order to document and understand the process.

ii) Improvement Identification: Using the process analysis results to identify quality, schedule or cost problems where process factors might influence the product quality.

iii) Process Change Introduction: Introducing new procedures, methods and tools and integrating them with other process activities.

iv) Process Change Training: It is impossible to get the full benefits from process changes without training.

In Pastpaper Jan 2009

v) Change Tuning: When minor problems are discovered, modifications to the process are proposed and introduced. This should continue for several months until the software engineers are satisfied with the new process.

In Sample Paper

d) Purpose of process improvement: To reduce the number of product defects.
e) Large Projects: The main factor in product quality is the software process.
i) Biggest problems with large projects: Integration, project management and communications.

f) Small Projects: The quality of the development team is more important than the development process used.
g) Process Analysis & Modeling: Requires studying existing processes and developing an abstract model of these processes that identifies their main characteristics.

i) Process Analysis: The study of existing processes in order to understand the relationships between different parts of the process.

ii) Process Analysis Techniques:
(1) Questionnaires & Interviews: Asking the software engineers on the project about what happens in the project. The answers are then used during personal interviews with those involved in the process.
(2) Ethnographic Studies: Used to understand the nature of software development as a human activity.

iii) Process Model Elements:
iv) Process Exceptions: Examples of exception types that a project manager must handle are:

(1) Several important people become ill at the same time just before an important project review.

(2) A communications line or network failure that means electronic mail cannot be used for several days.

(3) An organisational reorganisation that causes managers to spend most of their time working on organisational matters rather than on project management.

(4) An unexpected request for new project proposals; instead of concentrating on the project, the team has to work on the proposal.

In Pastpaper Sep 2008

h) Process Measurement: Consists of quantitative data about the software process.
i) Process Metrics: Can be used to assess whether or not the efficiency of a process has been improved.
(1) Three Classes of Process Metrics:
(a) Time taken for a particular process to be completed. E.g. total time dedicated to the process.
(b) Resources required for a particular process. E.g. total effort in person-days.
(c) Number of occurrences of a particular event. E.g. number of errors found during code inspection.

In Pastpaper Sep 2010

ii) Goal-Question-Metric (GQM) paradigm: Used to help the developers to decide what measurements should be
taken and how they should be used

(1) GQM Measurement:


(a) Goals: What is the organisation trying to achieve? The objective of process improvement is to satisfy these goals.
(b) Questions: Questions about areas of uncertainty related to the goals. You need process knowledge to derive these.
(c) Metrics: Measurements to be collected to answer the questions.

In Sample Paper

(2) Advantages of GQM:

(a) It separates organisational concerns or goals from specific process concerns or questions.
(b) It focuses on data collection and proposes that collected data should be analysed in different ways depending on the question it is meant to answer.
In Pastpaper Sep 2010

iii) SEI Software Capability Maturity Model (CMM): (1) Categorises software processes as:
(a) Initial: Essentially uncontrolled.
(b) Repeatable: Product management procedures defined and used.
(c) Defined: Process management procedures and strategies defined and used.
(d) Managed: Quality management strategies defined and used.
(e) Optimising: Process improvement strategies defined and used.

(2) Three problems in CMM:


(a) The model focuses entirely on project management rather than product development.
(b) It does not include risk analysis and resolution as a key process area.
(c) The area of applicability of the model is not defined.

In Pastpaper Jan 2009

iv) Capability Assessment: Relies on a standard questionnaire that is intended to identify the main processes in the organisation.

v) Six Sigma: A rigorous and disciplined methodology that uses data and statistical analysis to measure and improve a company's operational performance by identifying and eliminating defects in manufacturing and service-related processes.

(1) 3 Core Steps:


(a) Define customer requirements, deliverables and project goals via well-defined methods of customer communication.
(b) Measure the existing process and its output to determine current quality performance.
(c) Analyse defect metrics and determine causes.

(2) 2 Additional Steps Suggested when an existing software process is in place (DMAIC):
(a) Improve the process by eliminating the root causes of defects.
(b) Control the process to ensure that future work does not reintroduce the causes of defects.

(3) 2 Additional Steps Suggested if an organisation is developing a new software process (DMADV):
(a) Design the process to: (i) avoid the root causes of defects and (ii) meet customer requirements.
(b) Verify that the process model will avoid defects and meet customer requirements.


In Pastpaper Jan 2009

vi) ISO 9000 Quality Standard: Describes a quality assurance system in general that can be applied to any business
regardless of the products or services offered.

(1) ISO 9001:2000: The quality assurance standard that applies to software engineering. The standard contains
20 requirements that must be present for an effective quality assurance system.

(2) ISO 9000-3: Developed to help interpret the standard for use in the software process, because the ISO 9001:2000 standard is applicable to all engineering disciplines.

In Sample Paper

i) Process Classification: Different types of Processes:

i) Informal Processes: These are the processes where there is no strictly defined and clear process model required.

ii) Managed Processes: These are the processes where there is a defined process model prepared. This is used to guide the development process.

iii) Methodical Processes: These are the processes where some defined development method or methods, like systematic methods for object-oriented design, are used.

iv) Improving Processes: These are the processes that have inherent improvement objectives. There is a specific budget for process improvements and procedures ready for introducing such improvements.

12) Software Change:


a) Three Different Strategies for Software Change:
i) Software Maintenance: Changes to the software are made in response to changed requirements, but the basic structure of the software remains stable. This is the most frequently used approach to system change.

ii) Architectural Transformation: This is a very drastic software change approach compared to maintenance
because it involves making major changes to the architecture of the software system. For example, systems change from a centralised, data-centric architecture to client-server architecture.

iii) Software Reengineering: In this strategy, the system is modified to make it easier to understand and change.
b) Configuration Management: The management of changing software products.
c) Program Evolution Dynamics: The study of system change.
d) Lehman's Laws:
i) Continuing Change: A program that is used in a real-world environment necessarily must change or become progressively less useful in that environment.

In Pastpaper Jan 2008

ii) Increasing Complexity: As an evolving program changes, its structure tends to become more complex.
iii) Large Program Evolution: Program evolution is a self-regulating process. System attributes such as size, time between releases and the number of reported errors are approximately invariant for each system release.

iv) Organisational Stability: Over a programs lifetime, its rate of development is approximately constant and
independent of the resources devoted to system development.

v) Conservation of Familiarity: Over the lifetime of the system, the incremental change in each release is
approximately constant.

e) Software Maintenance: The normal process of changing a system after it has been delivered.
i) Types of Maintenance:
(1) Maintenance to Repair Software Faults (Bug Fixing): Coding errors are normally inexpensive to repair; design errors are more costly; requirements errors are the most expensive to repair.

(2) Maintenance to Adapt the Software to a Different Operating Environment: This maintenance type is needed when some aspect of the system's environment, such as the hardware, the platform operating system or other support software, changes.

(3) Maintenance to Add to or Modify the System's Functionality: This type of maintenance is essential when the system requirements change in line with organisational or business change.

ii) Overall lifetime costs can be reduced if more effort is given during
system development to produce a maintainable system.
In Pastpaper Sep 2008

iii) One main reason why maintenance costs are high is that it is more
expensive to add functionality after a system is already in operation than to implement the same functionality during development.

iv) The Main Factors that Differentiate Development and Maintenance that Lead to Higher Maintenance Costs: (1) Team Stability: After a system has been released to users, usually the development team will disintegrate
and they will work on new projects.

(2) Contractual Responsibility: The contract to maintain a system is normally different from the system
development contract.

(3) Staff Skills: Maintenance staff are usually inexperienced and unfamiliar with the application area. (4) Program Age and Structure: As programs grow old, their structure is likely to be degraded by change and
they become harder to understand and change.

v) Maintenance Process: Differs from one organisation to another, depending on the software being maintained, the development processes used in the organisation and the people involved in the process.

(1) Change Requests: Requests for system changes from users, customers or management.

(2) 3 Reasons why some change requests must be implemented urgently:
(a) Fault repair
(b) Changes to the system's environment
(c) Urgently required business changes

vi) Maintenance Prediction: Concerned with assessing which parts of the system may cause problems and have high maintenance costs.

(1) Predicting the number of changes requires an understanding of the relationships between a system and its environment.

(2) Tightly coupled systems require changes whenever the environment is changed.

(3) Factors influencing this relationship are:
(a) Number and complexity of system interfaces;
(b) Number of inherently volatile system requirements;
(c) The business processes where the system is used.

(4) Process measurements may be used to assess maintainability:
(a) Number of requests for corrective maintenance;
(b) Average time required for impact analysis;
(c) Average time taken to implement a change request;
(d) Number of outstanding change requests.

(5) If any or all of these are increasing, this may indicate a decline in maintainability.

