a) Requirements Engineering: Includes all of the activities needed to create and maintain a system requirements
document.
(1) Stakeholders: Everyone who has some direct or indirect influence on the system requirements.
(4) Activities: Information Assessment, Information Collection and Report Writing.
(5) Sources of Information:
(a) Department managers where the system will be used;
(b) Software engineers who are familiar with the type of system that is proposed;
(c) Technology experts; and
(d) End-users of the system.
(ii) Event Scenarios: Used to document the system behaviour when presented with specific events. Includes a description of data flows and the actions of the system, and documents the exceptions.
(iii) Use-case: A scenario-based technique for requirements elicitation. Identifies the actors involved in an interaction and names the type of interaction.
(b) Ethnography: A technique of observation that can be used to understand social and organisational requirements.
(i) The system analyst involves himself in the working environment where the system will be used.
(ii) It helps discover hidden system requirements which represent the actual process.
(iii) Two types of requirements that ethnography is usually useful at discovering:
1. Requirements taken from the way people actually work, rather than the way process definitions say they should work.
2. Requirements taken from cooperation and awareness of other people's activities.
iii) Requirements Specification:
iv) Requirements Validation: Demonstrating that the requirements match the users' requests; concerned with finding problems with the requirements.
(1) Validation is important because errors in a requirements document can lead to extensive modification
costs when they are later discovered during development or after the system is in service.
In Pastpaper Sep 2010
Requirements Reviews: Systematic manual analysis of the requirements.
Prototyping: Using an executable model of the system to check requirements.
Test-case Generation: Developing tests for requirements to check testability.
Automated Consistency Analysis: Checking the consistency of a structured requirements description.
(4) Requirements Review: A manual process of checking the requirements document for anomalies and omissions, which involves both client and contractor staff.
(a) Informal Requirements Review: Just involves requirements discussion between developers and as many system users as possible.
(b) Formal Requirements Review: The development team walks the client through the system requirements, explaining the implications of each requirement.
c) Requirements Management: The process of understanding and controlling changes to system requirements. Business, organisational and technical changes inevitably lead to changes to the requirements for a software system.
In Sample Paper
i) New requirements appear because:
(1) Different users have different requirements and priorities.
(2) The people who pay for a system and the users of a system are usually not the same people.
(3) The business and technical environment of the system changes frequently, and these changes must be reflected in the system itself.
ii) Three principal stages of a change management process:
(1) Problem Analysis and Change Specification
(2) Change Analysis and Costing
(3) Change Implementation
www.oumstudents.tk Page 2
iv) Techniques used to understand the users' needs: task analysis, ethnographic studies, user interviews and observations, or a mixture of all of these techniques.
i) User Familiarity Principle: States that users should not be forced to adapt to an interface just because it is convenient to implement.
(1) The interface should use terms and concepts which are drawn from the experience of the people who will
make most use of the system
ii) Consistency Principle: System commands and menus should have the same format, command punctuation
should be the same and parameters should be passed to all commands in the same way.
In Sample Paper
(1) The interface should be consistent in that, wherever possible, comparable operations should be activated
in the same way.
iii) Minimal Surprise Principle: Users can get irritated when a system behaves unexpectedly.
(1) Users should never be surprised by the behaviour of a system.
iv) Recoverability Principle: Users cannot avoid making mistakes when using a system.
(1) The interface should include mechanisms to allow users to recover from errors.
v) User Guidance (Assistance) Principle: States that interfaces should have built-in user help facilities.
(1) The interface should provide meaningful feedback when errors occur and provide context-sensitive user help facilities.
vi) User Diversity Principle: States that there are different types of users for many interactive systems.
(1) The interface should provide appropriate interaction facilities for different types of system user.
(2) There are two types of users:
(a) Casual Users: Users who interact occasionally with the system.
(b) Power Users: Users who use the system for several hours each day.
vii) Principle of acknowledging user diversity can conflict with the other interface design principles. The reason is
that some types of user may prefer to have very rapid interaction rather than user interface consistency.
iii) Form Fill-in: Filling in the fields of a form.
iv) Command Language: Issuing a special command and related parameters to instruct the system what to do.
v) Natural Language: For example, to delete a file the user could type "delete the file named xxx".
i) Model-View-Controller (MVC): First used in Smalltalk. It is a useful way to support multiple presentations of data, and users can interact with each presentation using a style that suits it. The data to be displayed is encapsulated in a model object. Each model object can have several separate view objects associated with it, where each view is a different display representation of the model.
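The MVC idea above can be sketched in a few lines of Python. This is a minimal illustration, not any particular framework's API; all class and method names (Model, TextView, BarView, Controller) are invented for the example. The key point is that one model notifies several views, each rendering the same data differently.

```python
class Model:
    """Holds the data and notifies every registered view when it changes."""
    def __init__(self, value=0):
        self._value = value
        self._views = []

    def attach(self, view):
        self._views.append(view)

    def set_value(self, value):
        self._value = value
        for view in self._views:        # each view re-renders the same model data
            view.render(self._value)

class TextView:
    """One display representation: plain text."""
    def __init__(self):
        self.output = None
    def render(self, value):
        self.output = f"value = {value}"

class BarView:
    """A second representation of the same model: a bar of '#' characters."""
    def __init__(self):
        self.output = None
    def render(self, value):
        self.output = "#" * value

class Controller:
    """Translates user input into model updates."""
    def __init__(self, model):
        self._model = model
    def user_typed(self, text):
        self._model.set_value(int(text))

model = Model()
text_view, bar_view = TextView(), BarView()
model.attach(text_view)
model.attach(bar_view)
Controller(model).user_typed("4")
print(text_view.output)  # value = 4
print(bar_view.output)   # ####
```

A single user action flows through the controller to the model, and both views update without knowing about each other, which is exactly the "several separate view objects per model" point in the notes.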
ii) Factors to be considered when deciding how to present information:
(1) User interest in specific information or in the relationships between different data values.
(2) The rate at which information values change.
(3) User response to information change.
(4) User interaction with the displayed information.
(5) Type of information to be displayed, for example textual or numeric.
iii) Guidelines for Effective Colour Use in User Interfaces:
(1) Limit the colours and be conservative in how they are used.
(2) Use colour change to show a change in system status.
(3) Use colour coding to support the task which users are trying to perform.
(4) Use colour coding in a thoughtful and consistent way.
(5) Be careful about colour pairings.
iv) Two most frequent errors made by designers when using colour in a user interface:
(1) Using too many colours in a display.
(2) Associating meanings with particular colours. Colour should not be used to represent meaning because:
(a) About 10% of men are colour-blind.
(b) Human colour perceptions differ, and different professions attach different interpretations to particular colours.
e) User Support: Help systems are one part of user interface design, used for user guidance.
i) Three areas covered by user support help systems:
(1) The messages produced by the system in response to user actions.
(2) The online help system.
(3) The documentation supplied with the system.
In Sample Paper
ii) Factors that should be considered when designing help text or error messages: (1) Context: Be aware of what the user is doing and should adjust the output message to the current context. (2) Experience: Should provide both long meaningful messages (for beginners) and short messages (for more
experienced users) and allow the user to control message conciseness.
(3) Skill Level: Messages should be tailored to the users' skills as well as their experience (terminology etc.).
(4) Style: Messages should be positive rather than negative. They should use the active rather than the passive mode of address. They should never be insulting or try to be funny.
(5) Culture: Wherever possible, the designer of messages should be familiar with the culture of the country where the system is sold. A suitable message for one culture might be unacceptable in another.
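The context and experience factors above can be sketched as a small Python helper. This is a hypothetical example; the function name, message table and error code are all invented for illustration, not part of any real system.

```python
# Invented message catalogue: each error code carries a long form for
# beginners and a short form for experienced users (the Experience factor).
MESSAGES = {
    "file_not_found": {
        "short": "File not found.",
        "long": ("The file you asked for could not be found. "
                 "Check the spelling of the name and that the file "
                 "has not been moved or deleted."),
    },
}

def error_message(code, experience="beginner", context=None):
    entry = MESSAGES[code]
    # Experience: long, meaningful messages for beginners, short ones for experts.
    text = entry["long"] if experience == "beginner" else entry["short"]
    # Context: mention what the user was doing when the error occurred.
    if context:
        text = f"While {context}: {text}"
    return text

print(error_message("file_not_found", "expert"))
print(error_message("file_not_found", "expert", context="saving the report"))
```

The same error code yields different wording depending on who is reading it and what they were doing, which is the tailoring the notes describe.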
iii) Error Messages: Very important, as they can form the first impression users have of a system. Error messages should always be polite, concise, consistent and constructive. A good error message should suggest how the error can be corrected and provide a link to a help system.
iii) Introductory Manual: Presents an informal introduction to the system, describing its normal usage, how to get
started and how end-users might use the common system facilities.
iv) Reference Manual: Explains the system facilities and their usages, gives a list of error messages and possible
causes and explains how to recover from detected errors.
v) Administrator's Manual: Explains the messages generated when the system interacts with other systems and how to respond to these messages.
g) Interface Evaluation: The process of testing the usability of an interface and testing whether it meets user requirements.
i) Usability Attributes:
(1) Learnability: How long does it take a new user to become productive with the system?
(2) Speed of Operation: How well does the system response match the user's work practice?
(3) Robustness: How tolerant is the system of user error?
(4) Recoverability: How good is the system at recovering from user errors?
(5) Adaptability: How closely is the system tied to a single model of work?
ii) Simpler, less expensive techniques of user interface evaluation:
(1) Questionnaires used to collect information about users' opinions of the interface.
(2) Observation and interviews of users working with the system.
(3) Video recording of typical system use.
(4) The insertion of code that collects information about the most-used facilities and the most common errors.
based on existing examples of good design and use these software components where available and suitable.
d) There are three main requirements for component re-use:
i) Reusable components need to be catalogued and kept so that they can be found for future use.
ii) The people who reuse the components must have confidence that the components are reliable and functional.
iii) The reusable components must have associated documentation to help the people who want to reuse them understand them and use them in a new application.
In Pastpaper Sep 2010, Jan 2008
e) Advantages of Software Re-Use:
i) Reduced Overall Development Costs: Fewer software components need to be specified, designed, implemented and validated.
ii) Increased Reliability: Reused components that have been implemented in working systems should be more
reliable than new components.
iii) Reduced Process Risk: The uncertainty in the cost of reusing an existing software component is less than that of developing a new component.
iv) Effective Use of Specialists: The application specialists can develop reusable components that use their
knowledge instead of doing the same work on different projects.
v) Standard Compliance: The use of standard user interfaces enhances reliability as users make fewer mistakes
when given a familiar interface.
vi) Accelerated Development: Speed of system production is increased because time for both development and
validation should be reduced.
In Pastpaper Jan 2008
f) Disadvantages of Software Re-Use: i) Increased Maintenance Costs: Reused elements of the system may become incompatible with system changes,
if source code is not available.
ii) Lack of Tool Support: It is difficult to find tools that support component reuse; e.g. many CASE tools do not support it.
iii) Not-Invented-Here Syndrome: Writing new software is viewed as more challenging than reusing other people's software.
iv) Finding and Adapting Reusable Components: Software components have to be searched for in a library or archive, understood and, where necessary, adapted to work in a new environment.
g) Generator-Based Re-Use: Re-usable knowledge is included in a program generator system that can be
programmed in a domain-oriented language
(1) Cost-effective, but depends on the identification of conventional domain abstractions.
(2) Used in business data processing, language processing and in command and control systems.
(3) Makes it easier for end-users to develop programs.
h) Component-Based Development: Introduced because object-oriented development alone did not lead to sufficient reuse.
i) Stand-alone Service Providers: When a system needs some service, it calls on a component to provide that service without any concern about where that component is executing or the programming language used to develop it.
ii) Two Characteristics of a Reusable Component:
(1) The component is an independent executable entity; its source code is not available.
(2) Components publish their interface, and all interactions take place through that interface.
iii) Component Interfaces:
(1) Requires Interface: Specifies the services that must be provided by the system in which the component is used.
(2) Provides Interface: Describes the services provided by the component.
iv) Five Different Levels of Abstraction:
(1) Functional Abstraction: The component implements a single function, such as a mathematical function.
(2) Casual Groupings: The component is a collection of loosely related entities that might consist of data declarations, functions and so on.
In Sample Paper
(3) Data Abstractions: The component represents a data abstraction or class in an object-oriented language.
(4) Cluster Abstractions: The component is a group of interrelated classes that work together. These classes are sometimes known as frameworks.
(5) System Abstractions: The component is an entire self-contained system. Reusing system-level abstractions
is sometimes called COTS (Commercial-Off-The-Shelf) product reuse.
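The requires/provides interface distinction above can be sketched in Python using `typing.Protocol` for structural interfaces. The component and service names here (Logger, ReportPrinter) are invented for illustration; the point is that the component declares what it requires from its environment and exposes what it provides, without revealing its internals.

```python
from typing import Protocol

class Logger(Protocol):
    """The interface this component *requires* from the system using it."""
    def log(self, message: str) -> None: ...

class ReportPrinter:
    """A component: its public methods form its *provides* interface."""
    def __init__(self, logger: Logger):
        self._logger = logger          # required service, supplied from outside

    def print_report(self, lines):     # provided service
        for line in lines:
            self._logger.log(line)

class ListLogger:
    """One possible implementation of the required Logger service."""
    def __init__(self):
        self.records = []
    def log(self, message: str) -> None:
        self.records.append(message)

logger = ListLogger()
ReportPrinter(logger).print_report(["a", "b"])
print(logger.records)  # ['a', 'b']
```

Any object with a matching `log` method satisfies the requires interface, so the component can be dropped into a new system without changes, which is the reuse property the notes describe.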
v) Proponents of object-oriented development suggested that objects were the most appropriate abstraction for reuse.
vi) Frameworks (or Application Frameworks): A subsystem design made up of a collection of abstract and concrete classes and the interfaces between them.
(2) The primary problem with frameworks is their inherent complexity and the time it takes to learn to use them, which gives them a high cost of introduction.
vii) Commercial-Off-The-Shelf (COTS): Systems offered by a third-party vendor; related to the reuse of large-scale, off-the-shelf systems. These provide a lot of functionality, and their reuse can hugely reduce costs and development time.
(1) Advantages of COTS: COTS offers more functionalities than specialised components. (2) Four Problems with COTS System Integration:
(a) Lack of control over functionality and performance;
(b) Problems with COTS system interoperability;
(c) No control over system evolution; and
(d) Support from COTS vendors.
viii) Component Development for Reuse: (1) Implement-based process: Reusable components are constructed from existing components that have
been successfully reused
i) Design Patterns: In software design, design patterns have been closely linked with object-oriented design. Published patterns usually rely on object characteristics such as inheritance, but the general principle is equally applicable to all approaches to software design.
In Pastpaper Jan 2008
i) Verification: The process of checking software to confirm that it meets its specification.
ii) Validation: A more general process; it ensures the software meets the expectations of the customer.
iii) The main goal of software verification and validation is to make sure that the software is good enough for its intended purpose.
iv) Should include activities like:
(1) Drawing up standards and procedures for software inspections and testing.
(2) Establishing checklists to drive program inspections and defining the software test plan.
b) Debugging: The process of locating and correcting defects.
c) Regression Testing: Re-inspecting the program or repeating previous test runs to check that new changes to a program do not introduce new errors into the system.
d) The required confidence in software depends on:
i) Software Function: The level of confidence depends on how important the software is to an organisation.
ii) User Expectations: Many users have low expectations of their software.
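The regression-testing idea in (c) above can be shown as a short Python sketch: earlier test runs are kept as input/expected-output pairs and re-run after every change. The `discount` function and its suite are invented for illustration.

```python
def discount(price, rate):
    """A function that was recently changed: the rate is now clamped to [0, 1]."""
    rate = min(max(rate, 0.0), 1.0)
    return price * (1 - rate)

# Previous test runs, kept as (inputs, expected output) pairs so they can be
# repeated after each change to the program.
REGRESSION_SUITE = [
    ((100.0, 0.10), 90.0),
    ((100.0, 0.00), 100.0),
    ((50.0, 0.50), 25.0),
]

def run_regression(fn, suite):
    """Re-run every earlier test; return the ones that now fail."""
    return [(args, expected, fn(*args))
            for args, expected in suite
            if fn(*args) != expected]

print(run_regression(discount, REGRESSION_SUITE))  # [] -> no regressions
```

An empty failure list means the change did not break any behaviour the earlier tests had established; a non-empty list points directly at the regressions.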
e) System checking and analysis techniques that can be used for the verification and validation process:
i) Software Testing: Executing an implementation of the software with test data and examining the outputs and its operational behaviour, to find out if it is performing as required.
(1) Two distinct types of testing that may be used in different stages in the software process are:
(a) Defect Testing: Aims to find inconsistencies between a program and its specification. These inconsistencies are normally caused by program faults or defects.
(b) Statistical Testing: Done to assess the program's performance and reliability and to test how it works under operational conditions. Tests are planned to reflect the actual user inputs and their frequency.
(2) Testing is very important for: reliability assessment, performance analysis and user interface validation
and to check whether the software requirements are the same as the user specifications.
ii) Software Inspections: Ensure that the software being developed does not contain errors. Involves analysing and checking system representations such as the requirements document, design diagrams and code. Can be applied at all stages of the process.
(2) Two reasons why Inspections are more effective than testing to discover defects:
(a) During a single inspection session, many different defects may be detected, whereas testing usually detects only one error per test.
(b) Reviews and inspections reuse domain and programming language knowledge.
(3) Program Inspection: Widely used nowadays to detect program defects. The inspection process should be guided by a defect checklist. Program code is often thoroughly checked by a small team.
(a) Roles of team members in the inspection process:
(i) Author or Owner: The programmer or designer responsible for producing the program or document.
(ii) Inspector: Responsible for finding errors, omissions and inconsistencies in programs and documents.
(iii) Reader: Responsible for paraphrasing the code or document at an inspection meeting.
(iv) Scribe: Responsible for recording the results of the inspection meeting.
(v) Chairman or Moderator: Responsible for managing the process and facilitating the inspection.
(vi) Chief Moderator: Responsible for inspection process improvements, checklist updating, standards development and so on.
(b) Among the responsibilities of the moderator are:
(i) Selecting an inspection team;
(ii) Organising a meeting room;
(iii) Making sure that the material to be inspected and its specifications are complete.
(c) Before a program inspection starts, it is important that:
(i) The inspection team has an accurate specification of the code to be inspected.
(ii) The members of the inspection team know the organisational standards well.
(iii) The latest, syntactically correct version of the code is available.
(d) Inspection Process:
(i) The program is given to the inspection team during the overview stage, where the author describes what the program is supposed to do.
(ii) A period of individual preparation follows, where each member tries to find defects, anomalies and non-compliance with standards in the code, without suggesting how to correct them.
(iii) After the inspection completes, the programmer corrects the identified problems.
(iv) In the follow-up stage, the moderator decides whether re-inspection of the code is needed.
(v) Lastly, the document is approved by the moderator for release.
(e) Inspection Checks (also Automated Static Analysis Checks):
(i) Data Faults: Are variables initialised before their values are used? Etc.
(ii) Control Faults: Are condition statements correct? Is each loop certain to terminate? Etc.
(iii) Input/Output Faults: Are all input variables used? Are output variables assigned a value? Etc.
(iv) Interface Faults: Do functions have the correct number of parameters? Do parameter types match? Etc.
(v) Storage Management Faults: Have all links been correctly reassigned after modification? Etc.
(vi) Exception Management Faults: Have all possible error conditions been taken into account?
(a) Static Program Analysers: Software tools that scan the source text of a program and detect possible faults and anomalies.
(b) Five Stages of Static Analysis:
(i) Control Flow Analysis: Looks for loops with multiple exit or entry points and unreachable code.
(ii) Data Use Analysis: Emphasises how variables in the program are used.
(iii) Interface Analysis: Inspects the consistency of routine and procedure declarations and their use.
(iv) Information Flow Analysis: The process of identifying the dependencies between input variables and output variables.
(v) Path Analysis: This semantic phase identifies all possible paths through the program and lists the statements executed in each path.
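A toy version of the data use analysis stage above can be written with Python's standard `ast` module: flag variables that are assigned but never read. This is a deliberately simplified sketch (it ignores scopes, attributes and augmented assignments); real static analysers are far more thorough.

```python
import ast

def unused_variables(source):
    """Return names that are assigned somewhere but never read."""
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):   # appears on the left of '='
                assigned.add(node.id)
            else:                                 # appears in an expression
                used.add(node.id)
    return sorted(assigned - used)

code = """
x = 1
y = 2
print(x)
"""
print(unused_variables(code))  # ['y']
```

Assigned-but-unused variables are exactly the kind of data fault the inspection checklist asks about, found here without ever running the program.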
development philosophy that uses a rigorous inspection process to avoid software defects. Its purpose is to produce zero-defect software.
Incremental Development: The software is divided into increments that are developed and validated separately.
Structured Programming: Only a limited number of control and data abstraction constructs are used.
Static Verification: The software being developed is statically verified using rigorous software inspections.
Statistical Testing of the System: To determine the reliability of an integrated software increment, it is tested statistically using tests derived from an operational profile developed in line with the system specification.
ii) Cleanroom Process Teams: (1) Specification Team: Responsible for developing and maintaining system specifications. (2) Development Team: Responsible for Developing and Verifying the software. (3) Certification Team: Responsible for developing statistical tests to test the software as it is developing.
(2) Test Cases: Specifications of test inputs, expected system output with a statement of what is being tested. (3) Guidelines for Testing:
(a) All system functions that are accessed through menus must be tested.
(b) Combinations of functions must be tested.
(c) Where functions require user input, they must be tested with both correct and incorrect input.
ii) Exhaustive Testing: Testing every program execution sequence (this is impractical).
iii) Defect Testing Techniques:
(1) Black-Box (Functional) Testing: Tests are designed based on the program or component specification. The tester gives input to the component or the system and examines the corresponding outputs. If the outputs are not as expected, then a problem has been detected. It does not require access to the source code.
(2) Equivalence Partitioning: A way of deriving test cases, which depends on finding partitions in the input and output data sets and running the program with values from these partitions. Testing is effective when the program is shown to handle values from every partition, including the invalid ones, correctly.
(a) Input Equivalence Partitions: Sets of data where all set members are processed in an equivalent way.
(b) Output Equivalence Partitions: Program outputs which have common characteristics, so they can be considered as a separate class.
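The partitioning idea above can be sketched for a simple validity check. Everything here is an invented example: suppose a function accepts ages in the range 0..120. The inputs then split into three equivalence partitions, and one test value is drawn from each rather than testing every possible age.

```python
def valid_age(age):
    """Accept ages in the (assumed) valid range 0..120."""
    return 0 <= age <= 120

# One representative value per equivalence partition.
partitions = {
    "below range (invalid)": -5,
    "within range (valid)": 30,
    "above range (invalid)": 200,
}

for name, value in partitions.items():
    print(name, valid_age(value))
```

Because all members of a partition are processed the same way, a single representative (plus boundary values such as 0 and 120, in practice) stands in for the whole set, which is what makes the technique economical.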
(3) Structural (White-Box, Glass-Box, Clear-Box) Testing: Tests are derived from knowledge of the software's structure and implementation, to make sure each independent program path is executed at least once.
(a) Applied to small program units: subroutines or the operations associated with an object.
(b) Code Analysis: Used to find the number of test cases needed to ensure all statements in the program are executed at least once.
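Structural testing as described above can be illustrated with a tiny invented function: the tests are chosen by looking at the code, so that every branch (and hence every independent path) runs at least once.

```python
def classify(n):
    """Three branches -> at least three tests are needed for full coverage."""
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

# One test per independent path through the function, derived from its
# structure rather than from a specification.
structural_tests = [(-1, "negative"), (0, "zero"), (7, "positive")]

for value, expected in structural_tests:
    assert classify(value) == expected
print("all branches exercised")
```

Contrast with black-box testing: here the three cases were not guessed from the specification but read directly off the `if`/`elif`/`else` structure of the code.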
In Sample Paper
b) Integration Testing: A process that involves building the system and testing the
complete system for problems that are caused by component interactions. Integration tests should be built from the system specification.
(1) Main problem: Localising the errors found during the process.
ii) Incremental Approach: Used to make it easier to locate errors. A minimal system configuration is integrated and tested first; components are then added to this minimal configuration, and the system is tested again after each added increment.
In Sample Paper
iii) Top-Down and Bottom-Up Testing:
(1) Top-down testing is an integral part of a top-down development process, where development begins with the high-level components and works down the component hierarchy.
iv) Interface Testing: To detect errors that could be introduced into the system because of interface errors or
invalid input to the interfaces. Very important for Object-Oriented development.
v) Stress Testing: Planning a series of tests where the load is steadily increased until the system performance becomes unacceptable.
c) Testing Workbench Tools: i) Test Manager: Used to manage the execution of program tests. ii) Test Data Generator: This is used to generate test data for the
program to be tested.
iii) Oracle: Used to generate predictions of expected test results. iv) File Comparator: Used to compare the results of program tests
with previous test results and reports differences between them.
vi) Dynamic Analyser: Used to add code to a program that counts how many times each statement has been executed.
vii) Simulator:
(1) Target simulators simulate the machine on which the program will execute.
(2) User interface simulators are script-driven programs that simulate multiple simultaneous user interactions.
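The file comparator from the workbench list above can be sketched with Python's standard `difflib` module: compare the current test output with the saved output of a previous run and report the differences. The sample output lines are invented for illustration.

```python
import difflib

# Output lines saved from a previous test run, and the current run's output.
previous_run = ["total = 10", "status = OK"]
current_run  = ["total = 12", "status = OK"]

# unified_diff reports only the lines that changed between the two runs.
diff = list(difflib.unified_diff(previous_run, current_run,
                                 fromfile="previous", tofile="current",
                                 lineterm=""))
for line in diff:
    print(line)
```

An empty diff would mean the new run reproduced the old results exactly; here the comparator flags that `total` changed from 10 to 12, which is precisely the difference a tester would want to investigate.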
b) Process Characteristics:
i) Understandability: To what extent is the process explicitly defined, and how easy is it to understand the definition?
ii) Visibility: Do the process activities culminate in clear results, so that the progress of the process is externally visible?
iii) Supportability: To what extent can the process activities be supported by CASE tools?
iv) Acceptability: Is the defined process acceptable to, and usable by, the engineers responsible for producing the software?
v) Reliability: Is the process designed so that process errors are avoided or trapped before they result in product errors?
vi) Robustness: Can the process continue in spite of unexpected problems?
vii) Maintainability: Can the process evolve to reflect changing organisational requirements?
viii) Rapidity: How fast can the process deliver a system from a given specification?
In Pastpaper Jan 2008
iv) Process Change Training: It is impossible to get the full benefits from
process changes without training.
to the process are proposed and are introduced. Should continue for several months until software engineers are satisfied with new process.
d) Purpose of process improvement: To reduce the number of product defects.
e) Large Projects: The main determinant of product quality is the software process.
i) Biggest problems with large projects: Integration, project
In Sample Paper
f) Small Projects: The quality of the development team is more important than the development process used.
g) Process Analysis & Modelling: Requires studying existing processes and developing an abstract model of these processes that captures their main characteristics.
(3) An organisational reorganisation that causes managers to spend most of their time on organisational matters rather than on project management.
h) Process Measurement: Consists of collecting quantitative data about the software process.
i) Process Metrics: Can be used to assess whether the efficiency of a process has been improved.
(1) Three Classes of Process Metrics:
(a) Time Taken for a Particular Process to be completed. Eg: Total time dedicated to the process. (b) Resources Required for a particular process. Eg: Total effort in person-days. (c) Number of Occurrences of a particular event. Eg: No of errors found during code inspection.
ii) Goal-Question-Metric (GQM) paradigm: Used to help the developers to decide what measurements should be
taken and how they should be used
In Sample Paper
iii) SEI Software Capability Maturity Model (CMM): (1) Categorises software processes as:
(a) Initial: Essentially uncontrolled (b) Repeatable: Product management procedures defined and used (c) Defined: Process management procedures and strategies defined and used (d) Managed: Quality management strategies defined and used (e) Optimising: Process improvement strategies defined and used
(2) 2 Additional Steps Suggested for existing software processes in place (DMAIC)
(a) Improve the process by eliminating the root causes of defects. (b) Control the process to ensure that future work does not reintroduce the causes of defects.
(3) 2 Additional Steps Suggested if an organisation is developing a new software process (DMADV):
(a) Design the process to: (i) Avoid the root causes of defects (ii) Meet customer requirements. (b) Verify that the process model will avoid defects and meet customer requirements.
vi) ISO 9000 Quality Standard: Describes a quality assurance system in general that can be applied to any business
regardless of the products or services offered.
(1) ISO 9001:2000: The quality assurance standard that applies to software engineering. The standard contains
20 requirements that must be present for an effective quality assurance system.
(2) ISO 9000-3: Developed to help interpret the standard for use in the software process because the ISO
9001:2000 standard is applicable to all engineering disciplines. Process Classification: Different types of Processes:
i)
In Sample Paper
iii) Methodical Processes These are the processes where some defined development method or methods like
systematic methods for object-oriented design are used.
iv) Improving Processes These are the processes that have inherent improvement objectives. There is a specific
budget for process improvements and procedures ready for introducing such improvements.
ii) Architectural Transformation: This is a very drastic software change approach compared to maintenance
because it involves making major changes to the architecture of the software system. For example, systems change from a centralised, data-centric architecture to client-server architecture.
iii) Software Re-engineering: In this strategy, the system is modified to make it easier to understand and change.
b) Configuration Management: The management of changing software products.
c) Program Evolution Dynamics: The study of system change.
d) Lehman's Laws:
i) Continuing Change: A program that is used in a real-world environment must necessarily change, or it will become progressively less useful in that environment.
ii) Increasing Complexity: As an evolving program changes, its structure tends to become more complex.
iii) Large Program Evolution: Program evolution is a self-regulating process. System attributes such as size, time between releases and the number of reported errors are approximately invariant for each system release.
iv) Organisational Stability: Over a programs lifetime, its rate of development is approximately constant and
independent of the resources devoted to system development.
v) Conservation of Familiarity: Over the lifetime of the system, the incremental change in each release is
approximately constant.
e) Software maintenance is the normal process of changing a system after it has been delivered.
i) Types of Maintenance:
(1) Maintenance to Repair Software Faults (Bug Fixing): Coding errors are normally inexpensive to repair; design errors are more costly. Requirements errors are the most expensive to repair.
(2) Maintenance to Adapt the Software to a Different Operating Environment: This maintenance type is
needed when some aspect of the systems environment such as the hardware, the platform operating system or other support software changes.
(3) Maintenance to Add to OR Modify the Systems Functionality: This type of maintenance is essential when
the system requirements change in line with organisational or business change.
ii) Overall lifetime costs can be reduced if more effort is given during
system development to produce a maintainable system.
In Pastpaper Sep 2008
iii) One main reason why maintenance costs are high is that it is more
expensive to add functionality after a system is already in operation than to implement the same functionality during development.
iv) The Main Factors that Differentiate Development and Maintenance that Lead to Higher Maintenance Costs: (1) Team Stability: After a system has been released to users, usually the development team will disintegrate
and they will work on new projects.
(2) Contractual Responsibility: The contract to maintain a system is normally different from the system
development contract.
(3) Staff Skills: Maintenance staff are often inexperienced and unfamiliar with the application area.
(4) Program Age and Structure: As programs grow old, their structure is likely to be degraded by change, and they become harder to understand and modify.
(5) If any or all of these are increasing, this may indicate a decline in maintainability.