
Model Question Paper Software Project Management (MB3G2IT)

Section A : Basic Concepts (30 Marks)


This section consists of questions with serial numbers 1 - 30. Answer all questions. Each question carries one mark. Maximum time for answering Section A is 30 Minutes.

1. The prototyping model of software development is
(a) A reasonable approach when requirements are well defined
(b) A useful approach when a customer cannot define requirements clearly
(c) The best approach to use for projects with large development teams
(d) A risky approach that rarely produces a meaningful product
(e) Another name for component-based development approach.

2. Which of the following statements is/are false about the component-based development model?
I. The component-based development model incorporates many of the characteristics of the spiral model.
II. The component-based development model does not compose applications from prepackaged software components.
III. The component-based development model leads to software reuse, and reusability provides software engineers with a number of measurable benefits.
(a) Only (II) above
(b) Only (III) above
(c) Both (I) and (II) above
(d) Both (I) and (III) above
(e) Both (II) and (III) above.

3. Which of the following statements is/are true about the formal methods model?
I. The formal methods model encompasses a set of activities that leads to formal mathematical specification of computer software.
II. The formal methods model enables a software engineer to specify, develop and verify a computer-based system by applying a rigorous, mathematical notation.
III. The development of formal methods models is currently quite time-consuming and expensive.
(a) Only (I) above
(b) Only (II) above
(c) Only (III) above
(d) Both (I) and (II) above
(e) All (I), (II) and (III) above.

4. Constantine suggests four organizational paradigms for software engineering teams. Which of the following statements is/are false about the open paradigm?
I. Heavy communication and consensus-based decision making are the trademarks of open paradigm teams.
II. Open paradigm team structures are well suited to the solution of complex problems but may not perform as efficiently as other teams.
III. The open paradigm relies on the natural compartmentalization of a problem and organizes team members to work on pieces of the problem with little active communication among themselves.
(a) Only (I) above
(b) Only (II) above
(c) Only (III) above
(d) Both (I) and (II) above
(e) Both (II) and (III) above.

5. Project management activity encompasses
I. Measurement and metrics.
II. Estimation and scheduling.
III. Risk analysis.
IV. Tracking and control.
(a) Both (I) and (II) above
(b) (I), (II) and (III) above
(c) (I), (II) and (IV) above
(d) (I), (III) and (IV) above
(e) All (I), (II), (III) and (IV) above.

6. FP-based estimation techniques require problem decomposition based on
(a) Information domain values
(b) Project schedule
(c) Software functions
(d) Process activities
(e) Hardware functions.

7. Which of the following statements is/are false about Software Project Metrics?
I. Software project metrics are used for strategic purposes.
II. Software project metrics are used to assess product quality on an ongoing basis and, when necessary, modify the technical approach to improve quality.
III. Software project metrics are used to minimize the development schedule by making the adjustments necessary to avoid delays and mitigate potential problems and risks.
(a) Only (I) above
(b) Only (II) above
(c) Only (III) above
(d) Both (I) and (II) above
(e) Both (II) and (III) above.

8. The formula for Defect Removal Efficiency (DRE) is
(a) E/(E-D)
(b) (E * D)/(E+D)
(c) (E+D)/E
(d) E/(E+D)
(e) 2E/(E+D).

9. In basis path testing, if the flow graph contains 4 regions and 2 nodes, then the number of edges is
(a) 2
(b) 3
(c) 4
(d) 5
(e) 1.

10. Which of the following statements is/are true about Outsourcing?
I. The decision to outsource can be either strategic or tactical.
II. Outsourcing improves business performance and profitability.
III. Outsourcing can be viewed more generally as any activity that leads to the acquisition of software or software components from a source outside the software engineering organization.
(a) Only (I) above
(b) Only (II) above
(c) Only (III) above
(d) Both (I) and (II) above
(e) All (I), (II) and (III) above.

11. Which of the following risks identifies the potential design, implementation, interface, verification and maintenance problems?
(a) Project risk
(b) Technical risk
(c) Strategic risk
(d) Market risk
(e) Budget risk.

12. Risk projection is also called
(a) Risk identification
(b) Risk mitigation
(c) Risk refinement
(d) Risk estimation
(e) Risk management.

13. Which of the following is/are the primary objectives of Risk Monitoring?
I. To assess whether predicted risks do, in fact, occur.
II. To ensure that risk aversion steps defined for the risk are being properly applied.
III. To collect information that can be used for future risk analysis.
(a) Only (I) above
(b) Both (I) and (II) above
(c) Both (I) and (III) above
(d) Both (II) and (III) above
(e) All (I), (II) and (III) above.

14. Which of the following is not a guiding principle of software project scheduling?
(a) Compartmentalization
(b) Market assessment
(c) Interdependency
(d) Time allocation
(e) Effort validation.

15. A web application and its support environment have not been fully fortified against attack. Web engineers estimate that the likelihood of repelling an attack is only 30 percent. The system does not contain sensitive or controversial information, so the threat probability is 25 percent. What is the integrity of the web application?
(a) 0.625
(b) 0.725
(c) 0.825
(d) 0.775
(e) 0.875.

16. A web engineering team has built an e-commerce web application that contains 145 individual pages. Of these pages, 65 are dynamic, i.e., they are internally generated based on end-user input. What is the customization index for this application?
(a) 0.34
(b) 0.23
(c) 0.52
(d) 0.44
(e) 0.68.

17. A legacy system has 940 modules. The latest release required that 90 of these modules be changed. In addition, 40 new modules were added and 12 old modules were removed. Compute the software maturity index for the system.
(a) 0.524
(b) 0.628
(c) 0.725
(d) 0.848
(e) 0.923.

18. Quality costs may be divided into costs associated with prevention, appraisal and failure. Failure costs may be subdivided into internal failure costs and external failure costs. Which of the following is included in internal failure cost?
(a) Complaint resolution
(b) Product return and replacement
(c) Helpline support
(d) Warranty work
(e) Failure mode analysis.

19. Which of the following statements is/are false about Formal Technical Reviews (FTR) in software development?
I. FTR is a software quality control activity performed only by a project leader.
II. The FTR is actually a class of reviews that includes walkthroughs, inspections, round robin reviews and other small group technical assessments of software.
III. Each FTR is conducted as a meeting and will be successful only if it is properly planned, controlled and attended.
(a) Only (I) above
(b) Only (II) above
(c) Only (III) above
(d) Both (I) and (II) above
(e) Both (II) and (III) above.

20. To control and manage software configuration items, each should be separately named and then organized using an object-oriented approach. Two types of objects can be identified, namely basic objects and aggregate objects. Which of the following is an example of an aggregate object?
(a) DesignSpecification
(b) Source code for a component
(c) Suite of test cases
(d) Part of design model
(e) Section of requirement specification.

21. Configuration status reporting is a Software Configuration Management (SCM) task that answers a number of questions. Which of the following questions is not included in configuration status reporting?
(a) What happened?
(b) Who did it?
(c) How it happened?
(d) When did it happen?
(e) What else will be affected?

22. Which of the following is/are the objectives of the Software Configuration Management (SCM) process?
I. To identify all items that collectively define the software configuration.
II. To facilitate the construction of different versions of an application.
III. To ensure that software quality is maintained as the configuration evolves over time.
(a) Only (I) above
(b) Both (I) and (II) above
(c) Both (I) and (III) above
(d) Both (II) and (III) above
(e) All (I), (II) and (III) above.

23. By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting. Which of the following characteristics of testability refers to the above statement?
(a) Operability
(b) Observability
(c) Controllability
(d) Decomposability
(e) Stability.

24. Which of the following is not an example of a black box testing technique?
(a) Equivalence partitioning
(b) Graph based testing
(c) Boundary value analysis
(d) Basis path testing
(e) Orthogonal array testing.

25. The physical connections between elements of the object-oriented design are called
(a) Complexity
(b) Coupling
(c) Cohesion
(d) Primitiveness
(e) Volatility.

26. A strategy for software testing is developed by
I. Project managers.
II. Software engineers.
III. Testing specialists.
(a) Only (I) above
(b) Both (I) and (II) above
(c) Both (I) and (III) above
(d) Both (II) and (III) above
(e) All (I), (II) and (III) above.

27. Which of the following statements is/are true about unit testing?
I. Unit testing focuses verification effort on the smallest unit of software design: the software component or module.
II. The unit test focuses on the internal processing logic and data structures within the boundaries of the component.
III. Smoke testing is a unit testing approach commonly used when software products are being developed.
(a) Only (I) above
(b) Both (I) and (II) above
(c) Both (I) and (III) above
(d) Both (II) and (III) above
(e) All (I), (II) and (III) above.

28. Which of the following tests executes a system in a manner that demands resources in abnormal quantity, frequency, or volume?
(a) Recovery testing
(b) Security testing
(c) Stress testing
(d) Alpha testing
(e) Smoke testing.

29. PERT and CPM provide quantitative tools that allow the software planner to
I. Determine the critical path.
II. Establish most likely time estimates for individual tasks by applying statistical models.
III. Calculate boundary times that define a time window for a particular task.
(a) Only (I) above
(b) Both (I) and (II) above
(c) Both (I) and (III) above
(d) Both (II) and (III) above
(e) All (I), (II) and (III) above.

30. Which of the following tests is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects?
(a) Regression testing
(b) Integration testing
(c) Unit testing
(d) Smoke testing
(e) System testing.

END OF SECTION A

Software Project Management (MB3G2IT)

Section B : Caselets (50 Marks)


This section consists of questions with serial numbers 1 - 7. Answer all questions. Marks are indicated against each question. Detailed explanations should form part of your answer. Do not spend more than 110 - 120 minutes on Section B.

Caselet 1
Read the caselet carefully and answer the following questions:
1. If you are the project manager at TechnoPark Corp., what are the factors you will consider for accuracy of the software project estimation? (5 marks)
2. Critically analyze the objectives of software cost estimation. (6 marks)
3. Explain in detail the COCOMO II model that helps in accurate software project cost estimation. (10 marks)

When it comes to developing software, accurate projections are 80% of the challenge. In an effort to improve its internal budgeting and to further reduce surprises to clients, TechnoPark Corp. went on a quest to find the most successful method of estimating project cost and duration at the start of each project. In March 2007, a modified COCOMO II model went into operation at TechnoPark Corp. Read on to learn more of the secrets of effective project estimation as revealed by this successful U.S.-based outsourcing software development firm.

Pre-sales estimation of project costs and durations has been a software dilemma that started with the profession itself. Despite all good intentions, Murphy's Law seems to leave everyone room to be unhappy about something. Customers expect accuracy and often are disappointed even by what software companies consider minor post-sales adjustments. IT project managers dread giving pre-sales estimates because they know that just about every project has hidden work. The burning question has been, "Is it possible to make an accurate estimation before the project architecture is built?" The dream answer of "YES" is all the more desirable for outsourcing software firms, such as TechnoPark Corp., because accuracy is also related to trust and reputation, the cornerstone qualities of any outsourcing company. Since the 1970s, there have been many attempts to attain this dream of accurate pre-sales estimations of projects. None, however, have stood up to the rigors of real-world challenges and client expectations. Today, TechnoPark Corp. shares its successful experience in the fine art of pre-sales estimation.

First, let's review the basis of the estimation process. There are some common mistakes and some important issues ignored by many estimators. A common initial mistake by software development companies is that they estimate the project as if everything will go "just right." However, it will not. It is best to estimate the reality of the process, and not some Pollyannaish scenario. Risks ought to be the first things analyzed. Any estimation not based on analysis of all probable risks, in conjunction with a company's true capabilities, inevitably will result in an estimation of one's hopes and wishes and not of probability.

According to the new model by TechnoPark Corp., a requirements-based model is the most suitable for pre-sales estimation. The source lines of code (SLOC) model, popular in the 1980s-90s, proved not to be effective: coding is significantly more complex and interactive than the number of lines of a program. The SLOC model admittedly still helps with measuring the efforts of a completed project, but that doesn't help much with estimating before the coding begins. Requirements-based estimation, on the other hand, accounts in advance for a project's features, risks and complexity. Analysis of the requirements provides an estimator with a more abstract but accurate vision, as the estimator views the whole project. Requirements-based estimation, however, is not the complete solution either, according to the new model by TechnoPark Corp. Pre-sales estimation needs a general understanding of the project but also needs to account for the details, an area where requirements-based modeling comes up short.

Another golden rule of estimating reads: estimate the diapason or range, not the precise figure. It usually is impossible to count the precise size of efforts and get a correct estimation at the pre-sales phase. It is better to estimate the spectrum of possibilities and set aside more precise estimation for detailed examination of the project. In order to get the diapason, three figures for each task are needed: Worst Case (WC) is the maximum amount of person-hours needed to implement the feature and depicts the situation when everything is going wrong; Best Case (BC) is an optimistic estimation providing the minimum amount of person-hours; and Most Likely (ML) depicts the situation which is the most probable in an estimator's assessment, which may be close to either the worst or best case.

"TechnoPark Corp. involves three developers each time estimation is facilitated. Two of them provide their versions of WC, BC and ML estimations, and the third one estimates complexity and function points. When counting the diapason of person-hours, we use a number of additional coefficients such as maturity of process, required software reliability, programming language experience, etc. After all necessary data are entered, automated counting is implemented by the system based on COCOMO II," explains Victoria Malinovskaya, the estimation facilitator at TechnoPark Corp.

COCOMO II, or the Constructive Cost Model, is growing in popularity as an estimation model. Its first version, COCOMO 81, was introduced in 1981 by Barry Boehm. The model is used for estimating the number of person-hours and person-months it will take to develop a software product. The first version was based on SLOC counting. COCOMO II appeared in the 1990s. The power behind COCOMO II is that it is based on function points instead of SLOC. A function point is a rough estimate of a unit of delivered functionality of a software project. To calculate the number of function points for a software project, one counts all of the user inputs, user outputs, user inquiries, number of files and number of external interfaces, dividing them all into simple, average and complex categories. COCOMO II coefficients and formulas are clear and useful for software development companies, both big and small. The model offered by TechnoPark Corp. is based on COCOMO II with only small fluctuations making it more suitable for outsourcing software development companies. TechnoPark Corp. has adapted the COCOMO II approach to client costing with great success.

END OF CASELET 1
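The WC/BC/ML scheme described in the caselet is essentially three-point estimation. A minimal sketch of how the range can be collapsed into an expected figure, assuming the common PERT beta-distribution weighting E = (BC + 4·ML + WC)/6; the task names and person-hour figures below are illustrative, not from the caselet:

```python
def three_point_estimate(bc, ml, wc):
    """PERT-style expected effort: weight Most Likely 4x, assuming a beta distribution."""
    return (bc + 4 * ml + wc) / 6

def std_dev(bc, wc):
    """Rough uncertainty of the estimate: (WC - BC) / 6."""
    return (wc - bc) / 6

# Hypothetical per-task estimates in person-hours: (BC, ML, WC)
tasks = {"login module": (20, 30, 55), "report engine": (40, 60, 110)}

total = sum(three_point_estimate(bc, ml, wc) for bc, ml, wc in tasks.values())
print(round(total, 1))  # 97.5 expected person-hours across both tasks
```

In practice the expected figure would then be scaled by cost-driver coefficients (process maturity, required reliability, language experience, and so on), as the caselet describes.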

Caselet 2
Read the caselet carefully and answer the following questions:
4. After reading the caselet, critically analyze the elements which influence software project management. (8 marks)
5. Sometimes software projects fail due to poor vision of the project manager. What are the ideal characteristics that define an effective project manager? Explain. (7 marks)
6. Why do software teams fail in any software project? What is the role of the project manager in the above situation? (8 marks)
7. To manage a successful software project, we must understand what can go wrong so that problems can be avoided. If you are the project manager, explain the signs that a software project is in jeopardy. (6 marks)

Corporate America spends more than $275 billion each year on approximately 200,000 application software development projects. A great many of these projects will fail for lack of skilled project management. The opportunities for project failure are legion. Large-scale software development efforts today are conducted in complex, distributed IT environments. Development occurs in a fragile matrix of applications, users, customer demands, laws, internal politics, budgets, and project and organizational dependencies that change constantly. Project managers who lack enterprise-wide multiproject planning, control, and tracking tools often find it impossible to comprehend the system as a whole. Underestimating project complexity and ignoring changing requirements are basic reasons why projects fail. Under these conditions, software project management is almost an oxymoron. Moreover, software today must not just automate processes; it must create business value, by improving customer service or delivering a competitive advantage. Raising the stakes of every large-scale development project is Return On Investment (ROI): software must have a measurable impact on a company's bottom line. Finally, urgent multiproject, multisite projects like Y2K, Euro conversion, ERP, and the rush to the Internet add to the daunting development burden.

For all these reasons, the business case for adapting project management disciplines to large-scale software development is obvious. It's the implementation that's dicey. Project management is a process that spans the full life cycle of a project from inception to completion. Its cornerstone tenets are planning, execution, and control of all resources, tasks, and activities necessary to complete a project. Project management is not an isolated activity, but rather a team effort. In the end, project management is about people and process -- how work is being performed. The four Ps of project management are: people, product, process and project. Most team efforts fail because members are not committed to winning. Why are the Yankees the Yankees? Because they have an expectation of winning, not losing. And they have a repeatable process that mitigates failure.

Project management is gaining traction in IT organizations, and the results are encouraging. Failure rates and costs are down, and project success rates are up. Large companies are taking a small approach to project management. Minimization means scaled-down features/functions as well as scope minimization. IT organizations are adopting standard project methodologies or building enterprise-level formal project management disciplines. This level of proactivity is quite a change. For years project failure was simply not discussed. And it certainly was not discussed with the CEO. But 1996 was a watershed year in IT project management, according to Standish surveys. We finally came to acknowledge the cost, size, and scope of the problem. We discovered that there was no silver technology bullet. Technology was neither the problem nor the solution. The problem -- and the solution -- lay in people and processes. Valuable lessons had been learned and were being applied to current projects. We learned to develop better processes, to organize teams more effectively, and to deal with problems faster. We developed smaller, less complex projects. Troubled projects were euthanized quickly. With the advent of professional project managers, we learned to mitigate project risk through scope minimization, standard infrastructures, and improved communications.

History is replete with examples of ambitious projects that failed. The Standish Group believes that failure is critical to success. Only by examining our mistakes and applying lessons learned can we stem the tide of project failures and enhance our organization's probability of success. Using project management practices, we have begun to do just that. The body of the five years of The Standish Group CHAOS research, our infamous report including detailed information on IT project success and failure, shows decided improvement in IT project management. Project success rates are up across the board, while cost and time overruns are uniformly down. The best news is that we are learning how to succeed more often. In 1994, only 16% of application development projects met the criteria for success -- completed on time, on budget, and with all the features/functions originally specified. By 1998, 26% of projects were successful.

END OF CASELET 2
END OF SECTION B

Section C : Applied Theory (20 Marks)


This section consists of questions with serial numbers 8 - 9. Answer all questions. Marks are indicated against each question. Do not spend more than 25 - 30 minutes on Section C.

8. What is the Capability Maturity Model (CMM)? Discuss the various models used by organizations for process improvement. (10 marks)
9. Explain in detail the spiral software process model with a neat diagram. (10 marks)

END OF SECTION C END OF QUESTION PAPER

Suggested Answers Software Project Management (MB3G2IT)


Section A: Basic Concepts
1. B  The prototyping model of software development is a useful approach when a customer cannot define requirements clearly.

2. A  The component-based development model composes applications from prepackaged software components, so statement (II) is false. The model incorporates many of the characteristics of the spiral model, and it leads to software reuse; reusability provides software engineers with a number of measurable benefits.

3. E  The formal methods model encompasses a set of activities that leads to formal mathematical specification of computer software. It enables a software engineer to specify, develop and verify a computer-based system by applying a rigorous, mathematical notation. The development of formal methods models is currently quite time-consuming and expensive.

4. C  It is the synchronous paradigm that relies on the natural compartmentalization of a problem and organizes team members to work on pieces of the problem with little active communication among themselves, so statement (III) is false of the open paradigm. Heavy communication and consensus-based decision making are the trademarks of open paradigm teams, and open paradigm team structures are well suited to the solution of complex problems but may not perform as efficiently as other teams.

5. E  Project management activity encompasses measurement and metrics, estimation and scheduling, risk analysis, and tracking and control.

6. A  FP-based estimation techniques require problem decomposition based on information domain values.

7. A  Software process metrics are used for strategic purposes, whereas software project metrics are tactical, so statement (I) is false. Software project metrics are used to assess product quality on an ongoing basis and, when necessary, modify the technical approach to improve quality, and to minimize the development schedule by making the adjustments necessary to avoid delays and mitigate potential problems and risks.

8. D  Defect Removal Efficiency is defined as DRE = E/(E+D), where E is the number of errors found before delivery of the software to the end-user and D is the number of defects found after delivery.

9. C  The number of regions equals the cyclomatic complexity, so V(G) = 4. Since V(G) = E - N + 2, we have 4 = E - 2 + 2, so E = 4.

10. E  The decision to outsource can be either strategic or tactical. Outsourcing improves business performance and profitability, and it can be viewed more generally as any activity that leads to the acquisition of software or software components from a source outside the software engineering organization.

11. B  Technical risk identifies the potential design, implementation, interface, verification and maintenance problems.

12. D  Risk projection is also called risk estimation.

13. E  Risk monitoring is a project tracking activity with three primary objectives: to assess whether predicted risks do, in fact, occur; to ensure that risk aversion steps defined for the risk are being properly applied; and to collect information that can be used for future risk analysis.

14. B  The basic principles that guide software project scheduling include compartmentalization, interdependency, time allocation, effort validation, defined responsibilities, defined outcomes, and defined milestones. Market assessment is not a guiding principle of software project scheduling.

15. C  Integrity = 1 - [threat x (1 - security)] = 1 - (0.25 x (1 - 0.30)) = 1 - (0.25 x 0.70) = 1 - 0.175 = 0.825.

16. D  Customization index C = Ndp / (Ndp + Nsp), where Ndp is the number of dynamic web pages and Nsp is the number of static web pages. C = 65/145 = 0.448, i.e., approximately 0.44.

17. D  Software maturity index SMI = [MT - (Fa + Fc + Fd)] / MT, where MT is the number of modules in the current release, Fc is the number of modules in the current release that have been changed, Fa is the number of modules in the current release that have been added, and Fd is the number of modules from the preceding release that were deleted in the current release. SMI = [940 - (90 + 40 + 12)]/940 = (940 - 142)/940 = 0.848.

18. E  Failure mode analysis is included in internal failure cost; the rest of the options are included in external failure costs.

19. A  FTR is a software quality control activity performed by software engineers (and others), not only by a project leader, so statement (I) is false. The FTR is actually a class of reviews that includes walkthroughs, inspections, round robin reviews and other small group technical assessments of software, and each FTR is conducted as a meeting that will be successful only if it is properly planned, controlled and attended.

20. A  DesignSpecification is an example of an aggregate object; the rest of the options are examples of basic objects.

21. C  Configuration status reporting does not ask the question "How it happened?". The rest of the options are questions it answers.

22. E  The software configuration management process has the following objectives: to identify all items that collectively define the software configuration; to facilitate the construction of different versions of an application; and to ensure that software quality is maintained as the configuration evolves over time.

23. D  By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting. This statement refers to decomposability.

24. D  Basis path testing is an example of a white box testing technique; the remaining options are examples of black box testing techniques.

25. B  The physical connections between elements of the object-oriented design are called coupling (i.e., the number of collaborations between classes or the number of messages passed between objects represents coupling within an object-oriented system).

26. E  A strategy for software testing is developed by project managers, software engineers and testing specialists.

27. B  Unit testing focuses verification effort on the smallest unit of software design, the software component or module, and the unit test focuses on the internal processing logic and data structures within the boundaries of the component. Smoke testing, however, is an integration testing approach, not a unit testing approach, so statement (III) is false.

28. C  Stress tests are designed to execute a system in a manner that demands resources in abnormal quantity, frequency, or volume.

29. E  PERT and CPM provide quantitative tools that allow the software planner to determine the critical path, establish most likely time estimates for individual tasks by applying statistical models, and calculate boundary times that define a time window for a particular task.

30. A  Regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.
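Several of the worked answers in this key (questions 8, 9, 15, 16 and 17) are direct formula evaluations. A small sketch that re-derives them with the figures from the paper; the function and variable names are mine, not from any standard library:

```python
def defect_removal_efficiency(e, d):
    """DRE = E / (E + D): errors found before delivery vs. defects found after."""
    return e / (e + d)

def edges_from_regions(regions, nodes):
    """Basis path testing: regions = V(G) = E - N + 2, so E = V(G) + N - 2."""
    return regions + nodes - 2

def integrity(threat, security):
    """Integrity = 1 - [threat x (1 - security)]."""
    return 1 - threat * (1 - security)

def customization_index(dynamic_pages, static_pages):
    """C = Ndp / (Ndp + Nsp)."""
    return dynamic_pages / (dynamic_pages + static_pages)

def software_maturity_index(mt, added, changed, deleted):
    """SMI = [MT - (Fa + Fc + Fd)] / MT."""
    return (mt - (added + changed + deleted)) / mt

print(defect_removal_efficiency(90, 10))                   # e.g. 90 pre-delivery errors, 10 post-delivery defects: 0.9
print(edges_from_regions(4, 2))                            # Q9: 4 edges
print(round(integrity(0.25, 0.30), 3))                     # Q15: 0.825
print(round(customization_index(65, 145 - 65), 3))         # Q16: 0.448
print(round(software_maturity_index(940, 40, 90, 12), 4))  # Q17: 0.8489, quoted as 0.848 in the key
```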

Software Project Management (MB3G2IT) Section B: Caselets


1. If I am the project manager at TechnoPark Corp., I will consider the following factors: (1) the degree to which the planner has properly estimated the size of the product to be built; (2) the ability to translate the size estimate into human effort, calendar time, and dollars (a function of the availability of reliable software metrics from past projects); (3) the degree to which the project plan reflects the abilities of the software team; and (4) the stability of product requirements and the environment that supports the software engineering effort.

2. The following are the objectives of software cost estimation:
To introduce the fundamentals of software costing and pricing
To describe three metrics for software productivity assessment
To explain why different techniques should be used for software estimation

To describe the principles of the COCOMO II algorithmic cost estimation model

3. In his classic book on software engineering economics, Barry Boehm [BOE81] introduced a hierarchy of software estimation models bearing the name COCOMO, for COnstructive COst MOdel. The original COCOMO model became one of the most widely used and discussed software cost estimation models in the industry. It has evolved into a more comprehensive estimation model, called COCOMO II [BOE96, BOE00]. Like its predecessor, COCOMO II is actually a hierarchy of estimation models that address the following areas:
Application composition model. Used during the early stages of software engineering, when prototyping of user interfaces, consideration of software and system interaction, assessment of performance, and evaluation of technology maturity are paramount.
Early design stage model. Used once requirements have been stabilized and basic software architecture has been established.

Post-architecture stage model. Used during the construction of the software.

The four main elements of the COCOMO II strategy are:
Preserve the openness of the original COCOMO
Key the structure of COCOMO II to the future software marketplace sectors
Key the inputs and outputs of the COCOMO II submodels to the level of information available

Enable the COCOMO II submodels to be tailored to a project's particular process strategy.
COCOMO II follows the openness principles used in the original COCOMO. Thus, all of its relationships and algorithms will be publicly available. Also, all of its interfaces are designed to be public, well-defined, and parameterized, so that complementary preprocessors (analogy, case-based, or other size estimation models), postprocessors (project planning and control tools, project dynamics models, risk analyzers), and higher-level packages (project management packages, project negotiation aids) can be combined straightforwardly with COCOMO II.
Like all estimation models for software, the COCOMO II models require sizing information. Three different sizing options are available as part of the model hierarchy: object points, function points, and lines of source code. The COCOMO II application composition model uses object points, an indirect software measure that is computed using counts of the number of (1) screens (at the user interface), (2) reports, and (3) components likely to be required to build the application. Each object instance (e.g., a screen or report) is classified into one of three complexity levels (simple, medium, or difficult) using criteria suggested by Boehm [BOE96]. In essence, complexity is a function of the number and source of the client and server data tables that are required to generate the screen or report and the number of views or sections presented as part of the screen or report. Once complexity is determined, the numbers of screens, reports, and components are weighted according to the table illustrated in figure (1). The object point count is then determined by multiplying the original number of object instances by the weighting factor in figure (1) and summing to obtain a total object point count.
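A minimal sketch of this object point computation follows. Since figure (1) is not reproduced here, the complexity weights below are the values commonly published for the COCOMO II application composition model (screens 1/2/3, reports 2/5/8, 3GL components 10) and should be treated as illustrative:

```python
# Illustrative COCOMO II object point count. Weights assumed from the
# commonly published table; figure (1) is not reproduced in the text.
WEIGHTS = {
    "screen": {"simple": 1, "medium": 2, "difficult": 3},
    "report": {"simple": 2, "medium": 5, "difficult": 8},
    # 3GL components carry a single weight of 10 regardless of complexity
    "3gl_component": {"simple": 10, "medium": 10, "difficult": 10},
}

def object_point_count(instances):
    """instances: list of (object_type, complexity) tuples.
    Multiplies each object instance by its weight and sums the result."""
    return sum(WEIGHTS[obj_type][complexity] for obj_type, complexity in instances)

# Example: 4 simple screens, 2 medium screens, 3 medium reports, 1 3GL component
total = object_point_count(
    [("screen", "simple")] * 4
    + [("screen", "medium")] * 2
    + [("report", "medium")] * 3
    + [("3gl_component", "medium")]
)
print(total)  # 4*1 + 2*2 + 3*5 + 10 = 33
```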
When component-based development or general software reuse is to be applied, the percent of reuse (%reuse) is estimated and the object point count is adjusted: NOP = (object points) x [(100 - %reuse)/100]

where NOP is defined as new object points. To derive an estimate of effort based on the computed NOP value, a "productivity rate" must be derived. Figure (2) presents the productivity rate, PROD = NOP/person-month, for different levels of developer experience and development environment maturity. Once the productivity rate has been determined, an estimate of project effort can be derived as:
estimated effort = NOP/PROD
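The two formulas above can be worked through numerically. The PROD value of 13 used below is an assumed, illustrative productivity rate (figure (2), which tabulates PROD by experience and maturity level, is not reproduced here):

```python
# Numeric sketch of the reuse adjustment and effort estimate above.
# The productivity rate (PROD = 13) is assumed for illustration only.

def new_object_points(object_points, percent_reuse):
    # NOP = (object points) x [(100 - %reuse) / 100]
    return object_points * (100 - percent_reuse) / 100

def estimated_effort(nop, prod):
    # estimated effort (person-months) = NOP / PROD
    return nop / prod

nop = new_object_points(object_points=60, percent_reuse=25)
effort = estimated_effort(nop, prod=13)
print(nop, round(effort, 2))  # 45.0 object points, ~3.46 person-months
```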

Figure (1). Complexity weighting for object types

Figure (2). Productivity rate for object points
In more advanced COCOMO II models, a variety of scale factors, cost drivers, and adjustment procedures are required.
4. Effective software project management focuses on the four Ps: people, product, process and project. The people factor is so important that the Software Engineering Institute has developed a people management capability maturity model to enhance the readiness of software organizations to undertake increasingly complex applications by helping to attract, grow, motivate, deploy and retain the talent needed to improve their software development capability. Organizations that achieve high levels of maturity in the people management area have a higher likelihood of implementing effective software engineering practices.
Before a project can be planned, product objectives and scope should be established, alternative solutions should be considered, and technical and management constraints should be identified. Without this product information, it is impossible to define reasonable estimates of cost, an effective assessment of risk, a realistic breakdown of project tasks, or a manageable project schedule that provides a meaningful indication of progress. Once the product objectives and scope are understood, alternatives enable managers and practitioners to select a best approach, given the constraints imposed by delivery deadlines, budgetary restrictions, personnel availability, technical interfaces, and myriad other factors.
A software process provides the framework from which a comprehensive plan for software development can be established. A small number of framework activities are applicable to all software projects, regardless of their size or complexity. A number of different task sets (tasks, milestones, work products, and quality assurance points) enable the framework activities to be adapted to the characteristics of the software project and the requirements of the project team.
Software projects should be planned and controlled so that their complexity can, to a great extent, be managed. Although the success rate for software projects has improved somewhat, the project failure rate remains higher than it should be. To avoid project failure, a software project manager and the software engineers who build the product must heed a set of common warning signs, understand the critical success factors that lead to good project management, and develop a commonsense approach for planning, monitoring and controlling the project.
5. The following are the ideal characteristics of a project manager:

Problem solving: An effective software project manager can diagnose the technical and organizational issues that are most relevant, systematically structure a solution or properly motivate other practitioners to develop the solution, apply lessons learned from past projects to new situations, and remain flexible enough to change direction if initial attempts at problem solution are fruitless.
Managerial identity: A good project manager must take charge of the project; he or she must have the confidence to assume control when necessary and the assurance to allow good technical people to follow their instincts.
Achievement: To optimize the productivity of a project team, a manager must reward initiative and accomplishment and demonstrate through his or her own actions that controlled risk taking will not be punished.
Influence and team building: An effective project manager must be able to read people; he or she must be able to understand verbal and nonverbal signals and react to the needs of the people sending these signals. The manager must remain under control in high-stress situations.
6. Software teams fail due to the following factors:
1. Frenzied work atmosphere
2. High frustration that causes friction among team members
3. Fragmented or poorly coordinated software process
4. Unclear definition of roles on the software team
5. Continuous and repeated exposure to failure.
To avoid a frenzied work environment, the project manager should be certain that the team has access to all information required to do the job, and that the major goals and objectives, once defined, are not modified unless absolutely necessary. A software team can avoid frustration if it is given as much responsibility for decision making as possible. An inappropriate process can be avoided by understanding the product to be built and the people doing the work, and by allowing the team to select its own process model.
The team itself should establish mechanisms for accountability and define a series of corrective approaches when a member of the team fails to perform. Finally, the key to avoiding an atmosphere of failure is to establish team-based techniques for feedback and problem solving.
7. To manage a successful software project, a project manager must understand what can go wrong so that problems can be avoided. The following are the ten signs that indicate an information systems project is in jeopardy:
1. Software people don't understand their customers' needs.
2. The product scope is poorly defined.
3. Changes are managed poorly.
4. The chosen technology changes.
5. Business needs change or are ill-defined.
6. Deadlines are unrealistic.
7. Users are resistant.
8. Sponsorship is lost or was never properly obtained.
9. The project team lacks people with appropriate skills.
10. Managers avoid best practices and lessons learned.

Section C: Applied Theory


8. The Software Engineering Institute (SEI) has developed a comprehensive process meta-model that is predicated on a set of system and software engineering capabilities that should be present as organizations reach different levels of process capability and maturity. The CMMI represents this process meta-model in two different ways: (1) as a continuous model and (2) as a staged model. The continuous CMMI meta-model describes a process in two dimensions, as illustrated in the figure below. Each process area (e.g., project planning or requirements management) is formally assessed against specific goals and practices and is rated according to the following capability levels:

CMMI process area capability profile
Level 0: Incomplete. The process area (e.g., requirements management) is either not performed or does not achieve all goals and objectives defined by the CMMI for level 1 capability.
Level 1: Performed. All of the specific goals of the process area (as defined by the CMMI) have been satisfied. Work tasks required to produce defined work products are being conducted.
Level 2: Managed. All level 1 criteria have been satisfied. In addition, all work associated with the process area conforms to an organizationally defined policy; all people doing the work have access to adequate resources to get the job done; stakeholders are actively involved in the process area as required; and all work tasks and work products are "monitored, controlled, and reviewed; and are evaluated for adherence to the process description."
Level 3: Defined. All level 2 criteria have been achieved. In addition, the process is "tailored from the organization's set of standard processes according to the organization's tailoring guidelines, and contributes work products, measures, and other process-improvement information to the organizational process assets."
Level 4: Quantitatively managed. All level 3 criteria have been achieved. In addition, the process area is controlled and improved using measurement and quantitative assessment. "Quantitative objectives for quality and process performance are established and used as criteria in managing the process."
Level 5: Optimized. All capability level 4 criteria have been achieved. In addition, the process area is adapted and optimized using quantitative (statistical) means to meet changing customer needs and to continually improve the efficacy of the process area under consideration.
The staged CMMI model defines the same process areas, goals, and practices as the continuous model. The primary difference is that the staged model defines five maturity levels, rather than five capability levels.
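The cumulative nature of these capability levels (a rating at level N presupposes that the criteria of every lower level are also met) can be sketched as follows; the rating logic here is an illustration of that principle only, not an official CMMI appraisal method:

```python
# Illustrative sketch of cumulative CMMI capability-level rating.
# Level names follow the text; the rating logic is not an official
# appraisal procedure.
CAPABILITY_LEVELS = [
    (0, "Incomplete"),
    (1, "Performed"),
    (2, "Managed"),
    (3, "Defined"),
    (4, "Quantitatively managed"),
    (5, "Optimized"),
]

def rate_process_area(criteria_met):
    """criteria_met: dict mapping level number -> bool (criteria satisfied).
    Returns the highest level for which that level and all lower levels
    have their criteria satisfied."""
    rating = 0
    for level, _name in CAPABILITY_LEVELS[1:]:
        if criteria_met.get(level, False):
            rating = level
        else:
            break  # cumulative: a gap at any level caps the rating
    return rating

# A process area meeting level 1 and 2 criteria but not level 3:
print(rate_process_area({1: True, 2: True, 3: False}))  # 2
```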
To achieve a maturity level, the specific goals and practices associated with a set of process areas must be achieved. The relationship between maturity levels and process area is shown below:

Process areas required to achieve a maturity level
9. The spiral model, originally proposed by Boehm [BOE88], is an evolutionary software process model that couples the iterative nature of prototyping with the controlled and systematic aspects of the waterfall model. It provides the potential for rapid development of increasingly more complete versions of the software. Boehm [BOE01] describes the model in the following manner: "The spiral development model is a risk-driven process model generator that is used to guide multi-stakeholder concurrent engineering of software intensive systems. It has two main distinguishing features. One is a cyclic approach for incrementally growing a system's degree of definition and implementation while decreasing its degree of risk. The other is a set of anchor point milestones for ensuring stakeholder commitment to feasible and mutually satisfactory system solutions."
Using the spiral model, software is developed in a series of evolutionary releases. During early iterations, the release might be a paper model or prototype. During later iterations, increasingly more complete versions of the engineered system are produced. A spiral model is divided into a set of framework activities defined by the software engineering team. Each of the framework activities represents one segment of the spiral path illustrated in the figure. As this evolutionary process begins, the software team performs activities that are implied by a circuit around the spiral in a clockwise direction, beginning at the center. Risk is considered as each revolution is made. Anchor point milestones (a combination of work products and conditions that are attained along the path of the spiral) are noted for each evolutionary pass. The first circuit around the spiral might result in the development of a product specification; subsequent passes around the spiral might be used to develop a prototype and then progressively more sophisticated versions of the software.


Each pass through the planning region results in adjustments to the project plan. Cost and schedule are adjusted based on feedback derived from the customer after delivery. In addition, the project manager adjusts the planned number of iterations required to complete the software. Unlike other process models that end when software is delivered, the spiral model can be adapted to apply throughout the life of the computer software. Therefore, the first circuit around the spiral might represent a "concept development project" which starts at the core of the spiral and continues for multiple iterations until concept development is complete. If the concept is to be developed into an actual product, the process proceeds outward on the spiral and a "new product development project" commences. The new product will evolve through a number of iterations around the spiral. Later, a circuit around the spiral might be used to represent a "product enhancement project." In essence, the spiral, when characterized in this way, remains operative until the software is retired. There are times when the process is dormant, but whenever a change is initiated, the process starts at the appropriate entry point (e.g., product enhancement). The spiral model is a realistic approach to the development of large-scale systems and software. Because software evolves as the process progresses, the developer and customer better understand and react to risks at each evolutionary level. The spiral model uses prototyping as a risk reduction mechanism but, more importantly, enables the developer to apply the prototyping approach at any stage in the evolution of the product. It maintains the systematic stepwise approach suggested by the classic life cycle but incorporates it into an iterative framework that more realistically reflects the real world. 
The spiral model demands a direct consideration of technical risks at all stages of the project and, if properly applied, should reduce risks before they become problematic. But like other paradigms, the spiral model is not a panacea. It may be difficult to convince customers (particularly in contract situations) that the evolutionary approach is controllable. It demands considerable risk assessment expertise and relies on this expertise for success. If a major risk is not uncovered and managed, problems will undoubtedly occur.
