
Emerging Trends In Software Size And Effort Estimation Techniques: A Review

Jaspreet Bhatia, Eshna Jain, Sakshi Garg, Ritu Dagar
Computer Science and Engineering Department, Indira Gandhi Institute of Technology (IGIT), GGSIPU, Delhi, India

1. Software project estimates
Efficient development of software requires accurate estimates. Producing accurate and early estimates of software size, cost, effort, and schedule is probably the biggest challenge facing software developers today. It speaks poorly of the software community that the issue of accurate estimation early in the life cycle has not been adequately addressed and standardized.

2. Software sizing
Size is an inherent characteristic of a piece of software, just as weight is an inherent characteristic of a tangible material. Software sizing is an important activity in software engineering: an accurate estimate of software size is an essential element in the calculation of estimated project costs and schedules. The fact that these estimates are required very early in the project (often while a contract bid is being prepared) makes size estimation a formidable task. Initial size estimates are typically based on the known system requirements; every known detail of the proposed system must be gathered and used to develop and validate the size estimates. We need to measure the size of software in order to measure productivity, and the estimated size of an application or component feeds other software project management activities such as effort estimation and tracking.

2.1. Software sizing methods
SLOC: The most common software sizing methodology has been counting the lines of code in the application source. Source Lines of Code (SLOC) is typically used to predict the amount of effort required to develop a program, as well as to estimate programming productivity once the software is produced.
IFPUG method (function point analysis): Function points allow the measurement of software size in standard units, independent of the underlying language in which the software is developed. Instead of counting the lines of code that make up a system, one counts the number of externals (inputs, outputs, inquiries, and interfaces) that make up the system. The IFPUG FPA functional size measurement (FSM) method has proven successful and accurate for more than thirty years, although its accuracy and effectiveness have lately become controversial. Criticism of FP analysis centres on its lack of sensitivity to algorithmic complexity and its relative difficulty.
Mark II FPA: The MK II method belongs to the function point family of sizing methods. It analyses and measures information processing applications based on the end user's functional view of the system.
COSMIC method: The COSMIC method defines the principles, rules and a process for measuring a standard functional size of a piece of software. "Functional size" is a measure of the amount of functionality provided by the software, completely independent of any technical or quality considerations.
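To make the function point idea concrete, the minimal sketch below tallies unadjusted function points from counts of the five IFPUG element types. The element names follow IFPUG terminology, but the sample counts and the use of average-complexity weights are illustrative assumptions, not values taken from this paper.

```python
# Unadjusted function point (UFP) tally - a minimal sketch.
# The average-complexity weights below follow common IFPUG practice,
# but treat them (and the sample counts) as illustrative assumptions.

AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts: dict) -> int:
    """Sum each element count multiplied by its average-complexity weight."""
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

if __name__ == "__main__":
    sample = {  # hypothetical counts taken from a requirements review
        "external_inputs": 12,
        "external_outputs": 7,
        "external_inquiries": 5,
        "internal_logical_files": 4,
        "external_interface_files": 2,
    }
    print("UFP =", unadjusted_function_points(sample))  # 48 + 35 + 20 + 40 + 14 = 157
```

In the full IFPUG method the unadjusted total would then be scaled by a value adjustment factor; the sketch stops at the raw count.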

2.2. New trends of software sizing
New trends of software sizing have recently emerged. Some of them are discussed below.

2.2.1. Object-based Output Measurement Metrics in a CASE Environment
This trend explores the use of object points and object counts as size measures for software built with computer-aided software engineering (CASE) tools, rather than raw function counts and function points.
Object counts represent a simple count of all objects in the application's object hierarchy stored in the repository. The objects in an object-based CASE development environment offer a conceptually simple measure of functionality, and CASE development methods tend to reduce the relative complexity of creating software functionality. Objects are not only counted but also weighted, so as to distinguish different levels of functionality that require different levels of labour.
Object points are similar to function points, but they use weighted object counts instead of function counts. The weights applied to object counts were determined through extensive project manager interviews and group estimation sessions (a small weighted-count sketch appears below, after 2.2.3).

2.2.2. Measuring Maintenance Activities Within Development Projects
Although function points (FPs) are a good measure of the functionality that is added, changed, or removed through a development project, they do not measure other functions that may be impacted by a specific change but are not actually changed themselves. In addition, there is often project work, separate from the FP-measurable functionality, that cannot be counted under the current International Function Point Users Group (IFPUG) rules. Impact points (IPs) are a measure that accounts for these issues: they cover functions that are impacted but not changed by a project. They follow the same concept as FPs, but focus on non-FP-countable projects and on functions within projects. It is imperative that IPs be used only for sizing functionality not accounted for under traditional FP analysis. The intent is not to diminish the use of FP measures with overlapping measures, but rather to fill a void that exists in FP-based software measurement. Since the IP measure is intended to complement FPs, it is important to account for each separately: data related to IPs should be kept in a repository separate from FPs, and IP productivity rates should be developed and reported independently of FP productivity rates. Once a non-countable function is identified, the IFPUG concepts are used to define the function and measure its complexity. Projects that can be counted using FP analysis are not candidates for IP counts. IP-countable items include:
Table updates - for example rate changes, adding products and/or services, and parameter/configuration changes.
Code/text changes - for example static page updates, Web content updates, cosmetic changes, format changes, sort changes, and adding or changing help or error messaging.
Data management - for example data migration and database restructuring.
Technical - for example multiple browsers and new sources of data (e.g., networks).
In all of these, the functions using the new text, updates, etc., are identified as impacted functions and counted to derive the project's IPs.

2.2.3. Measuring Object-Oriented Software with Predictive Object Points
Recognizing that traditional software metrics are inadequate for object-oriented software when used for productivity tracking and effort prediction, PRICE Systems has developed a new metric, Predictive Object Points. Predictive Object Points were designed specifically for object-oriented software and result from measuring the object-oriented properties of the system.
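The weighted-count idea behind object counts and object points (Section 2.2.1) can be sketched in a few lines. The object categories and weights below are hypothetical stand-ins for the repository-derived, manager-calibrated weights described above, not values from the original CASE study.

```python
# Object points as weighted object counts - an illustrative sketch.
# Categories and weights are hypothetical; in practice they come from the
# CASE repository and from project-manager calibration sessions.

HYPOTHETICAL_WEIGHTS = {
    "screen": 2,    # simple user-facing object
    "report": 5,    # more labour-intensive output object
    "module": 10,   # reusable component
}

def object_count(objects: list[str]) -> int:
    """Raw object count: every object in the application hierarchy counts as 1."""
    return len(objects)

def object_points(objects: list[str]) -> int:
    """Weighted count: each object contributes its category weight."""
    return sum(HYPOTHETICAL_WEIGHTS[kind] for kind in objects)

if __name__ == "__main__":
    app = ["screen"] * 8 + ["report"] * 3 + ["module"] * 2
    print("object count  =", object_count(app))   # 13
    print("object points =", object_points(app))  # 8*2 + 3*5 + 2*10 = 51
```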

2.2.4. Project Estimation with Use Case Points

Software developers frequently rely on use cases to describe the business processes of object-oriented projects. Since use cases consist of the strategic goals and scenarios that provide value to a business domain, they can also provide insight into a project's complexity and required resources. The Use Case Points (UCP) method uses a project's use cases to produce a reasonable estimate of its complexity and of the man-hours the project requires. The method analyzes the use case actors, scenarios, and various technical and environmental factors and abstracts them into an equation. The following equation can be used to estimate the number of man-hours needed to complete a project:
UCP = UUCP * TCF * ECF * PF
where UUCP is the Unadjusted Use Case Points, TCF the Technical Complexity Factor, ECF the Environment Complexity Factor, and PF the Productivity Factor.
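A minimal sketch of this calculation follows. The actor and use case weights and the sample TCF, ECF and PF values are assumptions chosen only to show the mechanics of the equation above; they are not prescribed by the paper.

```python
# Use Case Points - a minimal sketch of UCP = UUCP * TCF * ECF * PF.
# Actor/use-case weights and factor values are illustrative assumptions.

ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def unadjusted_use_case_points(actors: dict, use_cases: dict) -> int:
    """UUCP = weighted actor count + weighted use case count."""
    uaw = sum(ACTOR_WEIGHTS[k] * n for k, n in actors.items())
    uucw = sum(USE_CASE_WEIGHTS[k] * n for k, n in use_cases.items())
    return uaw + uucw

def use_case_points(uucp: float, tcf: float, ecf: float, pf: float) -> float:
    """Apply the technical, environmental and productivity factors."""
    return uucp * tcf * ecf * pf

if __name__ == "__main__":
    uucp = unadjusted_use_case_points(
        actors={"simple": 2, "average": 2, "complex": 1},     # UAW = 9
        use_cases={"simple": 4, "average": 6, "complex": 2},  # UUCW = 110
    )
    # Hypothetical factor values; PF here converts points into man-hours.
    print("estimated man-hours =", use_case_points(uucp, tcf=1.05, ecf=0.95, pf=20))
```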

2.2.5. Sizing Software with Testable Requirements
The concept of a testable requirement has been used for years as a test of the quality and detail of system specifications. The criterion that a requirement be precisely defined and unambiguous is met only if the requirement is testable, and a requirement is testable if someone can write one or more test cases that would validate whether the requirement has or has not been implemented correctly. The number of testable requirements can therefore be used as a measure of the size of the system, and testable requirements can be used to measure and analyze a system in ways that are not possible with other measures. Because testable requirements can capture external user requirements as well as internal technical requirements, it is possible not only to size the user requirements but also to quantify their impact on the technical design. Performance requirements, for example, may contribute relatively few testable requirements when viewed from an end-user perspective; viewed from a technical perspective, however, meeting them may require a complex, real-time technical design, which in turn requires many testable requirements. Testable requirements thus offer a new paradigm for software measurement.

2.2.6. Web Objects Counting Conventions
Estimating the size of web applications poses new problems for cost analysts. Because such applications employ hypertext languages (HTML, XML, etc.), multimedia files (audio, video, etc.), scripts (for animation, bindings, etc.) and web building blocks (active components such as ActiveX controls and applets, building blocks such as buttons, objects such as shopping carts, and static components such as DCOM and OLE), it is difficult to use traditional size metrics such as source lines of code and function points. Improved size estimating techniques are therefore needed to address the shortfall; otherwise, the size estimates that drive our cost models will be flawed.
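One way to make web-application size tangible is simply to tally the kinds of operands listed above. The sketch below counts artifacts per category and forms a single weighted size figure; both the categories and the weights are assumptions in the spirit of web-objects style metrics, not a method defined in this paper, and would need local calibration before real use.

```python
# Counting web-application operands by category - an illustrative sketch.
# Categories follow the kinds of artifacts mentioned above; the weights are
# hypothetical and would need local calibration before any real use.

from collections import Counter

HYPOTHETICAL_WEIGHTS = {
    "html_or_xml_file": 1,
    "multimedia_file": 2,
    "script": 3,
    "web_building_block": 4,  # applets, ActiveX controls, shopping carts, ...
}

def web_object_size(artifacts: list[str]) -> tuple[Counter, int]:
    """Return per-category counts and a single weighted size figure."""
    counts = Counter(artifacts)
    weighted = sum(HYPOTHETICAL_WEIGHTS[kind] * n for kind, n in counts.items())
    return counts, weighted

if __name__ == "__main__":
    inventory = (["html_or_xml_file"] * 40 + ["multimedia_file"] * 10
                 + ["script"] * 15 + ["web_building_block"] * 5)
    counts, size = web_object_size(inventory)
    print(dict(counts), "weighted size =", size)  # 40 + 20 + 45 + 20 = 125
```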

2.3. Software sizing tools
1. CCCC (C and C++ Code Counter): CCCC is a tool that analyzes C++ and Java files and generates a report on various metrics of the code. Supported metrics include lines of code, McCabe's complexity, and metrics proposed by Chidamber-Kemerer and Henry-Kafura.
2. CodeCount tools: The CodeCount toolset is copyright of the USC Center for Software Engineering but is made available under a Limited Public License that permits distribution of modifications, provided a copy is returned so the toolset can be further enhanced for the benefit of all. Automating the collection of software sizing data reduces the time and effort required to gather data and improves the accuracy and consistency of the information. A downloadable presentation provides more details on the approaches the programs use to count lines of code.

3. Constructive COTS Model (COCOTS): COCOTS is an amalgam of four related sub-models, each addressing one of the four primary sources of COTS software integration costs: (a) candidate COTS component assessment, (b) COTS component tailoring, (c) the development and testing of any integration or "glue" code needed to plug a COTS component into a larger system, and (d) increased system-level programming due to volatility in the incorporated COTS components.
4. PMPal: PMPal is a fully collaborative, full-featured, integrated tool for software project management and software metrics programs.
5. Sizing by Comparison: a tool for estimating software size, the single most significant driver of development cost, effort, and schedule. Sizing by Comparison helps the user define software scope through a series of project analogies and/or comparisons to a repository of past projects, so that a reliable estimate of a project's scope can be developed even when information is scarce. Size can be determined using analogies, pair-wise comparison, or an array of metrics such as function points, source lines of code (SLOC), function-based sizing, and use cases.
6. Software Sizing Model (SSM): The Software Sizing Model, developed in 1980, is one of the most mature models in use today. A key advantage of SSM is that it can be applied very early in the development life cycle, even during the requirements analysis phase, which helps produce accurate cost and schedule estimates for contract and product proposals.
7. SDMetrics: SDMetrics analyzes the structural properties of UML designs. Its object-oriented measures of design size, coupling, and complexity can be used to establish quality benchmarks, identify potential design problems early on, predict relevant system qualities such as fault-proneness or maintainability in order to better focus review and testing efforts, increase system quality and quality-assurance effectiveness, find more faults earlier and save development cost, and refine LOC or effort estimates for implementation and testing.
8. True S Software Sizing: PRICE Systems' True S model contains a capability for software sizing. Software sizing is typically an inexact science left to opinion and best guesses even when all lines of code are known, and it becomes still more uncertain with the addition of COTS integration. True S instead provides a knowledge base and a variety of reliable, easy-to-use sizing tools, including wizards, function point sizing, and Predictive Object Point sizing developed by PRICE for object-oriented software. True S also includes complete sizing descriptors to estimate tasks and costs for code that is new, adapted, reused, or deleted.

3. Software development effort estimation
Software development effort estimation is the process of predicting the most realistic amount of effort required to develop or maintain software based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans, budgets, investment analyses, pricing processes and bidding rounds. It is essential to differentiate between software sizing and software effort estimation: measuring the size of a piece of software is different from measuring the effort needed to build it. Inaccurate software estimates cause trouble in business processes related to software development, such as project feasibility analyses, bidding, budgeting and planning. It is unrealistic to expect very accurate estimates of software development effort because of the inherent uncertainty in software development projects and the complex and dynamic interaction of the factors that influence effort. Still, estimates can likely be improved, because software development effort estimates are systematically over-optimistic and very inconsistent, and even small improvements are valuable given the large scale of software development.

3.1. Basic models of effort estimation
There are two basic models for estimating software development effort (or cost): holistic and activity-based. The single biggest cost driver in either model is the estimated project size.
3.1.1. Holistic models are useful for organizations that are new to software development or that do not have baseline data from previous projects with which to determine labour rates for the various development activities. Holistic models relate size, effort, and (sometimes) schedule by applying equations to determine the overall cost, and then applying a percentage of the overall cost to each development activity. They do not consider the actual labour rates and costs of each activity. Examples include SDM, SLIM and COCOMO.
3.1.2. Estimates produced with activity-based models are more likely to be accurate, as they are based on the software development rates specific to each organization. Unfortunately, related data from previous projects is required to apply these techniques. The activity-based model uses data from the metrics database to determine the labour rates for the various development activities; for this reason, it can only be applied once a metrics program is established and a baseline exists from which to work.

3.2. Estimation approaches
There are many ways of categorizing estimation approaches. The top-level categories are the following:
I. Expert estimation: the quantification step, i.e., the step where the estimate is produced, is based on judgmental processes. Examples include group estimation and WBS-based bottom-up estimation.
II. Formal estimation model: the quantification step is based on mechanical processes, e.g., the use of a formula derived from historical data. Examples include analogy-based, size-based and parametric models.

III. Combination-based estimation: the quantification step is based on a judgmental or mechanical combination of estimates from different sources, for example mechanical combination and judgmental combination.
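As a small illustration of category III, the sketch below mechanically combines estimates from independent sources using a weighted average. The source names, weights and sample values are assumptions; in practice a judgmental combination could override the arithmetic.

```python
# Combination-based estimation - a minimal sketch.
# Mechanically combine effort estimates from independent sources with fixed
# weights. The weights and sample values are illustrative assumptions.

def combine_estimates(estimates: dict, weights: dict) -> float:
    """Weighted average of effort estimates (person-hours) from several sources."""
    total_weight = sum(weights[src] for src in estimates)
    return sum(estimates[src] * weights[src] for src in estimates) / total_weight

if __name__ == "__main__":
    estimates = {"expert_group": 1800.0, "parametric_model": 2400.0, "analogy": 2100.0}
    weights = {"expert_group": 2.0, "parametric_model": 1.0, "analogy": 1.0}
    print("combined estimate =", combine_estimates(estimates, weights), "person-hours")
```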

3.3. Techniques
Boehm (1981) discusses seven techniques of software cost estimation:
1. Algorithmic cost modeling - a model is developed using historical cost information that relates some software metric (usually its size) to the project cost. An estimate is made of that metric and the model predicts the effort required (a small sketch follows this list).
2. Expert judgement - one or more experts on the software development techniques to be used and on the application domain are consulted. They each estimate the project cost, and the final cost estimate is arrived at by consensus.
3. Estimation by analogy - this technique is applicable when other projects in the same application domain have been completed. The cost of a new project is estimated by analogy with these completed projects.
4. Parkinson's Law - Parkinson's Law states that work expands to fill the time available. In software costing, this means that the cost is determined by available resources rather than by objective assessment. If the software has to be delivered in 12 months and 5 people are available, the effort required is estimated to be 60 person-months.
5. Pricing to win - the software cost is estimated to be whatever the customer has available to spend on the project. The estimated effort depends on the customer's budget and not on the software functionality.
6. Top-down estimation - a cost estimate is established by considering the overall functionality of the product and how that functionality is provided by interacting sub-functions. Cost estimates are made on the basis of the logical function rather than the components implementing that function.
7. Bottom-up estimation - the cost of each component is estimated, and all these costs are added to produce a final cost estimate.
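A tiny sketch of technique 1 follows: an effort model of the form Effort = a * Size^b, fitted to historical data. The coefficients echo the widely published basic COCOMO equation for organic projects, but both they and the size figures should be treated as assumptions here rather than values calibrated for any particular organization.

```python
# Algorithmic cost modelling - a minimal sketch of Effort = a * Size^b.
# The default coefficients echo the basic COCOMO organic-mode equation
# (a = 2.4, b = 1.05, size in KLOC, effort in person-months); treat both the
# coefficients and the sample sizes as assumptions, not calibrated values.

def algorithmic_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Predict effort in person-months from estimated size in KLOC."""
    return a * (kloc ** b)

if __name__ == "__main__":
    for size in (10, 50, 100):  # hypothetical size estimates in KLOC
        print(f"{size:>4} KLOC -> {algorithmic_effort(size):6.1f} person-months")
```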

3.4. Ongoing research areas
Research is found in areas such as the following.
3.4.1. Creation and evaluation of estimation methods: work on the creation and evaluation of estimation methods, such as methods based on expert judgment, structured group processes, regression-based models, simulations and neural networks.
3.4.2. Calibration of estimation models: tailoring a model to a particular context (calibration) has been found to be difficult in practice. Open issues include, among others, when, how and how much local calibration of a model is beneficial.
3.4.3. Software system size measures: the main input to estimation models is the size of the software to be developed. Many size measures have been proposed, for example measures based on the amount of functionality described in the requirements specification.
3.4.4. Uncertainty assessments: software developers are typically over-confident in the accuracy of their effort estimates. Realistic uncertainty assessments are important in order to enable proper software project budgets and plans.
3.4.5. Measurement and analysis of estimation error: proper accuracy measurement is essential when evaluating estimation methods and identifying causes of estimation error.

3.4.6. Organizational issues related to estimation: organizational issues, such as processes to control the cost and scope of the project, may have a large impact on estimation accuracy.
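To illustrate what local calibration of a size-driven model (3.4.2, 3.4.3) can look like, the sketch below fits the coefficients of Effort = a * Size^b to an organization's own completed projects by ordinary least squares in log space. The historical data points are invented for the example.

```python
# Calibrating Effort = a * Size^b to local history - an illustrative sketch.
# Taking logs gives log(E) = log(a) + b*log(S), a straight line fitted here
# with ordinary least squares. The historical projects below are invented.

import math

def calibrate(history: list[tuple[float, float]]) -> tuple[float, float]:
    """history: (size_kloc, effort_person_months) pairs. Returns (a, b)."""
    xs = [math.log(size) for size, _ in history]
    ys = [math.log(effort) for _, effort in history]
    n = len(history)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx
    a = math.exp(mean_y - b * mean_x)
    return a, b

if __name__ == "__main__":
    past_projects = [(8, 24), (15, 52), (30, 110), (60, 250)]  # invented data
    a, b = calibrate(past_projects)
    print(f"calibrated model: Effort = {a:.2f} * Size^{b:.2f}")
    print(f"estimate for 45 KLOC: {a * 45 ** b:.0f} person-months")
```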

3.5. Trends in effort estimation

3.5.1. Algorithmic Cost Models (Bournemouth)
To date, most work in the software cost estimation field has focused on algorithmic cost modelling. In this process costs are analysed using mathematical formulae linking costs or inputs with metrics to produce an estimated output. The formulae used in a formal model arise from the analysis of historical data. The accuracy of the model can be improved by calibrating the model to a specific development environment, which essentially involves adjusting the weightings of the metrics. At first sight formal models might seem advantageous for their "off-the-shelf" qualities, but on closer observation cost estimators regard this as a disadvantage because of the additional overhead of calibrating the model to local circumstances; however, the more time spent calibrating a formal model, the more accurate the cost estimate should be. A distinct disadvantage of formal models is the inconsistency of estimates: deviations between predicted and actual values of as much as 85-610% have been reported.

3.5.2. Software Cost Estimation with Use Case Points
Estimating the amount of work required to deliver software is hard, and estimating it in the very early stages of a project is even harder. A method was developed to estimate the amount of work required by analyzing what the system will allow its users to do; that method is called estimating with use case points. Use case points measure how much effort is required to write software based on how much work the software is intended to do.

The cost estimation technique of use case points evaluates the following factors to determine an estimate of cost (a small sketch of the factor-to-TCF/ECF conversion follows this list):
1. Technical factors of the implementation - primarily non-functional requirements of the system.
2. Environmental factors - mostly characterizing the implementation team, but touching on process as well.
3. Use case quantity and complexity - the number of use cases and the number of steps within the use cases.
4. Actor quantity and complexity - the number and type of actors and interfaces.
5. Effort estimation - the previously collected data is converted into man-hours.
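The conversion of factor ratings into the TCF and ECF multipliers used in the UCP equation (Section 2.2.4) can be sketched as below. The formulas and constants follow the commonly cited form of Karner's UCP method (TCF = 0.6 + 0.01*TF, ECF = 1.4 - 0.03*EF), and the factor names, weights and ratings are assumptions for illustration only; this paper does not prescribe these values.

```python
# Deriving TCF and ECF from factor ratings - an illustrative sketch.
# Constants follow the commonly cited form of Karner's UCP method
# (TCF = 0.6 + 0.01*TF, ECF = 1.4 - 0.03*EF). Factor names, weights and
# ratings (each on a 0..5 scale) are assumptions for illustration.

def technical_complexity_factor(ratings: dict, weights: dict) -> float:
    tf = sum(weights[f] * ratings[f] for f in ratings)
    return 0.6 + 0.01 * tf

def environment_complexity_factor(ratings: dict, weights: dict) -> float:
    ef = sum(weights[f] * ratings[f] for f in ratings)
    return 1.4 - 0.03 * ef

if __name__ == "__main__":
    # Only a couple of technical and environmental factors, for brevity.
    tcf = technical_complexity_factor(
        ratings={"distributed_system": 3, "performance": 5},
        weights={"distributed_system": 2.0, "performance": 1.0},
    )
    ecf = environment_complexity_factor(
        ratings={"oo_experience": 4, "stable_requirements": 2},
        weights={"oo_experience": 1.0, "stable_requirements": 2.0},
    )
    print(f"TCF = {tcf:.2f}, ECF = {ecf:.2f}")  # 0.71 and 1.16 for these ratings
```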

3.6. Effort and cost estimation tools
1. ACEIT (Automated Cost Estimating Integrated Tools): a family of applications that support program managers and cost/financial analysts during all phases of a program's life cycle. ACEIT applications provide a framework for analyzing, developing, sharing, and reporting cost estimates, automating key analysis tasks and simplifying and standardizing the estimating process.
2. Agile COCOMO II: a web-based software cost estimation tool that enables estimates to be adjusted by analogy, by identifying the factors that will change and by how much.
3. Charismatek FP Workbench: a counting tool for all situations and for all software sizing needs, specifically designed to scale from individual counters to large, distributed IT environments. It is a software tool used when sizing software applications and projects with the IFPUG Function Point Analysis technique. The Workbench supports sizing, analysis and reporting at all software life cycle stages, from requirements analysis through to production, as well as related activities including project estimation, requirements communication and negotiation, scope management and project tracking.
4. Construx Software Builders: Construx provides Construx Estimation Software, a free estimation tool that includes both COCOMO II and SLIM functionality.
5. CoolSoft: uses a hybrid of the intermediate and detailed versions of the Constructive Cost Model (COCOMO). It allows for the reuse of existing code, the development of new code, the purchase and integration of third-party code, and hardware integration. The output is displayed as man-months of programming effort, calendar schedule, support costs and hardware costs.
6. Cost Estimating Tools Index: an index containing abstracts of tools and models that were used in the Department of Defense and had the potential for wider application at the time of posting (2004-2006). The link now points to the Internet Archive, since the original link no longer exists.
7. Cost Xpert: a commercial estimation tool positioned as turning innovation into measurable and lasting value creation, with a short amortization time and high customer satisfaction.

8. Costar and SystemStar: Costar is an automated implementation of COCOMO II developed by SoftStar Systems; SystemStar is an automated implementation of COSYSMO.
9. Costar Software Estimation Tool: Costar is a software cost estimation tool based on COCOMO II. A software project manager can use Costar to produce estimates of a project's duration, staffing levels, effort, and cost. Costar is an interactive tool that permits managers to make trade-offs and experiment with what-if analyses to arrive at the optimal project plan.

3.7. Selection of estimation approach
The evidence on differences in estimation accuracy between estimation approaches and models suggests that there is no single best approach, and that the relative accuracy of one approach or model compared with another depends strongly on context. This implies that different organizations benefit from different estimation approaches. Some of the findings are:
1. Expert estimation is on average at least as accurate as model-based effort estimation. In particular, situations with unstable relationships, or where important information is not included in the model, may favour expert estimation. This assumes, of course, that experts with relevant experience are available.
2. Formal estimation models not tailored to a particular organization's context may be very inaccurate. Use of the organization's own historical data is consequently crucial unless one can be sure that the estimation model's core relationships (e.g., formula parameters) are based on similar project contexts.
3. Formal estimation models may be particularly useful when the model is tailored to the organization's context (either through use of its own historical data or because the model is derived from similar projects and contexts), and/or when the experts' estimates are likely to be subject to a strong degree of wishful thinking.
4. The most robust finding, in many forecasting domains, is that combining estimates from independent sources, preferably applying different approaches, will on average improve estimation accuracy.
5. In addition, other factors, such as how easy an approach is to understand, communicate and use, and the cost of introducing it, should be considered in the selection process.

3.8. Psychological issues related to effort estimation
Many psychological factors potentially explain the strong tendency towards over-optimistic effort estimates, and they need to be dealt with to increase the accuracy of effort estimates. These factors matter even when formal estimation models are used, because much of the input to such models is judgment-based. Factors that have been demonstrated to be important are wishful thinking, anchoring, the planning fallacy and cognitive dissonance. It is easy to estimate what you know, hard to estimate what you know you don't know, and very hard to estimate what you don't know you don't know.

Conclusion
The ability to accurately estimate the time and cost required for a project to come to a successful conclusion has been a serious problem for software engineers. The use of a repeatable, clearly defined and well-understood software development process has in recent years shown itself to be the most effective method of gaining useful historical data that can be used for statistical estimation. Inaccurate software estimates cause trouble in business processes related to software development, such as project feasibility analyses, bidding, budgeting and planning.

References
[1] Barry Boehm, Chris Abts, Sunita Chulani; Software Development Cost Estimation Approaches.
[2] Rajiv D. Banker, Robert J. Kauffman, Rachna Kumar; An Empirical Test of Object-based Output Measurement Metrics in a CASE Environment.
[3] Lori Holmes, Roger Heller; Measuring Maintenance Activities Within Development Projects.
[4] Roy K. Clemmons, Diversified Technical Services, Inc.; Project Estimation With Use Case Points.
[5] Peter B. Wilson; Sizing Software with Testable Requirements.
[6] Gustav Karner; Resource Estimation for Objectory Projects. Objective Systems SF AB, 1993.
[7] I. Jacobson, G. Booch, J. Rumbaugh; The Objectory Development Process. Addison-Wesley, 1998.
[8] Bente Anda; Improving Estimation Practices by Applying Use Case Models. June 2003.
[9] Bente Anda et al.; Effort Estimation of Use Cases for Incremental Large-Scale Software Development. 27th International Conference on Software Engineering, St. Louis, MO, 15-21 May 2005, pp. 303-311.
