
What Gets Measured, Gets Done

How the Right Measures Drive Better Engineering

Table of Contents

What gets measured, gets done: how the right measures drive better engineering
About the Author
Topics trending now
For More Information

What gets measured, gets done: how the right measures drive better engineering

Performance measurement has permeated almost every aspect of modern business. The IT industry has played a crucial role in the ability to measure everything from financial results to social media sentiment, and in turning this into useful information to report back to stakeholders and to drive the business. It is peculiar, then, that performance measurement has played such a limited role in measuring and managing the discipline that has advocated it so strongly: information technology itself. Sure, there are endless reports on system performance and on IT operations, but what about the core of IT: engineering and system development?

I think there are two main reasons for that, and they are related. First, how do you determine the value of IT investments, particularly when the investments are infrastructural in nature? What is the return on investment of a data warehouse infrastructure, an email system, middleware, or a development environment? The value lies in using it, and the use is located somewhere other than where the cost and effort are spent, as part of a bigger picture. How do you separate, for instance, the IT contribution of a CRM system from an overall performance improvement in campaign management? How do you build the business case for introducing a new search mechanism in content management?

This introduces the second reason. In our quest for accountability, we demand that the contribution of every part of the organization is clear. In other words, we look for measures that show the effectiveness of an IT solution in isolation from the bigger picture. This leads to weird business cases, such as estimating 15 minutes of time saved for every employee using the new search mechanism, adding up to a number of FTEs saved per month. Like that is going to happen.

The traditional and easy way out is to focus on what engineers can influence themselves: the efficiency of the process. Many metrics in engineering and system development are focused on productivity, cost, and time and scope management. Take for instance function point analysis, which was popular in the days of COBOL system development and is now re-emerging as a measure of complexity and a unit of productivity to assess components in a service-oriented architecture. Table 1 lists typical examples.

Table 1: Examples of traditional software engineering metrics

Productivity
- Actual cost/time vs. plan
- Man-hours per function point
- Time to resolve bugs

Quality
- Number of defects per function point
- Number of defects in production
- Number of defects reopened
- % man-hours rework

Business Value
- Number of realized features vs. original requirements
- % of original business case achieved
- Time to market

Not all performance measurement is the same

Different roles in an organization require different types of performance measurement. In operations, both the process that drives value and the outcomes are relatively clear: effort and outcome can be linked, and measures have predictive value. Research initiatives, on the other hand, often have both less predictable processes and outcomes. Engineering and development sit in the middle: the development process should be clear, but it is hard to link it to tangible business results.

The measures in Table 1 are certainly useful. They are actionable, as development managers can influence them. They add to system development accountability. They are easy to measure and can be fully attributed to system development. But in the bigger picture they are not enough. Traditional performance measures often reflect the hierarchic structure of organizations. Hierarchy may be a good way to focus an organization, but it is not always the best tool for driving high-performance results. In business and engineering performance, it is the combination of points of view, skills, and deliverables that makes the difference. In other words, collaboration is key. Trying to make everyone accountable for their specific contribution in a hierarchic manner doesn't always drive collaboration. In fact, the opposite may be true.
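
To make the Table 1 ratios concrete, here is a minimal Python sketch that computes a few of them from invented project figures. The field names and numbers are illustrative assumptions, not data from any real project or tool; in practice the inputs would come from time-tracking and defect-tracking systems rather than being typed in by hand.

```python
# A minimal sketch of the traditional metrics in Table 1, using made-up
# project figures purely for illustration.

from dataclasses import dataclass


@dataclass
class ProjectFigures:
    function_points: int      # size estimate, e.g. from function point analysis
    man_hours: float          # total effort spent
    rework_hours: float       # effort spent on rework
    defects_total: int        # defects found over the project
    defects_production: int   # defects that escaped into production


def traditional_metrics(p: ProjectFigures) -> dict:
    """Return productivity and quality ratios in the style of Table 1."""
    return {
        "man_hours_per_fp": p.man_hours / p.function_points,
        "defects_per_fp": p.defects_total / p.function_points,
        "defects_in_production": p.defects_production,
        "pct_rework_hours": 100.0 * p.rework_hours / p.man_hours,
    }


if __name__ == "__main__":
    sample = ProjectFigures(function_points=250, man_hours=4000,
                            rework_hours=600, defects_total=75,
                            defects_production=12)
    for name, value in traditional_metrics(sample).items():
        print(f"{name}: {value:.1f}")
```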


Measurement drives behavior


An old myth says that Charles Dickens's prose was so lengthy because he was paid by the word. Although debunked, it is a useful example to explore how measurement drives behavior. If a commercial author is paid based on the success of a book, it does not matter whether the book is long or short; it needs to appeal to the public as a whole, as a work of art or entertainment. If an author is paid by the line, the book will most likely contain lengthy conversations, creating as many lines as possible. Paid by the word, the book will provide elaborate background descriptions. Paid by installment of a series (as Dickens was), and stories will look like soaps that go on forever. Measurement drives behavior.

Engineering is the same. If you measure lines of code as a measure of developer productivity, you punish developers for thinking things through and writing elegant, compact code. If you measure the number of bugs reported as a quality management measure, developers are rewarded for solving problems later on, treating them as bugs. And vice versa: measuring the right things drives the right behaviors. If you measure progress on completed working code (i.e. tested, deployed, and integrated), or measure the completed and accepted turnaround time of change requests that do not affect other sprints, you balance both effort and result and drive balanced behavior.

Business interface metrics

Successful systems development is not an activity in isolation: it crosses multiple business domains, such as IT development and IT operations, and, one step up, the interfaces between IT and the various business departments. Successful systems development requires collaboration. As measurement drives behavior, it then makes sense to measure and reward collaboration. This is done by recognizing business interfaces next to business domains.

Figure 1: Business interfaces

A business interface is the point where one business domain's activities and processes interact and border with the activities and processes of another business domain (see Figure 1). It is where people need to collaborate and where, in practice, most of the efficiency and effectiveness of work is lost. Handover moments are crucial, and business interface metrics are needed to manage them.

Let's apply this idea to the collaboration between IT development and IT operations. The business model of the IT development department is project-based: developers go from project to project, implementing new systems or updating existing ones. The IT operations department typically works differently. Its daily activities are less structured than projects, and they are managed through the hierarchy of the IT operations department. A certain amount of rigidity is needed, as the transactions of an organization often have contractual value and need to be protected.

The business interface between IT development and operations mostly consists of taking new developments into production, whether new systems or modifications to existing ones. There is typically a strict process that needs to be followed for development, testing, acceptance, and production. In the classic situation, the IT operations department sets rules about acceptance, informs IT development about those rules, and tests compliance with those rules after the fact. This often leads to frustrating, long, and difficult implementation processes. Moreover, it can hamper the successful adoption of more modern methodologies such as Agile. A more collaborative approach is needed. See Figure 2.


Figure 2: The business interface between IT Development and IT Operations

Business interface metrics for this process could include monitoring the handover time per function point. It might also be a good idea to introduce a risk factor for acceptance, where acceptance means the success of taking new developments into production. At the beginning of the implementation project and at every checkpoint, IT development and IT operations should jointly estimate the risks to production, as a feedforward loop. The moment this risk estimate increases, a correction process can start, instead of waiting for acceptance testing. Lastly, there may be a need for a feedback loop that tracks how many handover-related incidents occur during the first weeks or months after an application is taken into production. This metric will show negative results if the handover process was rushed. In all three cases, IT development and IT operations share the responsibility for hitting the targets on these business interface metrics.
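
As an illustration only, the following Python sketch captures the three business interface metrics described above: handover time per function point, the jointly estimated risk as a feedforward signal, and post-go-live incidents as a feedback signal. All names, thresholds, and figures are assumptions made for the example, not prescriptions.

```python
# A minimal sketch of three business interface metrics for the handover
# between IT development and IT operations. All values are invented.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Handover:
    function_points: int
    handover_days: float                  # elapsed time from code-complete to production
    joint_risk_estimates: List[float] = field(default_factory=list)  # 0..1, one per checkpoint
    post_golive_incidents: int = 0        # incidents in the first weeks traced to the handover


def handover_days_per_fp(h: Handover) -> float:
    """Handover time normalized by functional size."""
    return h.handover_days / h.function_points


def risk_trend_alert(h: Handover, threshold: float = 0.1) -> bool:
    """Feedforward: flag when the joint risk estimate rises between checkpoints."""
    estimates = h.joint_risk_estimates
    return any(b - a > threshold for a, b in zip(estimates, estimates[1:]))


if __name__ == "__main__":
    release = Handover(function_points=120, handover_days=18,
                       joint_risk_estimates=[0.20, 0.25, 0.45],
                       post_golive_incidents=3)
    print("handover days per FP:", round(handover_days_per_fp(release), 2))
    print("risk increasing, start correction:", risk_trend_alert(release))
    print("feedback - incidents after go-live:", release.post_golive_incidents)
```

Because both departments feed these numbers, neither can hit the targets alone, which is exactly the point of a shared business interface metric.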

Figure 3: Shared responsibility

Business interface metrics drive different, more collaborative behaviors. It is now in the best interest of IT operations professionals to involve themselves in projects from the beginning, so that the actual acceptance is nothing more than a formality: all risks have been mitigated before the handover moment, and IT operations people can run the system perfectly from day one. In the same way, developers are strongly motivated to involve IT operations people, to make sure the new system operates flawlessly. Their performance measurements depend on it.

This approach seems to violate one of the core principles of performance measurement: people who are held accountable should have the means to control the outcome of the measures. In the case of business interfaces, there is no full control; instead, professionals depend on each other. It is the responsibility of their managers, however, to make sure their departments collaborate. Successfully managing business interfaces is also the success of the manager.


Taking DevOps to the Extreme


The idea of managing system development in a more collaborative way is not new. In fact, DevOps (Development + Operations) is a complete set of processes and methods to drive collaboration. Other, perhaps better-known, related approaches are agile software development and Scrum. These newer development methodologies work much more incrementally, using continuous deployments, time boxes, and cross-functional teams. They have metrics designed for them as well. Again, there are productivity metrics, such as sprint completion bars; quality metrics, such as post-release defect arrivals; and predictability metrics, such as burndown charts. Still, these are all metrics that focus, in a traditional way, on the efficiency of the process, and not on the business value of the results.

Yet some extreme views on DevOps offer a new perspective. In some areas, particularly in data analysis, the DevOps approach has become so interactive and real-time that there are no hand-off moments or deployments anymore. Data scientists are professionals who combine technical, analytical, and business skills. They are an integral part of a business team involved in a certain campaign or internal project, and it is their task to provide analytical support. On the spot, data is located, loaded, accessed, and analyzed, using modern cloud development technology. There are no metrics left to measure the efficiency of the development process; the only thing left is measuring the business value of the total initiative: less fraud, higher campaign conversion, more efficient business processes, higher utilization of resources. The things that matter.
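
For readers who want to see what the agile metrics mentioned above look like in code, here is a minimal sketch of a burndown series and a sprint completion percentage, using an invented ten-day sprint. It illustrates the idea only and does not reproduce any particular tool's calculation.

```python
# A minimal sketch of two agile metrics: a burndown series (remaining story
# points per day) and a sprint completion percentage. The data is invented.

def burndown(total_points: int, completed_per_day: list[int]) -> list[int]:
    """Remaining story points at the end of each day of the sprint."""
    remaining, series = total_points, []
    for done in completed_per_day:
        remaining -= done
        series.append(max(remaining, 0))
    return series


def sprint_completion(total_points: int, completed_per_day: list[int]) -> float:
    """Percentage of committed story points completed during the sprint."""
    return 100.0 * min(sum(completed_per_day), total_points) / total_points


if __name__ == "__main__":
    total = 40
    per_day = [0, 5, 3, 8, 6, 4, 7, 2, 3, 0]   # a ten-day sprint
    print("burndown:", burndown(total, per_day))
    print(f"completion: {sprint_completion(total, per_day):.0f}%")
```

Useful as these numbers are for managing the process, note that none of them says anything about the business value of what was built, which is the point of the argument above.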

Best practices and recommendations


Although engineering performance measurement has its own performance indicators and some unique challenges, the general best practices for organizing performance measurement apply.

Keep performance measures close to the engineering process and engineering systems, instead of just creating high-level management reports in a distant data warehouse. Performance measurement is most useful when it is used as feedback for continuous review and continuous improvement of activities across the engineering life cycle, such as planning, coding, testing, deployment, use, and change. Aggregated general management reporting can be an automated, derived result of the measurement process.

At the same time, do not treat performance measurement in isolation. Engineering is part of the overall organization. Use existing frameworks such as the balanced scorecard, or specific models like SCOR for operations management, to link engineering metrics to business results. For instance, average sprint-to-deployment time can be a leading indicator for agility in business operations, which can be linked to first-mover advantage in new business opportunities, improving market share and shareholder value (a small sketch of such a lead-time calculation follows below).

More is not always better. Don't go overboard trying to measure everything simply because the data is available. Performance measurement should be a by-product of the engineering process itself. Engineers should focus on productivity, not on filling in spreadsheets and reports. Automate the measurement and reporting process as much as possible, using standard available software.

Simply reviewing how the project results compare to the original business case is not enough (although even that often doesn't happen in practice). Reality rarely develops as originally planned. Instead, link engineering efforts to real business results. In doing so, accept that it may not be possible to single out the engineering effort from the overall business result. Business results are a collaborative effort.

Understand that measurement drives behavior, and that this is not always a rational effect. The wrong measurements, as objective as they may be, lead to wrong behaviors such as gaming the numbers, and to suboptimal results, particularly when performance indicators are directly linked to personal compensation plans. The right measurements drive the right behaviors and the right results, such as collaboration and tangible business performance. Grow from traditional indicators to business interface metrics and to the lessons learned from modern engineering approaches such as DevOps.
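
As a small illustration of the sprint-to-deployment indicator mentioned above, the sketch below averages the lead time between sprint completion and production deployment from a handful of invented date pairs. In practice the timestamps would be pulled automatically from the ALM and deployment tooling rather than maintained by hand.

```python
# A minimal sketch of an "average sprint-to-deployment time" indicator,
# computed from (sprint end, deployment) timestamp pairs. Dates are invented.

from datetime import date
from statistics import mean


def avg_sprint_to_deployment_days(pairs: list[tuple[date, date]]) -> float:
    """Average number of days between sprint completion and production deployment."""
    return mean((deployed - sprint_end).days for sprint_end, deployed in pairs)


if __name__ == "__main__":
    history = [
        (date(2012, 3, 2), date(2012, 3, 9)),
        (date(2012, 3, 16), date(2012, 3, 20)),
        (date(2012, 3, 30), date(2012, 4, 10)),
    ]
    print("average sprint-to-deployment days:",
          avg_sprint_to_deployment_days(history))
```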


About the Author


Frank's professional background in strategy, performance management, and organizational behavior gives him a strong perspective across many domains in business and IT. He is an exceptional speaker at conferences all over the world, known for his out-of-the-box, entertaining, and slightly provocative style. Frank is also a visiting fellow at Cranfield University School of Management and the author of various books, including Performance Leadership (McGraw-Hill, September 2008) and Dealing with Dilemmas (Wiley & Sons, August 2010). Follow Frank on Twitter: @FrankBuytendijk.

CONTACT US
Corporate Headquarters
8000 Marina Blvd, Suite 600
Brisbane, CA 94005
United States
Phone: +1 (650) 228-2500
Toll Free: +1 (888) 778-9793

For More Information


For more information on CollabNet's agile tools and training, visit www.collab.net/CTFdemo

Topics trending now


Many of the latest technology announcements have implications for PaaS and cloud development that will serve agile businesses everywhere.

Enterprise Cloud Development, www.collab.net/ecd
Analytics and Reporting, www.collab.net/analytics
Continuous Integration, www.collab.net/getci

About CollabNet

CollabNet is a leading provider of Enterprise Cloud Development and Agile ALM products and services for software-driven organizations. With more than 10,000 global customers, the company provides a suite of platforms and services to address three major trends disrupting the software industry: Agile, DevOps, and hybrid cloud development. Its CloudForge development-Platform-as-a-Service (dPaaS) enables cloud development through a flexible platform that is team friendly, enterprise ready, and integrated to support leading third-party tools. The CollabNet TeamForge ALM, ScrumWorks Pro project management, and SubversionEdge source code management platforms can be deployed separately or together, in the cloud or on-premise. CollabNet complements its technical offerings with industry-leading consulting and training services for Agile and cloud development transformations. Many CollabNet customers improve productivity by as much as 70 percent while reducing costs by 80 percent. For more information, please visit www.collab.net.
© 2012 CollabNet, Inc. All rights reserved. CollabNet is a registered trademark in the US and other countries. All other trademarks, brand names, or product names belong to their respective holders.

CollabNet, Inc., 8000 Marina Blvd., Suite 600, Brisbane, CA 94005
Tel +1 650 228 2500  Fax +1 650 228 2501
www.collab.net  info@collab.net
Blogs: blogs.collab.net  Twitter: twitter.com/collabnet  Facebook: www.facebook.com/CollabNetHQ  LinkedIn: www.linkedin.com/company/collabnet-inc
