
A Study in the Application of Six Sigma Process Improvement Methodology to a Transactional Process

By Blain Graphenteen

A thesis submitted in partial fulfillment of the requirements for the
Master of Science Degree in Industrial Management

South Dakota State University
2003


A Study in the Application of Six Sigma Process Improvement Methodology to a Transactional Process

This thesis is approved as a creditable investigation by a candidate for the Master of Science degree and is acceptable for meeting the thesis requirements for this degree. Acceptance of this thesis does not imply that the conclusions reached by the candidate are necessarily the conclusions of the major department.

______________________________________________________ Dr. Robert J. Lacher, Thesis Advisor Date

______________________________________________________ Dr. Ross P. Kindermann, Major Advisor Date

Abstract

Title: A Study in the Application of Six Sigma Process Improvement Methodology to a Transactional Process
Author: Blain Graphenteen
Date: March 14, 2003

Six Sigma process improvement techniques are described as a structured, disciplined, and rigorous approach for improving business leadership and performance. Six Sigma Methodology is designed to provide for the application of statistical tools in the context of a process improvement structure summarized by the acronym DMAIC: Define, Measure, Analyze, Improve, and Control. The DMAIC model provides a framework to identify and eliminate sources of variation in a process, improve and sustain performance with well-executed control plans, and promote one process improvement language for all members of an organization to employ. Six Sigma Methodology has been proven successful in improving operational processes like machine performance and product quality. However, limited documentation exists to demonstrate application of Six Sigma toolsets to improve transactional business processes like inventory optimization. This research paper will examine a transactional process improvement effort using the Six Sigma DMAIC model. Highlighted for the reader will be a summary of the progress relating to the process improvement effort and an analysis of the applicability of the Six Sigma tools used at each stage of the DMAIC model.

Table of Contents

Abstract ......... iii
List of Abbreviations ......... vi
List of Tables ......... ix
List of Figures ......... x

Chapter

1. Statement of Research Problem ......... 1
   1.1 Introduction ......... 2
   1.2 Business Case ......... 6
   1.3 Method and Procedure ......... 8
   1.4 Review of Literature ......... 9

2. Background of the Study ......... 15
   2.1 Initial Project Data Gathering ......... 18
   2.2 Identification of Assumptions ......... 22
   2.3 Process Definition ......... 23
       2.3.1 Process Map ......... 24
   2.4 Process Measurement ......... 29
       2.4.1 Cause and Effects Matrix ......... 29
       2.4.2 Data Collection Plan ......... 37
       2.4.3 Measurement System Analysis ......... 40
   2.5 Process Analysis ......... 50
       2.5.1 Failure Mode and Effects Analysis ......... 51
       2.5.2 Multivariate Analysis ......... 60
       2.5.3 Designed Experiments ......... 70
   2.6 Process Improvement ......... 80
   2.7 Process Control ......... 97
       2.7.1 Project Controls ......... 98
       2.7.2 Process Capability ......... 108

3. Results and Conclusions ......... 126
   3.1 Research Problem Results ......... 127
   3.2 Recommendations for Future Study ......... 132

Bibliography ......... 135
Supplemental Research References ......... 138

List of Abbreviations

ANOVA: Analysis of Variance

C&E Matrix: Cause and Effects Matrix

CV: Coefficient of Variation

Cp: Process capability

Cpk: Process capability with centering

CSIP: Customer Service Interruption Point

DFD: Data Flow Diagram

DMAIC: Define, Measure, Analyze, Improve, Control

DoE: Design of Experiment

DOS: Days of Stock

FMEA: Failure Mode and Effects Analysis

Gage R&R: Gage Repeatability and Reproducibility

I-MR: Individuals and Moving Range Chart

LCL: Lower Control Limit

LSL: Lower Specification Limit

MANOVA: Multivariate Analysis of Variance

MSA: Measurement System Analysis

OPQ: Optimal Production Quantity

PDCA: Plan, Do, Check, Act Process Improvement Model

P/T Ratio: Precision to Tolerance Ratio

Pp: Process performance

Ppk: Process performance with centering

RACI Matrix: Responsible, Accountable, Consulted, Informed Matrix

RPN: Risk Priority Number

SKU: Stock Keeping Unit

UCL: Upper Control Limit

USL: Upper Specification Limit

List of Tables

Table 1-1. Alternative Solutions to achieving Six Sigma Goals ......... 11
Table 2-1. Information Technology Feasibility Matrix ......... 85
Table 2-2. Key Process Information Definitions ......... 113

List of Figures

Figure 1-1. The DMAIC Model ......... 5
Figure 2-1. Semi-Finished Inventory Days-of-Stock Baseline Measure ......... 21
Figure 2-2. Manufacturing Planning and Control High Level Process Map ......... 26
Figure 2-3. Process Inputs and Outputs ......... 27
Figure 2-4. Basic Cause & Effects Diagram ......... 30
Figure 2-5. Cause and Effects Matrix ......... 35
Figure 2-6. Current Inventory State Baseline I-MR Chart ......... 44
Figure 2-7. Goal Inventory State I-MR Chart (simulation) ......... 45
Figure 2-8. Optimal Inventory State I-MR Chart (simulation) ......... 45
Figure 2-9. Service np Chart ......... 48
Figure 2-10. Capacity Availability I-MR Chart ......... 48
Figure 2-11. Failure Mode and Effects Analysis Detection Rating Scale ......... 54
Figure 2-12. Failure Mode and Effects Analysis Summary Diagram ......... 55
Figure 2-13. Sources of Material Requirements Planning Variability ......... 56
Figure 2-14. Gateway Product Flow Diagram ......... 58
Figure 2-15. Inventory Cycle Count np Chart ......... 63
Figure 2-16. Gateway Inventory Cycle Count np Chart ......... 63
Figure 2-17. Demand Variability Box Plot ......... 64
Figure 2-18. Item Schedule Attainment Box Plot ......... 66
Figure 2-19. Baseline Cycle Frequency I-MR Chart ......... 67
Figure 2-20. Schedule Change Control Chart ......... 68
Figure 2-21. Schedule Change Pareto Chart ......... 69
Figure 2-22. Parameter Simulation Model Example (Ha) Inputs ......... 74
Figure 2-23. Parameter Simulation Output Example (Ha) ......... 75
Figure 2-24. Test for Equal Variances ......... 78
Figure 2-25. Constraint-Anchored Planning (Ha) Test Results ......... 79
Figure 2-26. Stakeholder Analysis Excerpt ......... 81
Figure 2-27. Schedule Change Guidelines ......... 83
Figure 2-28. Optimal Order Quantity Model ......... 88
Figure 2-29. Optimal Production Quantity Simulation Model ......... 89
Figure 2-30. Group Technology Scheduling Plan ......... 90
Figure 2-31. Buffer Management Inventory Monitor ......... 93
Figure 2-32. Buffer Management Cycle Frequency Individuals Chart ......... 93
Figure 2-33. Buffer Management Demand Individuals Chart ......... 94
Figure 2-34. Semi-Finished Inventory Control Plan ......... 98
Figure 2-35. Primary Control Plan Measures ......... 99
Figure 2-36. Control Plan Measurement Enablers ......... 100
Figure 2-37. Counterbalance Control Plan Measures ......... 100
Figure 2-38. Responsible, Accountable, Consulted, Informed (RACI) Matrix ......... 101
Figure 2-39. Supply Plan Attainment Detail Screen ......... 104
Figure 2-40. Schedule Attainment Detail Screen ......... 104
Figure 2-41. Group Technology Cycle Frequency Individuals Chart ......... 107
Figure 2-42. Gateway Stock Keeping Unit 1 I-MR Chart ......... 115
Figure 2-43. Capability Results - Baseline Data for Gateway 1 ......... 119
Figure 2-44. Capability Results - Post-Improvement Data for Gateway SKU 1 ......... 121
Figure 2-45. Days-of-Stock I-MR Chart - Gateway SKU 1 ......... 122
Figure 2-46. Days-of-Stock I-MR Chart - Gateway Total ......... 123
Figure 2-47. Gateway Inventory Improvement Measure ......... 124
Figure 2-48. Downstream Inventory Improvement Measure ......... 125
Figure 3-1. Quality Digest Survey Results ......... 128

Chapter 1 Statement of Research Problem

This research paper examines a transactional process improvement effort using the Six Sigma Define, Measure, Analyze, Improve, and Control (DMAIC) model. At each stage of the DMAIC model, the paper presents a summary of the progress relating to the process improvement effort and an analysis of the applicability of the Six Sigma tools demonstrated. My research is aimed at analyzing the functionality of the Six Sigma tools used during the process improvement effort and reporting on the inventory optimization solutions implemented. The attraction of this topic as a thesis paper stems not only from my personal involvement as a Six Sigma Green Belt project leader for this business case but also from the lack of similar business case research material relating Six Sigma to inventory reduction/optimization projects. Six Sigma topic searches using library and Internet search engines resulted in examples where Six Sigma methodology had been successfully applied to improve operational performance such as product quality and production yield. Very few examples were found in my literature search in which Six Sigma tools were used to improve transactional process performance. Not every Six Sigma tool will be analyzed for its applicability to this transactional process improvement effort. As a Green Belt project leader, I was not trained on every Six Sigma tool available. As the project work progressed through the DMAIC model, use of every Six Sigma tool was not necessary to achieve results.

The business case focuses on identifying and implementing supply chain process improvements that result in semi-finished inventory reduction without negatively impacting customer delivery performance. The desired improvement result is the freeing-up of cash for a corporation. By reducing inventory while maintaining acceptable customer service goals, cash can be made available for reinvestment into corporate growth strategies. This business case pertains only to the inventory asset category of semi-finished inventory.

1.1 Introduction

The origin of six sigma as a measurement standard can be traced back to Carl Friedrich Gauss, who introduced the concept of the normal curve. Walter Shewhart expanded the use of six sigma as a measurement standard by demonstrating that three sigma from the mean is the point where a process requires correction. [1]

The term sigma (σ) is used in statistics to describe variability, where a higher sigma level indicates a process that is less likely to create defects. When used as a metric, Six Sigma technically means having no more than 3.4 defects per million opportunities in any process, product, or service. Statisticians noted that having specification limits six standard deviations away from the average of an assumed normal distribution will not, by itself, result in 3.4 defects per million. The number is arrived at by assuming that, in addition to random variability, the process average drifts over the long term by 1.5 standard deviations, despite efforts to control it. This results in a one-sided integration under the normal curve beyond 4.5 standard deviations - an area of approximately 3.4 defects per million opportunities. [2]

An engineer at Motorola, the late Bill Smith, is widely credited with coining the term Six Sigma. (Six Sigma is actually a federally registered trademark of Motorola.) [3] Smith noted that system failure rates were substantially higher than predicted by final product test and concluded that a much higher level of internal quality was required. He convinced Motorola corporate management of the importance of setting Six Sigma as a quality goal for achieving this higher level of quality. Smith's holistic view of reliability (as measured by mean time to failure) and quality (as measured by process variability and defect rates) was new, as was the Six Sigma quality objective. [4]

Six Sigma has evolved from its meager beginnings as a quality goal to become labeled as a business process management system. The foundation of Six Sigma is the application of statistical tools in the context of a disciplined and easy-to-follow methodology. It is an approach to sustainable continuous improvement that fosters a common language and cooperation using basic statistical and process understanding tools. While the tools have most often been applied to improve operational performance such as product quality and production yield, their application to transactional process performance like customer service response time and hospital patient care is becoming more prevalent. Regardless of the process type, the goal of Six Sigma improvement is still the same: to achieve breakthroughs in process performance using a structured process improvement technique that identifies, quantifies, and eliminates sources of variation and provides a roadmap for sustaining performance with well-executed control plans.

Many Six Sigma consultants suggest the use of the DMAIC model (Define, Measure, Analyze, Improve, and Control) as the structured roadmap to follow during the course of managing a process improvement effort. At each step of the model, process definition and statistical analysis tools are available as process understanding transitions from intuitive and subjective to defined and objective. Moving from a subjectively defined problem to an objectively defined problem requires an effort to understand the process. This can be summarized in Six Sigma terminology as identifying the critical process inputs having the most significant influence on the performance of the process. The relationship between the process output and the process inputs is represented by y as a function of the x's, where y represents a process output and x represents a process input. (The formula y = f(x1, x2, ..., xk) can be used as a simplified representation of this relationship.)

The DMAIC roadmap attempts to lead the process improvement effort to the core problem through the funneling from the trivial many process inputs to the critical few process inputs determined to have the most influence on the capability of the process. Once isolated, these critical inputs should be recognized as the primary sources of variation in the process. The desired outcome from following the DMAIC roadmap is the identification and implementation of control plans that will serve as the indicator for process capability and control of the critical inputs. Figure 1-1 is a graphical representation of the most important tools used in the Six Sigma DMAIC process and the desired effect these tools are designed to have in funneling the process input variables from the trivial many to the vital few.
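The tail-area arithmetic behind the 3.4 defects-per-million figure is easy to verify. The following minimal sketch (not part of the original thesis) assumes SciPy is available and uses illustrative variable names; it evaluates the one-sided tail both with and without the assumed 1.5-sigma drift.

```python
# A minimal sketch (not from the thesis) verifying the 3.4 defects-per-
# million-opportunities (DPMO) figure. It assumes a normal process with
# the nearer specification limit six standard deviations from target and
# a long-term mean drift of 1.5 sigma toward that limit.
from scipy.stats import norm

for shift in (0.0, 1.5):
    z = 6.0 - shift              # effective distance to the nearer limit
    dpmo = norm.sf(z) * 1e6      # one-sided upper-tail area, per million
    print(f"mean shift = {shift:.1f} sigma -> {dpmo:.3f} DPMO")

# Output: shift 0.0 -> ~0.001 DPMO; shift 1.5 -> ~3.4 DPMO.
# The far-side tail (beyond 7.5 sigma) is negligible and ignored here.
```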

[Figure 1-1 depicts the DMAIC phases arranged around an input funnel that narrows the trivial many inputs to the critical few: Define (project scope and boundary, leadership approval, process map); Measure (cause and effects matrix, data collection plan, measurement system analysis); Analyze (failure mode and effects analysis, multivariate analysis, design of experiments); Improve (identify solutions, pilot improvements, implementation); and Control (documentation, monitor and evaluate, standardize, transfer to process owners).]

Figure 1-1. DMAIC Tools and the Funneling Effect

1.2 Business Case

This business case focuses on identifying and implementing supply chain process improvements that result in sustained semi-finished inventory reduction without negatively impacting customer delivery performance. The desired improvement result is the freeing-up of cash for a corporation. By reducing inventory while maintaining acceptable customer service goals, cash can be made available for reinvestment into corporate growth strategies. This business case pertains only to the inventory asset category of semi-finished inventory. Semi-finished inventory can be defined as products that have been stored uncompleted, awaiting final operations that adapt them to different uses or customer specifications. [2] The company sponsoring the process improvement effort is a multinational firm with product offerings in several market centers, including: Aeronautical, Automotive, Business Products, and Health Care. The company's day-to-day manufacturing functions are managed by site, with a few hundred production facilities located worldwide. The sales, marketing, and product development functions are centralized at three primary corporate locations. The specific production facility for this process improvement effort primarily manufactures medical products serving both the Consumer and Health Care customer segments. By all accounts, it is the largest medical products manufacturer in its corporate Health Care Markets division. By virtue of its size, the facility is also the most influential contributor to income statement and balance sheet performance in the division.

At this manufacturing site, semi-finished inventory is the largest category of inventory assets as measured in dollars. This result is driven by four key factors.

First, for most products manufactured, semi-finished inventory is the most flexible stocking point: one supply of semi-finished inventory provides for many demands. The largest population of semi-finished inventory is in roll form, or "jumbos." The adhesive-coated, woven-coated, or extruded-film jumbo rolls typically run anywhere from 1,000 to 10,000 lineal yards of material. Smaller rolls (referred to as slit rolls) are typically less than 1,000 lineal yards and are used in production of the finished product. Conversion or commitment of the jumbo rolls to slit rolls is delayed as long as possible to allow for conversion flexibility.

Another factor impacting this inventory asset category is the proliferation of semi-finished-goods stock keeping units (SKUs). The semi-finished inventory category includes approximately 1,900 active (with inventory movement) SKUs representing 70 commodities manufactured across 117 work centers. Since one supply of semi-finished inventory, as measured in a jumbo roll of material, provides for many demands with a variety of size configurations, converting the entire jumbo to finished goods would require additional converting resource time, additional storage space, and the potential for shelf-life expiration for slower-moving SKUs.

A third factor is the lack of synchronization between the semi-finished producing resources and the downstream converting work centers. Due to various factors like length of changeovers, minimum jumbo size requirements, product family scheduling requirements, and run frequency, semi-finished supply resources produce more than the consuming resources demand. These excess amounts of inventory could be termed incidental buffers because they occur as a result of process capability differences rather than as a safety stock buffer used to protect against demand and supply variability. Additional synchronization issues include various stocking strategies, build plans, productivity goals, and operating expense goals.

The final key factor impacting the level of semi-finished inventory is the lack of process understanding. Process understanding can be described as: 1) knowing the level of semi-finished inventory required to protect against supply and demand variability, as well as understanding the level of inventory that is an inherent result of the process capability; and 2) quantifying the cost-versus-cash tradeoff - the Optimal Production Quantity (OPQ) that strikes a balance between the cost of carrying inventory and the cost of producing it.
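The thesis does not state an OPQ formula at this point, but the cost-versus-cash balance it describes is commonly formalized with the textbook economic production quantity (EPQ) model. The sketch below is illustrative only; all parameter values are hypothetical.

```python
# Illustrative sketch of an Optimal Production Quantity calculation using
# the textbook economic production quantity (EPQ) model; the thesis does
# not give its own formula here, and all numbers below are hypothetical.
from math import sqrt

D = 120_000   # annual demand for the semi-finished item (units/year)
S = 450.0     # setup/changeover cost per production run (dollars)
H = 2.25      # cost of carrying one unit in inventory for a year (dollars)
P = 480_000   # annual production rate of the resource (units/year)

# EPQ balances setup cost (favoring long runs) against carrying cost
# (favoring short runs), adjusted because production and consumption
# overlap at rate D/P.
epq = sqrt((2 * D * S) / (H * (1 - D / P)))

print(f"optimal run quantity: {epq:,.0f} units")
print(f"runs per year: {D / epq:.1f}")
```

With these assumed values the model yields runs of 8,000 units, about 15 per year; larger runs would cut changeover cost but tie up more cash in incidental buffer inventory.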

1.3 Method and Procedure

This research paper will examine a transactional process improvement effort using the Six Sigma DMAIC model. Progress relating to the process improvement effort will be presented, as well as analysis of the applicability of the Six Sigma tools at each stage of the DMAIC model. Augmenting the examples provided from the process improvement project will be additional information or recommendations for the use or applicability of Six Sigma toolsets discovered in the research. This process improvement effort did not attempt to apply every Six Sigma tool available. Only the tools that were used or tried will be covered in this paper.

1.4 Review of Literature

The purpose of this literature review is to summarize areas of controversy surrounding the application of Six Sigma Process Improvement Methodology to process improvement. The literature review type can be described as both quantitative research (on the effectiveness of Six Sigma process improvement application) and methodology research (on the types of processes where Six Sigma tools were applied). Various types of research sources were explored. Research databases included: InfoTrac, SDNET, JStor, ProQuest, MINITEX/WebSPIRS, OCLC FirstSearch, and ProjectMUSE. The primary library resources included the Hilton M. Briggs Library (South Dakota State University) and the Brookings Public Library (Brookings, South Dakota), using primarily the South Dakota Library Network. Several professional journals were researched, including: American Production and Inventory Control Society (APICS) Journal, Quality Digest, Harvard Business Review, Academy of Management Journal, Management Science, Journal of Management Studies, Journal of Organizational Change Management, MIT Sloan Management Review, and the Strategic Management Journal. Research textbooks include The Six Sigma Way (Pande, Neuman, and Cavanagh (2000)) and Implementing Six Sigma (Breyfogle (1999)). Several consulting companies and other miscellaneous Six Sigma websites were explored via the Internet. Several key words or phrases were searched, including: Six Sigma, 6 Sigma, DMAIC, Inventory Optimization, Lean Manufacturing, Process Improvement, Deming, Quality Function Deployment, QFD, Failure Modes and Effects Analysis, FMEA, Cause and Effects Matrix, Cause and Effects Diagram, C&E, COPQ, RTY, transactional processes, operational processes, and DPMO (defects per million opportunities).

Several areas of controversy were discovered during the course of research. One criticism of the Six Sigma methodology is that it has little to offer that cannot be found elsewhere. Six Sigma may sound new, but critics view it as fundamentally the same as statistical process control and/or Total Quality Management. Much of the Six Sigma methodology is based on tools that have been useful in previous quality initiatives. [5][6]

D.H. Stamatis (2000) described that quality professionals seem mesmerized with Six Sigma for at least two reasons. First, it offers easy money, because both the training and qualification are controlled as though the concepts are unique and innovative and can only be understood, taught, and implemented in one way. In reality, many consultants who promote the Six Sigma methodology lack consistency in their training materials and course content, and they themselves lack a knowledge base to build on. Second, Six Sigma sounds impressive because some major corporations claim exceptional returns on their Six Sigma investments. Although it's true that some companies - and they constitute a small percentage of the whole - have had exceptional returns on investment, they only experienced such a tremendous turnaround because they attacked the simplest, easiest-to-solve problems first, and their quality levels were so low that anything they tried would have been a success. [6] Stamatis supported his claim that the Six Sigma breakthrough is nothing more than a repackaging of the automotive methodologies of advanced product quality planning (APQP), problem solving, and statistical process control (SPC) by providing a comparison (Table 1-1) of alternative solutions to achieving Six Sigma goals.

Table 1-1. Alternative Solutions to achieving Six Sigma Goals [6]

A second criticism is more statistically technical. Critics argue that assuming a process mean to be 1.5σ off-target is somewhat ridiculous. Perhaps 1.5σ is a bit large, but even more ridiculous is the assumption that one could keep the process mean exactly on target. Furthermore, sigma, as defined in process capability studies, is the short-term, within-sample variability. Thus the 1.5-sigma shift allows for variation of the mean about the target. Any process's long-term variation is often larger than its short-term variation due to other sources of variability introduced by operators, materials, and operating conditions. (Motorola determined, through years of process and data collection, that every process varies and drifts over time. Motorola referred to this phenomenon as the Long-Term Dynamic Mean Variation. This variation typically falls between 1.4 and 1.6 sigma.) [3]

Although the structured approach to Six Sigma implementation has been viewed as a positive, it has also been criticized as a weakness. The speed of implementing the Six Sigma structure was reported as an issue with Six Sigma. There are other approaches that can drive process improvement at a faster implementation rate and at a comparable short-term success rate and return on investment. [5][7][16][17] In addition to the issue of implementation speed, Martin (2001) found smaller companies tend to subscribe to other process improvement methodologies due to the significant costs associated with Six Sigma training. [8]

Costanzo observed that some companies find the statistical nature of Six Sigma tools does not always translate well to transactional processes, and they often find it difficult to know when a process improvement project should not require adherence to the rigorous Six Sigma methodology. [9] Not every improvement needs to be a Six Sigma project in order to be successfully implemented. The belief that every improvement effort needs to be a Six Sigma project can paralyze an organization from making the less difficult and more obvious process improvements, as well as inundate the workforce in collecting data that may not be necessary. U.S. Bancorp studied Six Sigma as a potential approach to improve customer service and decided that mapping out every service situation an employee might encounter to develop a best-in-class response would prove to be a time-consuming effort with minimal return on the investment in time. Because Six Sigma is so statistical, U.S. Bancorp determined that Six Sigma does not correlate well to customer service and viewed it as missing the human element essential to customer service delivery. [9]

Mel Bergstein, the chairman and chief executive officer of the Chicago consulting firm Diamond Cluster International Inc., wrote that Six Sigma doesn't work well at finding innovative ideas because it was designed for fine-tuning existing products and processes. Six Sigma appeals to a manager's need to exert control, often over processes beyond their control. As great as Six Sigma's statistical analysis tools are in many situations, they simply won't stretch as far as many would have us believe. [9]

Clifford presents a compelling argument that while Six Sigma process improvement efforts implemented by a committed CEO and management team have proven successful in reducing variability and defects, the results do not necessarily guarantee stock market success. [10] Reducing defects does not seem to matter a great deal if the company is making a product no one wants to buy. So while many Six Sigma implementers may be saving money with their error reduction programs, others are spending valuable time and resources for something that may never have any tangible return on investment for shareholders.

Although Six Sigma improvement techniques may have some merit in identifying sources of variation in safety practices or processes, Gyorki observed Six Sigma metrics may not be adequate for measuring safety. Machine and process safety for employees and product safety for customers deserve better than six sigma results. [11]

An information deficiency seems to exist in the specific coverage of Six Sigma applications to transactional process improvements like inventory optimization, market growth, supplier performance optimization, and improvement in customer response time. Hahn, Hill, Hoerl, and Zingraf noted that Allied Signal and General Electric embarked on commercialization programs centered around Six Sigma concepts, voice of the customer, value chain analysis, and customer satisfaction. [12] However, no examples were given to demonstrate how Six Sigma was applied in those commercial processes. Very few transactional process examples were found that demonstrated the application of the Six Sigma methodology. Three specific transactional process improvement examples discovered included the following: Deployment of Six Sigma Methodology in Human Resource Function: A Case Study, [13] Use of Six Sigma to Improve the Safety and Efficacy of Acute Anticoagulation with Heparin, [14] and Six Sigma Method Application in Reducing ED Wait Time. [15] Although the primary focus of each example was the process improvement benefits, each article demonstrated the application of different aspects of the Six Sigma methodology. None of these articles presented a case for which Six Sigma tools were or were not effective in their process improvement project. One could speculate there is a lack of Six Sigma process improvement examples because divulging them would detract from the ability of consultants to solicit business in this arena.

Based upon the results from this literature review, there is sufficient evidence to suggest a research void exists in published examples that demonstrate the specific application of Six Sigma tools to a transactional process improvement effort.

Chapter 2 Background of the Study

Six Sigma Methodology has been criticized for not contributing anything new to the area of process improvement. [5][6] Six Sigma concepts have been described as a compilation of several process improvement techniques but seem to most resemble Deming's Plan, Do, Check, Act (PDCA) model. [16] Regardless of the process improvement model used, examples of application to transactional processes are difficult to find.

Six Sigma Methodology has demonstrated success in improving operational processes. Operational processes can broadly be defined as those activities relating to the production of tangible goods. Other terms used to describe operational processes include manufacturing, production, engineering, and plant floor. [16] The application of Six Sigma methodologies to transactional process improvement efforts is not as well documented as its operational counterpart. A transactional process can broadly be defined as any function of a company not directly involved in producing tangible goods. Other terms used to describe transactional processes include service, commercial, non-technical, support, and administrative. [16] The disparity in the number of published Six Sigma case study examples between operational and transactional processes - only a small number of transactional case studies were available - is one primary driver for this study.

Transactional processes exhibit characteristics that make the application of Six Sigma methodologies more challenging. Transactional processes are typically invisible work processes with evolving workflows and procedures, often lack facts and data, and typically do not have specifications against which capability can be measured. [16] Given these challenging characteristics, transactional process improvement is not impossible. Companies like General Electric, Allied Signal, and Motorola have been reported as succeeding in their Six Sigma efforts around transactional processes. However, the majority of transactional activities seem not to have been touched by the Six Sigma methodology.

The second driver of this study stems from the opportunity to present the results of a process improvement effort using Six Sigma methodologies. Typically, Six Sigma projects share some common characteristics: a gap exists between current and desired process performance, they are process-focused and include complex relationships, and process improvement solutions are not easy and clear. [16] The essence of these characteristics is captured via goals and parameters in what is usually called the Six Sigma Project Charter.

The project selected for presentation in this paper is inventory optimization. The project charter was co-written by Corporate Manufacturing Directors who had overall responsibility for the performance of the production facility studied. The project charter was not well defined as it lacked adequate metrics and failed to provide any insight into potential constraints or assumptions surrounding achievement of the project goal. The presentation of this transactional process improvement effort will follow closely the process improvement roadmap recommended by many Six Sigma consultants and advocated by the company represented in this case. [10][12] This roadmap can be summarized by the acronym DMAIC - Define, Measure, Analyze, Improve, and Control (see Figure 1-1 on page 5). The project discussed in this paper will demonstrate and question the application of Six Sigma tools to a transactional process improvement effort. Following the description of each DMAIC step, an analysis will address the content relative to the business case and offer opinions and recommendations concerning the Six Sigma tools used. This approach will provide readers basic insight into the tools that may or may not work for other transactional processes and provide specific application examples used in this business case.

2.1 Initial Project Data Gathering

The initial project charter developed by management indicated their collective understanding of semi-finished inventory was inadequate to identify the level of inventory required to protect against supply and demand variability and the level of inventory that is an inherent result of the process performance. Those responsible for identifying semi-finished inventory reduction as a Six Sigma project also seemed to believe that semi-finished inventory was a process. As data gathering progressed, the project team concluded that inventory is an outcome of several processes. Therefore, the first challenge of the Six Sigma project team was to gather data relating to the processes that contribute to the outcome defined as semi-finished inventory. Gathering data for this project included defining the sources of semi-finished inventory data, categorizing and segmenting the data, and attempting to correlate process effects to this inventory asset. In addition to gathering data, a secondary objective was to evaluate existing inventory measures and then select and/or develop measures that would best detect progress in achieving and sustaining the project goals.

Defining the sources of semi-finished inventory data entailed a review of the systems and software programs used to record and report inventory transactions. This review was not meant to re-validate the processes associated with recording inventory or to measure the capability of this process. The purpose of this exercise was to verify that the sources of information could confidently be used as an input source for measuring performance. Both the systems and software code had previously been validated and documented by the Information Technology group. The inventory balance integrity was reviewed using physical inventory cycle count information. The activity of cycle counting compares the computer system balances with the physical floor location balances. This comparison is reported as a percentage and as absolute adjustment dollars. (Additional physical inventory accuracy data will be presented in greater detail in the Multivariate Analysis section.) A software review was completed on an inventory usage program critical to calculating a Days-of-Stock (DOS) measure. The data associated with material usage is derived from the plant production reporting process. Specific operation codes are used to report and categorize machine time, labor time, material consumption, and production output. The Information Technology group had previously validated the material usage program. A random sampling of data was used to re-confirm data integrity. The data sources to be used for reporting inventory data were validated and deemed reliable for use in inventory measurement systems. Semi-finished inventory data is archived in a database by week going back three years. Other information such as primary work centers, market codes, analyst codes, material forms, last material activity date, and usage data is available in other database tables and can be linked to the inventory database via a common Item Master table.

Categorizing and segmenting the inventory data assisted in proving or disproving previously held assumptions about inventory distribution and served as a means for analyzing the inventory from various perspectives. The ability to view inventory using a variety of Pareto techniques was accomplished by using Microsoft Query to access the previously validated inventory databases, Microsoft Excel to organize and view the data, and MINITAB™ to analyze the data statistically.

The final phase of data gathering consisted of gaining an understanding of the current state of this process outcome called semi-finished inventory. Historically, the only measure used to monitor semi-finished inventory levels was the dollar value in stock. The inventory dollar value failed in most instances to describe the performance of a process. When customer service levels were high for a sustained period of time, inventories were scrutinized for reduction even though inventory metrics like days-of-stock and inventory dollars were meeting expectations. When service levels were deemed too low, inventory levels were scrutinized for mix instead of recognizing the contributions of demand variability or short-term, intermittent capacity constraints. The project team concluded that inventory measured in terms of days-of-stock would be the primary measure used to represent the impact of process improvement efforts. The measure of inventory dollars would be used as a secondary process measure. Figure 2-1 represents an example of the baseline semi-finished inventory days-of-stock Individuals and Moving Range (I-MR) chart and the project team's first attempt at measuring the current state of semi-finished inventory performance.

[Figure 2-1 is an I-MR chart of weekly semi-finished inventory days-of-stock. Individuals chart: mean = 16.06, UCL = 17.05, LCL = 15.07, with one point flagged out of control below the LCL. Moving range chart: R-bar = 0.3714, UCL = 1.214, LCL = 0.]

Figure 2-1. Semi-Finished Inventory Days-of-Stock Baseline Measure
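For readers unfamiliar with I-MR charts, the limits in Figure 2-1 follow the standard individuals-chart formulas: mean ± 2.66 × average moving range, with D4 = 3.267 for the range chart. The sketch below uses made-up days-of-stock data to show the computation; applying the same formulas to the chart's reported mean (16.06) and average moving range (0.3714) reproduces its limits of roughly 17.05 and 15.07.

```python
# Sketch of how I-MR control limits like those in Figure 2-1 are derived.
# The constants 2.66 (= 3/d2, with d2 = 1.128 for subgroups of size 2)
# and 3.267 (D4) are standard; the days-of-stock data below are made up.
import numpy as np

dos = np.array([16.1, 15.9, 16.3, 16.0, 15.8, 16.2, 16.4, 16.1])

mr = np.abs(np.diff(dos))        # moving ranges |x[i] - x[i-1]|
mr_bar = mr.mean()
x_bar = dos.mean()

i_ucl = x_bar + 2.66 * mr_bar    # individuals chart limits
i_lcl = x_bar - 2.66 * mr_bar
mr_ucl = 3.267 * mr_bar          # moving range chart UCL (LCL = 0)

print(f"I chart:  mean = {x_bar:.2f}, UCL = {i_ucl:.2f}, LCL = {i_lcl:.2f}")
print(f"MR chart: MR-bar = {mr_bar:.3f}, UCL = {mr_ucl:.3f}")
```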

At this stage of the project, it is too early to infer any quantitative improvement information pertaining to the processes contributing to the level of semi-finished inventory. The baseline measurement data led the project team to observe two interesting phenomena: the semi-finished inventory days-of-stock metric appears to be trending upward, and the first data point is out of control relative to the lower control limit. This kind of general analysis can be useful in gaining insight into the current state of performance, determining a more realistic process improvement goal, and deciding the type of statistical tools to use.

The inputs that may have contributed to the days-of-stock measurement results could include factors such as: forecast error, inventory builds, and constraint equipment protection. These preliminary measurement observations will be used to identify Measurement System Analysis issues later on in the Analyze phase of the DMAIC process.

2.2 Identification of Assumptions

This section presents four key assumptions relating to this study and the business case. One key assumption addresses the question to be answered by this paper: can Six Sigma Methodology be successfully applied to transactional processes? The remaining key assumptions pertain specifically to the process improvement effort of this business case.

The first assumption is that tools exist within the Six Sigma methodology that can be successfully applied to this transactional process improvement effort. The company represented in the business case had very little experience in using Six Sigma to improve transactional processes, and there is a minimal amount of published research to support this assumption.

The second assumption is that there is an opportunity to reduce the amount of cash invested in the semi-finished inventory category. Perhaps an opportunity exists for optimization of the inventory, but an optimized inventory may not lead to inventory reduction, just a redistribution of the assets.

The third assumption is that there is sufficient Information Technology infrastructure available to support implementation, measurement, and control of the project solution. Information Technology infrastructure includes hardware, software, and programming resources.

The final assumption is that there are sufficient personnel resources to support the Six Sigma project. The faster projects are generated and the more people involved, the fewer resources are available to staff the project teams. Six Sigma project leaders also require a commitment from management to have some portion, or all, of their current responsibilities reassigned to other employees so they can devote sufficient time to the project.

2.3 Process Definition

The purpose of the Six Sigma Define phase is to seek an understanding of the process, including: identifying the process problem, determining the project goal, and (if applicable) identifying the customers to be impacted by the process. The initial project direction is typically set by the management team in the form of a project charter. A good project charter includes (at a minimum) a statement of the problem, a statement of the goal, and a summary of constraints and assumptions. The project is typically aligned with a critical business strategy and, when possible, includes definition of the customer specifications or process control limits.

2.3.1 Process Map

Although simplistic by nature compared to many other Six Sigma tools, the process map is among the most essential project tools of Six Sigma. A process map is a pictorial representation of the steps in a given process. The steps are presented graphically in sequence so team members can examine the order presented and arrive at a common understanding of how the process operates. Some of the most enlightening information leading to process improvement comes from the actual process map creation sessions, as cross-functional team members begin to hear about how work is done and the process is managed in other parts of the business. [16] The process map serves as a primary building block for the input variables to the Cause and Effects Matrix and the Failure Mode and Effects Analysis. The desired results of process mapping are to identify systems needing measurement studies, process step disconnects, bottlenecks, redundancies, and potential non-value-added process steps. What makes these results possible is the classification of the key input variables. Input variables are classified as controlled, uncontrolled, and critical. Controlled inputs are input variables that can be changed and have a direct and obvious effect on the output variables.

Uncontrolled inputs are input variables that also impact the output variables but are difficult or impossible to control. Minimal effort should be spent in dealing with uncontrolled input variables since the return on investment in time is very low. Critical inputs are input variables that have been statistically shown to have a major impact on the performance of the output variables. Critical input variables may be controlled or uncontrolled in the current process flow and are typically defined using the Cause and Effects Matrix and Failure Mode and Effects Analysis. A criticism of Six Sigma is that its process improvement structure can paralyze an organization from implementing obvious and less complicated solutions. [9] The project team agreed that improvement opportunities defined as easy to identify, quick to implement, and having controllable solutions would not be delayed by adhering to all of the Six Sigma process steps. The team obtained approval from process owners to proceed with this approach with the understanding that a control plan would be developed and implemented to manage process performance. High-level and detailed process maps were developed to facilitate communication with various levels of management and process owners. The project team decided early on that the maps would be kept as uncomplicated as possible. To accomplish this end, the maps use a minimal number of symbols and include descriptive labels to emphasize important flows of data or physical inventory. Color was also used in the maps to identify transitions between different segments of the manufacturing planning processes. Additional detailed process map work included identifying the critical inputs and outputs for each process step and a determination of whether each input is controlled or uncontrolled.

Figure 2-2 presents an example of the high-level process map, and Figure 2-3 is an example of the critical inputs and outputs for the Material Requirements Planning process.

Figure 2-2. Manufacturing Planning and Control High Level Process Map


Figure 2-3. Process Inputs and Outputs

Although tedious at times, the Six Sigma process mapping exercise produced beneficial learning. The completion of the exercise led to the following first impressions of the project focus and boundary:

1. Semi-finished inventory is not a process but the result of many processes.

2. The process improvement approach will be horizontal (across a resource or resources) versus vertical (through a product line).

3. There is an apparent lack of control in the Material Requirements Planning process.

4. An opportunity exists to synchronize dependent operations of constrained resources if a constraint-based plan can be implemented.

5. Planning parameters and scheduling rules have an influence at each planning and scheduling process step.

6. The current project charter is much too broad in definition and must be reduced to a more manageable focus.

The Six Sigma process mapping approach is very similar to classical flowcharting. Based upon personal experience, the activity of process mapping is value-added in gaining insight into how a process works. Six Sigma Methodology does add a dimension to conventional process mapping that enhanced the ability of this project team to analyze the process. The identification and documentation of the critical process inputs and outputs for each process step provided a higher level of understanding around the identification of areas where variability may have the greatest potential impact on process performance.

2.4 Process Measurement

The purpose of the Measurement Phase is to pinpoint the location or source of variation by building a deeper understanding of existing process conditions and problems. That knowledge will assist in narrowing the range of potential causes to investigate in the Analyze Phase. The key tools this project team used in the Measurement Phase included:

- Cause and Effects Matrix
- Data Collection Planning
- Measurement System Analysis

The desired outcomes for the Measurement Phase included:

- Definition and prioritization of critical inputs
- Definition of measurement systems
- Definition of baseline process capability
- Documentation and communication of charter revisions (as necessary)

2.4.1 Cause and Effects Matrix

The Cause and Effects (C&E) Matrix is not a new tool to process improvement. The C&E Matrix has also been called the fishbone or Ishikawa Diagram. Kaoru Ishikawa (1969) is credited with developing and using the cause and effects methodology in the 1960s. Figure 2-4 depicts a basic Cause and Effects Diagram.

Figure 2-4. Basic Cause and Effects Diagram

The Cause and Effects (C&E) Matrix is a graphical tool used to explore and display opinions about sources of variation in a process. Its purpose is to arrive at a few key sources that contribute most significantly to the problem being examined. These sources are then targeted for improvement. The C&E Matrix also illustrates the relationships among the wide variety of possible contributors to the effect. The conclusions reached from the C&E Matrix exercise feed directly into the Failure Mode and Effects Analysis (FMEA).

The main possible causes or effects of the problem are identified and then categorized. The "Four M" categories are typically used as a starting point. The "Four Ms" can be defined as follows: [16]

- Materials: consumables or raw inputs used in the process
- Machines: equipment, including computers and non-consumable tools
- Manpower: those who participate in and/or affect the process
- Methods: procedures, processes, work instructions

Different category names can be chosen to fit the process problem, or these general categories can be revised. Six Sigma consultants recommend the use of three to six main categories that encompass all possible influences. [16] Brainstorming is typically done to add possible causes to the main effect and more specific causes to those causes. This subdivision into increasing specificity continues as long as the problem areas can be further subdivided. The practical maximum depth of this diagram is usually about four or five levels. [18]

The C&E Matrix builds on the work completed in the Cause and Effects Diagram by assigning ratings of importance to both the process inputs and outputs. The first step in constructing a C&E Matrix is to list the key output variables horizontally on the C&E Matrix grid. The selection of critical outputs for the C&E Matrix is derived from a combination of the critical outputs identified in the project charter and any additional outputs the project team would like to ensure are not compromised by improvement efforts. A rating scale is used to determine the degree of importance of the critical outputs to process performance. The critical output rating scale for this business case ranged from a high ranking of 10 to a low ranking of 6; the higher the ranking value, the more critical the output variable. The resulting critical output scale was developed to differentiate the importance of each output and evolved through the consolidation and elimination of a list of output variables.

The key input variables that may cause variability or nonconformance to one or more of the key process output variables are then listed vertically on the left side of the C&E Matrix. The input variables identified for this project were assigned one of four possible values: a value of 9 for inputs having a significant or strong impact on the output, a value of 3 for inputs having a moderate impact, a value of 1 for inputs having a weak impact, and a value of 0 for inputs having no impact. The scale used for rating the effect of the input variables was developed to differentiate the importance of each input on an output. Six Sigma consultants generally recommend a scale of 0, 1, 3, 5 or 0, 1, 3, 9. [16][18]

The next step is to determine the result for each process input variable by first multiplying each key process output priority by the consensus rating of the input's effect on that output and then summing the products. Each input is scored independently relative to each output. A low rating number indicates that changes in the input variable are perceived to have a small effect on the output variable; a high rating indicates changes in the input variable can greatly affect the output variables. The final version of the C&E Matrix should contribute toward reducing the critical inputs from the trivial many to the vital few. As more is learned about the process, we begin to deduce which inputs can be filtered out because they appear to have little or no effect on the desired outcome or output y. The key process input variables can then be prioritized by their summed products and/or by using a percentage-of-total calculation.

Our project team struggled in two distinct areas during the construction of the C&E Matrix. The first area of debate centered on the list of key outputs. The level of semi-finished inventory and customer service were listed as critical outputs in the original charter. The project team concluded the solutions this project generated to reduce inventory could not be implemented without considering more than just semi-finished inventory and service. Ignoring other outputs could result in a sub-optimized solution whereby semi-finished inventory is reduced at the expense of another critical business goal. For example, if the number of changeovers is increased for a resource and production lot sizes are reduced as strategies to reduce inventory, we could create a capacity constraint, decrease productivity, and increase costs. The resource may no longer be able to support demand because of the extra changeover time and reduced run time and may require overtime work to meet demand.

Avoiding the potential for sub-optimization required consideration of two additional critical outputs. We labeled these outputs as Capacity Impact and Raw Material & Finished Good Inventory Level. In addition to avoiding potential sub-optimization, consideration of these critical outputs brought to light the conflict between cost and cash as we focused on ways to optimize semi-finished inventory. Relative to production resources, the cost-versus-cash conflict can be restated as the conflict between efficiency and flexibility. The team realized that as solutions to optimize inventory were developed, the relationship between efficiency (cost savings) and cash improvements (flexibility) would be a pivot point for measuring the impact of the solution. The addition of these two critical outputs made the development of the Rating of Importance scale and the assignment of a rating number much more challenging.

The next area of debate focused on the consolidation or elimination of nonessential inputs. It was my observation that this exercise can be hindered somewhat by having a cross-functional team. Team members not close to the process tended to put more stock into inputs having little or no influence on the performance of the process and sometimes failed to understand the relationship or commonality between some inputs. The significant difference between initial versions and the final version of the C&E Matrix was the consolidation of the process inputs as well as the elimination of process inputs that had no quantifiable effect on the critical outputs. This effort was essential to reducing the project to a manageable and meaningful level. Figure 2-5 provides the final version of the C&E Matrix for the semi-finished inventory optimization project.
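As an illustration of the scoring arithmetic just described (multiply each output's importance weight by the input's 0/1/3/9 relationship rating, then sum across outputs), the sketch below uses this project's four output weights; the input rows are invented examples, not rows from the actual 64-input matrix.

```python
# Sketch of the C&E Matrix arithmetic. The output weights (10, 10, 8, 6)
# match this business case; the input rows below are invented examples,
# not the project's actual inputs.
output_weights = {
    "SF Inventory Level": 10,
    "On Time & In Full Delivery": 10,
    "Capacity Impact": 8,
    "RM & FG Inventory Level": 6,
}

# ratings[input] lists the 0/1/3/9 relationship to each output,
# in the same order as output_weights above.
ratings = {
    "Planning parameters": [9, 3, 9, 3],
    "Make-to-stock forecast error": [9, 9, 3, 9],
    "Production sequencing rules": [3, 1, 9, 1],
    "Operator vacation schedule": [0, 0, 1, 0],
}

weights = list(output_weights.values())
scores = {name: sum(w * r for w, r in zip(weights, row))
          for name, row in ratings.items()}

# Rank the inputs; a Pareto of these totals separates the vital few
# from the trivial many.
total = sum(scores.values())
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:4d} ({score / total:5.1%})  {name}")
```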

[Figure 2-5 presents the final C&E Matrix results as a tiered Pareto chart of input scores (Tiers 1-4, with a cumulative-percentage line). The output variables (y's) and their importance weights were: SF Inventory Level (10), On Time & In Full Delivery (10), Capacity Impact (8), and RM & FG Inventory Level (6). Relationship scores: 9 = strong, 3 = medium, 1 = weak, 0 = none. The grouped inputs (x's) and their total scores were: constrained/unconstrained resource, schedule sequencing rules within resource, planning parameters (minimums, multiples, pallet quantity), part buffer style (e.g., lot-for-lot, consolidation, time buffer), and MPS parameters (lead time fence, safety stock) - 306; make-to-stock forecast error and make-to-order demand variability - 270; yield (planning for waste) and production execution feedback (schedule attainment or supply variability) - 246; and operating expense policy, planned crewing and coverage, capacity planning feedback to MPS, planning rates, utilization (crewing and coverage, constraint-anchored planning), and resource downtime (planned/unplanned) - 150.]

Figure 2-5. Cause and Effects Matrix

From the process mapping activity, a total of 64 inputs were identified and carried forward to the C&E Matrix. Of the 64 inputs, 16 were determined to be critical to performance based upon their assigned rating values in the C&E Matrix prioritization activity. The net result was the classification of the 16 inputs into four distinct tiers that accounted for 83% of the total score. The tiers were created based on the C&E Matrix score and the combination of process steps with shared inputs. An example of a shared critical input was planning parameters, which were identified as an input to both the Master Production Schedule and the Material Requirements Planning process. Tier 1 received a total rating score of 306 and included four process steps and six critical inputs. These critical inputs were summarized as: planning and scheduling parameters; unconstrained resource capacity; and production sequencing rules. Tier 2 received a total rating score of 270 and included two process steps and two critical inputs. Critical inputs for Tier 2 were summarized as: make-to-stock product forecast error and make-to-order product demand variability. Tier 3 received a total rating score of 246 and included two process steps and two critical inputs. The critical inputs for Tier 3 included planning for process waste and production execution (supply variability). Tier 4 received a total rating score of 150 and encompassed five process steps and six critical inputs. Critical inputs for Tier 4 included: operating expense policy; resource crewing & coverage; capacity planning feedback to MPS; planning rates of production; utilization (crewing & coverage, constraint-anchored planning); and resource downtime (planned/unplanned).

The C&E Matrix can be a very helpful tool in narrowing the focus of the improvement effort by identifying the input variables perceived to be critical to process output performance. One shortcoming of using the C&E Matrix for this process improvement effort is that it was invented by and for people involved in operational process improvement. Operational processes tend to have simpler and more linear causal structures (i.e., Process Step A → Process Step B → Process Step C → Process Step D). But many transactional processes are not so simple and do not follow a repetitive feedback loop. An example of a non-repetitive feedback loop is when Process Step A causes both Process Step B and Process Step C, but Process Step C falls in a different category than Process Step B. When categorizing Process Step A, it is difficult to determine where it should be placed on the C&E Matrix.
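The C&E Matrix arithmetic behind the tier scores is straightforward: each input's score is the sum of its relationship ratings weighted by the Rating of Importance of each output. The sketch below illustrates the calculation with two hypothetical inputs and simplified ratings; it is not the project's actual 64-input matrix.

# Sketch of C&E Matrix scoring: illustrative inputs and ratings only,
# not the project's actual 64-input matrix.

# Rating of Importance for each critical output (Y), from Figure 2-5.
importance = {
    "SF Inventory Level": 10,
    "On Time & In Full Delivery": 10,
    "Capacity Impact": 8,
    "RM & FG Inventory Level": 6,
}

# Relationship ratings (9=Strong, 3=Medium, 1=Weak, 0=None) for two
# hypothetical inputs against the four outputs, in the same order.
inputs = {
    "Planning parameters": [9, 9, 3, 9],
    "Resource downtime": [3, 3, 9, 1],
}

# Each input's score = sum(importance x relationship) across outputs.
weights = list(importance.values())
scores = {
    name: sum(w * r for w, r in zip(weights, ratings))
    for name, ratings in inputs.items()
}

# Rank inputs so the highest-scoring (most critical) come first.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")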

2.4.2 Data Collection Plan

The process map and C&E Matrix focused the process improvement effort by separating the vital few process inputs from the trivial many. In order to validate specific process improvement observations, a data collection plan was needed to ensure any data collected around a process change would reflect a response to that change. This included data that described the problem being studied and related conditions that might provide clues about causes, and that could be analyzed in ways that answer questions about the input measured. [16] The data collection plan focused on three primary measurement categories, stated as follows:

1. Current Inventory State: The Current Inventory State represents the baseline performance of semi-finished inventory as measured in days-of-stock. It serves as the benchmark against which future process improvement efforts resulting from this project will be measured. This inventory state was measured as the actual semi-finished inventory days-of-stock over time employing the current planning model (material constrained planning model with current planning parameters).

2. Optimal Inventory State: The Optimal Inventory State represents the best possible semi-finished inventory performance as measured in days-of-stock. This inventory state was projected using a manufacturing model simulation. The purpose of defining the Optimal Inventory State was to create a vision of the potential improvement that is possible. This inventory state was measured as the projected days-of-stock employing a material and capacity constrained planning model for all production resources using the following planning parameters:

a. Current SKU quantity and time buffers
b. Current Gateway SKU jumbo multiples

3. Goal Inventory State: The Goal Inventory State defines the expected outcome from the implementation of the improvement actions. The original project charter defined the goal as a $2 million reduction in semi-finished inventory. As specific improvement activities are identified, the original project goal may need modification. (The degree of modification may depend on the data used to create the original charter and the process knowledge of the project sponsors.) This inventory state was measured as the projected semi-finished inventory days-of-stock over time employing a material constrained plan for all resources and a capacity constrained plan for selected gateway resources using the following planning parameters:

a. Current SKU quantity and time buffers
b. Current Gateway SKU jumbo multiples
c. Current downstream SKU multiples

The project team hypothesized that the average inventory differences resulting from comparing the three inventory measurement states would not only provide a means for measuring the effect of process change, but also assist in further clarifying the project Entitlement and the project Goal. Analysis of the three inventory states will be discussed in greater detail in Section 2.4.3. The primary questions the team strived to answer using the data collection plan were: Do the observations developed from the FMEA exercise represent an opportunity for inventory optimization? If so, how much is the opportunity worth?

2.4.3 Measurement System Analysis

Measurement System Analysis (MSA) is used to assess the statistical properties of process measurement systems. Measurement systems can include collection procedures, gages, and other test equipment used to collect data for analyzing process problems. The purpose of the MSA is to ensure or validate the quality of the process measurement system. The analysis should include design and certification, control, capability assessment over time, and repair and re-certification. [18] The goal is to pinpoint the location or source of problems as precisely as possible by building a factual understanding of existing process conditions and problems. The knowledge acquired from the MSA will help narrow the range of potential causes needing investigation in the Analyze phase of the Six Sigma DMAIC model. For operational processes, measurement variance is typically defined through assessment of the statistical properties of repeatability, reproducibility, bias, stability, and linearity. Collectively this assessment is referred to as a Gage Repeatability and Reproducibility (Gage R&R) study. [18] The following equation is often used as a simplified representation of process variability and tolerance spread:

σ²T = σ²P + σ²M

where: σ²T = Total Variance; σ²P = Process Variance; σ²M = Measurement Variance
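As a numerical illustration of the relationship above (the standard deviations here are invented for the example), the process variance is what remains after subtracting the measurement variance from the total:

import math

sigma_total = 2.0        # observed (total) standard deviation
sigma_measurement = 0.5  # measurement system standard deviation

# sigma_T^2 = sigma_P^2 + sigma_M^2, so the process variance is the
# total variance less the measurement variance.
sigma_process = math.sqrt(sigma_total**2 - sigma_measurement**2)

# Share of the observed variance contributed by measurement error.
measurement_share = sigma_measurement**2 / sigma_total**2

print(f"sigma_P = {sigma_process:.3f}")                 # 1.936
print(f"measurement share = {measurement_share:.1%}")   # 6.2%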

In order to conduct a Gage R&R study the following characteristics are essential:

- The data must be in statistical control; that is, the variation from the measurement system is from common causes only and not special causes.
- Variability of the measurement system must be small compared with both the manufacturing process and specification limits.
- Increments of measurement must be small relative to both process variability and specification limits.

A substantial amount of additional information is available relating to Gage R&R studies that will not be covered here. The purpose of introducing Gage R&R in this paper is to provide background that establishes a basis for understanding why it was not applied to this transactional process. The Repeatability part of Gage R&R addresses the variation between successive measurements of the same part, for the same characteristic, by the same person using the same instrumentation. Reproducibility attempts to capture the difference in the average of measurements made by different people or operators using the same or different instruments to measure the same characteristic. A Gage R&R study could have been applied to the production reporting aspect of inventory and material control. Production reporting is the process of recording the input and output of labor and material resulting from production activity. Inaccurate production reporting could directly affect the accuracy of inventory balances and inventory usage, and errors in production reporting can result in inventory performance measurement errors.

An indicator of production reporting accuracy is inventory cycle count performance. If the value of inventory adjustments resulting from reporting errors is low, the effect of production reporting errors on inventory measurement is low. The average cycle count adjustment value for a one-year period (January 2001 through December 2001) was $23,000. This value included all inventory classifications (semi-finished, finished goods, packaging, and raw materials). The average adjustment value was 0.10% of the total average inventory value for the same measurement period. Cycle count accuracy was deemed to have virtually no effect on inventory measurement accuracy and was not investigated any further (cycle count performance is presented in greater detail in Section 2.5.2). Since the primary and secondary measures for this business case did not rely on operators to measure and record data, repeatability and reproducibility were judged to be irrelevant. Given that a Gage R&R study was not applicable to the measurement system for this business case, the project team focused on the following MSA areas:

- Definition of the type of measurement information that would best represent the process
- Certification of the design of how measurement data is recorded and reported
- Assessment of the statistical stability of the measurement systems
- Definition and assessment of process capability

The definition of the type of measurement information that would best represent the process entailed questioning whether inventory in cost dollars was the right aspect of the process outcome to measure. The project team concluded this measure would not indicate whether the level of inventory was optimal: inventory cost dollars could be lower than in previous time periods but, relative to usage, the inventory could be much less active (slower moving). An inventory days-of-stock (DOS) metric was therefore added as the primary measure of inventory optimization. The initial MSA design and certification effort for this business case focused on ascertaining whether the data generated for calculating the Current Inventory State was reported from a reliable source, conformed to the operational definitions established by the data collection plan, and was stable. The data collected for measuring the Current Inventory State was generated from preexisting software programs used to record and report inventory transactions. The inventory transaction reporting system and software code had previously been validated and documented by the Information Technology group. A random sampling of stock keeping units was validated by comparing the live inventory system data with the reported data. The current inventory data collection system was deemed valid. Unlike operational processes, where data can be recorded without influencing the performance of the process, the inventory states were manipulated using a simulation model in order to generate data. The material requirements planning data from the live planning model were copied to a simulation model, regenerated with model parameter changes, and the inventory results were recorded and reported using Microsoft Query and Microsoft Excel respectively.
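The thesis does not spell out the exact days-of-stock formula; the sketch below assumes the common definition of on-hand inventory value divided by average daily usage value, with illustrative numbers.

# Days-of-stock (DOS) sketch. Assumes the common definition of on-hand
# inventory value divided by average daily usage value; the thesis does
# not state the exact formula used.

def days_of_stock(inventory_value: float, usage_value: float, period_days: int) -> float:
    """Return days-of-stock given inventory value and usage over a period."""
    avg_daily_usage = usage_value / period_days
    return inventory_value / avg_daily_usage

# Illustrative numbers: $1.2M of semi-finished inventory against
# $525K of usage over a 7-day week.
dos = days_of_stock(inventory_value=1_200_000, usage_value=525_000, period_days=7)
print(f"{dos:.1f} days of stock")  # 16.0, in the range shown in Figure 2-6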

Process control charts were used to assess the statistical stability of the simulation data for each inventory state. Statistical instability is defined as having an unnatural pattern or data points outside of the control limits. Typically a pattern is defined using out-of-control rules or conditions. For example, the I-MR chart shown in Figure 2-6 indicates one point, labeled with a 1, is more than 3 sigmas from the average. As expected, the simulation inventory states included data points outside the control limits as projected inventory improvements from the baseline were realized based upon the effect of the simulation parameters. The I-MR charts for each inventory state are provided in Figures 2-6, 2-7, and 2-8.

[Figure 2-6 (I-MR chart): individuals chart with UCL = 17.05, mean = 16.06, LCL = 15.07 (one point flagged as more than 3 sigmas from the mean); moving range chart with UCL = 1.214, R-bar = 0.3714, LCL = 0.]

Figure 2-6. Current Inventory State Baseline I-MR Chart

[Figure 2-7 (I-MR chart): individuals chart with UCL = 14.57, mean = 13.89, LCL = 13.20 (several points flagged out of control); moving range chart with UCL = 0.8426, R-bar = 0.2579, LCL = 0.]

Figure 2-7. Goal Inventory State I-MR Chart (simulation)

[Figure 2-8 (I-MR chart): individuals chart with UCL = 12.55, mean = 11.44, LCL = 10.33 (multiple points flagged out of control); moving range chart with UCL = 1.359, R-bar = 0.4158, LCL = 0.]

Figure 2-8. Optimal Inventory State I-MR Chart (simulation)
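The control limits in Figures 2-6 through 2-8 are consistent with the standard I-MR constants (2.66 for the individuals chart and 3.267 for the moving range chart). Assuming those constants were used, the sketch below reproduces the Figure 2-6 limits from its reported mean and average moving range.

# Standard I-MR control limit constants: 2.66 = 3/d2 (d2 = 1.128 for
# subgroups of size 2) for the individuals chart, 3.267 = D4 for the
# moving range chart.

def imr_limits(mean: float, mr_bar: float):
    """Return (I-chart UCL, I-chart LCL, MR-chart UCL) for an I-MR chart."""
    i_ucl = mean + 2.66 * mr_bar
    i_lcl = mean - 2.66 * mr_bar
    mr_ucl = 3.267 * mr_bar
    return i_ucl, i_lcl, mr_ucl

# Mean and average moving range reported in Figure 2-6.
i_ucl, i_lcl, mr_ucl = imr_limits(mean=16.06, mr_bar=0.3714)
print(f"I chart: UCL={i_ucl:.2f}, LCL={i_lcl:.2f}")  # ~17.05 and ~15.07
print(f"MR chart: UCL={mr_ucl:.3f}, LCL=0")          # ~1.213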

A decision the project team struggled with early on was whether capability metrics could or should be used to represent the process. Many transactional processes are not conducive to having customer or process specifications assigned that would be meaningful for measuring the capability of the process. When meaningful customer or process specifications do not exist, other process performance metrics should be considered. The transactional process examples found in my research typically avoided capability metrics and used measurements such as cycle time and cost to define process performance. [18] An assessment was completed of the current measurement systems for the other critical outputs identified in the C&E Matrix that we did not want to negatively impact as a result of reducing semi-finished inventory. These measures were also the outcomes of transactional processes. Control charts were used to verify and assess the data, and a review of the data sources was completed. Service was defined as the percentage of order lines shipped on time. The data is available by product commodity by week. A customer order can be generated from the following sources: direct customers, distribution centers, or intra-company manufacturing plants. Customer orders were evaluated based upon a comparison of the customer need date versus the actual shipment date. If an ordered item was shipped on the customer need date, the line item was counted as a hit and assigned the value of 1. If an ordered item was shipped later than the customer need date, the line item was counted as a miss and assigned the value of 0. The total number of line item hits was divided by the total number of line items for the week and reported as the percent service. The service measure was necessary to ensure the improvements implemented to optimize inventory would not decrease the level of service provided to our customers. An np control chart was used to measure the number of defects (late order lines) per n samples (total order lines) per week. Capacity availability was measured in terms of the machine hours forecasted versus the total hours available. The forecasted machine hours are derived by dividing the forecasted demand quantity by the planning rate for each stock-keeping unit by resource. The planning rate represents the time required to set up and run the product. (The planning rate was updated quarterly based upon the average rate over the last six months of production history.) The capacity availability measure was necessary to ensure the improvements implemented to optimize inventory would not increase the amount of capacity needed to support demand. Performance measurement systems were already in place for the complementary critical outputs of service and capacity. Examples of the baseline control charts for service and capacity are provided in Figures 2-9 and 2-10 respectively.

[Figure 2-9 (np chart): weekly count of late order lines with UCL = 24.49, NP-bar = 13.53, LCL = 0.]

Figure 2-9. Service np Chart

[Figure 2-10 (I-MR chart): individuals chart with UCL = 51.63, mean = 16.25, LCL = -19.13 (one point flagged); moving range chart with UCL = 43.47, R-bar = 13.30, LCL = 0.]

Figure 2-10. Capacity Availability I-MR Chart
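For reference, np-chart limits of the kind plotted in Figure 2-9 follow from the average count of defective order lines and a constant weekly sample size. The sample size below is an illustrative assumption chosen so the upper limit lands near the figure's reported value; the thesis does not state the actual weekly order-line count.

import math

# np chart limits: np_bar +/- 3 * sqrt(np_bar * (1 - p_bar)).
def np_chart_limits(np_bar: float, n: int):
    """Return (UCL, LCL) for an np chart; the LCL floors at zero."""
    p_bar = np_bar / n
    half_width = 3 * math.sqrt(np_bar * (1 - p_bar))
    return np_bar + half_width, max(0.0, np_bar - half_width)

# Average of 13.53 late order lines per week (Figure 2-9); the weekly
# sample of ~1,000 order lines is an illustrative assumption.
ucl, lcl = np_chart_limits(np_bar=13.53, n=1000)
print(f"UCL={ucl:.2f}, LCL={lcl:.2f}")  # roughly 24.5 and 2.6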

The special cause test criterion used to flag out-of-control data points for the service control chart was defined as one point more than 3 sigmas from the centerline. A review by production commodity of each special cause was conducted on a weekly basis by the plant management team. Where possible, plans to address the special cause were developed and implemented. The special cause test criterion used to flag out-of-control data points for the capacity availability I-MR chart was likewise defined as one point more than 3 sigmas from the centerline. Resource capacity reviews were conducted on a weekly basis by each functional area. The review (by resource) included an analysis of capacity versus projected demand, historical rate performance, and historical schedule attainment performance. The data used for reporting and measuring customer service was validated by comparing the measurement data with the actual customer order shipment history for a sample of stock-keeping units across the highest sales volume product commodities. The data used for reporting and measuring capacity was validated by comparing the measurement data with actual production reports for a sample of stock-keeping units across each gateway resource. Although a direct correlation does not always exist between inventory performance and either service performance or capacity availability, the process owners felt more comfortable including these secondary metrics in the monitoring of the process changes resulting from this project.

Whether a process is operational or transactional, MSA techniques strive to identify the contribution of measurement error to the perceived variability of the process. That being said, there were Six Sigma tools within the MSA that were more difficult to apply to a transactional process. For an operational process, measurement variation can be more easily traced to a specific resource and the tools, materials, work methods, and environment surrounding that resource. A transactional process is subject to a variety of internal and external influences and presents a much greater challenge in identifying and modeling those influences. Transactional and operational processes appear to be similar in that their measurement variation can be influenced by factors pertaining to work methods, the environment, process rules, and customers. For those new to Six Sigma, the overwhelming urge to apply MSA tools that may not fit must be avoided. An improperly used MSA tool, like the Gage R&R, may not only be worthless, but may unnecessarily damage the credibility of a very effective measurement system.

2.5 Process Analysis

The purpose of the Process Analysis phase is to begin understanding the relationships between the process inputs and outputs and to identify potential sources of process variability. The key steps in this phase were to:

- Complete the Failure Mode and Effects Analysis
- Complete the Multivariate Analysis
- Define and complete Design of Experiments (DoE)

The desired outcomes of this phase included:

- Reduce the number of process input variables to a manageable number
- Determine high-risk input variables from the FMEA
- Determine relationships between process inputs and process outputs
- Charter revisions (as necessary)
- Improvement strategy

2.5.1 Failure Mode and Effects Analysis

The primary objective of the Failure Mode and Effects Analysis (FMEA) is to identify and prioritize the ways a process can fail and to eliminate or reduce the risk of failure. Identification of process failures is crucial for enabling the team to improve the process in a preemptive manner, before failures occur. The inputs to the FMEA include the Process Map, C&E Matrix, process or product history, and process technical procedures. The outputs of the FMEA are a prioritized list of actions to prevent causes or to detect failure modes and a record of actions taken. The FMEA is not a new tool. It was first used in the 1960s in the aerospace industry during the Apollo missions, and was developed further in the 1970s by the Navy (documented in MIL-STD-1629) and in the automotive industry to address liability costs. [18]

The FMEA proved to be a very useful tool for our business case in validating the ratings of importance determined from the C&E Matrix. Although the output is for the most part subjective, the FMEA process structure and format facilitates the process improvement effort by directing the team to key input variables where multivariate studies would help define the impact of variability on the process. The FMEA document contains five major information categories pertaining to the most important process inputs as identified in the C&E Matrix. The first category is labeled the Potential Failure Mode. This category attempts to answer the question: What could go wrong in the process? Consideration is given to issues that could arise only under certain process operating conditions. An operational process failure example could include manufacturing equipment issues resulting from excessive temperature or high humidity. A transactional process failure could include customer service or inventory issues resulting from forecast inaccuracy. The second category of the FMEA is labeled the Potential Effect(s) of Failure. This category attempts to answer the question: What are the impacts of the failure occurring? Potential Effects of Failure can be isolated by understanding the impact of the input(s) on customer requirements, on downstream processes, or on related processes. The third category is labeled Potential Cause(s) of Failure. This category attempts to answer the question: What are the potential causes of this failure? The Cause indicates a design weakness that allows the Failure Mode to occur. The fourth category of the FMEA is the Current Controls section. This section attempts to answer the question: What are the existing controls and procedures (inspections and tests) that prevent either the Cause or the Failure Mode? Current controls can be existing methods or devices in place to prevent or detect Failure Modes or Causes. The final major category of the FMEA is the rating system that assigns the Risk Priority Number (RPN) based on Severity, Occurrence, and Detection. The RPN is the output of the FMEA. The RPN is a calculated number based on information provided in the assessment of the Potential Failure Modes, Effects, and the ability of the Current Controls to detect failures before they reach the customer or final output stage. The formula for calculating the RPN is as follows:

RPN = Severity * Occurrence * Detection

Severity measures the importance of the Effect on customer requirements. Occurrence measures the frequency with which a given Cause occurs and creates the Failure Mode. Detection measures the ability of the current control scheme to detect or prevent the failure. Most project teams use an RPN rating scale numbered 1 through 5 or 1 through 10, depending on the necessity and ability to differentiate. Our project team agreed to use a scale of 1 through 5; a scale of 1 through 10 could not provide any additional differentiation for this project. Figure 2-11 represents the FMEA Detection Rating Scale assembled for this business case.

[Figure 2-11 (table): FMEA rating scale, rated from 5 (worst) down to 1.

Rating 5 - Severity: significant contribution to excess SF inventory and poor service performance. Occurrence: Very High, failure is almost inevitable. Detection: unable to detect.
Rating 4 - Severity: major contribution to excess SF inventory. Occurrence: High, repeated failures. Detection: remote chance of detection, or detection after the fact.
Rating 3 - Severity: minor contribution to excess SF inventory and major contribution to poor service. Occurrence: Moderate, occasional failures. Detection: low chance of detection.
Rating 2 - Severity: minor contribution to excess SF inventory. Occurrence: Low, relatively few failures. Detection: high chance of detection.
Rating 1 - Severity: no effect. Occurrence: Remote, failure is unlikely. Detection: almost certain detection.]

Figure 2-11. Failure Mode and Effects Analysis Detection Rating Scale
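A minimal sketch of the RPN arithmetic on the 1-through-5 scales above; the failure modes and ratings are hypothetical illustrations, not entries from the project's FMEA.

# RPN = Severity * Occurrence * Detection, each rated 1-5 per the
# scale in Figure 2-11. The failure modes and ratings below are
# hypothetical examples, not the project's actual FMEA entries.
failure_modes = [
    # (failure mode, severity, occurrence, detection)
    ("Time/quantity buffer set too large", 4, 5, 3),
    ("Schedule sequencing rule bypassed", 5, 2, 4),
    ("Inventory balance recorded incorrectly", 2, 2, 1),
]

# Rank failure modes by RPN so the riskiest surface first.
ranked = sorted(
    ((name, sev * occ * det) for name, sev, occ, det in failure_modes),
    key=lambda kv: -kv[1],
)
for name, rpn in ranked:
    print(f"RPN {rpn:3d}: {name}")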

The primary input to the FMEA is the C&E Matrix. These are the steps the project team followed to complete this project's FMEA:

1. Determined the ways in which the input could go wrong for the top eight inputs identified in the C&E Matrix (Failure Modes).
2. Determined the Effects of Failures on the customer and on process capability for each input Failure Mode.
3. Identified potential causes of each Failure Mode.
4. Listed the current controls for each Cause or Failure Mode.
5. Determined Severity, Occurrence, and Detection rating scales.
6. Assigned Severity, Occurrence, and Detection ratings for Effects, Causes, and Controls respectively.
7. Calculated the RPNs for each Failure Mode.
8. Developed a list of recommended actions to reduce or minimize high RPNs.

Figure 2-12 represents a summarized version of the FMEA for this business case:

[Figure 2-12 (chart): FMEA summary, ranked by RPN. Corrective action categories and failure modes: constraint-anchored planning (RPN 60; C&E Tier 1; failure mode: unconstrained make orders with quantity and need date; material available but no capacity); planning parameter review (RPN 48; failure modes: combinations of parameters too high or too low; time/quantity buffers too large, in the wrong location, or too many in the supply chain; minimum or multiple order quantity too high; consolidation style too long); sequencing rules (RPN 40; failure modes: changing priorities; incorrect input inventory balance; planner error; quality/rejects/higher waste; poor schedule attainment; unplanned downtime; families difficult to plan; supply/supplier delivery). The full FMEA also records SEV, OCC, and DET ratings for each entry.]

Figure 2-12. Failure Mode and Effects Analysis Summary Diagram

The FMEA exercise led the project team to the following conclusions:

1. The inputs associated with the material requirements planning and scheduling process contribute most to the outcome of semi-finished inventory. The variability in planning and scheduling can be described by the timing and/or quantity of supply and/or demand. The team summarized this relationship using Figure 2-13.

[Figure 2-13 (table): sources of Material Requirements Planning variability, by type.

Timing - Demand: requirements move from one period to another. Supply: material received/produced earlier/later than planned.
Quantity - Demand: requirements for more/less than planned. Supply: material received/produced for more/less than planned.]

Figure 2-13. Sources of Material Requirements Planning Variability

2. The defect of the planning and scheduling process is defined as variation in synchronization between producing and consuming resources caused by:

a. Sequencing rules within and between resources.
b. Production in excess of demand caused by a variety of factors including lot sizing, changeover rules, buffers, demand and supply variability, and planning errors.
c. Uncontrolled Material Requirements Planning resulting from unconstrained and unpredictable demand and supply plans.

3. The process improvement effort will be narrowed in focus to the two resources that contribute most to the flow of semi-finished inventories. These resources were termed the gateway resources. The gateway resources exhibit the following characteristics: the output of a single product can be transformed into several distinct products at downstream work centers; the number of end items is large compared to the number of input raw materials; and the equipment is generally capital intensive and highly specialized. [21] The product flow diagram exhibiting the characteristics of gateway work centers is shown in Figure 2-14.

[Figure 2-14 (diagram): product flow from Raw Materials to Gateway Work Centers, then Sub-assembly Work Centers, then Converting Work Centers, then Customers.]

Figure 2-14. Gateway Product Flow Diagram [21]

A unique feature of the gateway work centers chosen is their similar technological process capabilities. Of the 180 SKUs manufactured across both resources, more than half of them can be run on either resource. A second unique feature is the sharing of many of the same raw materials. A third feature is the sharing of many common resources including Focused Factory Management, supply chain analyst, maintenance, engineering support, and equipment operators.

The first improvement action identified from the FMEA work was to implement a constraint-anchored planning process. A planning and scheduling environment in which demand requirements are unconstrained and constantly variable (depending on material availability) creates unstable inventory planning. The purpose of constraining the plan is not only to stabilize and smooth production requirements but also to ensure material that is produced will be consumed (synchronization). Constraint-anchored planning generated the highest FMEA RPN score of 60. The second major improvement action was to understand the impact of planning parameters on the level of semi-finished inventory for the gateway resources. The key process inputs associated with this action item scored an FMEA RPN of 48. The foundations for this corrective action are as follows:

1. Parameters are currently evaluated from a resource view. A planning parameter model will be created to evaluate the effects of various planning parameters from a supply chain view.
2. A primary goal for this action is to understand the dynamic and complicated relationships between parameters (i.e., production frequency, minimums, multiples, lead times, time buffers, quantity buffers, etc.). Optimal Order Quantity logic will be tested and incorporated into the model to understand the cash versus cost tradeoff; a sketch of one standard formulation follows.
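The thesis does not state the Optimal Order Quantity formulation that was tested; the classic economic order quantity below is shown only as an assumed stand-in for how a lot-sizing rule trades setup cost against inventory carrying cost. The 11.5% carrying cost factor is the corporate figure cited in the parameter simulation model (Figure 2-22); the other numbers are illustrative.

import math

# Classic economic order quantity (EOQ): an assumed stand-in for the
# thesis's "Optimal Order Quantity logic," balancing setup (cost)
# against inventory carrying (cash). All inputs are illustrative.

def eoq(annual_demand: float, setup_cost: float, unit_cost: float,
        carrying_rate: float) -> float:
    """EOQ = sqrt(2 * D * S / H), with H = unit cost * carrying rate."""
    holding_cost = unit_cost * carrying_rate
    return math.sqrt(2 * annual_demand * setup_cost / holding_cost)

# 11.5% is the corporate carrying cost factor cited in Figure 2-22.
qty = eoq(annual_demand=250_000, setup_cost=800, unit_cost=2.50,
          carrying_rate=0.115)
print(f"Optimal lot size: {qty:,.0f} units")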

The final corrective action identified from the FMEA was to develop an optimal production sequence plan for the critical resources. The team labeled this change the Group Technology Planning and Scheduling process. The goal of this corrective action was to reduce the variability in production cycle frequency as well as optimize equipment changeover effectiveness. The FMEA is considered to be a working document. Once improvement efforts are identified to reduce the RPNs for critical inputs, the FMEA document may need to be revisited to verify the effect of the changes compared to the original RPNs. The improvement opportunities identified in the FMEA will be covered in greater detail as we proceed through the DMAIC process.

2.5.2 Multivariate Analysis

Multivariate Analysis is a technique that can provide insight into the relationships between key process input variables and key process output variables. Through graphical visualization of the input and output relationships, a great deal can be learned about the process without modifying it, and insight can be gained into where improvement efforts should be focused. Many statistical practitioners and Six Sigma consultants refer to Multivariate Analysis of Variance as MANOVA. MANOVA is a tool used to determine the significance of several factors on the performance of key output process variables. Other statistical tools, like the Chi-Square test, t-test, and Analysis of Variance (ANOVA), are available to analyze the significance of a single factor on the performance of the key output process variables. Regardless of whether a single factor or multiple factors are analyzed, this paper will refer to the results presented as Multivariate Analysis. Some controversy exists around whether Multivariate Analysis testing tools or process control charts are best for understanding the impact of key input variables on the performance of a process. Some authors argue that a control chart is "a perpetual test of significance" [20] and that process monitoring "resembles a system of continuous statistical hypothesis testing." [20] W. Edwards Deming wrote: "Some books teach that use of a control chart is test of hypothesis: the process is in control, or it is not. Such errors may derail a study... rules for detection of special causes and for action on them are not tests of a hypothesis that a system is in a stable state." [21] Deming also argued hypothesis testing was inappropriate in industry, where practical applications require analytical studies because of the dynamic nature of the processes, for which there is no well-defined finite population or sampling frame. [21] Regardless of the controversy over which tools to use in Multivariate Analysis, the overall concept can apply to transactional processes. For this business case, a combination of process control charts and box-and-whisker plots was used to evaluate the effect of specific variables on the process output of inventory. These tools were chosen primarily for their simplicity in use and function. Based upon the work completed in the FMEA exercise, four specific areas of variability were studied: inventory balance accuracy, demand variability, supply

variability, and schedule changes. Although inventory balance accuracy was deemed to be a key input variable for inventory, the team decided to verify this assumption via cycle count accuracy. A cycle count is an inventory accuracy audit technique where inventory is counted on a cyclic schedule rather than once a year. The key purpose of cycle counting is to identify items in error, thus triggering research, identification, and elimination of the cause of the errors. [2] For this business, the plant set a target of 98% average inventory balance accuracy. The analysis showed that for the last 12 months the total inventory adjustment value (absolute value) averaged approximately $23,000, compared with a total inventory average of just over $34 million. For the gateway work centers, adjustments averaged just over $9,000 against an average inventory level of $6 million. The contribution of inventory adjustments to the effectiveness of the Material Planning Process was not significant. Total plant cycle count accuracy and gateway cycle count accuracy each averaged 99.9% for the most recent measurement periods. Figures 2-15 and 2-16 report the performance of cycle count accuracy using np control charts to indicate the percent defective.

[Figure 2-15 (np chart): inventory adjustment value as a percentage of total inventory value, with UCL = 0.09788, NP-bar = 0.001042, LCL = 0.]

Figure 2-15. Inventory Cycle Count np Chart

[Figure 2-16 (np chart): gateway inventory adjustment value as a percentage of total gateway inventory, with UCL = 0.05615, NP-bar = 0.000346, LCL = 0.]

Figure 2-16. Gateway Inventory Cycle Count np Chart

Demand variability was defined as the variation in the quantity of product ordered by a customer. A customer can be an internal (downstream resource) or an external (purchaser of goods or services) entity. In this instance, demand variability includes forecast error and the effects of the consuming resources' production frequencies. The data was summarized by product commodity since the gateway resources served many downstream resources. The amount and significance of demand variability was dependent on the business category a commodity represented. For commodities that included a higher ratio of make-to-order products, demand variability was more pronounced and included outliers (more intermittent demand). For commodities that included a higher ratio of make-to-stock products, demand variability was much less pronounced. Figure 2-17 displays the box plot representing the demand variability data.

[Figure 2-17 (box plot): semi-finished demand dollars ($200,000 to $1,000,000) by product commodity (2911, 2919, 2924, 2925, 2965, 2968, 2972, 2974, 2975, 2980).]

Figure 2-17. Demand Variability Box Plot

Supply variability was defined in two different dimensions. The first dimension focused on the item schedule attainment for the gateway resources and the resources directly downstream from the gateway resources. The second dimension focused on the cycle frequency of group technology families for the gateway resources using reported production data. The item schedule attainment measure compares the quantity produced to the quantity scheduled. If the quantity produced was within +/- 10% of the quantity scheduled, the production order was counted as a 1; otherwise it was counted as a 0. Item schedule attainment is measured as the total number of 1s divided by the total number of items scheduled. Item schedule attainment performance for downstream resources looked to be a critical input in managing inventory: the worse the downstream item schedule attainment, the greater the chance that inventory produced or scheduled to be produced would not be consumed when initially planned. The item schedule attainment box plot reflects a high degree of attainment variability, demonstrated by two outliers and elongated first quartiles. Figure 2-18 illustrates the item schedule attainment performance for the five most critical downstream resources from the gateway work centers:

[Figure 2-18 (box plot): item schedule attainment percentage (0 to 1.0) for five downstream resource groups.]

Figure 2-18. Item Schedule Attainment Box Plot

Cycle frequency is defined as the amount of time (in days) between production runs. (The term lead time is sometimes used synonymously with cycle frequency.) For this multivariate analysis, the amount of time between product family (group technology) runs by gateway resource was studied. Cycle frequency was viewed as a contributing factor in the Material Planning Process and in the average amount of inventory carried. The longer the cycle frequency, the more inventory needs to be produced to cover all of the projected demand until the next production run. The more variable the cycle frequency, the more safety stock is needed to protect against supply variability. A simple sketch of this relationship follows.
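This is a common textbook approximation, not the thesis's planning model: average cycle stock is roughly half a cycle's demand, and safety stock grows with the variability of the replenishment cycle.

# Textbook approximation (not the thesis's planning model) of how cycle
# frequency drives inventory: cycle stock ~ half a cycle's demand;
# safety stock grows with cycle-time variability.

def avg_inventory_days(cycle_days: float, cycle_std_days: float,
                       z: float = 1.65) -> float:
    """Average inventory in days-of-stock for a given production cycle."""
    cycle_stock = cycle_days / 2        # working inventory over the cycle
    safety_stock = z * cycle_std_days   # protection against cycle variability
    return cycle_stock + safety_stock

# Figure 2-19 reports a mean cycle of ~23.75 days for one family; the
# standard deviations here are illustrative.
print(avg_inventory_days(23.75, 7.0))  # long, variable cycle -> ~23.4 days
print(avg_inventory_days(14.0, 2.0))   # shorter, steadier cycle -> ~10.3 days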

Figure 2-19 represents a sample of the baseline data accumulated for one product family on a gateway resource.

[Figure 2-19 (I-MR chart): days between production runs for one product family, individuals chart with UCL = 73.86, mean = 23.75, LCL = -26.36; moving range chart with UCL = 61.56, R-bar = 18.84, LCL = 0.]

Figure 2-19. Baseline Cycle Frequency I-MR Chart

The final area of analysis focused on schedule changes for the gateway resources. Schedule changes were believed to affect the capability of synchronizing production and consumption of inventory. A schedule change log was developed for the gateway resource production analyst and production supervisors for categorizing schedule changes based upon the following criteria: Business Priority Changes, Process Failures, Equipment Problems, Material Availability Problems, Production Causes, and Other Issues. Schedule changes were viewed in total using a process control chart and from a causal analysis perspective using a Pareto chart. Figures 2-20 and 2-21 are the schedule change control chart and schedule change Pareto chart respectively.

[Figure 2-20 (c chart): weekly count of schedule changes with UCL = 10.41, C-bar = 4.238, LCL = 0.]

Figure 2-20. Schedule Change Control Chart

[Figure 2-21 (Pareto chart): schedule attainment causal analysis, 12/1/2002 - 3/30/2003. Cause counts: B (Business Reason) 54; O (Other) 53; M (Material Shortage) 50; C (Coverage Shortage) 38; E (Equipment Problem) 27; P (Process/Quality Issue) 13.]

Figure 2-21. Schedule Change Pareto Chart

Whether the process is operational or transactional, if data is available to represent the process input variables, Multivariate Analysis can be key in providing insight into the relationships between key process inputs and outputs. Multivariate Analysis can prove to be an invaluable tool in beginning to validate the assumptions formed from the FMEA around the identification of critical process inputs. The Six Sigma structure seems to tie Multivariate Analysis in nicely with the previous steps in the process and sets the stage for continuing the process improvement effort.

2.5.3 Designed Experiments

A designed experiment is a systematic method for collecting data to understand the cause and effect relationships in a process. Process learning can occur through passive observation of naturally occurring events, by creating informative events, or through experimental design by manipulating input variables. A designed experiment focuses on the latter: manipulating input variables to observe changes in the output responses. The goal of the designed experiment is to identify the influential inputs, minimize the effect of input variability on the output, and facilitate centering the output on its target. Some processes are very conducive to conducting a designed experiment while the process is in operation. Other processes are not amenable to designed experiments because the variable changes may drive the process towards an outcome that is difficult to recover from. The degree to which a planned experiment can be run on a process that is in operation is somewhat dependent on the amount of change to be introduced. This business case is an example of a process that does not lend itself to experimentation while the process is in operation. Using the intelligence gathered from the FMEA and multivariate analysis, the designed experiments methodology was used to test our improvement recommendations in terms of hypotheses. A research hypothesis must state an expectation or relationship to be tested.

Prior to beginning the hypothesis testing, the following is a summary of the conclusions drawn from the FMEA exercise:

Conclusion 1: Semi-finished inventory is not a process. Semi-finished inventory is an outcome of many processes and their variability. The planning and scheduling processes were determined to contribute most to the outcome of semi-finished inventory.

Conclusion 2: The process improvement effort will be focused on the gateway resources. The feedback from the initial data collected, the multivariate analysis, and the FMEA all point to the gateway resources as contributing most to the semi-finished inventory levels.

Conclusion 3: The defect of the planning and scheduling process was defined as variation in synchronization between producing and consuming resources caused by: sequencing rules within and between resources; production exceeding demand due to lot sizing, changeover rules, buffers, demand and supply variability, planning errors, etc.; and uncontrolled Material Requirements Planning resulting from unconstrained demand and supply plans, lack of firm planning, and lack of inventory allocation management.

The first major corrective action from the FMEA was to implement a constraint-anchored planning process. This action item fits well with the goal of Six Sigma, which is to identify and reduce variability within the process. A planning and scheduling environment where demand requirements are unconstrained and constantly in flux creates a planning environment driven by variability. The purpose of constraining the plan is not

only to stabilize and smooth production requirements but also to ensure material that is produced will be consumed (synchronization) as quickly as possible. Constraint-anchored planning generated the highest FMEA RPN score of 60. The hypotheses for testing the expected impact of constraint-anchored planning were stated as: H0: μ1 = μ2 versus Ha: μ1 > μ2, where μ1 = average semi-finished inventory production value with unconstrained demand and unconstrained supply, and μ2 = average semi-finished inventory production value with unconstrained demand and constrained supply. To test these hypotheses, a manufacturing planning and scheduling simulation environment was created. The hardware and software infrastructure at the site studied allowed for the running of several planning models simultaneously. The live production model was updated and run on a daily basis using current input data. Simulation production models could be run on command as long as certain required input data was provided. All of the necessary input data was copied from the live model to the simulation model prior to conducting the experiment. The hypothesis results were reported in terms of the average projected inventory production and included fifteen data points for each model (see Figure 2-25). Both the live and simulation model output data were written to a database and accessed using Microsoft Excel and Microsoft Query. The second major improvement action was to define the impact of planning parameters on the level of semi-finished inventory for the gateway resources and to create tools that allow for analysis by supply chain. The key process inputs associated with this

action item scored an FMEA RPN of 48. The bases for this corrective action were as follows:

1. Parameters are currently evaluated from an SKU view for a given production resource. Decisions relating to lot size, buffers, cycle frequency, etc. tend to be made in isolation without regard for the interdependency of all of the SKUs produced on the resource and the interaction with downstream resources.

2. The use of planning parameters in the Material Planning Process can create dynamic and complicated planning results. Process performance issues may result when different combinations of parameters are used together. Planning parameter examples include: production frequency, minimums, multiples, lead times, time buffers, quantity buffers, etc.

3. Optimal Order Quantity logic must be tested and incorporated into the model to understand the cash versus cost tradeoff.

The hypotheses for testing the expected impact of planning parameters for this experiment were stated as: H0: μ1 = μ2 versus Ha: μ1 > μ2, where μ1 = average semi-finished inventory with current parameters and the current demand forecast, and μ2 = average semi-finished inventory with optimized parameters and the current demand forecast. The test data was derived from a Material Requirements Planning simulation model created using Microsoft Excel and Microsoft Query to calculate average inventory for 12 weekly data points. The simulation model was a collection of databases that captured current demand for each product and then propagated the demand requirements through the product bill-of-material. The model provided a mechanism for

the project team to understand the impact on inventory throughout a supply chain for a given product structure, for the current planning parameters as well as new or adjusted parameters (variables). The impact on inventory was measured in both days-of-stock and inventory dollars (fully burdened unit cost at each level of production). Figures 2-22 and 2-23 represent examples of the output results for a particular supply chain using the simulation model:

[Figure 2-22 (spreadsheet): Supply Chain Parameter Planning Model inputs for Product Group 1. Model notes: the user enters information in cells highlighted in yellow; a corporate inventory carrying cost factor of 11.5% is used; average daily demand (1,018) includes forecast only. For each BOM level, work center, and stock number in the supply chain (MPS item, finished good, downstream inputs, and gateway inputs), the model lists current and proposed planning parameters: time buffer (days), production cycle (days), consolidation factor, minimum, multiple, and the resulting inventory.]

Figure 2-22. Parameter Simulation Model Example (Ha) Inputs

[Figure 2-23 (chart): inventory analysis over 12 weeks, plotting the current inventory profile against the projected inventory profile in dollars ($0 to $1,000,000), with current and projected average days-of-stock (roughly 39 to 46 days) overlaid.]

Figure 2-23. Parameter Simulation Output Example (Ha)

Testing for the expected impact of planning parameters on inventory performance was accomplished using the Material Requirements Planning simulation. The planning parameters tested included run frequency, production minimum order quantity, time and quantity buffers, and production consolidation days. Parameters were studied in isolation and in combination. Supply chains that included high dollar value and high usage materials from the gateway resources were modeled to test this hypothesis. The delta (change) between current inventory levels using current planning parameters and

projected inventory levels with new planning parameters was recorded and compared. These simulation model results demonstrated how different planning parameters interact in the manufacturing requirements planning model. The results of this experiment provided some interesting insight. The impact of cycle frequency on the average inventory level proved to be the most critical parameter regardless of the supply chain chosen and the combination of other parameters modeled. Longer average lead times resulted in higher average inventory levels (primarily in working inventory). Higher cycle frequency variability equated to higher safety stock inventory.

A two-sample t-test for unpaired data was used to verify the constraint-anchored planning alternative hypothesis. The general formula for computing a test statistic for making an inference about a difference between two populations is:

T = (X̄1 − X̄2) / √(s1²/N1 + s2²/N2)

where N1 and N2 are the sample sizes, X̄1 and X̄2 are the sample means, and s1² and s2² are the sample variances. If equal variances are assumed, this test statistic reduces to:

T = (X̄1 − X̄2) / (sp √(1/N1 + 1/N2)), where sp² = [(N1 − 1)s1² + (N2 − 1)s2²] / (N1 + N2 − 2)

The null hypothesis that the two means are equal (μ1 = μ2) will be rejected if T < −t(α/2, ν) or T > +t(α/2, ν), where t(α/2, ν) is the critical value of the t distribution with ν degrees of freedom:

ν = (s1²/N1 + s2²/N2)² / [(s1²/N1)²/(N1 − 1) + (s2²/N2)²/(N2 − 1)]

If equal variances are assumed, then ν = N1 + N2 − 2. The hypothesis H0: σ1² = σ2² versus Ha: σ1² ≠ σ2² (where σ1² is the variance of the unconstrained supply plan and σ2² is the variance of the constrained supply plan) was used to test whether the two variances were equal. The test results failed to reject the null hypothesis that the variances are equal. The tests for equal variance results using MINITAB are represented in Figure 2-24.

[Figure 2-24 (MINITAB output): 95% confidence intervals for sigmas and boxplots of the raw data for the two factor levels (U1 STDEV, U2 STDEV). F-test: test statistic = 1.777, p-value = 0.294. Levene's test: test statistic = 1.033, p-value = 0.318.]

Figure 2-24. Test for Equal Variances

Although the tests for equal variance did not reject equality, the test statistics in Figure 2-25 were calculated using MINITAB for the constraint-anchored planning alternative hypothesis of Ha: μ1 > μ2 without assuming equal variances, the more conservative choice.


Two-Sample T-Test and CI: U1 Actual Values versus U2 Actual Values

              N     Mean   StDev  SE Mean
U1 Actual    15  1834379  238053    61465
U2 Actual    15  1347515  163267    42155

Difference = mu U1 Actual Values - mu U2 Actual Values
Estimate for difference: 486864
95% CI for difference: (333037, 640690)
T-Test of difference = 0 (vs not =): T-Value = 6.53; P-Value = 0.000; DF = 24

Figure 2-25. Constraint-Anchored Planning (Ha) Test Results

The two-sample t-test resulted in a T-value of 6.53. When compared with the critical t value of 1.711 (one-tailed test at the 95% confidence level with 24 degrees of freedom), the null hypothesis was rejected: the effect of constraint-anchored planning on the level of planned production is statistically significant. The designed experiments approach was very helpful in focusing simulation efforts on the effects of manipulating input variables to observe responses in the output variables. As this business case demonstrated, use of every tool available to execute a designed experiment is not necessary.
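The Figure 2-25 result can be checked from its summary statistics alone. The sketch below uses SciPy's two-sample t-test from summary statistics, without assuming equal variances, and reproduces the reported T-value of 6.53; SciPy is my choice of tool here, not the one used in the study, which relied on MINITAB.

from scipy.stats import ttest_ind_from_stats

# Summary statistics reported in Figure 2-25.
t_stat, p_value = ttest_ind_from_stats(
    mean1=1_834_379, std1=238_053, nobs1=15,  # U1: unconstrained supply
    mean2=1_347_515, std2=163_267, nobs2=15,  # U2: constrained supply
    equal_var=False,                          # Welch's t-test (DF ~ 24)
)
print(f"T = {t_stat:.2f}, two-sided p = {p_value:.4f}")
# T ~ 6.53; halve the two-sided p-value for the one-sided Ha: mu1 > mu2.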


2.6 Process Improvement

The purpose of the Improve phase is to develop, implement, and evaluate solutions targeted at the verified causes. The goal is to demonstrate, with data, that the solutions solve the problem and lead to improvement. Prior to implementing changes to the process, the project team created a Stakeholder Analysis matrix to identify and understand potential resistance to the project solutions. Stakeholders for this project included the plant manufacturing management team, gateway supervisors and supply chain analysts, upstream and downstream work center managers and supply chain analysts, and gateway equipment operators. Regular and frequent communication with those affected by the process change can create more buy-in, identify better solutions, and avoid pitfalls. Figure 2-26 is an excerpt from the Stakeholder Analysis completed for this project.

[Figure 2-26 (table): Stakeholder Analysis excerpt. Columns: Stakeholders; Level of Support (C = current level, N = needed level); positive or supportive items; issues or concerns; and influence, strategy, and tactics to achieve or maintain the needed level of support. Gateway Supply Chain Analyst (C, N): positives include more control over planning, a more stable environment, more effective and efficient communications, and better management of raw materials; concerns include less flexibility on what to run, more required discipline, and the need for pre-established business rules (priorities); strategy is to communicate benefits, discuss likely concerns and issues, and include the analyst in developing the business rules governing the process. Downstream and Upstream Supply Chain Analysts: positives include more stable downstream planning, more accurate material availability dates, improved inventory levels, better communications, and reduced supply variability; concerns include reduced flexibility in the event of a crisis (high sales, quality problems, etc.), possible backorders on some items, and more required discipline; strategy is to communicate benefits and concerns, include the analysts in developing the business rules governing the gateway and MPS scheduling process, and help manage Focused Factory manager concerns. Gateway Product Manager (C, N): positives include a potential reduction in operating expense and waste and a more stable and predictable environment; concerns include loss of flexibility due to scheduling business rules and pressure from other Focused Factory managers and marketers; strategy is to communicate the details of the process and the implications of the new discipline, and to review and gain input on the control plan (how business rules will be used and enforced, the contingency plan, the impact on operators, and new requirements on operations).]

Figure 2-26. Stakeholder Analysis Excerpt

The first strategy for reducing material requirements planning variability was to develop a constraint-anchored planning process. The constraint-anchored planning strategy centered on creating supply plans based upon the availability of capacity and input materials for gateway resource manufactured products. Demand requirements that could not be supplied by the requested due date because of capacity and/or material constraints would be rescheduled to supply dates based upon when material or capacity became available. Supply plans would not be created for demand requirements that could not be met; a simple sketch of this rule follows.
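The rescheduling rule just described can be sketched as a simple finite-capacity loading loop, with hypothetical demands and capacities; the actual planning system logic is far richer.

# Sketch of constraint-anchored rescheduling: demand exceeding a
# period's capacity is pushed to the next period with capacity, so no
# supply plan is created that cannot actually be produced. Hypothetical
# numbers; the real planning system logic is far richer.

def constrain_plan(demand: list[float], capacity: list[float]) -> list[float]:
    """Return a supply plan that never exceeds per-period capacity."""
    plan, backlog = [], 0.0
    for d, cap in zip(demand, capacity):
        required = backlog + d
        produced = min(required, cap)   # anchor supply to the constraint
        backlog = required - produced   # unmet demand rolls forward
        plan.append(produced)
    return plan

weekly_demand = [120, 80, 150, 60]
weekly_capacity = [100, 100, 100, 100]
print(constrain_plan(weekly_demand, weekly_capacity))
# [100, 100, 100, 100] -- demand spikes are smoothed; 10 units remain
# backlogged beyond week 4.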

As an outcome of creating realistic supply plans at the gateway resources, downstream resources would be subordinated to the capability of the gateway resources. Constrained supply plans from the gateway resources dictate the capability of downstream work centers to supply as well, so supply plans would likewise not be created for downstream demand requirements that could not be met. The team recognized the implementation of constraint-anchored planning could result in a reduction in schedule change flexibility in favor of resource utilization, reduced supply variability, and reduced inventory. The reduction in schedule change flexibility was identified as a potential concern in the Stakeholder Analysis. Schedule change guidelines were created to improve the decision making process around schedule interruptions. These guidelines were deemed necessary to increase understanding of the importance of maintaining group technology schedule integrity. The group technology families produced on a gateway resource are interdependent; a schedule change can cause delay and disruption for subsequent group technology production runs. The schedule change guidelines defined the response plan for the escalation of schedule change events based upon the severity of the changeover (in hours) to the resource and the anticipated effect of the unplanned changeover on other products. The schedule change guidelines were presented to the stakeholders for their input. Consensus approval from stakeholders of the schedule change guidelines required several hours of discussion. As predicted in the stakeholder analysis, the most debated schedule

change matrix concern was the potential for reduced flexibility. Prior to the schedule change matrix, schedule change decisions were based solely on urgency, without regard for the effect on other products, resource optimization, waste, or inventory ramifications. Schedule changes resulting from servicing one product often resulted in service issues for other products due to the delay created by the unplanned changeover. The stakeholders were eventually convinced the changeover guidelines would be beneficial in quantifying the positives and negatives of significant unplanned changeovers and would engender more communication. The schedule change guidelines are presented in Figure 2-27.
Brookings Coater Schedule Change Guidelines (summarized from the matrix):

- No breaking into families with requirements requiring HARD changeovers.
- Unplanned MX/PPEs must have the approval of the Coating FF Manager and must be run in the family group; MX/PPE must be completed in the time allotted.
- Product runtimes that exceed 10% of the standard time allotted will be aborted until the process problem is rectified.
- Runs cannot be inserted out of family sequence; all items must be run in the sequence set forth in the production schedule (where material is available).
- Quantities must be completed to within +/- 10% unless prior approval is given.
- Preventive maintenance must be completed when scheduled, within the time allotted.
- All schedule delays must be approved; special cause circumstances are handled as required.

Approval level required to bypass a guideline, escalating with the severity of the disruption:

- Minimal impact on completion of the schedule: Analyst / Coating Supervisor.
- Change pushes the next scheduled item out by 12 hours or less (or an extended run pushes the schedule out by 4 hours or less): Coating FF Manager.
- Change pushes the next scheduled item out by 12 to 24 hours (or an extended run pushes the schedule out by more than 4 but less than 8 hours): Plant Manager; escalate as necessary.
- Change pushes the next scheduled item out by more than 24 hours (or an extended run pushes the schedule out by more than 8 hours): Sourcing Director or Supply Chain Manager; escalate as necessary.

Figure 2-27. Schedule Change Guidelines
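The escalation logic of the matrix can be expressed compactly. The Python sketch below is illustrative only; the thresholds are taken from the guidelines above, while the function name and level labels are ours.

```python
def approval_level(delay_hours: float, extended_run: bool = False) -> str:
    """Map a proposed schedule disruption to the approval level required to
    bypass the schedule change guidelines (per Figure 2-27). Extended runs
    use the 4/8-hour thresholds; changeovers that push the next scheduled
    item out use the 12/24-hour thresholds."""
    minor, major = (4, 8) if extended_run else (12, 24)
    if delay_hours <= 0:          # minimal impact on schedule completion
        return "Analyst / Coating Supervisor"
    if delay_hours <= minor:
        return "Coating FF Manager"
    if delay_hours <= major:
        return "Plant Manager"
    return "Sourcing Director or Supply Chain Manager"

# Example: an unplanned changeover expected to push the next item out 18 hours
print(approval_level(18))  # -> Plant Manager
```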

Implementation of the constraint-anchored firm planning strategy entailed both a planning system change and a process change. The process change was dependent on the success of implementing system changes in support of constraint-anchored planning, and the effort spent defining the process change led to the identification of the necessary planning system changes.

The network of systems and applications for this business has become very complex over time. The planning system infrastructure is made up of many specialized applications, some located and supported on-site and some located and supported at corporate headquarters. Although each application serves a unique purpose, they are all interconnected and provide pieces of information critical to the planning process. At the very center of this information interchange is the manufacturing planning and scheduling system. Although this system receives critical input information from several supporting applications, it serves as the calculation brain for material planning and scheduling. The capability and performance of this system will have a significant influence on the success of the changes proposed in this business case.

A feasibility study was completed to assess whether the solution was achievable given the organization's resources and constraints. With the assistance of an Information Technology representative, our team evaluated three major areas of feasibility:

1) Technical Feasibility: whether the proposed solution can be implemented with the available hardware, software, and technical resources.

2) Economic Feasibility: whether the benefits of the proposed solution outweigh the costs.

3) Operational Feasibility: whether the proposed solution is desirable within the existing managerial and organizational framework.

The feasibility evaluation merges the project solution and its system support requirements with the available hardware, software, and technical resources. Table 2-1 was developed as a summary of the estimated cost associated with each programming change.

Table 2-1. Information Technology Feasibility Matrix

The results of the analysis indicated the proposed solution could be implemented with current hardware, software, and technical resources. Changes to current scripts, routines, and database file structures would need to be made to implement the solution. The programming and process difficulty both averaged 3.75 on a scale of 1 to 5 (with 1 being the easiest and 5 being the most difficult) for those features not currently used but needed. The estimated cost to implement is $7,800, which includes only the cost of software programming changes; additional hardware and/or hardware changes were determined to be unnecessary.

Since the implementation costs can be viewed as fixed, the only other potential cost is the opportunity cost of committing resources to this project rather than to another project or projects. The results of the feasibility study indicated the benefits of the proposed solution (an estimated $500,000 inventory reduction) far outweigh the costs, and the project was deemed economically feasible. Operational feasibility was much easier to justify: since the recommendations for system changes were the result of a Six Sigma project, corporate and plant management had already signed off on and approved the project and project solution.

Complementing the constraint-anchored planning process was the development of gateway changeover sequence plans, the expansion of the schedule attainment measure, and the development of schedule change guidelines. The implementation of a changeover sequence strategy had the potential to decrease costs, increase inventory turns, and improve machine and labor productivity through improvements in efficiency and predictability for supply replenishment.

To arrive at a sequence plan, we involved the machine operators, process and equipment engineers, and the primary supply chain analyst. In preparation for this meeting, several pieces of information were obtained for each gateway resource using Microsoft Query and Excel. Critical SKU information for each gateway resource included annual production quantities, annual production hours, annual usage, and bills-of-material.

As discussions progressed, it became clear the critical scheduling influence for differentiating sequencing families related to a common raw material input. Grouping by this common input would reduce changeover time through a reduction in cleanup time between runs as well as a reduction in the number of material moves. Sequencing rules between raw material families could also be improved, as changing from one raw material to another could reduce equipment tear-down and set-up time. Sequencing rules within each group technology family are dictated more by run frequency than any other criterion. For example, some products within a group technology family may have sufficient demand volume to warrant a weekly or bi-weekly production cycle, while others may be produced monthly or made to order.

Following the identification of families and sequencing, the next effort focused on computing an optimal production quantity (OPQ) range for each SKU on each gateway resource. The approach to arriving at this range utilized a combination of the Delphi Technique (using the group of experts to help predict the future of changeover improvements) and the traditional OPQ formula. The premise for the OPQ approach is to contain the inventory cost versus ordering cost balance within a known operating range

allowing for some degree of quantity freedom while reducing supply quantity variability and family cycle frequency variability. Figure 2-28 illustrates a simplified example of the theory behind the OPQ range.
[Figure 2-28 plots annual cost against order quantity. The holding cost curve rises with order quantity, the order (setup) cost curve falls, and the total cost curve reaches its minimum at the optimal order quantity; minimum and maximum order quantities bound the acceptable operating range.]

Figure 2-28. Optimal Order Quantity Model
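The cost balance behind the figure is the classic economic order quantity, Q* = sqrt(2DS/H), where D is annual demand, S is the cost per order (setup), and H is the annual holding cost per unit. The Python sketch below shows how an OPQ range might be derived around that optimum; it is a sketch under assumptions - the study set its actual range using the Delphi Technique together with the OPQ formula, and the 20% band here is a hypothetical choice, not the thesis's parameter.

```python
import math

def opq_range(annual_demand, setup_cost, holding_cost, band=0.20):
    """Classic EOQ with a +/- band forming a min/max order quantity range."""
    q_star = math.sqrt(2 * annual_demand * setup_cost / holding_cost)
    return (1 - band) * q_star, q_star, (1 + band) * q_star

q_min, q_opt, q_max = opq_range(annual_demand=50_000, setup_cost=400, holding_cost=2.5)
print(f"min {q_min:,.0f}  optimal {q_opt:,.0f}  max {q_max:,.0f}")  # min 3,200  optimal 4,000  max 4,800
```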

The OPQ model was constructed using Microsoft Query to extract the OPQ input data and Microsoft Excel to calculate and display the OPQ results for any given SKU. The model was robust enough to allow the user to calculate the cost versus cash implications of a non-OPQ simulation as well. An example of this model is provided in Figure 2-29.


Figure 2-29. Optimal Production Quantity Simulation Model

The final activity of the exercise consisted of developing the changeover sequence plan. This plan was constructed as follows:

1. Divide the minimum order quantity (from the OPQ model) by the historical average production rate for each SKU to arrive at the production hours.
2. Total the production hours for each family.
3. Define SKU production frequency within each adhesive family.
4. Define the family sequencing schedule.
5. Transfer hours by family to a calendar grid (for ease of understanding and communication).

An example of the sequencing calendar grid is provided in Figure 2-30; a short sketch of the hours calculation in steps 1 and 2 follows the figure.

[Figure 2-30 lays the group technology families onto a four-week calendar grid (Saturday through Friday), assigning each family's total production hours to specific days - for example, Group 8: 27 hrs, Group 9: 11 hrs, Group 10: 48 hrs, Group 14: 45 hrs, Group 6: 76 hrs, and Group 12: 80 hrs appear across the first two weeks - with shaded cells reserved as capacity banks.]

Figure 2-30. Group Technology Scheduling Plan
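Steps 1 and 2 of the construction reduce to a simple calculation, sketched below in Python. The SKUs, quantities, rates, and family assignments are invented for the example.

```python
# SKU -> (minimum order qty from the OPQ model, historical avg production rate per hour)
skus = {
    "SKU-A": (12_000, 450),
    "SKU-B": (8_000, 320),
    "SKU-C": (15_000, 500),
}
families = {"Group 8": ["SKU-A", "SKU-B"], "Group 9": ["SKU-C"]}

# Step 1: production hours per SKU = minimum order quantity / average rate
hours = {sku: qty / rate for sku, (qty, rate) in skus.items()}

# Step 2: total the production hours for each family
family_hours = {fam: round(sum(hours[s] for s in members), 1)
                for fam, members in families.items()}
print(family_hours)  # {'Group 8': 51.7, 'Group 9': 30.0}
```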

The schedule attainment measure was expanded from attainment in hours to include attainment by item. Item schedule attainment was viewed as a critical feedback measure for both schedule execution and supply plan attainment. The schedule attainment in hours measure provides a view of coverage at a machine and operator level.

Item schedule attainment provides an organizational performance measure because many functional areas contribute to the results. The item schedule attainment application was designed to provide some automation to the schedule change log previously maintained manually in a spreadsheet. Like the schedule change log, the item schedule attainment application would also provide the capability for causal analysis of schedule change reasons.

The final improvement recommendation came as a result of the parameter review and should prove to be a complement to both firm planning and group technology scheduling. A buffer management program was developed to help manage the variability in demand and supply at the gateway resources. During the analysis of the effects of parameter changes on a supply chain, the number of time and quantity buffers found to be in place was alarming. Supply Chain Analysts were responsible for implementing and managing buffers. An informal survey found buffers were more often put in place just-in-case rather than for a strategic purpose; they were not based on demand or supply variability and did not correlate directly to a desired level of service protection.

To improve the area of buffer management, an application was created to calculate and manage a buffer through exception-based reporting. For this application, the corporation had developed a Microsoft Excel-based safety stock calculator. This application required the following information in order to calculate a safety stock quantity: service protection level, average cycle frequency (lead time), average demand over lead time, standard deviation of demand or forecast error, and standard deviation of cycle frequency. Additional databases were linked to the safety stock model that assisted

with analysis of demand and supply variability, historical inventory balance monitoring, and exception-based buffer performance feedback.

Implementation of the buffer management application entailed validation of the data, providing users access to the data (database access), and user training. The data made available in the buffer management application allowed analysts to quantify the differences between current buffers and buffers calculated based upon demand variability, supply variability, and the desired level of service. The application was programmed to automatically create individuals charts for demand and supply information for the user-defined stock number and date range. The average demand quantity, the standard deviation of demand, the average supply lead time, and the standard deviation of supply lead time were used as inputs for calculating buffers and the target inventory level. The target inventory level was defined as being equal to safety stock plus half of the average demand during the average supply lead time. Figures 2-31, 2-32, and 2-33 represent simulations of the screen views offered by the buffer management application:

[Figure 2-31 simulates the Inventory Balance Monitor screen for SKU A at work center Gateway 2 over 5/1/2002 to 12/31/2002. The screen reports, among other fields, the average lead time (13 days in this simulation), average inventory (163,887 units), average and target inventory dollars, the inventory UCL (maximum inventory, 310,332), the inventory LCL (safety stock), the service level, and the number of days below safety stock (14), and plots the inventory balance over time against the average, target, maximum, and safety stock lines.]

Figure 2-31. Buffer Management Inventory Monitor

[Figure 2-32 simulates the Cycle Frequency Individuals Chart screen for SKU A at Gateway 2 over the same date range. The screen reports the number of data points (17), the average lead time in days (13), the lead time standard deviation, and the lead time coefficient of variation (0.34), and plots actual lead times against the average and the statistically calculated upper and lower control limits. A note cautions that the average lead time may not be centered and that the statistically calculated LCL is not allowed to go below zero.]

Figure 2-32. Buffer Management Cycle Frequency Individuals Chart

[Figure 2-33 simulates the Demand Individuals Chart screen for SKU A at Gateway 2 over the same date range. The screen reports the average weekly usage quantity, the standard deviation of usage, and the demand coefficient of variation (0.39 in this simulation), and plots weekly usage against the average and the statistically calculated control limits, with the same note that the calculated LCL is not allowed to go below zero.]

Figure 2-33: Buffer Management Demand Individuals Chart
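The corporate calculator's exact formula is not reproduced in this paper; the Python sketch below uses one common safety stock formulation built from the same inputs (service level, average demand, average lead time, and the two standard deviations), together with the target inventory definition stated above (safety stock plus half of the average demand during average supply lead time). The function names and example values are ours.

```python
import math
from statistics import NormalDist

def safety_stock(service_level, avg_demand, sd_demand, avg_lt, sd_lt):
    """One common formulation: protection against demand variability over the
    lead time plus lead-time variability scaled by the demand rate.
    avg_demand/sd_demand are per period; avg_lt/sd_lt are in periods."""
    z = NormalDist().inv_cdf(service_level)  # desired service protection level
    return z * math.sqrt(avg_lt * sd_demand ** 2 + (avg_demand * sd_lt) ** 2)

def target_inventory(ss, avg_demand, avg_lt):
    """Target = safety stock + half the average demand during average lead time."""
    return ss + (avg_demand * avg_lt) / 2

# Hypothetical weekly demand of 10,000 +/- 3,000 units, 2-week cycles +/- 0.5 week
ss = safety_stock(0.97, avg_demand=10_000, sd_demand=3_000, avg_lt=2, sd_lt=0.5)
print(round(ss), round(target_inventory(ss, 10_000, 2)))  # ~12333 and ~22333
```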

The data presented in the buffer management application proved useful in more ways than as a tool to calculate safety stock quantity. The demand individuals chart was useful in analyzing the historical profile of demand patterns for a stock-keeping unit. This demand information was valuable for analyzing customer demand patterns and for seeing the impact of planning and scheduling methods on upstream work centers. The supply cycle time individuals chart was useful to analysts in understanding the frequency with which a product was manufactured and the relationship of this frequency to the demand profile. Situations were discovered where the cycle frequency did not

match a demand profile with very little variation, even though the resource responsible for supply did not have cost constraints preventing a reduction in lot size or lead time. The inventory monitor was useful in assessing the performance of the planning and scheduling process relative to managing inventory to target, identifying when and how often safety stock was penetrated, and indicating the frequency with which maximum inventory levels were exceeded.

The final step in the process improvement phase was to complete pilot testing to validate the system infrastructure changes, understand the effects of the changeover sequence plan on downstream work centers and inventory consumption, and validate the planning and scheduling process. This exercise was accomplished by using a test model that included the live system supply chain tables, scripts, and manufacturing-planning model. The test model was capable of being updated with the same daily demand information, bill-of-material data, work center routing and rate data, and production schedule data as the live system. Once the test model had been updated with the test plan, the manufacturing model data could be saved and accessed for comparative analysis. This pilot test system proved invaluable in comparing the model results using several combinations of planning system features and planning and scheduling techniques.

Once the new system configuration had been confirmed and validated, the test system was made available (through a Windows NT client) to the gateway supply chain analyst. The test system provided an environment for the analyst to learn and understand the new planning process and system requirements. The analyst was also able to provide

feedback on the use and functionality of the process, its complexity, and whether the time required to utilize the new process was reasonable and manageable.

At this phase of the Six Sigma DMAIC roadmap, there is very little insight that can be given to discern the applicability of Six Sigma tools to the process improvement effort. All of the work completed prior to this phase would either lead to the improvement or it would not. There is no infallible method to validate that the solutions are correct except to run and record. Like any other process improvement effort, if the recommended solutions do not fit together or are not bought into by process owners, they will not be successful. Like the Deming PDCA model, Six Sigma methodology suggests returning to the FMEA if improvements do not deliver the results expected.

Typically, after the measurements supporting the improvement phase have been developed and are in place, a project leader will present the project results to the process owners, process champion, and black belt sponsor in what is termed a pre-close. The final project presentation is termed the close. The time between the pre-close and the close is typically spent observing the process using the measurement control systems. This run and record time was beneficial for this business case because it allowed additional time for proving the concept, for ironing out any programming issues in obtaining data, and for providing additional training time for process owners prior to the transfer of project control.

2.7 Process Control

The purpose of the Control phase is to develop, implement, and evaluate solutions targeted at the verified cause or causes. The goal is to demonstrate that the solutions solve the problem, lead to improvement, and reduce or eliminate special causes. Key activities in managing the process improvement solutions include: implementing ongoing measures and actions to sustain improvement; defining responsibility for process ownership and management; and executing closed-loop monitoring.

The ongoing measures that are put in place to manage the process should be meaningful and measurable. The measurements should help track process performance and assist process owners in making better decisions. Documentation should accompany the measurement systems. This documentation is necessary for several important reasons: it spells out where the data comes from, how the measurement systems are updated, and how and when to respond to emergencies, and it serves as a means to update and track measurement system revisions.

The Six Sigma Control phase is really not a unique concept. The concepts are very similar to the Act step in Deming's PDCA continuous improvement model, and the recommended tools and the processes for evaluating results are very similar. Whichever process improvement technique is used to describe this phase, many of the concepts are transferable to both operational and transactional processes.

2.7.1 Project Controls

The project controls for this business case can best be demonstrated using the classic inventory replenishment diagram. Advantages to using this classic diagram to illustrate the project control plan included its familiarity to process owners (in this case the Materials Manager and Supply Chain Analysts), its simplicity and ease of comprehension, its condensation of a complex process into understandable pieces, and its demonstration of the interdependence of key input variables on process performance. As Figure 2-34 depicts, the control plan for this business case encompasses three critical measurement areas: Supply Plan & Schedule Attainment, Lead Time/Cycle Time Management, and Buffer Management.
[Figure 2-34 depicts the control plan using the classic inventory replenishment sawtooth diagram: demand draws the inventory level down and supply replenishes it, with safety stock as the lower buffer and cycle frequency setting the replenishment interval. Overlaid on the diagram are the three critical measurement areas - Supply Plan & Schedule Attainment, Lead Time/Cycle Time Management, and Buffer Management (safety stock, cycle frequency, and time) - supported by the schedule change guidelines.]

Figure 2-34. Semi-Finished Inventory Control Plan

A Control Plan Matrix was developed to assist the project team as well as the process owners with understanding the relationship of each control measure, enabler, and countermeasure. The matrix was a convenient communication tool, as it provided one location that summarized the controls necessary to manage the process and provided a connection back to the process map. Figures 2-35, 2-36, and 2-37 illustrate the format used for this business case.

[Figure 2-35 summarizes the three primary control plan measures in a matrix whose rows record, for each measure, the process map process, input, output, measurement description, measurement frequency, data granularity, LSL, target, USL, and reaction plan. The measures are: (1) Supply Plan Attainment (Material Requirements Planning; net unconstrained demand in, firm constrained supply plan out), which compares actual supply quantity with demand quantity using the demand due date, bounded by the lesser and greater of the demand plan (+/- 10%) and the constraint-anchored plan; (2) Inventory Performance (Production Schedule Execution; production schedule in, inventory out), which tracks actual inventory levels over time against a lower limit at safety stock, a target of safety stock plus half the average demand over lead time, and an upper limit of target plus 10%; and (3) Item Schedule Attainment (Production Schedule Execution), which compares actual supply quantity with scheduled quantity over the scheduling week (beginning Monday and ending Sunday), with LSL 90%, target 95%, and USL 100%. All three are measured weekly, in square yards. Reaction plans: for measures 1 and 3, assign a cause code whenever a data point violates the LSL or USL, review causal information weekly, and develop corrective actions for the largest problem areas; for measure 2, respond when one data point violates the LSL or USL.]

Figure 2-35. Primary Control Plan Measures

[Figure 2-36 summarizes the two control plan measurement enablers, both tied to Material Requirements Planning: (1) Demand Variability (actual demand in), measured weekly in square yards, and (2) Supply Variability (actual production in), measured per changeover in days between changeovers. Each is tracked on an I-MR control chart and reviewed as needed, with a mandatory quarterly review for safety stock analysis. For both, the LCL is the greater of zero and the statistically calculated limit, the target is the statistically calculated mean, the UCL is calculated from the I-MR chart, and the reaction plan is to investigate when a cause flag appears.]

Figure 2-36. Control Plan Measurement Enablers

[Figure 2-37 summarizes the three counterbalance control plan measures. (1) Customer Service (Demand Planning; customer orders and inventory in): sales order lines on time as a percent of total lines, weekly; LSL of zero, target of 95% average, UCL calculated using an np chart; investigate when a cause flag appears. (2) Constrained Equipment Effectiveness (CEE) (Capacity Planning / Production Reporting): measures the effectiveness of equipment based upon machine availability, performance, and quality, weekly, as a percentage; target of 85% average with control limits calculated using an I-MR chart; investigate when a cause flag appears. (3) Capacity (Capacity Planning; net unconstrained demand in, rough-cut machine loading out): compares current machine loading to the maximum machine loading (CSIP), weekly, as machine load (hours required / 520); LSL at the customer service interruption point, USL of 1.40; respond when projected machine loading exceeds the CSIP.]

Figure 2-37. Counterbalance Control Plan Measures

The project team, with input from the process owner, developed a Responsible, Accountable, Consulted, and Informed (RACI) Matrix in support of the metrics outlined in the Control Plan Matrix. The purpose of the RACI Matrix is to assign names and/or job titles to the control plan to ensure a smooth transfer of project control from the project team to those who implement, maintain, and respond to the performance of the control metrics. The RACI Matrix for this business case is provided in Figure 2-38.
[Figure 2-38 presents the RACI Matrix (R = Responsible, A = Accountable, C = Consulted, I = Informed) across eight roles: Plant Manager, Gateway Manager, Supply Chain Manager, Gateway Supply Chain Analyst, Downstream Resource Managers, Downstream Supply Chain Analysts, Gateway Engineering, and Operators. Controls are grouped into three areas: Inventory Planning and Control (gateway semi-finished inventory tracking, gateway SKU inventory performance to target, and gateway safety stock evaluation and adjustment); Scheduling Planning and Control (gateway firm planning process, lead time management, schedule attainment tracking, supply plan attainment tracking, and group technology family maintenance); and Secondary Measures (semi-finished inventory days of stock, service, constrained equipment effectiveness (CEE), and capacity planning). In general, the managers are accountable, the Gateway Supply Chain Analyst is responsible, and the remaining roles are consulted or informed.]

Figure 2-38. Responsible, Accountable, Consulted, Informed (RACI) Matrix

The Supply Plan & Schedule Attainment measure gauges how well the gateway resources are performing to their production plans and schedules. For all practical purposes, the schedule attainment measure is part of the overall supply plan attainment measure. The reason for the separation of the plan and the schedule is that the schedule attainment portion includes feedback to the production operators on the performance issues they have more control or influence over. The firm plan portion of supply plan attainment does not always relate to production performance and was viewed as more of an organizational issue.

The Supply Plan measure was developed to gauge the synchronization between the gateway (producing) resources and downstream (consuming) resources and to provide control feedback on the management of performance between the interdependent objectives surrounding inventory control, constrained equipment efficiency, and customer service (i.e., achieve and maintain service goals with the least amount of inventory and cost). The Supply Plan compares actual production against two different dimensions of the plan. The first dimension is the measure of actual production versus the constraint-anchored plan, and the second dimension is the measure of actual production versus what was needed by the customer. In an environment where variability is minimal and there are no cost or capacity pressures, these two dimensions could be the same. Historically, the gateway resources have been loaded (in required hours of production) very heavily and were not always able to produce what the customer needed by the date it was needed. This is why the gateway resources were selected as the constraint to drive the

availability of material to downstream resources based upon what the resources could produce.

The schedule attainment portion of the measure is focused on how well the gateway resources execute the production plan requirements that are converted to the schedule from the firm plan. This measure differs from the Supply Plan measure in whom it applies to and where it can be applied. The Supply Plan Attainment measure is more of an organizational performance measure; Schedule Attainment is an operational measure having direct applicability to production and/or operator performance. The Schedule Attainment measure is also where execution miscues can more easily be tallied using causal analysis data.

The data used for both measures is stored in a database and can be accessed via Microsoft Query and/or Microsoft Access. The databases were created using Oracle SQL Forms. The data can be accessed at various levels of information detail. The long-range plan for both measures is to create on-line control chart applications accessible via the planning and scheduling applications. (This feature was not available prior to the writing of this paper.) Figures 2-39 and 2-40 represent examples of how the measurement data is organized.


Figure 2-39. Supply Plan Attainment Detail Screen

Figure 2-40. Schedule Attainment Detail Screen
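Reduced to arithmetic, both attainment measures compare actual quantity with planned quantity and trigger the reaction plan when a point falls outside the limits. The Python sketch below is illustrative only; the record layout and quantities are invented, and the limits are the item schedule attainment limits from Figure 2-35.

```python
# Illustrative records: (item, scheduled_qty, actual_qty) for one scheduling week
records = [("SKU-A", 10_000, 9_400), ("SKU-B", 6_000, 6_100), ("SKU-C", 4_000, 2_900)]

LSL, TARGET, USL = 0.90, 0.95, 1.00  # item schedule attainment limits, Figure 2-35

for item, scheduled, actual in records:
    attainment = actual / scheduled
    if not (LSL <= attainment <= USL):
        # Reaction plan: assign a cause code and review causal data weekly
        print(f"{item}: attainment {attainment:.0%} outside limits - assign cause code")
```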

Although emphasized as separate control variables in the inventory replenishment diagram, buffer management and cycle frequency were found to be related - to no one's surprise. Because of this relationship, the monitoring of both variables was combined into one measurement application. The measurement application was created in such a manner that every aspect of the creation of buffers and their impact on inventory could be modeled and monitored. The application was created using Microsoft Query and Microsoft Excel. The primary data components consisted of:

1. Query Input Sheet: Interface for the user to query by stock numbers and start and end dates.

2. Demand Individuals Control Chart: Provides a picture of actual demand, average demand, demand standard deviation, and coefficient of variation for the dates selected. (Coefficient of variation was provided to users as an indicator of variability relative to the mean. A higher coefficient of variation usually indicates the data is more spread out and widely dispersed. The safety stock calculator model used tends to underestimate the true safety stock requirement for coefficient of variation values greater than 1.25 when higher service levels (>= 97%) are required.)

3. Cycle Frequency Individuals Control Chart: Provides a picture of actual lead time, average lead time, lead time standard deviation, and coefficient of variation for the dates selected.

4. Safety Stock Calculator: Model that calculates the buffer quantity and target inventory based upon the desired service level protection, variability of demand, and variability of supply.

5. Inventory Balance Monitor: Tracks inventory balance performance to the calculated target.

In addition to the buffer management model, a Group Technology Cycle Frequency monitor was developed to track the cycle variability for each product family on a gateway resource. The process owner is able to select a group technology family by resource for a given date range and analyze the variability around the cycle frequency for that family. The idea behind this control is that as changeover times decrease, the cycle time between changeovers should also decrease. If changeover times increase, the process owner should reevaluate the cycle frequency and optimal order quantity to determine the effect on cost. An example of the Group Technology Lead Time Monitor is shown in Figure 2-41. (The buffer management model application examples can be found in Figures 2-31, 2-32, and 2-33.)

[Figure 2-41 shows the Group Technology Cycle Frequency Individuals Chart for a user-selected work center and family over 1/1/2002 to 12/31/2002. For the periods before and after 5/1/2002, the screen reports the number of data points, average cycle time, cycle time standard deviation, and coefficient of variation (the coefficient of variation improving from 0.60 to 0.52 in this example), and plots actual cycle times in days against the average cycle time, the control limits, and a lead time trend line, with the usual note that the statistically calculated LCL is not allowed to go below zero.]

Figure 2-41. Group Technology Cycle Frequency Individuals Chart
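The coefficient of variation reported on these monitors is simply the standard deviation scaled by the mean. A minimal Python sketch, with hypothetical cycle times:

```python
from statistics import mean, stdev

def coeff_of_variation(xs):
    """CV = standard deviation / mean. Per the note above, the safety stock
    calculator tends to understate requirements when CV exceeds 1.25 at
    high (>= 97%) service levels."""
    return stdev(xs) / mean(xs)

cycle_days = [14, 9, 21, 12, 16, 11, 22]   # hypothetical days between changeovers
print(round(coeff_of_variation(cycle_days), 2))  # 0.33
```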

The final control plan strategy focused on minimizing the frequency and impact of schedule changes. During the Measurement Phase, schedule changes were found to be a significant contributor to sub-optimized synchronization between product demand requirements and actual product supply. Schedule change guidelines were developed to define the levels of schedule change disruption that required escalation to the appropriate levels of management. Besides minimizing the impact of schedule changes, business justification was now required with each change. An understanding of the business need, the effect on other products, and the effect on customers were examples of the information required as justification. (The schedule change guidelines can be found in Figure 2-27.)

Complementing the schedule change guidelines was the assignment of cause codes in the schedule attainment application. The cause codes will be used to track the number of schedule changes and allow managers to identify the causes that occur most often to help direct schedule execution improvement efforts. (Examples of the Schedule Change control chart and Schedule Change Pareto chart can be found in Figures 2-20 and 2-21, respectively.)

Control plan implementation also entailed documentation and training for those identified as Responsible in the RACI Matrix. Documentation and training were both process- and systems-related. The process documentation and training focused on group technology, firm planning strategies, managing inventory to target, and safety stock review frequency. The systems documentation and training centered on understanding the process metrics, including updating the metrics, process out-of-control definitions, and response plans.

2.7.2 Process Capability

In very broad terms, process capability assesses a process's performance relative to specification criteria. A process is deemed capable if virtually all of the possible variable values fall within the specification limits. [16]
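For reference, the indices discussed in this section (Cp, Cpk, Pp, Ppk, and, later, Cpm and the Z score) are conventionally defined as follows. This is the standard textbook formulation, not a reproduction from the thesis's references:

```latex
% Short-term (within) indices, with \hat{\sigma} = \bar{R}/d_2:
C_p    = \frac{USL - LSL}{6\hat{\sigma}}, \qquad
C_{pk} = \min\left(\frac{USL - \bar{x}}{3\hat{\sigma}},\; \frac{\bar{x} - LSL}{3\hat{\sigma}}\right)

% Long-term (overall) indices, with s the sample standard deviation:
P_p    = \frac{USL - LSL}{6s}, \qquad
P_{pk} = \min\left(\frac{USL - \bar{x}}{3s},\; \frac{\bar{x} - LSL}{3s}\right)

% Target-based index and the Z score, with T the target value:
C_{pm} = \frac{USL - LSL}{6\sqrt{s^2 + (\bar{x} - T)^2}}, \qquad
Z      = \frac{x - \bar{x}}{s}
```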

Capability studies are viewed as a key component of the Six Sigma process. The project team uses capability analysis to assess current process performance and to analyze the impact of improvement efforts. Since research was unavailable that demonstrated the use of capability analysis in transactional processes, the capability measures were kept separate from the process performance measures that were created for the process owner and supply chain analysts (see Section 2.7.1).

Process capability is typically reported as the 6σ range of a process's common cause variation, where σ is usually estimated by R̄/d2. [18] The Cp and Cpk indices can be used to represent process capability: the Cp index shows how well the variation of the process fits within the specifications, and Cpk indicates how well the process can meet specification limits while accounting for the location of the average (centering). [18]

Process performance studies also assess a process relative to specification criteria. Process performance is typically reported as the 6σ range of a process's total variation (common and special cause), where σ is usually estimated either from the average range or by s, the sample standard deviation. [18] The Pp and Ppk indices are typically used to represent process performance: the Pp index shows how well the total variation fits within the specifications, and Ppk indicates how well the process can meet specification limits while accounting for the location of the average (centering). [18]

My research discovered various opinions on which measures assess short-term capability and which measures assess long-term capability. [18] For example, one opinion holds that Cp and Cpk typically assess short-term capability by using a short-term standard deviation estimate, while Pp and Ppk typically assess overall long-term capability

by using a long-term standard deviation estimate. Other opinions are based on the differing calculation methods for standard deviations, ranging from lumping all of the process data together to determining the standard deviation from a variance components model. [18]

Defining and reporting process capability can provide misleading process information if the right approach is not used. Two areas were recognized as potential issues in developing capability measures: the use of computer software for conducting capability analysis, and the application of capability analysis to processes where meaningful specifications do not exist.

Statistical computer software like MINITAB™ can be very convenient and helpful in simplifying the calculation of process capability. Where good communication and agreement have occurred in determining the techniques and use of capability metrics, statistical computer software should support Six Sigma improvement efforts. However, in situations where the use of capability has not been agreed upon, there is a danger that process capability metrics will be employed incorrectly. This issue is particularly prevalent in situations where training advocates the use of a statistical software package without giving enough guidance on its use. For those project leaders who have minimal experience with capability analysis and/or statistics in general, there is a tendency to believe that entering process data into the computer package will provide a valid and reliable capability metric. This, of course, is total nonsense.

A second issue concerns the attempt to apply capability analysis to processes where meaningful specifications do not exist. Many project leaders may feel pressure to

use capability analysis where it does not fit or use the wrong capability indices. AIAG (1995) states that the key to effective use of any process measure continues to be the level of understanding of what the measure truly represents. Those in the statistical community who generally oppose how Cpk numbers, for instance, are being used are quick to point out that few real world processes completely satisfy all of the conditions, assumptions, and parameters within which Cpk has been developed. Further, it is the position of [the AIAG] manual that, even when all conditions are met, it is difficult to assess or truly understand a process on the basis of a single index or ratio number. [22]

Another hurdle to consider when using capability indices is the comfort of a process owner in using these indices to measure the performance of a process. For owners of transactional processes, capability indices may not feel as intuitive as a control chart in measuring the performance of a process. The potential negative effects associated with inappropriate application of capability analysis can be minimized if an organization defines the necessary elements of process control that must be in place before a capability assessment can be performed and then communicates how and when capability will be measured.

Both areas of controversy were prevalent in determining how capability would be measured in this business case. The organization sponsoring the project did define how Cp, Cpk, Pp, or Ppk can be applied to operational processes but did not cite examples of the application of capability indices to transactional processes. The organization failed to provide information and training relating to other accepted capability indices and did not cite examples of alternatives to capability indices where specifications were not

available. Project leaders were led to believe Cp, Cpk, Pp, and Ppk were the only acceptable capability indices available and that every process has valid specification limits.

Our project team debated whether meaningful specification limits could be defined and, if so, whether capability indices would be of any use in measuring the material planning process as firm planning, group technology scheduling, and planning parameter management changes were implemented. Inventory targets, minimum inventory levels, and maximum inventory levels for each SKU could be calculated based upon the process input information for supply, demand, and the desired service level. The minimum and maximum inventory levels could be viewed as process specification limits. These specifications are not set by a customer and are not statistical control limits; they are established based upon key process information. This process information includes: average lead time; standard deviation of lead time; average demand during average lead time; standard deviation of demand; desired customer service level; and any miscellaneous process requirements. Miscellaneous process information affecting specification limits may include product shelf-life, optimal processing conditions (i.e., the longer the material waits in queue for processing, the more waste is incurred when it is used as input), storage limitations, etc. Table 2-2 briefly outlines the definitions of the process information.

Process Information: Description

Average Lead Time: Average number of days between production runs.

Lead Time Variability: Standard deviation of the number of days between production runs.

Average Demand during Average Lead Time: Average total demand (independent and dependent) during the average lead time.

Demand Variability: Standard deviation of total demand.

Miscellaneous Process Requirements: Example: a product with a specific shelf-life; jumbo freshness for slitting productivity.

Table 2-2. Key Process Information Definitions

The maximum and minimum inventory levels could be subject to change if any part of the process information changed. For example, if the average lead time,

variability of lead time, average demand during average lead time, and the desired service level remained constant but demand variability increased, the minimum, maximum, and target inventory levels would be projected to increase to protect against the change in demand variability. Conversely, the minimum, maximum, and target inventory levels would be projected to decrease if variability were reduced.

As the project team worked on developing the specification definitions, we learned that each SKU needed to be evaluated independently. Like an operational process, where each manufactured product has design, quality, process, or customer specifications that will optimize the performance of the product, each SKU has similar characteristics that differentiate it from other SKUs relative to the level of inventory that is carried. This approach is a departure from the organization's inventory goal-setting techniques of the past. The organization typically communicated inventory reduction goals by market segment. Within each market segment, business teams were formed around product groupings. Each business team within the market segment was held accountable for achieving the same inventory reduction goal as its market segment, regardless of the complexity of its manufacturing processes, products, or customer requirements. This approach led to the sub-optimization of other metrics, like customer service or cost control, in an attempt to achieve required inventory targets.

The top five SKUs in volume for the gateway work centers were selected for capability analysis. The project team obtained approval from management to use the methodology we developed to establish minimum and maximum inventory levels as specification limits for each SKU. The inventory data (as measured in average days-of-

stock) was then evaluated for stability using Individuals and Moving Range (I-MR) control charts. Figure 2-42 represents the actual data used for Gateway SKU 1 to evaluate stability.

[Figure 2-42 is the I-MR chart of average days-of-stock for Gateway SKU 1. The individuals chart shows a mean of 11.84 with UCL = 21.32 and LCL = 2.364; the moving range chart shows R̄ = 3.563 with UCL = 11.64 and LCL = 0.]

Figure 2-42: Gateway Stock Keeping Unit 1 I-MR Chart
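The limits in Figure 2-42 follow the standard individuals-chart constants: individuals limits at the mean plus or minus 2.66 times the average moving range, and a moving range UCL of 3.267 times the average moving range. The Python sketch below reproduces the figure's limits from its reported statistics; the helper name is ours, not the thesis's.

```python
def imr_limits(mean: float, avg_moving_range: float) -> dict:
    """Standard I-MR chart limits: individuals at mean +/- 2.66 * MRbar,
    moving range UCL at 3.267 * MRbar (MR LCL fixed at zero)."""
    return {
        "I_UCL": round(mean + 2.66 * avg_moving_range, 2),
        "I_LCL": round(mean - 2.66 * avg_moving_range, 2),
        "MR_UCL": round(3.267 * avg_moving_range, 2),
        "MR_LCL": 0.0,
    }

# Matches Figure 2-42 to rounding: mean 11.84, average moving range 3.563
print(imr_limits(11.84, 3.563))
# {'I_UCL': 21.32, 'I_LCL': 2.36, 'MR_UCL': 11.64, 'MR_LCL': 0.0}
```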

Gateway SKU 1 accounts for over 80% of the total average inventory for one of the Gateway work centers and just over 30% of the total average inventory for all Gateway work centers. The days of stock data for Gateway SKU 1 was found to be stable and fit for capability analysis.

The next step in preparing to run a capability analysis was to determine the specification limits for this SKU. The first decision was to use days of stock as the unit of measure rather than inventory quantity or inventory value, to drive the focus of the capability analysis towards inventory optimization rather than inventory investment (which does not always correlate to the level of inventory optimization). The average days-of-stock for Gateway SKU 1 was reported on a monthly basis and used the following calculation: average daily inventory divided by the previous three months' average daily usage. This calculation was approved by management and coincided with corporate and division guidelines for calculating days-of-stock.

Specification limits were evaluated using a combination of the buffer management model referenced earlier in this chapter (see Figures 2-31, 2-32, and 2-33) and process information from supervisors, operators, and analysts of the gateway and downstream work centers. The specification limits for Gateway SKU 1 were heavily influenced by process-related information. Through discussions with gateway and downstream work center supervisors and equipment operators, we discovered that the sooner SKU 1 material was processed at downstream work centers following its release from the gateway work center, the faster the processing rate and the lower the material waste. This window of optimal performance could include material that was up to 5 days old, after which average productivity and waste performance for downstream work centers dropped appreciably. Data verifying the productivity loss from using material greater than 5 days old was analyzed using production reporting. The effect of delayed consumption on

material waste was recorded by operators at downstream work centers using waste-by-cause tally sheets. (NOTE: These production reporting and tallying activities were part of the production process before this project team was formed. The link between productivity and waste issues had not been incorporated into the planning and scheduling process until this project.) On average, downstream work centers experienced a 2% increase in waste and a 3% loss in productivity (as measured in yards per hour) when converting material greater than 5 days old. Gateway SKU 1 was used as an example for this paper because establishing the specification limits for this process demonstrated how a process variable like average lead time may indicate how the planning and scheduling process was managed but may be inadequate in describing how the process should be managed based on other variables.

After specification limits had been determined, the team turned its efforts toward selecting the capability metric(s) that would be used for reporting and analysis. The primary questions the project team sought to answer concerning which capability metric would provide the best picture of how the process is performing included: 1) What is the difference between short-term and long-term capability, and how does it apply to this process? 2) Is the amount of variation and its relationship to the tolerance most important? 3) Is the measurement of process centering most important? 4) Is the comparison of actual inventory values to target appropriate for this process?

The project team initially started to focus on measuring inventory to target using such measures as the Z score and Cpm. The objective of the Z score is to indicate how many standard deviations a value (x) is from the mean. In order to improve the capability

of the process, the Z score would need to be reduced. A reduction in the Z score correlates to a reduction in variability in managing inventory to target; conversely, an increase in the Z score correlates to an increase in variability in managing inventory to target.

The Cpm index incorporates the target when calculating the standard deviation. Instead of comparing the data to the mean (like Cp or Cpk), the data is compared to the target, and these differences are squared. Any observation that differs from the target increases the Cpm standard deviation; as this difference increases, so does the sigma, and as the sigma becomes larger, the Cpm index gets smaller. If the difference between the data and the target is small, so too is the sigma, and the Cpm index becomes larger. The higher the Cpm index, the better the process.

The project team encountered problems in applying the Z score and Cpm index across each SKU. For Gateway SKU 1, the target value was less important than the upper and lower specification limits. As long as the inventory replenishment and consumption process operates within the 1-day LSL and 5-day USL, inventory will be optimized to fit the needs of both the customer and the manufacturing facility (cost and waste).

The project team also explored the use of the Cp, Cpk, Pp, and Ppk capability metrics. The MINITAB™ software application provides very convenient tools for calculating these metrics for both normal and non-normal distributions. Once the specifications were defined and the method for gathering actual data was developed, reporting these measures was very simple using MINITAB™. Data for the past 20 months indicated the lead time had averaged just over 10 days for this SKU and

the days of stock for the same time period averaged 11.84. Using capability metrics such as Cp, Cpk, Pp, or Ppk indicated exactly what we already knew: our process was not capable of performing at those specification limits because we had not planned to. Capability analysis of the performance of the process using an upper specification limit (USL) of 5 days and a lower specification limit (LSL) of 1 day yielded very poor capability results for the past 20 months. Nevertheless, capability metrics were created as a means of comparing the process performance as it was with the process performance after the changes made to meet the specifications. The capability results for the past 20 months are provided in Figure 2-43 using the MINITAB™ Capability Sixpack (Normal) reporting tool.

[Figure 2-43 is the MINITAB™ Capability Sixpack (Normal) output for the 20 months of baseline data: an I-MR chart (mean 11.84, UCL 21.32, LCL 2.364; R̄ = 3.563, MR UCL 11.64), a capability histogram, a normal probability plot, a run of the last 20 observations, and a capability plot against specifications of 1 and 5. Within StDev = 3.15883, giving Cp = 0.21 and Cpk = -0.72; Overall StDev = 3.97615, giving Pp = 0.17 and Ppk = -0.57.]

Figure 2-43. Capability Results Baseline Data for Gateway SKU 1
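Given the statistics reported in the Sixpack, the four indices can be reproduced with the standard formulas defined earlier in this section. The Python sketch below is ours; it simply re-derives the figure's values.

```python
def capability(mean, sd, lsl, usl):
    """Standard capability indices for a given sigma estimate: pass the
    within-subgroup sigma for Cp/Cpk, the overall sigma for Pp/Ppk."""
    spread = (usl - lsl) / (6 * sd)
    centered = min((usl - mean) / (3 * sd), (mean - lsl) / (3 * sd))
    return round(spread, 2), round(centered, 2)

mean, lsl, usl = 11.84, 1, 5                 # baseline days-of-stock, Figure 2-43
print(capability(mean, 3.15883, lsl, usl))   # within sigma  -> (0.21, -0.72) = Cp, Cpk
print(capability(mean, 3.97615, lsl, usl))   # overall sigma -> (0.17, -0.57) = Pp, Ppk
```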

The initial capability measures provided interesting observations. First, the capability of the process as measured by Cp and Pp was very poor. Since specification limits had not previously been used to measure this process, this outcome was not surprising to the project team: specification limits were selected that represented implementation of the process changes. A second observation was the negative values of Cpk and Ppk. This result was also related to the selection of specification limits. Since Cpk is equal to min((USL - x̄)/3s, (x̄ - LSL)/3s), if x̄ is greater than the upper specification limit, then (USL - x̄)/3s is negative and less than (x̄ - LSL)/3s, so a negative value is possible. This holds true for Ppk as well, since the only difference between Cpk and Ppk is the calculation of the standard deviation. Obviously, these indices were not useful in ascertaining a measurement of baseline capability.

As was stated earlier in this section, determining whether capability indices provided any meaning to a transactional process was a key issue for this project team. Making changes that affect the planning and scheduling process related to the top 5 gateway SKUs was seen as an opportunity to gauge how well capability metrics represented process improvement. Continuing with our Gateway SKU 1 example, the project team (which now included the temporary membership of supervisors, operators, and analysts related to the production and consumption of Gateway SKU 1) implemented changes that were intended to improve the capability of the process using a USL of 5 and an LSL of 1. These changes included: moving from bi-weekly production runs to weekly runs; synchronizing the critical downstream work centers for consumption of material

based upon the production schedule for Gateway SKU 1; removal of the stock buffer for Gateway SKU 1; removal of stock buffers for all downstream SKUs; and subordination of the production schedule sequence of all products run on the same resource as Gateway SKU 1 to the production schedule of Gateway SKU 1. The capability results for the nine months following implementation are presented in Figure 2-44.

[Figure 2-44 is the Capability Sixpack for the nine months following implementation: the I-MR chart shows a mean of 2.711 with UCL = 4.307 and LCL = 1.115 (R̄ = 0.6, MR UCL = 1.960), against specifications of 1 and 5. Within StDev = 0.531915, giving Cp = 1.25 and Cpk = 1.07; Overall StDev = 0.471200, giving Pp = 1.41 and Ppk = 1.21.]

Figure 2-44. Capability Results Post-Improvement Data for Gateway SKU 1

Additional data points will need to be reported in order to assess the true impact of the changes made to the planning process for Gateway SKU 1. However, the indices

do indicate an improvement has been made in capability and in the level of inventory, as measured in days-of-stock, from the levels reported during the 20 months prior.

Although the project team proved that capability analysis could be done for this transactional process, the team determined that using the Cp, Cpk, Pp, and Ppk measures was not practical given the process owner's level of understanding of capability analysis. More automated, timely, and simple indicators were explored to provide the owner a general sense of process capability and performance. To accomplish these objectives, simple control charts were created. The I-MR charts presented in Figures 2-45 and 2-46 are provided as examples of the format used to reflect the change in process performance following the implementation of the improvement strategies.

[Figure 2-45 is the post-improvement days-of-stock I-MR chart for Gateway SKU 1: mean 2.713 with UCL = 4.384 and LCL = 1.041 on the individuals chart; R̄ = 0.6286 with UCL = 2.054 on the moving range chart. One point carries an out-of-control flag.]

Figure 2-45. Days-of-Stock I-MR Chart Gateway SKU 1

[Figure 2-46 is the days-of-stock I-MR chart for the gateway total: mean 16.90 with UCL = 18.87 and LCL = 14.94 on the individuals chart; R̄ = 0.74 with UCL = 2.418 on the moving range chart, over roughly 60 observations. One point carries an out-of-control flag.]

Figure 2-46. Days-of-Stock I-MR Chart-Gateway Total

The process improvement results for this business case can best be described as grounds for guarded optimism. Short-term performance of the material requirements planning process has proven successful; long-term process capability is yet to be proven.

Summary measures were incorporated to gauge the change in semi-finished inventory performance resulting from the improvements in the material planning process. These summary measures were valuable as a means of documenting process changes and their effect on process performance, and they helped the process owner communicate general inventory performance to managers and gateway resource team members. The first summary measure tracks the change from the baseline in inventory dollars and days-of-stock, in total, for products manufactured at the gateway resources. The second summary measure reports the change in inventory dollars and days-of-stock for semi-finished SKUs directly downstream from the gateway resources. These inventory measurements were intended to represent the impact of firm planning and group technology scheduling as compared with the baseline inventory levels. Figures 2-47 and 2-48 present the gateway and downstream semi-finished inventory performance improvement metrics.

Figure 2-47. Gateway Inventory Improvement Measure


Figure 2-48. Downstream Inventory Improvement Measure
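Both summary measures reduce to the same arithmetic: days-of-stock is on-hand inventory value divided by average daily consumption value, and the measure reports the change from the pre-improvement baseline. A minimal sketch follows, with entirely hypothetical dollar figures; the actual study values are not reproduced here.

```python
# Sketch of the change-from-baseline summary measures described above.
# All dollar amounts below are illustrative placeholders.
def days_of_stock(inventory_dollars, daily_consumption_dollars):
    # Days-of-stock: how many days the current inventory would cover
    # at the average daily rate of consumption.
    return inventory_dollars / daily_consumption_dollars

baseline = {"inventory": 3_400_000, "daily_use": 170_000}  # hypothetical
current  = {"inventory": 2_600_000, "daily_use": 170_000}  # hypothetical

delta_dollars = current["inventory"] - baseline["inventory"]
delta_days = (days_of_stock(current["inventory"], current["daily_use"])
              - days_of_stock(baseline["inventory"], baseline["daily_use"]))
print(f"Inventory change: {delta_dollars:+,.0f} dollars "
      f"({delta_days:+.1f} days-of-stock)")
```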

The results achieved at the time this paper was written support the conclusion that the process improvements and their supporting control plans are having the desired effect on the material planning process for the gateway work centers. If not for the inventory build undertaken for the gateway work center equipment upgrade, the semi-finished inventory reduction would be between $800,000 and $900,000, and days-of-stock would be averaging between 15 and 16 days. However, special cause situations such as this must be accommodated as part of managing the process.

Chapter 3
Results and Conclusions

This section is presented in two parts. The first part summarizes the conclusions reached surrounding the question: "Can Six Sigma Methodology be successfully applied to transactional processes?" The second part addresses Six Sigma-related topics recommended for future study.

The success or failure of a project cannot simply be attributed to the use of Six Sigma Methodology. Some degree of subjectivity is employed in identifying the critical inputs and solutions for any of the available process improvement techniques, and wherever there is subjectivity there is a risk of selecting the incorrect input variables and/or improvement solutions. Project results are also influenced by how project teams apply the improvement tools available to them. The incorrect application of tools can mislead the team on the importance of the data collected, the relevance of solution criteria, the capability of the process, and the effectiveness of the measurements in identifying process problems. Other dynamics also play a critical role in whether a project succeeds: the availability and quality of project team members, the quality of the process improvement training, the availability and quality of support resources, the amount of funding available (if capital equipment is required), and organizational culture and structure are all examples of variables that may affect the outcome of a project.

3.1 Research Problem Results

The response to the question "Can Six Sigma Methodology be successfully applied to transactional processes?" is not a binary yes or no. This study demonstrated that some tools provided in the Six Sigma Methodology did not fit the transactional process improvement requirements, while other tools complemented the improvement efforts. The results of this business case provide only a brief glimpse into the applicability of Six Sigma Methodology to transactional processes, and its fit with other types of transactional processes should also be given some consideration. This section briefly reports on that relevance and summarizes the applicability of the Six Sigma tools used in this business case.

During the course of my research, I found very few examples that detailed the application of Six Sigma to improving transactional processes. The lack of published, detailed transactional process examples came as somewhat of a surprise. Quality Digest recently conducted a Six Sigma survey to find out who is using Six Sigma and what kinds of programs are being implemented. Approximately 87,500 Quality Digest readers were asked to participate, and a total of 2,870 responses were received. (The results may include more than one response from the same company.) The survey results were interesting with respect to the types of processes Six Sigma was being applied to: the application of Six Sigma to transactional processes appears to be quite strong, although not as widespread as for operational processes. Figure 3-1 presents the results of the Quality Digest Six Sigma survey.

[Figure: bar chart, "Distribution of Six Sigma Programs by Functional Areas," showing respondent counts by functional area (categories include manufacturing, engineering, operations, customer service, administration, purchasing, planning, sales, shipping/receiving, test/inspection, research/development, and documentation), with counts ranging from roughly 100 to 520 respondents per area. Source: Quality Digest Six Sigma Survey. Note: this question was asked only of respondents whose companies have a Six Sigma program in place.]

Figure 3-1. Quality Digest Survey Results [23]

The survey results would lead one to believe there should be an abundance of published work demonstrating the application of Six Sigma to transactional processes. Through my research, I was able to find examples of transactional processes where Six Sigma had been applied, and I gained access to a handful of detailed examples demonstrating what tools were used to arrive at a process improvement strategy and the results of the process improvements. None of the examples I found, however, elaborated on which Six Sigma tools did not work well for their process. The detailed information I was hoping to find was either protected by companies as proprietary or protected by consultants (perhaps due to the costs of implementation discussed in the Quality Digest article).

Another area of confusion surrounding these survey results is the question of what constitutes a Six Sigma project. When a company introduces Six Sigma, there is significant pressure to justify the cost of training and to validate that the methodology works by classifying process improvement gains as fitting under the Six Sigma umbrella. There is also an attraction to the convenience of having one database or location for capturing all project savings. It was evident at GE, AlliedSignal, and the company studied in this paper that many improvement projects were put into the "dollars saved by Six Sigma" category even though they may not have used Six Sigma tools to achieve the results. So how many Six Sigma projects are really Six Sigma projects?

Another potential explanation for the lack of Six Sigma process improvement examples is the desire of organizations and consulting firms to protect their Six Sigma application knowledge and/or proprietary process information.

Although growing in popularity, Six Sigma has not been embraced or implemented by a majority of organizational sectors. A Quality Digest (Nov. 2001) survey of about 4,300 of its 75,000 readers asked respondents for their perceptions of Six Sigma and, if they had experience with it, the results of that experience. Among the respondents, only a small number of companies have

implemented a formal Six Sigma program, and the vast majority of those were units of large corporations.[24]

Some Six Sigma consultants point to a couple of reasons why Six Sigma has been embraced primarily by large organizations. The first potential reason is that the larger the company, the more areas there are for improvement. Greg Brue, president and CEO of Six Sigma Consultants, explained that Six Sigma methodology depends upon identifying concrete areas for improvement that directly affect the bottom line; the more numerous or glaring the problem areas, the easier it is to launch a successful Six Sigma program.[24] A second potential factor is that small companies tend to have a more difficult time assigning the resources necessary to implement Six Sigma effectively. Thomas Pyzdek, a published Six Sigma author and consultant, and John Kullmann, director of marketing at Six Sigma Qualtec, both suggest that companies with fewer than 500 employees struggle with implementing Six Sigma because of an inability to assign dedicated resources.[24]

As the research for this paper progressed through the DMAIC process, many Six Sigma tools were evaluated for their applicability and use in improving the transactional business case. In general, the tools that were less data-driven and more subjective in their use were more easily applicable. Subjective tool examples include process mapping, the Cause and Effects Matrix, Failure Mode and Effects Analysis, and the Stakeholder Matrix. The Six Sigma tools that were more difficult to apply were more data-related and/or statistical in nature. Examples of these tools include Gage Reproducibility and

Repeatability, live Design of Experiments, Correlation and Regression, Analysis of Variance tests, Chi-Square tests, and classical capability metrics. Perhaps by their nature, transactional process improvement efforts do not lend themselves to data-related tools. The following observations, specific to this business case, may better describe the difficulty of applying some of the Six Sigma tools:

1. The material planning work process that creates the outcome called semi-finished inventory is somewhat invisible. Material planning revolves around information handled and stored in a manufacturing and planning computer application. The observable work results are not very tangible, which makes understanding how the work gets done more difficult.

2. There was a lack of facts and data specific to the material planning process. The process understanding that existed was narrowly focused and somewhat anecdotal. These circumstances made it difficult to identify specific variables that correlated with sub-optimal inventory results.

3. There were insufficient examples and training materials relating to transactional processes to draw experience from. This shortcoming manifested itself in an overemphasis on statistical applications, including the use of computer software (i.e., MINITAB™).

4. Meaningful customer specification limits were not initially available for this process. Classical capability metrics like Cp, Cpk, Pp, or Ppk are more difficult to

apply to transactional processes, and there was pressure from management to describe capability using only these classical metrics.

5. Because the material planning process was largely virtual, experiments were conducted in a laboratory (simulation) setting rather than on the live process (see the sketch following this list). Although this type of simulation protected the process from disruption, it could not predict the response of all process variables as accurately and robustly as a live experiment.

6. The initial scope of the project was too vague for any associated data to be very meaningful. This issue resulted in paralysis-by-analysis as the team attempted to discern how the available data fit the project goals.
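To illustrate the kind of laboratory experiment referenced in item 5, the sketch below runs a toy Monte Carlo comparison of average days-of-stock for weekly versus bi-weekly production runs. Every parameter (daily demand, demand variability, run quantities) is hypothetical; the sketch shows only the general shape of such a simulation, not the study's actual model.

```python
# Hypothetical Monte Carlo sketch of a laboratory simulation comparing
# average days-of-stock under weekly versus bi-weekly production runs
# for a single SKU with noisy daily demand. Parameters are illustrative.
import random

def simulate_days_of_stock(run_interval_days, days=360, seed=1):
    rng = random.Random(seed)
    stock, samples = 0.0, []
    for day in range(days):
        if day % run_interval_days == 0:
            # Produce enough to cover expected demand until the next run.
            stock += run_interval_days * 10.0    # mean demand: 10 units/day
        demand = max(0.0, rng.gauss(10.0, 3.0))  # noisy daily consumption
        stock = max(0.0, stock - demand)
        samples.append(stock / 10.0)             # convert units to days-of-stock
    return sum(samples) / len(samples)

print("weekly   :", round(simulate_days_of_stock(7), 2))
print("bi-weekly:", round(simulate_days_of_stock(14), 2))
```

A run of this toy model shows the expected direction of the effect (more frequent runs roughly halve average days-of-stock), but, as item 5 notes, such simulations cannot capture every response a live process would exhibit.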

3.2 Recommendations for Future Study

As long as a research void exists around using Six Sigma for transactional process improvement, areas of future study will be numerous. The purpose of this section is to briefly describe a few opportunities for future study that were discovered during this business case application.

An area that I found to be in need of additional research and clarification is the application of capability analysis to transactional processes. This type of study could include an analysis of how capability can be defined for processes that lack meaningful specification limits, what tools can be used to define process capability, and the success or failure in achieving long-term capability based upon the tool chosen. Additional study would also be welcome on the effect of assuming data to be normally distributed and the importance of normality in different situations.

Another area of interest is the relationship between how well a project is initially defined and both its success or failure and the span of time until project close. Such a study might consider the amount of data available to support the defined project opportunity, the magnitude of the process selected for improvement, the operating boundaries (capital, resources, etc.), and the support from management. This issue should be investigated for both operational and transactional process improvement projects.

Studies comparing the success rates of Six Sigma and various other process improvement strategies across different organizational sectors and process types could prove helpful in understanding which improvement strategies work best in particular organizational models and processes.

Research is also needed regarding the effectiveness of using simulation in transactional process environments. Topics of study could include recommended simulation tools, DoE application strategies, statistical techniques for measuring simulation results, and simulation validation techniques.

Additional case studies would be helpful in understanding the potential application scope of Six Sigma; for example, whether the methodology can be applied to growth strategies or to academic processes. In what other process settings (operational or transactional) are Six Sigma tools inefficient or a waste of time?

A study centered on the level and breadth of statistical training that should be provided as part of implementing a Six Sigma program would also be interesting. Such a study could demonstrate whether de-emphasizing statistics for transactional Six Sigma projects delivers better and faster results than training that focuses on or emphasizes statistics.

Bibliography

1. NIST/SEMATECH, Engineering Statistics Handbook, National Institute of Standards and Technology, International SEMATECH, 2003.
2. American Production and Inventory Control Society, Inc., APICS Dictionary, Falls Church, VA, 1995.
3. Laux, Daniel T., "Six Sigma Evolution Clarified: Letter to the Editor," 2 pp., online, iSixSigma, 2002.
4. Swinney, Zach, "1.5 Sigma Process Shift Explanation," 2 pp., online, iSixSigma, 2001.
5. Ramberg, John S., "Six Sigma: Fad or Fundamental?" Quality Digest, May 2000.
6. Stamatis, D.H., "Who Needs Six Sigma Anyway?" Quality Digest, May 2000.
7. Basu, Ron, "Six Sigma to Fit Six Sigma," IIE Solutions, 2001, v33, i7, p28.
8. Martin, Sheree, "Six Sigma: Quality's New King?" Fabricating Equipment News, 2001.
9. Costanzo, Chris, "Celebrated Six Sigma Has Its Critics, Too," American Banker, Aug. 28, 2002, v167, i165, p165.
10. Clifford, Lee, "Why You Can Safely Ignore Six Sigma," Fortune, 2001, v143, i2, p140.
11. Gyorki, John R., "Machine Safety Deserves Better than Six Sigma," Machine Design, 1999, v71, i1, p112.
12. Hahn, Gerald I., Hill, William J., Hoerl, Roger W., Zinkgraf, Stephen A., "The Impact of Six Sigma Improvement: A Glimpse into the Future of Statistics," The American Statistician, 1999, v53, i3, p208.
13. Wyper, Bill, Harrison, Alan, "Deployment of Six Sigma Methodology in Human Resource Function: A Case Study," Total Quality Management, July 2000, pS720.
14. Van Kooy, Mark, Edell, Lori, Melchiorre Scheckner, Heather, "Use of Six Sigma to Improve the Safety and Efficacy of Acute Anticoagulation with Heparin," Journal of Clinical Outcomes Management, Aug. 2002, v9, i8, p445.
15. San, Dee C., "Six Sigma Method Application in Reducing ED Wait Time," The Journal of the American Medical Association, July 17, 2002, v288, i3, p290.
16. Pande, Peter S., Neuman, Robert P., Cavanagh, Roland R., The Six Sigma Way, New York: McGraw-Hill, 2000.
17. Chemical Week Associates, "Is Six Sigma Fast Enough?" Chemical Week, March 1, 2000, p26.
18. Breyfogle, Forrest W. III, "Measurement of Process Capability: Cp, Cpk, Pp, Ppk, Probability Plotting and Six Sigma," Smarter Solutions, Austin, TX, 1999.
19. Umble, Michael, Srikanth, Mokshagundam L., Synchronous Management, Volume 2, Guilford: The Spectrum Publishing Company, 1997.
20. Woodall, William H., "Controversies and Contradictions in Statistical Process Control," Journal of Quality Technology, v32, no. 4, October 2000.
21. Deming, W.E., Out of the Crisis, Massachusetts Institute of Technology, Center for Advanced Engineering Study, Cambridge, MA, 1986.
22. AIAG, Statistical Process Control (SPC) Reference Manual, Chrysler Corporation, Ford Motor Company, General Motors Corporation, 1995.
23. Dusharme, Dirk, "Six Sigma Survey: Big Success, But What About the Other 98 Percent?" Quality Digest, Feb. 2003.
24. Dusharme, Dirk, "Six Sigma Survey: Breaking Through the Six Sigma Hype," Quality Digest, Nov. 2001.

