The Future of Forensic Schedule Analysis: The Good, the Hard and the Scary

John C. Livengood, Esq., AIA and Patrick M. Kelly, PSP

CPM scheduling, invented in the mid-20th century, continues to be a fast-evolving technical endeavor that outpaces the capability of judicial decisions to provide legal guidance. As a result of the rapid technical development of CPM scheduling and the more leisurely pace of judicial precedent, CPM scheduling's stepchild, known as forensic schedule analysis, is now driven by three separate factors: 1) The GOOD: the advancement of schedule delay methodological and analytical tools to permit more accurate and precise evaluation; 2) The HARD: the ability to manage, organize and evaluate the explosion of facts now captured on construction projects through advanced project management software programs; and 3) The SCARY: the inability of the legal system to provide precedential certainty on many of the more essential issues relating to schedule delay and CPM scheduling decisions.

Part 1: The Good

This article, the first of three parts, concerns the development of advanced schedule delay methodologies that allow a level of analysis that moves schedule delay analysis closer to an objective "science" and further from a subjective "art." These analysis techniques have received new vigor as a result of the explosion in data, permitting greater accuracy and support for expert opinions. Incidentally, experts also benefit from greater protection from Daubert challenges. This development is good for all involved because it leads to more transparent and more understandable opinions.
The second part of this article, to be published in the next issue of this newsletter, concerns the growth of integrated project management databases that consolidate all data on a project and provide a framework for the management decisions reflected in emails and other data. These databases present unique challenges to the forensic delay expert, who must now be able to capture, organize and analyze more data than has ever previously been available. Consequently, there will be far fewer opportunities for an expert to say "we don't know what happened here." This development means more work for the expert, with less room for ambiguity.
The third part, to be published in a future issue, concerns the long list of CPM issues for which there is little law. That list continues to grow as the technical practice of CPM scheduling evolves, while the number of new judicial decisions slows due to the extraordinary costs associated with litigation and the increasing number of mediations and arbitrations that lack publicly recorded decisions. This development is bad for all involved due to the lack of certainty.

Forensic Schedule Delay Analysis

Forensic Schedule Delay Analysis (FSA) is a relatively new area of knowledge and expertise for architects, engineers, construction contractors, and other project practitioners. Over the past 40 years, many schedule delay methodologies and approaches have been proposed and utilized for analyzing schedule performance on projects, particularly with the advent of CPM scheduling. Nevertheless, with the increased scrutiny associated with tighter project budgets and timelines, and with expert testimony now a fixture of construction litigation, parties preparing schedule delay claims or extension-of-time requests must pay particular attention to industry standards and peer-recognized methodologies. These skills are now essential in addition to the more traditional skills associated with design, engineering and construction.
The need for FSA grew out of the increasingly complex interplay between construction, contracting and case law. Concurrently, the skills and techniques for performing FSA developed in parallel with newer, more sophisticated software applications for accurately modeling and analyzing construction means and methods; these applications are more capable, and at the same time simpler to use, than ever before.
This rapid growth of a relatively new profession created (at least) two problems.

The first problem was communication (and coordination) of the various types of methods available to practitioners. As practitioners used the latest technical tools to develop new ways to look at schedules, they created innovative methods for determining the existence and cause of delays within the CPM network. However, these methods, developed independently by different practitioners, were sometimes similar to techniques developed by others. Practitioners gave their techniques names which were often interesting but not necessarily descriptive, and sometimes those names duplicated names used by other analysts for other, dissimilar techniques. To communicate the specifics of their techniques, many analysts published papers or books, or gave presentations at technical conferences such as AACE International's Annual Meeting (AACE was formerly known as the Association for the Advancement of Cost Engineering International). While this did serve to let others know about the details of the methods in use, it did not help to coordinate, standardize or eliminate conflicts in the overlapping techniques or names.
The second problem was selection of a methodology appropriate to the facts of the case. As the new industry grew, practitioners experimented with new methods of performing FSA, some of which were good and some of which were ineffective, because some methods were acceptable only under certain circumstances or within certain forums, and some were supported by case law relevant only in specific jurisdictions. Some methods required extensive data that may not be available on every project. With these ever-increasing criteria, analysts were left with a series of competing reasons for choosing one analysis method over another.
To aid in the resolution of these problems, AACE published the first American compilation and practice guide to the various methods used in forensic schedule analysis. Called the Recommended Practice for Forensic Schedule Analysis (RP 29R-03), it was originally published in 2007 and has undergone several subsequent revisions reflecting the complex and still-changing character of this recent analytical specialty. (Available at http://www.aacei.org/resources/rp/.)
Despite the open nature of its development and the continuing invitation for any interested professional practitioner to join the ongoing process of refinement, some misconceptions persist about the purpose and worth of the FSA RP. For practicing analysts, the primary benefits of the FSA RP are the functional taxonomy, which eliminates the confusion of non-descriptive and overlapping common names, and the methodology selection standards, which recommend a process by which practitioners can determine the best methodology for their specific case. This makes the FSA RP an invaluable tool for the forensic schedule analyst and one of the best innovations in the field of forensic schedule analysis.

The Functional Taxonomy

The establishment of the "functional taxonomy," presented in Section 1.4 of the FSA RP, was preceded by a detailed review of the major types of analysis methods currently in use. The RP's authors, all very experienced analysts themselves, categorized the methods in a scientific manner by identifying the key characteristics of the techniques and grouping them together by those characteristics. The RP correlates the common names for the various methods to taxonomic names, much as the biosciences use Latin taxonomic terms to correlate regionally diverse common names of plants and animals. This allows the variations in terminology to coexist with a more objective and uniform language of technical classification. For example, the implementation of time impact analysis (TIA) has a bewildering array of regional variations. Not only that, the method undergoes periodic evolutionary changes while maintaining the same name.
By using taxonomic classifications, it is hoped that the discussion of the various forensic analysis methods will become more specific and objective.
The result of the functional taxonomy is a series of names which sound unfamiliar at first (for example, a "windows analysis" becomes the "bifurcated contemporaneous period analysis"); however, within the profession, they are an invaluable communication tool, providing an accurate description of the method in use rather than a potentially confusing common name. Of course, use of the functional taxonomic term by non-practitioners (attorneys and clients) may be inappropriate; in these cases, analysts can easily resort to common names to bridge the gap in understanding. For instance, imagine a situation where the two sides in a dispute have each hired a forensic schedule analyst. The contractor's analyst performed an analysis that he calls a "windows analysis." The owner's analyst, tasked with reviewing this "windows analysis," knows that any number of methodologies have been called "windows" by practitioners and by the courts. In fact, based on their surveys of hundreds of scheduling professionals, the authors of RP 29R-03 identified five of the nine major method groups as having been colloquially referred to as "windows." Without a more meaningful description, the owner's analyst will be unable to quickly determine the contractor's analyst's actual methodology. The communication and coordination problem continues to be exacerbated by practitioners who misinterpret the functional taxonomy as an attempt by the authors of the FSA RP to impose new common names unfamiliar to the cadre of analysts. As a result of this misperception, a few individuals have instead proposed entirely new sets of common names, names which are generally neither common knowledge nor proper functional descriptions of the methodology in use; these new names only create more confusion.
Further, it is important for forensic schedule analysts to understand that this sort of miscommunication between two individuals who both identify themselves as "experts" in the profession of forensic schedule analysis causes unnecessary friction in the dispute resolution process and undermines the entire profession's credibility.

Methodology Selection

One of the most common criticisms of the FSA RP is that it is a recommended practice which does not recommend a single analysis methodology as the "best" method. This is a correct assessment of the FSA RP, because not recommending a single practice was an original intent of the document. Some recent publications have provided a ranking of methodologies (albeit using newly created and non-standard terminology for those methodologies, after criticizing the FSA RP for doing the same) and arrive at the conclusion that one methodology is better than others. Suggestions that the FSA RP should rank methodologies ignore the vastly complicated interplay among the many specific factors that a competent analyst should consider when choosing a method for a particular project.
Section 5 of the FSA RP, entitled "Choosing a Methodology," states in part: Forensic scheduling analysts generally agree that there are a number of qualitative reasons, beyond technical schedule analysis reasons, that should be included in determining which forensic schedule analysis method is to be used for a particular claim. This part of the RP discusses eleven factors that should be considered by the forensic schedule analyst when making a recommendation to the client and its legal counsel concerning this decision. The forensic schedule analyst ought to consider each of these factors, reach a conclusion, and offer a recommendation on methodology, with supporting rationale, to the client and legal counsel in order to obtain agreement before proceeding with the work.
The process of selecting the right methodology, based on the facts of a specific case and on the quality of the existing schedules, is what the FSA RP recommends. No one methodology can be forwarded as the "right" methodology in the absence of analysis of these key factors, and any text that identifies a single "right" methodology for any and all situations, without even allowing for variations in schedule quality and accuracy, is presumptuous and flawed.
Some of the selection factors include:
Contractual requirements. The language of the contract is often a driving force in determining which methodology to use. There are certainly occasions when contract language either specifically dictates which method to use or at least which methods are preferred. This can occasionally lead to confusion: for example, is the "TIA" required by the specification a prospective TIA or a retrospective TIA? Or is it, due to the many variances in terminology, another methodology altogether? Such considerations must be made early in the process, and they will likely affect the selection of a particular method.
Purpose of the analysis. The methods defined in the FSA RP each have advantages and disadvantages in terms of what sorts of delay or disruption they can analyze. For instance, certain methods are stronger at determining the existence of concurrent delay. The analyst must account for the types of damages sought and choose a methodology that will better serve as a tool to prove or disprove the existence of, and entitlement to, those damages.
Source data availability and reliability. Considerations should include the availability and quality of source data. This is an absolutely essential part of methodology selection, particularly when considering an analysis that requires the use of the contemporaneously generated CPM schedules. Methods such as the "windows" family (contemporaneous period analyses, MIPs 3.3, 3.4, and 3.5) require a very high quality series of schedules in order to be effective. This typically means that the network needs to be complete and balanced, updated accurately on a regular basis, reviewed and understood by both parties, and used to plan and execute the construction. These are fairly lofty goals for a schedule series. Nevertheless, the reality of construction is that the projects that end up in dispute often did not have a high quality schedule series, and it is therefore impossible to say that all projects should use some form of "windows." The methodology selection must necessarily consider whether the schedules even exist, and if so, whether they exist in a form that will support analysis. If not, the analyst will have to consider other methods.

The facts of the dispute. The choice of an appropriate methodology for a given claim will rest heavily on the size of the dispute, the complexity of the dispute, and the resulting time and budget available for the analysis. For instance, a $500,000 delay claim may not support the fees necessary to create a detailed, day-by-day as-built schedule. Similarly, even some expensive delay claims, particularly those involving projects with a necessarily linear and invariable order of work, can be analyzed using simpler methods.
The expertise of the analyst. Along with resources available, it is important for both the analyst and the client to ensure that the analyst has the requisite experience in the performance of the selected methodology. Otherwise, there is a risk of the expert incorrectly applying the methodology or missing some critical step in the analysis. If such an analysis is subject to scrutiny in a litigation setting, the results could be disastrous.
The forum for resolution. This is one of the most important considerations in selection of methodology. The FSA RP states: If there is good reason to believe that all issues are likely to be settled at the bargaining table, or in mediation, then the range of options for forensic scheduling methods is wide open as the audience is only the people on the other side. Almost any option which appears persuasive is legitimately open for consideration. On the other hand, if legal counsel believes that the issue will end up in a federal court or a federal board of contract appeals, then the range of options available is considerably narrowed because the Boards of Contract Appeals have, for nearly two decades, insisted that delay issues presented to them must rest on CPM scheduling.
In addition to these selection factors (see the FSA RP for the complete list of factors to consider), the analyst must consider the relevant case law specific to the forum and the geographic location in which the claim is being discussed. There have been a number of calls to annotate the FSA RP with American case law citations, despite the fact that the RP focuses on technical, not legal, aspects of forensic schedule analysis methods; however, even a complete collection of case law relevant to FSA would not recommend a single methodology. It would merely provide the analyst with an additional series of "factors to consider" when selecting a methodology for a specific project within a specific forum, under a specific contract and with a specific set of laws and regulations.

Why Do Different Methodologies Give Different Results?

Certainly one of the most perplexing problems for attorneys, judges, juries, laymen and experts is why different methodologies, each properly performed on the same set of data, yield different results. The truth of this observation is undeniable; many studies by experts with very different views on delay methodology conclude that different methodologies yield different results. But why? We believe the answer lies in what each methodology is calculating. But shouldn't they all measure delay? Yes, the different methodologies all measure delay, but they measure it differently. For example:

As-planned v. as-built methodologies (MIPs 3.1 and 3.2) measure delays against the original baseline schedule and do not reflect updates or changes in the plan.
Windows methodologies (also known as contemporaneous period analyses) (MIPs 3.3, 3.4 and 3.5) measure delays on a month-by-month basis against the then-current update. Therefore, their measurement of delays will reflect changes in planning, shifting the apparent timing of delays as compared to other methodologies. Further, the different variations of this methodology also affect its calculated delays. These authors generally believe that this is the most accurate methodology if the various prerequisites can be met.
The impacted-as-planned method (MIP 3.6) calculates anticipated prospective delays against the original baseline schedule. It fails to consider what actually changed on the job during performance and measures delays that have yet to occur. Because it is a mixture of prospective delay and actual events, it yields different answers than other methodologies.
Time Impact Analysis (MIP 3.7) measures the anticipated impact of delays against an active CPM on a periodic basis. This methodology reflects the various changes that occur during performance. Some experts think this is the most accurate methodology.
Collapsed-as-built methods (MIPs 3.8 and 3.9) measure delays as a residue of the analysis, which is a very different calculation. Further, because the analysis starts its measurement from the end of the project, it tends to identify delays as it encounters them, in reverse order from any of the other methodologies.

Therefore, the differences in delay measurement among the methodologies do not reflect better or worse, or accurate or inaccurate, methods. Rather, they reflect differences in how the delays are manifested in the analysis.

Conclusion

Despite some minor criticisms, some of which have been issued by individuals with a limited understanding of the FSA RP, its intent and its audience, the FSA RP remains one of the best innovations to come to the practice of forensic schedule analysis in the last decade.
The document provides a common language as a basis of discussion between experienced professional analysts while still allowing for colloquial terms and newly developed titles. It gives a series of recommendations for the selection of a specific method that matches the needs of the specific scenario faced by the analyst, rather than an "arbitrary ranking" of methodologies that fails to consider all the situational intricacies, and it provides a series of technical recommendations that serve to minimize an analysis's rate of error. The FSA RP provides experienced practitioners an excellent framework for improving their practice.
The Future of Forensic Schedule Analysis: The Good, the Hard and the Scary

Part 2: The Hard

John C. Livengood, Esq., AIA and Patrick M. Kelly, PSP

This is the second of a three-part series on the future of forensic schedule delay analysis. In the first installment, published in the Fall 2012 issue of ARCADIS' Construction Claims Solutions, we discussed recent advances in forensic analysis techniques that enable more accurate, defensible and realistic conclusions. In this article, we will address one of the principal reasons forensic schedule analysis is becoming more accurate: better underlying data. But that better data is a two-edged sword.
On one edge, the explosion of computerized management systems, the ubiquitous presence of email and other social media, and the advancements in imaging and graphical project depiction (BIM, GIS and photography) have provided potentially astonishing levels of detail on project performance. The other edge is the continued difficulty in culling the relevant facts from the irrelevant. This has always been a problem on construction projects, which have historically generated tremendous quantities of data. Now there is even more data, and much of it is either irrelevant or seemingly irrelevant to the actual delay analysis. The third part of this series will address the scary aspect of forensic delay analysis: the major areas for which no legal precedent exists and is never likely to exist.

Data: Essential to the Practice of Forensic Schedule Analysis

Forensic Schedule Analysis (FSA) has always relied on the accuracy and completeness of the data contained in the schedules and other project records. This reliance is recognized in AACE's Recommended Practice on Forensic Schedule Analysis (RP 29R-03 (2011)), which states:
[The level of detail that] may be adequate for project controls may not be adequate for forensic scheduling ... the initial focus here is in assuring the functional utility of the CPM baseline schedule for the purpose of analysis as opposed to assuring the reasonableness of the information that is represented by the data or optimization of the schedule logic.
The first concept expressed here is that more data is required for performing forensic scheduling analysis than is required for prospective scheduling. Project planning, a necessary first step to project scheduling, requires experienced supervisors, managers and estimators who are familiar with the detailed scope of the project, past production history, labor skills and company work policies.
These skill sets translate into a planned method for construction, which in turn leads to project scheduling. The experienced scheduler then translates the planning sequences into activities that represent the complete scope of work, with durations that represent the ability of reasonably estimated manpower levels to complete that work. These activities are then assigned logic in accordance with the planning sequences to show the order in which the activities will be executed, ideally respecting any limitations on the number of crews or resources on-site at a given time.
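The mechanics just described, activities with durations tied together by logic, can be sketched as a minimal critical path calculation. This is a simplified illustration, not the method of any particular scheduling package; the activity names, durations and logic below are hypothetical.

```python
# Minimal CPM forward/backward pass over a hypothetical activity network.
# Each activity has a duration (in days) and a list of predecessors.
# Activities are listed in a valid work order (predecessors first).
activities = {
    "mobilize":   {"dur": 5,  "preds": []},
    "foundation": {"dur": 20, "preds": ["mobilize"]},
    "steel":      {"dur": 15, "preds": ["foundation"]},
    "electrical": {"dur": 10, "preds": ["foundation"]},
    "closeout":   {"dur": 5,  "preds": ["steel", "electrical"]},
}

def cpm(acts):
    # Forward pass: earliest start/finish for each activity.
    es, ef = {}, {}
    for name in acts:
        es[name] = max((ef[p] for p in acts[name]["preds"]), default=0)
        ef[name] = es[name] + acts[name]["dur"]
    finish = max(ef.values())
    # Backward pass: latest start/finish without delaying the finish date.
    ls, lf = {}, {}
    for name in reversed(list(acts)):
        succs = [s for s in acts if name in acts[s]["preds"]]
        lf[name] = min((ls[s] for s in succs), default=finish)
        ls[name] = lf[name] - acts[name]["dur"]
    # Zero total float (late start == early start) defines the critical path.
    critical = [n for n in acts if ls[n] == es[n]]
    return finish, critical

finish, critical = cpm(activities)
print(finish)    # → 45
print(critical)  # → ['mobilize', 'foundation', 'steel', 'closeout']
```

Here the electrical work carries five days of float, so a short delay to it does not move the finish date, while any delay to a zero-float activity does; this distinction is what every forensic method ultimately interrogates.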
This work is significant and requires experience, historical data and educated estimates. At the same time, the level of detail generated by the scheduler for an upcoming project is usually limited by the needs of project management.
When selecting an "appropriate level of detail," most project management teams find excessively detailed schedules to be unwieldy and ultimately unhelpful in executing the project. No manager would want his or her schedule for electrical conduit installation developed on a room-by-room basis; it would overwhelm the team's ability to monitor and update the schedule.
Despite the wishes of the project manager, however, it is possible for a construction schedule to be developed to such a level of detail, because modern schedule development techniques and software permit (though do not require) extraordinary complexity in CPM schedules, such that even moderate-sized projects can have schedules so detailed that they defy reasonable analysis or review during construction. So while the expertise required to create a project schedule is considerable, the eventual level of schedule detail is limited by the needs of the field management team.
In contrast, the forensic schedule analysis often requires data on what actually happened at a greater level of detail than the typical prospective schedule, and the exact type of data that is necessary can vary greatly depending on the type of analysis that needs to be performed.
Further, the type of analysis that needs to be performed varies with the nature of the dispute. In order to examine the events on a project, it may be necessary, in the example cited above, to examine the electrical conduit installation performance for individual rooms to identify when and where specific delays occurred. As part of that analysis, the expert might have to review many forms of information, including the original designs, change directives, change orders, daily reports, meeting minutes, field observation reports, production reports, inspection reports, deficiency reports, installation photos, relevant job cost records, man-hour records, letters, telephone notes and emails relating to a specific set of conduit installations in a particular wall.

The Data Explosion

Data is therefore essential to the practice of Forensic Schedule Analysis. But where does that data come from? While many construction projects still maintain minimal records (other than the ubiquitous email files), most large construction companies have policies in place requiring that they maintain organized, comprehensive, and surprisingly detailed records. Compared to manufacturing activities of similar value, construction is a heavily "papered" industry. Before the advent of computers, it was common to maintain designs, change orders, daily reports and meeting minutes, as well as labor records, on all but the smallest projects, and those records were kept and organized in paper format.
With integrated project management software, many of the formerly hand-created documents are now computer-driven forms that are easily sortable, searchable and retrievable. One task often assigned to today's project controls specialists is to integrate the material developed at the outset of a project as part of the planning process, including record estimates, productivity assumptions, equipment requirements and the like, with the actual construction-related history to create a huge database that gives those who work on the next project the benefit of what was learned on previous ones. With the advent of Building Information Modeling (BIM), the level of available design integration and detail has also increased exponentially.
The result is a virtual explosion of documents and data. While large projects once routinely had documents measured in the hundreds of thousands or even a million "pieces of paper," the record production of modern large-scale projects is measured in terabytes of data, equating to tens of millions of documents. For project managers, there is some help available. As discussed in the Engineering News-Record article of 3-Dec-12 (Vol. 269, No. 16), "What You Don't Know About Your Data Can Hurt You," many major construction companies are investing significant funds in using or developing data integration and analysis tools. While some of this development is customized, several vendors market sophisticated analytic tools that can integrate not only existing construction management systems but also schedules, job-cost reports, production reports and invoicing. This integration allows contractors and owners to better manage ongoing projects and know what is happening, and it also gives them more accessibility and analytical capacity for these voluminous documents. BIM is allowing all levels of the construction process to better understand and plan the work that needs to be done. "Clashes" can be reduced, and contractors can work more efficiently.

Data is a Double-Edged Sword

One of the goals of data control is to reduce or even to eliminate disputes through early detection of problems and better understanding (through the records) of what happened; this is in keeping with the old adage that those with the better records win. However, the goal of "no claims" is not always achieved. In the event of claims or disputes, the same software that was used to manage a project becomes invaluable because it permits the analyst to organize data at an unprecedented level of detail. Not only can schedule issues be more clearly dissected, but associated production can also be illustrated.
The consolidated record can portray sequences of events and interrelationships that heretofore had been obscured. Unfortunately, it is often the projects that do not pay attention to the proper management of documentation that result in large and seemingly intractable disputes; certainly, the authors always seem to be working on projects that have major schedule and document problems. Further, we seldom encounter a project in dispute that was managed using sophisticated record documentation tools, indicating that project managers who embrace the more advanced and integrated management systems seem to be less exposed to disputes that are beyond their ability to analyze and resolve. Poorly documented projects certainly have schedules and may have electronic daily reports, but most of the rest of the data sits in individual silos: letters in a word processing program, change orders in another program, schedules in Primavera, Excel spreadsheets for almost every conceivable purpose, and finally, email files with CCs and BCCs to numerous individuals, most sent as part of extraordinarily long email strings. The data exists, but not in any convenient, integrated way that would allow for expedient collation and correlation by a forensic analyst.
As discussed in our first article, developments in the quality of scheduling software have also created a wide array of data that can potentially be used for the resolution of disputes; however, again, this ability has come with associated risks. Newer scheduling software, such as Primavera Version 6 (P6), maintains a single database that contains all of a contractor's project schedules together. The benefit of this arrangement is that the project scheduler can create a table of resources (manpower and equipment) that can be shared across all projects, thereby ensuring, for instance, that a particular unique crane isn't needed on two projects during the same timeframe. Presumably this makes scheduling of resources more efficient and therefore increases the likelihood of projects being finished on time. In a forensic context, however, this creates a different problem. Can a schedule that was created in such a database be accurately analyzed without the analyst having the entire database? This would mean that in order for a contractor to have an analyst perform a delay analysis on a set of schedules, the contractor would have to give the analyst not only the schedule series in question but potentially all the other project schedules in that database. This would not be acceptable to many contractors, particularly if they were compelled to turn over seemingly unrelated schedule files to the opposing side during the discovery process.

Developing Solutions

As a result of this data explosion over the past decade, many law firms have adopted litigation support databases to manage the massive volume of documents produced in discovery on construction lawsuits. These databases include CT Summation, Concordance, iConnect and Ipro's Eclipse, all of which provide comprehensive search capabilities through millions of documents.
These programs allow the user to view a document image while simultaneously viewing a database pane with information and comment fields about the document. Data fields often include Bates number, date, author, recipient, subject, document type, issue and summary, but there are many other possible fields.
The greatest advantage of document databases is that they enhance the user's ability to search the text of all documents for specific keywords and phrases. Native-format electronic documents, such as emails, already contain their content in text form, and an Optical Character Recognition (OCR) process reproduces and stores the text of scanned hardcopy documents. As experts and attorneys develop a better understanding of the key issues of a case, they are able to perform accurate and refined searches to find pertinent documents. As new or additional issues are identified, the text search capability provides a cost-effective way to review, rather than having to comb through all the boxes or individual files multiple times. Combining the search capabilities with both objective coding and analysis summaries can greatly reduce the overall data analysis time for large volumes of documents. These topics are more fully discussed in Eric D. Schatz's "Finding A Needle In A Paperstack" in ARCADIS' Construction Claims Solutions Summer 2011 issue.
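The keyword search that these databases provide rests on a simple inverted-index idea. The Python sketch below illustrates that idea only; the document IDs and contents are invented for the example, and real litigation databases add stemming, phrase search, and proximity operators.

```python
# Minimal inverted-index keyword search over documents that have already
# been reduced to text (natively or via OCR). Illustrative only.
from collections import defaultdict

def build_index(documents):
    """Map each lowercase word to the set of document IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word.strip('.,;:"()')].add(doc_id)
    return index

def search(index, *keywords):
    """Return IDs of documents containing every keyword (AND search)."""
    results = [index.get(k.lower(), set()) for k in keywords]
    return set.intersection(*results) if results else set()

# Hypothetical coded documents keyed by Bates number.
docs = {
    "BATES-0001": "Notice of delay on crane delivery.",
    "BATES-0002": "Change order for concrete rework.",
    "BATES-0003": "Crane delay impacts critical path.",
}
index = build_index(docs)
print(sorted(search(index, "crane", "delay")))  # ['BATES-0001', 'BATES-0003']
```

Once the index is built, each additional issue-driven search is nearly free, which is why revisiting a document set for a newly identified issue no longer means re-reviewing every box.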
Perhaps the most intriguing, and possibly frightening, development associated with these huge volumes of data is the rise of "predictive coding." Essentially, this is where a computer program "learns" what to look for in a large array of electronically stored information (ESI) based on iterative reviews by professionals for key words or phrases. Science fiction? Hardly. Experts believe it is already here and may soon be essential to legal practice, as it saves many hours billed by the $400/hour legal associate. This process is likely to become an essential part of litigation in the future.
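As a loose illustration of how predictive coding "learns" from reviewer decisions, the toy Python sketch below scores uncoded documents by word overlap with a handful of reviewer-coded examples. All documents and labels here are invented, and production tools use far more sophisticated statistical models than this word-count heuristic.

```python
# Toy "predictive coding" sketch: learn word frequencies from documents
# reviewers have already coded, then score uncoded documents.
from collections import Counter

def train(coded_docs):
    """Count word frequencies in reviewer-coded relevant/irrelevant docs."""
    counts = {"relevant": Counter(), "irrelevant": Counter()}
    for text, label in coded_docs:
        counts[label].update(text.lower().split())
    return counts

def score(counts, text):
    """Crude relevance score: relevant-word hits minus irrelevant-word hits."""
    words = text.lower().split()
    relevant_hits = sum(counts["relevant"][w] for w in words)
    irrelevant_hits = sum(counts["irrelevant"][w] for w in words)
    return relevant_hits - irrelevant_hits

# Hypothetical reviewer decisions from an initial "seed set" review.
coded = [
    ("crane delay claim schedule impact", "relevant"),
    ("delay damages critical path", "relevant"),
    ("holiday party lunch menu", "irrelevant"),
]
model = train(coded)
print(score(model, "schedule delay on crane"))  # positive: likely relevant
print(score(model, "lunch menu reminder"))      # negative: likely irrelevant
```

The iterative part of real predictive coding is the loop around this: reviewers correct the model's highest-uncertainty predictions, the model is retrained, and the cycle repeats until accuracy on a validation sample is acceptable.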
There is legal precedent for this development. In Da Silva Moore v. Publicis Groupe, 2012 WL 607412 (S.D.N.Y. Feb. 24, 2012), Judge Peck found that the predictive coding protocol was clear enough that the plaintiff's objection to its use was premature. In EORHB, Inc., et al v. HOA Holdings, LLC, C.A. No. 7409-VCL (Del. Ch. Oct. 15, 2012), a commercial dispute, the judge ruled that the parties must use predictive coding and share a single vendor in the process.
Finally, in a building collapse case, Global Aerospace Inc. v. Landow Aviation LP, 2012 WL 1431215 (Va. Cir. Ct. Loudoun County, Apr. 23, 2012), the court ruled that predictive coding was permissible, but the opposing side was free to challenge the completeness of that process.
Conclusion
Forensic experts therefore have two great advantages in performing their work over just a few years ago. First, as discussed in the initial part of this series ("Part 1: The Good," published in the Fall 2012 issue of ARCADIS Construction Claims Solutions), there is a more organized and clearly defined set of delay methodologies; second, as a result of BIM, databases and electronic project management software, there is better data with which to understand and support conclusions concerning delay. This second advantage is a hard one, because it comes with the greater difficulty of segregating the small number of relevant facts from millions of irrelevant facts; the computer-aided work remains human-driven.
P6 User Tip: Change Setting for Proper Baseline Comparison
This article provides important tips for developing baseline (target) comparison schedules using Oracle's Primavera P6.
The article also highlights an issue with administrative preference settings in P6, which can cause incorrect baseline comparisons. Bar charts and baseline comparison tables are useful in showing comparisons between the current plan and the original plan, or between the current project status and a previous update.
These comparisons were previously generated in Primavera Project Planner (P3) by setting the comparison project to be a target schedule. In P6, the same features are available through the Maintain Baselines and Assign Baselines features on the Project menu. The names of these features are somewhat counterintuitive: baselines must be maintained before they can be assigned. In addition, projects that are converted to baselines using the Maintain Baselines command are not available for viewing or manipulation, except by using the baseline features available in the project to which the baseline is assigned. The baseline schedule itself will no longer appear in the Projects view.
Once baseline schedules have been created using the Maintain Baselines command and have been assigned as the primary, secondary, or tertiary baseline using the Assign Baseline command, comparison layouts can be generated. However, one critical step is necessary. Select the Earned Value tab on the Admin Preferences dialogue, available from the Admin menu. At the bottom of the dialogue, under the heading Earned Value Calculation, for "When calculating earned value from a baseline use," select "Budgeted values with current dates."
The setting, Budgeted values with planned dates, will not result in a correct comparison unless the user has specifically set the planned dates using a global change and intends to compare the current schedule to the planned dates in the baseline schedule, as opposed to the normal dates that populate the Start Date and Finish Date data fields most commonly displayed in P6. In other words, if you want to compare the schedule to a previous schedule or baseline in a typical situation, change this setting. There would be few situations where you would want to generate a comparison to a set of planned dates that differ from the start and finish dates in the baseline (or target) schedule.
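The comparison itself is simple date arithmetic once the correct fields are chosen. The Python sketch below shows the finish-date variance a correctly configured baseline comparison reports; the activity IDs and dates are hypothetical.

```python
# Finish-date variance between a baseline and the current schedule:
# positive values mean the activity is finishing later than the baseline.
from datetime import date

# Hypothetical finish dates for two activities.
baseline_finish = {"A100": date(2024, 3, 1), "A110": date(2024, 4, 15)}
current_finish = {"A100": date(2024, 3, 8), "A110": date(2024, 4, 15)}

def finish_variance(baseline, current):
    """Days of slip per activity relative to the baseline finish date."""
    return {a: (current[a] - baseline[a]).days for a in baseline}

print(finish_variance(baseline_finish, current_finish))
# {'A100': 7, 'A110': 0}
```

The point of the Admin Preferences setting is precisely which date fields feed the `baseline` side of this subtraction: the baseline's actual start/finish fields, or separately stored planned dates.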
1. Delay Analysis Methodology: Time Impact Analysis. Focus Planning Ltd
2. Disclaimer: Information contained within this presentation is for education purposes only. How a programme or schedule is built, maintained and managed is the responsibility of the owning organisation. Focus Planning Ltd accepts no responsibility for changes made to programmes or schedules which are altered as a result of reading slides contained within this presentation. The configuration and settings of computer software are the responsibility of the license holders, and Focus Planning Ltd accepts no liability for the configuration used by the license holder.
3. What is Time Impact Analysis? Time Impact Analysis is a method of calculating the delay to a project based on the delay to project completion, and was developed by MDC Systems Ltd. It is normally regarded as a best-practice method for assessing a single delay on the critical path of a project schedule that is in progress. The analysis compares the schedule pre-delay to the schedule with the delay included, and calculates the duration variance between the two schedules to provide the contractor with a basis for estimating the delay impact. Time Impact Analysis is best used to calculate delays looking forward rather than back; other methods, such as As-Built But-For (ABBF), can be more accurate for calculating retrospective delay. (See http://www.slideshare.net/AdamGarnham/delay-analysis-methodology-asbuilt-butfor-variation-in-p6)
4. Process Steps: In order to successfully identify the delay, the following steps should be followed: determine the baseline schedule; determine the as-built data source; decide on the TIA date; model the delay using a fragnet; merge the fragnet with the schedule; update the delay durations; review the results; determine the delay actuals; communicate.
5. Step 1: Determine the Baseline Schedule. The first step in completing a TIA is to determine the correct baseline schedule against which to judge the delay. Most construction contracts will contain clauses relating to the agreed or confirmed baseline, although this is often an implied rather than regimented factor. Whatever contract mechanics are in place, the Contractor should determine the correct baseline to use for the analysis and review it to ensure it is reasonable for the project.
6. Step 2: Determine the As-Built Data Source. This would normally be the updated schedule, but again it is contractually important to agree what schedule version to use. Contracts such as NEC designate the schedule as the Accepted Programme; which schedule to use will usually be determined by the contract in place. Once this has been agreed, assign the baseline from Step 1 to this project in P6.
7. Step 3: Decide on the TIA Date. This is the date you believe the delays commenced and the start date for your delay fragnet in the next few steps. This will normally be advised by the parties on site. For example, if a TIA is being produced for the delayed demolition of a building, and the delay was due to the ball and crane not arriving on site, the TIA fragnet start date would be the day the crane was planned to arrive, per the as-built data source from Step 2. Once agreed, some planners prefer to add this date as a milestone in the schedule to give the project stakeholders a graphical representation of the TIA period.
8. Step 4: Model the Delay Using a Fragnet. At this stage we model the delay according to the logic and duration decided by the project team. This is the basis for the claim. To do this, create a copy of the schedule, enter the delay as one or more new activities, and logically link them to the successors on the critical path according to the build process. Do not reschedule yet: the purpose of the fragnet at this stage is to confirm the delay logic and its relationship to the current programme for acceptance.
9. Step 4: Model the Delay Using a Fragnet (continued). Once the fragnet has been entered and confirmed by the planner, it needs communicating to the project team so they can comment on the logic and placement of the delay. Once all are in agreement with the fragnet detail, it should be kept aside ready for Step 5.
10. Step 5: Merge the Fragnet with the Current Schedule. Now that the fragnet has been accepted by the project team, it needs incorporating into the current schedule. Going back to the current schedule, take a new baseline to record the pre-fragnet dates; this baseline is for record purposes and does not need assigning at this stage. Now, using the accepted fragnet, enter the delay activities into the current programme with the logic as agreed and link them to the affected successors in the schedule. Reduce the delay activities' durations to zero days and reschedule the project. The project completion date should remain the same, as the new activities have zero duration. Because we assigned the baseline in Step 2, you can check this is the case by adding the baseline bars in the Gantt Chart or by adding the Variance - BL Project Finish Date column to the activity table (the value should be zero).
11. Step 6: Update Delay Durations. Now that the fragnet has been entered, it is time to enter the delay durations against the delay activities. To do this, enter the duration against the activities in the Original Duration field; this should then be mirrored in the Remaining Duration and At Completion Duration fields. Reschedule, and you will see the bars from the fragnet increase and the remaining activities push out, moving the project completion date.
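The mechanics of Steps 5 and 6 can be sketched with a toy forward-pass CPM calculation in Python: inserting the fragnet at zero duration leaves project completion unchanged, and entering the delay duration then pushes completion out. The activities, durations (in working days) and delay are all hypothetical.

```python
# Toy forward-pass CPM: durations in working days, hypothetical activities.
def project_finish(activities):
    """activities: {name: (duration, [predecessors])}; returns the latest
    early finish, i.e. the project completion day."""
    finish = {}
    def early_finish(name):
        if name not in finish:
            duration, preds = activities[name]
            start = max((early_finish(p) for p in preds), default=0)
            finish[name] = start + duration
        return finish[name]
    return max(early_finish(a) for a in activities)

schedule = {
    "Demolish": (10, []),
    "Foundation": (15, ["Demolish"]),
    "Frame": (20, ["Foundation"]),
}
base_finish = project_finish(schedule)  # day 45

# Step 5: merge the fragnet at zero duration -- completion is unchanged.
schedule["CraneDelay"] = (0, ["Demolish"])
schedule["Foundation"] = (15, ["CraneDelay"])
assert project_finish(schedule) == base_finish

# Step 6: enter the 5-day delay duration -- completion moves out 5 days.
schedule["CraneDelay"] = (5, ["Demolish"])
print(project_finish(schedule) - base_finish)  # 5
```

The zero-duration check in Step 5 is worth the effort: if completion moves before any duration is entered, the fragnet logic itself has disturbed the network, and the measured impact would be contaminated.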
12. Step 7: Review Results. Note the variance to the project completion milestone. This variance is important in calculating the cost impact when applying for damages; for example, a delay causing another week on site means another week of paying for staff, prelims, road closures, etc. The variance is also quantified in the Variance - BL Project Finish Date column we added to the activity table earlier. It is important at this stage to ensure the impact on project completion is measured according to the contract terms. For example, the contract may specify calendar days, in which case the delay should also be communicated in calendar days, and so on.
13. Step 8: Enter Delay Actuals. The final step is to enter the delay actual dates. If this is a calculation of an upcoming delay, there will be no actual dates on the delay activities, and the results from Step 7 should be communicated. If this is an ongoing delay, the actual delay start date should come from the last schedule update. In order to enter these, you will need to assess when the successor to the delay becomes critical. As you will remember from Critical Path Analysis, an activity becomes critical at its late start date, so the delay start date will be the successor's original late start date + 1 day. Mark this date as the delay start date and reschedule. For more information on Critical Path Analysis, see http://www.slideshare.net/AdamGarnham/guide-to-the-critical-path-and-critical-path-analysis
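The late-start rule in Step 8 can be illustrated with a toy backward pass in Python. Day numbers stand in for calendar dates, and the three-activity chain is hypothetical: the delay affects crane mobilisation, whose successor is the demolition activity, so the delay becomes critical the day after demolition's original late start.

```python
# Toy backward-pass CPM: compute late start per activity, then apply the
# "successor's late start + 1 day" rule for the delay's critical start.
def late_starts(activities, project_finish):
    """activities: {name: (duration, [successors])}; durations in days.
    Backward pass from the project finish day to each late start."""
    late_finish = {}
    def lf(name):
        if name not in late_finish:
            duration, succs = activities[name]
            # Late finish = earliest successor late start (or project finish).
            late_finish[name] = min(
                (lf(s) - activities[s][0] for s in succs),
                default=project_finish,
            )
        return late_finish[name]
    return {a: lf(a) - activities[a][0] for a in activities}

acts = {
    "MobiliseCrane": (2, ["Demolish"]),
    "Demolish": (10, ["Cleanup"]),
    "Cleanup": (3, []),
}
ls = late_starts(acts, project_finish=20)
print(ls)  # {'MobiliseCrane': 5, 'Demolish': 7, 'Cleanup': 17}

# A delay to crane mobilisation turns critical once its successor's
# late start is consumed: successor late start + 1 day.
delay_start = ls["Demolish"] + 1
print(delay_start)  # 8
```

Until day 7 the delay merely consumes float; from day 8 onward every further day of delay moves project completion, which is why the actual delay start is pegged there rather than at the day the problem was first reported.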
14. Step 9: Communicate the TIA. Now you can communicate the completed TIA to the project and commercial teams, ready for a cost analysis process to take place. As well as displaying the Gantt Chart, the following columns can be helpful in communicating the delays: BL Project Start (the original planned start date for activities); BL Project Finish (the original planned finish date); Start (the new start date with the delay included); Finish (the new finish date with the delay included); Actual Start (the actual start date with the delay included); Actual Finish (the actual finish date with the delay included); and Variance - BL Project Finish Date (the variance between the original planned finish and the delay-included finish).