
Proceedings of the 2012 9th International Pipeline Conference


September 24-28, 2012, Calgary, Alberta, Canada



Steven Dresie
PII Pipeline Solutions, a GE Oil & Gas and Al Shaheen joint venture (PII)
7101 College Blvd, Suite 1100, Overland Park, Kansas, United States of America
Email:

ABSTRACT

This case study will examine the implementation of an integrated suite of pipeline integrity management software tools and discuss related challenges during the configuration and rollout phases. In this case, pipeline facility data was migrated from paper sources into a centralized database where it is regularly maintained and provides the basis for related operations and integrity management components. Existing integrity management procedures and guidelines formed the core specifications for configuration of the engineering assessment software tools. Using these documents, the software suite now standardizes and automates the processing of ILI data, condition assessment, risk assessment, and ECDA (NACE SP0502) management. The system produces a documented integrity plan customized to report specified key performance indicators and is integrated with the enterprise work order management system. The results of the engineering assessments and planning are maintained in the enterprise database and used to power web-based reporting available to a wide range of personnel inside the organization. Challenges include addressing gaps in data, bringing procedural documents and operating groups together, applying the system to existing operations, and ongoing support. This case study is intended for operators considering an integrated pipeline integrity management software solution or looking to improve the system currently in place inside their organization.

INTRODUCTION

Bringing an integrated PIMS software solution online can be a daunting task. Today’s pipeline operators rely on a growing number of tools to manage and communicate day-to-day activities as well as near- and long-term planning for asset

maintenance and integrity assurance. These tools tend to evolve on their own and many times contain redundant, if not conflicting, data that can lead to confusion between groups and decisions based on incomplete or outdated information.

An integrated PIMS environment enables practices that result in more efficient data maintenance and information reliability while providing consistent application of integrity procedures. The ideal system provides a communication platform for a large audience inside a pipeline organization through use of web-based reporting tools that produce content for functional groups ranging from maintenance and operations, to integrity engineering, to management. Integration with other enterprise applications, such as a work order management system, further assures that activities are tracked, executed, and reported to meet operating goals.

THE ENVIRONMENT BEFORE INTEGRATION

Architecting a Pipeline Integrity Management System using a series of software tools and process definitions first requires a close look at the current operating environment. In most cases, pipeline systems have been in the ground and carrying product for several years. At the time the pipe was laid, information about its location, materials, and design was neatly organized, accessible, and owned by the project team responsible for pre-commissioning activities. This would be the ideal time to establish an integrated PIMS to capture the well-understood data and build the base for future management of the asset. However, with many pipeline systems having been built before the availability of the computing resources we have today, we find ourselves in a quite different situation. In this case, when the pipeline system was put into commission (prior to the

availability of computerized PIMS components), it grew to become a decentralized collection of pipeline information and procedures.

As the pipeline system is operated over time, data is continuously gathered and experience is gained. Unfortunately, this information is often difficult to access and use in an automated fashion. The data is spread across different operating groups, each maintaining the information they have gathered in the way it is most useful to them. A Pipeline Maintenance group may have information about in-field conditions and coating repairs. The Corrosion group may have data about cathodic protection and coating condition. In many cases, different methods exist between the groups for capturing and archiving this information. As time passes, disparate data collection systems break down. It is not uncommon to find that a wealth of information exists primarily in the memory of a few key individuals. Of course, this is not a particularly desirable situation. Not only will that wealth of information retire at some point in the future, but currently it is at best difficult to fully apply to integrity assessment calculations.

While this scenario shows why data collected in the past is important to consider, we also need to think about how the data coming through the door is treated today. It is likely that present-day data collection has already begun to happen electronically, although the means for accomplishing this still vary from department to department. As we consider the ease of capturing data electronically, we realize that this practice leads to duplicated data between groups, as certain components of the pipeline data form a core that each group extends by applying their own specialized findings. The fact that electronic data can be easily shared between groups complicates what originates as good intentions by causing out-of-synchronization scenarios: shared data is updated by the primary owner without redistribution to others.

In addition to the way data management occurs in the different operating groups, we can look at how communication tools have been put in place to help track and automate activities. These can include tracking of day-to-day activities in the maintenance group and regular reporting and planning from the integrity group. Each department gets good use out of these tools yet there is value to be recognized by making this information more accessible to others for planning purposes.

When it comes to planning, the non-integrated, departmentalized data systems have a particular impact on the integrity group. To conduct the necessary assessments, a significant effort is required to collect information from each department and incorporate it into assessment calculations. When these assessment calculations are performed with the aid of software tools, a full assessment and planning cycle can stretch over several weeks as different scenarios are applied, researched, and adjusted before finally being accepted. If

additional data is required to be gathered as part of the assessment, this new data often does not make its way back to the originating department’s collection system. Once the plan is published it is typically distributed in paper form to the various stakeholders. Updates on progress toward plan goals may be distributed from time to time in paper reports or during staff meetings. The monitoring of actions resulting from the plan can take significant time and effort to track and record from department to department. The integration of these efforts through linking of engineering and planning software reduces the time spent chasing task completion status.

As we can see from the description of these activities, more efficient processes will reduce the effort required of pipeline management personnel and reduce the risk of error that results from inconsistent information across groups. While new software tools will make sharing data and reporting less time consuming, it is important to note that in some cases the software tools act as the implementation of a broader process with goals of enforcing procedures to attain more consistent results. The motivation to integrate systems can originate from a number of sources inside a pipeline organization. It is often the integrity planning department that benefits the most from integration, because of the demands for far-reaching, up-to-the-minute data to be used in assessment calculations. However, many of the other departments benefit from the ability to access one central data repository through tools that provide simplified data query functionality.

SUPPORTING THE CASE FOR CONSOLIDATION

The consolidation of data maintenance efforts offers some easy-to-capture benefits that are realized nearly immediately upon implementation. These can be summarized in the following groups:

- Reduced effort to maintain data
- More accurate and timely information
- Collaborative organization culture

The first category focuses on the removal of duplicated data management effort, as multiple departments maintain the same pieces of data. This efficiency may not be a one-for-one savings across departments, but the reductions by each group add up quickly. It is also important to consider the overhead that is associated with each group that maintains data on their own. Each group will likely have put in place their own procedure for gathering, recording, and auditing the data they need for operations. These procedures may not be eliminated in their entirety, as gathering activities may still take place to be fed into new processes, but again, the reductions in small actions across multiple groups add up quickly.

In addition to the reduced effort to maintain pipeline data, the consolidation of effort also results in better application of

quality assurance procedures. It is difficult to imagine how separate departments that collect and maintain data would have matching or even equivalent quality procedures. Consolidating data maintenance efforts introduces the opportunity to revisit quality procedures and apply the best available to the practices of the new data custodian. With a single group now more focused on the maintenance of pipeline data, new practices can be put in place that help round out the availability of existing data and help flag situations where data that should be known is missing from the system. With a view of the full system, it is much easier to see where gaps exist than when the data is segregated and no complete picture of the available data is possible. These are all valuable outcomes that will have a real impact on future pipeline assessments.

Structuring data maintenance as a consolidated effort also reduces the lag between the time data is gathered and when it is available for use by other departments. With a clearly defined process for submitting new and updated data, a production line environment can be established where new data enters a queue and can be tracked to completion when it is ultimately entered into the system. This practice eliminates the need to roam from department to department collecting the supporting data needed to complete assessments. As the consolidated data custodian gains experience in the data maintenance activities, their proficiency for data management will also improve, allowing data to be recorded more quickly with fewer errors, again reducing the time needed to get production-ready data distributed to those who need it.

The consolidation of data management efforts also affects the culture of a pipeline operating company by creating an environment where interaction is encouraged and sharing information allows for cooperative analysis and knowledge transfer. It helps communication come more naturally and increases productivity as data is shared and a common relationship to the information is created. By placing some simple reporting tools on top of the consolidated data, more people can review what has been gathered and provide input from areas that perhaps had not been previously considered.

DELIVERING INTEGRATION

When preparing for and implementing an integrated integrity management system, it is important to build a solid foundation. The consolidation of data inside the pipeline organization will be this foundation. Without a trusted master data repository, the system will be handicapped from the start. All engineering assessments and resulting integrity plans will be built as an extension of the pipeline and operating data consolidated into the master data repository.

There are several questions to reflect on that will guide the approach to integration. Here are a few to consider:

- Who will be responsible for the newly integrated system and its components?
- What do I want the integrated system to be capable of? What does this mean for data structure and assessment tools?
- How can the integrated system extend beyond the pipeline integrity group?
- How will the integrated system improve my integrity management program?

Early on it is important to identify who will take on this new responsibility of data ownership. Some deep thinking should be done to determine if the necessary skills exist, have the potential to be developed, or need to be recruited. A common trap is to assume these responsibilities can be absorbed by a person who is already very busy with daily tasks. When this person becomes overwhelmed and is unable to keep up, the foundation of data will begin to crack.

Once the data custodian has been identified, attention should be turned to defining guidelines for collecting and maintaining the data necessary to run the integrity management system envisioned. These guidelines are not intended to be hard and fast rules on how to handle every situation, but rather should set forth the expectations of the data to be consolidated. These guidelines will be adjusted as the system develops. The data custodian will need to be involved with the core users of the integrity system to understand critical components and create an open communication channel.

Once the ground rules are in place to govern how the data structure will be formed, we can build on the foundation by capturing engineering and planning processes in the software system. While the software tools will help make engineering assessments repeatable and provide a history for future analysis, they will not define or enforce the integrity management program on their own. A review of existing integrity management processes is needed. These processes should be adjusted to reference and incorporate the new tools provided by a software package.

The integrated pipeline integrity management system provides the opportunity to make pipeline and operations data available to a larger audience. How this will be implemented and rolled out will need attention from all of the operating groups. Reporting tools that provide read-only access are the primary method for distributing information. These can be made available on the company intranet or through an internet portal secured by user authentication. It is important to consider who will use these tools and how much information they need. Although today’s systems are designed to be intuitive to use, similar to popular websites, the organization will benefit from a structured training and rollout. Functionality can be covered quickly in the training sessions, but time should be allowed for

questions from those in attendance. More interactive and engaging training and rollout sessions will improve the overall acceptance of the tools and will lead to quicker adoption and broader use.

Enhancing Pipeline Integrity Management

With an integrated pipeline integrity management system in place, keeping track of progress towards integrity goals becomes an activity more easily measured and managed. An integrated software solution will use engineering assessment tools coupled with advanced reporting features to record and track performance against defined key performance indicators. Because the system is integrated, these key performance indicators are kept current with minimal effort. Performance against goals can also be reported in dashboard format for managers without the need for staff to compile data and produce reports. These dashboard tools can typically be configured to report on what is most important to the reader on a user-by-user basis. Another benefit of using dashboard reporting is the ability to see progress on planned integrity tasks for the year. By incorporating this status in the dashboard report, management can monitor progress as tasks in the plan are completed and can keep an eye on tasks to come for potential conflicts.

Using an integrated software solution will also provide the ability to adopt a standardized reporting procedure, making comparison to previous reports easier through a consistent format. This consistency also reduces the time needed by engineering resources to write and format reports. As an added benefit, these reports are archived in the software system so they can be quickly accessed for review as needed.

An integrated software solution also provides the means to interface with other supporting systems. An interface with the organization’s work-order management system will allow users of the integrity software packages to see planned maintenance tasks alongside assessment results. The linking of systems also allows work order entries to be created automatically following engineering assessments. For example, after features identified during an in-line inspection have been assessed and in-field verification sites prioritized, the prioritized list of activities feeds into the work order management system, creating multiple activities where appropriate. The integrity system then shows these actions as pending until the work-order management system reports back that they have been completed. With the two systems tied together, there is less chance that a task slips through the cracks and goes unexecuted.
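As a sketch of how such a linkage can behave, the following Python fragment models the pending/complete handshake between an integrity system and a work order manager. All class, method, and site names here are illustrative assumptions, not the interfaces of any actual work order management product.

```python
from dataclasses import dataclass


@dataclass
class WorkOrder:
    site_id: str   # excavation site identifier (illustrative format)
    priority: int  # 1 = most urgent
    status: str = "PENDING"


class WorkOrderBridge:
    """Minimal bridge between integrity assessment output and a
    work-order management system (WMS); a sketch, not a real API."""

    def __init__(self):
        self.orders = {}

    def create_from_assessment(self, dig_sites):
        """Turn a prioritized dig list into pending work orders."""
        for rank, site_id in enumerate(dig_sites, start=1):
            self.orders[site_id] = WorkOrder(site_id, priority=rank)

    def receive_completion(self, site_id):
        """Callback for when the WMS reports a task complete, so the
        integrity system no longer shows it as pending."""
        self.orders[site_id].status = "COMPLETE"

    def pending(self):
        """Tasks still awaiting completion confirmation from the WMS."""
        return [o.site_id for o in self.orders.values()
                if o.status == "PENDING"]
```

Completing one of two created orders leaves only the other pending, which is the behavior that keeps tasks from slipping through the cracks.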

The ideal system will also link to an organization’s financial system to retrieve actual spend figures for reporting purposes. These program spend numbers can be plugged into reports and dashboards, giving totals that are calculated

consistently for comparison purposes and updated as quickly as the corresponding system.

CASE EXAMINATION

Much can be learned about complex integration projects by examining real-world experience of implementation activities. The deployment of these types of systems benefits greatly when planning is done up front with some insight into the challenges ahead. In this case examination, we will look at the components involved, how they interact with the organization, and how the final overall environment is structured.

In this case study, the objective was to bring together a data management process that enabled a geographic information system (GIS) that powered a web-based data query and reporting tool and ultimately drove the application of the integrity management program. The system objectives can be broken into the following categories:

- Standardize and automate integrity management processes
- Consolidate pipeline data
- Use consolidated data to power integrity management processes
- Provide access to pipeline data which was previously unavailable or cumbersome to obtain

Data management consisted of a collection of data repositories built over time by the individual departments. The core of the pipeline location and facility data was available from as-built drawings kept in paper copy in the document management archive. Other construction data was also available here, mostly in paper form. The document management archives were kept up to date in most cases; however, some as-built drawings had been misplaced several years earlier and were unaccounted for. Pipeline design documents and inspection history were maintained by the integrity department. The design documents were complete for the primary parts of the system; some of the shorter, smaller-diameter lines did not have complete design documents. Inspection history consisted of in-line inspection runs dating back several years, though in many cases pre-commissioning intelligent pig inspections were not available. Cathodic protection direct current voltage gradient (DCVG) surveys and close interval potential surveys were available for many of the pipelines in the system, again spanning multiple years. The integrity department also had a good record of repair history for the last few years before the project. Maintenance department records were difficult to locate and organize; however, the maintenance department collectively held a large amount of practical information about the pipeline in personal experience. Through a series of interviews, this information was extracted and applied where appropriate.

All of the data collected was consolidated into a single data repository. This took the form of a GIS-enabled, pipeline-oriented database. This was the first introduction of a GIS to the pipeline organization. The GIS platform was chosen from an industry leader, and the data model was adapted to accommodate its specific requirements. The GIS interface formed the primary method of maintaining pipeline facility and integrity data in the master data repository. To accomplish this, each piece of data needed to be associated with a pipeline and have a location in space. Software tools built on top of the GIS platform provided automated routines for creating spatial data when loading new data and updating existing data.
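The requirement that every record be tied to a pipeline and have a location in space is typically met through linear referencing: a measure (station) along the pipeline is converted to coordinates by interpolating along the surveyed centerline. A minimal sketch of the idea, with illustrative coordinates and no specific GIS platform assumed:

```python
import math


def locate_on_centerline(vertices, measure):
    """Interpolate an (x, y) position at a given linear measure along
    a pipeline centerline given as a list of (x, y) vertices. This
    mirrors how a GIS turns station values into spatial locations."""
    travelled = 0.0
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)  # length of this segment
        if travelled + seg >= measure:
            t = (measure - travelled) / seg  # fraction along segment
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
        travelled += seg
    return vertices[-1]  # measure beyond the line: clamp to endpoint
```

For example, a measure of 120 along an L-shaped centerline of vertices (0, 0), (100, 0), (100, 50) falls 20 units up the second leg.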

The newly consolidated data was made available to the greater pipeline organization through an internal website with controlled access. Pipeline personnel used the web interface to view the data graphically in space with the ability to use spatial reporting tools to query location and specific attribute data. This was the first interactive map-based reporting tool for pipeline data available inside of the pipeline organization.

Integrity assessment and planning tools are a critical component of the system. Without the proper input and properly managed output, they are of little value. The integrity department, along with health and safety and operations, had a well-documented set of procedures in place before seeking out a software system to automate the process. These procedures needed to be brought together and properly interpreted to configure the software components to maintain the integrity environment already in place in the pipeline organization. The process of integrating corporate guidelines and operating procedures into software configuration provided an excellent opportunity to review the material from an integrity perspective. All departments participated in the review of procedures and guidelines as they were benchmarked against industry best standards and adjusted for use in the software.

To accomplish these objectives, focus was placed on a few key components to increase the probability for a successful implementation project.

Centralized Data

The first key component is the centralized database where all pipeline location and facility data was to be kept moving forward to power the engineering assessment work completed on a day-to-day basis. The data model selected has roots in managing location and facility data with a special focus on the oil and gas pipeline industry. However, taking the publicly available model in its generic state would not have sufficed to meet our objectives. A data-modeling session was required to analyze the data requirements of calculations to be performed for engineering assessments of pipeline anomalies and to perform risk assessment designed to meet the requirements specified in ASME B31.8S [2] and API 1160 [3]. In order to perform this task, both the specifications for the pipeline anomaly assessment and

risk assessment needed to be in a near complete condition. This was not the case as work was planned to develop both of these specifications. To avoid a stall in the progress of data collection and loading, the generic model was updated based on experience of the data structure typically needed for implementation of the software tools. This structure formed a preliminary data model. As the specifications for the implementation of the integrity management software were developed and finalized, the data model was reviewed for impacts of data required for calculations.
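As a hedged illustration of the pipeline-centric structure such a data model provides, the sketch below shows the basic pattern of records tied to a pipeline by a measure range. The class and field names are invented for illustration and do not reflect the actual commercial model used in this project.

```python
from dataclasses import dataclass


@dataclass
class Pipeline:
    """A named line with a total length; the anchor for all records."""
    name: str
    length_km: float


@dataclass
class LinearEvent:
    """Any attribute tied to a pipeline by a measure range; the core
    pattern of linear-referenced pipeline data models. Wall thickness,
    coating type, ILI anomalies, and repairs can all be stored this way."""
    pipeline: str    # name of the owning Pipeline
    start_km: float  # start measure along the line
    end_km: float    # end measure along the line
    attribute: str   # e.g. "wall_thickness_mm" (illustrative)
    value: object    # the attribute value over that range
```

Storing every department's data in this one shape is what lets assessment tools later overlay and segment the attributes consistently.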

Assessment Specification

The process of developing assessment specifications is intense and requires the guidance of seasoned professionals. In this case, a quantitative risk assessment was required and, as such, the calculations were based on available factual data with very few subjective components. The exercise of assembling the risk assessment specification was owned by the Integrity group, but it could not be performed in a vacuum by that department alone. Input was needed from many parts of the pipeline organization, including, but not limited to, Maintenance, Corrosion, and Health and Safety. The quantitative nature of the specification makes it difficult to digest in a single sitting with a group as diverse as described above. To ensure success, it was important to give special attention to planning the workshop. It was beneficial to have a short introduction to the material with individual departments and to encourage them to prepare for the workshop. The workshop spanned multiple sessions, with examples worked ahead of time to show the impact of decisions about which calculations to include. Extra time and attention was devoted to this task because this specification required a significant effort to configure in a software tool, and changes to the specification after the software configuration had begun could become costly and cause delays.

Process Integration

With the centralized database and engineering specifications in place, it was time to turn attention to reviewing how this system would fit into the existing integrity program of the pipeline operator. The various stakeholder departments were gathered again to discuss how corporate objectives fit with the new automation tools in place. This workshop provided the opportunity for non-integrity departments to ask questions and better understand their role as it related to the integrated system being implemented. The review of existing objectives was the minimum required. The next step came in creating procedures to be followed in operating the integrity system. For this case, these operating procedures were left for consideration following the implementation of the software system.

System Interfaces

The system in this case was configured to interface with an existing work order management system. The interface allowed the integrity engineering software to analyze data from a

pipeline inspection and request that work orders be created automatically for repairs in the work order management system. In order to define the process and information required to automate the work order requests, planning was required between the integrity group (the originators of the work request), the maintenance group (who receive and process the work requests), and the support group that maintains the software of the work order management system. Each of these groups spoke a different language when it came to their viewpoint of the work items created. The groups were able to define what work action elements were necessary to initially route the work request on automated creation so that proper approval was applied before work was scheduled. The mechanism for sending completion status back to the engineering software was captured in its own specification so that it could be maintained in the future in case the original authors were no longer accessible. A test environment was discussed for the work order management system, but ultimately the testing was done under controlled conditions to avoid interference with production use.

These key elements represented the stress points of the integrity management software implementation. The remaining parts of the system, discussed below, required less configuration to meet the requirements of the pipeline operator’s environment.

The full pipeline integrity management system consists of many software tools and interfaces. As discussed, a centralized data repository taking the form of an oil and gas pipeline data model implemented in an enterprise database environment is crucial to the success of the full system. Careful attention is paid to defining this data store to accommodate the needs of the software tools it supports. It has both the technical component of database administration and the business component of maintaining the data it contains.

The centralized data repository needs to be maintained as the pipeline system ages, is maintained, and expands. A set of software tools is in place to help automate this process. When documentation of daily operations is turned in to the data custodian, it can be quickly loaded into the data repository for use by all associated applications. The automation tools reduce the effort required to maintain the relationship of the new data with existing data. These tools also create the ability to verify data maintenance activities and include utilities to spot errors in documentation and data entry.

SOFTWARE POWERED SOLUTION

A broad range of software tools developed for engineering assessment, planning, and reporting runs on top of the centralized data store. These tools each form an important part of the pipeline integrity management system, whether they are performing calculations or formatting data into a report for easier consumption. In this case, the software tools in place have been matched to the needs of the organization. The

illustration in Figure 1 (located at the end of this paper) shows how software tools fit inside, and form part of, the integrity management process.

The first tool in the suite automates the process of loading and maintaining in-line inspection data. Once an inspection has been performed and the findings tabulated and reviewed, the software is used to create a record of the inspection, archiving the date of the run, the technology of the tool, and the sections of pipe inspected. Once the run activity has been created, the software takes the anomaly listing as input and, using constant data such as valves and welds, adjusts the location of anomaly data to account for inaccuracy of the inspection tool. The alignment is done automatically and displayed in a linear format for human review of derived matches. Once the review is complete and the matches approved, the anomaly data is pushed into the master data repository with location data that reflects the software-based alignment adjustments. With an adjusted, more accurate location of anomalies, the process can continue with assessment of the individual anomalies, ultimately leading to field work, which is highly dependent on accurate location data.
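The alignment step described above can be pictured as a piecewise-linear correction anchored at features whose true positions are known, such as welds and valves. The following simplified sketch (an illustration of the idea, not the vendor's actual algorithm) interpolates corrected positions for anomalies between matched reference features:

```python
def align_anomalies(tool_refs, survey_refs, anomalies):
    """Piecewise-linear correction of ILI anomaly odometer readings.

    tool_refs:   odometer readings of known features (e.g. welds) as
                 reported by the inspection tool
    survey_refs: the true chainage of the same features from the
                 pipeline survey, in matching order
    anomalies:   tool odometer values of the anomalies to correct
    """
    adjusted = []
    for a in anomalies:
        # find the pair of matched reference features bracketing this anomaly
        for (t1, s1), (t2, s2) in zip(
                zip(tool_refs, survey_refs),
                zip(tool_refs[1:], survey_refs[1:])):
            if t1 <= a <= t2:
                frac = (a - t1) / (t2 - t1)      # fraction between refs
                adjusted.append(s1 + frac * (s2 - s1))
                break
        else:
            adjusted.append(a)  # outside reference coverage: leave as-is
    return adjusted
```

An anomaly halfway between two matched welds on the tool's odometer is placed halfway between the welds' true positions, which is why dense, well-matched reference features drive location accuracy.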

The anomaly assessment software tools take the anomaly data loaded into the master data repository and analyze it using industry standard methods to identify features that pose immediate integrity threats. Once top priority features from the in-line inspection are identified, they can be grouped to consolidate excavation activities where possible and reports are generated to provide field personnel with details they need to perform a successful dig. This tool also performs a corrosion growth assessment, which provides information needed to optimize inspection intervals and support the business case for future inspections. These assessment results can be used to create scheduled response plans stored in the master data repository for future reference. Because this assessment information is stored in the master data repository, it can be used for display on a number of reports and can be used as input to other assessments and calculations.
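The corrosion growth assessment mentioned above can be illustrated, under a simple linear-growth assumption, by comparing matched anomaly depths from two inspections and projecting the time until a depth limit is reached. The 80%-of-wall limit used in the example call is a common criterion but is shown here only as an assumed value:

```python
def corrosion_growth_rate(depth_old, depth_new, years_between):
    """Linear growth rate (fraction of wall thickness per year) between
    two matched ILI measurements of the same anomaly."""
    return (depth_new - depth_old) / years_between


def years_to_limit(depth_now, depth_limit, rate):
    """Years until the anomaly reaches a depth limit at the estimated
    rate; a simplified sketch of how growth results feed inspection
    interval planning."""
    if rate <= 0:
        return float("inf")  # no measurable growth
    return (depth_limit - depth_now) / rate
```

For example, an anomaly growing from 20% to 32% of wall thickness over six years implies a 2%/year rate and roughly 24 years to an assumed 80% limit, which bounds the business case for the next inspection.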

The risk assessment tool uses the output from the anomaly assessment, along with many other data inputs, to provide a risk ranking of all pipelines and pipeline segments in the network. To make efficient use of the large amount of data needed for the risk assessment, the software segments the data so that individual sections with like characteristics can be evaluated together, providing an accurate representation of real-world conditions. Customized, standards-based logic is then applied to determine risk ranking based on applicable pipeline threats and failure consequences. The results of the risk assessment are formatted into several different report styles to make the large volume of calculated output easier to digest. The software also provides a tool that allows the integrity engineer to experiment with possible remediation actions and see their effect on pipeline risk by re-performing the assessment with the proposed changes in data. All of the data generated by these calculations is written back to the master data repository for archival purposes and for use with the software reporting tools.
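The segmentation step mentioned above is often called dynamic segmentation: the line is broken wherever any attribute changes, so that each resulting span is internally uniform. A minimal sketch, with a hypothetical data model of per-attribute ranges, is:

```python
# Sketch of dynamic segmentation: given attribute values defined over
# stationing ranges, emit spans within which every attribute is constant.
def dynamic_segments(attribute_ranges):
    """attribute_ranges: dict name -> list of (start, end, value).
    Returns a list of (start, end) spans with uniform attributes."""
    points = set()
    for ranges in attribute_ranges.values():
        for start, end, _ in ranges:
            points.update((start, end))
    pts = sorted(points)
    # consecutive breakpoints bound the uniform segments
    return list(zip(pts[:-1], pts[1:]))

# Hypothetical example: coating changes at km 5, depth of cover at km 8.
data = {
    "coating": [(0, 5, "FBE"), (5, 12, "tape")],
    "cover":   [(0, 8, 1.2), (8, 12, 0.9)],
}
print(dynamic_segments(data))
# → [(0, 5), (5, 8), (8, 12)]
```

Each segment can then be scored independently by the threat and consequence logic, which is what allows the results to reflect real-world variation along the line.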

With risk assessment information now archived, the planning software tools provide a means to develop remediation plans that include activities such as pipeline maintenance program modification, pipeline inspection, and repair. The output is a documented integrity assessment plan that supports the decision making leading to the selection of a specific integrity assessment approach for a given pipeline segment. These plans are then used to create detailed budgets and to calculate the most cost-effective mitigation strategies for the pipeline system.
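The cost-effectiveness trade-off at the heart of this planning step can be reduced to a simple selection rule. This sketch is purely illustrative; the candidate plans, costs, and risk values are hypothetical:

```python
# Sketch: choose the cheapest remediation plan whose residual risk
# meets the target set by company policy.
def select_plan(candidates, risk_target):
    """candidates: list of (name, cost, residual_risk).
    Returns the cheapest feasible plan, or None if none qualifies."""
    feasible = [c for c in candidates if c[2] <= risk_target]
    return min(feasible, key=lambda c: c[1]) if feasible else None

plans = [
    ("recoat",      120.0, 0.40),
    ("recoat+CIS",  180.0, 0.20),
    ("replace",     900.0, 0.05),
]
print(select_plan(plans, risk_target=0.3))
# → ('recoat+CIS', 180.0, 0.2)
```

Real planning tools weigh many more dimensions (schedule, crew availability, combined digs), but the principle of filtering by risk target and then minimizing cost is the same.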

For this integrity management system, the risk assessment software module relies significantly on data collected through in-line inspection of the pipelines. The pipeline system in this case, however, included several pipelines that could not be inspected using these technologies. A software module implementing the four stages of NACE SP0502 [1] direct assessment provided a solution for these pipelines. The module aids in the determination of direct assessment regions and the definition of applicable inspection types. Data collected in the field is then loaded into the module and archived in the master data repository. This data is analyzed to find indications and determine priority for direct examination. Following direct examination, the defect data is again loaded into the module and archived. This data is then used to calculate the corrosion rate, remaining life, and reassessment interval for each region.
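The closing calculation of that workflow can be sketched simply. The helper below is hypothetical and assumes linear corrosion growth since installation and a reassessment interval of half the remaining life, consistent with the approach in NACE SP0502; real assessments would use measured rates and code-specific depth limits:

```python
# Sketch: post-examination ECDA calculations for one region.
def ecda_reassessment(depth_now_mm, wall_mm, age_years,
                      max_allowed_frac=0.8):
    """Return (corrosion rate mm/yr, remaining life yr, interval yr).
    Assumes the deepest defect grew linearly since installation and
    that 80% wall loss is the allowable limit (assumption)."""
    rate = depth_now_mm / age_years
    allowable = max_allowed_frac * wall_mm
    remaining_life = (allowable - depth_now_mm) / rate
    # SP0502 bases the reassessment interval on half the remaining life
    return rate, remaining_life, remaining_life / 2.0

# Hypothetical region: 2 mm deep defect in 10 mm wall, 20-year-old line.
rate, life, interval = ecda_reassessment(2.0, 10.0, 20.0)
print(rate, life, interval)
```

With these numbers the region would show a 0.1 mm/yr rate, roughly 60 years of remaining life, and a 30-year reassessment interval; a more aggressive measured rate would shorten both accordingly.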

The integrity management system includes tools to report the data that has been consolidated in the master data repository and calculated through the engineering assessment modules. The first of these reporting tools produces an alignment sheet style report that displays data in a band format relative to its position on the pipeline. Several different reports are configured for use with this tool, some with the purpose of conveying general information, others with a specific audience in mind. There are two commonly used sheet reports. The first contains primarily facility information about the pipeline and its equipment. The second focuses on inspection data and combines risk assessment results to give an understanding of the conditions under which a high-risk segment of pipe exists. The reports can be customized to combine these two styles, or to use an entirely different set of data. The reporting tool has access to all of the data in the master repository, and once bands of data are configured, they can be quickly added to new report layouts.

The second key reporting tool takes the form of an interactive map and is published over the company intranet for use by personnel across several departments. The primary interface allows the user to visually locate data about the pipeline based on the area of the pipeline system it affects. The map controls are similar to those offered by popular internet map providers and, because of this, users unfamiliar with the integrity management system can quickly find and produce reports from the published data. To simplify the interaction, a limited number of data elements are exposed through the interface. The reporting tool has the same access to the master data repository as the alignment sheet style reports, but because of the broad audience the published data was limited for the system rollout. As the system matures and specific needs are identified, the map interface can be expanded to incorporate any other data that is required. In addition to the query tools provided by the map interface, alignment sheet style reports can be viewed through the map, providing a quick means to distribute commonly used sets of information to the larger organization. Access is restricted to configured system users, and the map site is available only on the company intranet.

IMPLEMENTATION CHALLENGES Implementing an integrated pipeline integrity management system is a significant undertaking. Each case will have its own unique challenges, but there is much that can be learned by examining the challenges of a similar project. Some case-specific challenges are presented here to provide insight into the process and offer thinking points for how they can be addressed in a new implementation project.

Finding Data The first challenge related to gathering the data needed to build the pipeline data model and master data repository. A preliminary review of the available data showed early on that this point would require special attention. The system in this case had been in the ground and operating for many years before this project to consolidate pipeline facility and operating data. In the early days, document management was the responsibility of localized operating groups. Some kept records well intact; others did not have full records. This was somewhat expected for a pipeline system of its age, but the challenge became whether the missing data could be found. Some pipelines were known to have incomplete location documentation. For these, a GPS survey of the lines was ordered. For others with missing operating data, interviews were conducted with different department groups to determine if the information was known, documented, or needed to be gathered. This exercise provided the general understanding of data availability needed to move forward with the consolidation efforts.

Once the scope of the available data was understood, the issue became that of filling gaps in the data that was expected to be complete. Not all data carries equal weight in the model, so the exercise focused efforts on the material that would be most influential in the engineering assessments. A report was made of the gaps in the data, and each gap was then ranked to make the most efficient use of the time available for collecting additional data. Not all gaps were filled, and this was determined to be an acceptable outcome. Data collection and maintenance are ongoing tasks and should be treated as such. The engineering assessments will identify where additional data is needed and can be used regularly to refine the data available.
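The gap-ranking exercise can be expressed as a simple scoring rule. This sketch is hypothetical: the attribute names, weights, and the idea of scoring by affected length times assessment influence are assumptions for illustration, not the project's actual method:

```python
# Sketch: prioritize data gaps by how much pipeline they affect and
# how strongly the missing attribute influences the assessments.
def rank_gaps(gaps):
    """gaps: list of (attribute, km_affected, influence_weight 0-1).
    Returns gaps sorted so the highest-value fill comes first."""
    return sorted(gaps, key=lambda g: g[1] * g[2], reverse=True)

gaps = [
    ("wall thickness", 40.0, 0.9),   # strong influence on assessments
    ("coating year",   40.0, 0.3),
    ("soil type",     120.0, 0.2),
]
for attribute, km, weight in rank_gaps(gaps):
    print(attribute, km * weight)
```

However the scores are built, the point is the same as in the project: spend the limited collection budget where it most improves the engineering assessments.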

Ownership and Stakeholders During the data collection and consolidation phase it became clear that a decision was needed on who in the pipeline organization would be responsible for the newly created master data repository. While the data being collected came from many different departments, someone would need to be appointed to look after the consolidated data and ensure that new data arriving in each department would make its way into the master data repository. This task is often a good fit for the GIS group if one is available; in this case, however, there was no such department. A preliminary discussion had suggested that this responsibility be taken on by the integrity group. A decision was made to move forward in this direction and identify resources that could be used for this task. It is important that this decision be made as early in the process as possible, so that those who will ultimately be responsible for maintenance of the master data repository can be involved in its initial definition and population.

Deploying a software-based integrated integrity management system requires the buy-in and cooperation of several departments in the pipeline company. Data collection activities for the initial population of the master data repository are most successful when all of the departments holding data are engaged from the beginning. The compilation of engineering assessment specifications again requires input from many of the departments in the pipeline organization. As the heart of the system is software-based, the information technology department becomes an important partner in implementing and maintaining the system. Personnel ranging from database administrators to server and network managers to desktop support will be needed to successfully complete the project. Going into the initial planning phases, it is important to involve the other stakeholders, if only from an informational perspective. If each group has a designated supporter and open communication is a goal of all the stakeholders, implementation of the new system will go much more smoothly than if others are called upon only when they are needed.

Open communication early in the project drives the engagement that will benefit the workshop activities needed throughout the project to build the specifications that drive core functionality of the integrity management system. A good example is the process of building the specification for the engineering risk assessment. To complete this specification, input is needed from departments that may not have been involved to a large extent in the data collection phase. In this case, departments such as corrosion and health and safety were brought into the project to participate in workshops after the initial data collection had been completed. These departments had valuable input that was necessary to configure the risk assessment module but, because they did not fully understand the entire scope of the project at the time, they felt lost. This is a difficult situation to handle in a workshop environment. If these resources are engaged earlier in the process and have the opportunity to ask questions and provide input, they will have more ownership of the final product. This helps to build confidence in the solution, and the resulting teamwork allows activities such as a specification workshop to proceed smoothly.

System Commissioning When the time comes to move the new pipeline integrity management system into production use, the go-live period is always full of challenges. Regardless of the amount of testing that takes place beforehand, there is always something unexpected left to be discovered once the switch is flipped and the system is put to work in the real world. Plan for surprises. Make sure that support personnel are available and ready to be called upon. If possible, allow for a break-in period where the system is up and being used, but critical reports or assessments are not yet needed. Once the system has been up for a while and put through the paces of daily use, make the transition to producing reports and move back into the scheduled rotation of assessments. Here again the information technology department will be indispensable. A plan that is realistic and flexible makes the go-live period a manageable bump in the road.

Ongoing Support Once the system is live and used for production work, the challenge becomes how to support it into the future. In the short term, it is important to verify that personnel assigned to maintain the system have found a way to fit these tasks in with their other daily work. If individuals find it difficult to keep up with the system and maintenance tasks are dropped, the data can quickly fall out of synchronization, and a dedicated effort will be required to remedy the situation.

In the first few months that the system is operational, users tend to focus on certain aspects of functionality and come to know them well through experience. The downside is that other areas of functionality see only sparing use, and training on the process and dependencies of these pieces is largely forgotten. Refresher training is useful at this point. Dedicate some time to assess which areas of the system are known well and which have not seen enough use for users to be comfortable with them. In a few short sessions, the training material for these less frequently used areas can be reviewed. The group will then feel more comfortable moving outside of the common work tasks they had focused on in the months prior.

Effective long-term support requires the definition of a process for addressing common issues encountered during use of the system. This should include how software issues are reported and corrected, as well as how routine updates are applied to keep the modules functioning properly as the information technology environment changes. The procedure can be simple. Again, it is important to share this process and obtain input from the other support groups and stakeholders. Other issues will arise over time, such as the need to update engineering assessment methods as a result of changing corporate guidelines. These can be effectively addressed on a case-by-case basis.

CONCLUSION A pipeline integrity management system is a decision-making tool. It is constructed of many pieces and should be based on existing operating procedures and company policies. To be effective, data and assessment results need to be communicated to all those involved, and tight integration is required to accomplish this.

The process of implementing a software-based pipeline integrity management system requires commitment from the pipeline organization as a whole. Although there are many challenges to face in this type of integration project, communication is the best tool available to smooth the path to putting the system into place. Looking inside an existing integrity management system and considering the state of the environment before integration, the process followed to build out the system, and the challenges faced helps everyone involved prepare for the critical challenges of completing the task.

Each pipeline integrity management system is structured differently. However, they all revolve around a few core concepts. Data should be consolidated to make the most current information available to those who need it. Engineering assessment modules should be implemented to apply existing policies and procedures to the consolidated data. Data, along with the results of engineering assessments, should be shared throughout the organization using reporting tools that simplify the effort needed to access pertinent information quickly.

An integrated PIMS software solution, when implemented properly, can provide more accurate results from engineering assessments by ensuring they are completed year after year on a consistent basis and by enabling them to use the most accurate data available. This results in better decision-making leading to more efficient integrity programs that reduce the expenditures necessary to keep a pipeline system operating with a low risk of failure.




REFERENCES

1. Pipeline External Corrosion Direct Assessment Methodology, ANSI/NACE SP0502, 2008

2. Managing System Integrity of Gas Pipelines, ASME B31.8S-2004, Supplement to ASME B31.8, January

3. Managing System Integrity For Hazardous Liquid Pipelines, API Standard 1160, 1st Edition, November
Figure 1 – Software-enabled Pipeline Integrity Management System

Copyright © 2012 by ASME
