ENDESA PERU
User Guide
PDM Interface
Document Number: U0320
Revision: 1.0.0.0
Date: 2015-02-09
CONFIDENTIAL
Author: N. Bogachev
NOTICE:
This document contains information which is confidential and proprietary to Siemens Energy, Inc. This
document, including any excerpt hereof, may not be copied, transmitted, distributed or otherwise
communicated to any third party without the express written consent of Siemens Energy, Inc.
SIEMENS
Name Date
_____________________ ______________
_____________________ ______________
Disclaimer of Liability

Although we have carefully checked the contents of this publication for conformity with the hardware and software described, we cannot guarantee complete conformity since errors cannot be excluded. The information provided in this manual is checked at regular intervals and any corrections that might become necessary are included in the next releases. Any suggestions for improvement are welcome.

Subject to change without prior notice.

This document contains information which is confidential and proprietary to Siemens Energy, Inc.

Copyright © 2009 Siemens Energy, Inc.

The reproduction, transmission or use of this document or its contents is not permitted without express written authority. Offenders will be liable for damages. All rights, including rights created by patent grant or registration of a utility model or design, are reserved.

Registered Trademarks

SIMATIC®, SIMATIC NET®, SIPROTEC®, DIGSI®, SICAM®, SINAUT® and Spectrum Power™ are registered trademarks of Siemens AG. All other product and brand names in this manual might be trademarks, the use of which by third persons for their purposes might infringe the rights of their respective owners.
This document contains information which is confidential and proprietary to Siemens Energy, Inc. Information is subject to the restrictions
stated on the first page.
Revision Record

Revision   Date
1.0.0.0    2015-02-09
Table of Contents
1 Introduction ..........................................................................................1
2 Job Management Overview .....................................................................2
2.1 Job .................................................................................................................. 2
2.2 Viewing the Current Jobs ..................................................................................... 2
2.3 Task ................................................................................................................. 3
2.4 Viewing the Tasks of a Job ................................................................................... 3
2.5 Job and Task Report ........................................................................................... 3
2.6 Application Log.................................................................................................. 4
List of Figures
Figure 49-1: ImOraDumpLoad Process Flow............................................................................................ 110
List of Tables
Table 4-1: Job Status - Action and Results................................................................................................. 20
Table 5-1: Application Log Status - Action and Results ............................................................................... 27
Table 41-1: Mapping an Application Name to an Error File Name ............................................................... 90
Table 52-1: Process Flow ........................................................................................................................ 117
1 Introduction
The Primitive Data Management (PDM) Interface is a set of functions that allow primitive data to be defined, accessed
and transferred to the operational database. The PDM Interface functions include:
Job Management
Job management provides the ability to group a set of changes into a job and to control the actions performed
on this job.
Copy Management
Copy Management is a feature of the PDM, designed for multisite, which manages the propagation of primitive
data to Backup Control Center(s), and to other Real Time databases such as Historical Information Systems
(HIS), Energy Accounting (EA), and Communication Front End (CFE). Copy Management makes use of the
tools provided by Job Management.
Import
Import provides the ability to load and remove data from ASCII files into the primitive database.
Export
Export provides the ability to extract data from the primitive database and produce ASCII files which may be
subsequently imported.
Forms
Forms provide the ability to view and/or edit the contents of the primitive database using a GUI interface. See
the PDM Forms User Guide - U0325.
Reports
Reports provide the ability to produce an ASCII file displaying the contents of primitive data.
Validation
Validation of data occurs during entry of data into the primitive database. This type of validation is referred to as
entry-level validation. Error messages are issued if data values violate the validation rules. Error message
reports produce an ASCII file of the error messages.
Global Validation
Global validation provides the ability to verify the contents of the primitive database. Global validation includes
both entry-level validation and database-level validation. Error messages are issued if data values violate the
validation rules.
Job Transfer
Job transfer provides the ability to transfer a job from the primitive database to the operational database.
Reverse Transfer
Reverse transfer provides the ability to initialize SCADA, ICCP, Reference and RTDS/CFE data in an empty
primitive database from an existing operational database.
Configuration
Configuration provides the ability to install and configure the PDM Interface product. This includes creating
additional PDM Oracle users that may modify the PDM schema owned by user PRIME. See the Primitive
Database Maintenance User Guide - U0385.
The PDM Interface functions are executed using both UNIX scripts and Oracle SQL*Plus scripts. The following
chapters describe how to use these scripts.
changes made to the primitive database are recorded in the change log and associated with the job. The user may
disconnect from and reconnect to the same job many times.
When finished defining the station in the primitive database, the user executes the job transfer script and supplies
the job name. Job transfer retrieves the primitive data changes made by this job from the change log and transfers
the changes into the offline operational database.
When the transfer is complete, the user activates the job into the online operational database. If Remote Copies
are established in a Multisite system, the user also activates the job to ALL ‘Active’ Remote Copies requiring the
job changes at that time.
After the activation is complete, the user deletes the job from both databases: the operational database and the primitive database. The job is archived in the job log history and the change log history.
In formal terms, a job is a collection of data changes identified by a unique name. The job is the increment that is
transferred from the primitive database to the operational database. The user is in control of which changes are
included in a job.
1. A valid name may contain letters, numbers, and the underscore (_).
2.3 Task
Jobs are further subdivided into tasks. A new task is created each time a user connects to a job. All of the changes
made in one connection are grouped together into one task. For example, if an import, which consists of two tasks, is
run three times within a job, that job will contain six tasks. The concept of tasks also applies to changes made using
Forms. In this case, a task consists of all the changes made during a single visit to a form. To clarify, imagine calling up
the DIG form and changing data for several digital points followed by a call-up of the ANA form from which several
analog points are changed. The changes made using Form DIG represent one task; the changes made using Form
ANA represent a second task.
Each task has both a name and an id. The task id is the unique identifier for a task because the task name is the same
for each execution of a particular function.
import  PRIME      25-JAN-94 01:59:41
import  PRIME   2  25-JAN-94 01:59:59
Task data is also available within Oracle Forms and may be viewed by using the form TASL. See the PDM Forms User
Guide - U0325.
ImJmShowJob is a UNIX script that lists the contents of both job and application information for a job. The associated
SQL*Plus script ImJmShowJob.sql is also available. An example usage is:
unix> ImJmShowJob -u {oracle_username} -j {jobname}
Job Name/  Status       Current User/       Creation User/
Job Id                  Current Date/Time   Creation Date/Time
---------  -----------  ------------------  ------------------
starl      TRANSFERRED  PRIME
359                     20-SEP-05 11:06:28

Appl  Transfer Status   Activate Status  Prev Activate Status  Delete Status  ODB Job Date/Time
----  ----------------  ---------------  --------------------  -------------  ------------------
AI    TRANSFER_SKIPPED
BA    TRANSFERRED                                                             20-SEP-05 11:07:07
NWA   TRANSFER_SKIPPED
      EA_NOT_NEEDED
      HIS_NOT_NEEDED
      NOT_REQUIRED
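Output like the listing above can also be checked mechanically before proceeding. The sketch below is illustrative only: the helper name `pending_apps` and the canned sample are assumptions, and in practice the input would be piped from `ImJmShowJob -u {oracle_username} -j {jobname}`.

```shell
#!/bin/sh
# Hypothetical helper: print applications whose transfer status is neither
# TRANSFERRED nor TRANSFER_SKIPPED. A canned sample stands in for the real
# ImJmShowJob output here.
pending_apps() {
  awk 'NF >= 2 && $2 !~ /^(TRANSFERRED|TRANSFER_SKIPPED)$/ { print $1 }'
}

sample='AI TRANSFER_SKIPPED
BA TRANSFERRED
NWA TRANSFER_SKIPPED'

printf '%s\n' "$sample" | pending_apps   # prints nothing: every application is done
```

An empty result means every application of the job has either transferred or been skipped.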
3 Copy Management
Copy Management is a feature of the PDM, designed for Multisite, which allows propagation of primitive data to the
Backup Control Center(s), and to other RealTime databases like Historical Information Systems (HIS) and Energy
Accounting (EA). In a Copy Management configuration, there is one Main center, and any number of:
PDM Backup Control Center (PDM Copy)
HIS/EA Copy
All of the Copies are linked to, and kept in synchronization with, the Main center. Only jobs that have been activated at the
Main can be propagated to the Copies, as needed.
Note: For specific details about the pre-requisites, set up and configuration for Copy Management, please refer to
Core System Installation Guide - I3000.
It is important to note that the entire design of Copy Management relies on change detect to maintain data within a
‘Copy’. Thus, running a data import without change detect and then full-transferring that data will adversely affect the
data within the PDM ‘Copy’. Should it ever be necessary to run PDM jobs without change detect on, the user should
abandon all PDM Copies and then run the resynch process to re-establish them.
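The recovery sequence just described can be sketched as a script. This is a sketch only: the host name is a placeholder, the `-d PDM` argument to `ImCmResynch` is an assumption by analogy with the HIS invocation shown later in this guide, and the commands are echoed for review rather than executed.

```shell
#!/bin/sh
# Hypothetical recovery after running PDM jobs without change detect:
# abandon each PDM Copy, then resynchronize it. Commands are echoed only.
recover_copy() {
  host=$1
  echo "ImCmAbandon -u prime -h $host"          # abandon the Copy
  echo "ImCmResynch -u prime -h $host -d PDM"   # re-establish it (-d PDM assumed)
}

recover_copy pdmcopy1
```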
Copy Management maintains only jobs that contain primitive data. It cannot maintain changes made to relations
through the use of Spectrum Power 3 DBA Editors.
It is strongly recommended that the RDBMS Forms interface be used to handle PDM jobs in the Copy Management
environment, rather than the corresponding UNIX command line scripts. The RDBMS Forms will ensure the proper
handling of jobs, and will guide the user through the correct and allowable steps in the data propagation process, even
if the user is not familiar with the allowable job states within Copy Management.
This chapter discusses the job flow and the movement of data from the Main to the Active Copies using the PDM
Initiated Activation feature.
Note: Whenever the word “remote” is capitalized, as in “Remote database”, it refers to those databases that are
linked via the PDM copy_definition table.
3. An activation failure on any Remote database is reported back to the user on the Main.
4. The failed Remote database DOES NOT become automatically abandoned (the need of frequent Resynchs is
eliminated).
5. In the event of a failed activation on the Remote database, the user may:
a. Reset and retry the activation again on the failed database.
b. Undo the activation from successful databases.
c. Abandon the failed database and proceed with making the job permanent (job delete).
6. When a database is ‘Abandoned’, it requires a ‘Resynch’ to synchronize it back with the Main (as before).
Please note that if Copies have not been established, or if the PDM Initiated Activation software determines that
activation to Copies is not required (based on the type of data change made), the normal job flow of Transfer ->
Activate -> Delete comes into play.
activation, or Undo the online operational activation on the Main and back out the job.
If activation to a Copy has failed, a message is issued to the user stating that activation failed on a Remote Copy. The only
option for the user at this point is to [Reset] the failed activation. The user can point the cursor at the failed Copy in the
Remote Activation form; the [Reset] button will become available. At this time, the user can Reset the failed activation,
fix the error, and try activation again for that failed Copy.
Upon successful completion of the activation across ALL Active Copies requiring the change, the job status changes to
‘ODB_ACTIVATED’, since the job is now activated everywhere. The only options in front of the user now are to either
‘Undo All’ - undo activation from the Copies the job was activated to - or go back to the main PDM form and delete the
job, making the job changes permanent everywhere.
activated to. It performs the following functions on the different Active Copies:
PDM Copy - It calls the ‘ImCmPDMUndo’ script to reverse the data changes associated with the job from the PDM
Copy and deletes the job from the PDM Copy. This is done for all the Active PDM Copies that have been defined in
the system.
The status of the PDM Copy in the ‘application_log’ table for the concerned job_id is updated to
‘PDM_SEND_REQ’ if the undo was successful or to ‘PDM_UNDO_FAILED’ in case of a failure.
HIS Copy - For HIS related changes it calls the ‘ImHisJobMgmt’ script to connect to the Oracle database and Undo
the HIS related data changes from the HIS sid on the HIS Copy through an Oracle database link. After the changes
have been reversed from the HIS sid the script also sends a softbus message to the COMS to inform them that the
HIS changes have been reversed. This is done for all Active HIS Copies that have been defined in the system.
The status of the HIS Copy in the ‘application_log’ table for the concerned job_id is updated to
‘HIS_ACTIVATION_REQ’ if the undo is successful or to ‘HIS_UNDO_FAILED’ in case of a failure.
For EA related changes, it calls the ‘ImEaJobMgmt’ script to connect to the Oracle database and Undo the EA
related data from the HIS sid on the HIS Copy through an Oracle database link. This is done for ALL Active HIS
Copies that have been defined in the system.
The status of the HIS Copy in the ‘application_log’ table for the concerned job_id is updated to
‘EA_ACTIVATION_REQ’ if the undo is successful, or to ‘EA_UNDO_FAILED’ in case of a failure.
After the Undo is complete, the form is updated with the correct activate statuses pertaining to the Copies.
If Undo from a Copy has failed, a message is issued to the user stating that Undo has failed on a Remote Copy. The
user can point the cursor at the failed Copy in the Remote Activation form. The [Reset] button will become available. At
this time, the user can Reset the failed Undo, fix the error and try Undo again for that failed Copy.
Upon successful completion of the Undo from ALL Active Copies requiring the Undo, the job status changes to
‘ODB_ACTIVATE_IN_PROGRESS’. The only options in front of the user now are to either [Activate All] - Activate data
changes within the job to the Copies as needed - or go back to the main PDM form and Undo on-line operational
database Activation, Cancel Job Transfer and so on to back out the changes.
o PDM Copy - It executes the ‘ImCmDelete’ script to delete the job from the PDM Copy. This is performed on all
Active PDM Copies defined in the system. The activate_status of the PDM Copy in the ‘application_log’ table
for the concerned job_id stays ‘PDM_SENT’, and the delete_status is updated to ‘PDM_DELETED’ if the
delete was successful, or to ‘PDM_DELETE_FAILED’ in case of a failure.
o HIS Copy - There is no delete per se for HIS or EA data. Therefore, it just updates the application_log table
and sets the delete_status for the HIS Copy to ‘HIS_DELETED’ or ‘EA_DELETED’ depending on the
application affected by the job.
Deleting job from on-line operational database - After the successful job delete from ALL Active Copies the job was
activated to, on-line operational database job delete is performed by the form by calling the ‘ImDBAJobMgmt’
script.
Deleting job from PDM - This is the last step in the job delete process. It calls pkg_jobm.delete_job to delete the job
from PDM.
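The three delete stages described above can be sketched as one reviewable sequence. This is a sketch only: the flags shown for ‘ImCmDelete’ and ‘ImDBAJobMgmt’ are assumptions modeled on the other scripts in this guide, pkg_jobm.delete_job runs inside Oracle, and every command is echoed rather than executed.

```shell
#!/bin/sh
# Hypothetical walk-through of the delete sequence: each Active PDM Copy,
# then the on-line operational database, then PDM itself. Echoed only;
# script flags are assumptions, not documented signatures.
delete_job_everywhere() {
  job=$1; copy_host=$2
  echo "ImCmDelete -u prime -j $job -h $copy_host"             # each Active PDM Copy
  echo "ImDBAJobMgmt -u prime -j $job"                         # on-line ODB job delete (flags assumed)
  echo "sqlplus: exec pkg_jobm.delete_job (final PDM delete)"  # last step, inside Oracle
}

delete_job_everywhere starl copy01
```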
If the delete fails on any Remote Copy, delete continues to be performed on other Copies. However, on-line
operational database job delete and PDM job delete is not performed. At this time, the user can go back to the Remote
Activations form, reset the failed delete on the host, and click on the [Delete] button in the Remote Activations form to
execute delete again on the failed host.
3.3.1 ImCmActivateAll
ImCmActivateAll is a UNIX script that activates the job changes to all the Active Copies that require the data change. It
needs to be executed on the Main and can be invoked as follows:
unix> ImCmActivateAll -u {Oracle Username} -j {job_name}
ImCmActivateAll performs the following functions for the different Active Copies:
PDM Copy - It calls the ‘ImCmPDMSend’ script to “push” the data changes associated with the job from the Main to
the PDM Copy and completes a “Redo”. This is done for all the Active PDM Copies that have been defined in the
system.
The status of the PDM Copy in the ‘application_log’ table for the concerned job_id is updated to ‘PDM_SENT’ if the
send was successful or to ‘PDM_SEND_FAILED’ in case of a failure.
HIS Copy - For HIS related changes, it calls the ‘ImHisJobMgmt’ script to connect to the Oracle database and send
the HIS related data changes across to the HIS sid on the HIS Copy through an Oracle database link. After the
changes have been applied to the HIS sid, the ‘ImHisJobMgmt’ script also sends a softbus message to the COMS
to inform them that new HIS changes have arrived. This is done for all Active HIS Copies that have been defined in
the system.
The status of the HIS Copy in the ‘application_log’ table for the concerned job_id is updated to ‘HIS_ACTIVATED’ if
the activation was successful, or to ‘HIS_ACTIVATE_FAILED’ in case of a failure.
For EA related changes, it calls the ‘ImEaJobMgmt’ script to connect to the Oracle database and send the EA related
data changes across to the HIS sid on the HIS Copy through an Oracle database link. This is done for all Active HIS
Copies that have been defined in the system.
The status of the HIS Copy in the ‘application_log’ table for the concerned job_id is updated to ‘EA_ACTIVATED’, if
the activation is successful or to ‘EA_ACTIVATE_FAILED’ in case of a failure.
If activation to a Copy fails, a diagnostic message is issued in the ‘msg’ table. At this time, the user has to Reset the
failed activation using the respective Reset script before attempting activation for that failed Copy again (discussed in
detail later in the Reset section).
Once the activation fails and is Reset, the ‘ImCmActivateAll’ script can no longer be used for activation to the failed
Copy. The user will have to use the individual activation script for activating the job to the Copy. The scripts need to be
run on the Main. Please note that the ‘host name’, ‘preferred node name’ and ‘alternate node name’ are the same
parameters as passed in the ‘ImCmAddCopy’ script when defining a Copy and can be found in the ‘copy_definition’
Oracle table.
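The ‘copy_definition’ table lookup above can be sketched in shell. This is illustrative only: the sample rows, the helper name `active_copy_hosts`, and the use of ‘ImCmPDMSend’ with these flags as the individual PDM activation command are assumptions; the column order (host_name, link, status, db_type) follows the select output shown later in this chapter.

```shell
#!/bin/sh
# Hypothetical: pick the host names of Active Copies out of a
# copy_definition-style listing, then print the individual activation
# command for each. Sample rows and the job name are placeholders.
active_copy_hosts() {
  awk '$3 == "ACTIVE_COPY" { print $1 }'
}

copies='main01 main_link PAUSE_MAIN PDM
copy01 copy_link ACTIVE_COPY PDM'

printf '%s\n' "$copies" | active_copy_hosts |
while read -r host; do
  echo "ImCmPDMSend -u prime -j starl -h $host"   # flags assumed
done
```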
3.3.2 ImCmUndoAll
Once the job is successfully activated to ALL the Copies requiring the data change, the user may wish to “undo” the
activation of the job from the Active Copies. The UNIX script ‘ImCmUndoAll’ can be invoked at this time to perform the
undo from the Copies. It needs to be executed on the Main and can be invoked as follows:
unix> ImCmUndoAll -u {Oracle Username} -j {job_name}
ImCmUndoAll performs the following functions for the different Active Copies that the job was activated to:
PDM Copy - It calls the ‘ImCmPDMUndo’ script to reverse the data changes associated with the job from the PDM
Copy and deletes the job from the PDM Copy. This is done for all the Active PDM Copies that have been defined in
the system.
The status of the PDM Copy in the ‘application_log’ table for the concerned job_id is updated to
‘PDM_SEND_REQ’ if the undo was successful, or to ‘PDM_UNDO_FAILED’ in case of a failure.
HIS Copy - For HIS related changes it calls the ‘ImHisJobMgmt’ script to connect to the Oracle database and Undo
the HIS related data changes from the HIS sid on the HIS Copy through an Oracle database link. After the changes
have been reversed from the HIS sid the ‘ImHisJobMgmt’ script also sends a softbus message to the COMS to
inform them that the HIS changes have been reversed. This is done for all Active HIS Copies that have been
defined in the system.
The status of the HIS Copy in the ‘application_log’ table for the concerned job_id is updated to
‘HIS_ACTIVATION_REQ’ if the undo is successful or to ‘HIS_UNDO_FAILED’ in case of a failure.
For EA related changes it calls the ‘ImEaJobMgmt’ script to connect to the Oracle database and Undo the EA
related data from the HIS sid on the HIS Copy through an Oracle database link. This is done for ALL Active HIS
Copies that have been defined in the system.
The status of the HIS Copy in the ‘application_log’ table for the concerned job_id is updated to
‘EA_ACTIVATION_REQ’ if the undo is successful or to ‘EA_UNDO_FAILED’ in case of a failure.
If Undo from a Copy fails, a diagnostic message is issued in the ‘msg’ table under the user’s job_id. The user then has
to Reset the activation for the failed host using the respective Reset scripts (discussed later in this section). After the
failed activation is Reset, the ‘ImCmUndoAll’ script can no longer be used for undoing from the failed Copy. The user
will have to use the individual undo script for undoing the job from the Copy. The scripts need to be run on the Main.
The individual undo script for PDM Copy is:
unix> ImCmPDMUndo -u {Oracle username} -j {job_name} -h {host_name}
The individual undo script for HIS Copy (HIS changes) is:
unix> ImHisJobMgmt -u {Oracle username} -j {job_name} -f U -h {host_name}
The individual undo script for HIS Copy (EA changes) is:
3.3.3 Reset
If the activation or undo to/from any Copy fails, the user has to reset the failed activation/undo before attempting
activation/undo again.
For a failure of activation/undo on the PDM Copy, the user will have to Reset the failed activation/undo by running the
following command on the Main:
3.3.4 ImCmDeleteAll
Unlike the Forms interface, where the job is deleted in one step through the PDM form, deleting the job through the
scripts is not a one-step process; the user has to use multiple scripts to delete the job. If the user is using the script interface,
As the Resynch process is running, the statuses of both the Main and the PDM Copy will change to transient states.
Use the following select statement on both the Main and the PDM Copy:
sql> select * from copy_definition;
and it will show:

host_name              link            status      db_type
{Main host_name}       {Main db_link}  PAUSE_MAIN  PDM
{PDM Copy host_name}   {Copy db_link}  RESYNCHING  PDM
After the successful completion of Resynch process, the statuses of the Main and the PDM Copy will be updated on
both the Main and the PDM Copy(s). The PDM Copy will now become ‘ACTIVE_COPY’.
Data can, however, be viewed on the ‘ACTIVE’ PDM Copy using sql*plus or Oracle forms.
If a system is configured to have multiple ‘HIS Copy’ systems, ImCmResynch must be run individually for each ‘HIS
Copy’.
In the event that ImCmResynch should fail, the status of the ‘HIS Copy’ is set back to ‘ABANDONED’.
If a HIS Copy is ‘ABANDONED’, a resynchronization using ImCmResynch is the only action that will correct the ‘HIS
Copy’ and place it into an ‘ACTIVE_COPY’ state.
the Main.
Please note that the ‘PDM failover’ is not an automatic function; it comes into effect only after user intervention.
The PDM Failover process performs the following functions. It:
1. Resets all sequence generators to the current maximum.
2. Rebuilds the s_internal_number table.
3. Updates the copy_definition table with the correct status.
The reset and rebuild of internal control tables and sequence number generators assures proper operation of PDM Job
Management software after the Failover is complete.
To execute Failover, as UNIX user ‘imdba’, run the UNIX script ‘ImCmFailover’ on the PDM Copy which is to takeover
the Main functionality, as follows:
If the expected data changes are not in place in the HIS database, the HIS copy will need to be abandoned and
resynched. Subsequent jobs affecting HIS/EA do not need to be verified as the resynch will cover all the data.
4. Delete job from PDM.
Where DIR is the directory where the workstation_name.rtds file is being maintained.
For more information on maintaining the workstation_name.rtds filltab file, see the chapter “Control Tables for BA
and RTDS Subsystems” in the U0385 Primitive Database Maintenance User Guide.
5. ImBaRvIddug
Run ImBaRvIddug to reverse transfer the IDDUG information from on-line operational database to be loaded into
PDM. This step will dump out the IDDUGS for SCADA, RTDS/CFE, SAM, REFERENCES, ICCP, CMCH, EA and
HIS records.
unix> ImBaRvIddug -l {directory where IDDUGS should be stored} -a abcdefg -h y -u {Oracle username}
6. ImBaImportDirectory
Run ImBaImportDirectory with change detect turned off to import all the data exported from on-line operational
database into PDM.
unix> ImBaImportDirectory -u prime -d {directory containing generated IDDUGS} -c N -e 1000 -r Y
7. ImEsCfeImport
If the system is configured with CFE, then run ImEsCfeImport as follows to load the default IDDUG:
unix> ImEsCfeImport -u prime -j {job name} -i $SPECPATH/src/rdbms/ImEs/cfe_default_iddug3.0
8. ImBaRvUpdateIds
Before running UpdateIds to synchronize the internal numbers in PDM with on-line operational database, the HIS
Copy will need to be ABANDONED. To abandon the HIS Copy, log in as user “imdba” and run the ImCmAbandon
script as follows from the new Main:
unix> ImCmAbandon -u prime -h {HIS Copy host_name}
Upon successful completion of ImCmAbandon, the host specified will have a status of ‘ABANDONED’ in the
copy_definition table of the Main.
Now run UpdateIds on the new Main:
unix> ImBaRvUpdateIds -u {Oracle username} -j {job name} -e Y
9. Re-synchronize the HIS Copy with the new Main by running the following commands on the new Main as UNIX
user ‘imdba’:
unix> ImCmResynch -u {oracle_username} -h {HIS Copy host_name} -d HIS
The new Main will now be available for data processing and propagation.
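Steps 5 through 9 above can be sketched as one reviewable sequence. This is a sketch only: the directory, job name, and HIS Copy host are placeholders, and every command is echoed rather than executed so the order can be checked before running the real scripts.

```shell
#!/bin/sh
# Consolidated view of the reverse-transfer steps above. Echoed only;
# arguments follow the command examples given in this chapter.
reverse_transfer_cmds() {
  dir=$1; job=$2; his_host=$3
  echo "ImBaRvIddug -l $dir -a abcdefg -h y -u prime"
  echo "ImBaImportDirectory -u prime -d $dir -c N -e 1000 -r Y"
  echo "ImEsCfeImport -u prime -j $job -i \$SPECPATH/src/rdbms/ImEs/cfe_default_iddug3.0"
  echo "ImCmAbandon -u prime -h $his_host"
  echo "ImBaRvUpdateIds -u prime -j $job -e Y"
  echo "ImCmResynch -u prime -h $his_host -d HIS"
}

reverse_transfer_cmds /tmp/iddugs rev_xfer hiscopy1
```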
4 Job Status
The job status identifies the state of the job. The actions a user may perform on a job depend on the current
state of the job. The possible job states are described below and the transitions between these states are detailed in
Table 4-1.
READY— The job is ready for either input of data or transfer to the offline operational database.
EDITING— Someone is currently connected to this job for data edit purposes. This is a transient state. When the
edit session is finished, the status of the job will be set to ready.
TRANSFER_IN_PROGRESS— In the process of transferring the changes associated with this job to the offline
operational database.
TRANSFERRED— This job has been transferred to the offline operational database.
ODB_ACTIVATE_IN_PROGRESS— In the process of activating this job from the offline operational database to
the online operational database.
If Remote Copies are defined and are ACTIVE, this job status also means that the job has been activated to the
online operational database and is pending activation to one or more Active Remote Copies.
ODB_ACTIVATED— This job has been activated into the online operational database.
If Remote Copies are defined, this status means that the job has been activated to the Remote Copies as well.
AA_ONLY_TRANSFERRED— This job affected only AA applications that have no activation program and all
affected applications have been transferred to the offline operational database.
AA_ONLY_SKIPPED— This job affected only AA applications that have no activation program and all affected
applications have been skipped.
ODB_DELETE_IN_PROGRESS— In the process of deleting the job from the operational database job log and
from Remote Copies (if defined).
ODB_DELETED— This job has been deleted from the operational database job log and from the Remote Copies
(if defined). The changes were applied to both the offline and online operational database, and to the Remote
Copies. The only option open to the user is to delete this job from the primitive database.
ODB_DELETE_FAILED— Deletion of this job from the operational database job log failed.
DELETING— In the process of deleting this job from the primitive database tables. This is a transient state. When
the delete is finished, this job no longer exists and the changes are permanent in both the primitive database and
the operational database.
DELETE_FAILED— Deleting of this job from the primitive database tables failed.
UNDOING— In the process of removing the changes made by this job from the primitive database tables. This is a
transient state.
UNDONE— The changes associated with this job have been removed from the primitive database tables. No
further changes may be made to primitive data through a job whose status is undone.
UNDO_FAILED— Undo of the changes associated with this job failed.
REDOING— In the process of re-applying the changes associated with this job to the primitive database tables.
This is a transient state. When the redo is finished, the status of the job is set to ready.
REDO_FAILED— Re-applying the changes associated with this job to the primitive database tables failed.
CANCELLING— In the process of cancelling this job from the primitive database tables. This is a transient state.
When the cancel is finished, this job no longer exists and the primitive database has been returned to its state prior
to this job.
CANCEL_FAILED— Canceling of this job from the primitive database tables failed.
CANCELLING_TASK— In the process of cancelling the last task of this job from the primitive database tables. This
is a transient state. When the cancel task is finished, the job state is set to ready.
CANCEL_TASK_FAILED— Canceling of the last task of this job from the primitive database tables failed.
OP_RDBMS_ONLY— This job affects only applications with an operational PDM. No transfer is required -- only
activation.
OTS_ONLY— Indicates this job has changes which only affect the OTS application.
ASR_WRITE_SUCCESS— Only relevant for CFE on AIX configurations. Transient state indicating a successful
write of the Bulk ASR Change Log file by ImAsrExportBulk.
Table 4-1: Job Status - Action and Results

Job Status                Action       Resulting Job Status
------------------------  -----------  ------------------------------------------------
READY                     Edit         EDITING
                          Transfer     TRANSFER_IN_PROGRESS or OP_RDBMS_ONLY
                          Undo         UNDOING
                          Cancel       CANCELLING or CANCELLING_TASK
TRANSFER_IN_PROGRESS      Transfer     TRANSFER_IN_PROGRESS
                          Cancel       TRANSFER_IN_PROGRESS
                          (Complete)   TRANSFERRED (transfer) or
                                       TRANSFER_IN_PROGRESS (transfer) or
                                       AA_ONLY_TRANSFERRED (transfer) or
                                       READY (cancel)
TRANSFERRED               Activate     ODB_ACTIVATE_IN_PROGRESS
                          Cancel       TRANSFER_IN_PROGRESS
ODB_ACTIVATE_IN_PROGRESS  Activate     ODB_ACTIVATE_IN_PROGRESS
                          Undo         ODB_ACTIVATE_IN_PROGRESS
                          (Complete)   ODB_ACTIVATED (activate) or
                                       ODB_ACTIVATE_IN_PROGRESS (activate to Remote Copies) or
                                       TRANSFERRED (undo)
ODB_ACTIVATED             Undo         ODB_ACTIVATE_IN_PROGRESS
                          Delete       ODB_DELETE_IN_PROGRESS
PDM_ACTIVATING            (Complete)   PDM_ACTIVATED
                          (Failed)     PDM_ACTIVATED (Copy which failed to activate will be ABANDONED)
PDM_ACTIVATED             Undo         PDM_UNDOING
                          Delete       PDM_DELETING
PDM_UNDOING               (Complete)   ODB_ACTIVATED
                          (Failed)     ODB_ACTIVATED (Copy which failed to undo will be ABANDONED)
ODB_DELETE_IN_PROGRESS    (Complete)   ODB_DELETED or ODB_DELETE_IN_PROGRESS
REDOING                   (Complete)   READY
                          (Failed)     REDO_FAILED
CANCELLING                (Complete)   Job cancelled. No longer in job log.
                          (Failed)     CANCEL_FAILED
DELETING                  (Complete)   Job deleted. No longer in job log.
                          (Failed)     DELETE_FAILED
AA_ONLY_TRANSFERRED       Delete       DELETING
CANCELLING_TASK           (Complete)   READY
                          (Failed)     CANCEL_TASK_FAILED
AA_ONLY_SKIPPED           Delete       DELETING
                          Cancel       CANCELLING
OP_RDBMS_ONLY             Activate     ODB_ACTIVATE_IN_PROGRESS
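As an illustration of how the table reads, the transitions out of the READY job state can be sketched as a simple shell dispatch. The next_status helper below is hypothetical, for illustration only; the real transitions are managed inside the PDM job-management scripts.

```shell
# Hypothetical sketch of the READY-state row of Table 4-1: each job-management
# action moves a READY job to the listed status.
next_status() {
    case "$1" in
        Edit)     echo "EDITING" ;;
        Transfer) echo "TRANSFER_IN_PROGRESS" ;;   # or OP_RDBMS_ONLY
        Undo)     echo "UNDOING" ;;
        Cancel)   echo "CANCELLING" ;;             # or CANCELLING_TASK
        *)        echo "unknown action: $1" >&2; return 1 ;;
    esac
}
next_status Transfer    # prints TRANSFER_IN_PROGRESS
```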
5 Application Status
Job Transfer knows which data has changed and to which applications that data belongs; it uses this knowledge to write
an entry into the Application Log for each affected application. First Job Transfer and then ODB Activation use the
Application Log to track what has been run and whether or not it completed successfully. The success or failure is
registered in the Application Log using one of the following statuses. For the transitions between these states, refer to
Table 5-1.
With Copy Management and PDM-Initiated Activation, Job Transfer has been enhanced to create an entry in the
Application Log for each Active Remote Copy that has been defined in the system. Depending on the type of data
change made, the activate_status in the Application Log is updated to either ‘Activation Required’ or ‘Not Required’.
READY— The application is ready to be transferred to the offline operational database for this job.
TRANSFERRING— In the process of transferring the changes associated with this job and application to the offline
operational database. This is a transient state.
TRANSFERRED— This job has been transferred to the offline operational database for this application.
TRANSFER_FAILED— A failure occurred during transfer for this application. The error must be corrected before
proceeding.
TRANSFER_SKIPPED— Transfer was skipped for this application and this job. Transfer should be run at some
later date to keep the primitive database and the operational database in sync.
NOT_NEEDED— Transfer is not needed for this application and this job. Activation will need to be run for this job
and application later. (Data changes affected this application and this job, but there is no transfer program for the
application, only an activation program.)
ODB_CANCELLING—In the process of undoing the changes from the offline operational database for this
application and removing the job from the operational database. This is a transient state.
ODB_CANCEL_FAILED—A failure occurred during cancel for this application and job.
TEST_UNDOING_FAILED— Undo of the changes from the test operational database failed. The changes still exist
in both the test and offline operational database.
ODB_ACTIVATING— In the process of activating this application for this job from the offline operational (or test
operational) to the online operational database.
ODB_ACTIVATE_FAILED— Activation from the offline operational (or test operational) database for this
application and job failed.
ODB_ACTIVATED— This application has been transferred from the offline operational (or test operational) to the
online operational database for this job.
ODB_UNDOING— In the process of undoing the changes from the online operational database for this application
and job. The changes still exist in the offline operational (or test operational) database. This is a transient state.
ODB_UNDO_FAILED— Undo of the changes from the online operational database failed. The changes still exist in
both the online and offline operational database (and the test operational if one exists).
INITSOS_NA_SERVER— No script exists to transfer this application's data changes to the online operational
database. To activate the data changes, the NA server must be INITSOSed.
DONE_AS_PART_OF_BA— A separate activation is not required to transfer the data changes to the online
operational database for this application. The activation is done as part of the BA application transfer.
PDM_SEND_REQ— This job will require an activation from the online operational database to the PDM copy.
PDM_SENDING—In the process of activating this job from the online operational database to the PDM copy.
PDM_SENT— This job has been activated from the online operational database to the PDM Copy.
PDM_SEND_FAILED — Activation for this job from the online operational database to the PDM Copy failed.
PDM_UNDOING — In the process of undoing the changes associated with this job from the PDM Copy. The
changes still exist in the online operational database. This is a transient state.
PDM_UNDO_FAILED — Undo of the changes from the PDM Copy failed. The changes still exist in the PDM Copy.
HIS_NOT_NEEDED — The changes associated with this job are not required to be activated to the HIS Copy as
the changes are not HIS related.
HIS_ACTIVATION_REQ — The changes associated with this job need to be activated to the HIS Copy.
HIS_ACTIVATING — In the process of activating the HIS related job changes to the HIS database on the HIS
Copy. This is a transient state.
HIS_ACTIVATED — The data changes have been successfully activated to the HIS database on the HIS Copy.
They are available for the collection programs running on the HIS database.
HIS_ACTIVATE_FAILED — The activation of the HIS data on the HIS Copy failed. The changes are not available
to the HIS database.
HIS_UNDOING — In the process of undoing the activation of the job from the HIS Copy. This is a transient state.
When the undo is finished, this job is no longer activated on the HIS Copy.
HIS_UNDO_FAILED — Undo of the activation of job from the HIS Copy failed. The changes are still activated to
the HIS database on the HIS Copy.
EA_NOT_NEEDED — The changes associated with this job are not required to be activated to the HIS Copy as
the changes are not EA related.
EA_ACTIVATION_REQ — The changes associated with this job need to be activated to the HIS Copy as there
are some EA related changes.
EA_ACTIVATING — In the process of activating the EA related job changes to the HIS database on the HIS Copy.
This is a transient state.
EA_ACTIVATED — The EA data changes have been successfully activated to the HIS database on the HIS Copy.
EA_ACTIVATE_FAILED — The activation of the EA data on the HIS Copy failed. The changes are not available to
the HIS database.
EA_UNDOING — In the process of undoing the activation of the EA related job changes from the HIS Copy. This is
a transient state. When the undo is finished, this job is no longer activated on the HIS Copy.
EA_UNDO_FAILED — Undo of activation of the EA related job changes from the HIS Copy failed. The changes
are still activated to the HIS database on the HIS Copy.
NOT_NEEDED/XFER SKIPPED— Indicates that an activation is unnecessary either because no activation is
defined for this application or because the user elected to skip the transfer.
ODB_DELETED
Table 5-1: Application Status Transitions

Application      Status Type      Action    Status Transitions
---------------  ---------------  --------  -----------------------------------------------
AGC              Transfer Status  Transfer  READY -> TRANSFERRING -> TRANSFERRED
                                  Cancel    TRANSFERRED -> ODB_CANCELLING -> READY
                 Activate Status  Activate  ODB_READY -> ODB_TEST_ACTIVATING ->
                                            ODB_TEST_ACTIVATED -> ODB_ACTIVATING ->
                                            ODB_ACTIVATED
                                            or DONE_AS_PART_OF_BA
                                  Undo      ODB_ACTIVATED -> ODB_UNDOING ->
                                            ODB_TEST_ACTIVATED -> ODB_TEST_UNDOING ->
                                            ODB_READY
                                            or DONE_AS_PART_OF_BA
                 Delete Status    Delete    ODB_DELETE_READY -> ODB_DELETING -> ODB_DELETED
                                            or DONE_AS_PART_OF_BA
RDBMS            Transfer Status  Transfer  NOT_NEEDED
                 Activate Status  Activate  ODB_READY -> ODB_ACTIVATING -> ODB_ACTIVATED
                                  Undo      ODB_ACTIVATED -> ODB_UNDOING -> ODB_READY
                 Delete Status    Delete    ODB_DELETE_READY -> ODB_DELETING -> ODB_DELETED
AA Applications  Transfer Status  Transfer  READY -> TRANSFERRING -> TRANSFERRED
                                  Skip      READY -> TRANSFER_SKIPPED
                 Activate Status  Activate  INITSOS_NA_SERVER
                 Delete Status    Delete    NOT_NEEDED
6 Job Interlocks
Job interlocks prevent different jobs from changing the same data. For example, if job ‘crystal’ had a job interlock on the
B1 name ‘crystal’, then only job ‘crystal’ would be allowed to modify any data in the ‘crystal’ B1 hierarchy. Note that
this includes all lower levels of the hierarchy as well (e.g., B2, B3, Element, and Info data for ‘crystal’).
Job interlocking is performed automatically whenever any user or application tries to change data. This includes import
forms and SQL*Plus. If a job does try to modify data that is locked by another job, an error message that identifies the
job holding the job interlock for this data is written to the MSG table and the change is rolled back.
Job interlocks are released when a job is deleted or cancelled.
B1 hierarchy data will be locked at the B1, B2 or B3 level depending on what data was modified. This locking level can
best be illustrated using some examples:
Example 1.
If one job modified B1=crystal, then no other job may modify this B1 nor any of the B2, B3, element or infos below this
B1. A different job would be able to modify the data for any other B1.
Example 2.
If one job modified B1=crystal, B2=220K then no other job may modify this B2 nor any of the B3, element or infos
below this B1/B2. A different job would be able to modify the data for any other B2.
Example 3.
If one job modified B1=crystal, B2=220K and B3=BB1A, then no other job may modify this B3 nor any of the elements
or infos below this B1/B2/B3. A different job would be able to modify the data for any other B3.
Example 4.
If one job modified data for B1=crystal, B2=220K, B3=BRCST3, Element=CB 1, Info=CB trip, then the data is locked
for B3=BRCST3. No other job may modify this B3 nor any of the elements or infos below this B1/B2/B3. A different job
would be able to modify the data for any other B3.
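The rule behind these examples is that the lock is taken at the deepest of the B1/B2/B3 keys named by the change, and element or info changes lock at their B3 level. A minimal sketch of that rule, using a hypothetical lock_level helper:

```shell
# Hypothetical sketch of the interlock level rule: the lock is taken at the
# deepest of B1/B2/B3 named by the change; element and info changes pass their
# B3 and therefore lock at the B3 level.
lock_level() {
    b1=$1; b2=$2; b3=$3          # an empty argument means the key was not specified
    if [ -n "$b3" ]; then echo "B3"
    elif [ -n "$b2" ]; then echo "B2"
    else echo "B1"
    fi
}
lock_level crystal               # prints B1
lock_level crystal 220K          # prints B2
lock_level crystal 220K BRCST3   # prints B3
```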
transferred, activated, and deleted before any other job may edit voltage set range data.
Job Name  Job Id  Task Name  Task Id  Lock Type  Lock Level  Lock Key1  Lock Key2  Lock Key3
The Job Interlock report may also be run from the REP form within Oracle Forms. See the PDM Forms User Guide -
U0325.
unix> cd $SPECPATH/src/schema/pdm/owner/prime/table
ORACLE_BASE= /u01/app/oracle
DBA= /u01/app/oracle/admin
ORACLE_PATH= /home/p/form
SQLPATH= /home/p/sys/rdbms
ORACLE_SERVER_HOME= /u01/app/oracle/product/10.2.0/db_1
ORACLE_ASM_HOME= /u01/app/oracle/product/10.2.0/asm_1
ORACLE_CRS_HOME= /u01/crs/oracle/product/10.2.0/crs_1
ORACLE_OID_HOME= /u01/app/oracle/product/10.1.4/oid_1
ORACLE_CLIENT_HOME= /u01/app/oracle/product/6.0.8/forms_1
TWO_TASK=
EMP_CONNECT= emp.world
GCS_CONNECT= gcs.world
OS_CONNECT= os.world
RDBMS_TOP_PN= /home/p
INCLUDE_TOP_PN= /home/p
SINCLUDE_PN= /home/p/src/rdbms/ImCustom
TABLESPACE_PN_FN= /home/p/sys/rdbms/ImOraDataTS
RUN_PN= /home/imdba
Current pwd = /home/imdba
For a description of the environment variables displayed by echora, use the command:
unix> echora ?
For a list of variables pertaining to PDM schema creation (ImOraSchema.plx), use the command:
unix> echora -i
Refer to the echora man page for more information on script options.
When echora is used with the help option, it gives a description of what each environment variable is used for. For
example:
unix> echora help
The database in Oracle that will be accessed is this SID.
ORACLE_SID= emp1
ORACLE_BASE= /u01/app/oracle
For OFA, the location of the Oracle software owner’s admin files.
DBA= /u01/app/oracle/admin
Many of the PDM Interface scripts require that execution be initiated from what is known as the “run” directory. The run
directory is the value of the RUN_PN environment variable. echora may be used to see the current value of RUN_PN.
If the script is not initiated from the run directory, the following message will be displayed:
The current working directory is not the allowed run directory.
CWD= /home/p/src/schema/pdm/owner/prime/table
Change what is in environment variable RUN_PN or move via:
cd /home/imdba
The run directory was established to prevent the output files generated by scripts from cluttering up directories where
their existence may cause harm, most notably the directories that contain Oracle schema source. The user may
override the default run directory location by placing the absolute path of the desired directory into the RUN_PN
environment variable.
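The check described above can be sketched as follows. This is a hypothetical guard written for illustration; the actual PDM scripts implement their own version of this test, and the RUN_PN value shown is only an example.

```shell
# Hypothetical sketch of the run-directory guard: refuse to proceed unless the
# current working directory matches RUN_PN (value below is illustrative).
check_run_dir() {
    if [ "$(pwd)" != "$RUN_PN" ]; then
        echo "The current working directory is not the allowed run directory."
        echo "CWD= $(pwd)"
        echo "Change what is in environment variable RUN_PN or move via:"
        echo "cd $RUN_PN"
        return 1
    fi
}
RUN_PN=/home/imdba
check_run_dir || echo "guard fired"
```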
sql> @ImJmForceReady
Enter value for jobname: mary
Note: The ImJmForceReady script always forces the job state to ‘READY’ in addition to clearing the user locks. Thus,
this script should only be used if the job state is ‘EDITING’. For other job states, it is recommended to use the PDM job
management forms to reset the job states.
Locking and unlocking can also be performed from the form JOBL within Oracle Forms. See the PDM Forms User
Guide - U0325.
Hint. If you wish to create a new job and immediately lock the job, use the ImJmLock script using a new job name. This
will both create and lock the job. No other user will have a chance to connect to the job in between these two steps.
ahead of Antipode). As an alternative, the Voltage Set data can be stored at the beginning of the first importable
file. Using the previous example, the Voltage Set records could be placed at the beginning of the file called
Antipode.
Each B1 hierarchy (i.e., each station) should be stored in its own IDDUG file.
All the reference records should be stored in one or more IDDUG files, and those files should be identified in such
a way as to ensure that they will be imported last—following the import of all other data (including Advanced
Application data).
The RTDS/CFE data should be stored in one or more IDDUG files.
Splitting up the data into multiple files is important for the following reasons. The operational database job
management is sized to handle a specific amount of data. One change to the primitive database may result in many
changes to the operational database. Therefore, keeping the amount of data in a job to a reasonable size will mean
that job transfer will not exceed maximums. Also, dividing B1 hierarchies into separate files will keep hierarchy errors
(e.g., record types out of order) from propagating beyond one B1.
Another recommendation is to use the B1 name as the job name.
primitive database (delete). A record marked for removal is deleted; all other records are handled as updates to an
existing row or inserts of a new row. The record types are merged in the order B1, B2, B3, Element, Info, and finally the
RTDS record types. (CFE data is modeled using the RTDS record types.) As the primitive data tables are updated, the
PDM kernel triggers both validation and change detection.
Change detection also occurs during import. Images of the new or modified primitive data table rows are written to the
change log. This allows job transfer to determine what needs to be transferred as well as allowing the user to “undo”
the changes made by the import job if so desired.
It is possible to disable change detection during import but this should only be done when initializing an empty primitive
database using reverse transfer.
CAUTION! If change detection is turned off, there will be no record of what has changed in the primitive
database that needs to be transferred to the operational database. Also, the data in this job will not be able to
be transferred to remote systems via Copy Management job processes.
interlocking keeps import from touching data already being edited by other jobs and issues an error message if import
tries to make such a change.
Although it is possible to disable interlocking during import, the practice is strongly discouraged.
CAUTION! Overriding job interlocks puts the responsibility for maintaining data integrity on the user. Both the
primitive database tools and job transfer rely on job interlocking as a way of sequencing events and overriding
the interlocks can cause serious data integrity problems.
This override exists to provide a way for the user to get a “quick” change into the primitive database and across to the
online database without having to cancel outstanding jobs and without having to transfer extraneous pieces of data.
Consider the situation where a station’s data is undergoing an extensive overhaul in the primitive database when the
need arises to make a “quick” change to that station’s data. Without the override, the user would have to make the
“quick” change under the same job name as the overhaul and would force job transfer, which only transfers data by the
job, to prematurely activate overhauled data along with the “quick” change. The only other option would be to cancel
the overhauling job, thereby losing the original set of changes. By using the override and a fair amount of caution, a
skilled user could make the “quick” changes under a new job name and transfer just that job’s changes to the online
system.
If the locking job has no purpose (is the result of an import of reverse transferred data, for example), it is recommended
that the job status be changed to ‘ODB_DELETED’ and the job be deleted from the primitive database to release the
lock.
When a B3 is inserted, additional elements are inserted based on the block type and the topology type. The element
type for the auto elements is set to -null-. The user must set the element type to the desired value. If the IDDUG file
contains an element with the same name as the auto element, then the element type and the other attributes are
updated to match the contents of the IDDUG record.
Note about forms. When using forms some additional elements are automatically inserted that are not inserted during
import. These are the analog elements based on the network element group and their corresponding binary element.
When an element is inserted and the element name indicates that this is a binary element, then the associated binary
element is automatically inserted with the correct element type.
When an element is inserted and it contains an element type, or if the element type is updated from -null- to a value,
some infos will be automatically inserted. Specifically, those digital infos whose default value is non-zero. If the IDDUG
file contains an info with the same name as the auto info, then the attributes of the info are updated to match the
contents of the IDDUG record. Also, if the IDDUG file contains a digital info record containing only values identical to
the default values, this will not be inserted into Oracle. This is known as sparse digital.
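The sparse-digital rule can be sketched as follows. The helper and the default-value string are hypothetical, chosen only to illustrate the decision import makes.

```shell
# Hypothetical sketch of the sparse-digital rule: a digital info record whose
# values all match the defaults is skipped rather than inserted into Oracle.
DIGITAL_DEFAULTS="0 0 0"
store_digital_info() {
    if [ "$1" = "$DIGITAL_DEFAULTS" ]; then
        echo "skipped (sparse digital)"
    else
        echo "inserted"
    fi
}
store_digital_info "0 0 0"   # prints: skipped (sparse digital)
store_digital_info "1 0 0"   # prints: inserted
```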
RTDS equipment cannot be added or deleted via the RTDS pseudo station. This station is created
so that characteristics such as the technological area for signalling, one-line display names, and local and global
decision tables may be modified to suit the project.
The Job Trace report may also be run from the REP form within Oracle Forms. See the PDM Forms User
Guide - U0325.
3. If any errors were issued, generate a report of the error messages.
sql> @ImRptMsg
Enter value for report_output: crystal.msg
Enter value for job_name: crystal
Enter value for task_id: %
An example import error message is:
Msg ID  Date  Time  Message  Source  S  Code  Site/Kid
The Error Message report may also be run from the REP form within Oracle Forms. See the PDM Forms User
Guide - U0325.
4. Correct the errors in the IDDUG data file.
unix> vi crystal.iddug
5. If any errors were detected and corrected in the IDDUG data file, cancel the previous import job to remove any
changes made to primitive data. This will also clear out the error message, trace messages and import bulk
table rows for this job.
unix> ImJmCancel -u oracle_username -j crystal [-o output_file
(default=ImJmCancel.out)]
This script can also be run from the PDM (Main) form within Oracle forms. See the PDM Forms User Guide -
U0325.
Note, at this point the primitive database should be in the state it was prior to the previous import.
6. Rerun import. I.e., start again at step 1.
.
lines deleted here
.
0 errors were issued for 22 DIGITL records. 07:43:45
pkg_import_ba.main: Deleting rows from import_bulk for 07:43:45
job mary.
End of Import. Normal Termination. 0 errors were issued 07:43:45
for 676 total records.
11 Communications Import
Import of communications data is much like Base Applications Import. The main difference is the command executed
to perform the import. Be sure to run import for the B1 hierarchy before running any communication imports.
Check for inconsistencies within the data by running the ImGvICCP script.
unix> ImGvICCP -u oracle_username -j iccp [-v Row Level Validation
Flag (Y/N) Defaults to N] [-o output_file (default=ImGvICCP.out)]
Global validation can also be run from the GLOV form within Oracle Forms. See the PDM Forms User Guide - U0325.
Check for inconsistencies within the data by running the ImGvLs script.
unix> ImGvLs -u oracle_username -j LdShed [-v Row Level Validation Flag
(Y/N) Defaults to N] [-o output_file (default=ImGvLs.out)]
Global validation can also be run from the GLOV form within Oracle forms. See the PDM Forms User Guide - U0325.
Check for inconsistencies within the data by running the ImGvVr script.
unix> ImGvVr -u oracle_username -j VoltRed
[-v Row Level Validation Flag (Y/N) Defaults to N]
[-o output_file (default=ImGvVr.out)]
Global validation can also be run from the GLOV form within Oracle forms. See the PDM Forms User Guide - U0325.
Default=ImEaImport.out
[-e] Maximum number of errors issued before stop.
Default=100
[-c] Change detection flag (Y/N) Default=Y
[-l] Job Interlock flag (Y/N) Default=Y
[-t] Import trace level. Default=0
A specific example to import Energy Accounting data using EnergyA as the job name is:
unix> ImEaImport -u oracle_username -j EnergyA -i EnergyA.iddug -e
99
Import can also be run from the PDM (Main) form within Oracle Forms. See the PDM Forms User Guide - U0325.
Follow the directions for Base Applications Import to check results.
Check for inconsistencies within the data by running the ImGvEa script.
Once all the files are in the $SPECPATH/par/adb directory, to run the pre-processor:
unix> cd $ADBHOME
unix> AdbChkr
The validation errors and summary information produced by the pre-processor are in the file $ADBHOME/AdbChkr.out.
The pre-processor also creates the import datafiles: impPWA.dat, impNWA.dat, impSCA.dat, impOTS.dat, impAI.dat,
impDSA.dat in the $ADBHOME directory. These files are used by AA import.
16.2 AA Import
Run the AA import program using the script ImAaImport. AA Import uses the AdbChkr output data files to insert or
update the AA primitive database tables. If you run ImAaImport for subsystem ALL it will first check to make sure there
are no other jobs with AA changes that exist in the primitive database. All AA jobs must be either cancelled or deleted
Default=ALL
16.3 AA Export
AA Export reads from the AA primitive database tables and writes an ASCII file back in IDDUG format. All six AA
scopes (PWA, NWA, SCA, OTS, AI, DSA) are exported. The output is written to the $ADBHOME directory.
unix> cd $ADBHOME
unix> ImAaExport -u oracle_username
Output path/files are $ADBHOME/ImAaExport.out and
$ADBHOME/DBNETD.export
You will be prompted for the Oracle password when you run the script.
The output file $ADBHOME/ImAaExport.out will tell you the start and end date and time of the last ImAaExport run.
The ASCII file created by export is $ADBHOME/DBNETD.export. Base Applications Export (ImBaExport) is used to
export the reference data.
The generated export files are written to the directory specified by the -a option. The naming convention used for
the export files is xxx.exp, where xxx is the name of the data which is exported. For example, file Minnetonka.exp
receives the data for the station Minnetonka.
Note, it is recommended that a new job name be chosen for export instead of using an existing job.
2. Check the ImBaExport script output file to verify that the export was successful. Also log into SQL*Plus and check
the job trace. This can be done while ImBaExport is still executing and/or after it has finished.
unix> sqlplus oracle_username/password
sql> @gtrace jobname
MESSAGE TIME
---------------------------------------------------- --------
Start of B1 export for “Vienna”. 12:27:41
Processing STATN record types. 12:27:41
Processing LN_BLK record types. 12:27:41
..
Lines deleted here
..
End of Export run. Normal Termination. 12:28:13
Job Trace information may also be seen on the JOBT form within Oracle Forms. See the PDM Forms User Guide -
U0325.
3. Cancel the export job.
unix> ImJmCancel -u oracle_username -j jobname
[-o output_file (default=ImJmCancel.out)]
This script can also be run from the PDM (Main) form within Oracle forms. See the PDM Forms User Guide -
U0325.
Note: ImJmCancel is used rather than ImJmDelete so that the job may be removed without having to first change the
status to ODB_DELETED. Use ImJmCancel only if this job was used for export purposes only. If this job was used for
other purposes, then use ImJmDelete instead.
17.2 B1 Export
A specific example to export B1 crystal using the B1 name as the job name follows. The generated export filename is
./export/crystal.exp.
unix> ImBaExport -u oracle_username -j crystal -h b1 -k crystal -a
export
To export all B1 data use the following. One file for each B1 is created in directory ./export_b1.
unix> ImBaExport -u oracle_username -j exp_b1 -h b1 -k all -a
export_b1
Remember to check the output file ImBaExport.out and the job trace to verify that the export ran successfully. Also
remember to cancel the job when finished.
one file for Base Applications-only references or for those references that appear in both Advanced Applications and
Base Applications (called reference_ab.exp), and one file for all other references (called reference_ba.exp).
Remember to check the output file ImBaExport.out and the job trace to verify that the export ran successfully. Also
remember to cancel the job when finished.
18 Communications Export
Export of communications data is much like Base Applications Export. The main difference is the command executed
to perform the Export.
(default=ImLsExport.out)]
[-a Directory to receive exported IDDUG files
Default=pwd/export###### where ######=UNIX pid]
Export can also be run from the PDM (Main) form within Oracle forms. See the PDM Forms User Guide - U0325.
Follow the directions for Base Applications Export to check results and cancel the export job.
ORACLE_HOME=/usr/oracle
ORACLE_SID =emp1
ORACLE_PATH=/usr/emlib/public/im/bin:/home/s/lib/form
MENU_PN =/home/s/lib/form
Starting rdbms_interface main form and a PDM Interface window
should appear on your screen
If a window does not appear on your screen, verify that the UNIX environment variable DISPLAY contains the name of
your screen. If you received the following error message from ImForm, then use the xhost command to allow windows
from another server to be displayed on your screen.
24 Global Validation
Global validation verifies the contents of the primitive database. Global validation includes database-level validation
and may optionally include entry-level validation. If any errors are detected by global validation, error messages are
written to the MSG table.
Entry-level validation checks are the validation checks that are done when data is entered into the primitive database
tables. These are the same checks that are done during import or during the use of Forms. By default, the entry-level
checks are not performed. To perform these checks, include the -v Y option on the UNIX script.
Database-level validation checks are additional validity checks that ensure that the primitive database as a whole is
complete and consistent.
An example of a completeness check is the check that verifies that the number of absolute limits that are actually
defined for an element is the correct number for this element name and element type combination. If too many or too
few absolute limits are defined, an error message is issued.
An example of a consistency check is the check on the reference table that verifies that the B1/B2/B3/Element names
entered do actually exist in either the SCADA tables or the Network Application tables.
Global validation should be executed after changing data in the primitive database and prior to job transfer. It is the
user’s responsibility to fix any errors identified by global validation.
Job global validation is only allowed if there are entries in the change log associated with the job. This is necessary
because the entries in the change log are used to determine which tables and rows were affected by the job and need
to be re-validated.
Job global validation executes as a separate task in the job with a task name of ‘job_validation’. All of the error
messages issued by job global validation are associated with this task.
A user might run job global validation to generate an up-to-date listing of the error messages associated
with the job after correcting previously reported errors.
ImGvJob is the UNIX script that executes global validation for a job. An example usage is:
unix> ImGvJob -u oracle_username -j jobname -t % -d Y [-o
output_file (default=ImGvJob.out)]
Global validation can also be run from the GLOV form within Oracle forms. See the PDM Forms User Guide - U0325.
Note: if a % is entered as the value of the -b parameter, all B1s are validated.
Another global validation check looks for element names in the Oracle database that will be interpreted as duplicates within the on-line operational database. This
check is necessary because of the difference in the way Oracle and the on-line operational database handle upper and
lower case, blanks, and underscores. In Oracle the differences between these pairs are significant and Oracle will
interpret them as unique:
“STAR1” and “star1”
“Mary 2” and “Mary2”
“Tbear_1” and “Tbear1”
In the on-line operational database, however, these pairs are interpreted to be identical.
This script will check the Oracle names and will issue messages if the names will form duplicates due to upper/lower
case, underscores, or blanks.
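The effect of this normalization can be sketched with standard UNIX tools. This is only an illustration of the duplicate rule (the sample names are the pairs listed above); it is not the actual validation script:

```shell
# Illustrative only -- not a PDM script. Normalize names the way the on-line
# operational database does (upper case, blanks and underscores removed) and
# report any names that collapse to the same key.
printf '%s\n' "STAR1" "star1" "Mary 2" "Mary2" "Tbear_1" "Tbear1" "Unique9" \
  | tr 'a-z' 'A-Z' | tr -d ' _' | sort | uniq -d
```

The three duplicate keys reported (MARY2, STAR1, TBEAR1) correspond to the three pairs above; Unique9 is not reported because it collapses with nothing.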
An example usage is:
Global validation for Energy Accounting validates all Energy Accounting data and writes error messages to the MSG
table. It performs the database-level validation checks and the entry-level checks if requested.
Energy Accounting global validation executes as a separate task in the job with a task name of ‘ea_global_val’. All of
the error messages issued by Energy Accounting global validation are associated with this task.
ImGvEa is the UNIX script that executes Energy Accounting global validation. An example usage is:
unix> ImGvEa -u oracle_username -j EnerAcc -v N [-o output_file
(default=ImGvEa.out)]
Global validation can also be run from the GLOV form within Oracle forms. See the PDM Forms User Guide - U0325.
ImGvICCP
ImGvLs
ImGvUls
ImGvVr
ImGvEa
Each secondary script run within ImGvBaAll executes as a separate task within the job. Each task name is derived
from the type of data being validated. See the sections above for the exact task name for each global validation script.
All of the error messages issued by ImGvBaAll are associated with these tasks.
ImGvBaAll is the UNIX script that executes all the base application global validation scripts. An example usage is:
called PORT is being deleted, all records that reference PORT, no matter where they reside in the data base, will be
removed or revised.
Delete actions that originate from a form or from an import automatically run with cascade delete turned on. Delete
actions that originate during an SQL*Plus session automatically run with cascade delete turned off. To turn on cascade
delete from within an SQL*Plus session, enter the following command:
sql> exec pkg_jobm.setpv_cascade_delete('Y');
sql> delete from b1 where b1_name = 'Star1'; /* or some other
delete statement */
Cascade delete will be active for the duration of the session.
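A minimal sketch of that session shape follows. It only prints the SQL that would be typed into SQL*Plus; nothing is executed against a database:

```shell
# Illustrative only: the cascade-delete setting and the delete statement must
# be issued in the same SQL*Plus session, since the setting lasts only for
# the duration of the session. This just prints the statements for review.
cat <<'EOF'
exec pkg_jobm.setpv_cascade_delete('Y');
delete from b1 where b1_name = 'Star1';
EOF
```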
1 The dependency of child records, as defined in the FK_TABLE and FK_COLUMN tables, does not necessarily ensure
that cascade delete will occur since setting the CASCADE_DELETE field in the FK_TABLE table to “N” turns off the
cascade delete function for that relationship. See the U0385 - PDM Interface Maintenance User Guide for more
information.
“ABSOLUTE_LIMIT”.
..... ....................... Cascade Delete touched subsystem “BA” Table
“ANALOG”.
..... ....................... Cascade Delete touched subsystem “BA” Table
“B3”.
..... ....................... Cascade Delete touched subsystem “BA” Table
“ELEMENT”.
..... ....................... Cascade Delete touched subsystem “BA” Table
“REFERENCE”.
..... ....................... Cascade Delete touched subsystem “BA” Table
“LS_FDR”.
..... ....................... Cascade Delete touched subsystem “RTDS” Table
“RTU_SCAN_ORDER”.
============================ END OF EXCERPT ============================
Notice that the actual task name has been replaced by “task_DEL” and that the actual start and end sequence
numbers have been replaced by “aaa” and “bbb”. When the cascade delete routine starts a delete, it generates a new
task name by combining the current task name with the suffix “_DEL”. If several deletes occur within the same job,
each delete task will be assigned the same name. These sibling tasks can only be distinguished, one from the other,
by looking at their task id numbers.
NOTE: Each cascade delete operation causes a new task to be created. This is done so that the user can cancel the
effects of a single delete operation -- even if other changes have already been made using the form.
If the user needs to see the data from the individual records touched by a cascade delete operation, the utility
ImRptChgLog can be used.
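The task-naming rule described above can be sketched in one line (a hypothetical illustration, not a PDM utility):

```shell
# The cascade delete routine derives the delete task name by appending "_DEL"
# to the current task name; sibling deletes within the same task therefore
# share a name and can be distinguished only by their task id numbers.
task=import_run        # hypothetical current task name
printf '%s\n' "${task}_DEL"
```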
In addition to the standard child/parent relationship, the Calc Operand record has a second set of fields that also forms
a child-like relationship with the parent tables. These secondary fields, operand_station_name, operand_voltage_level,
operand_b3, operand_element_name and operand_info_name, must be considered during a cascade delete.
Using an example where the B3 form performs a cascade delete operation on the TA where:
the B1 name = PORT,
the B2 name is 230K,
and the B3 name is PRTLD2,
any Calc Operand record whose b1_name/b2_name/b3_name matches these values or whose
operand_station_name/operand_voltage_level/operand_b3 matches these values will be deleted. Here, too, the
“sparse” digital exception applies. If the delete action originates with a digital info, it will not cascade into the calc
operand table. If this calc operand record needs to be deleted, it must be removed by hand.
In addition to the standard child/parent relationship in which the principal key fields, b1_name_from_side,
b2_name_from_side, b3_name_from_side, and term_name_from_side, map back to a higher level parent record, the
Reference record has a second set of fields that also forms a child-like relationship with the parent tables. These
secondary fields, b1_name_to_side, b2_name_to_side, b3_name_to_side, and term_name_to_side, must be
considered during a cascade delete.
Using an example where the B3 form performs a cascade delete operation on the TA where:
the B1 name = PORT,
the B2 name is 230K,
and the B3 name is PRTLD2,
any Reference record whose b1_name_from_side/b2_name_from_side/b3_name_from_side matches these values or
whose b1_name_to_side/b2_name_to_side/b3_name_to_side matches these values will be deleted.
If cascade delete is turned off between Base Applications and Advanced Applications (see “Scope of Cascade Delete”
on page 102), a delete of a B3 will not delete a reference if that reference points to an existing Advanced Applications
equipment. Similarly, a delete of an Advanced Applications equipment will not delete a reference if that reference
points to an existing Base Applications equipment.
If cascade delete crosses between Base Applications and Advanced Applications, all affected references will be
deleted.
record, the scan order record is NOT blanked out. Though this may seem strange, it is based on the fact that digital
records are stored “sparingly” in the PDM database. If the parent element uses default digital information, there is no
need to store the digital info in the PDM database. Later, however, when the element is transferred to the off-line
operational database, the digital information is expanded to include the default infos. Therefore, removing a digital info
from the Oracle database does not imply that the digital info does not exist. If this scan order record needs to be
blanked out, it must be done by hand.
1 Cascade delete between Base Applications and Advanced Applications is unidirectional. A cascade delete originating
in the Base Applications hierarchy can ripple over into the Advanced Applications hierarchy. A delete in the Advanced
Applications hierarchy, however, does not cross over into Base Applications.
Cascade delete can also be defined to cross between the two subsystems at specific levels. For instance, the mapping
for the delete can begin at the company level, the station level, or any other selective level.
See the U0385 - PDM Interface User Guide for more information on how to parameterize a cascade delete.
26.1 Modeling
ImTAModel is a PDM Interface UNIX script that copies a TA and its hierarchical data in the base applications and/or
the advanced applications database.
ImTAModel will prompt the user for the Oracle Password if it has not been passed to ImTAModel in environment
variable PASSWD.
Modeling can be done on one of four levels - B1/station, B2/voltage level, B3/equipment or Element - and may or may
not include the references.
The TA modeling task will appear as task name ‘ta_model’ in the task list for the job and all generated messages will
be associated with that job/task.
ImTAModel is the UNIX script that executes the modeling function. An example usage is:
unix> ImTAModel -u username -j myjob -s1 star1 -s2 110v -d1 star2 -
d2 110v -l 2 -aa N -r N
Modeling can also be run from the Model form within Oracle forms. See the PDM Forms User Guide - U0325.
There are several switches that you can set to control the scope of the model that you are creating.
To begin with, you must set the -l or level switch. This switch can be set to:
1 - B1/Station Model
2 - B2/Voltage Level Model
3 - B3/Equipment Model
4 - Element Model
By default, the level is assumed to be level one.
Depending on what model level you choose, you must also select one or more pairs of source and destination
switches. These switches include:
s1/d1 - source and destination B1 (station) name. This switch is required for all modeling levels.
s2/d2 - source and destination B2 (voltage level) name. This switch is required for modeling levels 2, 3, and 4.
s3/d3 - source and destination B3 (equipment) name. This switch is required for modeling levels 3 and 4.
s4/d4 - source and destination Element name. This switch is required for modeling level 4.
If the name associated with any source or destination switch contains an embedded blank, be sure to enclose that
name in double quotes (e.g., “b1 name”).
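The need for the double quotes can be demonstrated with the shell alone (no PDM script involved):

```shell
# Without quotes the embedded blank splits one name into two arguments;
# with quotes the name is passed as a single switch value.
set -- -s1 b1 name     # unquoted: the shell sees 3 arguments
unquoted=$#
set -- -s1 "b1 name"   # quoted: the shell sees 2 arguments
quoted=$#
printf '%s %s\n' "$unquoted" "$quoted"
```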
In addition to setting the level option, you can direct the program to leave out specific pieces of data by setting the
hierarchy switches. These switches are:
ba - Copy the data in the base application hierarchy
aa - Copy the data in the advanced application hierarchy
r - Copy the reference data
By default the hierarchy switches are all set to Y, meaning that all data will be copied.
For a complete discussion of the switch options, see the man page for this script.
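As a sketch of how the switches fit together, the following hypothetical helper (not part of the PDM Interface) assembles, but does not run, an ImTAModel command line for a level-2 model; the user name and job name are placeholders:

```shell
# Hypothetical helper: build an ImTAModel command line for a level-2
# (B2/voltage level) model and print it so the switches can be reviewed
# before the script is actually run.
build_model_cmd() {
  printf 'ImTAModel -u %s -j %s -s1 %s -s2 %s -d1 %s -d2 %s -l 2 -aa N -r N\n' \
    "$1" "$2" "$3" "$4" "$5" "$6"
}
build_model_cmd scott myjob star1 110v star2 110v
```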
----------------------------------------------------
Select the ImRpt Step to Perform on emp1
----------------------------------------------------
1) ImRptApplDefn
2) ImRptChgLog (extra wide)
3) ImRptChgLogHist (extra wide)
4) ImRptColumn
5) ImRptDomain
6) ImRptFk
7) ImRptIddug
8) ImRptIlock
9) ImRptIlockCtrl
10) ImRptJobLog
11) ImRptJobLogHist
12) ImRptJobTrace
13) ImRptMsg
14) ImRptMsg179 (extra wide)
15) ImRptMsgOk
16) ImRptMsgNoJobId0
17) ImRptTable
18) ImRptTablespace
19) ImRptTransferCtrl
20) ImRptCopyDefHist
21) ImRptCopyDefLog
22) ImRptHisChgLog (extra wide)
23) ImRptHisChgLogHist (extra wide)
24) ImRptCalcOpDelete
q) Exit this Session.
----------------------------------------------------
Enter choice:
These reports may also be run from the REP form within Oracle Forms. See the PDM Forms User Guide - U0325.
----------------------------------------------------
Select the ImBaRpt Step to Perform on emp1
----------------------------------------------------
1) ImBaRptB1
2) ImBaRptRef
3) ImBaRptRtds
4) ImBaRptRtu
5) ImBaRptAaB1Map
6) ImBaRptVol
Reports 7-14 are not available because TCI is not configured.
15) ImBaRptCfe
16) ImBaRptRegBus
17) ImBaRptCharCurve
18) ImBaRptRefDesc
19) ImBaRptTopDesc
20) ImBaRptMsCCNam
21) ImBaRptAdPreset
22) ImBaRptFxWGName
23) ImBaRptElinName
24) ImBaRptBtypname
25) ImBaRptElNameType
26) ImBaRptEtypname
27) ImBaRptTotyname
28) ImBaRptRtuConn
q) Exit this Session.
----------------------------------------------------
Enter choice:
These reports may also be run from the REP form within Oracle Forms. See the PDM Forms User Guide - U0325.
----------------------------------------------------
Select the ImCommRpt Step to Perform on emp1
----------------------------------------------------
1) ImCommRptICCP
2) ImCommRptB1ICCP
a) ImCommRptICCP & ImCommRptB1ICCP
q) Exit this Session.
----------------------------------------------------
Enter choice:
These reports may also be run from the REP form within Oracle Forms. See the PDM Forms User Guide - U0325.
----------------------------------------------------
Select the ImLsRpt Step to Perform on emp1
----------------------------------------------------
1) ImLsRptAreaSeqFdr
2) ImLsRptSeqFdr
3) ImLsRptFdrSwh
4) ImLsRptUnused
5) ImLsRptFdrInArea
q) Exit this Session.
----------------------------------------------------
Enter choice:
These reports may also be run from the REP form within Oracle Forms. See the PDM Forms User Guide - U0325.
----------------------------------------------------
Select the ImUlsRpt Step to Perform on emp1
----------------------------------------------------
1) ImUlsRptFreqRlyFdr
2) ImUlsRptRlyFdr
3) ImUlsRptFdrSwh
4) ImUlsRptUnused
5) ImUlsRptFdrInFreq
q) Exit this Session.
----------------------------------------------------
Enter choice:
These reports may also be run from the REP form within Oracle Forms. See the PDM Forms User Guide - U0325.
----------------------------------------------------
Select the ImVrRpt Step to Perform on emp1
----------------------------------------------------
1) ImVrRptBusElem
q) Exit this Session.
----------------------------------------------------
Enter choice:
These reports may also be run from the REP form within Oracle Forms. See the PDM Forms User Guide - U0325.
----------------------------------------------------
Select the ImEaRpt Step to Perform on emp1
----------------------------------------------------
1) Energy Accounting
q) Exit this Session.
----------------------------------------------------
Enter choice:
These reports may also be run from the REP form within Oracle Forms. See the PDM Forms User Guide - U0325.
sql> @ImJmEdit
Enter value for jobname: mary
sql> @ImJmEndEdit
sql> exit
unix>
If you mistakenly exit an SQL*Plus editing session without using the ImJmEdit.sql script, the job log will incorrectly
indicate that the job is currently being used and will list the job status as EDITING. Furthermore, no user will be allowed
to connect to the job using the normal scripts. To remedy this situation, the SQL*Plus script ImJmForceReady.sql
should be used to correct the row in the job log table. This script will change the status of the specified job name to
READY and set the current_username, current_date_time and session_id columns to null. The user must be careful
when using this script. A job that has been transferred to the operational database should never be changed back to
ready using this script. An example usage is:
sql> @ImJmForceReady
Enter value for jobname: mary
35 Change Log
The change log contains information about each change that was made to the primitive database. This information
includes the job id, the oracle_username who made the change, the date and time of the change, information which
identifies the table and row that was changed, whether the change was an insert, update or delete, and the before and
after image of the row which was changed.
ImRptChgLog.sql is an SQL*Plus script that generates a detailed report of the change log. This script may be executed
from SQL*Plus or through the UNIX script ImRpt. An example usage is:
sql> @ImRptChgLog
Enter value for report_output: change_log.jobname
Enter value for job_name: jobname
This report may also be run from the REP form within Oracle Forms. See the PDM Forms User Guide - U0325.
36 Job Trace
A trace of a job shows what has occurred during the execution of the job. Multiple levels of job trace exist. The default
level is level zero. Level zero shows the major steps which have occurred during execution of the job. Levels greater
than zero provide more detailed trace. See the PDM Interface Maintenance User Guide U0385 for more details. An
example of job trace output is:
MESSAGE TIME
--------------------------------------------------------- --------
Start of Import run. Job name=star1 Job id=1 01:57:45
Processing STATN record types. 01:57:45
0 errors were issued for 1 STATN records. 01:57:46
Use the following list to determine what information is added to the trace report as the level is increased. Please note
that the trace levels are cumulative. If the trace level is set to 50, the trace report will contain all messages for level 50
and below.
Trace level Description
--------- ------------------------
0 Major steps in programs are traced. This is the default.
10 Message page is traced.
20 Validation packages are traced.
30 Import merge packages are traced.
40 Change detection packages are traced.
50 Job management triggers are traced.
60 Statement triggers are traced.
70 Row triggers are traced.
The new trace level will remain in effect for the duration of the SQL*Plus session or until it is changed by another call to
pkg_jobt.set_level.
The SQL*Plus script gtrace lists the trace messages and the time each message was issued for a job. The “g” in gtrace
stands for “get”. The above sample output was produced by gtrace. An example usage is:
sql> @gtrace jobname
ImRptJobTrace.sql is an SQL*Plus script that generates a report of the trace messages for a job. This script may be
executed from SQL*Plus or through the UNIX script ImRpt. An example usage is:
sql> @ImRptJobTrace
Enter value for report_output: jobname.trace
Enter value for job_name: jobname
11 % PKGIMP_B1.MERGE
96 % PKG_ACCUMULATOR.VAL_RECORD_ACCUMULATOR
24 % PKG_ANALOG.VAL_RECORD_ANALOG
1 % PKG_APPL.SET_SKIP
3 % PKG_B1.no_RTDS_insert
66 % PKG_B3.VAL_RECORD_B3
117 % PKG_DIGITAL.VAL_RECORD_DIGITAL
18 % PKG_RTU_SCAN_ORDER.VAL_RECORD_RTU_SCAN_ORDER
38 % pkg_calc_operand.val_calc_operand
3 % pkg_digital.vl_initial_value
2 % pkg_elcom_gv.val_elcom
2 % pkg_import_ba.main
1 % pkg_import_elcom.main
1 % pkg_import_wscc.main
==============================================
Message counts by Message Code and Site Code
==============================================
Count Job Msg Code Site
----- ---- ------ ---
13518 % 102 e
2 % 103 e
17601 % 104 e
3 % 108 e
10 % 123 e
30 % 151 e
1 % 224 e
14 % 10237 e
=======================================
Message counts by Job Name and Job_Id
=======================================
Job Name Job Id Count
------- ----- -----
Carver 25 123
Castillo 26 12
GvRtsMis 109 3
ImGvB1 86 96
LdShed 110 13
Line 45 3
London 47 12
MPLS 48 51
=====================================
66 % B3
38 % CALC_OPERAND
117 % DIGITAL
1362 % ELCOM_PARTNER_ACCESS
24587 % None
3 % RTU_PROTOCOL_TYPE
==============================================
30 151 %1
1 224 e %1 transfer for job %2 was skipped. Oracle and the on-line database are no
longer in sync for this application.
14 10237 e No matching RTU scan order record with a subtype of 8 (analog-to-digital-tap)
was found for the RTU scan order record with a subtype of 1 (single pole digital)
and a tap conversion type of 0 (no tap conversion). RTU name “%1”, B1 name
“%2”, B2 name “%3”, B3 name “%4”, element “%5”, information name “%6”. A
match must exist with the same B1/B2/B3 block and in the same RTU since the
norm element type for the digital info is TapChan.
5 10238 e The info type of the B1/B2/B3/Element/Info specified on the scan order record
for rtu “%1”, scan order number “%2” is not defined as controllable, but a
Control Address is defined for it. Setpoints cannot be operator initiated from
one-lines for non-controllable infos.
9 10245 e The field “%1” for RTU “%2” and RTU Scan Group “%3” is superfluous because
the Scheduling Type field for Channel “%4” is equal to “%5”.
This may also be run from the PDM (Main) form within Oracle Forms. See the PDM Forms User Guide - U0325.
39 Task Cancel
It is possible to cancel tasks from a job in reverse order. Cancelling a task removes the changes associated with the
last task from the primitive data and from the change log. This ability allows the user to remove some of the changes
associated with a job, and then continue using the job. Note that it is always the last task that is cancelled.
The restrictions which apply to job undo also apply to cancelling a task. If the data involved in a task is required to
satisfy a foreign key constraint, the cancel of the task will not be allowed. One difference between job undo and task
cancel is that the job undo can be reversed by a job redo. A cancel of a task is not reversible.
ImJmCancelTask is a UNIX script that cancels a task. The associated SQL*Plus script ImJmCancelTask.sql is also
available. An example usage is:
unix> ImJmCancelTask -u oracle_username -j jobname
40 Job Transfer
Job transfer provides the ability to transfer a job from the primitive database to the operational database. Job transfer
uses the change log to determine what has changed, extracts the modified data from the primitive database and writes
the data into the offline operational database as part of a DBA job.
The execution of job transfer may be accomplished manually by executing a series of scripts or by using the PDM
(Main) form within Oracle Forms. The PDM (Main) form is the recommended approach as it will handle the proper
setting of job statuses throughout the life of a job. See the PDM Forms User Guide - U0325 for more information on
the PDM (Main) form.
database job name. A corresponding DBA job is created in the offline operational database with the same name,
except the name is forced to upper case. If the user desires both job names to be identical, the user must create the
primitive job name in upper case. Refer to the man page for more information about this script and its options. An
example usage is:
unix> ImBaJobTransfer -u oracle_username -j jobname
[-o output_file (default=ImBaJobTransfer.out)]
The Job Transfer function is also available through the PDM (Main) form within Oracle Forms. See the PDM Forms
User Guide - U0325.
If an error occurs and the messages are not clear, debug may be enabled by using the -t, -d, and –e switches during
the job run.
If the -s switch (Stable Data Model) is Y, then it is assumed that DMX_CONF/config parameter StableDM is set to true
and nimset numbers are not re-used. If Stable Data Model is N, then nimset numbers are being re-used and transfer
will force jobs with deletes to complete before jobs with inserts are allowed to proceed.
The -t switch should not normally be used; it is needed for bulk transfer operations in conjunction with the “full” transfer
option and is set internally. If the -t switch is set improperly (requesting a transfer that would allow the Oracle database
and the off-line operational database to become “unsynchronized”), the transfer program will terminate with an error
message.
The -d switch enables additional output messages for a specific object. For example, “-d 1” issues additional messages
about the b1 object, “-d 2” issues additional messages about the b2 object, and so forth.
Setting the -e switch to 1 will turn on additional output messages that do not pertain to a specific object. Setting this
switch will produce prodigious amounts of information. Beware of its verbosity. It is possible to get more detailed trace
by setting the job trace level value to a higher number using the -l switch.
After ImBaJobTransfer has finished execution, the BA transfer status for the job is set to TRANSFERRED
(TRANSFER_FAILED if unsuccessful), and the changes are in the offline operational database. The DBA subsystem
may be used to view the DBA job in the offline operational database. Refer to the DBA Editor’s user guide (U0310) for
more information on the DBA subsystem.
Guide - U0325.
Use grep to list the job transfer error messages in the ADM console log. An example usage is:
unix> tail -2000 /var/adm/messages | grep ImTrJo
Note: If the grep returns nothing, try the tail without the grep. It is possible for the
/var/adm/messages file to become full.
Example error messages resulting from this command are:
Mar 14 14:33:24 mm01_dem2 ImTrJo 2803 .dopb1 4 3 0
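The filtering itself can be illustrated without the console log. The first sample line below is the message shown above; the second is a made-up non-matching line:

```shell
# Only lines containing the job transfer tag ImTrJo survive the grep.
printf '%s\n' \
  'Mar 14 14:33:24 mm01_dem2 ImTrJo 2803 .dopb1 4 3 0' \
  'Mar 14 14:33:25 mm01_dem2 OtherApp 17 .foo 1 0 0' \
  | grep ImTrJo
```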
Use man to get additional information about the ADM console log error messages. There are three parameters for
man. The first is the name of the .err file which contains the error text, the second is “err”, and the third is the error
number. An example usage is:
unix> man dopobj_b1 err 2803
In the case of job transfer, it may take some effort to translate the console message into the appropriate parameter
values for man. Unlike most other Spectrum Power 3 subsystems, the user cannot simply use the application name
from the console message (the third field within the message, ImTrJo in the example above).
ADM console messages for job transfer originate from the object functions that are called. An identifier for the object
function is found in the fifth field of the console message (.dopb1 in the example above). The object function identifying
field for job transfer console messages always starts with “.dop”. In most cases, the name of the .err file that contains
the error text can be generated by removing the initial “.” and inserting “obj_” between “.dop” and whatever follows it.
(“.dopb1” in the example console message translates to “dopobj_b1” as the example parameter for man which
indicates that the dopobj_b1.err file contains the error text for that console message.) Since there are several
exceptions to this rule, Table 40-1 is provided to help map the application names and the console messages to their
corresponding .err files.
Table 40-1: Mapping an Application Name to an Error File Name
Application Console Message Corresponding .err file name
Application Name
Accumulator Info .dopinac dopobj_in_ac
Analog Info .dopinan dopobj_in_an
B1 .dopb1 dopobj_b1
B2 .dopb2 dopobj_b2
B3 .dopb3 dopobj_b3
Characteristic Curves .dopcc dopobj_cc
Common Link Users Interface .dopclui dopobj_clui
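For the regular cases, the renaming rule can be sketched as a small shell function. This is an illustration of the rule only; the exceptions listed in the table are deliberately not handled:

```shell
# Strip the leading "." and insert "obj_" after "dop" to turn an object
# function identifier into its .err file name (regular cases only; the
# exceptions in Table 40-1 must be looked up instead).
to_err_name() { printf '%s\n' "$1" | sed 's/^\.dop/dopobj_/'; }
to_err_name .dopb1
```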
application = BA
AGC
output_file = Output path/file Default=pwd/ImBaDBAJobMgmt.out
command line = Script is being executed from the UNIX command line
execution
Y - Is being executed from command line (default)
N - Is being executed from the PDM (Main) form
debug_switch = Debug switch
40.9 Caution
In some cases, the order of job execution is important within the operational database. In Oracle, PDM provides data
interlocking which prevents the user from modifying data which has been modified by another job. In the operational
database, there are internal numbers that are generated outside this interlock process.
Consider the following scenario.
$SPECPATH/par/DMX_CONF/config parameter StableDatMod is set to false. This means that the operational internal
nimset numbers may be re-used.
Job A executes a cascade delete of a B3 and successfully transfers this data to the offline operational database. Note
that this makes available some previously used nimset numbers.
Job B defines a new B1 and its subordinate data and is also successfully transferred to the offline operational
database. During this process, some of its data is assigned nimset numbers that were made available by the deletes in
Job A.
If either Job A is deleted from the offline operational database and/or Job B is activated to the online operational
database, a situation where a nimset number is assigned to more than one equipment can occur.
In order to prevent this from occurring, the -s (StableDM) switch has been added to the transfer program. This switch
defaults to N (to align with StableDatMod=false). This switch will cause an additional check against the entries in the
change_log. If the specified job is a FULL TRANSFER or there are inserts of B1, B2, B3, ELEMENT or RTU data and
there are one or more other jobs which delete data from any of the same tables, the transfer will be aborted and the
user will be recommended to cancel or transfer/activate/delete all jobs which contain the data deletes.
This error condition can be overridden by setting the -s switch to Y. Just be sure to process a job from start to finish.
42 Job Delete
Job delete is used to clean up the primitive database control tables after the job has been taken through the normal
steps of being transferred to the offline operational database, activated into the online operational database and
deleted from the operational database job management control relations. After a job has been deleted, the changes
made by the job are permanent in both the primitive database and operational database. In contrast, job cancel is used
to clean up the primitive database when the changes made by the job should be removed from existence.
ImJmDelete is a UNIX script that deletes a job which has been activated into the online operational database. When a
job is deleted, the job log entry is moved to the job log history table, the status is set to ODB_DELETED, all task
information is removed, and the change log entries associated with the job are moved to the change log history table.
The associated SQL*Plus script ImJmDelete.sql is also available. An example usage is:
unix> ImJmDelete -u oracle_username -j jobname
43 Job Cancel
Cancelling a job returns the primitive database to its original state prior to the job’s existence. A job cancel is allowed
only if the job has never been transferred to the operational database, or, if it was previously transferred, then it must
also have been cancelled out of the operational database.
ImJmCancel is a UNIX script that cancels a job. When a job is cancelled, the job log entry is moved to the job log
history table with a status indicating the job was cancelled. If any changes were made to primitive data, the changes
are undone. The change log entries are not saved in the change log history. The associated SQL*Plus script
ImJmCancel.sql is also available. An example usage is:
unix> ImJmCancel -u oracle_username -j jobname
[-o output_file (default=ImJmCancel.out)]
Jobs may also be cancelled using the PDM (Main) form within Oracle Forms. See the PDM Forms User Guide -
U0325.
ImRptJobLogHist.sql is an SQL*Plus script that generates a report of the jobs which have been deleted or cancelled.
This script may be executed from SQL*Plus or through the UNIX script ImRpt. An example usage is:
sql> @ImRptJobLogHist
Enter value for report_output: joblog.history
This report may also be run from the PDM (Main) form within Oracle Forms. See the PDM Forms User Guide - U0325.
44 Job History
Information about a job is saved for historic purposes. The length of time this job history is saved is a decision that is
made by each Spectrum Power 3 customer.
The change log history table will continue to grow until it is manually cleared out. Because of the large number of rows
which may accumulate per job, the amount of free space in this table should be monitored. Use the Oracle DBA views
(specifically DBA_FREE_SPACE) to see current space utilization.
ImJmClearChangeLogHistory is a UNIX script that deletes the change log history entries for a job. The associated
SQL*Plus script ImJmClearChangeLogHistory.sql is also available. An example usage is:
unix> ImJmClearChangeLogHistory -u oracle_username -j jobid
[-o output_file (dflt=ImJmClearChangeLogHistory.out)]
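As a hedged illustration (not part of the product), the periodic cleanup described above could be wrapped in a small housekeeping script. The old_jobs.txt file and the IMJMCLEAR override are assumptions introduced here for the sketch:

```shell
#!/bin/sh
# Housekeeping sketch: clear change log history for a list of old job ids.
# old_jobs.txt (one job id per line) and the IMJMCLEAR override are
# hypothetical; by default the real script named in the manual is called.
IMJMCLEAR=${IMJMCLEAR:-ImJmClearChangeLogHistory}
ORACLE_USER=${ORACLE_USER:-prime}

clear_history() {
    jobid=$1
    # One output file per job so individual failures can be reviewed later.
    "$IMJMCLEAR" -u "$ORACLE_USER" -j "$jobid" -o "clear_${jobid}.out"
}

# Process the list only if it exists.
if [ -f old_jobs.txt ]; then
    while read -r jobid; do
        [ -n "$jobid" ] && clear_history "$jobid"
    done < old_jobs.txt
fi
```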
d - Miscellaneous
Voltage Set Limits
e - ICCP
f - CMCH
g - EA
[-o] Output path/file
Default=pwd/ImBaRvIddug.out
[-b] B1 Name
[-r] RTU Name
-- If 'all', one IDDUG file per RTU in '-l' location
-- If one actual RTU name, one IDDUG file for that RTU
only in '-l' location
-- If neither '-s' option nor '-r' is specified, value
'all' will be set and one IDDUG file per RTU/SAM in
'-l' location
[-s] SAM Name
-- If 'all', one IDDUG file per SAM in '-l' location
-- If one actual SAM name, one IDDUG file for that SAM
only in '-l' location
-- If neither '-s' option nor '-r' is specified, value
'all' will be set and one IDDUG file per RTU/SAM in
'-l' location
[-h] Include HIS records. (y/n) default=n
[-u] Oracle Username (PDM EMP database)
[-n] Copy number. Default=1
It is recommended that the SCADA station data be imported first, followed by the RTDS data in rtds_dat.
4. Update the id numbers (e.g., B1 number, RTU number, etc.) in the primitive database to match those of the
operational database. The symbolic name is what is used to find the match. If a name was changed in the primitive
database, no match will be found. ImBaRvUpdateIds is the UNIX script that updates the id numbers. See the man
page for more details. An example usage is:
unix> ImBaRvUpdateIds -u oracle_username -j jobname -n
odb_copy_number -d debug_filename >out_error 2>out_debug
After ImBaRvUpdateIds has finished, check the output. Make sure that the constraints and triggers have been re-enabled.
Note: the Oracle DBA view 'user_constraints' may be queried to see the status of the constraints (xxxx = UNQ_B1_B1,
UNQ_B2_B1NB2, and UNQ_B3_B1NB2NB3). The status for the constraints must be 'enabled'.
sql> select status from user_constraints where
constraint_name='xxxx'
Note: the Oracle DBA view 'user_triggers' may be queried to see the status of the triggers (xxxx = B1, B2, B3,
SCSI_ADAPTER_MODULE, COM_INTERFACE_ADAPTER, CHANNEL, RTU_LINE, RTU, RTU_GROUP, and
RTU_SCAN_GROUP). The status for the triggers should be either 'enabled' or 'error'.
sql> select trigger_name,trigger_type,status from user_triggers
where table_name='xxxx'
A status of 'disabled' for either a constraint or a trigger should be investigated.
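As an illustrative sketch (the looping helper below is not part of the product), the status queries above can be generated for each constraint name in one pass:

```shell
#!/bin/sh
# Sketch: emit the status query shown above for a list of constraint
# names. The helper itself is hypothetical; the SQL follows the manual.
constraint_status_sql() {
    for name in "$@"; do
        echo "select status from user_constraints where constraint_name='$name';"
    done
}

# Example: constraint_status_sql UNQ_B1_B1 UNQ_B2_B1NB2 | sqlplus -s user/pw
```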
5. After a reverse transfer and import, it is not necessary to do a job transfer. Use the following steps to delete the
primitive database job:
unix> sqlplus oracle_username/password
sql> update job_log set status='ODB_DELETED' where job_name='xxxx';
Note: replace xxxx with the import job name.
sql> commit;
sql> exit;
unix> ImJmDelete -u oracle_username -j jobname
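The two-part cleanup above (a manual SQL*Plus session followed by ImJmDelete) can be sketched as a helper that generates the SQL; the mark_deleted_sql function name is hypothetical:

```shell
#!/bin/sh
# Sketch of step 5: emit the SQL that marks the import job ODB_DELETED.
# mark_deleted_sql is a hypothetical wrapper around the manual sqlplus
# session shown above.
mark_deleted_sql() {
    jobname=$1
    printf "update job_log set status='ODB_DELETED' where job_name='%s';\n" "$jobname"
    printf "commit;\n"
}

# Usage sketch (credentials and job name assumed to be set):
# mark_deleted_sql "$jobname" | sqlplus -s "$oracle_username/$password"
# ImJmDelete -u "$oracle_username" -j "$jobname"
```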
46 Full Transfer
Full transfer provides the ability to transfer a B1, RTU, SAM, ICCP, LS, ULS, and VR hierarchy of data from the
primitive database to the operational database. The difference between full transfer and job transfer is in what is used
to determine the data to be transferred. Job transfer uses the change log whereas full transfer uses key information
supplied by the user.
It is important to note that Copy Management does not support deployment of 'full' transfer jobs to the 'Copy' system;
if it is determined that a 'full' transfer is needed on the Main and Copy systems, the 'full' transfer must be manually
executed on the 'Copy' system.
To do a full transfer of data, use the script ImBaJobTransfer. Choose your processing options by selecting the
appropriate parameters from the following list:
unix> ImBaJobTransfer
-u Oracle Username(table owner)
-j Job Name
[-o] Output path/file
Default=pwd/${SN}.out
[-f] Full Transfer
(b1/rtu/sam/elcom/iccp/wscc/ls/uls/vr)
[-k] Key value for full transfer
[-s] Stable Data Model flag (Y/N)
Default=N
[-t] Test switches
SCADA
a = b1 (1) b = b2 (2)
c = b3 (3) d = elem (4)
e = accum (5) f = analog, voltage set range (6)
g = digital (7) h = reference (8)
RTDS/CFE
j = sam (10) k = rtu_proto (11)
l = cia (12) m = chan (13)
n = line (14) o = rtu_group (15)
p = rtu (16) q = rtu_scan_3x (17)
r = rtu_so (18) s = rtu_scan (19)
t = rtu_sub (20)
COMM
u = iccp (21)
Load Shed
A = fdr (24) B = swh (24)
C = seq (24) D = area (24)
E = prio (24)
UnderFrequency LoadShed
G = fdr (25) H = swh (25)
I = rly (25) J = freq (25)
Voltage Reduction
L = bus (26) M = bus_element (26)
z = guid (28) Guid_TA processing
[-c] Command Line Execution (Y/N)
Default=Y
Note: if the b1_name contains blanks, enclose the name in double quotes, for example "b1 name".
To transfer all SAM hierarchy, use the UNIX script ImBaJobTransfer as follows:
unix> ImBaJobTransfer -u oracle_username -f sam -j jobname
This command will transfer RtuPrototype, RtuGroup, Sam, Cia, Channel, and RtuLine data.
To transfer a RTU hierarchy, use the UNIX script ImBaJobTransfer as follows:
unix> ImBaJobTransfer -u oracle_username -f rtu -j jobname -k
rtu_name
This command will transfer Rtu, RtuScan3x, RtuScanOrder, RtuScanGrp, and RtuScanSubGrp data for the specified
key. To force the transfer of RtuPrototype, Channel, and RtuGroup data within the full transfer of an Rtu, you must
explicitly set the corresponding -t test switches.
47 Operating Modes
OTS and TEST operating modes are supported by the use of a separate primitive database. The steps to create this
separate database are:
1. Configure an OTS or TEST system. This includes a separate ADM/DEMS server in Training or Test mode. This
server must have its own Oracle database. The Oracle database must have the Spectrum Power 3 schema
installed, just as it exists on the Process ADM server.
2. Execute reverse transfer on the on-line operational database.
3. Execute import on the Training or Test ADM server using the generated IDDUGs.
4. Execute update ids on the Training or Test ADM server to synchronize off-line operational database and Oracle
internal numbers.
At this point an OTS/TEST primitive database is ready for use. The user may perform data manipulation activities on
the OTS/TEST Oracle database via Forms, Import, etc. and transfer the data changes to OTS/TEST online.
If the changes are to be moved to the Process ADM, do the following:
1. Execute export (ImBaExport, ImCommExport, etc.) for the data that is to be moved.
2. Move the generated IDDUG files to the Process ADM.
3. Import the IDDUG files, run job transfer, and activate the changes.
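As a dry-run sketch of those steps (the host name, file locations, use of scp, and all switches beyond the -u/-j pattern are assumptions, not documented behavior):

```shell
#!/bin/sh
# Sketch: print the planned commands for moving OTS/TEST changes to the
# Process ADM. Only the -u/-j switch pattern comes from the manual; the
# scp target, file names, and ImBaImport switches are hypothetical.
plan_move_to_process_adm() {
    user=$1 job=$2 host=$3
    echo "ImBaExport -u $user -j $job"
    echo "scp ${job}.iddug ${host}:/tmp/"
    echo "ImBaImport -u $user -j $job -i /tmp/${job}.iddug"
}
```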
48 IM Tools
48.1 ImBaDeleteTA
ImBaDeleteTA is a UNIX script that can perform cascading delete operations from the user-supplied B1, B2, B3 or
ELEMENT names.
The B1/B2/B3/Element names can also include Oracle wildcard symbols (%). If these characters are included in the
options, Oracle will search for and delete data where the % can represent any character string present in the database.
48.2 ImBaImportDirectory
ImBaImportDirectory is a UNIX tool that creates a script that runs ImBaImport for all of the files in a directory. The
generated script can be edited before it is run. If you choose, you can skip the edit and go directly to Import by
executing ImBaImportDirectory in “Run Mode.”
ImBaImportDirectory uses the file names in the specified IDDUG directory to build an ImBaImport command. Each file
in the directory will cause a new ImBaImport command to be generated. The first 8 characters of the file name are
used as the job name and the output file name.
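The generation rule above can be sketched as follows; the -i and -o switches on the generated commands are assumptions made for this illustration:

```shell
#!/bin/sh
# Sketch of ImBaImportDirectory's rule: one ImBaImport per file in the
# directory, with the first 8 characters of the file name used as the job
# name and output file name. The -i/-o switches are assumptions.
gen_import_commands() {
    dir=$1 user=$2
    for f in "$dir"/*; do
        [ -f "$f" ] || continue
        base=$(basename "$f")
        job=$(printf '%.8s' "$base")   # first 8 characters of the name
        echo "ImBaImport -u $user -j $job -i $f -o ${job}.out"
    done
}
```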
48.3 ImBaLoadFilltab
ImBaLoadFilltab is a UNIX script that processes Filltab files. This script handles the deletion of old Filltab data and the
insertion of new Filltab data into the Oracle database.
48.4 ImIddugCompare
ImIddugCompare is a UNIX script that compares two IDDUG files. The checking routines produce several reports: a
report of only-in-iddug-file1 records, a report of only-in-iddug-file2 records, and a report that summarizes records
common to both IDDUG files that have different values in their non-keyed fields.
Note: the following differences in the IDDUG image are not considered to be technological differences and are not
shown as differences in the report output:
o The attribute in one IDDUG file contains no value and its comparable attribute in the second IDDUG file
contains the default value.
o The attribute is a number and the subtraction of the two values is zero.
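The three reports can be illustrated with sorted key lists and the standard comm utility. Real IDDUG records are structured, so treating them as plain sorted lines is an assumption made only for this sketch:

```shell
#!/bin/sh
# Sketch of the three ImIddugCompare reports over sorted key files.
only_in_first()  { comm -23 "$1" "$2"; }  # records only in IDDUG file 1
only_in_second() { comm -13 "$1" "$2"; }  # records only in IDDUG file 2
in_both()        { comm -12 "$1" "$2"; }  # common records, candidates for
                                          # field-by-field comparison
```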
48.5 ImCompare
ImCompare is a UNIX script that checks for existence of objects in Primitive Database (Oracle), i.e., PDB and on-line
operational database relations, i.e., SDB. It compares the key, number and name, wherever applicable, in the Base
Apps, RTDS, ICCP and Filltab hierarchy data. Thus, this script checks for orphans that exist in either database.
The script produces a report that shows the orphaned object printed on the side of database where it exists. Thus, if an
object ABC exists on the PDB side and not on the SDB side, then the object ABC will be printed on the PDB side in the
report.
The script compares the following Base Application tables:
b1, b2, b3, element.
The script compares the following RTDS/CFE tables:
Rtu, Rtu Scan Order, Rtu Scan Group, Rtu Scan SubGroup, Rtu Group, SAM, Cia, Channel, Line and Rtu
Scan 3X.
The script compares the following filltab files:
btypname, bltyde, cmch, elname, etypname, eltyde, elinname, infodef, inname, intyde, itypname, nede,
netyname, ngname, totyde, totyname.
The script compares the following ICCP tables:
IccpAccess, IccpAccessAcct, IccpAccessDevice, IccpAccessMsg, IccpGroup, IccpLink, IccpRemote, IccpTransfer,
IccpTransferAcct, IccpTransferdevice, IccpTransferMatrix, IccpTransferMsg
Orphans should not happen unless an error in parameterization has occurred: a typical situation would be if the off-line
operational database element number was changed in the Spectrum Power 3 parameterization, but the corresponding
change was not made in the PDM database. The fix in this case would be for the DBA to manually change the
PDM objects to match the Spectrum Power 3 parameterization.
48.6 ImOraDumpLoad
ImOraDumpLoad can, as its name implies, both dump and load the Oracle database. The resulting dump might be
used as a simple backup (archive) of user data, or it might be used to carry "user" data forward following a software
upgrade. It uses Oracle's Import and Export utilities to dump and load user data and is faster than using the
recommended ImXxxExport/ImXxxImport method.
Before you begin, be sure you understand the Action options (-a). Reading the following caveats and instructions will
help to ensure success.
WARNING
Entering data into the database when the triggers have been disabled is risky business. Be certain when using
this tool that you are not going to lose trigger-generated data.
Pay particular attention to any new fields or records that have been added as part of the software upgrade. If
new data is generated in full or in part by a trigger, it will not be populated during ImOraDumpLoad.
ImOraDumpLoad coordinates the disabling and enabling of constraints and the dropping and restoring of triggers (when
run with the "-a upgrade" mode), preserves sequence numbers, and provides a way to verify the dump/load process.
To use ImOraDumpLoad during a software upgrade (Figure 49-1):
1. Run ImOraDumpLoad to dump the user data from Oracle using the -a upgrade option. Note that if it is
certain that the PDM schema will be recreated immediately after the dump, the "-r n" switch may be
specified to speed up this script (see specific examples below).
2. Apply the software upgrade
3. Run ImOraDumpLoad to load user data. Be sure to set the -a switch to “upgrade”. If this is not set, the newly
created triggers will be overwritten by the contents of the dumped database.
4. Run global validation with the row level option.
Please note that filltab data is not dumped during an ImOraDumpLoad. This data should always come from the filltab
files and should be loaded during the installation process.
Note: a flag in the table TABLE_DEFINITION indicates which tables are filltab tables. These tables are not dumped
during an ImOraDumpLoad dump unless they are specified explicitly on the -t switch.
[Figure 49-1: ImOraDumpLoad dumps user data from the Oracle database (step 1) and loads it back after the upgrade (step 3).]
----------
99896
If your Oracle storage parameters are set up for a large amount of data expansion, you will see that the
actual disk space needed to dump the tables is much less.
unix> du -k
16784 ./emp1_dump
5. Select a safe place to keep your dump files. Since the files created during an ImOraDumpLoad dump may
be used to load the data back into Oracle after a software upgrade, it is important that these files are not
accidentally deleted during the upgrade process.
To dump/load/verify data, use the script ImOraDumpLoad and choose your processing options by selecting the
appropriate parameters from this list.
You MUST log in as UNIX user ImOraDumpLoad before you can run this script.
unix> ImOraDumpLoad
-u Oracle User
-a Action: [backup], [load], [upgrade], [verify]
[-d] Dump Directory (created during dump)
default= ./Mar05.dmp
[-f] Dump Filename Prefix
default <ORACLE_SID>_<ORA_USER>
[-o] Log File
default= ./Mar05_emp1_user_action.log
[-t] Tables to process
default= all
[-r] If dumping, Restore Triggers
default= y
[-e] If loading, Enable Constraints and Triggers
default= y
[-m] Mode: [batch], [interactive]
default= interactive
WARNING
Ensure that nobody is using the database while you perform the following steps.
1. Dump your data.
For a software upgrade, use the following command (note that the '-r' switch should only be set to 'n' if the
PDM schema will be recreated immediately after the dump, because this option will leave the Oracle
triggers disabled after running, which makes the PDM unusable):
unix>ImOraDumpLoad -u oracle_user -a upgrade -d
dump_directory_name -r n
For a database backup, use the command:
unix>ImOraDumpLoad -u oracle_user -a backup -d
dump_directory_name
2. For Upgrade users only. Apply the software upgrade. This means the new software has been installed and
the PDM schema has been recreated by ImOraSchema.plx, i.e., a database in which the tables have been
created and control information has been loaded, but in which no user data exists.
To use this tool to process non-EMP databases, consider the following points.
1. Since the TABLE_DEFINITION table may not exist in the database, ImOraDumpLoad will prompt you to
verify that ALL tables should be dumped when “all” tables is specified.
2. If the database is needed after an ImOraDumpLoad (dump) is performed, you must consider which
setting of the -a switch was used.
If the ‘-a upgrade’ option was used, then the dropped triggers must be loaded back into Oracle before
allowing users back on the system. The -r option may or may not work depending on where the source
of the trigger code is kept.
If the ‘-a backup’ option was used, then the existing triggers will be dumped along with the table data
and will be re-loaded with the table data: this may require you to re-create the table triggers (e.g. if the
triggers have changed after this data was dumped).
3. ImOraDumpLoad does not automatically know if there are tables whose source data comes from the
install process. Normally the TABLE_DEFINITION table contains this information about each table. So
when the data is reloaded, all of this user’s tables will be re-loaded from the dump - this may affect ‘filltab’
control tables if those values have changed after this data was dumped.
4. If TABLE_DEFINITION is not used, some other steps may have to be taken during the dump or load
process in order to coordinate the data coming from the install versus the data coming from the dump.
48.7 IndentTrace
IndentTrace is a script that reads the output file created during a run of ImRptJobTrace.sql or a run of gtrace.sql and
indents the report lines based upon the level in the call stack.
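The indentation rule can be sketched in a few lines of awk; the assumption here (not stated in the manual) is that each report line begins with a numeric call-stack level:

```shell
#!/bin/sh
# Sketch of the IndentTrace idea: indent each line by its call-stack level.
# The leading numeric level field is an assumed input format.
indent_trace() {
    awk '{
        n = $1; pad = ""
        for (i = 0; i < n; i++) pad = pad "  "   # two spaces per level
        $1 = ""; sub(/^ /, "")                   # drop the level field
        print pad $0
    }'
}
```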
48.8 ImOraAddUser
After ImOraSchema.plx creates the PDM schema (including Oracle user PRIME in that schema), ImOraAddUser may
be used to create additional Oracle users in the PDM schema that have access to user PRIME's objects. These
additional Oracle users are assigned to the proper temporary tablespace (TEMP) and default tablespace
(PRIME_DEFAULT). The new user is also assigned to the selected predefined Oracle role and profile. The new user
is granted quota on the selected schema.
Oracle roles and profiles are configured by ImOraSchema.plx during PDM schema creation. The PDM schema roles
50 CFE on AIX
50.1 Modeling CFE Data
For customers who have CFE (Communication Front End) in a PDM Master configuration, data is maintained in PDM
using the SAM and RTDS hierarchies just as if it were RTDS. A flag called FRONT_END_TYPE in the PDM table
SCSI_ADAPTER_MODULE determines whether a given server is CFE or RTDS. If this flag is set to 'CFA' then this
SAM is considered to be a CFE server. Additionally, any RTU connected to a line belonging to a CFE server is
considered to be a CFE RTU. It is possible to have a mix of CFE and RTDS servers in the system.
the user to enter a value. Defaults for CFE attributes can be set globally using PDM Control Tables (see Chapter on
PDM Control Tables for CFE on AIX in U0385).
A second level of defaults is provided via the CFE default iddug. This iddug allows the user to set defaults at the
protocol level. This iddug can be found in directory:
$SPECPATH/src/rdbms/ImEs/cfe_default_iddug3.0.
To load this IDDUG, run the following script:
ImEsCfeImport -u prime -j <job_name>
-i $SPECPATH/src/rdbms/ImEs/cfe_default_iddug3.0
There is no change_log involved with running this script; it first truncates the tables it writes to and then updates them
from the IDDUG. After running this script the job should be checked for errors by running ImRpt:
ImRpt -u prime -j <job_name> -m 13
If there are no errors listed in the ImRptMsg.out file then the job can be cancelled and the user can proceed. If errors
are found, then the cfe_default_iddug3.0 file needs to be corrected and the import script re-run.
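A hedged sketch of that check (the report file name follows the text above; the case-insensitive 'error' grep pattern is an assumption for illustration):

```shell
#!/bin/sh
# Sketch: inspect ImRptMsg.out after the import and decide the next step.
# The 'error' pattern is an assumption, not a documented message format.
check_import_report() {
    rptfile=$1
    if grep -qi 'error' "$rptfile"; then
        echo "errors found: correct cfe_default_iddug3.0 and re-run the import"
        return 1
    fi
    echo "no errors: cancel the job and proceed"
}
```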
50.3 ImAsrExportBulk
On initial set-up for CFE on AIX, the user needs to first import the CFE default iddug. Next, PDM job(s) need to be
created to model the CFE data. Once these things have been accomplished, the user would next run the script
ImAsrExportBulk. This script reads through all of the CFE data modeled in PDM and produces a bulk ASR change log
file. ASR is short for Application Suite Repository, which describes a type of file in PowerCC. The bulk ASR change log
consists of a binary file of linked lists. Each list links PowerCC data types which model CFE data.
After running ImAsrExportBulk, the user can attempt to INITSOS the CFEP and CFE server types. Part of the INITSOS
process involves copying the bulk ASR change log from the ADM and loading it into the database used by the CFE
applications on the CFE servers.
The only other time it should be necessary for the user to manually run ImAsrExportBulk is if there is a change to the
CFE data model schema.
Step  Action                                              Reference
1     Prepare the IDDUG file.                             "Division of BA Data in Multiple IDDUG Files" on page 37;
                                                          "Communications Import" on page 44
3     Report and view errors using ImRptMsg.              "Error Message Reports" on page 83
4     Report and view job trace using @gtrace or          "Job Trace" on page 81;
      ImRptJobTrace.                                      "IndentTrace" on page 113
5a    Correct the data in the IDDUG file or use           "Primitive Database Forms" on page 60
      Oracle Forms to correct the Primitive Database.
7     Report and view errors using ImRptMsg.              "Error Message Reports" on page 83
9b    Correct the data in the IDDUG file or use           "Primitive Database Forms" on page 60
      Oracle Forms to correct the Primitive Database.
11    Report and view errors using ImRptMsg.              "Error Message Reports" on page 83
12    Report and view job trace using @gtrace or          "Job Trace" on page 81
      ImRptJobTrace.
14a   Cancel the DBA job.                                 "Cancel the DBA Job" on page 92
14b   Cancel the job in the Primitive Database.           "Job Cancel" on page 98
14c   Correct the data in the IDDUG file or use           "Primitive Database Forms" on page 60
      Oracle Forms to correct the Primitive Database.
17a   Undo the DBA job.                                   "Undo the DBA Job" on page 93
18b   Delete job from DBA.                                "Delete the DBA Job" on page 93