VERSION 1.0

REVISION LIST

Document Name: DBOL-DB2 SAP Policy and Procedures Manual
Version Number: 1.0
Rev. No | Revision Date | Revision Description                                       | Prev. Page No. | Page No.
1       | 27/04/2007    | Initial Draft                                               | -              | -
2       | 03/05/2007    | First Update: updated the Contact List                      | 18             | 14
3       | 14/05/2007    | Second Update: included all the organization charts         | 17, 20         | 15, 18
4       | 29/05/2007    | Third Update: included all the checklists in the procedures | 53             | N/A
5       | 22/06/2007    | Fourth Update: included the access required for DBOL team   | 78             | N/A
List of abbreviations
S.No | Abbreviation | Expansion
1    | CST          | -
2    | PTF          | Program Temporary Fix
3    | IRLM         | Internal Resource Lock Manager
4    | SMP/E        | System Modification Program/Extended
5    | SQL          | Structured Query Language
6    | CA           | Computer Associates
7    | BSDS         | Bootstrap Data Set
8    | TMS          | -
9    | CAB          | -
10   | CFRM         | Coupling Facility Resource Management
11   | DDF          | Distributed Data Facility
12   | GSS          | -
13   | Xmanager     | -
14   | OAM          | Object Access Method
15   | DBOL         | -
16   | CICS         | Customer Information Control System
17   | SVC          | Supervisor Call
18   | PMR          | Problem Management Record
19   | BCP          | -
20   | DR           | Disaster Recovery
21   | APAR         | Authorized Program Analysis Report
22   | APF          | Authorized Program Facility
23   | SP           | Stored Procedure
24   | WLM          | Workload Manager
25   | SPAS         | Stored Procedure Address Space
26   | DBM1         | Database Services Address Space
27   | MSTR         | DB2 Master Address Space
28   | CSI          | Consolidated Software Inventory
29   | RIM          | Related Installation Materials
30   | CBPDO        | Custom-Built Product Delivery Offering
31   | SAP          | Systems, Applications and Products in Data Processing
32   | ICLI         | Integrated Call Level Interface
33   | GBP          | Group Buffer Pool
34   | BP           | Buffer Pool
35   | XCF          | Cross-System Coupling Facility
LIST OF FIGURES
ORGANIZATION CHART
CONTENTS
ONCALL SUPPORT PROCESS

2. SCOPE OF SERVICE
   2.1. SUMMARY OF AGREEMENT
   2.2. SERVICE LEVEL AGREEMENT OVERVIEW
   2.3. SCOPE OF SERVICES
   2.4. LIST OF REPORTS TO BE REPORTED TO THE CLIENT
   2.5. PRIMARY SUPPORT TASKS AND RESPONSIBILITIES MATRIX
   2.6. TEAM ORGANIZATION
   2.7. ESCALATION MATRIX
   2.8. OS, HARDWARE, SOFTWARE, SERVER AND DATABASE DETAILS

5. SUPPORT PROCESS
   5.1. RESOURCE AVAILABILITY
   5.2. KNOWLEDGE TRANSITION PLAN

13.1.1. PURPOSE
13.1.2. PREREQUISITE
13.1.3. RACI MATRIX
13.1.4. INPUT
13.1.5. PROCESS
13.1.6. OUTPUT
13.1.7. FORMS, CHECKLIST OR TEMPLATES USED
13.1.8. REFERENCE

22. LOGREC
23. DAE
24. NETVIEW
   1.1.1. KC AUTOMATION
   1.1.2. NETVIEW AUTOMATION

   STRATEGY OR APPROACH
   RACI MATRIX
   INPUT
   STEPS
   OUTPUT
   FORMS, CHECKLISTS OR TEMPLATES USED
   REFERENCES

   OVERALL CORE SUPPORT PROCEDURE
      PURPOSE
      STRATEGY OR APPROACH
      RACI MATRIX
      INPUT
      STEPS
      FORMS, CHECKLISTS OR TEMPLATES USED
      REFERENCES

   ESCALATION PROCEDURE
      PURPOSE
      STRATEGY OR APPROACH
      RACI MATRIX
      INPUT
      STEPS
      OUTPUT
      FORMS, CHECKLISTS OR TEMPLATES USED
      REFERENCES

   ROOT CAUSE ANALYSIS
      PURPOSE
      STRATEGY OR APPROACH
      RACI MATRIX
      INPUT
      STEPS
      OUTPUT
      FORMS, CHECKLISTS OR TEMPLATES USED
      REFERENCES

   TICKET REVIEW PROCEDURE
      PURPOSE
      STRATEGY OR APPROACH
      RACI MATRIX
      INPUT
      STEPS
      OUTPUT
      FORMS, CHECKLISTS OR TEMPLATES USED
      REFERENCES
1. INTRODUCTION

1.1. Purpose

This document describes in detail the operational procedures carried out by the Kimberly Clark DB2 team.

Requests to manage DB2 are received through vendor support, by telephone, and as user requests.
1.5. Assumptions, Dependencies, Constraints and Limitations
The DBOL DB2 team is responsible for DB2 system programming activities such as installation of DB2 regions, maintenance of the DB2 regions, scheduling and carrying out the production copy process, and supporting vendor products.

Object creation in the application environment and application queries are handled by the application DBA team.

The DBOL DB2 team works with the performance team on subsystem-level tuning.

During installation and maintenance, the DBOL DB2 team applies patches and fixes that may require an IPL to activate the changes. The IPL is handled by the MVS and operations teams.

Replication of volumes from TCP to TWQ and restoring the disk volumes for the subsystems is done by the DASD team. All log management activity is taken care of by computer operations.

The DB2CONNECT server in SAP is owned by SAP BASIS. The SAP BASIS team is responsible for its installation and maintenance.
Team                             | Group Mail ID          | Description
DBOL                             | -                      | All CICS, DB2, MQ requests are sent to this group.
DBA                              | _DBA-Support, USA-Nee  | -
MFOS                             | _Support, Mainframe    | -
Storage                          | -                      | -
Service Desk Operations team     | _HelpDesk, USA-Nee     | -
Computer Security                | -                      | -
HP Service Desk support team     | -                      | -
Magic support team               | _Admin, Magic          | -
CA-7 and TWS job scheduling team | -                      | -
SAP applications: SAP R/3, EBP, cFolders, HR, Knowledge Warehouse, Workplace, BW
2. SCOPE OF SERVICE
2.1. Summary of Agreement

Systems to be supported (including regions under scope, if any) | -
Type of support                                                 | L2 and L3 DB2 Administration
Type of service                                                 | Enhancement, Support and Maintenance
Availability requirements                                       | 24/7
SLAs expected                                                   | Service Level Agreements
Sl. No | Quality Factor             | Quality Objective
1      | DB2 subsystem availability | -

The infrastructure setup activities depend on external teams such as z/OS, Security, Storage, DBA, and on the customer itself. A delay from the external teams or the customer would directly impact the SLA.
Activities          | External Dependency
Installation        | -
Maintenance         | -
Production Copy     | Storage
Subsystem Creation  | -

The SLAs applicable to the DBOL DB2 team are given below:

Quality Factor | Periodicity | Quality Goal
MTTR           | Monthly     | >=99%
Availability   | Monthly     | 100%
Production on-call problem procedure and SLA:

DBOL primary on-call person (critical problem procedure / call regarding a user problem):
- Dial into the system within 30 minutes.
- Be able to be on site at the TCC within 2 hours.
- Resolve within four hours.
- Follow up with the customer/helpdesk after the issue is resolved.

DBOL secondary on-call person (call regarding a user problem):
- Respond within 60 minutes.

Call regarding a user problem: contact GNAAPO.
The following Whofixes commands can be used to find the on-call person in each team:

Team              | Command
DASD              | Whofixes dasd
z/OS              | Whofixes mfos
DB2 Legacy & SAP  | Whofixes db2soft
DBA               | Whofixes dba
Datacom           | Whofixes datacom
DB2 Connect       | Whofixes db2connect

NOTE: Whofixes information is accessible only from Legacy, not from the SAP LPARs.
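For quick reference, the team-to-command mapping above can be wrapped in a small lookup helper. This is an illustrative sketch, not an existing tool; the team names and commands are taken verbatim from the table.

```python
# Hypothetical lookup helper for the Whofixes commands listed above.
WHOFIXES = {
    "DASD": "Whofixes dasd",
    "z/OS": "Whofixes mfos",
    "DB2 Legacy & SAP": "Whofixes db2soft",
    "DBA": "Whofixes dba",
    "Datacom": "Whofixes datacom",
    "DB2 Connect": "Whofixes db2connect",
}

def whofixes_command(team: str) -> str:
    """Return the Whofixes command for a team, case-insensitively."""
    for name, cmd in WHOFIXES.items():
        if name.lower() == team.lower():
            return cmd
    raise KeyError(f"No Whofixes command recorded for team {team!r}")

print(whofixes_command("DASD"))  # Whofixes dasd
```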
Infrastructure Maintenance.
Request Handling
Escalation Procedures
Contact Information
Problems with the application programs: this occurs when the DB2 team performs all the tests and confirms that DB2 is up and running. Under such circumstances the request can be sent to the application team to review and analyze their programs.
RACI: Responsible, Accountable, Consulted, Informed

Activity: System administration of DB2 regions - software installation; maintenance, upgrade and migration of subsystems; carrying out production copy refreshes; subsystem-level tuning; incident & change management.
Informed: KC users and application owners.
Escalation Level | Point of Escalation       | Timeline
Level I          | Offshore TM / Onsite TM   | 30 min from time of call logging
Level II         | Program Manager           | 60 min from time of call logging
Level III        | -                         | 90 min from time of call logging
SAP Release

Kernel: a collection of all executable programs which implement the technical basis of the R/3 System, together with the existing operating system services and database services.

Kernel 6.20 | uses RRSAF
Kernel 6.40 | uses DRDA (DB2 Connect)

Downward compatibility: the 6.40 release level of the Kernel is compatible with the 6.20 Basis version.

Connecting to DB2: there are two ways to connect to DB2 from SAP - ICLI (shipped as a subset of the Kernel) and DB2 Connect.
Database details:

Server Name    | DB2 V8.1
OS             | z/OS 1.6
Type of Server | Database Server
Hardware       | System z9

All the DB2 systems under SAP are currently running V8.1. There is a plan to migrate the DB2 systems to V9.0.
Vendor Products:

Vendor Tool                 | Usage                                                                                      | Version
IBM DB2 Performance Expert  | Performance analysis, e.g. running Explain, collecting statistics from RUNSTATS, etc.      | V2.1.0
IBM DB2 Admin Tool          | Administrative activities, e.g. starting a DB2 subsystem, stopping a DB2 subsystem, etc.   | V5.1

The details about the vendor products and their expiry dates are attached below.
5. SUPPORT PROCESS

The DBOL DB2 team provides continuous support to resolve customer problems related to DB2. Support is provided through user requests in which the user specifies the requirements, through phone calls, through escalations from the service desk, and by attending to Magic tickets.
Plan, install, customize, integrate, upgrade and verify DB2 system software and related
products/utilities/tools.
Apply maintenance PTFs (program temporary fixes) for DB2 system software and related
products/utilities/tools.
Interact with core technical teams (MVS, Network and Storage) for the DB2 subsystem.
Perform problem determination and provide resolution for DB2 system software and
related products/utilities/tools.
Conduct tuning exercise on DB2 system software level when there is a reported
performance problem.
Provide technical assistance to production DBA during planned disaster recovery testing
exercise.
Perform capacity planning exercises to forecast and optimize the DB2 workload setup by analyzing past and current process workloads.

Perform research & development and proof-of-concept work to test recent and new features of DB2 system software and related products, and report the long-term cost benefits to senior management on a continuous basis.

Assess the performance of DB2 system software and related products, and provide senior management with recommendations for performance improvement opportunities and their associated long-term cost benefits. Implement the performance improvement recommendations in the environment after obtaining approval from senior management.

Analyze and resolve DB2 system software product related issues and problems raised by the customer.
Oncall Support
Sysplex | LPARs       | Couplers
SAP0    | -           | SAP0ICF2, SAP0ICF3
SAPQ    | -           | SAPQWCF0, SAPQWCF1
SAPX    | TCX0, TCX1  | SAPXICF2, SAPXICF3
SAP0 Sysplex:

SAP0 is the main sysplex. The tables below give detailed information about the LPARs in the sysplex and the DB2 regions in each LPAR.
LPAR                                      | No. of DB2 Subsystems
Production LPARs (TCP0, TCP1, TCP2, TCP3) | 50
Quality LPARs (TCQ0, TCQ1)                | 23
Development/Test LPARs (TCT0, TCT1, TCT2) | 40
Total Subsystems Supported                | 113

LPAR | DB2 Subsystems
TCP0 | PABA, PAPA, PAWB, PEBB, PECB, PEPB, PEWB, PE3B, PGFA, PGWA, PHAA, PLPA, PM3A, PPPB, PPWA
TCP1 | PABB, PACB, PAPB, PA3A, PA4A, PA4C, PECA, PGFB, PGWB, PHAB, PLPB, PLWB, PL3B, PM3B, PPPA, PPWB
TCP2 | PACA, PA3B, PA4B, PA4D, PGCA, PGHB, PLWA, PL3A, PP3A
TCP3 | PAWA, PEBA, PEPA, PEWA, PE3A, PGCB, PGHA, PP3B
TCQ0 | CEWA, CGWA, IABA, IGKA, IP3A, QACA, QE3B, QGHA, QGWA, QHAA, QM3A, QP3A
TCQ1 | BXBA, BXCA, BXWA, BX3A
TCT0 | BXBB, BXCB, BX3B, BXWB
TCT1 | CGHA, CL3A, QABA, QA4A, QL3A, QP3A
TCT2 | CA4A, CE3A, CGCA, CGWA, CP3A, DLWA, SGHA
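As a quick arithmetic cross-check of the first table above, the per-category subsystem counts should add up to the stated total of 113:

```python
# Cross-check the subsystem counts stated in the table above.
counts = {
    "Production LPARs (TCP0-TCP3)": 50,
    "Quality LPARs (TCQ0, TCQ1)": 23,
    "Development/Test LPARs (TCT0-TCT2)": 40,
}

total = sum(counts.values())
print(total)  # 113
```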
Each data sharing group contains two members, A and B. A is the primary member and B acts as a failover member.

Among all these subsystems, the production subsystems, whose names begin with the letter P, are the most critical.

Most of the subsystems in SAP are set up in data sharing mode. All the production regions are data shared, and some of the quality regions are data shared. The following table shows the different data sharing groups and their corresponding members. All the members in the SAPQ sysplex are set up as non data shared.
Group | Primary Member | Secondary Member
PAB   | PABA           | PABB
PAP   | PAPA           | PAPB
PAW   | PAWA           | PAWB
PEB   | PEBA           | PEBB
PEC   | PECA           | PECB
PEW   | PEWA           | PEWB
PE3   | PE3A           | PE3B
PGF   | PGFA           | PGFB
PGW   | PGWA           | PGWB
PHA   | PHAA           | PHAB
PLP   | PLPA           | PLPB
PL4   | PL4A           | PL4B
PM3   | PM3A           | PM3B
PEP   | PEPA           | PEPB
PPP   | PPPA           | PPPB
PPW   | PPWA           | PPWB
PAC   | PACA           | PACB
PA3   | PA3A           | PA3B
PA4   | -              | -
PLW   | PLWA           | PLWB
PL3   | PL3A           | PL3B
PP3   | PP3A           | PP3B
PGC   | PGCA           | PGCB
PGH   | PGHA           | PGHB
QE3   | QE3A           | QE3B
DA3   | DA3A           | DA3B
QA3   | QA3A           | QA3B
BXB   | BXBA           | BXBB
BXC   | BXCA           | BXCB
BXW   | BXWA           | BXWB
BX3   | BX3A           | BX3B
We have D (Development) and S (Application Sandbox) regions which are used by the application teams.

All the regions in the SAPX LPAR whose names begin with B are used by the BASIS team for testing purposes.

All production region names start with P. These are the most critical regions and they are always up and running; they are set up to provide 24x7 availability. Each is provided with a failover member, so that whenever one member in the group is brought down for maintenance the other member takes over.
All the regions in the SAPQ sysplex are set up as non data shared. They are used by the quality assurance group.

The DB2 address spaces and jobs can be viewed using the SDSF job list menu. SDSF is used to view job log output. ANYSTC is the owner address space for most of the DB2 address spaces.

SAP uses RACF for securing resources (e.g. HR).

Subsystem naming convention: the 4th character stands for the data sharing member, A or B.

Examples: PA3A (Production SAP R/3 subsystem running in North America); DP3A (Development SAP R/3 subsystem running in Asia Pacific).
Environments: Development (D), Quality (Q), Production, Instructional, Basis Sandbox, Application Sandbox, Data Migration (M)

Geographies: North America (A), Consumer (C), Europe, Global (G), Latin America, Asia/Pacific, External / Computer Services / Mexico

Applications: SAP R/3, APO, CRM, Master Light (Portal), RFID (R), Knowledge Warehouse, EBP/SRM, cFolders, HR, Solution Manager, BW
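Putting the naming convention together, a hedged decoder can be sketched. Only mappings supported by the examples (PA3A, DP3A) and the legend are included; the E, G and L geography letters are assumptions inferred from the legend, and application letters other than 3 are not spelled out in the text, so they are left unmapped:

```python
# Hedged decoder for the 4-character subsystem naming convention described
# above (1st char: environment, 2nd: geography, 3rd: application,
# 4th: data sharing member). Letters not supported by the text are reported
# as unknown; E, G and L are inferred from the legend, not stated explicitly.
ENV = {"D": "Development", "Q": "Quality", "P": "Production", "M": "Data Migration"}
GEO = {"A": "North America", "E": "Europe", "G": "Global",
       "L": "Latin America", "P": "Asia/Pacific"}
APP = {"3": "SAP R/3"}  # other application letters are not given in the text

def decode(name: str) -> dict:
    assert len(name) == 4, "subsystem names are four characters"
    return {
        "environment": ENV.get(name[0], "unknown"),
        "geography": GEO.get(name[1], "unknown"),
        "application": APP.get(name[2], "unknown"),
        "member": name[3],  # A = primary, B = failover
    }

print(decode("PA3A"))  # Production SAP R/3 in North America, member A
```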
Dataset               | Description
DB2V8.CNTL            | Contains JCLs.
DB2V8.DB2.BP          | Contains JCL specific to Bufferpool operations.
DB2V8.INSTALL.CNTL    | Contains model jobs specific to installation.
DB2V8.MAINT.CNTL      | Contains maintenance-specific model jobs.
DB2V8.MIG.CNTL        | Contains model jobs used for migration.
DB2V8.NEWSID.CNTL     | Contains model jobs used for creating a new subsystem.
DB2V8.PROCLIB         | -
DB2V8.RSUMMYY.CNTL    | Contains the maintenance jobs that are used.
DB2V8.SOURCE          | Contains the ZPARMs.
DB2V8.SMP.CNTL        | Contains the jobs that are used in SMP/E processing.
DB2V8.SYS6.DSN81*.*   | These datasets contain the SMP/E libraries.
DB2V8.TEXT            | Contains the checklist members.
7.3 SMP/E ENVIRONMENT

The DB2 SMP/E environment in SAP comprises six sets of DB2 libraries, named A to F.

For example, the A set of SMP/E libraries follows the naming conventions shown below:
DB2V8.SYS6.DSN81A.SMPE.DB2.DLIB.CSI
DB2V8.SYS6.DSN81A.SMPE.DB2.DLIB.CSI.DATA
DB2V8.SYS6.DSN81A.SMPE.DB2.DLIB.CSI.INDEX
DB2V8.SYS6.DSN81A.SMPE.DB2.GLOBAL.CSI
DB2V8.SYS6.DSN81A.SMPE.DB2.GLOBAL.CSI.DATA
DB2V8.SYS6.DSN81A.SMPE.DB2.GLOBAL.CSI.INDEX
DB2V8.SYS6.DSN81A.SMPE.DB2.TARGET.CSI
DB2V8.SYS6.DSN81A.SMPE.DB2.TARGET.CSI.DATA
DB2V8.SYS6.DSN81A.SMPE.DB2.TARGET.CSI.INDEX
Maintenance and patches are applied to one of these sets. One set always carries the latest changes, and thereby we maintain five previous versions behind the current maintenance level.

The subsystems use one of these sets of libraries depending on their current maintenance level. The A-F sets of libraries are not hard coded in the system; rather, they are referred to through aliases.

An alias is defined for each subsystem during maintenance, pointing to a particular set depending on the SMP/E maintenance level we are on.

For example, if subsystem PA4A is upgraded from PUT level 0702 to PUT level 0703, and the F set of libraries was used by the subsystem at PUT 0702, it will now use the A set of libraries at the new PUT level.

During maintenance, while upgrading PA4A, the alias is deleted and recreated to point to the A set of libraries.

The new changes become active as part of the IPL. Once the subsystem is recycled through the automation, the redefined alias points to the new set of libraries, the A set in the above case.
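The A-F set rotation described above can be sketched as a simple modular step; the wrap from F back to A matches the PA4A example (PUT 0702 on set F, PUT 0703 on set A):

```python
# Sketch of the A-F library-set rotation: six SMP/E sets, one carrying the
# latest maintenance; an upgrade moves a subsystem to the next set, wrapping
# from F back to A.
SETS = "ABCDEF"

def next_set(current: str) -> str:
    i = SETS.index(current)
    return SETS[(i + 1) % len(SETS)]

print(next_set("F"))  # A
print(next_set("A"))  # B
```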
During installation, all the SMP/E datasets are saved as prefix.smp/e-datasetname, where prefix is the name of the subsystem. For example, while creating the subsystem PGHA, all the SMP/E datasets are saved as PGHASYS.smp/e-datasetname, where the smp/e-datasetname can be a dataset name such as SDSNLOAD, SDSNEXIT, etc. After these changes have been made, the SDSNLOAD, SDSNEXIT, SDSNLINK and SDXRRESL datasets can be found under the PGHASYS aliases shown below:

Alias                 | Target Dataset
PGHASYS.SDSNEXIT      | PGHASYS.D.DB2V8.SDSNEXIT
PGHASYS.SDSNLINK      | PGHASYS.D.DB2V8.SDSNLINK
PGHASYS.SDSNLOAD      | PGHASYS.D.DB2V8.SDSNLOAD
PGHASYS.SDSNDBRM      | DB2V8.SYS6.DSN81D.SDSNDBRM
PGHASYS.SDXRRESL      | PGHASYS.D.DB2V8.SDXRRESL
PGHASYS.SDSNMACS      | DB2V8.SYS6.DSN81D.SDSNMACS
PGHASYS.SDSNSAMP      | DB2V8.SYS6.DSN81D.SDSNSAMP
PGHASYS.DBRMLIB.DATA  | PGHASYS.DB2V8.DBRMLIB.DATA
PGHASYS.RUNLIB.LOAD   | PGHASYS.DB2V8.RUNLIB.LOAD
PGHASYS.SRCLIB.DATA   | PGHASYS.DB2V8.SRCLIB.DATA
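The alias table above follows three target-name patterns. The helper below reconstructs them for subsystem PGHA at maintenance set D; the grouping of libraries into the three patterns is read off the table and is illustrative, not a definition:

```python
# Illustrative reconstruction of the alias-to-target patterns in the table
# above, for subsystem prefix "PGHA" at maintenance set letter "D".
SHARED = {"SDSNDBRM", "SDSNMACS", "SDSNSAMP"}           # resolve to the shared set libraries
PRIVATE = {"DBRMLIB.DATA", "RUNLIB.LOAD", "SRCLIB.DATA"}  # subsystem-private datasets

def alias_target(prefix: str, lib: str, letter: str) -> str:
    if lib in SHARED:
        return f"DB2V8.SYS6.DSN81{letter}.{lib}"
    if lib in PRIVATE:
        return f"{prefix}SYS.DB2V8.{lib}"
    return f"{prefix}SYS.{letter}.DB2V8.{lib}"

print(alias_target("PGHA", "SDSNLOAD", "D"))  # PGHASYS.D.DB2V8.SDSNLOAD
print(alias_target("PGHA", "SDSNDBRM", "D"))  # DB2V8.SYS6.DSN81D.SDSNDBRM
```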
The history of changes made to these sets of SMP/E letters can be found in
7.4 DATA SHARING

A subsystem that belongs to a particular data sharing group is a member of that group. All members of a data sharing group use the same shared DB2 catalog and directory. The maximum number of members in a data sharing group is 32.

Data sharing improves DB2 availability, extends the processing capacity of the system, offers more flexible ways to configure the environment, and increases transaction rates. It also improves availability during planned and unplanned outages.

It improves scalability: one can add a new DB2 onto another central processor complex and access the same data through DB2. All DB2s in a data sharing group have concurrent read and write access, and all DB2s use a single directory and catalog.

Applications can run on more than one DB2 subsystem to achieve transaction rates higher than are possible on a single subsystem.

There is more capacity to process complex queries. Sysplex query parallelism enables DB2 to use all the processing power of the data sharing group to process a single query. For complex data analysis and decision support, Sysplex query parallelism is a scalable solution.
7.5 KC AUTOMATION - OVERVIEW

A few automation tasks have been set up in SAP for the maintenance that is carried out on Saturdays. Bringing down the subsystems is taken care of by Netview and is done automatically. After the subsystems come down, a task called DB2FLIPV8 starts automatically and flips the SMP/E letter to the current maintenance level of that week. This task gets its information from the flags set in the member $SID6W8.

After the IPL, when the subsystems start coming up, there are some "after" jobs to be run; this is also taken care of by the automation, using the information from the flags set in the member $SID4WS. If there are any jobs to be run after a subsystem comes down, this can be done through the flags in $SID3WS.

All this automation is part of Netview, an IBM Tivoli tool developed to make the operator's work easier: the operator no longer has to remember the commands to bring a subsystem down or up.
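The flag-driven dispatch described above can be sketched as follows. The member names come from the text, but the flag representation and dispatch logic here are invented purely for illustration:

```python
# Very rough sketch of the flag-driven automation described above. The member
# names ($SID6W8, $SID4WS, $SID3WS) come from the text; the flag format and
# dispatch logic are hypothetical.
FLAG_MEMBERS = {
    "$SID6W8": "flip SMP/E letter to this week's maintenance level",
    "$SID4WS": "run 'after' jobs once the subsystem comes back up",
    "$SID3WS": "run jobs after the subsystem comes down",
}

def planned_actions(flags_set: set[str]) -> list[str]:
    """Return the automation actions whose flag member is set."""
    return [action for member, action in FLAG_MEMBERS.items() if member in flags_set]

for action in planned_actions({"$SID6W8", "$SID4WS"}):
    print(action)
```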
7.6 VENDOR PRODUCTS

The only vendor products used in SAP are the IBM products DB2 Performance Expert and DB2 Admin Tool. These products are widely used by the DBA team to collect real-time DB2 subsystem statistics and to monitor threads. DB2 Admin Tool provides in-depth catalog navigation by displaying and interpreting objects in the DB2 catalog and executing dynamic SQL statements. It is integrated with other DB2 utilities to simplify the creation of DB2 utility jobs, and adds product-specific line commands for table editing, SQL cost analysis, and path check analysis.

The DBOL team is responsible for the installation and maintenance of these vendor products. KC Management first requests the vendor product tapes from the vendors when a new release comes out. Manuals and CDs are received along with the tapes, as is the Program Directory. The installation is carried out through the SMP/E process, customizing the values in the various panels based on the previous installation releases.

Vendor product maintenance is carried out whenever a new maintenance level is released. For Performance Expert we have six sets of libraries, and each is always maintained at the same set level as the subsystem using it. Each subsystem has its own set of SMP/E libraries for Performance Expert (SFPELOAD, SFPELINK and SFPEDBRM), and these libraries are maintained at the same set as the subsystem.
7.7 INSTALLATION AND MAINTENANCE

Installation of DB2 is done by the DBOL team. When a new installation needs to be carried out, tapes are received from IBM and installed using the SMP/E libraries. Installation/migration of the DB2 regions is done on the Sandbox region first. Once the Sandbox regions are migrated, we proceed with the migration to the rest of the DB2 regions, following the hierarchy of the SAP environment below:

1. SANDBOX
2. DEVELOPMENT/TEST
3. QUALITY
4. PRODUCTION

Software installation flow: Sandbox -> Development/Test -> Quality -> Production.

Maintenance on the DB2 SAP subsystems is done on a monthly basis. It is carried out on the first three weekends of the month, as per the schedule for the corresponding LPAR. We roll the maintenance onto the Sandbox region first before rolling it onto the test regions. The schedule for the maintenance is as below:
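The rollout hierarchy above amounts to a fixed promotion order. A minimal sketch of the rule, under the assumption that an environment may only be touched once all earlier stages are done:

```python
# Sketch of the rollout-order rule: changes are promoted through the
# environments in a fixed sequence, and an environment may only be touched
# once every earlier stage has been completed.
ROLLOUT = ["SANDBOX", "DEVELOPMENT/TEST", "QUALITY", "PRODUCTION"]

def may_roll_out(target: str, completed: set[str]) -> bool:
    idx = ROLLOUT.index(target)
    return all(stage in completed for stage in ROLLOUT[:idx])

print(may_roll_out("QUALITY", {"SANDBOX", "DEVELOPMENT/TEST"}))  # True
print(may_roll_out("PRODUCTION", {"SANDBOX"}))                   # False
```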
7.8
TRACE - OVERVIEW
When using DB2 UDB you might on occasion encounter an error message that directs you to
"get a trace and call IBM Support", "[turn] on trace [and] examine the trace record", or to
"contact your technical [support] representative with the following information: problem
description, SQLCODE, SQLCA contents (if possible), and trace dataset (if possible)". Or,
when you report a problem to IBM Support, you might be asked to perform a trace to capture
detailed information about your environment.
DB2 traces can be especially useful when analyzing recurring and reproducible problems,
and greatly facilitate the support representative's job of problem determination.
DB2 trace is essentially a log of control flow information (functions and associated parameter
values) that is captured while the trace facility is on. Traces are very useful to DB2 technical
support representatives who are trying to diagnose a problem that may be difficult to solve
with only the information that is returned in error messages.
IBM may request a GTF trace or a selective dump using the DSN1SDMP utility. The first option is achieved by enabling the trace and selecting GTF as the destination. The second option is achieved by forcing dumps when selected DB2 trace events occur and writing DB2 trace records to a user-defined dataset.
7.9 SVC DUMP
"SVC dump is like a burglar alarm.... It lets you know something's wrong and helps
you pinpoint where it started."
An SVC dump provides a representation of the virtual storage for the system at the
time the dump is taken. Most commonly, a system component requests an SVC dump when
an unexpected system error occurs. After the dump has completed, processing can usually
continue.
Whenever there is an ABEND and the program that caused it requests an SVC dump, an SVC dump occurs.
An authorized program can request an SVC dump with the SDUMP or SDUMPX macro. The operator can also request an SVC dump by using the SLIP or DUMP command. Both are used to obtain diagnostic data to aid in problem resolution. The System Automation process recognizes that a dump occurred and sends a mail to the group mailbox.
It is also possible to take a dump manually if one is not created automatically, by issuing the following command in the log:

SLIP SET,ID=PK01,J=jbname,A=SVCD,C=0C4,END

We can also see which SLIPs are currently active by issuing the D SLIP command in the log:
RESPONSE=TCQ0
IEE735I 14.09.46 SLIP DISPLAY 911
ID STATE      ID STATE      ID STATE      ID STATE      ID STATE
0001 ENABLED X013 ENABLED X028 ENABLED X052 ENABLED X058 ENABLED
X066 ENABLED X070 ENABLED S071 ENABLED SS71 ENABLED X073 ENABLED
X0DX ENABLED X0E7 ENABLED X0F3 ENABLED X13E ENABLED X1C5 ENABLED
X222 ENABLED X322 ENABLED X33E ENABLED S3C4 ENABLED X422 ENABLED
X42X ENABLED X47B ENABLED X622 ENABLED X71A ENABLED X804 ENABLED
X806 ENABLED X80A ENABLED X81A ENABLED X91A ENABLED X9FB ENABLED
XB37 ENABLED XC1A ENABLED XD1A ENABLED XD37 ENABLED XE37 ENABLED
XEC6 ENABLED XXC6 ENABLED
This shows the abend codes for which SLIP traps are enabled.
To disable a trap we can use SLIP MOD,DISABLE,ID=xxxx.
To delete a trap we can use SLIP DEL,ID=xxxx.
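The D SLIP display above is just repeating "ID STATE" pairs, so it is easy to turn into a lookup table when checking whether a trap exists before adding or deleting one. A minimal Python sketch (illustrative only; the helper name and the assumption of 4-character trap IDs are ours, not part of z/OS):

```python
import re

def parse_slip_display(output):
    """Parse the body of a D SLIP console display into {trap_id: state}.

    Assumes the repeating two-column 'xxxx STATE' layout shown above,
    with 4-character trap IDs. Sketch only, not an official parser.
    """
    traps = {}
    for line in output.splitlines():
        # Each data line is a run of "xxxx ENABLED/DISABLED" pairs.
        for trap_id, state in re.findall(r"(\S{4})\s+(ENABLED|DISABLED)", line):
            traps[trap_id] = state
    return traps

# Two lines taken from the display above:
display = """0001 ENABLED X013 ENABLED X028 ENABLED X052 ENABLED X058 ENABLED
XB37 ENABLED XC1A ENABLED XD1A ENABLED XD37 ENABLED XE37 ENABLED"""
traps = parse_slip_display(display)
```

With the table in hand we can, for example, confirm that the XB37 (space-related abend) trap is still enabled before a maintenance window.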
SLIP definitions are deleted when the system is IPLed. Information about dumps is
found in the B41380.NOTES(DUMP) member.
We also request e-mail updates from IBM at every stage on the status and progress of the
issue.
Severity 2: A DB2 region goes down and comes back up again, i.e. the region is unstable.
contains both DB2 Connect Personal Edition and DB2 Connect Enterprise Edition with license
terms and conditions that allow the unlimited deployment of any DB2 Connect product.
License charges are based on the size of the S/390 or zSeries server that DB2 Connect users
will be working with.
This package offering is only available for OS/390 and z/OS systems, and licensing is only
valid for DB2 for OS/390 and z/OS data sources.
2. JCLs for Subsystem:- The PDS DB2v8.NEWSID.CNTL is the model that contains all
the members for creating a new subsystem. The member BILDNON is used for creating a
non-data-sharing region; the members BILDSHRA and BILDSHRB are used for creating
data-sharing regions. These members create a PDS DB2V8.subsysname.CNTL, where
subsysname is the name of the subsystem that is to be created.
5. The dataset DB2V8.TEXT contains the checklist members for most of the tasks, such as
installation, maintenance and prodcopy: DB2V8NON, DB2V8SHA, DB2V8SHB & P$$4C$$
respectively.
6. SYS7.PARMLIB & SYS7.PROCLIB are common datasets that are shared across
LPARs.
The HLQ for the SAP catalog and directory is the first three letters of the subsystem name
followed by the numeral 1 and "SAP". For example, PA4A and PA4B belong to the same data
sharing group, PA41, so their catalog and directory HLQ is PA41SAP.
For each data sharing group, the directory and catalog datasets follow the conventions
below:
PA41SAP.DSNDBC.A000XAAA.#DESCRZ3.J0001.A001
PA41SAP.DSNDBC.A000XAAA.ABAPTREE.J0001.A001
PA41SAP.DSNDBC.A000XAAA.ABAP1WT4.J0001.A001
PA41SAP.DSNDBC.A000XAAA.ABDOCMOD.I0001.A001
PA41SAP.DSNDBC.A000XAAA.ABDO1EMC.J0001.A001
PA41SAP.DSNDBC.A000XAAA.ADOWNERR.I0001.A001
PA41SAP.DSNDBC.A000XAAA.ADOW1LVX.J0001.A001
PA41SAP.DSNDBC.A000XAAA.ADRCOMCS.J0001.A001
PA41SAP.DSNDBC.A000XAAA.ADRC1SDX.I0001.A001
The active log data sets for PA4A follow the naming convention shown below
PA4ALOG.LOGCOPY1.DS21
PA4ALOG.LOGCOPY1.DS21.DATA
PA4ALOG.LOGCOPY1.DS22
PA4ALOG.LOGCOPY1.DS22.DATA
PA4ALOG.LOGCOPY1.DS23
PA4ALOG.LOGCOPY1.DS23.DATA
PA4ALOG.LOGCOPY1.DS24
PA4ALOG.LOGCOPY1.DS24.DATA
PA4ALOG.LOGCOPY1.DS25
PA4ALOG.LOGCOPY1.DS25.DATA
PA4ALOG.LOGCOPY1.DS26
PA4ALOG.LOGCOPY1.DS26.DATA
PA4ALOG.LOGCOPY2.DS21
PA4ALOG.LOGCOPY2.DS21.DATA
The archive log datasets for PA4A follow the naming convention shown below
PA4AARC.ARCHLOG1.D07124.T0144243.A0016217
PA4AARC.ARCHLOG1.D07124.T0144243.B0016217
PA4AARC.ARCHLOG1.D07124.T0330108.A0016218
PA4AARC.ARCHLOG1.D07124.T0330108.B0016218
PA4AARC.ARCHLOG1.D07124.T0444349.A0016219
PA4AARC.ARCHLOG1.D07124.T0444349.B0016219
PA4AARC.ARCHLOG1.D07124.T0616588.A0016220
PA4AARC.ARCHLOG1.D07124.T0616588.B0016220
PA4AARC.ARCHLOG1.D07124.T0722208.A0016221
PA4AARC.ARCHLOG1.D07124.T0722208.B0016221
PA4AARC.ARCHLOG1.D07124.T0857371.A0016222
PA4AARC.ARCHLOG1.D07124.T0857371.B0016222
PA4AARC.ARCHLOG1.D07124.T1048289.A0016223
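The date and time qualifiers in the archive log names above (Dyyddd and Thhmmsst) can be decoded when we need to know which archive covers a given point in time. A minimal Python sketch, assuming the Dyyddd qualifier is a two-digit year plus Julian day and Thhmmsst is the time to tenths of a second (the helper is ours, for illustration only):

```python
from datetime import datetime, timedelta

def parse_archive_log(dsn):
    """Decode the date/time qualifiers of an archive log dataset name.

    Assumes the layout shown above:
    <ssid>ARC.ARCHLOGn.Dyyddd.Thhmmsst.Xnnnnnnn
    where Dyyddd is year/Julian-day and Thhmmsst is hh:mm:ss.t.
    """
    parts = dsn.split(".")
    d, t = parts[2], parts[3]                 # e.g. 'D07124', 'T0144243'
    year = 2000 + int(d[1:3])                 # '07' -> 2007
    day = int(d[3:])                          # Julian day of year
    date = datetime(year, 1, 1) + timedelta(days=day - 1)
    hh, mm, ss = int(t[1:3]), int(t[3:5]), int(t[5:7])
    return date.replace(hour=hh, minute=mm, second=ss)

when = parse_archive_log("PA4AARC.ARCHLOG1.D07124.T0144243.A0016217")
```

Here D07124 decodes to day 124 of 2007 (4 May), and T0144243 to 01:44:24, so the archive was written in the early hours of that morning.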
The load libraries for PA4A have the following naming convention:
PA4ASYS.D.DB2PM.SDGOLOAD
PA4ASYS.D.DB2V7.SDSNEXIT
PA4ASYS.D.DB2V7.SDSNLINK
PA4ASYS.D.DB2V7.SDSNLOAD
PA4ASYS.D.DB2V7.SDXRRESL
PA4ASYS.D.DB2V7.SFPEDBRM
PA4ASYS.D.DB2V7.SFPELINK
PA4ASYS.D.DB2V7.SFPELOAD
PA4ASYS.D.DB2V8.SDSNEXIT
PA4ASYS.D.DB2V8.SDSNLINK
PA4ASYS.D.DB2V8.SDSNLOAD
PA4ASYS.D.DB2V8.SDXRRESL
PA4ASYS.D.DB2V8.SFPELINK
PA4ASYS.D.DB2V8.SFPELOAD
7.12.4 ZPARM
The ZPARM settings for all the DB2 subsystems in SAP can be found in the dataset
DB2V8.SOURCE.
For example, DB2V8.SOURCE(PA4A) contains the values for the PA4A subsystem
and DB2V8.SOURCE(PA4B) contains the values for the PA4B subsystem.
All changes made to ZPARM are also tracked in the dataset DB2V8.SOURCE(subsysname),
where subsysname is the DB2 subsystem name. Each DB2 member has its own assembly
JCL.
The member records the date on which the parameter settings were changed, along with
the reason and the requestor name for each change, followed by the source code for the
ZPARMs.
The link below shows a sample of the ZPARMs for the PA4A subsystem, taken from the
PA4A member, which contains the history of changes made along with the current
settings.
DB2.CNTL contains all the DB2 batch jobs and utility jobs, such as DSNJU004, DSNJU003,
etc. The link below shows a sample execution of the DSNJU004 utility, which prints the
BSDS contents for BX3A; DB2.CNTL(BSDSLIST) is the member.
authorizing the DB2 Libraries and defining the DB2 Subsystem and IRLM to z/OS
In SAP, the JCLs for the common utilities are found in the dataset DB2.SHARE.CNTL.
A member called ##OLDTHD lists the jobs that have been running for
more than 30 hours. Part of the JCL is shown here:
//*******************************************************************
//****** COPIED FROM BATCH JOB AK05OT32
//*******************************************************************
//*===================================================================*
//* IDENTIFY DB2 THREADS IN "PE3A" OLDER THAN 30 HOURS.
//*===================================================================*
//CLEAR    EXEC PGM=IEFBR14
//DD1      DD  DSN=B41380.AK05OT32.PE3A.OLD.THREAD.RPT,
//             DISP=(MOD,DELETE,DELETE),UNIT=SYSDA,SPACE=(TRK,1)
//*===================================================================*
//THD2OLD  EXEC PGM=IKJEFT1B,DYNAMNBR=20,COND=(0,LT)
//STEPLIB  DD  DISP=SHR,DSN=DB2V8.TCPALIAS.SDSNLOAD
//SYSPRINT DD  SYSOUT=*
//IKJ.SYSTSPRT DD SYSOUT=*
//IKJ.SYSPROC  DD DSN=TSOUSERS.CMDPROC,DISP=SHR
//IKJ.THD  DD  SYSOUT=*
//*IKJ.THD DD  DSN=B41380.AK05OT32.PE3A.OLD.THREAD.RPT,
//*            DISP=(,CATLG,DELETE),UNIT=SYSDA,
//*            SPACE=(TRK,(10,10),RLSE)
//SYSTSIN  DD  *
  THD2OLD PE3A 30
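The job above drives the THD2OLD command procedure against PE3A with a 30-hour limit. The filtering it performs can be pictured with a minimal Python sketch; the function name and the (jobname, elapsed-hours) input shape are ours, a stand-in for the thread list the real procedure extracts from the subsystem:

```python
def threads_older_than(threads, limit_hours=30):
    """Return the (jobname, elapsed_hours) pairs exceeding the limit.

    'threads' is a list of (jobname, elapsed_hours) tuples - a
    simplified stand-in for the active-thread list; the real ##OLDTHD
    job writes the matches to the OLD.THREAD.RPT dataset instead.
    """
    return [(job, hrs) for job, hrs in threads if hrs > limit_hours]

# Hypothetical thread list: only the 41-hour batch job is reported.
report = threads_older_than(
    [("SAPJOB1", 2.5), ("SAPBATCH", 41.0), ("SAPDIA", 0.3)]
)
```

Jobs under the 30-hour threshold are ignored; only long-running threads end up in the report for follow-up.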
SAP R/3 -
EBP
C Folders
HR
Knowledge Warehouse
Workplace
BW
The person who is on call has the primary responsibility of monitoring the group
mailboxes and will need to investigate and follow up on issues.
Similarly, we receive notifications whenever a DB2 region abends, and a mail is
sent to the Operations team asking them to contact the on-call person of the DBOL team
through DB2SOFT, which is the Whofixes entry for the DBOL team.
We need to resolve problems proactively by monitoring the DB2 logs of all the
subsystems and the group mailboxes on a regular basis.
Future Upgrades
The DB2 Admin tool, currently running at V5.1, goes out of support on 30th September
2007. It will be migrated from V5.1 to V7.1 shortly.
The CAB representative for the DBMS team must attend the weekly CAB and change
meetings or get someone to attend in his or her place. The CAB coordinator must
understand all changes entered by the team and understand their impact. The CAB
coordinator must also communicate any potential impact a change entered by
another team may have on products and services we provide.
The activities that require Change tickets to be raised are given below:

Activity                         Change Ticket     Change Approver       Backup
                                 required (Y/N)
Installation of DB2 Subsystems   Y                 Charlesworth Janet    Mike/Saketh
Migration of DB2 Subsystems      Y                 Charlesworth Janet    Mike/Saketh
Applying Maintenance on DB2      Y                 Charlesworth Janet    Mike/Saketh
Vendor Tool Upgrade              Y                 Charlesworth Janet    Mike/Saketh
Vendor Tool Maintenance          Y                 Charlesworth Janet    Mike/Saketh
Production Copy process          N                 -                     -
We need to allocate a data sharing group and create the Coupling Facility structures for
the DB2 regions that are going to participate in data sharing. This will be carried out
by the z/OS team.
We need to create an IRLM XCF group and allocate the LPAR on which the new
subsystem has to be created. This will also be carried out by the z/OS team.
Define a group bufferpool (GBP0) in the coupling facility that maps to bufferpool BP0.
GBP0 is used for caching the DB2 catalog and directory tablespaces, indexes, and partitions
that use bufferpool BP0. This also needs to be taken care of by the z/OS team.
Once the DBOL team starts the process of creating the subsystem, we need to specify
the DB2 configuration parameter Data Sharing=YES in the main panel to enable the DB2
members to participate in data sharing. We also need to add the data sharing group definition
to the DB2 address spaces in SYS7.PROCLIB, which will be generated by the installation jobs.
The subsystem creation document contains the process and procedure to configure/create
the data sharing environment.
9. LOG MANAGEMENT
DB2 records all data changes and significant events in a log as they occur. In the case of
failure, DB2 uses this data to recover. DB2 writes each log record to a disk data set called the
active log. When the active log is full, DB2 copies the contents of the active log to a disk or
magnetic tape dataset called the archive log. We can choose either single logging or dual
logging.
With single logging, the active log contains between 2 and 31 active log data sets. With dual
logging, the active log has the capacity for 4 to 62 active log data sets, because two identical
copies of the log records are kept. Each active log data set is a single-volume, single-extent
VSAM LDS.
Active and archive logs are maintained for SAP, with 6 sets of active log datasets present.
A dual archiving process is followed: both archive log 1 and archive log 2 go to archive
volumes (the archive pool) on DASD. They stay on the archive volumes for some time, after
which one set of archive logs goes to Centera disk and the other set is written to tape.
The active log datasets are named subsysnameLOG.LOGCOPYx.DSxx, where x is the log
copy number (1 or 2) and xx is the dataset number, as in the PA4A examples earlier.
A sample of the console messages written when an active log fills and is offloaded:

00.31.45 STC10170  DSNJ001I ... STARTRBA=00EC34C7B000,ENDRBA=00ECB4CAEFFF
00.36.41 STC10170  DSNJ003I ... DSNAME=PAPAARC.ARCHLOG1.D07150.T0031452.A0000616,
                               CATLG=YES
00.36.41 STC10170  DSNJ003I ... DSNAME=PAPAARC.ARCHLOG2.D07150.T0031452.A0000616,
                               CATLG=YES
00.36.41 STC10170  DSNJ139I
DB2 keeps track of the tapes used in the BSDS. A separate Tape Management System
(TMS) keeps track of the tapes (HSM, archive logs) that are in use and manages
them. Archiving is done on tapes, which go to the offsite tape storage location
(South-West). The retention period for the archive logs is 35 days, after which the
tapes are scratched for reuse.
We retrieve the Log records through the following events:
1. A log record is requested using its RBA.
2. DB2 searches for the log record in the locations listed below, in the order given:
a. The log buffers.
b. The active logs. The bootstrap data set registers which log RBAs apply to
each active or archive log data set. If the record is in an active log, DB2
dynamically acquires a buffer, reads one or more CIs, and returns one
record for each request.
c. The archive logs. DB2 determines which archive volume contains the CIs,
dynamically allocates the archive volume, acquires a buffer, and reads the
CIs.
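The search order above (log buffers, then active logs, then archive logs) can be sketched in a few lines. This is an illustration of the lookup order only, not of DB2 internals; each store is modelled as a simple dict, which stands in for buffers, VSAM CIs, and archive volumes:

```python
def fetch_log_record(rba, log_buffers, active_logs, archive_logs):
    """Locate a log record by RBA, searching in the documented order:
    log buffers first, then active logs, then archive logs.

    Each store is a {rba: record} dict - a deliberate simplification;
    the real lookup goes through the BSDS to map RBAs to datasets.
    """
    for location, store in (("buffers", log_buffers),
                            ("active", active_logs),
                            ("archive", archive_logs)):
        if rba in store:
            return location, store[rba]
    raise KeyError("RBA %012X not found in any log location" % rba)

# A record still in the active log is found before the archives are touched:
loc, rec = fetch_log_record(
    0x00EC34C7B000,
    log_buffers={},
    active_logs={0x00EC34C7B000: "update record"},
    archive_logs={},
)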
1. SANDBOX
2. DEVELOPMENT/TEST
3. QUALITY
4. PRODUCTION
We have to inform the application teams using the above DB2 regions (except Sandbox)
when we plan to migrate them, coordinating through the DBAs and management, and
get clearance for the activity. We need to do an impact analysis for the migration to V8
and inform the DBAs and application teams accordingly. We need to check the RETAIN
database at IBM and find out what level DB2 has to be at for
10.1.2.
PREREQUISITES
Primary activities     Responsibility    Accountability    Consultancy    Informed
Installation of DB2    DBOL Team         DBOL-Lead         N/A            Application
                                                                          owner, DB2
10.1.4. INPUT
1. Installation Tapes.
2. CAB approved change ticket.
10.1.5. PROCESS
Detailed steps of Installation procedure can be found in the following link.
Detailed process
SMP/E Environment
Change Ticket
10.1.6. OUTPUT
10.1.8. REFERENCE
1. DB2 Version 8 Installation & Migration Guide from IBM Redbooks
2. The dataset X.TC.INSTALL.DB2.V8R1M0.SAPR3.TEXT available on legacy LPARS
The SMP/E dataset names available in Legacy, along with their zone names, are:

Dataset Name                               Zone Name
DB2V8.SYS6.DSN81x.SMPE.DB2.DLIB.CSI        DLIB
DB2V8.SYS6.DSN81x.SMPE.DB2.GLOBAL.CSI      Global
DB2V8.SYS6.DSN81x.SMPE.DB2.TARGET.CSI      Target

Dataset Name                               Zone Name
DB2V8.SYS6.DSN81D.SMPE.DB2.DLIB.CSI        DLIB
DB2V8.SYS6.DSN81D.SMPE.DB2.GLOBAL.CSI      Global
DB2V8.SYS6.DSN81D.SMPE.DB2.TARGET.CSI      Target
After maintenance is done and the maintenance level changes, the aliases for the DB2 load
libraries are made to point to another set of libraries (say, D) when the DB2 subsystem is
brought down via Automation. Once the LPARs are IPLed and the DB2 subsystem comes
up, it will point to the new set of libraries.
Co-ordinate with the z/OS and storage teams during maintenance; the person who is on call
is the primary contact during that window. That person can be found using WHOFIXES, an
in-house tool in KC, or via the calendar of the group mailbox: the name of the primary
on-call person for the week is displayed at the top of the calendar.
If we migrate the Sandbox regions to V9 while the rest of the regions are still on V8, we
freeze the maintenance for V8 and apply V9 maintenance only on the Sandbox regions;
only emergency PTFs are applied for V8. Once the rest of the regions are migrated to V9,
we can start rolling V9 maintenance onto all the regions.
During maintenance, DB2 commands are not used to bring down the DB2 subsystems;
automation takes care of it.
11.1.1. Purpose
Maintenance is carried out in order to keep the regions at a recent PUT level, and to apply
any special PTFs specified by IBM, SAP, or other users of the system.
11.1.2. PREREQUISITES
Be ready with the preparatory work and the APPLY CHECK analysis for the
maintenance to be carried out.
Send the APPLY CHECK report analysis relevant to the DBA team.
Prepare the post-installation jobs for pre-compiling and assembling the security exits,
and the jobs that update the BSDS.
Create the change ticket using HPSD for the maintenance to be carried out.
Wait for the Change Advisory Board to approve the change ticket.
Primary activities     Responsibility    Accountability    Consultancy    Informed
Applying Maintenance   DBOL Team         DBOL-Lead         KC             To the user,
to the regions                                                            DB2
11.1.4. INPUT
11.1.5. PROCESS
Detailed steps of maintenance procedure can be found in the following link.
Detailed process
Check list
11.1.6. OUTPUT
1. The changed aliases for the SMP/E datasets of the regions.
2. Updated SMP/E letter with the current maintenance level.
3. Run the "after" jobs needed for some PTFs or sysmods to take full effect.
11.1.8. REFERENCE
The data set DB2V8.RSU0702D.CNTL($install).
or Production. We can check with the DBAs to understand the application requirements
before entering the values for ZPARM. During migration, we do not change the
configuration values.
12.1.1. Purpose
The creation of a new subsystem is done based on the requirements of the application team.
Each subsystem is dedicated to a given set of applications; this localises the effect of any
outage, i.e. if there is an outage on a particular subsystem, only the applications that use
that subsystem become unavailable, and the rest of the applications are not impacted.
12.1.2. PREREQUISITE
1. Receive request from the application team and approval from the management.
2. Create the change ticket using HPSD for the maintenance to be carried out.
3. Wait for the Change Advisory Board to approve the Change Ticket.
4. Be ready with the preparatory work done.
5. Co-ordinate with the DASD and Mainframe teams during installation, and check that the
Mainframe team has created the Coupling Facility structures if we are building a data
sharing subsystem.
6. Receive the storage packs from DASD, and the LPAR information and port values from
the Mainframe team.
Primary activities     Responsibility    Accountability    Consultancy    Informed
Creation/Migration     DBOL Team         DBOL-Lead         N/A            Application
of subsystem                                                              owner, DB2
12.1.4. INPUT
1. Storage packs from DASD, LPAR information & port values from Mainframe teams.
2. CAB approved change ticket.
3. New subsystem creation check list.
4. Subsystem migration check list.
12.1.5. PROCESS
Detailed steps of the subsystem creation/migration procedure can be found in the following link.
Detailed process
Data Shared A
Data Shared B
Data Shared A
Data shared B
12.1.6. OUTPUT
1. New subsystem ready to use/ Existing subsystem migrated to a new version.
2. New entry in the Automation Netview.
3. Run the verification jobs and test the newly created subsystem with the application teams.
12.1.8. REFERENCE
1. DB2 Version 8 Installation & Migration Guide from IBM Redbooks.
2. The dataset DB2v8.TEXT(DB2UPGAB) available on SAP LPARS
3. The dataset DB2v8.TEXT(DB2UPG78) available on SAP LPARS
4. The dataset DB2v8.TEXT(DB2v8SHA) available on SAP LPARS
5. The dataset DB2v8.TEXT(DB2V8SHB) available on SAP LPARS
6. The dataset X.TC.INSTALL.DB2.V8R1M0.SAPR3.TEXT available on legacy LPARS
Before retiring a subsystem, the strategy followed in KC is to take a backup of all
the DB2 system datasets, decommission the DB2 subsystem members from
SYS7.PROCLIB, and remove the subsystem definition from Automation. We shut
down the subsystem and wait for 6 months to confirm there are no user requests for
bringing it back up. If so, we can go ahead and delete the datasets after getting
approval from Management.
According to the checklist there are two kinds of refreshes. When we are doing
a refresh for a target region for the first time, we have to follow all the steps given in
the checklist, including those marked "First". For a refresh of a target system that has
been refreshed before, we do not have to follow the steps marked "First" in the checklist;
we can skip them.
Advantages of using the IBM cloning tool for production copy:
1. With the cloning tool we can do a live clone, i.e. we don't have to bring down the
subsystem for this purpose, and hence we avoid an outage.
2. Without the tool we would have to unload the existing production data and
load it into the prodcopy subsystems, which is a time-consuming and laborious process.
13.1.1. PURPOSE
To replicate the production data to quality so that quality assurance team can proceed with
their quality checks.
13.1.2. PREREQUISITE
1. Receive requests from Basis team through production copy refresh website.
2. Inform the storage team & co-ordinate with them.
3. Schedule the same in the Group mail box calendar.
4. Be ready with the preparatory work done before the DASD snap happens.
5. Keep the prod copy checklist ready and proceed accordingly.
Responsibility    Accountability    Consultancy    Informed
DBOL Team         DBOL-Lead         KC             Application owner, DB2
13.1.4. INPUT
1. Basis teams request.
2. prod copy checklist.
13.1.5. PROCESS
Detailed steps of prod copy process can be found in the following link.
Detailed process
13.1.6. OUTPUT
1. The cloned prod copy region
Checklist
13.1.8. REFERENCE
1. IBM Cloning Tool (Mainstar) Manual.
2. The dataset DB2V8.TEXT (P$$4C$$).
In case of total system failure (LPAR failure), the DASD team is responsible end-to-end
for recovering the DB2 datasets and subsystems.
2. In case we need to recover the system catalog and directory, the DBOL team is
Class 0 - A set of volumes for the DB2 catalog and directory, DB2 loadlibs, and
target libraries, and the ICF catalog for this data. Volumes are named <Sid>0*
Class 1 - A set of volumes for sort. Volumes are named <Sid>1*
Class 2 - A set of volumes for the BSDS, active logs, and the ICF catalog for this
data. Volumes are named <Sid>2*
Class 3 - A set of volumes dedicated to the SAP data and the ICF catalog for this
data. Volumes are named <sid>3*, <sid>4*, <sid>5*, etc.
All archive logs and image copies for all Sids will be assigned to a shared set of
DASD volumes.
System Automation, which is controlled by the z/OS team, recognizes any DB2 dumps that
occur and sends a mail to the group mailbox of the DBOL team.
The Magic & HPSD tools are used to resolve requests.
There are no housekeeping jobs run as part of the DB2 team's day-to-day activities.
18. KC AUTOMATION
The DB2 startup and shutdown process for SAP is done using Automation. Whenever DB2 is
brought down for maintenance or an IPL, it is done using Automation, which will not allow
DB2 to come down even if we manually try to bring it down using commands.
Similarly, the ICLIs (Integrated Call Level Interface) running with some DB2 SAP
applications are brought down and brought up by automated means.
On IPL weekends, however, we don't have to coordinate with any other team: the Operations
team has special automation through which the DB2 subsystems are brought down during
the IPL.
Whenever a new subsystem is created, the mainframe team is notified ahead of time; as the
current owners of Netview automation, they will take care of defining the new subsystem
to Netview.
Overview
Detailed process
TOOLS                                     DESCRIPTION
Endeavor (Change control)                 Used to manage the process of developing and
                                          maintaining software for mainframe applications.
IDEAL-DATACOM/DB Utilities & Functions    Integrated Development Environment.
Output Processing (IOF)
LogRec                                    A tool to keep track of any abends that are
                                          occurring.
Netview                                   Brings the DB2 subsystems down and up through
                                          Automation. Whenever DB2 is brought down for
                                          maintenance, it is done through Netview
                                          automation, which will not allow DB2 to come
                                          down even if we manually try to bring it down
                                          using commands. Similarly, the ICLIs
                                          (Integrated Call Level Interface) running with
                                          some DB2 SAP applications are brought down and
                                          up by automated means through Netview.
SMP/E                                     System Modification Program/Extended.
TR                                        Computer Tape Handling Request/Authorization
                                          form to be filled in prior to sending tapes to
                                          the tape library rack.
INFOPAC
Magic
Whofixes
Have one central point of control to manage and monitor all DB2 instances
conflicts.
Highlight when you exceed exception thresholds or when you reach event
thresholds.
Monitor DB2 Connect and the connection with remote applications along with
the host thread information giving you a complete picture of resources and time
spent with DB2, DB2 Connect, and the network.
exception events, and review those exceptions that have occurred in the
exception log.
There are 6 sets of SMP/E libraries used for maintenance on Performance Expert.
The datasets have the following format:
DB2V8.SYS6.DSN81E.FPE210.AFPEDATA
DB2V8.SYS6.DSN81E.FPE210.AFPEDBRM
DB2V8.SYS6.DSN81E.FPE210.AFPEEXEC
DB2V8.SYS6.DSN81E.FPE210.AFPEFORM
DB2V8.SYS6.DSN81E.FPE210.AFPEINS0
DB2V8.SYS6.DSN81E.FPE210.AFPEMENU
DB2V8.SYS6.DSN81E.FPE210.AFPEMOD0
DB2V8.SYS6.DSN81E.FPE210.AFPEMOD1
DB2V8.SYS6.DSN81E.FPE210.AFPEPENU
DB2V8.SYS6.DSN81E.FPE210.AFPESAMP
DB2V8.SYS6.DSN81E.FPE210.AFPESLIB
DB2V8.SYS6.DSN81E.FPE210.AFPETENU
DB2V8.SYS6.DSN81E.FPE210.AFPEWS01
DB2V8.SYS6.DSN81E.FPE210.SFPEDATA
DB2V8.SYS6.DSN81E.FPE210.SFPEDBRM
DB2V8.SYS6.DSN81E.FPE210.SFPEEXEC
DB2V8.SYS6.DSN81E.FPE210.SFPEFORM
The current version of Performance Expert is 2.01 and the current maintenance level is 703.
This document gives a detailed view on how to use PE
20.1.1. PURPOSE
Install new version of Performance Expert
20.1.2. PREREQUISITE
1. Management needs to order the Installation tapes from IBM.
2. Be ready with the preparatory work done and Impact analysis for the Installation to be
carried out.
3. Receive the Installation tapes from IBM.
4. Co-ordinate with DASD, Mainframe team during Installation.
5. Create the change ticket using HPSD for the Installation to be carried out.
6. Wait for the Change Advisory Board to approve the Change Ticket.
Primary activities     Responsibility    Accountability    Consultancy    Informed
Install Performance    DBOL Team         DBOL-Lead         KC             DBA, DB2
Expert
20.1.4. INPUT
1. Installation tapes.
2. Storage disk packs from storage team.
20.1.5. Process
Detailed steps of this process can be found in the following link.
Once the above steps are completed successfully, configure the subsystems.
Detailed steps of this process can be found in the following link.
20.1.6. OUTPUT
1. Tailored installation jobs.
2. Updated SMP/E library.
3. Test the Installation using verification steps.
In case of problem with Performance Expert tool, we need to check out the problem and try to
resolve the issue; else we need to contact the vendor to track the problem.
20.1.8. REFERENCE
The website for the IBM Products is:
http://www-3.ibm.com/software/support/
When a new release of Performance Expert is carried out, we need to contact the DBAs.
for DBAs step through the processes of data unloading, object dropping and rebuilding, and
data reloading.
The tool is designed with an easy-to-use interactive system productivity facility (ISPF)
interface that lets you manage and process the DB2 objects, and organize them for better
system throughput. DB2 Admin Tool provides in-depth catalog navigation by displaying and
interpreting objects in the DB2 catalog and executing dynamic SQL statements. It is
integrated with other DB2 utilities to simplify the creation of DB2 utility jobs
The alternative tool available in market for DB2 Admin tool is CA-Platinum.
Displays the static SQL statements from application plans and packages
Executes dynamic SQL statements (in many cases, without requiring you to remember
SQL syntax)
Issues DB2 commands against databases and table spaces (without requiring you to
remember the command syntax)
Enables you to copy (migrate) DB2 data (both databases and table spaces) to other
DB2 systems
Enables you to extend existing DB2 Admin applications or to rapidly develop new
applications
Enables you to perform space-related functions such as resizing page sets; lets
you move page sets to and from STOGROUP- and VCAT-defined space; and helps you
estimate space allocations for new table spaces and indexes
Enables you to create and manage work statement lists (WSLs) and run them in batch
Enables you to launch installed IBM DB2 tools that have an ISPF interface
Enables you to dynamically manage system parameters (if running with DB2 Version 7
or above)
Enables you to request the Prompt function, so that you are prompted before a
statement is executed
There are 6 sets of SMP/E libraries used for maintenance on the Admin tool.
The datasets have the following format:
DB2V8.SYS6.DSN81E.ADB510.AADBBASE
DB2V8.SYS6.DSN81E.ADB510.AADBCLST
DB2V8.SYS6.DSN81E.ADB510.AADBDBRM
DB2V8.SYS6.DSN81E.ADB510.AADBEXEC
DB2V8.SYS6.DSN81E.ADB510.AADBMLIB
DB2V8.SYS6.DSN81E.ADB510.AADBNCAL
DB2V8.SYS6.DSN81E.ADB510.AADBPLIB
DB2V8.SYS6.DSN81E.ADB510.AADBSAMP
DB2V8.SYS6.DSN81E.ADB510.AADBSLIB
DB2V8.SYS6.DSN81E.ADB510.AADBTLIB
DB2V8.SYS6.DSN81E.ADB510.SADBBASE
DB2V8.SYS6.DSN81E.ADB510.SADBCLST
DB2V8.SYS6.DSN81E.ADB510.SADBDBRM
DB2V8.SYS6.DSN81E.ADB510.SADBEXEC
DB2V8.SYS6.DSN81E.ADB510.SADBLINK
DB2V8.SYS6.DSN81E.ADB510.SADBLLIB
DB2V8.SYS6.DSN81E.ADB510.SADBMLIB
DB2V8.SYS6.DSN81E.ADB510.SADBPLIB
DB2V8.SYS6.DSN81E.ADB510.SADBSAMP
DB2V8.SYS6.DSN81E.ADB510.SADBSLIB
DB2V8.SYS6.DSN81E.ADB510.SADBTLIB
The current version of DB2 Admin tool is 5.1 and the current maintenance level is 703.
This document gives a detailed view on how to use DB2 Admin tool.
21.1.1. PURPOSE
Install new version of DB2 Admin.
21.1.2. PREREQUISITE
1. For a new installation, a project charter has to be prepared and approved by
management.
2. Management needs to order the Installation tapes from IBM.
3. Be ready with the preparatory work done and Impact analysis for the Installation to be
carried out.
4. Receive the Installation tapes from IBM.
5. Co-ordinate with DASD, Mainframe team during Installation.
6. Create the change ticket using HPSD for the Installation to be carried out.
7. Wait for the Change Advisory Board to approve the Change Ticket.
21.1.3. INPUT
1. Installation tapes.
2. Storage disk packs from storage team.
Primary activities     Responsibility    Accountability    Consultancy    Informed
Install DB2 Admin      DBOL Team         DBOL-Lead         KC             DBA, DB2
Tool
21.1.5. PROCESS
Detailed steps of this process can be found in the following link.
Once the above tasks are completed successfully, follow the below instructions to configure
the subsystem. Detailed steps of this process can be found in the following link.
21.1.6. OUTPUT
1. Invoke DB2 Admin Tool.
21.1.8. REFERENCE
In case of a problem with the DB2 Admin tool, we investigate and try to resolve the
issue ourselves; otherwise we contact the vendor to track the problem.
The website for the IBM Products is:
http://www-3.ibm.com/software/support/
When a new release of DB2 Admin tool is carried out, we need to contact the DBAs.
22. LOGREC
LOGREC is a free tool provided by IBM, used to capture the error log for a specified
time range with the help of the EREP program. This information may be used to analyze
a problem or to report it to IBM.
We can use the following options also
S = Summarize LOGR LOGREC data
D = Detailed LOGR Software Records
I = LOGR LOGREC Inventory
O = User EREP Input from dataset
The z/OS team installs this product. It is a free product, so no vendor service is available
for it. Detailed information is found in
23. DAE
DAE (Dump Analysis and Elimination) is a tool that keeps track of dumps. It creates a
dump for an abend the first time it happens; for subsequent occurrences it does not create
a dump but records the number of times that dump has occurred. We can go into the tool
and manually take a dump by entering the (T) "take next dump" option. It is basically used
to keep track of abends and to eliminate the taking of unnecessary dumps.
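The suppression behaviour described above can be pictured with a small sketch: dump on first occurrence of a symptom, count thereafter, and honour a "take next dump" request. This is an illustration of the idea only, not of the actual DAE implementation (class and method names are ours):

```python
class DumpSuppressor:
    """Minimal sketch of DAE-style suppression: take a dump the first
    time a symptom is seen; afterwards only count occurrences, unless
    'take next dump' has been requested for that symptom."""

    def __init__(self):
        self.counts = {}       # symptom -> number of occurrences seen
        self.take_next = set() # symptoms with a pending (T) request

    def report_abend(self, symptom):
        """Record an occurrence; return True if a dump should be taken."""
        first_time = symptom not in self.counts
        self.counts[symptom] = self.counts.get(symptom, 0) + 1
        if first_time or symptom in self.take_next:
            self.take_next.discard(symptom)
            return True
        return False

    def take_next_dump(self, symptom):
        # Corresponds to the (T) take-next-dump option in the DAE panel.
        self.take_next.add(symptom)

dae = DumpSuppressor()
first = dae.report_abend("S0C4/PGM=DSNXYZ")   # first occurrence: dump
second = dae.report_abend("S0C4/PGM=DSNXYZ")  # repeat: suppressed, counted
```

The occurrence count is still maintained for suppressed duplicates, which is what lets us see how often an abend is recurring without filling DASD with identical dumps.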
24. NETVIEW
Netview is an IBM Tivoli product developed to make the operator's work easier: the
operator no longer has to remember the commands to bring a subsystem down or up. In
automation, everything is considered a task. Apart from bringing a subsystem up and
down, a variety of other information can be obtained: we can display the status of a task,
its desired and actual states, the eligible resource on which it can run, the tasks that are
required to be up before this task is brought up, and the tasks that this task supports,
i.e. the tasks it automatically brings up.
1.1.1. KC AUTOMATION
A few automation tasks have been set up in SAP for the maintenance carried out on
Saturdays. Bringing the subsystems down is taken care of by Netview automatically. After
the subsystems come down, a task called DB2FLIPV8 starts automatically, which flips the
SMP/E letter to the current maintenance level of that week; this task gets its information
from the flags set in the member $SID6W8.
After the IPL, when the subsystems start coming up, there are some after jobs to be run;
this is also taken care of by automation, using the information from the flags set in the
member $SID4WS. Any jobs that are to be run after a subsystem comes down are driven by
the flags in $SID3WS.
All of this automation is part of Netview.
24.
Version | EOS Date      | Replacement
8.1     | Not Available |
5.1     | 30 Sep 2007   | DB2 Admin Tool Version 7.1
2.1.0   | 30 Sep 2008   | OMEGAMON XE for DB2 Performance Expert version 3.1

LPARs: TCP0, TCP1, TCP2, TCP3, TCQ0, TCQ1, TCT0, TCT1, TCT2, TWQ0, TWQ1, TCX0, TCX1
Groups (on each of the above LPARs): DATABASE, DB2ADM, DB2SUPP, OMVSGRP, SYSPROG
IBM user services: http://www-304.ibm.com/usrsrvc/account/userservices
CA: http://www.ca.com/us/support
DB2CONN
IBM Software problem submission: http://www.ibm.com/software/support/probsub.html
IBM Software Support handbook: http://techsupport.services.ibm.com/guides/handbook.html
ESR: http://www.ibm.com/software/support/help.html
Strategy or approach
When a new user joins KC, the activities to be completed are listed here.
RACI Matrix
<eg> Responsibility, Accountability, Consulting and Informed (RACI)
Primary activities | Responsibility | Accountability | Consultancy | Informed
Getting Employee id | Team Leader | Project Manager | MATC member | Team Leader
Getting physical access to the KC area | ISC | Admin Team | PMO Team | PMO
Getting Qid | Team Leader | Tower Leader | KC-Tower leader | PMO member
Getting allocated in Ultimatix | Project Manager | Accounts Manager | Project Manager, MATC member | PMO Team
Task allocation in iPMS | PMO Team | PMO | Tower Leader | PMO member
Input
1. Offer letter needed for getting employee id.
Steps
1. After receiving the employee id, the employee needs to send a mail or contact the team
leader, for getting physical access to the KC area.
2. The Team leader will forward the request to ISC; the ISC will send the request through the Lotus Notes security database, and Admin will validate and give the access rights to the list of people.
3. The team leader will get the KC-NDA signed by the team member and fax it to KC. The
request for Qid will also be raised in the incident management tool.
4. The Qid will be received by the Service Desk team and informed to the team member.
Team member to update the team leader with details. The team leader will give the
updates to the PMO.
5. Project manager to raise the request in Ultimatix and get the allocation done for the team
member.
6. Once the member is allocated to the WON, task will be allocated in iPMS for the member.
Output
1. Physical access id card.
2. Updated list of employees of KC with Employee id, Name, QId, Access availability,
Onsite/Offshore and status available with PMO.
Employee id | Employee name | QID | Access (Y/N) | Availability | Onsite/offshore | Status

References
Ultimatix - My allocations, to verify the allocation
iPMS - to know the WON in which the allocation is done
Strategy or approach
The process flow is below.
<Process flow diagram: the Support Team Member performs the initial analysis and informs Operations / Help Desk that the problem is being addressed; prioritizes the problem if there is more than one at the same time; performs the fix; for a major incident, a PIR is conducted by KC with all concerned parties (IS, Business, Support Team Mgmt) and a PIR report is generated; finally the Support Team Member (a) updates and closes the ticket on receiving user feedback, (b) sends a Problem Report with all details to concerned parties, if applicable, and (c) records the problem details in Solution Vault.>
RACI Matrix
<eg> Responsibility, Accountability, Consulting and Informed (RACI)
Primary activities | Responsibility | Accountability | Consultancy | Informed
Problem reporting | User/Service Desk | | |
Analysing, resolving and escalating | Team Member | Team Leader | User/Service Desk operator | End user
Input
1. Problem ticket or problem reported by the user through desk phone, mobile phone or
through pager, at day or night.
2. The problem can even be reported through conference calls by the Service Desk.
Steps
Explanation of the process flow diagram
No.
Explanation
Initial Analysis
When a problem is reported, an initial analysis has to be performed by the support person. If the reported problem is determined to be a core-support problem belonging to the concerned group, the support person will take immediate action to provide the necessary fix.
If the problem does not belong to the concerned group, then Operations or Help Desk would be
requested to reassign the problem ticket to the relevant group.
Prioritization
When there is more than one Core-support request, the Support Team Leader will prioritize them based on the Severity Guidelines and the recommendations given by KC, as there will be only a limited number of resources in the pool to work on them. When multiple users are affected or the entire site is down, a PIR will be conducted by KC.
Co-ordination
If the fix requires support effort from other Support groups / customers, Support Team member
will request the Service Desk to co-ordinate the re-prioritization of tasks for other Support
group(s) and inputs / feedback required from customers.
Perform Fix
Break-fix requests should be given the highest priority over any other task. Production problems
would be given the next priority. Support Team Member will ensure that the fix is provided on
time to ensure smooth running of business.
Communication
a) During resolution, the Support Team Member will keep all concerned parties (Business, IS and management) posted on the progress. A preliminary Problem Report would be sent to all if SLA slippage is anticipated.
b) On completion of the fix, the Support Team will communicate to all concerned parties and close the request after receiving feedback from the user. A detailed Problem Report would be sent to all if there was a slippage. Problem details would also be logged in the Solution Vault tool.
Escalation Process
The escalation process to be followed if the Support team cannot meet the SLA for a Core-support request is explained in Escalation Procedure.
References
Policies and procedures documents of the Mainframe Tower.
Escalation Procedure
Purpose
The purpose is to escalate tickets that are Severity 1 and are missing their SLAs.
Strategy or approach
The escalation matrix is to be developed and communicated to all the members of the team. Five levels of escalation are to be documented, with the names of the people from the tower.
RACI Matrix
<eg> Responsibility, Accountability, Consulting and Informed (RACI)
Primary activities | Responsibility | Accountability | Consultancy | Informed
Preparation and updating of escalation procedures | Team Leader | Tower Leader | Customer | All the members of the team
Following escalation procedures | Team member | Team Leader | |
Input
1. Escalation matrix
Elapsed time *
10 minutes
20 minutes
45 minutes
90 minutes
120 minutes
Until resolution
Steps
1. Ticket analysis is to be done. Depending upon the severity of the issue and the time elapsed, escalation to the next level is to be done according to the escalation matrix, and the contact person from the contact list is to be notified.
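The escalation matrix above can be driven mechanically: given the minutes elapsed on a Severity 1 ticket, determine how many escalation levels should already have been notified. The thresholds below come from the matrix (10, 20, 45, 90, 120 minutes, then until resolution); the function itself is an illustrative sketch, not part of any existing tool.

```python
# Sketch: map minutes elapsed on a Severity 1 ticket to the escalation level
# reached, using the thresholds from the escalation matrix above.

ESCALATION_MATRIX = [10, 20, 45, 90, 120]  # minutes elapsed -> levels 1..5

def escalation_level(elapsed_minutes):
    """Return the escalation level reached after elapsed_minutes."""
    level = 0
    for threshold in ESCALATION_MATRIX:
        if elapsed_minutes >= threshold:
            level += 1
    # level 5 means all levels have been notified; escalation then
    # continues at that level until resolution.
    return level

print(escalation_level(5))    # before the first threshold -> 0
print(escalation_level(25))   # past 10 and 20 minutes -> level 2
print(escalation_level(150))  # past all thresholds -> level 5
```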
Note:
1. If a problem is being reassigned to several groups, the SLA would have to be calculated based on
the actual time spent by concerned groups in the problem resolution. Standard escalation process
would still have to be followed by Support team.
2. If a lower severity problem (Severity 3) leads to a higher severity problem (Severity 1), either due to a bad fix or oversight, then a new Severity 1 ticket has to be opened by the concerned support person for his own support group, so that further escalation happens as expected.
3. If the problem resolution requires action from more than one support group, the KC SPOC would be requested to co-ordinate the reprioritization of the other groups' tasks.
Output
1. Updated Ticket in the Ticketing system.
2. PIR report
Group | Name | Work No. | Home No. | Login ID
Management: Team leader, Tower leader, KC Tower leader, PM, KC PM, KC SPOC
Mainframe Systems: HR & Payroll; Inventory & Logistics, Marketing, Pricing; Stores
Non-Mainframe Systems: Datawarehouse; Infrastructure, Stores
<The contact list should cover all the Cross-functional contacts needed for resolving issues>
2. SPOC list
No. | Business Area | SPOC | KC SPOC
1. | | |
2. | CRM | |
3. | Datawarehouse | |
4. | | |
5. | | |
6. | | |
7. | DBA (DB2) | |
A form with the details of how to update the ticket in the tool can be added.
References
None.
Strategy or approach
The customer conducts a PIR meeting, depending upon the severity of the problem. A similar meeting is to be conducted for all problems, using brainstorming & fishbone analysis tools to find the root cause of the problem. An action plan for preventing the problem in future is also to be brainstormed and prepared.
RACI Matrix
<eg> Responsibility, Accountability, Consulting and Informed (RACI)
Primary activities | Responsibility | Accountability | Consultancy | Informed
Creating problem ticket | Team member | Tower Leader | | Customer and affected parties
Conducting RCA meeting | Team Leader | Project Manager | |
Preparation of RCA report | Team Leader | Tower Leader | Customer and affected parties | Customer
Input
1. Problem and severity details of the problem
Steps
1. Problem ticket is to be created in the ticketing tool
2. Team member to escalate to the Team Leader and involve him for the meeting.
3. Team Leader to call Tower leader and involve him for the meeting.
4. Team leader to schedule and conduct the meeting, involving all the relevant people.
5. Brainstorming and Fishbone tools to be used to find the root cause of the problem.
6. Preventive action plan is also to be discussed.
7. RCA report is to be prepared and the preventive action plan to be submitted to all the
affected parties and the customer.
Output
1. Updated Ticket in the Ticketing system.
2. RCA report
3. Preventive action plan.
<PIR / RCA report template>
Description: System, Component, Type of Failure, Problem Owner, Outage, Severity, Start Date/Time, Restore Date/Time, Duration, Description
Analysis: Impact analysis, Root cause, Time line, Contributing factors, Resolution, Analyzed by, Reviewed by
References
Strategy or approach
All the tickets handled by each person, whether through tickets, emails or phone calls, are to be logged into Excel sheets on a daily basis. Each tower is to have its own review folder configured, with monthly folders created under it. Under those folders, weekly Excel files will be created with daily sheets in them.
For example: KC folder for the tower -> Ticket tracking & reviewing folder -> Apr 07 folder.
This folder is to have an Excel file for every week, named with the start date of the week: 02-Apr-07 week1.xls. This file will have sheets for every day: 02-Apr-07, 03-Apr-07, etc.
The Team Leader is to verify the ticket details entered every day. The Tower Leader is to audit the ticket review details every week.
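The naming convention above can be sketched as a small helper that derives the monthly folder, weekly file and daily sheet names for a given date. This is an illustrative sketch only; it assumes weeks start on Monday, which matches the 02-Apr-07 example in the text.

```python
# Sketch of the ticket-review naming convention: monthly folder 'Apr 07',
# weekly file '02-Apr-07 week1.xls', daily sheet '03-Apr-07'. Assumes weeks
# start on Monday, as in the example above.
from datetime import date, timedelta

def review_paths(day):
    """Return (monthly folder, weekly file, daily sheet) names for a date."""
    monday = day - timedelta(days=day.weekday())   # start of the week
    week_no = (monday.day - 1) // 7 + 1            # week number within the month
    folder = monday.strftime("%b %y")              # e.g. 'Apr 07'
    weekly_file = f"{monday.strftime('%d-%b-%y')} week{week_no}.xls"
    daily_sheet = day.strftime("%d-%b-%y")         # e.g. '03-Apr-07'
    return folder, weekly_file, daily_sheet

# 3 Apr 2007 falls in the week starting Monday 2 Apr 2007:
print(review_paths(date(2007, 4, 3)))
# ('Apr 07', '02-Apr-07 week1.xls', '03-Apr-07')
```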
RACI Matrix
<eg> Responsibility, Accountability, Consulting and Informed (RACI)
Primary activities | Responsibility | Accountability | Consultancy | Informed
Logging the ticket details into the ticket review excel | Team Member | Team Leader | Users |
Creation of the Ticket Review file | Team Leader | Tower leader | |
Verification of the ticket details entered every day | Team Leader | Tower Leader | Team Members |
Validation of the ticket details entered every week | Tower leader | Project Manager | Team Leaders and Members | Team Members
Input
1. Incidents received through phone, mails and tickets.
Steps
1. The template to be used for logging and tracking the ticket details.
2. Monthly folders and weekly files are to be created by the tower/team Leader.
3. Daily sheets are to be created in the files for every week. The file is to be a shared file, so that multiple people can make entries.
4. The ticket details are to be entered by the team members.
5. Every day the team leader needs to verify whether everyone in the team has entered all the tickets they handled.
6. Every week the Tower Leader needs to verify the sheets for completeness and correctness of entries.
7. The Tower Leader should also prepare a ticket audit report in Excel.
8. The Quality Manager will do monthly verification on the process being followed.
Output
1. Weekly review files with daily sheets - Team Leader and Team Members
2. Weekly Audit files - Team Leader & Tower Leader
3. Monthly FI report - QM
Monthly FI checklist: FI checklistIS-Chennai V1.1.doc
References
None
25.
Vendor Tool | Version | EOS Date
 | V5.1 | 30-Sep-07
 | V2.1 | 30-Sep-08
For resolving DB2 related problems, the approach and steps to be taken are
given below:
Determine the cause of the problem by analyzing the DB2 logs. The DB2 logs will give information about the error code, reason code, type and the object name.
Look into the DB2 Messages and Codes manual to find the detailed description of the problem. The same can be found in:
Look at the online RMF reports to check for any enqueue related to DASD or for CPU related delays. Escalate to the appropriate team depending on whether the problem is DASD (storage) or CPU (z/OS) related.
Based on the analysis, take the necessary steps, like running utilities or jobs, to resolve the problem.
In case of any emergency changes that need to be done, like recycling the DB2 subsystem, we need to contact the necessary persons for approvals and implement the changes.
Work with the impacted application and technical teams to resolve the issue.
It is important to note down the actions and timing during the problem solving process.
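The first step above, extracting the error code, reason code, type and object name from a DB2 log message, can be sketched as a simple parse. The sample message line below is illustrative: DSNT501I is a real DB2 resource-unavailable message, but its exact field layout varies by release, so the pattern here is an assumption, not a universal parser.

```python
# Sketch: pull the error code, reason code, resource type and object name out
# of a DB2 log line. The sample line and field layout are illustrative only;
# consult the DB2 Messages and Codes manual for the exact message format.
import re

log_line = ("DSNT501I  -DB2P RESOURCE UNAVAILABLE "
            "REASON 00C90088 TYPE 00000200 NAME DBX1.TSPACE1")

pattern = re.compile(
    r"(?P<msgid>DSN\w+)\s+-(?P<ssid>\w+)\s+.*"   # message id and subsystem
    r"REASON\s+(?P<reason>\w+)\s+"               # reason code
    r"TYPE\s+(?P<rtype>\w+)\s+"                  # resource type
    r"NAME\s+(?P<name>\S+)"                      # object name
)

m = pattern.search(log_line)
if m:
    print("Message code:", m.group("msgid"))   # DSNT501I
    print("Reason code :", m.group("reason"))  # 00C90088
    print("Type        :", m.group("rtype"))   # 00000200
    print("Object name :", m.group("name"))    # DBX1.TSPACE1
```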
26.
Description
22793,379,000
76231,379,000