
Policy and Procedure document for DB2 SAP

VERSION 1.0

REVISION LIST

Document Name: DBOL-DB2 SAP Policy and Procedures Manual
Version Number: 1.0

Rev. No. | Revision Date | Revision Description | Page No. (Prev.) | Page No. (New) | Action Taken | Addendum/New Page | Release Notice Ref.
 | 27/04/2007 | Initial Draft | | | | |
 | 03/05/2007 | First Update | 18 | 14 | Updated the Contact List | |
 | 14/05/2007 | Second Update | 17, 20 | 15, 18 | Included all the organization charts | |
 | 29/05/2007 | Third Update | 53 | N/A | Included all the check lists in the procedures | |
 | 22/06/2007 | Fourth Update | 78 | N/A | Included the Access required for DBOL team | |

List of abbreviations

S.No | Abbreviation | Expansion

1 | CST | Central Standard Time
2 | PTF | Program Temporary Fixes
3 | IRLM | Internal Resource Lock Manager
4 | SMP/E | System Modification Program/Extended
5 | SQL | Structured Query Language
6 | CA | Computer Associates
7 | BSDS | Bootstrap Datasets
8 | TMS | Tape Management System
9 | CAB | Change Advisory Board
10 | CFRM | Coupling Facility Resource Manager
11 | DDF | Distributed Data Facility
12 | GSS | Global Subsystem
13 | Xmanager | Execution Manager
14 | OAM | Object Access Method
15 | DBOL | Database and Online Services
16 | CICS | Customer Information Control System
17 | SVC | Supervisor Call
18 | PMR | Problem Management Record
19 | BCP | Business Continuation Program
20 | DR | Disaster Recovery
21 | APAR | Authorized Program Analysis Report
22 | APF | Authorized Program Facility
23 | SP | Stored Procedure
24 | WLM | Work Load Manager
25 | SPAS | Stored Procedure Address Space
26 | DBM1 | Database Manager (DB2 database services address space)
27 | MSTR | Master (DB2 system services address space)
28 | CSI | Consolidated Software Inventory
29 | RIM | Related Installation Materials
30 | CBPDO | Custom-Built Product Delivery Offering
31 | SAP | Systems, Applications and Products
32 | ICLI | Integrated Call Level Interface
33 | GBP | Group Buffer Pools
34 | BP | Buffer Pools
35 | XCF | Cross System Coupling Facility

LIST OF FIGURES

ONCALL SUPPORT PROCESS
CUSTOMER ORGANIZATION CHART
ORGANIZATION CHART
SOFTWARE INSTALLATION CHART
CONTENTS
ONCALL SUPPORT PROCESS...........................................................3

CUSTOMER ORGANIZATION CHART................................................3


ORGANIZATION CHART.....................................................................3
SOFTWARE INSTALLATION CHART..................................................3
1. INTRODUCTION.................................................................................7
1.1. PURPOSE........................................................................................................................................................7
1.2. MISSION STATEMENT....................................................................................................................................8
1.3. SERVICES OVERVIEW....................................................................................................................................8
1.4. GOALS AND OBJECTIVES...............................................................................................................................8
1.5. ASSUMPTIONS, DEPENDENCIES, LIMITATIONS AND CONSTRAINTS...............................................................9
1.6. DETAILS OF THE WORK GROUPS UNDER SCOPE...........................................................................................9
1.7. KEY USERS/ CUSTOMERS BY NAME............................................................................................................10

2. SCOPE OF SERVICE.......................................................................10
2.1. SUMMARY OF AGREEMENT ........................................................................................................................10
2.2. SERVICE LEVEL AGREEMENT OVERVIEW................................................................................................10
2.3. SCOPE OF SERVICES.....................................................................................................................................12
2.4. LIST OF REPORTS TO BE REPORTED TO THE CLIENT....................................................................................13
2.5. PRIMARY SUPPORT TASKS AND RESPONSIBILITIES MATRIX.......................................................................13
2.6. TEAM ORGANIZATION.................................................................................................................................14

2.7. ESCALATION MATRIX.................................................................................................................................14
2.8. OS, HARDWARE, SOFTWARE, SERVER AND DATABASE DETAILS ..............................................................14

3. CUSTOMER ORGANIZATION CHART & CONTACT LIST.............16


3.1. ORGANIZATION CHART ..............................................................................................................................16
3.2. CUSTOMER & VENDOR CONTACT LIST ......................................................................................................16

4. ORGANIZATION CHART & CONTACT LIST.................................16


4.1. ORGANIZATION CHART ..............................................................................................................................16
4.2. CUSTOMER CONTACT LIST..........................................................................................................................16
TEAM................................................................................................................................................................16

5. SUPPORT PROCESS......................................................................17
5.1. RESOURCE AVAILABILITY ..........................................................................................................................17
5.2. KNOWLEDGE TRANSITION PLAN.................................................................................................................17

6. DBOL DB2 ROLES AND RESPONSIBILITIES............................17


7. DB2 SAP ENVIRONMENT OVERVIEW........................................19
7.1 ENVIRONMENT LANDSCAPE........................................................................................................................19
7.2 DB2 REGION NAMING CONVENTIONS........................................................................................................22
7.3 SMP/E ENVIRONMENT OVERVIEW..............................................................................................................24
7.4 DATA SHARING.......................................................................................................................................25
7.4.1 DATA SHARING - OVERVIEW............................................................................................................25
7.4.2 ADVANTAGES OF DATA SHARING..................................................................................................26
7.4.3 BENEFITS OF DATA SHARING...........................................................................................................26
7.4.4 DATA SHARING KC ENVIRONMENT............................................................................................27
7.5 KC AUTOMATION - OVERVIEW...........................................................................................................28
7.6 VENDOR AND THIRD PARTY TOOLS..................................................................................................28
7.7 SOFTWARE INSTALL & MAINT OVERVIEW..................................................................................29
7.8 TRACE - OVERVIEW...............................................................................................................................30
7.9 SVC DUMP OVERVIEW........................................................................................................................30
7.10 IBM PROBLEM REPORTING - AN OVERVIEW................................................................32

7.11 DB2 CONNECT - AN OVERVIEW........................................................................................33


7.11.1 DB2 CONNECT ENTERPRISE EDITION ..............................................................................................33
7.11.2 DB2 CONNECT PERSONAL EDITION.................................................................................................33
7.11.3 DB2 CONNECT UNLIMITED EDITION................................................................................................33
7.12 ENVIRONMENT LANDSCAPE - OVERVIEW..................................................................................................34
7.12.1 DATASETS INFORMATION........................................................................................................................34
7.12.2 NAMING CONVENTION AND STANDARDS...............................................................................................34
7.12.3 MAINTENANCE JOBS...............................................................................................................................36
7.12.4 ZPARM .................................................................................................................................................36
7.12.5 ASSEMBLY JCL.......................................................................................................................................37
7.12.6 COMMON DATASETS - SYSPLEX...........................................................................................................37
7.12.7 JCL LIBRARY..........................................................................................................................................37
7.13 APPLICATIONS SUPPORTED........................................................................................................................38
7.13.1 DB2 APPLICATIONS................................................................................................................................39
7.14 SAP EXITS OVERVIEW...............................................................................................................................39
7.15 ALERT MONITORING..................................................................................................................................39
7.16 UPGRADES- PAST AND FUTURE.................................................................................................................40
7.17 CHANGE TICKET PROCESS......................................................................................................................40

8. HOW TO SETUP DATA SHARING..................................................41


9. LOG MANAGEMENT.......................................................................42
10. SOFTWARE INSTALL....................................................................43
10.1. PROCESS & PROCEDURE....................................................................................................................43
10.1.1. PURPOSE.................................................................................................................................................44
10.1.2. PREREQUISITES................................................................................................................................44
10.1.3. RACI MATRIX.....................................................................................................................................45
10.1.4. INPUT...................................................................................................................................................45
10.1.5. PROCESS..............................................................................................................................................45
10.1.6. OUTPUT...............................................................................................................................................45
10.1.7. FORMS, CHECKLISTS OR TEMPLATES USED................................................................45
10.1.8. REFERENCE........................................................................................................................................46

11. SOFTWARE MAINTENANCE........................................................46


11.1. PROCESS & PROCEDURE....................................................................................................................46
11.1.1. PURPOSE.................................................................................................................................................48
11.1.2. PREREQUISITES.................................................................................................................................48
11.1.3. RACI MATRIX.....................................................................................................................................48
11.1.4. INPUT...................................................................................................................................................48
11.1.5. PROCESS..............................................................................................................................................48
11.1.6. OUTPUT...............................................................................................................................................48
11.1.7. FORMS, CHECKLIST OR TEMPLATES USED................................................................................49
11.1.8. REFERENCE........................................................................................................................................49

12. SUBSYSTEM CREATION/MIGRATION.........................................49


12.1. OVERVIEW.............................................................................................................................................49
12.1.1. PURPOSE.................................................................................................................................................50
12.1.2. PREREQUISITE...................................................................................................................................50
12.1.3. RACI MATRIX....................................................................................................................................50
12.1.4. INPUT..................................................................................................................................................50
12.1.5. PROCESS.............................................................................................................................................50
12.1.6. OUTPUT..............................................................................................................................................51
12.1.7. FORM, CHECKLIST, OR TEMPLATE USED..................................................................................51
12.1.8. REFERENCE.......................................................................................................................................51

13. PRODUCTION COPY.....................................................................52


13.1. OVERVIEW.............................................................................................................................................52

13.1.1. PURPOSE.............................................................................................................................................53
13.1.2. PREREQUISITE..................................................................................................................................53
13.1.3. RACI MATRIX....................................................................................................................................53
13.1.4. INPUT..................................................................................................................................................53
13.1.5. PROCESS.............................................................................................................................................53
13.1.6. OUTPUT..............................................................................................................................................53
13.1.7. FORMS, CHECKLIST OR TEMPLATES USED...............................................................................54
13.1.8. REFERENCE.......................................................................................................................................54

14. BACKUP AND RECOVERY...........................................................54


14.1. OVERVIEW.............................................................................................................................................54

15. BCP & DISASTER RECOVERY EXERCISE..................................57


16. DBOL DB2 - DAY TO DAY ACTIVITIES........................................57
17. PERFORMANCE TUNING.............................................................58
18. KC AUTOMATION..........................................................................58
19. TOOLS - OVERVIEW.....................................................................58
20. PERFORMANCE EXPERT.............................................................60
20.1. OVERVIEW.............................................................................................................................................60
20.1.1. PURPOSE.............................................................................................................................................62
20.1.2. PREREQUISITE..................................................................................................................................62
20.1.3. RACI MATRIX....................................................................................................................................62
20.1.4. INPUT..................................................................................................................................................62
20.1.5. PROCESS................................................................................................................................................63
20.1.6. OUTPUT..............................................................................................................................................63
20.1.7. FORMS, CHECKLIST OR TEMPLATE USED.................................................................................63
20.1.8. REFERENCE.......................................................................................................................................63

21. IBM DB2 ADMINISTRATION TOOL..............................................63


21.1. OVERVIEW.............................................................................................................................................63
21.1.1. PURPOSE.............................................................................................................................................66
21.1.2. PREREQUISITE..................................................................................................................................66
21.1.3. INPUT..................................................................................................................................................66
21.1.4. RACI MATRIX....................................................................................................................................66
21.1.5. PROCESS.............................................................................................................................................66
21.1.6. OUTPUT..............................................................................................................................................66
21.1.7. FORMS, CHECKLIST OR TEMPLATE USED.................................................................................67
21.1.8. REFERENCE.......................................................................................................................................67

22. LOGREC.........................................................................................67
23. DAE.................................................................................................67
24. NETVIEW........................................................................................68
24.1. KC AUTOMATION................................................................................................................68
24.2. NETVIEW AUTOMATION...................................................................................................69

25. CURRENT VERSION, MAINT LEVEL AND EOS DATE.............69

26. ACCESS REQD FOR DBOL DB2 .................................................69


27. GENERAL PROCEDURES............................................................71
PROCEDURE FOR ADDING A NEW USER...........................................................................................................71
PURPOSE............................................................................................................................................................71

STRATEGY OR APPROACH..................................................................................................................................71
RACI MATRIX..................................................................................................................................................71
INPUT ................................................................................................................................................................72
STEPS ................................................................................................................................................................72
OUTPUT.............................................................................................................................................................72
FORMS, CHECKLISTS OR TEMPLATES USED ......................................................................................................72
REFERENCES......................................................................................................................................................72
OVERALL CORE SUPPORT PROCEDURE.............................................................................................................73
PURPOSE............................................................................................................................................................73
STRATEGY OR APPROACH..................................................................................................................................73
RACI MATRIX..................................................................................................................................................75
INPUT ................................................................................................................................................................75
STEPS ................................................................................................................................................................75
FORMS, CHECKLISTS OR TEMPLATES USED ......................................................................................................76
REFERENCES......................................................................................................................................................76
ESCALATION PROCEDURE ................................................................................................................................76
PURPOSE............................................................................................................................................................76
STRATEGY OR APPROACH..................................................................................................................................76
RACI MATRIX..................................................................................................................................................77
INPUT ................................................................................................................................................................77
STEPS ................................................................................................................................................................77
OUTPUT.............................................................................................................................................................78
FORMS, CHECKLISTS OR TEMPLATES USED ......................................................................................................78
REFERENCES......................................................................................................................................................79
ROOT CAUSE ANALYSIS....................................................................................................................................79
PURPOSE............................................................................................................................................................79
STRATEGY OR APPROACH..................................................................................................................................79
RACI MATRIX..................................................................................................................................................79
INPUT ................................................................................................................................................................80
STEPS ................................................................................................................................................................80
OUTPUT.............................................................................................................................................................80
FORMS, CHECKLISTS OR TEMPLATES USED ......................................................................................................80
REFERENCES......................................................................................................................................................81
TICKET REVIEW PROCEDURE............................................................................................................................81
PURPOSE...........................................................................................................................................................81
STRATEGY OR APPROACH..................................................................................................................................81
RACI MATRIX..................................................................................................................................................82
INPUT ................................................................................................................................................................82
STEPS ................................................................................................................................................................82
OUTPUT.............................................................................................................................................................82
FORMS, CHECKLISTS OR TEMPLATES USED ......................................................................................................83
REFERENCES......................................................................................................................................................83

28. CURRENT VERSION, MAINT LEVEL AND EOS DATE..............83


29. APPROACH TO PROBLEM SOLVING...................................83

30. ACTIVE PROBLEM MANAGEMENT RECORDS...................84

1. INTRODUCTION
1.1. Purpose
This document describes in detail the operational procedures carried out by the Kimberly-Clark DB2 team.

1.2. Mission Statement


To provide quality service to the DB2 SAP supported application customers of Kimberly-Clark, to achieve 100% compliance with all policies and procedures, and to provide complete customer satisfaction in day-to-day activities.

1.3. Services Overview


The DB2 SAP System Administration team provides services supporting the KC infrastructure for DB2: installation, maintenance and support for applications using DB2. The DB2 team is also responsible for resolving all DB2 related issues. The roles and responsibilities of the DB2 team are:

1. DB2-SAP Environment Management
   - Installation, maintenance and upgrade of DB2
   - Creation of new DB2 regions
   - Handling production copies
   - Problem management and reporting to IBM
   - Maintenance of special and third-party tools used for DBOL
   - System level backup and disaster recovery
   - Vendor support
   - Vendor software installation and maintenance

2. Requests
   - Mailbox/Email (run requests)
   - HPSD service calls
   - Telephone

1.4. Goals and Objectives


The objective of this document is to provide information on DB2 Administration. This document will serve as a guide and handbook for Kimberly-Clark and the support team to manage DB2 Administration and Support.

1.5. Assumptions, Dependencies, Limitations and Constraints

The DBOL DB2 team is responsible for DB2 system programming activities such as installation of DB2 regions, maintenance of the DB2 regions, scheduling and carrying out the production copy process, and vendor products.
Object creation in the application environment and application queries are handled by the application DBA team.
The DBOL DB2 team works with the performance team for subsystem level tuning.
During installation and maintenance, the DBOL DB2 team applies patches and fixes that may require an IPL to activate the changes. The IPL is handled by the MVS and operations teams.
Replication of volumes from TCP to TWQ and restoring the disk volumes for the subsystems is done by the DASD team. All log management activity is taken care of by computer operations.
The DB2 Connect server in SAP is owned by SAP BASIS. The SAP BASIS team is responsible for any installation and maintenance.

1.6. Details of the Work Groups under Scope


Group | Group Mail ID | Description
DBOL | _DBOLS, Computer Services | All CICS, DB2 and MQ requests are sent to this group.
DBA | _DBA-Support, USA-Nee | This group is contacted for DB2 application support.
MFOS | _Support, Mainframe | This group is contacted for system z/OS support.
Storage | _Support, Consolid Stor Mgmt | This group is contacted for storage related issues.
Service Desk Operations team | _HelpDesk, USA-Nee | This group is contacted for Service Desk support.
Computer Security | _Computer Security, Global | This group is contacted for computer security related support.
HP Service Desk support team | _Admin, Service Desk | This group is contacted for HPSD support.
Magic support team | _Admin, Magic | This group is contacted for Magic support.
CA-7 and TWS job scheduling team | _Global, Job Scheduling | This group is contacted for CA-7 and TWS support.

1.7. Key Users/ Customers by name


The key users of the system are the DBA group and the application teams. The applications that use DB2 are:
   - SAP R/3
   - APO (only one application uses DB2)
   - EBP
   - CRM (except Mexico)
   - cFolders
   - HR
   - Knowledge Warehouse
   - Workplace
   - BW
   - WEB Application Server / SAP Exchange Infrastructure (only some applications use DB2)

For all these applications the BASIS team serves as the contact. The contacts of the BASIS teams are found in the last column of the attached spreadsheet.

2. SCOPE OF SERVICE
2.1. Summary of Agreement
Systems to be supported (include regions under scope, if any) | Type of support | Type of Service
Global DB2 users of Kimberly-Clark | L2 and L3 DB2 Administration | Enhancement, Support and Maintenance

2.2. Service Level Agreement Overview

Availability requirements: 24/7

SLAs expected: Service Level Agreements

Sl. No | Quality Factor | Quality Objective
1 | DB2 Subsystem availability | The DBOL team should ensure that the DB2 subsystems are available as per the SLA.

K-C currently practices the following timelines:
   - Incident Management: ownership of a Magic incident ticket within 4 hours (_DBOLS Computer Services group)
   - On call: 30 minutes to engage, 1 hour to resolve or escalate (refer to the picture below for details)
   - Two-day turnaround on group mailbox requests

The infrastructure setup related activities have dependencies on external teams such as z/OS, Security, Storage, DBA and the customer itself. Any delay from the external teams or the customer impacts the SLA directly.

Activity | External Dependency
Installation | RACF, Storage, z/OS and DBA
Maintenance | z/OS, DBA and Storage
Production Copy | Storage
Subsystem Creation | RACF, Storage, z/OS, DBA and Customer

Here is a document with the SLAs applicable for the DBOL DB2 team:

Quality Factor | Periodicity | Explanation | Quality Goal
MTTR | Monthly | Mean time to resolve incidents: (a) Sev 1 within 4 hours/12 hours; Sev 1 within 2 business days; (b) Sev 2 within 16 hours; Sev 2 within 2 business days; Class 3 not more than 7 consecutive days | >=99% / 100% / >=99% / 100%
Availability | Monthly | Availability of DB2 subsystems (Production)

2.2.1 Oncall Support

DBOL DB2 ON CALL SUPPORT - PICTORIAL REPRESENTATION

The on call problem procedure and SLA (summarized from the pictorial representation):

   - A call regarding a user problem goes to the DBOL primary on call person.
   - The primary on call person must dial into the system within 30 minutes and be able to be on site at the TCC within 2 hours.
   - The primary on call person resolves the problem within four hours and follows up with the customer/helpdesk after the issue is resolved.
   - If the problem experienced by the user is defined as "broke" (any problem that affects multiple users, or an outage that lasts more than an hour), follow the critical problem escalation procedures.
   - Critical problem procedure: escalate the problem to Janet (project lead) by calling, and send an e-mail describing the issue and the possible resolution. Escalate the problem to Mike Mauritz if the problem lasts more than 2 hours. Contact GNAAPO.
   - The DBOL secondary on call person responds within 60 minutes to a call regarding a user problem.

The below Whofixes commands can be used in order to find out the on call persons in each of the teams:

TEAM | COMMAND
DASD | Whofixes dasd
z/OS | Whofixes mfos
DB2 Legacy & SAP | Whofixes db2soft
DBA | Whofixes dba
Datacom | Whofixes datacom
DB2 Connect | Whofixes db2connect

NOTE: WHOFIXES information is accessible only from the Legacy LPARs, not from the SAP LPARs.

2.3. Scope of services


The DBOL DB2 team is responsible for Monitoring and maintaining all the Development, Test,
Quality and Production Subsystems supported by DB2 SAP.

DB2 System programming

DB2 Software Installation and Maintenance.

System Load libraries and Naming standards followed

Subsystem level Performance Tuning.

DB2 Vendor Software installation and maintenance

Business Continuation and Disaster Recovery.

Infrastructure Maintenance.

Request Handling

Day to Day Activities

Escalation Procedures

Contact Information

Installation and Maintenance of DB2 Connect in Windows environment

Rebooting the mainframe operating system to reflect changes in a DB2 installation. A server reboot can be done only by the system owner, based on the requirement.

Problems with the application programs. When the DB2 team has performed all the tests and confirmed that DB2 is up and running, the request can be sent to the application team for them to review and analyze their programs.

2.4. List of reports to be reported to the client


N/A

2.5. Primary Support Tasks and Responsibilities Matrix


Primary support tasks: System administration of DB2 regions (software installation, maintenance, upgrade, migration of subsystems, carrying out production copy refreshes, subsystem level tuning, incident & change management).

Responsibility: DBOL DB2 Team
Accountability: KC
Consultancy / Informed: Users and application owners

2.6. Team Organization


2.7. Escalation Matrix
Call Type: Severity 1

Points of escalation:
Level I: Offshore Team Lead / Onsite Team Lead, within 30 min from time of call logging
Level II: Offshore TM / Onsite TM, within 60 min from time of call logging
Level III: Program Manager, within 90 minutes from time of call logging

2.8. OS, Hardware, Software, Server and database details


Included here is a diagram of the SAP infrastructure that shows the application servers, mainframe, DB2 subsystems, Workplace access, and other components. An Excel spreadsheet of all UNIX and NT servers running SAP is also included.

SAP Release

Kernel: a collection of all executable programs which implement the technical basis of the R/3 System, together with the existing operating system services and database services.

Releases in use: 4.6C, 6.20, 6.40

Kernel 4.6C and Kernel 6.20
   - Connected to DB2 using ICLI
   - Uses RRSAF

Kernel 6.40
   - Connected to DB2 using DB2 Connect (fix pack)
   - Uses DRDA

DB2 Connect
   - Downward compatibility, i.e. the 6.40 release level of the kernel is compatible with the 6.20 Basis version

DBSL (Database Service Level Software)

DBSL provides the database independent code which resides on the application server. It provides the interface between the R/3 database independent code and the ICLI code.
   - Subset of the kernel
   - Converts SAP (ABAP) code to SQL statements

Release levels of BASIS: 4.6C, 6.20, 6.40

Connecting to DB2
There are two ways to connect to DB2 from SAP:
   - ICLI (Integrated Call Level Interface)
   - DB2 Connect
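When DB2 Connect is in use, the connection comes in through DB2's Distributed Data Facility (DDF). As an illustrative sketch (the command prefix -PA3A is only an assumed example of a subsystem prefix, not necessarily the one used at KC), the DDF status and the DRDA port that DB2 Connect clients use can be checked with:

   -PA3A DISPLAY DDF DETAIL

The output shows the DDF status, the location name, and the TCP/IP address and port on which the subsystem listens for DRDA requests.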

Database details:

Server Name | OS | Type of Server | Hardware
DB2 V8.1 | z/OS 1.6 | Database Server | System z9

All the DB2 systems under SAP are currently running on V8.1. There is a plan to migrate the DB2 systems to V9.0.
Vendor Products:

Vendor Tool | Usage | Version
IBM DB2 Performance Expert | Used for performance analysis, such as running Explain and collecting statistics from RUNSTATS. | V2.1.0
IBM DB2 Administration Tool | Used for administrative activities, such as starting and stopping a DB2 subsystem. | V5.1
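For reference, the administrative actions mentioned above (starting and stopping a subsystem, looking at threads) correspond to native DB2 commands of the following form. This is only an illustrative sketch; the subsystem prefix -PA3A is an assumed example and the exact prefixes used at KC may differ:

   -PA3A START DB2                      start the PA3A subsystem
   -PA3A STOP DB2 MODE(QUIESCE)         stop it after allowing in-flight work to complete
   -PA3A DISPLAY THREAD(*)              list the active threads on the subsystem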

3. CUSTOMER ORGANIZATION CHART & CONTACT LIST


3.1. Organization Chart
Mike Mauritz
3.2. Customer & Vendor Contact List
Application owners and application users are the customers of the DBOL DB2 team, but the DBOL DB2 team won't interact with the application team directly. The application team approaches the DBA team, and the DBA team communicates with DBOL.

The details about the vendor products and their expiry dates are attached below.

4. ORGANIZATION CHART & CONTACT LIST


4.1. Organization Chart

4.2. Customer Contact List

Team | Name | Role

5. SUPPORT PROCESS
The DBOL DB2 team provides continuous support to resolve customer problems related to DB2. Support is provided through user requests in which the user can specify his requirements, through phone calls, through escalations from the service desk, and by attending to Magic tickets.

5.1. Resource Availability


Resources will be available 24/7 after the transition. During the transition phase, resources will be available offshore Monday through Friday (6:00 PM to 2:00 AM IST), and the onshore resource will be available Monday through Friday (8 AM CST to 5 PM CST). During steady state, coverage will be provided from both Chennai and Neenah. The coverage hours will be published when we transition to the service phase.

5.2. Knowledge Transition plan


An onsite/offshore methodology is used for Knowledge Transition. The onsite team and the offshore team will receive the transition from the client. They will document the information received from the client and prepare a standard procedure document for the process followed in KC for the DB2 activities. A playback will be scheduled every week with the client for the transition received. The feedback and scorecard for the playback done by the onsite and offshore teams will be given by the client.

6. DBOL DB2 ROLES AND RESPONSIBILITIES


The DBOL DB2 SAP team provides infrastructure support for the development, system testing, quality, production and software testing environments in KC. The team maintains 113 subsystems running in 11 different LPARs; some of them are defined as data sharing. The primary responsibilities of the team include:

   - Installation and maintenance of DB2 and vendor software
   - Installation and maintenance of the following vendor products:
        IBM DB2 Performance Expert (PE)
        IBM DB2 Administration Tool
   - Plan, install, customize, integrate, upgrade and verify DB2 system software and related products/utilities/tools.
   - Apply maintenance PTFs (program temporary fixes) for DB2 system software and related products/utilities/tools.
   - Interact with core technical teams (MVS, Network and Storage) for DB2 subsystem creation and setup.
   - Perform problem determination and provide resolution for DB2 system software and related products/utilities/tools.
   - Monitor DB2 system software performance on a continuous basis (perform DB2 subsystem level health checks; see the sketch after this list).
   - Conduct tuning exercises at the DB2 system software level when there is a reported performance problem.
   - Provide technical assistance to the production DBA during planned disaster recovery testing exercises.
   - Perform capacity planning exercises to come up with a forecasted and optimized DB2 workload process setup by analyzing past and current process workloads.
   - Perform research & development and proof-of-concept work for testing the recent and new features of DB2 system software related products, and report the long-term cost-benefits to senior management on a continuous basis.
   - Assess the performance of DB2 system software related products and provide recommendations for performance improvement opportunities and associated long-term cost-benefits to senior management. Implement the performance improvement recommendations in the environment after obtaining approval from senior management.
   - Analyze and resolve DB2 system software product related issues and problems raised by production DB2 DBAs and mainframe data center staff. For example: associated network connectivity product problems, DB2 subsystem level parameter (DSNZPARM) value related problems, buffer pool problems, Internal Resource Lock Manager (IRLM) problems, etc.
   - Subsystem level tuning
   - Problem and change management
   - Oncall support
   - Co-ordination with vendors/IBM
   - Trace and dump analysis
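As an illustrative sketch of a subsystem level health check of the kind referred to in the list above (not a KC-specific runbook; the -PA3A prefix is only an assumed example), the following standard DB2 commands can be issued:

   -PA3A DISPLAY THREAD(*) TYPE(INDOUBT)                     check for indoubt threads
   -PA3A DISPLAY DATABASE(*) SPACENAM(*) RESTRICT LIMIT(*)   list objects in restricted states
   -PA3A DISPLAY BUFFERPOOL(ACTIVE) DETAIL                   review buffer pool activity
   -PA3A DISPLAY UTILITY(*)                                  show active or stopped utilities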

7. DB2 SAP ENVIRONMENT OVERVIEW


7.1 Environment Landscape
The DB2 SAP Infrastructure Support team manages infrastructure support on mainframe servers.
The DB2 SAP environment has three sysplexes:
1. SAP0
2. SAPQ
3. SAPX
The following table shows the relationship between sysplex, LPARs and coupler.

SYSPLEX | LPARS | Coupler
SAP0 | TCP0, TCP1, TCP2, TCP3, TCQ0, TCQ1, TCT0, TCT1, TCT2 | SAP0ICF2, SAP0ICF3
SAPQ | TWQ0, TWQ1 | SAPQWCF0, SAPQWCF1
SAPX | TCX0, TCX1 | SAPXICF2, SAPXICF3

SAP0 Sysplex:
SAP0 is the main sysplex. The tables below give detailed information about the LPARs in the sysplex and the DB2 regions in each LPAR.

LPAR | No. of DB2 Subsystems
Production LPARs (TCP0, TCP1, TCP2, TCP3) | 50
Quality LPARs (TCQ0, TCQ1) | 23
Development/Test LPARs (TCT0, TCT1, TCT2) | 40
Total Subsystems Supported | 113

LPAR | DB2 Subsystems
TCP0 | PABA, PAPA, PAWB, PEBB, PECB, PEPB, PEWB, PE3B, PGFA, PGWA, PHAA, PLPA, PM3A, PPPB, PPWA
TCP1 | PABB, PACB, PAPB, PA3A, PA4A, PA4C, PECA, PGFB, PGWB, PHAB, PLPB, PLWB, PL3B, PM3B, PPPA, PPWB
TCP2 | PACA, PA3B, PA4B, PA4D, PGCA, PGHB, PLWA, PL3A, PP3A
TCP3 | PAWA, PEBA, PEPA, PEWA, PE3A, PGCB, PGHA, PP3B
TCQ0 | CEWA, CGWA, IABA, IGKA, IP3A, QACA, QE3B, QGHA, QGWA, QHAA, QM3A, QP3A
TCQ1 | CGCA, IEBA, IEWA, IE3A, QEBA, QECA, QEWA, QE3A, QGCA, QLWA, QPWA
TCT0 | DACA, DE3B, DGCA, DGFA, DGHA, DHAA, DLPA, DL3A, DL4A, DM3A, DP3A, SAWA, SGBA, SGCA, SGWA, SGXA
TCT1 | DAPA, DA3B, DA4B, DA5A, DEBA, DECA, DEPA, DEWA, DE3A, DGWA, DPPA, DPWA, MG3A, QA3B, SE3A, SG3A, SLWA
TCT2 | DABA, DAWA, DA3A, DA4A, IA4A, QAWA, QA3A

Sandbox Sysplex: SAPX

This is where the z/OS and DB2 teams can do all their testing, such as installing a DB2 subsystem, upgrading to a newer version, or applying DB2 maintenance in SAPX when there is early code. It is more like a playground for the z/OS and DB2 teams.
The table below shows detailed information about the LPARs in the sysplex and the DB2 regions in each LPAR.

LPAR | DB2 Subsystems
TCX0 | BXBA, BXCA, BXWA, BX3A
TCX1 | BXBB, BXCB, BX3B, BXWB

SAPQ (Production Copy region):

This sysplex is located in a separate data center (TCC-West). Here a cloning product (Mainstar) is used to run the production copy jobs for every instance from the production sysplex.
The table below shows detailed information about the LPARs in the sysplex and the DB2 regions in each LPAR.

LPAR | DB2 Subsystems
TWQ0 | CGHA, CL3A, QABA, QA4A, QL3A, QP3A
TWQ1 | CA4A, CE3A, CGCA, CGWA, CP3A, DLWA, SGHA

Each data sharing group contains two members, A and B. A is the primary member and B acts as a failover member.
Among all these subsystems, the production subsystems, whose names start with the letter P, are the most critical ones.
Most of the subsystems in SAP are set up in data sharing mode. All the production regions are data shared, and some of the quality regions are data shared. The following table shows the different data sharing groups and the corresponding members. All the members in the SAPQ sysplex are set up as non data sharing.

Data Sharing Group | Primary Member | Secondary Member
PAB | PABA | PABB
PAP | PAPA | PAPB
PAW | PAWA | PAWB
PEB | PEBA | PEBB
PEC | PECA | PECB
PEW | PEWA | PEWB
PE3 | PE3A | PE3B
PGF | PGFA | PGFB
PGW | PGWA | PGWB
PHA | PHAA | PHAB
PLP | PLPA | PLPB
PL4 | PL4A | PL4B
PM3 | PM3A | PM3B
PEP | PEPA | PEPB
PPP | PPPA | PPPB
PPW | PPWA | PPWB
PAC | PACA | PACB
PA3 | PA3A | PA3B
PA4 | PA4A and PA4C | PA4B and PA4D
PLW | PLWA | PLWB
PL3 | PL3A | PL3B
PP3 | PP3A | PP3B
PGC | PGCA | PGCB
PGH | PGHA | PGHB
QE3 | QE3A | QE3B
DA3 | DA3A | DA3B
QA3 | QA3A | QA3B
BXB | BXBA | BXBB
BXC | BXCA | BXCB
BXW | BXWA | BXWB
BX3 | BX3A | BX3B

We have D (Development) and S (Application Sandbox) regions which are used by the application teams.
All the regions in the SAPX LPARs whose names start with the letter B are used by the BASIS team for their testing purposes.
All the names of the production regions start with P. These are the most critical regions and they are always up and running. They are set up to provide 24x7 availability. They are provided with a failover member so that whenever one member in the group is brought down for maintenance, the other member takes over.
All the regions in the SAPQ sysplex are set up as non data sharing. They are used by the quality assurance group for the purpose of quality assurance.
The DB2 address spaces and jobs can be viewed using the SDSF job list menu; SDSF is used to view job log output. ANYSTC is the owner for most of the DB2 address spaces.
SAP uses RACF for security purposes.
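The DB2 address spaces follow the standard ssidMSTR, ssidDBM1, ssidDIST naming (plus the associated IRLM address space). As an illustrative sketch (PA3A is used as the example subsystem), they can be located from the console or from SDSF as follows:

   D A,PA3A*            MVS console: display the active address spaces whose names start with PA3A
                        (typically PA3AMSTR, PA3ADBM1 and PA3ADIST when the region is up)

   PREFIX PA3A*         SDSF: filter the panels to the same names, then use DA
   DA                   to show them on the Display Active panel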

7.2 DB2 Region Naming Conventions


The subsystem names consist of 4 characters, and each character determines the type of the subsystem.
1st character: indicates whether the region is Production (P), Development (D), Quality (Q), Instructional (I), Basis (B), Sandbox (S) or Copy (C).
2nd character: determines the geographical area of the subsystem, such as North America (A), Asia Pacific (P), Europe (E), Latin America (L), Mexico (M) or Australia (G).
3rd character: type of application, such as Backup (B), SAP R/3 (3), Warehouse (W) or Human Resources (H).
4th character: stands for data sharing group member A or B.
Examples: PA3A (Production SAP R/3 subsystem running in North America); DP3A (Development SAP R/3 subsystem running in Asia Pacific).

The environment, location/business unit and application values used in the subsystem names are:

Environments (1st character): Development (D), Quality (Q), Production (P), Instructional (I), Basis Sandbox (B), Application Sandbox (S), Data Migration (M), Prod Fix / Copy (C), Light (Portal), RFID (R)

Locations / Business Units (2nd character): North America (A), Consumer (C), Europe (E), Global (G), Health Care or Accenture (H), KC (K), Latin America (L), Master (M), Asia / Pacific (P), External / Computer Services / Mexico (X)

Applications (3rd character): SAP R/3, APO, CRM, HR Portal, xApp Product Definition, SAP Portal for HR, System Landscape Directory, RFID True Demand Grid, Knowledge Warehouse, EBP / SRM, cFolders, HR, Live Cache (APO), RFID True Demand Master, Workplace / Enterprise Portal, xRPM / xPD / cProjects, Solution Manager, BW, WEB Application Server / SAP Exchange Infrastructure, RFID

Dataset Naming Conventions used by DBOL

All the important datasets used by the DBOL DB2 SAP team have the high level qualifier DB2V8, which stands for DB2 Version 8. Some of the important datasets are given below.

Dataset | Description
DB2V8.CNTL | Contains some important utility JCLs.
DB2V8.DB2.BP | Contains JCL specific to buffer pool operations.
DB2V8.INSTALL.CNTL | Contains model jobs specific to installation.
DB2V8.MAINT.CNTL | Contains maintenance specific model jobs.
DB2V8.MIG.CNTL | Contains model jobs used for migration.
DB2V8.NEWSID.CNTL | Contains model jobs used for creation of a new subsystem.
DB2V8.PROCLIB | Contains all the PROCs that are used.
DB2V8.RSUMMYY.CNTL | Contains the maintenance jobs that are to be run for that particular PUT level.
DB2V8.SOURCE | Contains some of the jobs run during the cloning process and some of the ZPARMs.
DB2V8.SMP.CNTL | Contains the jobs that are used in SMP/E processing.
DB2V8.SYS6.DSN81*.* | These datasets contain the SMP/E libraries.
DB2V8.TEXT | Contains the members which provide instructions for carrying out a production copy, new SID creation, and migration.

7.3 SMP/E Environment Overview

The DB2 SMP/E environment in SAP comprises six sets of DB2 libraries, named A to F.
For example, the A set of SMP/E libraries follows the naming convention shown below:
DB2V8.SYS6.DSN81A.SMPE.DB2.DLIB.CSI
DB2V8.SYS6.DSN81A.SMPE.DB2.DLIB.CSI.DATA
DB2V8.SYS6.DSN81A.SMPE.DB2.DLIB.CSI.INDEX
DB2V8.SYS6.DSN81A.SMPE.DB2.GLOBAL.CSI
DB2V8.SYS6.DSN81A.SMPE.DB2.GLOBAL.CSI.DATA
DB2V8.SYS6.DSN81A.SMPE.DB2.GLOBAL.CSI.INDEX
DB2V8.SYS6.DSN81A.SMPE.DB2.TARGET.CSI
DB2V8.SYS6.DSN81A.SMPE.DB2.TARGET.CSI.DATA
DB2V8.SYS6.DSN81A.SMPE.DB2.TARGET.CSI.INDEX
Maintenance and patches are applied to one of these sets. One set always has the latest changes, and thereby 5 previous versions from the current maintenance level are maintained.
The subsystems use one of these sets of libraries depending on their current maintenance level. The A-F sets of libraries are not hard coded in the system; instead they are referred to through aliases.
An alias is defined for each subsystem during maintenance, pointing to a particular set depending on the SMP/E maintenance level that the subsystem is at.
For example, if the subsystem PA4A is upgraded from PUT level 0702 to PUT level 0703, and the F set of libraries was used by the subsystem at PUT 0702, it will now use the A set of libraries at the new PUT level.
During maintenance, while upgrading PA4A, the alias is deleted and recreated to point to the A set of libraries.
The new changes become active as part of the IPL. Once the subsystem is recycled through the automation, the redefined alias points to the new set of libraries (the A set in the above case).
During installation all the SMP/E datasets are saved as prefix.smp/e-dataset-name, where prefix is the name of the subsystem. For example, while creating the subsystem PGHA all the SMP/E datasets are saved as PGHASYS.smp/e-dataset-name, where the SMP/E dataset name can be SDSNLOAD, SDSNEXIT, etc. After these changes have been made to the dataset names, we can find the SDSNLOAD, SDSNEXIT, SDSNLINK and SDXRRESL datasets in PGHASYS.SDSNLOAD, PGHASYS.SDSNEXIT, PGHASYS.SDSNLINK and PGHASYS.SDXRRESL respectively. The remaining SMP/E datasets can be found in DB2V8.SYS6.DSN81X.smp/e-dataset-name, where the letter X denotes the maintenance letter. We use 6 sets of libraries, A to F; every month one letter is used for maintenance, and the PUT level of each letter will be different.
The datasets PGHASYS.SDSNLOAD, PGHASYS.SDSNLINK, PGHASYS.SDSNEXIT and PGHASYS.SDXRRESL are just alias names, and they point to a corresponding SMP/E letter, such as PGHASYS.D.DB2V8.SDSNLINK, where D denotes the SMP/E letter being used for the current maintenance level.
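A minimal sketch of the alias flip described above, using IDCAMS (the dataset names follow the conventions in the table below; treat this as illustrative rather than the exact KC maintenance job):

   //FLIPALI  EXEC PGM=IDCAMS
   //SYSPRINT DD SYSOUT=*
   //SYSIN    DD *
     DELETE PGHASYS.SDSNLOAD ALIAS
     DEFINE ALIAS (NAME(PGHASYS.SDSNLOAD) -
            RELATE(PGHASYS.A.DB2V8.SDSNLOAD))
   /*

Here the alias PGHASYS.SDSNLOAD is deleted and redefined to point at the A set of libraries; the same would be done for SDSNEXIT, SDSNLINK and SDXRRESL when a subsystem moves to a new PUT level.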
This is a table showing the SMP/E datasets and their changed dataset names.

ORIGINAL SMP/E DATASET NAME | CHANGED SMP/E DATASET NAME
PGHASYS.SDSNEXIT | PGHASYS.D.DB2V8.SDSNEXIT
PGHASYS.SDSNLINK | PGHASYS.D.DB2V8.SDSNLINK
PGHASYS.SDSNLOAD | PGHASYS.D.DB2V8.SDSNLOAD
PGHASYS.SDSNDBRM | DB2V8.SYS6.DSN81D.SDSNDBRM
PGHASYS.SDXRRESL | PGHASYS.D.DB2V8.SDXRRESL
PGHASYS.SDSNMACS | DB2V8.SYS6.DSN81D.SDSNMACS
PGHASYS.SDSNSAMP | DB2V8.SYS6.DSN81D.SDSNSAMP
PGHASYS.DBRMLIB.DATA | PGHASYS.DB2V8.DBRMLIB.DATA
PGHASYS.RUNLIB.LOAD | PGHASYS.DB2V8.RUNLIB.LOAD
PGHASYS.SRCLIB.DATA | PGHASYS.DB2V8.SRCLIB.DATA

The history of changes made to these sets of SMP/E letters can be found in

7.4 DATA SHARING

7.4.1 DATA SHARING - OVERVIEW

A data sharing group is a collection of one or more DB2 subsystems that access shared DB2 data. The data sharing function enables applications that run on more than one DB2 subsystem to read from and write to the same set of data concurrently. DB2 subsystems that share data must belong to a DB2 data sharing group, which runs on a Parallel Sysplex. Each DB2 subsystem that belongs to a particular data sharing group is a member of that group. All members of a data sharing group use the same shared DB2 catalog and directory. The maximum number of members in a data sharing group is 32.
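A quick way to see the members of a data sharing group and their status is the DISPLAY GROUP command. This is an illustrative example only (the -PA3A prefix is an assumed example of a member's command prefix):

   -PA3A DISPLAY GROUP DETAIL

The output lists the group name, each member with its subsystem name, status (ACTIVE or QUIESCED), DB2 level and the system it runs on, plus the IRLM group information.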

7.4.2 ADVANTAGES OF DATA SHARING

DB2 data sharing improves price/performance, improves the availability of DB2, extends the processing capacity of your system, and provides more flexible ways to configure your environment. There is no need to change the SQL in your applications to use data sharing, although some tuning might be needed for optimal performance.
DB2 data sharing gives you a database solution that is powerful enough to handle complex business requirements. More DB2 users demand access to DB2 data every hour of the day, every day of the year. DB2 data sharing helps you meet this demand by improving availability during both planned and unplanned outages.
When multiple members of a data sharing group have opened the same tablespace, index space or partition, and at least one of them has opened it for writing, the data is said to be of inter-DB2 read/write interest to the members. To control access to data of inter-DB2 interest, whenever the data is changed DB2 caches it in a storage area called a group buffer pool. A mapping exists between group buffer pools and the buffer pools of the group members. Each DB2 has a buffer pool named BP0; for data sharing, a group buffer pool (GBP0) is defined in the coupling facility that maps to buffer pool BP0. GBP0 is used for caching the DB2 catalog and directory tablespaces and indexes, and the tablespaces, indexes and partitions that use buffer pool BP0.
For more details on data sharing, refer to: DB2 UDB for OS/390 and z/OS V7 - Data Sharing: Planning and Administration (IBM Redbooks PDF).
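Group buffer pool activity (for example for GBP0 described above) can be inspected with the DISPLAY GROUPBUFFERPOOL command; an illustrative example, assuming the -PA3A prefix:

   -PA3A DISPLAY GROUPBUFFERPOOL(GBP0) GDETAIL

The report shows the group buffer pool attributes and statistics such as directory and data entry usage and cross-invalidations, which help in sizing the structure in the coupling facility.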

7.4.3 BENEFITS OF DATA SHARING

   - Improves the availability of DB2, extends the processing capacity of the system, provides more flexible ways to configure the environment, and increases transaction rates. It also improves availability during planned and unplanned outages.
   - Improves scalability: a new DB2 can be added on another central processor complex and access the same data through DB2. All DB2s in a data sharing group have concurrent read and write access, and all DB2s use a single directory and catalog.
   - Applications can run on more than one DB2 subsystem to achieve transaction rates that are higher than possible on a single subsystem.
   - More capacity to process complex queries. Sysplex query parallelism enables DB2 to use all the processing power of the data sharing group to process a single query. For complex data analysis/decision support, Sysplex query parallelism is a scalable solution.

For more details on data sharing, refer to: DB2 UDB for OS/390 and z/OS V7 - Data Sharing: Planning and Administration (IBM Redbooks PDF).

7.4.4 DATA SHARING - KC ENVIRONMENT

On SAP, all production DB2 regions have been set up as data shared. All of them have an "A" member which is always active and a standby "B" member which is used only for failover of work processes if the "A" member fails. The "B" member is also activated when the "A" side gets shut down for the monthly IPL, which occurs during the weekend. Because the SAP environment has multiple production LPARs, the "A" and "B" sides are never down at the same time.
A couple of SAP development and quality regions are set up as data shared, but most are non data shared. Most of the DB2 regions use the DB2 Connect interface. Some of the DB2 subsystems running in older SAP systems use ICLI (Integrated Call Level Interface), which is an interface through which application servers connect to the mainframe. Regions that are actively using DB2 Connect do not use ICLI.
There are R/3, HR, BW and PGC applications running on SAP. All DB2 regions running R/3 applications use the old ICLI interface, and their names start with PA3A, PE3A, etc.
All the HR applications start with PGH, PEH, etc. and use the DB2 Connect interface.
Similarly, all the BW (Business Warehouse) applications also use the DB2 Connect interface.

7.5 KC AUTOMATION - OVERVIEW

A few automation tasks have been set up in SAP for the maintenance that is carried out on Saturdays. Bringing the subsystems down is taken care of by Netview and is done automatically. After the subsystems come down, a task called DB2FLIPV8 starts automatically, which flips the SMP/E letter to the current maintenance level for that week. This task gets its information from the flags set in the member $SID6W8.
After the IPL, when the subsystems start coming up, there are some after jobs that need to be run; this is also taken care of by automation, using the information from the flags set in the member $SID4WS. If there are any jobs that need to be run after a subsystem comes down, this can be done through the flags in $SID3WS.
All of this automation is part of Netview, an IBM Tivoli tool developed to make the operators' work easier; the operator no longer has to remember the commands to bring a subsystem down or up.

7.6 VENDOR AND THIRD PARTY TOOLS

The only vendor products used in SAP are the IBM products DB2 Performance Expert and DB2 Admin Tool. These products are used widely by the DBA team to collect real-time DB2 subsystem statistics and monitor threads. DB2 Admin Tool provides in-depth catalog navigation by displaying and interpreting objects in the DB2 catalog and executing dynamic SQL statements. It is integrated with other DB2 utilities to simplify the creation of DB2 utility jobs, and it provides additional functionality with product-specific line commands for table editing, SQL cost analysis, and path check analysis.
The DBOL team is responsible for the installation and maintenance of the following vendor products:

IBM DB2 Performance Expert

IBM DB2 Admin Tool

KC Management first requests the vendor product tapes from the vendors when a new release comes out. When we receive the tapes from the vendors, we also receive the manuals, CDs and the Program Directory. The installation is carried out through the SMP/E process, customizing the values on the various panels using the values from the previous installation release.

Vendor product maintenance is carried out whenever a new maintenance level is released. For Performance Expert we have 6 sets of libraries, and each set is always maintained at the same level as the subsystem that is using it. Each subsystem has its own set of SMP/E libraries for Performance Expert, such as SFPELOAD, SFPELINK and SFPEDBRM, and these libraries are maintained at the same level as the subsystem.

7.7 SOFTWARE INSTALL & MAINT OVERVIEW

Installation of DB2 is done by the DBOL Team. When a new installation needs to be carried out, tapes are received from IBM and we install the product using the SMP/E libraries. The installation/migration of the DB2 regions is done on the Sandbox region first. Once the Sandbox regions are migrated, we proceed with the migration to the rest of the DB2 regions in the hierarchy of the SAP environment listed below:

1. SANDBOX
2. DEVELOPMENT/TEST
3. QUALITY
4. PRODUCTION
PICTORIAL REPRESENTATION OF SOFTWARE INSTALLATION PROCESS

Sand Box -> Development/Test -> Quality -> Production

Maintenance on the DB2 SAP subsystems is done on a monthly basis. The maintenance is carried out on the first 3 weekends of the month, per the schedule for the corresponding LPAR. We roll the maintenance onto the Sandbox region first before rolling it onto the test regions. The schedule for the maintenance is as follows:

1st Saturday of the month: Development (D) and Testing (T) regions
2nd Saturday of the month: Quality (Q) and Prod copy (C) regions
3rd Saturday of the month: Production (P) regions

Maintenance for vendor products is also carried out during the weekend maintenance process. We maintain 6 sets of libraries (SMP/E letters), from A to F, to carry out maintenance. Each month we'll be using a particular SMP/E letter for carrying out the maintenance.

7.8 TRACE - OVERVIEW

When using DB2 UDB you might on occasion encounter an error message that directs you to
"get a trace and call IBM Support", "[turn] on trace [and] examine the trace record", or to
"contact your technical [support] representative with the following information: problem
description, SQLCODE, SQLCA contents (if possible), and trace dataset (if possible)". Or,
when you report a problem to IBM Support, you might be asked to perform a trace to capture
detailed information about your environment.
DB2 traces can be especially useful when analyzing recurring and reproducible problems,
and greatly facilitate the support representative's job of problem determination.
DB2 trace is essentially a log of control flow information (functions and associated parameter
values) that is captured while the trace facility is on. Traces are very useful to DB2 technical
support representatives who are trying to diagnose a problem that may be difficult to solve
with only the information that is returned in error messages.
IBM will typically request a GTF trace or a selective dump using the DSN1SDMP utility. The first option is achieved by enabling the trace and selecting GTF as the destination for the trace records. The second option is achieved by forcing dumps when selected DB2 trace events occur and writing the DB2 trace records to a user-defined dataset.
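As a hedged illustration of the first option, the DB2 trace can be started with GTF as the destination and stopped once the problem has been recreated. GTF itself must already be active per site procedures; the command prefix -PA4A and the trace class are examples only, since IBM Support normally states exactly which trace type, classes and IFCIDs it wants:

-PA4A START TRACE(PERFM) CLASS(3) DEST(GTF)
-PA4A DISPLAY TRACE(*)
-PA4A STOP TRACE(PERFM) DEST(GTF)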

7.9 SVC DUMP OVERVIEW

"SVC dump is like a burglar alarm.... It lets you know something's wrong and helps
you pinpoint where it started."
An SVC dump provides a representation of the virtual storage for the system at the
time the dump is taken. Most commonly, a system component requests an SVC dump when
an unexpected system error occurs. After the dump has completed, processing can usually
continue.

Whenever there is an abend and the program that caused the abend requests an SVC dump, an SVC dump is taken. An authorized program can request an SVC dump with the SDUMP or SDUMPX macro. The operator can also request an SVC dump by using the SLIP or DUMP command. Both are used to obtain diagnostic data to aid in problem resolution. The System Automation process recognizes that a dump has occurred and sends a mail to the group mailbox.
It is also possible to take a dump manually if the dump is not created automatically. It is done with a command such as the following in the log:
SLIP SET ID=pk01, j=jbname, action=svcd, c=0c4,end
We can also see which SLIPs are currently active by issuing the D SLIP command in the log:
RESPONSE=TCQ0
IEE735I 14.09.46 SLIP DISPLAY 911
ID   STATE    ID   STATE    ID   STATE    ID   STATE    ID   STATE
0001 ENABLED X013 ENABLED X028 ENABLED X052 ENABLED X058 ENABLED
X066 ENABLED X070 ENABLED S071 ENABLED SS71 ENABLED X073 ENABLED
X0DX ENABLED X0E7 ENABLED X0F3 ENABLED X13E ENABLED X1C5 ENABLED
X222 ENABLED X322 ENABLED X33E ENABLED S3C4 ENABLED X422 ENABLED
X42X ENABLED X47B ENABLED X622 ENABLED X71A ENABLED X804 ENABLED
X806 ENABLED X80A ENABLED X81A ENABLED X91A ENABLED X9FB ENABLED
XB37 ENABLED XC1A ENABLED XD1A ENABLED XD37 ENABLED XE37 ENABLED
XEC6 ENABLED XXC6 ENABLED
This shows the abends for which SLIP is enabled.
To disable an ID we can use SLIP MOD,DISABLE,ID=xxxx.
To delete an ID we can use SLIP DEL,ID=xxxx.
When the system goes for an IPL, the SLIP information is deleted. Information about dumps can be found in the B41380.NOTES(DUMP) member.
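For completeness, a hedged sketch of requesting an SVC dump with the operator DUMP command instead of a SLIP trap; the job names and SDATA options shown are examples only, as IBM Support normally specifies exactly which address spaces and data areas it needs:

DUMP COMM=(PA4A DBM1 STORAGE DUMP)
*nn IEE094D SPECIFY OPERAND(S) FOR DUMP COMMAND
R nn,JOBNAME=(PA4AMSTR,PA4ADBM1),SDATA=(CSA,RGN,LPA,SQA,SUM),END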

In the KC environment, whenever an SVC dump is produced by any DB2 subsystem, an automated email is sent to the DBOL group mailbox. The DBOL team should analyze it and create a PMR record.
Based on the kind of problem that has occurred, we need to open a Problem Management Report (PMR) with IBM with the appropriate severity.
The severity can be classified as below, with some example cases:

Severity 1: DB2 region is down -> highest level. We also request an e-mail response from IBM at every point in time on the status and progress of the issue.

Severity 2: DB2 region went down and came up again, i.e. the region is unstable.

Severity 3: Moderate-level problems.

Severity 4: Minimal-impact problems and questions.

7.10 IBM PROBLEM REPORTING - AN OVERVIEW


A problem can be anything that affects the functionality of a region, or any of the threads or applications connected to the region. Usually a problem results in abends on the region, or the applications connected to the region abend. If these abends are within the scope of the DBOL team, i.e. if they can be resolved manually without any coordination with the vendor, that is done. In case any problem needs to be reported to the vendor (IBM) for coordination, it is reported through the IBM Service Link. To use the IBM Service Link, the user should first be registered with IBM Link, and then the details of the login id have to be passed to Global Computer Security (_Computer Security, Global), who will set up the account with IBM Link. The problem can be reported at the following IBM link:
http://www-3.ibm.com/software/support/
Based on the kind of problem that has occurred, we need to open a Problem Management Report (PMR) with IBM with the appropriate severity.
The severity can be classified as below, with some example cases:
Severity 1: DB2 region is down -> highest level. We also request an e-mail response from IBM at every point in time on the status and progress of the issue.
Severity 2: DB2 region went down and came up again, i.e. the region is unstable.
Severity 3: Moderate-level problems.
Severity 4: Minimal-impact problems and questions.

The detailed usage of the IBM Service Link can be found in

7.11 DB2 CONNECT - AN OVERVIEW


DB2 Connect lets applications that run on distributed platforms work with data that is stored in DB2 for z/OS transparently, as if a local database server managed it. One can also use a wide range of off-the-shelf or custom-developed database applications with DB2 Connect and its associated tools.
DB2 Connect provides connectivity to mainframe and midrange databases from Windows, Linux, and UNIX platforms.
DB2 Connect can be considered middleware: a server that takes the embedded SQL queries in the application and passes them to DB2.
There are a number of DB2 Connect editions available: Personal Edition, Enterprise Edition, Application Server Edition, and Unlimited Edition. DB2 Connect is an add-on product to DB2 that can be purchased separately.
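As a hedged sketch of how a DB2 Connect server is pointed at a DB2 for z/OS subsystem, the usual CLP catalog sequence looks like the following; the node name, host name, port, alias and user id are illustrative only, and the real values come from the DDF definitions of the subsystem:

db2 catalog tcpip node PA4ANODE remote pa4a.host.example.com server 5021
db2 catalog dcs database PA4A as PA4A
db2 catalog database PA4A as SAPPA4A at node PA4ANODE authentication dcs
db2 connect to SAPPA4A user sapr3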

7.11.1 DB2 CONNECT Enterprise Edition


DB2 Connect Enterprise Edition is a connectivity server that concentrates and manages connections from multiple desktop clients and web applications to DB2 database servers running on host or iSeries systems. IBM's DB2 Universal Database (UDB) for OS/390 and z/OS and DB2 for VSE & VM databases continue to be the systems of choice for managing the most critical data of the world's largest organizations. While these host and iSeries databases manage the data, there is a great demand to integrate this data with applications running on Windows and UNIX workstations.

7.11.2 DB2 CONNECT Personal Edition


DB2 Connect Personal Edition provides access from a single workstation to DB2 databases residing on servers such as OS/390, z/OS, OS/400, VM and VSE, as well as to DB2 Universal Database servers on UNIX and Windows operating systems. DB2 Connect Personal Edition provides the same rich set of APIs as DB2 Connect Enterprise Edition. This product is currently available for Linux and Windows operating systems.

7.11.3 DB2 CONNECT Unlimited Edition


DB2 Connect Unlimited Edition is a unique package offering that allows complete flexibility of
DB2 Connect deployment and simplifies product selection and licensing. This product

contains both DB2 Connect Personal Edition and DB2 Connect Enterprise Edition with license
terms and conditions that allow the unlimited deployment of any DB2 Connect product.
License charges are based on the size of the S/390 or zSeries server that DB2 Connect users
will be working with.
This package offering is only available for OS/390 and z/OS systems, and licensing is only
valid for DB2 for OS/390 and z/OS data sources.

7.12 Environment Landscape - Overview


7.12.1 Datasets Information
1. SMP/E JCLs: The JCLs related to SMP/E are RECEIVE, APPLY, APPLYCHECK and ACCEPT. The PDS DB2V8.RSU.CNTL is the model; it contains a member $BILDJCL which, when run with all the changes, creates a PDS DB2V8.RSUyymma.CNTL, where yymm is the PUT level and a is the SMP/E letter. The SMP/E jobs RECEIVEJB, APPLYCHK and ACCEPT are in this PDS (a hedged SMP/E apply-check sketch follows this list).

2. JCLs for a subsystem: The PDS DB2V8.NEWSID.CNTL is the model which contains all the members for creating a new subsystem. The member BILDNON is used for creating a non data sharing region; the members BILDSHRA and BILDSHRB are used for creating data sharing regions. These members create a PDS DB2V8.subsysname.CNTL, where subsysname is the name of the subsystem that is to be created.

3. Most of the utility JCLs are found in DB2.CNTL.

4. All the after jobs that run after a subsystem comes up during maintenance reside in DB2.AFTER.STARTUP.

5. The dataset DB2V8.TEXT contains the checklist members for most of the tasks, such as installation, maintenance and prodcopy: DB2V8NON, DB2V8SHA, DB2V8SHB and P$$4C$$ respectively.

6. SYS7.PARMLIB and SYS7.PROCLIB are the common datasets that are shared across LPARs.
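The hedged sketch referred to in item 1: an SMP/E APPLY CHECK of an RSU maintenance level against the target zone named later in this document. The job name, REGION and SOURCEID mask are illustrative; the real jobs live in DB2V8.RSUyymma.CNTL:

//APPLYCHK EXEC PGM=GIMSMP,REGION=0M
//SMPCSI   DD DISP=SHR,DSN=DB2V8.SYS6.DSN81D.SMPE.DB2.GLOBAL.CSI
//SMPCNTL  DD *
  SET BOUNDARY(TARGET).
  APPLY CHECK
        GROUPEXTEND
        SOURCEID(RSU*)
        BYPASS(HOLDSYSTEM).
/*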

7.12.2 Naming Convention and Standards


DB2 subsystems have Directory, Catalog, Log and other datasets. Some of the datasets are shared across subsystems because the regions are data sharing.
The datasets for each DB2 subsystem in DB2 SAP follow the naming conventions below:

The HLQ for the catalog and directory for SAP is the first three letters of the subsystem followed by the numeral 1 and SAP. For example, PA4A and PA4B belong to the same data sharing group PA41, so the HLQ for their catalog and directory is PA41SAP.
For each Data Sharing group, the Dataset convention for Directory and Catalog follow the
below conventions:
PA41SAP.DSNDBC.A000XAAA.#DESCRZ3.J0001.A001
PA41SAP.DSNDBC.A000XAAA.ABAPTREE.J0001.A001
PA41SAP.DSNDBC.A000XAAA.ABAP1WT4.J0001.A001
PA41SAP.DSNDBC.A000XAAA.ABDOCMOD.I0001.A001
PA41SAP.DSNDBC.A000XAAA.ABDO1EMC.J0001.A001
PA41SAP.DSNDBC.A000XAAA.ADOWNERR.I0001.A001
PA41SAP.DSNDBC.A000XAAA.ADOW1LVX.J0001.A001
PA41SAP.DSNDBC.A000XAAA.ADRCOMCS.J0001.A001
PA41SAP.DSNDBC.A000XAAA.ADRC1SDX.I0001.A001
The active log data sets for PA4A follow the naming convention shown below
PA4ALOG.LOGCOPY1.DS21
PA4ALOG.LOGCOPY1.DS21.DATA
PA4ALOG.LOGCOPY1.DS22
PA4ALOG.LOGCOPY1.DS22.DATA
PA4ALOG.LOGCOPY1.DS23
PA4ALOG.LOGCOPY1.DS23.DATA
PA4ALOG.LOGCOPY1.DS24
PA4ALOG.LOGCOPY1.DS24.DATA
PA4ALOG.LOGCOPY1.DS25
PA4ALOG.LOGCOPY1.DS25.DATA
PA4ALOG.LOGCOPY1.DS26
PA4ALOG.LOGCOPY1.DS26.DATA
PA4ALOG.LOGCOPY2.DS21
PA4ALOG.LOGCOPY2.DS21.DATA
The archive log datasets for PA4A follow the naming convention shown below
PA4AARC.ARCHLOG1.D07124.T0144243.A0016217
PA4AARC.ARCHLOG1.D07124.T0144243.B0016217
PA4AARC.ARCHLOG1.D07124.T0330108.A0016218
PA4AARC.ARCHLOG1.D07124.T0330108.B0016218
PA4AARC.ARCHLOG1.D07124.T0444349.A0016219
PA4AARC.ARCHLOG1.D07124.T0444349.B0016219
PA4AARC.ARCHLOG1.D07124.T0616588.A0016220

PA4AARC.ARCHLOG1.D07124.T0616588.B0016220
PA4AARC.ARCHLOG1.D07124.T0722208.A0016221
PA4AARC.ARCHLOG1.D07124.T0722208.B0016221
PA4AARC.ARCHLOG1.D07124.T0857371.A0016222
PA4AARC.ARCHLOG1.D07124.T0857371.B0016222
PA4AARC.ARCHLOG1.D07124.T1048289.A0016223
The load libraries for PA4A have the following naming conventions
PA4ASYS.D.DB2PM.SDGOLOAD
PA4ASYS.D.DB2V7.SDSNEXIT
PA4ASYS.D.DB2V7.SDSNLINK
PA4ASYS.D.DB2V7.SDSNLOAD
PA4ASYS.D.DB2V7.SDXRRESL
PA4ASYS.D.DB2V7.SFPEDBRM
PA4ASYS.D.DB2V7.SFPELINK
PA4ASYS.D.DB2V7.SFPELOAD
PA4ASYS.D.DB2V8.SDSNEXIT
PA4ASYS.D.DB2V8.SDSNLINK
PA4ASYS.D.DB2V8.SDSNLOAD
PA4ASYS.D.DB2V8.SDXRRESL
PA4ASYS.D.DB2V8.SFPELINK
PA4ASYS.D.DB2V8.SFPELOAD

7.12.3 Maintenance Jobs


All the jobs for maintenance are found in DB2V8.RSUyymma.CNTL, where yymm is the PUT level and a is the SMP/E letter. For example, DB2V8.RSU0703E.CNTL is the dataset for PUT level 0703.
It contains jobs such as:
BACKUP81 - Backs up the previous SMP/E libraries of the E set onto tapes.
ALLOC81 - Allocates the E set of libraries.
CLONE81 - Copies the D set to the E set.
SMPCLN81 - Copies the DLIB, TLIB and CSI zones.
HFSJOB81 - Changes the HFS path to the new set.

7.12.4 ZPARM

The ZPARM settings for all the DB2 subsystems in SAP can be found in the dataset DB2V8.SOURCE.
For example, DB2V8.SOURCE(PA4A) contains the values for the PA4A subsystem and DB2V8.SOURCE(PA4B) contains the values for the PA4B subsystem.
All the changes made to ZPARM are also tracked in the dataset DB2V8.SOURCE(subsysname), where subsysname is the DB2 subsystem name. Each DB2 member has its own assembly JCL.
Each member records the date on which the parameter settings were changed, along with the reason and the requestor name for the changes, followed by the source code for the ZPARMs.
The link below shows a sample of how the ZPARMs look for the PA4A subsystem, taken from the PA4A member, which contains the history of changes made along with the current ZPARM settings.

DB2.CNTL contains all the DB2 batch jobs and utility jobs such as DSNJU004, DSNJU003, etc. The link below shows a sample execution of the DSNJU004 utility, which prints the BSDS contents for BX3A; DB2.CNTL(BSDSLIST) is the member.
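A hedged sketch of what such a DSNJU004 (print log map) step looks like; the BSDS dataset name below is an assumption for illustration, and the actual job is kept in DB2.CNTL(BSDSLIST):

//BSDSLIST EXEC PGM=DSNJU004
//STEPLIB  DD DISP=SHR,DSN=DB2V8.TCPALIAS.SDSNLOAD
//SYSUT1   DD DISP=SHR,DSN=BX3ABSDS.BSDS01          BSDS NAME IS AN EXAMPLE
//SYSPRINT DD SYSOUT=*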

7.12.5 Assembly JCL


The assembly JCL contains the job for assembling the ZPARMs. The JCL contains the code for assembling the DSN6 macros and creating DSNZPARM. It then link-edits DSNZPARM and puts the load module in the SDSNEXIT library of the corresponding subsystem, where subsysname stands for the DB2 region, e.g. PA4A (for the PA4A region).
The assembly JCL to assemble the ZPARMs is present in DB2V8.SOURCE(PA4AALZH).
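A condensed, hedged sketch of the usual two-step pattern inside such an assembly JCL; the real member is DB2V8.SOURCE(PA4AALZH), the macro library name is an assumption, and the other dataset names simply follow the conventions shown above:

//ASMZPARM EXEC PGM=ASMA90,PARM='OBJECT,NODECK'
//SYSLIB   DD DISP=SHR,DSN=DB2V8.SDSNMACS            MACRO LIBRARY NAME IS AN EXAMPLE
//         DD DISP=SHR,DSN=SYS1.MACLIB
//SYSIN    DD DISP=SHR,DSN=DB2V8.SOURCE(PA4A)        ZPARM SOURCE (DSN6 MACROS)
//SYSLIN   DD DSN=&&OBJ,DISP=(,PASS),UNIT=SYSDA,SPACE=(TRK,(5,5))
//SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(1,1))
//SYSPRINT DD SYSOUT=*
//*
//LNKZPARM EXEC PGM=IEWL,PARM='LIST,XREF,RENT'
//SYSLIN   DD DSN=&&OBJ,DISP=(OLD,DELETE)
//SYSLMOD  DD DISP=SHR,DSN=PA4ASYS.D.DB2V8.SDSNEXIT(DSNZPARM)
//SYSUT1   DD UNIT=SYSDA,SPACE=(CYL,(1,1))
//SYSPRINT DD SYSOUT=*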

7.12.6 Common datasets - SYSPLEX


The SYS7.PARMLIB dataset contains the PROGKA and IEFSSNKD members, used for APF-authorizing the DB2 libraries and for defining the DB2 subsystems and IRLM to z/OS.

The SYS7.PROCLIB dataset contains the DB2 address space procedures such as MSTR, DIST, DBM1, IRLM, etc. This dataset is shared across LPARs.

7.12.7 JCL Library

In SAP, the JCLs for the common utilities are found in the dataset DB2.SHARE.CNTL.

There is a member called ##OLDTHD that lists the jobs which have been running for more than 30 hours. Part of the JCL is shown below:
//*******************************************************************
//****** COPIED FROM BATCH JOB AK05OT32
//*******************************************************************
//*===================================================================*
//* IDENTIFY DB2 THREADS IN "PE3A" OLDER THAN 30 HOURS.
//*===================================================================*
//CLEAR    EXEC PGM=IEFBR14
//DD1      DD DSN=B41380.AK05OT32.PE3A.OLD.THREAD.RPT,
//            DISP=(MOD,DELETE,DELETE),UNIT=SYSDA,SPACE=(TRK,1)
//*===================================================================*
//THD2OLD  EXEC PGM=IKJEFT1B,DYNAMNBR=20,COND=(0,LT)
//STEPLIB  DD DISP=SHR,DSN=DB2V8.TCPALIAS.SDSNLOAD
//SYSPRINT DD SYSOUT=*
//IKJ.SYSTSPRT DD SYSOUT=*
//IKJ.SYSPROC  DD DSN=TSOUSERS.CMDPROC,DISP=SHR
//IKJ.THD      DD SYSOUT=*
//*IKJ.THD     DD DSN=B41380.AK05OT32.PE3A.OLD.THREAD.RPT,
//*               DISP=(,CATLG,DELETE),UNIT=SYSDA,
//*               SPACE=(TRK,(10,10),RLSE)
//SYSTSIN  DD *
 THD2OLD PE3A 30

7.13 Applications Supported


The DB2 subsystems on SAP support the R/3, HR, BW and PGC applications. All the production regions are critical and need 24*7 availability. Some of the most critical applications run in the PA4 and PE3 regions.
In the case of emergency requests for which we need to shut down DB2, we need to contact the BASIS team first. The application teams normally contact the BASIS team first in case of any outages, and the requests are then forwarded to the DBOL team.

7.13.1 DB2 Applications

SAP R/3
APO (only one application uses DB2)
EBP
CRM (except Mexico)
C Folders
HR
Knowledge Warehouse
Workplace
BW
Web Application Server / SAP Exchange Infrastructure (only some applications use DB2)

7.14 SAP Exits Overview


There are authorization exit routines running in SAP: DSN3@SGN and DSN3@ATH, which are built during the DB2 migration process.
During the installation/migration process there is a job, DSNTIJEX, which builds the sample authorization exit routines DSN3@SGN and DSN3@ATH, and the user version of the access control authorization exit routine, DSNX@XAC, from the source code in prefix.SDSNSAMP. Job DSNTIJEX then assembles and link-edits the sample version of DSNACICX, which can be used to modify the CICS parameters that the DSNACICS caller specifies.

7.15 Alert Monitoring


Alerts are generated in the group mailboxes for abends that are generated by a DB2 region. For example, when a DB2 region generates an abend, say 04E, an SVC dump is captured immediately in the system and a mail is generated in the group mailbox notifying the SVC dump on the particular DB2 region and the type of abend that was captured.
E.g., we will get a mail as below:
Subject: K00035P0 FOLLOW UP NOTICE IEA794I SVC DUMP PEWBDBM1 PEWB
Message:
P0 K00035P0 07107 063 14:13:15.662416 IEA794I SVC DUMP PEWBDBM1 PEWB,ABND=04E-

The person who is on call has the primary responsibility of monitoring the group mailboxes and needs to investigate and follow up on the issue.
Similarly, we also get notifications in case a DB2 region abends, and a mail is also sent to the Operations team to contact the on-call person of the DBOL team through DB2SOFT, which is the Whofixes entry for the DBOL team.
We need to resolve problems proactively by monitoring the DB2 logs of all the subsystems and the group mailboxes on a regular basis.

7.16 Upgrades- Past and Future


Past Upgrades
DB2 was migrated from V7.1 to V8.1 in 2006.

Future Upgrades
The DB2 Admin Tool currently running at V5.1 goes out of support on 30th September 2007. The DB2 Admin Tool will be migrated from V5.1 to V7.1 shortly.

7.17 Change Ticket Process


Whenever a new Configuration Item, such as a new subsystem, is to be created and added as a CI to the Configuration Management Database (CMDB), we'll create a work order for it and send it to the change management team. Whenever there is maintenance scheduled for the weekend, we'll raise a change request and send it to the CAB for approval, and then we'll implement it. In case of any emergency changes we follow the emergency work order template.
At this time there are no Key Performance Indicators that relate to change management. However, it is important that we follow the ITS 05.27: Computer Services Change Management/Change Control Procedure (HP Service Desk).
Change Owner - The owner of a Change Request is the technician who is responsible for the overall quality of the change.
Work Order Owner - The technician who completes the actual tasks required to build, test, and implement a change.
Change Assessor - The person responsible for reviewing and ensuring the change information recorded is complete and accurate.
Change Advisory Board (CAB) Member - A representative from the team accountable for providing the Change Manager with accurate assessments of risk and scheduling impact of changes.
Weekly Meeting
The CAB representative for the DBMS team must attend the weekly CAB and change meetings or get someone to attend in his or her place. The CAB coordinator must understand all changes entered by the team and understand their impact. The CAB coordinator must also communicate any potential impact a change entered by another team may have on products and services we provide.
The activities that require change tickets to be raised are given below:

Activity                          Change Ticket required (Y/N)   Change Approver      Backup
Installation of DB2 Subsystems    Y                              Charlesworth Janet   Mike/Saketh
Migration of DB2 Subsystems       Y                              Charlesworth Janet   Mike/Saketh
Applying Maintenance on DB2       Y                              Charlesworth Janet   Mike/Saketh
Vendor Tool Upgrade               Y                              Charlesworth Janet   Mike/Saketh
Vendor Tool Maintenance           Y                              Charlesworth Janet   Mike/Saketh
Production Copy process           N

8. HOW TO SETUP DATA SHARING


To set up DB2 subsystems in data sharing mode, the following activities need to be carried out:

We need to allocate a data sharing group and create the coupling facility structures for the DB2 regions that are going to participate in data sharing. This is carried out by the z/OS team.

We need to create an IRLM XCF group and allocate the LPAR on which the new subsystem has to be created. This is also carried out by the z/OS team.

Define a Group Bufferpool (GBP0) in the coupling facility that maps to Bufferpool BP0. GBP0 is used for caching the DB2 Catalog and Directory tablespaces, indexes and partitions that use Bufferpool BP0. This also needs to be taken care of by the z/OS team.

Once the DBOL team starts the process of creating the subsystem, we need to specify the DB2 configuration parameter DATA SHARING = YES on the main installation panel to enable the DB2 members to participate in data sharing (as sketched below). We also need to add the data sharing group definition to the DB2 address spaces in SYS7.PROCLIB, which is generated by the installation jobs.
The subsystem creation document contains the process and procedure to configure and create a data sharing environment.
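As a hedged illustration, the data sharing related values entered on the installation CLIST panels for the first member of the PA41 group would look roughly like this; the field labels are paraphrased from the DB2 V8 installation panels and may differ slightly from the actual panels:

DATA SHARING   ===> YES
GROUP NAME     ===> PA41
MEMBER NAME    ===> PA4A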

9. LOG MANAGEMENT
DB2 records all data changes and significant events in a log as they occur. In the case of
failure, DB2 uses this data to recover. DB2 writes each log record to a disk data set called the
active log. When the active log is full, DB2 copies the contents of the active log to a disk or
magnetic tape dataset called the archive log. We can choose either single logging or dual
logging.
A single active log contains between 2 and 31 active log data sets. With dual logging, the
active log has the capacity for 4 to 62 active log data sets because two identical copies of the
log records are kept. Each active log data set is a single-volume, single-extent VSAM LDS.
Active and archive logs are maintained for SAP. There are 6 sets of active log datasets. A dual archiving process is followed, with both archive log 1 and archive log 2 going to the archive volumes (archive pool) on DASD. They stay on the archive volumes on DASD for some time; then one set of archive logs goes to the Centera disk and the other set of archive logs is written to tape.
We can find the active log datasets as subsysnameLOG.LOGCOPYx.DSnn, where subsysname is the name of the subsystem and x and nn are numbers. For example, for PA4A an active log dataset is PA4ALOG.LOGCOPY2.DS12.
We can find the archive log datasets as subsysnameARC.ARCHLOGx.Dyyddd.Tnnnnnnn.Annnnnnn, where subsysname is the name of the subsystem, x is a number, yyddd is the year and Julian day, the T qualifier is a timestamp, and the A qualifier is a sequence number. The archive log frequency differs between subsystems; for the busiest subsystems, such as PA4A, it is 24 hours.
The messages shown below are an example of what happens when offloading to tape is done.


00.31.45 STC10170  DSNJ001I  -PAPA DSNJW307 CURRENT COPY 2 ACTIVE LOG
                   DATA SET IS DSNAME=PAPALOG.LOGCOPY2.DS12,
                   STARTRBA=00EC34C7B000,ENDRBA=00ECB4CAEFFF
00.31.45 STC10170  IGD01008I DSN ALLOC IN ARCHLOG POOL
00.31.47 STC10170  IGD01008I DSN ALLOC IN ARCHLOG POOL
00.31.47 STC10170  IGD01008I DSN ALLOC IN ARCHLOG POOL
00.31.48 STC10170  IGD01008I DSN ALLOC IN ARCHLOG POOL
00.36.41 STC10170  DSNJ003I  -PAPA DSNJOFF3 FULL ARCHIVE LOG VOLUME
                   DSNAME=PAPAARC.ARCHLOG1.D07150.T0031452.A0000616,
                   STARTRBA=00EBB4C47000, ENDRBA=00EC34C7AFFF, STARTLRSN=C0A2AB4
                   ENDLRSN=C0AB5641509A, UNIT=TAPE, COPY1VOL=AR1069, VOLSPAN=00,
                   CATLG=YES
00.36.41 STC10170  DSNJ003I  -PAPA DSNJOFF3 FULL ARCHIVE LOG VOLUME
                   DSNAME=PAPAARC.ARCHLOG2.D07150.T0031452.A0000616,
                   STARTRBA=00EBB4C47000, ENDRBA=00EC34C7AFFF, STARTLRSN=C0A2AB4
                   ENDLRSN=C0AB5641509A, UNIT=TAPE, COPY2VOL=AR2309, VOLSPAN=00,
                   CATLG=YES
00.36.41 STC10170  DSNJ139I  -PAPA LOG OFFLOAD TASK ENDED

DB2 keeps track of the tapes used in the BSDS. There is a separate Tape Management System (TMS) which keeps track of and manages the tapes (HSM, archive logs) that are being used. Archiving is done on tapes which go to the offsite location (South-West), which houses the offsite tape storage; the retention period for the archive logs is 35 days, after which the tapes are scratched for further usage.
Log records are retrieved through the following sequence of events:
1. A log record is requested using its RBA.
2. DB2 searches for the log record in the locations listed below, in the order given:
a. The log buffers.
b. The active logs. The bootstrap data set registers which log RBAs apply to
each active or archive log data set. If the record is in an active log, DB2
dynamically acquires a buffer, reads one or more CIs, and returns one
record for each request.
c. The archive logs. DB2 determines which archive volume contains the CIs,
dynamically allocates the archive volume, acquires a buffer, and reads the
CIs.
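Two DB2 operator commands that are commonly used when working with the logs, shown here as a hedged sketch using the -PA4A command prefix from the examples above:

-PA4A DISPLAY LOG
-PA4A ARCHIVE LOG

DISPLAY LOG reports the current active log data sets, the checkpoint frequency and the status of any offload in progress; ARCHIVE LOG truncates the current active log and forces an offload to the archive log.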

10. SOFTWARE INSTALL


10.1. PROCESS & PROCEDURE

Whenever a new installation or upgrade of software happens, there is an associated project charter for it, which needs to be prepared by the tower lead; the project charter is then approved by KC Management. After that, the installation, which is treated as a project from then on, starts. Installation of DB2 is done by the DBOL team.

When a new installation needs to be carried out, the tapes are received from IBM and we install the product using the SMP/E libraries. We allocate all the distribution and target libraries and apply all the required modules onto them. Once all the product libraries have been installed, we need to run the installation CLIST to create the subsystem.
The installation/migration of the DB2 regions is done on the Sandbox region first. Once the Sandbox regions are migrated, we proceed to migrate the rest of the DB2 regions in the below-mentioned hierarchy of the SAP environment:

1. SANDBOX
2. DEVELOPMENT/TEST

3. QUALITY
4. PRODUCTION
We have to inform the application teams that use the above DB2 regions (except for Sandbox) when we plan to migrate them, coordinating through the DBAs and Management, and get clearance for the activity. We need to do an impact analysis on the migration to V8 and inform the DBAs and application teams of the results. We need to check the RETAIN database at IBM to find out what level DB2 has to be at before migrating to the next version. We also need to check with the vendors on what versions the vendor products have to be at before migrating to the next version.
10.1.1. Purpose
To install a new version of DB2 on the SAP LPARs.

10.1.2. PREREQUISITES

1. Management needs to order the installation tapes from IBM.
2. Be ready with the preparatory work and the impact analysis for the installation to be carried out.
3. Receive the installation tapes from IBM.
4. Co-ordinate with the DASD and Mainframe teams during the installation.
5. Create the change ticket using HPSD for the installation to be carried out.
6. Wait for the Change Advisory Board to approve the change ticket.

10.1.3. RACI MATRIX

Primary activities: Installation of DB2
Responsibility: DBOL Team
Accountability: DBOL-Lead
Consultancy: N/A
Informed: Application owner, DB2

10.1.4. INPUT
1. Installation Tapes.
2. CAB approved change ticket.

10.1.5. PROCESS
Detailed steps of Installation procedure can be found in the following link.

Detailed process

SMP/E Environment

Change Ticket

Installation Check list

10.1.6. OUTPUT

1. Tailored installation jobs.


2. Updated SMP/E library.
3. Test the Installation using verification jobs

10.1.7. FORMS, CHECKLISTS OR TEMPLATES USED


1. Change Request number and work order number (Using KC default Template).
2. Installation Check list.

10.1.8. REFERENCE
1. DB2 version 8 Installation & Migration Guide from IBM red books
2. The dataset X.TC.INSTALL.DB2.V8R1M0.SAPR3.TEXT available on legacy LPARS

11. SOFTWARE MAINTENANCE


11.1. PROCESS & PROCEDURE
Maintenance activities on SAP are carried out on a monthly basis on each of the LPARs. The maintenance is carried out on the first 3 weekends on the DB2 subsystems, as per the schedule for the corresponding LPAR.
The schedule is as below:
1st weekend: TCT0-TCT2
2nd weekend: TCQ0, TCQ1 and TWQ0, TWQ1
3rd weekend: TCP0-TCP3
There are 6 sets of SMP/E libraries and load libraries, from A to F, available for each DB2 subsystem. The aliases for these SMP/E and load libraries point to a particular set depending on the SMP/E maintenance level we are at.
For example, if PGHA is running on the C set and we are doing maintenance on it, then after the maintenance is done and the maintenance level changes, the aliases for the DB2 load libraries are made to point to another set of libraries, say D, when the DB2 subsystem is brought down via automation. Once the IPL is done on the LPAR and the DB2 subsystem comes up, it points to the new set of libraries. We need to do some post verification to make sure that all the DB2 regions are running properly. TPX comes down on Saturday night at 8:00, except on the 2nd weekend because the DBAs are doing their maintenance during that time.
The Mainframe team does not do any maintenance from the middle of December to the middle of January, since year-end processing is going on; the DBOL team does not do anything for year-end processing. We apply the maintenance for both the January and February PUT levels during February.
Co-ordinate with the z/OS team and the storage team during maintenance; the person who is on call is the primary contact during maintenance. The details of that person can be found using WHOFIXES, an in-house tool in KC, or using the calendar of the group mailbox; the name of the person who is primary on call for that week is displayed at the top of the calendar.

The SMP/E dataset names that are available in Legacy, along with the zone names, are:

Dataset Name                                 Zone Name
DB2V8.SYS6.DSN81x.SMPE.DB2.DLIB.CSI          DLIB
DB2V8.SYS6.DSN81x.SMPE.DB2.GLOBAL.CSI        Global
DB2V8.SYS6.DSN81x.SMPE.DB2.TARGET.CSI        Target

where x is the set name of the maintenance level.

E.g., the D set of the SMP/E libraries is:

Dataset Name                                 Zone Name
DB2V8.SYS6.DSN81D.SMPE.DB2.DLIB.CSI          DLIB
DB2V8.SYS6.DSN81D.SMPE.DB2.GLOBAL.CSI        Global
DB2V8.SYS6.DSN81D.SMPE.DB2.TARGET.CSI        Target

If the Sandbox regions have been migrated to V9 and the rest of the regions are still at V8, we freeze the maintenance for V8. At that point we apply the maintenance for V9 only on the Sandbox regions. Once the rest of the regions are migrated to V9, we can start rolling the V9 maintenance onto all the regions. For V8 we apply only the emergency PTFs.
During maintenance, DB2 commands are not used to bring down the DB2 subsystems; automation takes care of it.

11.1.1. Purpose
Maintenance is carried out in order to keep the regions at a recent PUT level, and to apply any special PTFs specified by IBM, SAP or other users of the system.

11.1.2. PREREQUISITES

Download the PUT Level from IBM

Be ready with the preparatory work done and Apply Check analysis for the
Maintenance to be carried out.

Send the Apply Check report analysis which are relevant to the DBA team.

Co-ordinate with Mainframe team during Maintenance.

Prepare the post installation jobs for pre-compiling and assembling the security exits, and the jobs that update the BSDS.

Create the change ticket using HPSD for the Maintenance to be carried out.

Wait for the Change Advisory Board to approve the Change Ticket.

11.1.3. RACI MATRIX

Primary activities: Applying Maintenance to the regions
Responsibility: DBOL Team
Accountability: DBOL-Lead
Consultancy: KC
Informed: To the user, DB2

11.1.4. INPUT

1. PTFs which are to be applied for that week's maintenance.


2. CAB approved change ticket.
3. Maintenance Check list.

11.1.5. PROCESS
Detailed steps of maintenance procedure can be found in the following link.
Detailed process

Check list

SMP/E Environment Change Ticket

11.1.6. OUTPUT
1. The changed aliases for the SMP/E datasets of the regions.
2. Updated SMP/E letter with the current maintenance level.

3. The after jobs that must be run for some PTFs or SYSMODs to take full effect.

11.1.7. FORMS, CHECKLIST OR TEMPLATES USED


1. Change Request number and work order number (Using KC default Template).
2. Maintenance checklist

11.1.8. REFERENCE
The data set DB2V8.RSU0702D.CNTL($install).

12. SUBSYSTEM CREATION/MIGRATION


12.1. OVERVIEW
The DBOL team receives requests from the application team to create new DB2 regions, either through the group mailbox or through an HP Service Desk change request; one member of the DBOL team then takes ownership of the request and creates the new region. While creating a new region, the DBOL team has to co-ordinate with the z/OS team and the storage team.
The z/OS team is responsible for creating the CFRM structures, the WLM classification rules and the IRLM XCF group name, for allocating the LPAR on which the new subsystem has to be created, and for reserving the ports which are used by the DBOL team for DDF. A joint effort is made by the z/OS and DBOL teams to create the WLM definitions in the application environments.
The storage team is responsible for defining and initializing the disk space and for defining the catalogs and storage groups.
The DBOL team's responsibility is to create the data sharing group and add the members to the data sharing group.
The port number assignment for applications connecting to the mainframe via DB2 Connect is done by the z/OS and Network teams. The DBOL team needs to coordinate with them and get the port numbers for the DB2 subsystems. While assigning port numbers to the DB2 subsystems in a data sharing group, the TCP port number remains the same for all the DB2 subsystems in the data sharing group; only the RESPORT changes. We inform the z/OS team to add the network LU name in the VTAMLST.
We decide on the configuration parameter values for a new DB2 subsystem based on the values specified in the most widely used subsystem of the same type, whether test, quality or production. We can check with the DBAs to understand the application requirements before entering the values for ZPARM. During a migration, we do not change the configuration values.

12.1.1. Purpose
The creation of a new subsystem is done based on the requirements of the application team. Each subsystem is dedicated to a given set of applications; this way we can localise the effect of any outage, i.e. if there is an outage on a particular subsystem, only the applications that use that subsystem are unavailable, and the rest of the applications are not impacted.

12.1.2. PREREQUISITE
1. Receive request from the application team and approval from the management.
2. Create the change ticket using HPSD for the maintenance to be carried out.
3. Wait for the Change Advisory Board to approve the Change Ticket.
4. Be ready with the preparatory work done.
5. Co-ordinate with DASD, Mainframe team during Installation and check if Mainframe team
has created the Coupling facility structures in case we are building a Data sharing subsystem.
6. Receive the storage packs from DASD, LPAR information and Port values from Mainframe
teams.

12.1.3. RACI MATRIX

Primary activities: Creation/Migration of subsystem
Responsibility: DBOL Team
Accountability: DBOL-Lead
Consultancy: N/A
Informed: Application owner, DB2

12.1.4. INPUT
1. Storage packs from DASD, LPAR information & port values from Mainframe teams.
2. CAB approved change ticket.
3. New subsystem creation check list.
4. Subsystem migration check list.

12.1.5. PROCESS
Detailed steps of the subsystem creation and migration procedures can be found in the following links.

Detailed process

New subsystem creation
Subsystem migration

Detailed Check Lists for subsystem Creation


Non-Data Shared

Data Shared A

Data Shared B

Precise Check list for Subsystem creation


Non-Data Shared

Data Shared A

Data shared B

12.1.6. OUTPUT
1. New subsystem ready to use/ Existing subsystem migrated to a new version.
2. New entry in the Automation Netview.
3. Run the verification jobs and test the new Subsystem created with Application folks

12.1.7. FORM, CHECKLIST, OR TEMPLATE USED


1. Change Request number and work order number (Using KC default Template).
2.

subsystem creation check list.

3.

Subsystem migration check list.

12.1.8. REFERENCE
1. DB2 version 8 Installation & Migration Guide from IBM red books.
2. The dataset DB2v8.TEXT(DB2UPGAB) available on SAP LPARS
3. The dataset DB2v8.TEXT(DB2UPG78) available on SAP LPARS
4. The dataset DB2v8.TEXT(DB2v8SHA) available on SAP LPARS
5. The dataset DB2v8.TEXT(DB2V8SHB) available on SAP LPARS
6. The dataset X.TC.INSTALL.DB2.V8R1M0.SAPR3.TEXT available on legacy LPARS
Before retiring a subsystem, the strategy followed in KC is to take a backup of all the DB2 system datasets, decommission the DB2 subsystem members from SYS7.PROCLIB and remove the subsystem definition from automation. We need to shut down the subsystem and wait for 6 months to see whether there are any user requests to bring the subsystem back up. If there are none, we can go ahead and delete the datasets after getting approval from Management.

13. PRODUCTION COPY


13.1. OVERVIEW
Production copy is the process of cloning a production region to a quality region. Before going into the details, let's have a recap of the software development life cycle (SDLC). The application development team codes new applications and, after a point in time, comes up with its deliverables; the software testing team tests them and gives them to the Quality Assurance group for a final quality check so that they can be released into production. For the Quality Assurance team to ensure that the new applications won't cause any disastrous effects on the production data, and to ensure that they run fine, they need an exact replica of the production system. They request the DBOL team, through the production copy refresh website, to schedule a production copy. The DBOL team then approves or cancels that request based on availability, and adds it to the group mailbox calendar.
A sample scenario where the DBOL team rejects a prod-copy refresh:
When requests are made to clone a production region into two quality regions simultaneously, for example PGHA to QGHA and SGHA at the same time, these kinds of requests are rejected.
The production copy process is a joint activity of the Storage, DBOL and BASIS teams. Production copy starts with the storage team. They set up the DASD and shut down the target region for which the production copy needs to be done, since the disk on which the target region currently lies is to be refreshed. Cloning is a kind of production copy. Once the storage team completes its tasks, it sends a notification to the DBOL team and also updates the website with the status; then it is the turn of the DBOL team to do its part. Once the DBOL team completes its activities, it notifies the BASIS team, which completes the process.
The production copy process starts with the checklist present in the DB2V8.TEXT(P$$4C$$) member.
According to the checklist there are two kinds of refreshes. The first is a refresh of a target region for the first time; in that case we have to follow all the steps given in the checklist, including those marked as "First". The second kind is a refresh of a target system that has been refreshed before; in that case we do not have to follow the steps marked as "First" in the checklist and can skip them.
Advantages of using the IBM Cloning Tool for production copy:
1. With the cloning tool we are able to do a live clone, i.e. we don't have to bring down the subsystem for this purpose, and hence we avoid an outage.
2. If we were not using this tool, we would have to unload the existing production data and load it into the prod copy subsystems, which is a time-consuming and hectic process.

13.1.1. PURPOSE
To replicate the production data to quality so that quality assurance team can proceed with
their quality checks.

13.1.2. PREREQUISITE
1. Receive requests from Basis team through production copy refresh website.
2. Inform the storage team & co-ordinate with them.
3. Schedule the same in the Group mail box calendar.
4. Be ready with the preparatory work done before the DASD snap happens.
5. Keep the prod copy checklist ready and proceed accordingly.

13.1.3. RACI MATRIX


Primary activities: Carry out Production copy process
Responsibility: DBOL Team
Accountability: DBOL-Lead
Consultancy: KC
Informed: Application owner, DB2

13.1.4. INPUT
1. Basis team's request.
2. Prod copy checklist.

13.1.5. PROCESS
Detailed steps of the prod copy process can be found in the following links.

Detailed process

First time Copy
Checklist

13.1.6. OUTPUT
1. The cloned prod copy region
2. Updated Prod copy Refreshes website

13.1.7. FORMS, CHECKLIST OR TEMPLATES USED


1. Prod copy check list.

13.1.8. REFERENCE
1. IBM Cloning Tool (Mainstar) Manual.
2. The dataset DB2V8.TEXT (P$$4C$$).

14. BACKUP AND RECOVERY


14.1. OVERVIEW
The storage team does the backup activity on a regular basis. They use disk mirroring techniques for backup; the disaster recovery site is 10 miles east of the primary site. TCC EAST is the primary site and TCC WEST is the disaster recovery site.
The storage team uses tools such as SRDF (Synchronous Replication Data Facility) to replicate the data between the primary site and the disaster recovery site. TimeFinder (an EMC tool) is another tool the storage team uses for backup. They use EMC consistent split technology to take dumps. The DBA team is responsible for recovery activities; they take image copies on a regular basis. The DBA team is responsible for the recovery of the Catalog and Directory.
A full image copy is taken on a weekly basis and an incremental image copy is taken on a daily basis. They use the TWS scheduler for this.
DBOL's role in backup and recovery is to coordinate with the DBA team or the storage team if they need any help. All the recovery processes in DB2 SAP are handled by the DBA group.
Image copies of all the system catalogs are scheduled using the CA7 scheduler. Responsibility for the system catalog image copies lies with DBOL, in the sense that if the job fails, a mail is sent to the DBOL team group mailbox with the details of the job failure.
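A hedged sketch of what one of these scheduled image copy jobs looks like, using the standard DB2 COPY utility through the DSNUPROC procedure; the output dataset name, UID value and object name below are illustrative only:

//IMGCOPY  EXEC DSNUPROC,SYSTEM=PA4A,UID='WKLYCOPY'
//SYSCOPY  DD DSN=PA4AIC.A000XAAA.ABAPTREE.FULL,
//            DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,SPACE=(CYL,(50,50),RLSE)
//SYSIN    DD *
  COPY TABLESPACE A000XAAA.ABAPTREE
       FULL YES
       SHRLEVEL REFERENCE
/*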
There are 3 scenarios in which recoveries might occur:
1. In case of a total system failure (LPAR failure), the DASD team is responsible end-to-end for recovering the DB2 datasets and subsystems and putting them back in place.
2. In case we need to recover the system catalog and directory, the DBOL team is responsible for recovering them from the log files.
3. In case of application data recovery, it is handled by the DBA team.

DASD "classes" and naming conventions


There are 4 classes of DASD configured for each SAP DB2 subsystem.- 0
(zero), 1, 2 and 3 class.
Naming convention: <sid><class>nn (e.g. BX3001 or PA3201)

Class 0 - A set of volumes for the DB2 catalog and directory, DB2
loadlibs,and target libraries and the ICF catalog for this
data.Volumes are named <Sid>0*
Class 1 - A set of volumes for sort.
Volumes are named <Sid>1*
Class 2 - A set of volumes for the BSDS, active logs and the ICF
catalog for this data.
Volumes are named <Sid>2*
Class 3 - A set of volumes dedicated to the SAP data and the ICF
catalog for this data.
Volumes are named <sid>3*, <sid>4*, <sid>5*, etc.

All archive logs and image copies for all Sids will be assigned to a shared set of
DASD volumes.

DB2 Disaster recovery backup

The storage team does volume backups on a regular basis using SRDF (Synchronous Replication Data Facility). Replication of the primary site to the disaster recovery site is done using SRDF.

Production Backup Strategies

There are two kinds of on-line backups taken in the production environment.
1. Weekly volume based backups.
These backups are taken by TimeFinder, and dedicated BCVs (Backup Control Volumes) are assigned. This backup is used to support recoveries of an entire SAP system to the current or a prior point in time.
2. Weekly object based backups (image copies).
This backup backs up all tablespaces in an SAP system, including the DB2 catalog and directory. It is used to support recovery of individual tablespaces to the current or a prior point in time. This backup is taken because it is very difficult to recover individual tablespaces from volume based backups.
Daily incremental image copies are taken for the production systems.
The production TimeFinder backups are kept for 21 days. All image copy backups and archive logs are kept for 35 days.

Non-Production Backup Strategies

There are two kinds of on-line backups taken in the non-production environment.
1. Weekly volume based backups.
These backups are taken by TimeFinder and supported by the Storage Management team. This backup backs up the three categories of data listed above. There are multiple rotating sets of BCVs (Backup Control Volumes) assigned. This backup is used to support recoveries of an entire SAP system to the current or a prior point in time.
2. Weekly object based backups (image copies).
This backup backs up all tablespaces in an SAP system, including the DB2 catalog and directory. It is used to support recovery of individual tablespaces to the current or a prior point in time. This backup is taken because it is very difficult to recover individual tablespaces from volume based backups.
All backups and archive logs are kept for 22 days.

15. BCP & DISASTER RECOVERY EXERCISE


TCC East is the primary site where all the production activity takes place, and TCC West is the disaster recovery site. All the data (volumes) on the primary site is replicated to the disaster recovery site by the storage team. They use hardware technology developed by EMC, such as SRDF (Synchronous Data Replication Facility), consistent split technology and TimeFinder tools, for this purpose. With this, if any disaster happens on the primary site, we can bring up the disaster recovery site as if it had undergone a normal DB2 crash. Automation has been defined with specific definitions for disaster recovery, and it automatically brings up the subsystems in case of a disaster.
During a disaster only the production subsystems are brought up. The disaster recovery test is conducted once a year, separately for SAP and Legacy. During this process, if the application team comes up with any verification jobs to be run, the DBOL team has to run them. Everyone in the DBOL team has to be available 24 hours during the disaster recovery exercise in order to handle any issues that come from the Operations team. The DASD team and the MVS team are accountable before the subsystem is brought up; once the subsystem is up, the DBA team takes over and it is their responsibility to recover the application tablespaces.
In case of a disaster recovery, a callout is announced by the Operations team and management gets notified about it. There are predefined escalation procedures under this disaster recovery process.
NOTE: Yet to get the document regarding the escalation procedures and the checklist used during the disaster recovery process.

16. DBOL DB2 - DAY TO DAY ACTIVITIES


Checking and responding to the mails in the DBOL group mailbox (DBOLS, Computer Services). The on-call person has the primary responsibility to check the mailbox.
After receiving requests for refreshes, the person who is primary on call that week is responsible for editing the group's calendar.
System Automation, which is controlled by the z/OS team, recognizes any DB2 dumps that occur and sends a mail to the group mailbox of the DBOL team.
Using the Magic and HPSD tools to resolve requests.
Opening PMRs with IBM in case of any abends or problems.
There are no housekeeping jobs run as a part of the DB2 team's day-to-day activities.

17. PERFORMANCE TUNING


Performance tuning requests are generally handled by the Performance team. The DBOL team generally does not do much in the area of performance, except for requests that come in the form of subsystem level tuning, which the DBOL team handles by changing the DSNZPARM values. Suggestions for changing the ZPARM settings come from the performance/DBA team, following which we need to coordinate with them and make the changes.

18. KC AUTOMATION
The DB2 startup and shutdown process for SAP is done using automation. Whenever DB2 is brought down for maintenance or an IPL, it is done using automation. Automation will not allow DB2 to come down even if we manually try to bring down DB2 using the commands. Similarly, the ICLIs (Integrated Call Level Interface) running for some DB2 SAP applications are brought down and brought up using automated means.
On the IPL weekends, however, we do not have to coordinate with any other team. The Operations team has a special automation through which the DB2 subsystems are brought down during the IPL.
Whenever a new subsystem is created, the Mainframe team is notified ahead of time; they are the present owners of the Netview automation and will take care of defining the new subsystem to Netview automation.
Overview

Detailed process

19. TOOLS - OVERVIEW

TOOL - DESCRIPTION

Endeavor (Change control) - Used to manage the process of developing and maintaining software for mainframe applications.

IDEAL-DATACOM/DB Utilities & Functions - Integrated Development Environment.

Output Processing (IOF) - Interactive Output Facility is a productivity aid for TSO and DBOL environments and provides tools to manage JES2 resources.

DB2 Admin Tool - Used to do all the administrative activities, like starting a DB2 subsystem, stopping a DB2 subsystem, etc.

DB2 Performance Expert - Used to do performance analysis, like running Explain, collecting statistics from RUNSTATS, etc.

DAE (Dump Analysis & Elimination) - Used to analyze the SVC dumps which are taken automatically and report the problem to IBM. Whenever DB2 dumps occur, a mail is sent to the DBOL team and the dump is captured in a dataset which can be viewed through the DAE tool. We can also use the DAE tool to prevent repetitive dumps from being suppressed.

LogRec - A tool to keep track of any abends that are occurring.

Netview - The Netview tool brings down and brings up the DB2 subsystems through automation. Whenever DB2 is brought down for maintenance, it is done through Netview automation. Automation will not allow DB2 to come down even if we manually try to bring down DB2 using the commands. Similarly, the ICLIs (Integrated Call Level Interface) running for some DB2 SAP applications are brought down and brought up through the Netview tool.

SMP/E - System Modification Program/Extended.

TR - Computer Tape Handling Request/Authorization form to be filled in prior to sending the tapes to the tape library rack.

INFOPAC - A TPX session which stores job output (class 8) and reports. DBOL output data sets are stored in Infopac.

IBM Cloning Tool for z/OS - An IBM utility used for cloning production regions to test regions.

Magic - The Magic tool is used by the DB2 SAP team to record changes and keep track of problems. It is the responsibility of the DB2 SAP team to check our Magic queue regularly for all open problems and act upon them to close them. Sometimes the Magic ticket is assigned to the DB2 team by the Service Desk.

Whofixes - An in-house tool developed by KC which is available on the Legacy systems. It is used to get information about the primary contact on a given day for on-call support and maintenance activities for SAP.

20. PERFORMANCE EXPERT


20.1. OVERVIEW
The DB2 Performance Expert for z/OS is an IBM-supplied, host-based and workstation-based performance analysis and tuning product. DB2 PE integrates the reporting and monitoring functions of IBM DB2 Performance Monitor (DB2 PM) and IBM Buffer Pool Analyzer (BPA) into one tool. It also adds queries and rules of thumb for expert analysis.
The Performance Expert tool is used to obtain real-time DB2 subsystem statistics. We can look at thread information in Performance Expert and perform operations such as cancelling a thread. Commands can also be entered to change DB2 ZPARM values, and threads can be cancelled from exceptions that have been defined to perform these functions. The alternative product available in the market for the Performance Expert tool is the CA-Insight tool.
The main objective of DB2 PE is to simplify DB2 subsystem performance management. DB2
PE gives you the capability of monitoring applications, system statistics, and system
parameters. In addition, DB2 PE provides you with support to analyze your performance
bottlenecks and gives you tuning recommendations about how you can improve system and
application performance.
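For illustration, the thread information that Performance Expert shows can also be obtained and acted on with native DB2 commands; the sketch below is illustrative only, and the subsystem ID DB2P and the thread token 1234 are placeholders.

    -DB2P DISPLAY THREAD(*)           List active threads with their tokens and status
    -DB2P CANCEL THREAD(1234)         Cancel the thread identified by token 1234

In practice the same actions are normally driven from the Performance Expert (or CA-Insight) panels rather than typed at the console.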

BENEFITS OF DB2 PERFORMANCE EXPERT:

Have one central point of control to manage and monitor all DB2 instances and DB2 subsystems.
Provide a real-time online monitor of several DB2s in parallel, independent of where the system resides.
Show DB2 subsystem statistics and system parameters (DSNZPARM).
Provide application and thread details, including bottlenecks such as locking conflicts.
Highlight when you exceed exception thresholds or when you reach event thresholds.
View and examine current activities as well as history data.
Have a graphical view of important performance data (System Health).
Collect trace data and immediately post-process event trace information to produce batch reports.
Obtain a wide variety of DB2 PM batch reports.
Explain the access path of an SQL statement in order to optimize it.
Monitor DB2 Connect and the connection with remote applications, along with the host thread information, giving you a complete picture of resources and time spent with DB2, DB2 Connect, and the network.
Control exception processing and get information about exception conditions and exception events, and review those exceptions that have occurred in the exception log.
Provide analysis, simulation, and reports of buffer pool usage.
Store and manage performance data in a Performance Warehouse.
Obtain tuning recommendations and expert analysis reports.
Logon to multiple systems.

We have 6 sets of SMP/E libraries that are used for maintenance on Performance Expert.
The datasets are of the following format:
DB2V8.SYS6.DSN81E.FPE210.AFPEDATA
DB2V8.SYS6.DSN81E.FPE210.AFPEDBRM
DB2V8.SYS6.DSN81E.FPE210.AFPEEXEC
DB2V8.SYS6.DSN81E.FPE210.AFPEFORM

DB2V8.SYS6.DSN81E.FPE210.AFPEINS0
DB2V8.SYS6.DSN81E.FPE210.AFPEMENU
DB2V8.SYS6.DSN81E.FPE210.AFPEMOD0
DB2V8.SYS6.DSN81E.FPE210.AFPEMOD1
DB2V8.SYS6.DSN81E.FPE210.AFPEPENU
DB2V8.SYS6.DSN81E.FPE210.AFPESAMP
DB2V8.SYS6.DSN81E.FPE210.AFPESLIB
DB2V8.SYS6.DSN81E.FPE210.AFPETENU
DB2V8.SYS6.DSN81E.FPE210.AFPEWS01
DB2V8.SYS6.DSN81E.FPE210.SFPEDATA
DB2V8.SYS6.DSN81E.FPE210.SFPEDBRM
DB2V8.SYS6.DSN81E.FPE210.SFPEEXEC
DB2V8.SYS6.DSN81E.FPE210.SFPEFORM
The current version of Performance Expert is 2.1.0 and the current maintenance level is 703.
This document gives a detailed view of how to use PE.

20.1.1. PURPOSE
Install new version of Performance Expert

20.1.2. PREREQUISITE
1. Management needs to order the Installation tapes from IBM.
2. Be ready with the preparatory work done and Impact analysis for the Installation to be
carried out.
3. Receive the Installation tapes from IBM.
4. Co-ordinate with DASD, Mainframe team during Installation.
5. Create the change ticket using HPSD for the Installation to be carried out.
6. Wait for the Change Advisory Board to approve the Change Ticket.

20.1.3. RACI MATRIX


Primary activity: Install Performance Expert
Responsibility:   DBOL Team
Accountability:   DBOL Lead
Consultancy:      KC
Informed:         DB2 DBA

20.1.4. INPUT
1. Installation tapes.
2. Storage disk packs from storage team.

20.1.5. Process
Detailed steps of this process can be found in the following link.

Once the above steps are completed successfully, configure the subsystems.
Detailed steps of this process can be found in the following link.
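Since the detailed installation steps are kept in the linked document, only a minimal SMP/E sketch is shown here for orientation. The CSI dataset name, target zone name, tape dataset and FMID below are placeholders and must be replaced with the values from the actual installation instructions.

    //SMPJOB   JOB (ACCT),'PE SMPE',CLASS=A,MSGCLASS=X
    //* Placeholder names: the GLOBAL CSI, the SMPPTFIN tape, target zone
    //* TZONE and FMID HXXXXXX are illustrative only.
    //STEP1    EXEC PGM=GIMSMP,REGION=0M
    //SMPCSI   DD DISP=SHR,DSN=DB2V8.SYS6.GLOBAL.CSI
    //SMPPTFIN DD DSN=IBM.PRODUCT.TAPE,DISP=OLD,UNIT=TAPE,
    //            VOL=SER=XXXXXX,LABEL=(1,SL)
    //SMPCNTL  DD *
      SET BOUNDARY(GLOBAL).
      RECEIVE SYSMODS LIST.
      SET BOUNDARY(TZONE).
      APPLY SELECT(HXXXXXX) GROUPEXTEND CHECK.
    /*

Running APPLY with CHECK first gives a trial run; the same APPLY is rerun without CHECK once the output is clean.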

20.1.6. OUTPUT
1. Tailored installation jobs.
2. Updated SMP/E library.
3. Test the Installation using verification steps.
In case of a problem with the Performance Expert tool, we need to investigate and try to resolve the issue; otherwise we need to contact the vendor to track the problem.

20.1.7. FORMS, CHECKLIST OR TEMPLATE USED


None

20.1.8. REFERENCE
The website for the IBM Products is:
http://www-3.ibm.com/software/support/
When a new release of Performance Expert is carried out, we need to contact the DBAs.

21. IBM DB2 ADMINISTRATION TOOL


21.1. OVERVIEW
DB2 Admin is an ISPF application that uses dynamic SQL to access the DB2 catalog tables. DB2 Admin can greatly increase the productivity of the entire DB2 staff (database administrators, system administrators, and application developers).
IBM DB2 Administration Tool for z/OS provides a comprehensive set of functions that help DB2 personnel manage their DB2 environments efficiently and effectively. The DB2 Admin tool offers a solution for handling the complex processes associated with change management. Database changes can impact database and application performance, pushing errors deeper into the database and making it more difficult to correct mistakes. The tool eliminates the need for DBAs to manually step through the processes of data unloading, object dropping and rebuilding, and data reloading.
The tool is designed with an easy-to-use Interactive System Productivity Facility (ISPF) interface that lets you manage and process DB2 objects and organize them for better system throughput. The DB2 Admin Tool provides in-depth catalog navigation by displaying and interpreting objects in the DB2 catalog and executing dynamic SQL statements. It is integrated with other DB2 utilities to simplify the creation of DB2 utility jobs.
The alternative tool available in the market for the DB2 Admin tool is CA-Platinum.
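As an illustration of the dynamic catalog SQL that the tool issues behind its panels, a query such as the one below lists the tables belonging to a given creator; the creator name SAPR3 is a placeholder and is not taken from this document.

    SELECT NAME, CREATOR, DBNAME, TSNAME
      FROM SYSIBM.SYSTABLES
     WHERE CREATOR = 'SAPR3'
       AND TYPE = 'T'
     ORDER BY NAME;

The tool builds and runs statements of this kind for you, so the SQL shown here is only what happens underneath the ISPF interface.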

BENEFITS OF DB2 ADMIN TOOL:

Displays the DB2 catalog quickly and logically
Displays any object in the catalog
Displays related DB2 objects using special line commands
Interprets catalog information
Displays the authorization for objects
Displays the static SQL statements from application plans and packages
Displays the DDL for existing views
Runs on one of multiple copies of the DB2 system catalog
Executes dynamic SQL statements (in many cases, without requiring you to remember SQL syntax)
Issues DB2 commands against databases and table spaces (without requiring you to remember DB2 command syntax)
Runs most DB2 utilities
Supports LISTDEFs and TEMPLATEs on DB2 Version 7 and above
Allows complex performance and space queries
Does EXPLAIN functions
Manages SQL IDs
Performs various system administration functions, such as updating RLIMITs, displaying threads, and managing DDF
Allows reverse engineering of DB2 objects
Supports DB2 predictive governing
Enables you to alter the definition of a DB2 table
Enables you to copy (migrate) DB2 data, both databases and table spaces, to other DB2 systems
Enables you to extend existing DB2 Admin applications or to rapidly develop new applications
Enables you to perform space-related functions such as resizing page sets; lets you move page sets to and from STOGROUP- and VCAT-defined space; and helps you estimate space allocations for new table spaces and indexes
Enables you to create and manage work statement lists (WSLs) and run them in batch
Enables you to launch installed IBM DB2 tools that have an ISPF interface
Enables you to dynamically manage system parameters (if running with DB2 Version 7 or above)
Enables you to request the Prompt function, so that you are prompted before a statement is executed
We have 6 sets of SMP/E libraries that are used for maintenance on the Admin tool.
The datasets are of the following format:
DB2V8.SYS6.DSN81E.ADB510.AADBBASE
DB2V8.SYS6.DSN81E.ADB510.AADBCLST
DB2V8.SYS6.DSN81E.ADB510.AADBDBRM
DB2V8.SYS6.DSN81E.ADB510.AADBEXEC
DB2V8.SYS6.DSN81E.ADB510.AADBMLIB
DB2V8.SYS6.DSN81E.ADB510.AADBNCAL
DB2V8.SYS6.DSN81E.ADB510.AADBPLIB
DB2V8.SYS6.DSN81E.ADB510.AADBSAMP
DB2V8.SYS6.DSN81E.ADB510.AADBSLIB
DB2V8.SYS6.DSN81E.ADB510.AADBTLIB
DB2V8.SYS6.DSN81E.ADB510.SADBBASE
DB2V8.SYS6.DSN81E.ADB510.SADBCLST
DB2V8.SYS6.DSN81E.ADB510.SADBDBRM
DB2V8.SYS6.DSN81E.ADB510.SADBEXEC
DB2V8.SYS6.DSN81E.ADB510.SADBLINK
DB2V8.SYS6.DSN81E.ADB510.SADBLLIB
DB2V8.SYS6.DSN81E.ADB510.SADBMLIB
DB2V8.SYS6.DSN81E.ADB510.SADBPLIB
DB2V8.SYS6.DSN81E.ADB510.SADBSAMP
DB2V8.SYS6.DSN81E.ADB510.SADBSLIB
DB2V8.SYS6.DSN81E.ADB510.SADBTLIB

The current version of the DB2 Admin tool is 5.1 and the current maintenance level is 703.
This document gives a detailed view of how to use the DB2 Admin tool.

21.1.1. PURPOSE
Install new version of DB2 Admin.

21.1.2. PREREQUISITE
1. For a new installation, a project charter has to be prepared and management has to approve it.
2. Management needs to order the Installation tapes from IBM.
3. Be ready with the preparatory work done and Impact analysis for the Installation to be
carried out.
4. Receive the Installation tapes from IBM.
5. Co-ordinate with DASD, Mainframe team during Installation.
6. Create the change ticket using HPSD for the Installation to be carried out.
7. Wait for the Change Advisory Board to approve the Change Ticket.

21.1.3. INPUT
1. Installation tapes.
2. Storage disk packs from storage team.

21.1.4. RACI MATRIX

Primary activity: Install DB2 Admin Tool
Responsibility:   DBOL Team
Accountability:   DBOL Lead
Consultancy:      KC
Informed:         DB2 DBA

21.1.5. PROCESS
Detailed steps of this process can be found in the following link.

Once the above tasks are completed successfully, follow the below instructions to configure
the subsystem. Detailed steps of this process can be found in the following link.

21.1.6. OUTPUT
1. Invoke DB2 Admin Tool.

2. Test the Subsystem configuration.


3. The users for whom it is intended should be able to use the DB2 Admin tool.

21.1.7. FORMS, CHECKLIST OR TEMPLATE USED


None

21.1.8. REFERENCE
In case of a problem with the DB2 Admin tool, we need to investigate and try to resolve the issue; otherwise we need to contact the vendor to track the problem.
The website for the IBM Products is:
http://www-3.ibm.com/software/support/
When a new release of DB2 Admin tool is carried out, we need to contact the DBAs.

22. LOGREC
LOGREC is a free tool provided by IBM; it is used to capture the error log for a specified time range. This is done with the help of the EREP program. The information may be used to analyze a problem or to report it to IBM.
We can also use the following options:
S = Summarize LOGR LOGREC data
D = Detailed LOGR Software Records
I = LOGR LOGREC Inventory
O = User EREP Input from dataset
The z/OS team installs this product. It is a free product, so no vendor service is available for it.
Detailed information is found in
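As an orientation only, an EREP extraction job typically looks like the sketch below; the logrec dataset name, the date range and the control statements are illustrative assumptions, and the real values should be taken from the detailed document referenced above.

    //EREPJOB  JOB (ACCT),'EREP REPORT',CLASS=A,MSGCLASS=X
    //* SYS1.LOGREC and the DATE range are placeholders.
    //EREP     EXEC PGM=IFCEREP1,PARM='CARD'
    //SERLOG   DD DISP=SHR,DSN=SYS1.LOGREC
    //DIRECTWK DD UNIT=SYSDA,SPACE=(CYL,(15,15))
    //TOURIST  DD SYSOUT=*
    //EREPPT   DD SYSOUT=*
    //SYSIN    DD *
      PRINT=PS
      TYPE=S
      DATE=(07150,07151)
      ACC=N
    /*

The SYSIN statements select software records for the given Julian date range without clearing the logrec dataset.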

23. DAE
DAE (Dump Analysis and Elimination) is a tool which keeps track of dumps. It creates a dump for an abend if it is happening for the first time. If the same abend occurs again, it does not create another dump; it just records the number of times that dump has occurred. We can go into the tool and manually request a dump by selecting the (T) Take next dump option. It is basically used to keep track of abends and to eliminate the taking of unnecessary dumps.

We can also use the following options:

S - Show entry details
T - Take next dump
V - View dump index entry
W - Leave this panel
The z/OS team installs this product. It is a free product, so no vendor service is available for it.

Detailed information is found in

24. NETVIEW
Netview is an IBM Tivoli product developed to make the operator's work easier: the operator no longer has to remember the commands to bring a subsystem down or to bring it up. In automation, everything is considered a task. Apart from bringing a subsystem up and down, a variety of other information can be obtained: we can display the status of a task, its desired and actual states, the eligible resource on which it can run, the tasks that are required to be up before this task is brought up, and the tasks that are supported by this task, i.e. the tasks which this task automatically brings up.

24.1. KC AUTOMATION
A few automation tasks have been set up in SAP for bringing a subsystem down and up during the maintenance carried out on Saturdays. Bringing the subsystems down is taken care of by Netview and is done automatically. After the subsystems come down, a task called DB2FLIPV8 starts automatically; it flips the SMP/E letter to the current maintenance level of that week. This task gets its information from the flags set in the member $SID6W8.
After the IPL, when the subsystems start coming up, there are some after-jobs to be run; this is also taken care of by automation, using the information from the flags set in the member $SID4WS. Any jobs that are to be run after a subsystem comes down are driven by the flags set in $SID3WS.
All of this automation is part of Netview, the IBM Tivoli tool described above, which relieves the operator from having to remember the commands to bring a subsystem down or up.

24.2. NETVIEW AUTOMATION

Netview also provides automation panels for displaying the status of an individual SID in an LPAR image graphically, and we can change the status of tasks using simple options on that graphical panel.

24. CURRENT VERSION, MAINT LEVEL AND EOS DATE


The present DB2 SAP environment runs DB2 version 8.1. The current maintenance level is PUT0703. The SAP environment is the busiest in KC; it lags behind IBM's current DB2 PUT releases by just 2 months, and every month we move to a new PUT level. The tools DB2 Performance Expert version 2.1.0 and DB2 Admin Tool version 5.1 are kept at the same PUT level as DB2 version 8.1. The table below shows the End of Service date for each product.
Product Name              Version    EOS Date         Replacement
DB2 for z/OS              8.1        Not Available
DB2 Admin Tool            5.1        30 Sep 2007      DB2 Admin Tool Version 7.1
DB2 Performance Expert    2.1.0      30 Sep 2008      OMEGAMON XE for DB2 Performance Expert version 3.1

25. ACCESS REQD FOR DBOL DB2


The following mainframe groups are to be connected in each LPAR:

LPARs:  TCP0, TCP1, TCP2, TCP3, TCQ0, TCQ1, TCT0, TCT1, TCT2, TWQ0, TWQ1, TCX0, TCX1

Groups (to be connected in every LPAR listed above):  DATABASE, DB2ADM, DB2SUPP, OMVSGRP, SYSPROG

The DBOL administrator needs 3 user IDs:

Q id - the primary id
X id - the secondary id
S id - used to log on to the DB2 Connect servers

Access to the following is also required:

SYSADMN access in all LPARs
Alter access to SYS*.* datasets
DBOL mailbox
Access to the IBM and CA websites:

IBM                     http://www-304.ibm.com/usrsrvc/account/userservices
CA                      http://www.ca.com/us/support
DB2CONN                 http://www.ibm.com/software/support/probsub.html
IBM Software handbook   http://techsupport.services.ibm.com/guides/handbook.html
ESR                     http://www.ibm.com/software/support/help.html

26. GENERAL PROCEDURES


Procedure for Adding A New User
Purpose
The purpose is to add a new user to the team.

Strategy or approach
When a new user joins KC, the various activities to be completed are listed here.

RACI Matrix
<eg> Responsibility, Accountability, Consulting and Informed (RACI)
Getting Employee id - Responsibility: Team Leader; Accountability: Project Manager; Consultancy: MATC; Informed: Team member, PMO
Getting physical access to the KC area - Responsibility: Team Leader; Accountability: ISC; Consultancy: Admin; Informed: Team member, PMO
Getting Qid - Responsibility: Team Leader; Accountability: Tower Leader; Consultancy: KC Tower leader; Informed: Team member, PMO
Getting allocated in Ultimatix - Responsibility: Project Manager; Accountability: Accounts Manager; Consultancy: MATC; Informed: Team member, PMO
Task allocation in iPMS - Responsibility: Project Manager; Accountability: Tower Leader; Consultancy: PMO; Informed: Team member, PMO
Input
1. Offer letter needed for getting employee id.

Steps
1. After receiving the employee id, the employee needs to send a mail or contact the team
leader, for getting physical access to the KC area.
2. The Team leader will forward the request to ISC, the ISC will send the request through the
Lotus notes security database and Admin will validate give the access rights to the list of
people.
3. The team leader will get the KC-NDA signed by the team member and fax it to KC. The
request for Qid will also be raised in the incident management tool.
4. The Qid will be received by the Service Desk team and informed to the team member.
Team member to update the team leader with details. The team leader will give the
updates to the PMO.
5. Project manager to raise the request in Ultimatix and get the allocation done for the team
member.
6. Once the member is allocated to the WON, task will be allocated in iPMS for the member.

Output
1. Physical access id card.
2. Updated list of employees of KC with Employee id, Name, QId, Access availability,
Onsite/Offshore and status available with PMO.

Forms, Checklists or templates used


Table to be maintained by the PMO
Tower

Employee
id

Employee
name

QID

Access
(Y/N)

Availability

References
Ultimatix My allocations to verify the allocation
iPMS to know the WON in which allocation is done

Onsite/offshore

Status

Overall Core Support Procedure


Purpose
The purpose is to describe the call lifecycle procedure.

Strategy or approach
The process flow is below.

Core Support Resolution Process

1. Problem reported through End User / Service Desk. (User)
2. Perform initial analysis. (Support Team Member)
   - Does the ticket belong to the correct group? If No, inform Help Desk / Operations to transfer the ticket to the appropriate support group. (Support Team Member)
   - If Yes: is this a valid problem? If No, inform Operations / Help Desk that the reported problem is not a valid problem. (Support Team Member)
   - If Yes, inform Operations / Help Desk that the problem is being addressed. (Support Team Member)
3. Prioritize the problem, if there is more than one at the same time. (Support Team Member)
   - Does the fix involve coordination / support from other support groups? If Yes, co-ordinate with the other support group(s) and estimate the effort. (Support Team Member)
4. Does the fix require a change / revision of priority for other support groups? If Yes, request the Service Desk team to co-ordinate re-prioritization of the request(s) for the other support group(s). (Service Desk)
5. Perform the fix to recover / resolve the issue, as appropriate. (Support Team Member)
   - Does the fix require more time than the SLA allows? If Yes, follow the Escalation Process. (Support Team Member)
6. For a major incident, a PIR will be conducted by KC with all concerned parties (IS, Business, Support Team Mgmt) and a PIR report will be generated. (Support Team Member)
   a) Update and close the ticket on receiving user feedback.
   b) Send a Problem Report with all details to the concerned parties, if applicable.
   c) Record the problem details in Solution Vault. (Support Team Member)

RACI Matrix
<eg> Responsibility, Accountability, Consulting and Informed (RACI)
Primary
activities
Problem reporting
Analysing,
resolving
and
escalating

Responsibility

Accountability

Consultancy

Informed

User/Service Desk
Team Member

Team Leader

User/Service
Desk operator

End user

Input
1. Problem ticket or problem reported by the user through desk phone, mobile phone or
through pager, at day or night.
2. The problem can even be reported through conference calls by the Service Desk.

Steps
Explanation of the process flow diagram
Problem Reported through End User / Service Desk
The problem may be notified by Operations because of a job failure, or by a user due to non-availability / malfunction of an application system. It could also be a system-generated notification of an error or failure.

Initial Analysis
When a problem is reported, an initial analysis has to be performed by the support person. If it is
determined that the reported one is actually a core-support problem and belongs to the
concerned group, the support person will take immediate action to provide the necessary fix.
If the problem does not belong to the concerned group, then Operations or Help Desk would be
requested to reassign the problem ticket to the relevant group.

Prioritization
When there is more than one Core-support request, Support Team Leader will prioritize them
based on the Severity Guidelines and the recommendations given by KC as there will be only a
limited number of resources in a pool to work on them. When multiple users are affected or entire
site is down, the PIR will be conducted by KC.

Co-ordination
If the fix requires support effort from other Support groups / customers, Support Team member
will request the Service Desk to co-ordinate the re-prioritization of tasks for other Support
group(s) and inputs / feedback required from customers.

Perform Fix
Break-fix requests should be given the highest priority over any other task. Production problems
would be given the next priority. Support Team Member will ensure that the fix is provided on
time to ensure smooth running of business.


Communication
a) During resolution Support Team Member will keep all concerned parties (Business, IS
and management) posted on the progress. A preliminary Problem Report would be sent to all
if SLA slippage is anticipated.
b) On Completion of the Fix Support Team will communicate to all concerned parties and
close the request after receiving the feedback from the user. A detailed Problem Report
would be sent to all if there was a slippage. Problem details would also be logged in the
Solution Vault tool.

Additional Support Request Process


Non-Core additional support tasks such as Scheduled Maintenance and Business Sustaining
tasks would have to be open-ended and would be performed based on availability of Support
effort (40 hrs per person per week) to perform the activity.

Escalation Process
The escalation process to be followed if the Support team cannot meet the SLA for a Core-support request is explained in the Escalation Procedure.

Forms, Checklists or templates used


Magic tool

References
Policies and procedures documents of the Mainframe Tower.

Escalation Procedure
Purpose
The purpose is to escalate tickets if they are Severity 1 and are missing SLAs.

Strategy or approach
The escalation matrix is to be developed and communicated to all the members in
the team. 5 levels of escalations are to be documented. These need to have
names of people from
tower.

and KC. This is to be discussed and finalized for each

RACI Matrix
<eg> Responsibility, Accountability, Consulting and Informed (RACI)
Preparation and updating of escalation procedures - Responsibility: Team Leader; Accountability: Tower Leader; Consultancy: Customer; Informed: All the team members
Following the escalation procedures - Responsibility: Team member; Accountability: Team Leader; Consultancy: Tower Leader and customer; Informed: End users and all affected parties

Input
1. Escalation matrix

Elapsed time *      Action to be taken (based on severity)
10 minutes          The Primary On-call person would respond to Operations / update Magic regarding the action being taken, after receiving a phone call / ticket from Operations / Magic.
20 minutes          If the Primary On-call person is unable to identify the problem, the Secondary On-call person and/or the Application Owner would be called immediately.
45 minutes          If the Primary and Secondary On-call persons are unable to solve the problem, it would be immediately escalated to the Tower Lead. The Tower Lead would in turn escalate it to the KC Tower Lead, based on the SLA guidelines.
90 minutes          If the problem is not resolved, it would be escalated to the concerned KC SPOC and KC PM. Even if the resolution is known but its completion will exceed the SLA, both the KC SPOC and PM have to be notified.
120 minutes         If the problem is still not resolved, it would be escalated to Relationship Management and the concerned KC CRM, with the likely SLA.
Until resolution    Operations / Magic would be updated about the action taken and the status of the problem at regular intervals.

Steps
1. Ticket analysis is to be done and depending upon the severity of the issue and the time elapsed,
the next level of escalation is to be done according to the escalation matrix, the contact person
from the contact list is to be notified.
Note:
1. If a problem is being reassigned to several groups, the SLA would have to be calculated based on
the actual time spent by concerned groups in the problem resolution. Standard escalation process
would still have to be followed by Support team.
2. If a lower severity problem (Severity 3) leads to a higher severity problem (Severity 1), either due to a bad fix or an oversight, then a new Severity 1 ticket has to be opened by the concerned support person for his own support group, so that further escalation happens as expected.
3. If the problem resolution requires action from more than one support group, the KC SPOC would be requested to co-ordinate the re-prioritization of the other groups' tasks.

Output
1. Updated Ticket in the Ticketing system.
2. PIR report

Forms, Checklists or templates used


Templates
1. The contact names list
Module                  Group                                        Name    Work No.    Home No.    Login ID
Management              Team leader
                        Tower leader
                        KC Tower leader
                        PM
                        KC PM
                        KC SPOC
Mainframe Systems       HR & Payroll
                        Inventory & Logistics, Marketing, Pricing
                        Stores
Non-Mainframe Systems   Datawarehouse
                        Infrastructure, Stores

<The contact list should cover all the Cross-functional contacts needed for resolving issues>
2. SPOC list

No.   Business Area                      SPOC    KC SPOC
1.    Advertising, Pricing, Marketing
2.    CRM
3.    Datawarehouse
4.    Finance, HR & Payroll
5.    Inventory & Logistics
6.    DBA (SQL, Oracle)
7.    DBA (DB2)

A form with the details of how to update the ticket in the tool can be added.

A ticket analysis checklist can be prepared and added.

A PIR report template can also be added.

References
None.

Root Cause Analysis


Purpose
The purpose is to do root cause analysis on a problem: why it happened and what steps can be taken to avoid its recurrence in the future.

Strategy or approach
Customer is conducting PIR meeting, depending upon the severity of the problem.
Similar meeting is to be conducted in for all the problems and brainstorming &
fishbone analysis tools to be used to find the root cause of the problem.
Action plan for preventing that problem in future also be brainstormed and
prepared.

RACI Matrix
<eg> Responsibility, Accountability, Consulting and Informed (RACI)
Creating problem ticket - Responsibility: The team member & team leader; Accountability: Tower Leader; Consultancy: Customer and affected parties; Informed: End user and other affected parties
Conducting RCA meeting - Responsibility: Team Leader & Tower Leader; Accountability: Project Manager; Consultancy: All the related team members, other tower members and customer; Informed: Team members
Preparation of RCA report - Responsibility: Team Leader; Accountability: Tower Leader; Consultancy: Customer and affected parties; Informed: Customer

Input
1. Problem and severity details of the problem

Steps
1. Problem ticket is to be created in the ticketing tool
2. Team member to escalate to the Team Leader and involve him for the meeting.
3. Team Leader to call Tower leader and involve him for the meeting.
4. Team leader to schedule and conduct the meeting, involving all the relevant people.
5. Brainstorming and Fishbone tools to be used to find the root cause of the problem.
6. Preventive action plan is also to be discussed.
7. RCA report is to be prepared and the preventive action plan to be submitted to all the
affected parties and the customer.

Output
1. Updated Ticket in the Ticketing system.
2. RCA report
3. Preventive action plan.

Forms, Checklists or templates used


RCA report template
Subject

Root Cause Analysis Report for Incident # ____________

Description
System

Component

Type of Failure

Problem Owner

Outage

Severity

Start Date/Time
Restore Date/Time

Duration

Description

Analysis

Impact analysis
Root cause
Time line

Contributing factors
Resolution
Analyzed by

Reviewed by

Permanent fix / Future


prevention
Target Date for
Implementation

References

Ticket Review Procedure


Purpose
The purpose is to explain the call quality monitoring procedure for the service desk, and the ticket review and audit procedure for the service desk and other towers.

Strategy or approach
All the tickets handled by each person, whether received as tickets, emails or phone calls, are to be logged into Excel files on a daily basis. Each tower is to have its own review folder configured, with monthly folders created under it. Under those folders, weekly Excel files will be created with daily sheets in them.
For example: KC folder for the tower -> Ticket tracking & reviewing folder -> Apr 07 folder.
This folder is to have an Excel file for every week, named with the start date of the week, e.g. 02-Apr-07 week1.xls. This file will have a sheet for every day: 02-Apr-07, 03-Apr-07, etc.

The Team Leader is to verify the ticket details entered every day. The Tower Leader is to audit the ticket review details every week.

RACI Matrix
<eg> Responsibility, Accountability, Consulting and Informed (RACI)
Logging the ticket details into the ticket review excel - Responsibility: Team Member; Accountability: Team Leader; Consultancy: Users
Creation of the Ticket Review file - Responsibility: Team Leader; Accountability: Tower leader
Verification of the ticket details entered every day - Responsibility: Team Leader; Accountability: Tower Leader; Consultancy: Team Members
Validation of the ticket details entered every week - Responsibility: Tower leader; Accountability: Project Manager; Consultancy: Team Leaders and Members
Informed: Team Members

Input
1. Incidents received through phone, mails and tickets.

Steps
1. The template to be used for logging and tracking the ticket details.
2. Monthly folders and weekly files are to be created by the tower/team Leader.
3. Daily sheets are to be created in the files for every week. The file is to be a shared file, so that multiple people can make entries.
4. The ticket details are to be entered by the team members.
5. Every day the team leader needs to verify whether everyone in the team has entered all the tickets handled by them.
6. Every week Tower Leader needs to verify the sheets on completeness and correctness of
entries.
7. He should also prepare a ticket audit report in an excel.
8. The Quality Manager will do monthly verification on the process being followed.

Output

1. Weekly review files with daily sheets - Team Leader and Team Members
2. Weekly Audit files - Team Leader & Tower Leader
3. Monthly FI report - QM

Forms, Checklists or templates used


Weekly review file template

Review log template.xls

Weekly audit template

ticket audit template.xls

Monthly FI checklist

FI checklistIS-Chennai V1.1.doc

References
None

25. CURRENT VERSION, MAINT LEVEL AND EOS DATE


The present DB2 SAP environment runs DB2 version 8. The current maintenance level is PUT0703. Maintenance is applied to the SAP environment on a monthly basis.
The information for the Vendor Products is given below:

Vendor Tool                   Version    EOS Date
DB2 Admin Tool for z/OS       V5.1       30-Sep-07
DB2 Performance Expert        V2.1       30-Sep-08

25. APPROACH TO PROBLEM SOLVING

For resolving DB2 related problems, the approach and steps to be taken are given below:

Determine the cause of the problem by analyzing the DB2 logs. The DB2 logs will give information about the error code, reason code, type and the object name.

Look into the DB2 Messages and Codes manual to find the detailed description of the problem. The same can be found in:

Look at the online RMF reports to check for any enqueues related to DASD or any CPU-related delays. Escalate to the appropriate team depending on whether the problem is DASD (storage) or CPU (z/OS) related.

Based on the analysis, take the necessary steps, such as running utilities or jobs, to resolve the problem.

In case of any emergency changes that need to be done, such as recycling the DB2 subsystem, we need to contact the necessary persons for approvals and implement the changes.

Work with the impacted application and technical teams to resolve the issue.

Once the issue is resolved, inform all the concerned parties.

It is important to note down the actions and their timing during the problem-solving process.

Send a recap to the team lead and management.
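To illustrate the first steps above, the commands below are typical of what is used when checking the state of objects and utilities after an error; this is only a sketch, and the subsystem ID DB2P, the database name SAPDB01 and the utility ID are placeholders.

    -DB2P DISPLAY DATABASE(SAPDB01) SPACENAM(*) RESTRICT     Show objects left in a restricted state
    -DB2P DISPLAY UTILITY(*)                                 Show active or stopped utilities
    -DB2P TERM UTILITY(UTILID01)                             Terminate a stopped utility, if required

The output of these displays, together with the log and RMF information, is what is noted down and shared with the other teams during escalation.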

26. ACTIVE PROBLEM MANAGEMENT RECORDS

The following table lists the open PMRs:

PMR Record Number      Description
22793,379,000          PE3 Problem: Cancelled tasks went into Backout with Starting RBA value of 000000000000
76231,379,000          QAWADIST abended with 04E
