R/3 Data Management Techniques Using Tivoli Storage Manager
Patrick Randall Edmund Haefele Alexandre Hogate Roland Leins T S Ranganathan Thomas Ritter
ibm.com/redbooks
SG24-5743-00
June 2000
Take Note! Before using this information and the product it supports, be sure to read the general information in Appendix I, Special notices on page 375.
First Edition (June 2000)

This edition applies to Tivoli Storage Manager, Release Number 3.7.1, Program Number 5697-TSM; SAP R/3 4.0B; ORACLE 8.0.4; DB2 UDB for AIX V5.2; and AIX 4.3.2.

Comments may be addressed to:
IBM Corporation, International Technical Support Organization
Dept. 471F Building 80-E2
650 Harry Road
San Jose, California 95120-6099

When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.
Copyright International Business Machines Corporation 2000. All rights reserved.

Note to U.S. Government Users - Documentation related to restricted rights - Use, duplication, or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents
Figures  ix
Tables  xiii

Preface  xv
The team that wrote this redbook  xv
Comments welcome  xvii

Part 1. Concepts  1

Chapter 1. Introduction to Tivoli Storage Manager  3
1.1 The Tivoli Storage Management solution  3
1.2 Tivoli Storage Manager  5
1.2.1 Tivoli Storage Manager architecture  6
1.2.2 Base concepts  12
1.3 Tivoli Storage Manager complementary products  17
1.3.1 Tivoli Space Manager  17
1.3.2 Tivoli Disaster Recovery Manager  18
1.3.3 Tivoli Decision Support for Storage Management Analysis  19
1.4 Tivoli Data Protection for applications  20
1.5 Tivoli Data Protection for Workgroups  22

Chapter 2. R/3 data management overview  23
2.1 R/3 introduction  23
2.1.1 R/3 system architecture  23
2.1.2 R/3 system landscape  25
2.2 R/3 and relational databases  27
2.2.1 RDBMS fundamentals  27
2.2.2 RDBMS terminology  31
2.2.3 R/3 supported RDBMS  32
2.2.4 Types of errors affecting database operation  33
2.2.5 R/3 backup  36
2.2.6 R/3 recovery  40
2.3 R/3 data availability  44
2.3.1 Standby installation based on database features  44
2.3.2 Standby installations based on storage system features  46
2.4 R/3 space management  48

Chapter 3. R/3 data management tools  53
3.1 Overview of R/3 ORACLE database administration tools  53
3.1.1 BRBACKUP  54
3.1.2 BRARCHIVE  54
3.1.3 BRRESTORE  54
3.1.4 SAPDBA  55
3.1.5 Recovery Manager (RMAN)  55
3.1.6 SAP BACKINT interface for ORACLE databases  56
3.1.7 Tivoli Data Protection for R/3  58
3.2 Overview of R/3 ADABAS database administration tools  68
3.2.1 CONTROL  69
3.2.2 ADINT/ADSM  69
3.3 Overview of R/3 DB2 UDB database administration tools  72
3.3.1 DB2 Control Center  73
3.3.2 Backup and recovery  73
3.4 EDMSuite CommonStore for SAP  75
3.4.1 General considerations  75
3.4.2 Architecture and functionality of CommonStore  76
3.4.3 Supported platforms  81
Part 2. Setup  83

Chapter 4. SAP data management case study  85
4.1 Customer environment  85
4.2 Business requirements  86
4.2.1 Data protection  87
4.2.2 Availability  88
4.2.3 Recovery  88
4.2.4 Change management and quality assurance  88
4.2.5 Ease of management  88
4.2.6 Space management  89
4.3 Planning for R/3 data management  90
4.3.1 Planning checklist  90
4.3.2 Implementation examples  92
4.4 Lab hardware and software overview  97
Chapter 5. Tivoli Storage Manager setup  99
5.1 Overview  99
5.1.1 Data storage: storage pools and device classes  101
5.1.2 Data storage policy  102
5.1.3 Clients  103
5.1.4 Safety  104
5.2 Server setup  104
5.2.1 General server setup  104
5.2.2 Specific Tivoli Data Protection for R/3 setup  107
5.2.3 Specific EDM Suite CommonStore for SAP setup  110
5.3 Tivoli Storage Manager client and API setup  111
5.3.1 General prerequisites for client and API setup  111
5.3.2 Automation: client scheduler  112
5.3.3 Specific API setup for Tivoli Data Protection for R/3  113
5.3.4 Specific API setup for EDM Suite CommonStore for SAP  114

Chapter 6. Tivoli Data Protection for R/3 implementation  117
6.1 Setup of Tivoli Data Protection for R/3  117
6.1.1 Prerequisites  117
6.1.2 Installation  118
6.2 Function overview of Tivoli Data Protection for R/3  123
6.2.1 General  123
6.2.2 Backup function  124
6.2.3 Inquire function  124
6.2.4 Restore function  125
6.2.5 Tivoli Data Protection for R/3 File Manager  125
6.3 Backup automation  127
6.3.1 R/3 scheduler  127
6.3.2 Tivoli Storage Manager schedule  128
6.3.3 UNIX crontab  129
6.4 R/3 backup and recovery using Tivoli Data Protection for R/3  130
6.4.1 Backup scenarios  130
6.4.2 Restore/Recovery scenario  134
6.5 Possibilities to improve backup/restore performance  147
6.5.1 General  147
6.5.2 Adaption of the network  148
6.5.3 Adaption of Tivoli Storage Manager  149
6.5.4 Adaption of Tivoli Data Protection for R/3  150

Chapter 7. Administration Tools implementation  153
7.1 Setup of Administration Tools  153
7.1.1 Prerequisites  153
7.1.2 Installation  154
7.2 Using the Administration Tools User Administration  161
7.2.1 Create user profiles  162
7.2.2 Delete user profiles  162
7.2.3 Change user profiles  162
7.3 Using the Administration Tools Configurator  163
7.3.1 General  163
7.3.2 Edit configuration  165
7.3.3 Save configuration  171
7.3.4 Copy configuration  173
7.3.5 Show configuration history  173
7.4 Using the Administration Tools Performance Monitor  175
7.4.1 Prerequisite  175
7.4.2 Functional overview  175
7.4.3 Monitoring R/3 backup activities  177
7.4.4 Analyzing performance bottlenecks  181

Chapter 8. CommonStore implementation  189
8.1 Overview and planning  189
8.1.1 Overview R/3 data archiving and customizing settings  189
8.1.2 Planning the installation  191
8.2 Installation and setup of CommonStore Server  195
8.2.1 Prerequisites  195
8.2.2 Installation of the CommonStore server software  195
8.2.3 Adaption of the UNIX Environment for CommonStore  196
8.2.4 Customizing of the CommonStore server profile  201
8.3 R/3 customizing  202
8.3.1 R/3 communication settings  203
8.3.2 R/3 ArchiveLink customizing  212
8.3.3 R/3 ADK customizing  217
8.4 Usage of CommonStore to archive data  219
8.4.1 Creation of test data for the FI_BANKS archiving object  219
8.4.2 Starting the CommonStore server  220
8.4.3 Starting the archive run in the R/3 system  221
Part 3. Advanced scenarios  227

Chapter 9. System cloning using Tivoli Storage Manager  229
9.1 Overview of SAP system cloning concepts  229
9.2 Implementation using Tivoli Storage Manager  232
9.2.1 Planning Homogeneous System Copy  233
9.2.2 Installation of the R/3 central instance and database  234
9.2.3 Off-line backup of source database system  235
9.2.4 Restore of source database backup on the target system  237
9.2.5 Activate the target database  240
9.2.6 Finish the R/3 system installation  241
9.2.7 Reconfiguration of Tivoli Storage Manager and R/3  242
9.3 Cloning Considerations  243
9.3.1 Restoring files in different file systems on target system  243
9.3.2 System copy with online backup  245
9.3.3 System copy with reorganization (import/export)  248

Chapter 10. Split mirror implementation in R/3  253
10.1 Split mirror overview  253
10.2 Storage subsystem specific implementation  255
10.2.1 7133 SSA disk subsystem  255
10.2.2 IBM Enterprise Storage Server (ESS)  263
10.2.3 Enterprise Storage Server Specialist (ESS Specialist)  264
10.2.4 EMC Symmetrix  270
10.3 Comparison of split-mirror implementations  274
10.4 R/3 split mirror backup using Tivoli Storage Manager  276
10.4.1 R/3 split mirror backup overview  276
10.4.2 R/3 split mirror backup configuration  277
10.4.3 R/3 split mirror backup procedure  279

Chapter 11. R/3 warm standby using Tivoli Storage Manager  283
11.1 Introduction into R/3-ORACLE standby database concepts  283
11.1.1 Implementation based on database features  283
11.1.2 Integration with SAP brtools  285
11.1.3 Integration with storage subsystem features  288
11.1.4 Considerations for minimizing data loss  289
11.1.5 Backup data center scenario  291
11.1.6 Administration using additional tools  294
11.2 R/3 warm standby using Tivoli Storage Manager  296
11.2.1 Setup  296
11.2.2 Operation  306
11.2.3 Takeover  308
11.2.4 Revert back  311

Chapter 12. Tivoli integration of R/3 data management  313
12.1 Tivoli Overview  313
12.1.1 Tivoli Management Environment  313
12.1.2 Tivoli Product Architecture  313
12.2 Tivoli Manager for R/3  321
12.3 Tivoli Storage Manager integration into Tivoli Framework  322
12.3.1 Overview  322
12.3.2 Tivoli Plus for Tivoli Storage Manager Setup  323
12.3.3 Tivoli Storage Manager Setup  327
12.4 Tivoli and Tivoli Data Protection for R/3  330
Part 4. Appendices  333

Appendix A. Split mirroring scripts and files  335
A.1 Split mirroring in a HACMP environment  335
A.1.1 Configuration files overview  335
A.1.2 Split mirror procedure  338
A.2 Lab split mirror setup scripts and files  339

Appendix B. Tivoli Data Protection for R/3 profile  343
B.1 Keyword Reference  343

Appendix C. Alternate/parallel backup paths and backup servers  351
C.1 Use of alternate/parallel paths for increased availability  352
C.2 Use of alternate/parallel paths for increased performance  352
C.3 Use of alternate/parallel servers for disaster recovery  353

Appendix D. Sample script for activating the standby database  355

Appendix E. Sample Tivoli Storage Manager Profiles  357
E.1 Client User Options File: dsm.opt  357
E.2 Client System Options File: dsm.sys  357
E.3 Include/Exclude List: inclexcl.lst  357

Appendix F. Checklist for System Copy Project Plan  359
F.1 Preparation  359
F.2 Technical Copy  359
F.3 Subsequent Actions  359

Appendix G. Checklist for R/3 Installation Requirements  361
G.1 Central System Requirements  361
G.1.1 Hardware Requirements  361
G.1.2 Software Requirements  361
G.1.3 Other Requirements  362
G.2 Check Assistance for AIX  362
G.2.1 Hardware Requirements  362
G.2.2 Software Requirements  363
G.2.3 Additional Software  364

Appendix H. R/3 using DB2 UDB and Tivoli Storage Manager  367
H.1 Customizing the Tivoli Storage Manager Client for R/3 with DB2 UDB  367
H.2 Considerations for R/3 system cloning with DB2 UDB  368

Appendix I. Special notices  375

Appendix J. Related publications  377
J.1 International Technical Support Organization publications  377
J.2 IBM Redbooks collections  377
J.3 Tivoli Storage Management publications  378
J.4 Other publications  379

How to get IBM Redbooks  381
IBM Redbooks fax order form  382

Index  383

IBM Redbooks review  389
Figures
1. Tivoli Storage Management and Tivoli Enterprise  4
2. Tivoli Storage Manager architecture  6
3. Backup/Archive client user interfaces  8
4. Tivoli Storage Manager administration interfaces  9
5. Tivoli Storage Manager supported environment  11
6. Progressive Backup Methodology vs. other backup schemes  13
7. Tivoli Storage Manager storage management concept  15
8. Policy relationships and resources  16
9. Hierarchical storage management  18
10. Tivoli Decision Support for Storage Management Analysis  20
11. Tivoli Data Protection for Lotus Domino  21
12. Three-tier Client/Server structure of R/3  24
13. Complex R/3 system landscape  26
14. Database structure implementation using JFS  28
15. Database instance directories and files for Oracle and DB2 UDB  29
16. Released R/3 database and operating system combinations  33
17. Data types in an R/3 environment using an ORACLE database  36
18. Steps involved in recovery  40
19. Complete recovery scenario  42
20. A point-in-time recovery scenario  43
21. Standby database scenario  45
22. Standby server using storage mirroring features  47
23. The R/3 data archiving process  49
24. Archiving scenario using the archive link interface  50
25. ORACLE database backup using the BACKINT interface  57
26. Tivoli Data Protection for R/3 internal structure  59
27. Flow of backup operation  60
28. Tivoli Data Protection for R/3 backup protocol entry  61
29. Administration Tools internal structure  66
30. Administration Tools within an R/3 system landscape  67
31. ADINT/ADSM internal structure  70
32. Life cycle of a DB2 log file  74
33. Interfaces of CommonStore with R/3  77
34. CommonStore Server components  78
35. Typical R/3 system infrastructure at a customer site  85
36. R/3 data archiving example  89
37. Single site implementation of a data management solution  93
38. Dual site implementation of a data management solution  95
39. Lab environment  98
40. Policy Configuration for our case study - Tivoli Data Protection for R/3  100
41. Policy Configuration for our case study - CommonStore  100
42. Entry for inittab file for Tivoli Storage Manager client scheduler  113
43. Variables for file dsm.opt  113
44. Example of file dsm.sys  113
45. New server stanza in file dsm.sys for CommonStore  115
46. Customized Tivoli Data Protection for R/3 profile initTSM.utl  121
47. Customized SAPDBA profile initTSM.sap  122
48. Tivoli Data Protection for R/3 File Manager - select Backup ID  126
49. File Manager - showing files belonging to a backup ID  126
50. R/3 DBA planning calendar  128
51. Available actions for scheduling  128
52. Sample shell script to start BRBACKUP  129
53. Sample crontab entry to start a shell script to a specified schedule  129
54. Server setup - specify ports  156
55. Server setup - hostname  156
56. Slave server setup - select option  157
57. Slave server setup - customize slave server  158
58. Slave server setup - specify directories  159
59. Starting Administration Tools with Netscape  160
60. Administration Tools start screen  160
61. User Administration panel  161
62. System Configurator start panel  164
63. Load configuration panel  165
64. Edit configuration start panel  166
65. Edit a single Tivoli Data Protection for R/3 profile parameter  167
66. Consistency check  168
67. Successfully changed Tivoli Data Protection for R/3 profile parameters  169
68. Complete Tivoli Storage Manager server parameter list  170
69. Creating a new Tivoli Storage Manager server entry  171
70. Save configuration panel  172
71. Copy configuration panel  173
72. Configuration history panel  174
73. Complemented Tivoli Data Protection for R/3 profile  175
74. Performance Monitor panel  176
75. Running R/3 backup  178
76. Graphical performance analyzer panel  179
77. Select run for review  180
78. Review control panel  180
79. Check of running sessions  183
80. Check of data compression  184
81. Check of multiplexing level  185
82. Function #1 with striking course  186
83. Function #2 with striking course  187
84. Overview of components for archiving  191
85. Customizing the CommonStore Server profile  194
86. Access to transaction S002  198
87. Accessing transaction SMGW from S002  199
88. Accessing a transaction by transaction code  199
89. Menu path in transaction SMGW  200
90. Parameters from transaction SMGW  200
91. Menu steps for adding a user in R/3  203
92. User maintenance: entry screen  203
93. Logon data settings for the CommonStore CPIC user in the R/3 system  204
94. Profile settings for the CommonStore CPIC user in the R/3 system  205
95. RFC maintenance  205
96. Initial settings for the RFC destination  206
97. RFC destination settings  206
98. Accessing the gateway options  207
99. Gateway options of the RFC destination  207
100. Client maintenance in transaction SCC4  208
101. Client parameters in SCC4  208
102. Maintaining logical path and file definitions  209
103. Assignment of physical paths to logical paths  210
104. Assignment of logical paths to physical paths  210
105. Create/change logical files, client specific  211
106. Customizing of ARCHIVE_DATA_FILE_WITH_ARCHIVE_LINK  211
107. Customizing of ARCHIVE_DATA_FILE_  212
108. Access to the Implementation Guide for R/3 customizing  212
109. Access to the R/3 Reference IMG  213
110. ArchiveLink customizing in the IMG  213
111. Defining the archive  214
112. Selecting the objects for table TOAOM in transaction SE16  215
113. Entries of table TOAOM as seen in the data browser  215
114. Entries for creating a new link table  215
115. Checking the link table  216
116. Creating ArchiveLink queues  216
117. Entry screen of transaction SARA  218
118. Customizing for object FI_BANKS  219
119. Maintaining bank master data with transaction FI01  220
120. Maintaining bank master data within transaction FI01  220
121. Access to the ADK administration  221
122. Starting an archive run in transaction SARA  221
123. Maintaining a variant for the archive job
. . . . . . . . . . . . . . . . . . . . . 222 124.Select start time of the archiving job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223 125.Start of the archive run . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223 126.Jobs of the archive run . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224 127.Archiving session overview for FI_BANKS . . . . . . . . . . . . . . . . . . . . . . . . . . 224 128.Information to a specific archive run . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225 129.Homogeneous System Copy - our case study . . . . . . . . . . . . . . . . . . . . . . . 232 130.Parameters of source initTSM.utl file that is copied to target initCPY.utl file . 238 131.Parameters to change in file dsm.sys on target system. . . . . . . . . . . . . . . . . 238 132.Parameters to change in file dsm.sys on target system. . . . . . . . . . . . . . . . . 242 133.Parameters of source initTSM.utl file that is copied to target initCPY.utl file . 242 134.Example of restoring file in different destination directory . . . . . . . . . . . . . . . 244 135.Example of system copy with online backup for case study.. . . . . . . . . . . . . 246 136.Example of system copy with reorganization. . . . . . . . . . . . . . . . . . . . . . . . . 250 137.Storage subsystem to Logical Volume Manager logical-physical relationship254 138.Storage layout for split mirror - lab environment . . . . . . . . . . . . . . . . . . . . . . 257 139.Initial Volume group and logical volume layout . . . . . . . . . . . . . . . . . . . . . . . 258 140.Volume group and logical volume layout after establishing the mirror . . . . . 259 141.Volume group and logical volume layout after a split mirror . . . . . . . . . . . . . 262 142.Implementation of FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265 143.EMC TimeFinder operations on BCVs. . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . 271 144.SAP.BCV - EMC Regular physical disk to BCV physical disk map . . . . . . . . 272 145.Split mirror backup: starting point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279 146.Status after the execution of split_cmd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280 147.Status after the execution of resync_cmd . . . . . . . . . . . . . . . . . . . . . . . . . . . 281 148.Setup of the standby database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284 149.brarchive support for standby databases. . . . . . . . . . . . . . . . . . . . . . . . . . . . 286 150.brbackup support for standby database . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287 151.Setup/operation of the standby database using storage subsystem features.289 152.Standby database recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290 153.Minimum data loss recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291 154.Backup data center scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293 155.Administration of the standby database using Libelle DBMIRROR . . . . . . . . 295 156.Graphical user interface of the Libelle DBMIRROR. . . . . . . . . . . . . . . . . . . . 295
xi
157.Overview over the standby lab file system structure . . . . . . . . . . . . . . . . . . . 296 158.SAPOSCOL communication to the database server . . . . . . . . . . . . . . . . . . . 303 159.Creation of the RFC destination for the standby database server . . . . . . . . .305 160.Parameters of the RFC destination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305 161.Adding an SAPOSCOL destination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306 162.Tivoli Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314 163.Tivoli Manager for R/3 components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322 164.Tivoli plus for Tivoli Storage Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324 165.Tivoli Storage Manager Event Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328 166.Example for dsmserv.opt file to specify a Tivoli server. . . . . . . . . . . . . . . . . . 330 167.Example for initSID.utl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331 168.SAP.MAP - EMC regular physical disk to BCV physical disk map . . . . . . . . . 336 169.Output of the EMC inq command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336 170.Map of AIX hdisks (source to target), volume serial number and SCSI ID . . . 337 171.Volume group information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337 172.Logical volume to filesystem mount point mapping for a volume group . . . . . 337 173.Logical partition to physical partition map on a AIX hdisk. . . . . . . . . . . . . . . . 338 174.Scripts for performing the split mirror in the lab environment . . . . . . . . . . . . . 339 175.Example of <vg>_pvid in the lab example . . . . . . . . . . . . . . . . . . . . . . . . . . . 340 176.Example of a logical volume map file . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . 341 177.Content of <vg>_lv in the lab example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341 178.Usage of the comment symbol # . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343 179.Tivoli Data Protection for R/3 profile example #1 . . . . . . . . . . . . . . . . . . . . . . 352 180.Tivoli Data Protection for R/3 profile example #2 . . . . . . . . . . . . . . . . . . . . . . 353 181.Tivoli Data Protection for R/3 profile example #3 . . . . . . . . . . . . . . . . . . . . . . 354 182.Scripts for activating the standby database . . . . . . . . . . . . . . . . . . . . . . . . . . 355 183.Example User Option File for Tivoli Storage Manager Client . . . . . . . . . . . . . 357 184.Example System Options File for Tivoli Storage Manager Client . . . . . . . . . . 357 185.example for Include and Exclude List files for Tivoli Storage Manager Client 358
Tables
1. Version 3 servers . . . 11
2. Version 3.7 UNIX clients . . . 12
3. Version 3.7 PC clients . . . 12
4. Tivoli Data Protection for applications . . . 21
5. Supported platforms for Tivoli Data Protection for R/3 version 2.4 . . . 65
6. Requirements for storing the backups . . . 87
7. Solution planning checklist . . . 90
8. Definition of the storage pools for our case study . . . 101
9. Storage hierarchy of the storage pools for archive redo log files . . . 102
10. Management class definitions for our environment . . . 102
11. Node definition for our case study . . . 104
12. Storage hierarchy of the storage pools for archive redo log files . . . 107
13. API environment variables for UNIX . . . 112
14. SAPDBA profile parameter combinations . . . 122
15. Tuning of network settings . . . 148
16. Tuning of SP switch buffer pools . . . 149
17. Tuning Tivoli Storage Manager configuration file attributes . . . 150
18. CommonStore profile parameters and R/3 parameter settings . . . 193
19. CommonStore Server profile and Tivoli Storage Manager settings . . . 194
20. SAP documentation for R/3 release 4.0B with Oracle and AIX . . . 229
21. List of files that must be transferred from source to target system . . . 240
22. Example of changing the path for one data file in the CONTROL.SQL file . . . 245
23. Files and directories that should be copied from source to target . . . 247
24. Advantages and disadvantages of the reorganization procedures . . . 249
25. Comparison of copy services . . . 275
26. Context and authorization role required on Tivoli Management Environment . . . 326
27. List of additional software components in AIX . . . 364
Preface
In this redbook, we give an overview of different data management topics related to a typical R/3 data center. The intrinsic functionality of R/3 is not designed to handle all of these tasks completely by itself, but the R/3 system offers several interfaces for attaching external tools. First, Tivoli Storage Manager is introduced as the main tool for central storage management. Then, after an introduction covering the different data management topics that occur in an R/3 infrastructure environment, an overview of several further tools is given. Some of these tools function as middleware between the R/3 system and Tivoli Storage Manager. The tools are applied to build different data management scenarios within a lab environment (based on an R/3 4.0B system with an ORACLE database). Different tasks are shown, such as backup and restore, database recovery, backup monitoring and tuning, and data archiving. Advanced backup and availability considerations, such as split mirror backup and standby databases, are also covered. Finally, an overview of the application management of R/3 within the Tivoli TME product series is given.
Important: On-line Version Available! This book-formatted version is valid only for the date of publication. The most recent version of this material is maintained on the ITSO Internet Web pages at http://www.redbooks.ibm.com/solutions/systemsmanagement/tsm/data_prot_r3.html
planning, implementing high availability solutions, output management solutions, and backup/restore solutions in an R/3 environment. He holds a Ph.D. in Physics from the University of Heidelberg. Alexandre Hogate is an Advisory I/T Specialist working for IBM Global Services in the Strategic Outsourcing, Technical Support group in CTI, Brazil. He has been with IBM for 4 years, and his areas of expertise include Oracle database administration and development, R/3 basis, and UNIX. He holds a degree in Computer Engineering from the Universidade Estadual de Campinas, Brazil. T S Ranganathan is a Senior IT Architect in the Advanced Technical Support Organization for ERP Americas. He has more than 10 years of experience as an IT architect, including 5 years of R/3 Basis experience. He holds a Masters degree in Computer Science from the Birla Institute of Technology, India. His areas of expertise include R/3 performance and storage solutions for ERP. Thomas Ritter is a Software Engineer in Germany. After joining IBM in 1997, he concentrated mainly on development of backup/restore interface utilities for R/3 databases. His area of expertise is software development. He has 2 years of experience with ADSM implementation and administration. He holds a degree in Computer Science from the Friedrich Schiller University of Jena, Germany. Thanks to the following people for their invaluable contributions to this project: Yoichiro Ishiil International Technical Support Organization, Austin Center Edson Manoel International Technical Support Organization, Austin Center Tivoli Data Protection for R/3 and CommonStore Development Teams, especially Gerd Basel, Arnold Erbsloeh, Hans-Ulrich Oldengott, Thomas Prause, Dagmar Fink, Heiko Haensel, Elmar Meyer zu Bexten, Dirk Seider, R/3 Data Management Solutions/IBM Germany, and Boeblingen Lab Bolivar Ditrtich IBM Brazil
Comments welcome
Your comments are important to us! We want our redbooks to be as helpful as possible. Please send us your comments about this or other redbooks in one of the following ways: Fax the evaluation form found in IBM Redbooks review on page 389 to the fax number shown on the form. Use the online evaluation form found at http://www.redbooks.ibm.com/ Send your comments in an Internet note to redbook@us.ibm.com
Part 1. Concepts
[Figure: Tivoli Enterprise management disciplines, including change management, asset management, operations management, and security management]
Enterprise protection implements an enterprise-wide solution for data protection, disaster recovery, space management, and record retention. It covers all types of heterogeneous system platforms, from mobile systems up to large-scale enterprise servers, and it supports all types of storage resources: locally attached as well as network or SAN attached storage. Flexible storage management policies support business needs, and powerful automation features eliminate labor- and cost-intensive manual storage management tasks.

Strategic business applications are typically complex collections of interdependent application components from both commercial and proprietary software, and they span desktop, distributed, and mainframe computing environments. Application protection is concerned with the availability, performance, and recoverability of application data, and it integrates application data management into enterprise data protection.

A Storage Area Network (SAN) is a new architecture that puts storage on a separate, dedicated network to allow businesses of all sizes to provide access to and share data, regardless of operating system; it is a significant step towards helping customers cope with the explosive growth of information in the e-business age. SAN management is concerned with the efficient management of the Fibre Channel based SAN environment. Physical connectivity mapping, switch zoning, performance monitoring, error monitoring, and predictive capacity planning are among the most important features.

Workgroup protection provides a reliable, easy-to-use backup, recovery, and disaster recovery solution for stand-alone mobile, desktop, and small server systems. It is targeted at small and medium businesses (under 800 nodes) and any enterprise with thousands of remote, stand-alone servers.
Combined with Tivoli Enterprise, Tivoli Storage Management becomes an integrated management suite that transforms IT into a strategic business resource.
[Figure: Tivoli Storage Manager architecture, showing administration clients, Web interfaces, and server systems]
The Tivoli Storage Manager server software builds the data management backbone by managing the storage hardware, providing a secure environment, providing automation, reporting and monitoring functions, and implementing the storage management policies and by storing all object inventory information in the Tivoli Storage Manager database. The Tivoli Storage Manager client software and complementary products implement data management functions like data backup and recovery, archival, hierarchical space management, or disaster recovery. The client software can run on different systems, including laptop computers, PCs, workstations, or server systems. The client and server software can also be installed on the same system for a local backup solution or used to implement LAN-free backup solutions exploiting SAN infrastructure. It is also possible to define server hierarchies or multiple peer-to-peer servers in order to provide a multi-layer storage management solution or an electronic vaulting solution. 1.2.1.1 Tivoli Storage Manager server A main architectural component of the Tivoli Storage Manager server software is the relational semantic database. The storage manager database was especially designed for the task of managing data, and it implements zero-touch
administration. All policy information, logging, authentication and security, media management, and object inventory are managed through this database. Most of the fields are externalized through Tivoli Storage Manager high-level administration commands, SQL SELECT statements, or, for reporting purposes, an ODBC driver. For storing the managed data, the Tivoli Storage Manager server software uses the storage repository. The storage repository can be built from any combination of disk, optical, tape, or robotic storage devices, which are locally connected to the server system or accessible through a SAN. The server software provides built-in drivers for more than 300 different device types from every major manufacturer. Within the storage repository, the devices can operate stand-alone or can be linked together to form one or more storage hierarchies. A storage hierarchy is not limited in the number of levels and can also span multiple servers by using so-called virtual volumes. See 1.2.2.2, Storage and device concepts on page 14 for storage management functions defined on the storage repository.

1.2.1.2 Tivoli Storage Manager Backup/Archive client
Data management functions are implemented using Tivoli Storage Manager client software and complementary Tivoli and non-Tivoli products, which work together with the Tivoli Storage Manager server backbone product. The Tivoli Storage Manager Backup/Archive client, included with the server program, provides the operational backup and archival functions. The software implements the patented progressive backup methodology and unique record retention methods as described in 1.2.2.1, Backup and archival concepts on page 13. All Version 3.7 and above Backup/Archive clients are implemented as multi-session clients, which means that they are able to exploit the multi-threading capabilities of modern operating systems.
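Because the object inventory lives in a relational database, administrators can reach it through the SQL SELECT statements mentioned above. The following sketch uses SQLite purely as a stand-in to illustrate the idea; the table and column names are invented and do not reflect the actual Tivoli Storage Manager schema:

```python
import sqlite3

# SQLite stand-in for the server's object inventory; the schema here is
# invented for illustration and is not the real TSM database layout.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE backups (node_name TEXT, file_name TEXT, "
    "version INTEGER, backup_date TEXT)"
)
db.executemany("INSERT INTO backups VALUES (?, ?, ?, ?)", [
    ("SAP_PROD", "/oracle/C11/system.dbf", 1, "2000-05-01"),
    ("SAP_PROD", "/oracle/C11/system.dbf", 2, "2000-05-02"),
    ("SAP_TEST", "/etc/hosts", 1, "2000-05-02"),
])

# How many backup versions does the server hold per node?
rows = db.execute(
    "SELECT node_name, COUNT(*) FROM backups "
    "GROUP BY node_name ORDER BY node_name"
).fetchall()
print(rows)  # [('SAP_PROD', 2), ('SAP_TEST', 1)]
```

The same kind of aggregate query, issued against the real server database, is what makes centralized reporting possible without touching the storage media themselves.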
This enables backup and archive operations to run in parallel to maximize the throughput to the server system. Some UNIX based clients use a new plug-in architecture to implement an image backup feature for raw device backup. This allows you to back up and recover data that is not stored in file systems or supported database applications. It also provides an additional method to make point-in-time backups of full file systems as single objects and to recover them in conjunction with data backed up using the progressive backup methodology. Depending on the client platform, the Backup/Archive client provides graphical, command line, or Web user interfaces (see Figure 3). Especially for help desk usage, the Web client interface offers a convenient way to perform backup, archive, or recovery operations for a client system from a remote location.
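The multi-session idea can be pictured with a thread pool: several files move toward the server concurrently instead of one after another. A toy sketch (the file names and session count are invented; real clients negotiate session counts with the server):

```python
from concurrent.futures import ThreadPoolExecutor

def backup_session(path: str) -> str:
    # A real session would read the file and ship it to the server;
    # here we only record that the file was handled.
    return f"backed up {path}"

files = ["/oracle/C11/sapdata1", "/oracle/C11/sapdata2", "/oracle/C11/saplog1"]

# Three parallel sessions instead of one serial stream.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(backup_session, files))

print(results)
```

Because each session is independent, throughput scales with the number of sessions until the network or the server's storage devices become the bottleneck.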
1.2.1.3 Tivoli Storage Manager administration
For the central administration of one or multiple Tivoli Storage Manager instances, and the whole data management environment, Tivoli Storage Manager provides command line or Java-based administration interfaces (see Figure 4), also called administration clients.
By using the unique enterprise administration feature, it is possible to configure, monitor, and manage all server and client instances from one administrative interface, known as the enterprise console. It includes:
- Enterprise configuration
- Administrative command routing
- Central event logging functions
Enterprise configuration allows server configurations to be defined centrally by an administrator and then propagated to other servers. This simplifies the
configuration and management of multiple Tivoli Storage Manager servers in an enterprise significantly. Administrative command routing allows administrators to issue administrative commands from one server and route them to other target servers. The commands are executed on the target servers, and the command output is returned and formatted on the server where the command was issued. In an enterprise environment with multiple Tivoli Storage Manager servers, client and server events can be logged to a central management server through server-to-server communications, thereby enabling centralized event management and automation.

1.2.1.4 Tivoli Storage Manager externalized interfaces
Tivoli Storage Manager provides a data management application programming interface (API), which can be used to implement so-called application clients that integrate business applications, like databases or groupware applications, into the Tivoli Storage Management solution, or to implement specialized clients for special data management needs or non-standard computing environments. In general, we distinguish between the Tivoli Data Protection for applications software products and API exploitation through vendor applications. Tivoli Data Protection for applications are separate program products delivered by Tivoli to connect business applications, for example, Oracle, Lotus Notes, Microsoft Exchange, and Microsoft SQL Server, to the Tivoli Storage Manager data management API. Such applications usually have their own storage management interfaces. For more information, see 1.4, Tivoli Data Protection for applications on page 20. On the other hand, there are a great number of vendor applications that exploit the Tivoli Storage Manager data management API by integrating it directly into their software products, either to implement new data management functions or to provide backup and archival functionality on additional system platforms.
Some examples are IBM's CommonStore for SAP R/3 data archival, IBM's BRMS/400, which provides an AS/400 backup solution, and SSSI's ABC OpenVMS for data backup and recovery. In addition to the externalized interfaces and the server database described in 1.2.1.1, Tivoli Storage Manager server on page 6, Tivoli Storage Manager offers multiple interfaces for event logging, reporting, and monitoring the data management environment. In general, activities of the Tivoli Storage Manager server and clients are logged in the server database and can be sent, for reporting and monitoring purposes, to external event receivers by using the event filter mechanism. Potential event receivers are the Tivoli Enterprise framework, SNMP based systems management software packages, the NT event log, user-written applications, and others. To integrate Tivoli Storage Manager storage management with external library management applications, Tivoli Storage Manager offers an external library manager interface. By using this interface, it is possible to integrate the Tivoli Storage Manager server into third-party storage management environments.
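The event filter mechanism can be sketched as a subscription table: each receiver registers the event severities it wants, and the server forwards matching events. The receiver names, severity levels, and event format below are invented for illustration only:

```python
# Invented receiver names and severity subscriptions, for illustration.
receivers = {
    "TIVOLI_FRAMEWORK": {"ERROR", "SEVERE"},
    "NT_EVENT_LOG": {"WARNING", "ERROR", "SEVERE"},
}

def dispatch(severity, message):
    """Return the receivers to which an event of this severity is forwarded."""
    return [name for name, wanted in receivers.items() if severity in wanted]

print(dispatch("ERROR", "tape drive offline"))      # both receivers
print(dispatch("WARNING", "storage pool 80% full"))  # NT event log only
```

Filtering at the source like this keeps low-interest events out of the central consoles while still recording everything in the server's own database.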
1.2.1.5 Tivoli Storage Manager supported environment Tivoli Storage Manager server and client software is available on many different operating system platforms and can exploit different communication protocols to communicate with each other. Figure 5 gives an overview of the supported environment.
[Figure 5: Tivoli Storage Manager supported environment, showing supported platforms (IBM, Data General DG/UX, Fujitsu, Cray UNICOS, Hewlett-Packard HP-UX, Auspex, Apple Macintosh, Linux, SCO, Pyramid Nile, Sequent PTX, Novell NetWare, Windows NT), supported applications (DB2, Informix, Lotus Notes, Microsoft Exchange Server and SQL Server, Oracle7 EBU, Oracle8 RMAN, R/3, Sybase, Tandem Guardian (ETI)), and a storage hierarchy of disk, optical, and tape devices]
The Tivoli Storage Manager server software runs on 8 different operating platforms. At this time, the software is available in the most recent Version 3.7 on the platforms as shown in Table 1, illustrating server platforms, operating system level, and Tivoli Storage Manager server version.
Table 1. Version 3 servers
Operating system level 4.3.1, 4.3.2 10.20, 11.0 4, 5.1 or higher 1 or higher 2.6, 7 3.7 3.7 3.7 3.7
Server version
Operating system level Workstation 4.0 (SP3/4) Server 4.0 (SP3/4) 4.3 2.2,2.3,2.4 WARP 4.0 or high WARP Server 4.0 3.7 3.1.2 3.1 2.1
Server version
The following tables provide an overview of all available Version 3.7 clients at the time of publishing this book. Check the product information for the most current list on the Tivoli Storage Manager home page:
http://www.tivoli.com/products/index/storage_mgr/
There are several variations of UNIX clients. Table 2 details the UNIX clients and the operating system levels that are supported.
Table 2. Version 3.7 UNIX clients
Client Platform     Operating System
AIX                 4.3.1, 4.3.2
HP-UX               11.0
Sun Solaris         2.6, 7
There are three different PC clients available. Table 3 details the PC systems and the operating systems that are supported as clients.
Table 3. Version 3.7 PC clients
PC Client Platform           Operating Systems
Novell NetWare               3.12, 3.20, 4.11, 4.20, 5.0
Microsoft Windows (Intel)    NT 4.0, Win 95, Win 98, Win 2000
Microsoft Windows (Alpha)    NT 4.0
1.2.2.1 Backup and archival concepts
Backup, in Tivoli Storage Manager terms, means creating an additional copy of a data object to be used for recovery. A data object can be a file, a raw logical volume, or a user-defined data object such as a database table. The backup version of this data object is stored separately in the Tivoli Storage Manager server storage repository. Potentially, you can make several backup versions of the data, each version at a different point in time. These versions are closely tied together and related to the original object as a group of backups. If the master data object is invalid or lost, restore is the process of using a backup version of the data to recreate the master copy. The most current version of the data would normally be used, but you can restore from any of the existing backup versions. The number of backup versions is normally controlled. Old backup versions may be automatically deleted as new versions are created, or after a certain period of time. For file level backup, the main difference from many other backup applications is that Tivoli Storage Manager uses the progressive backup methodology. As shown in Figure 6, after an initial copy of all files has been created, Tivoli Storage Manager operates with incremental backups only. In consequence, only those files that have changed since the last backup will be backed up.
[Figure 6: Backup methodologies compared over a backup cycle: full+incremental backup (a full backup followed by incremental backups) and full+differential backup, contrasted with the progressive backup methodology]
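The difference between these schemes can be made concrete with a toy calculation. The sketch below (all sizes, change rates, and the weekly-full interval are invented purely for illustration) compares the data volume moved over two weeks by a weekly-full-plus-daily-incremental scheme against a progressive, incremental-only approach:

```python
# Toy comparison of backup data volume over a two-week cycle.
# All numbers are invented for illustration only.
total_data_gb = 4        # size of the client's file data
changed_gb_per_day = 1   # amount of data that changes each day
days = 14

# Traditional scheme: a full backup every 7th day, incrementals in between.
full_plus_incremental = sum(
    total_data_gb if day % 7 == 0 else changed_gb_per_day
    for day in range(days)
)

# Progressive scheme: one initial full backup, then changed files only.
progressive = total_data_gb + changed_gb_per_day * (days - 1)

print(full_plus_incremental, progressive)  # 20 17
```

The gap widens as the data set grows relative to its daily change rate, because the progressive scheme never re-sends unchanged files.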
The reorganization of the physical storage media to store each client's data physically together on a small number of media (in order to provide faster access in the case of a complete system recovery) is done transparently to the client, and it is completely automated on the server using data meta information stored in the server database.
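This reorganization can be pictured as regrouping scattered objects by client. A minimal sketch, with invented client and object names:

```python
from collections import defaultdict

# Objects as they arrived on media over many backup runs, interleaved
# across clients (names invented for illustration).
scattered = [
    ("client_a", "file1"), ("client_b", "file1"), ("client_a", "file2"),
    ("client_b", "file2"), ("client_a", "file3"),
]

# Regrouping puts each client's objects together, so a full restore of
# one client reads from a small number of media instead of many.
collocated = defaultdict(list)
for client, obj in scattered:
    collocated[client].append(obj)

print(dict(collocated))
# {'client_a': ['file1', 'file2', 'file3'], 'client_b': ['file1', 'file2']}
```

On the server, this regrouping is driven by the meta information in the database, so no client interaction is needed.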
Comparing this file level backup methodology with other methods, like full+incremental or full+differential backup schemes (see Figure 6 on page 13), the advantage of this method is that it prevents backups of unchanged data (unnecessary backups) and reduces and consolidates the recovery tape-set. This also means a more efficient use of storage resources, by not storing redundant data, and a faster recovery, by not restoring multiple versions of the same file. At any point in time, Tivoli Storage Manager allows the creation of a complete set of client files (backup set) on the server system by using the most recent backup versions stored in the server storage repository. These backup sets can be used to retain a snapshot of client files for a longer period of time (Instant Archive) or for LAN-free recovery of a client system, by copying this backup set to portable media and restoring it locally (Rapid Recovery). File archive with Tivoli Storage Manager means creating a copy of a file as a separate file in the storage repository, to be retained for a specific period of time. Typically, you would use this function to create an additional copy of data to be saved for historical reasons. Vital records (data that must be kept for legal or other business reasons) are likely candidates for the archive process. As the archive copy of data is created on the server, you can delete the original copy. Thus, you can use archive to make additional space available on the Tivoli Storage Manager client. However, archive should not be thought of as a complete space management function, because automatic recall is not available. You can access archived data by using retrieve to return it to the Tivoli Storage Manager client if the data is needed at some future time. To locate the archived data within the storage repository, Tivoli Storage Manager allows you to add a description to the data and to compose so-called archive packages.
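The archive-package idea amounts to data plus a searchable description. A minimal illustration (the structure and field names below are invented; the real server keeps this information in its database):

```python
# Each archived object carries a free-text description that a retrieve
# operation can search on. Field names and contents are invented.
archive_packages = [
    {"file": "/data/invoices_1999.dat", "desc": "year-end invoices 1999"},
    {"file": "/data/invoices_2000.dat", "desc": "year-end invoices 2000"},
    {"file": "/tmp/trace.log",          "desc": "debug trace"},
]

def retrieve_candidates(keyword):
    """Return the files whose description matches the search keyword."""
    return [p["file"] for p in archive_packages if keyword in p["desc"]]

print(retrieve_candidates("invoices"))
# ['/data/invoices_1999.dat', '/data/invoices_2000.dat']
```

The description is what lets vital records be found years later without remembering file names or paths.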
Using a search engine and the server database, the description can then be used to determine which data to retrieve. Thus, the difference between backup and archive is that backup creates and controls multiple backup versions that are directly attached to the original file, whereas archive creates an additional file that is normally kept for a specific period of time, as in the case of vital records.

1.2.2.2 Storage and device concepts
All client data managed by Tivoli Storage Manager is stored in the Tivoli Storage Manager storage repository, which can consist of different storage devices, such as disk, tape, or optical devices, and is controlled by the Tivoli Storage Manager server program. To do so, Tivoli Storage Manager uses its own model of storage to view, classify, and control these storage devices, and to implement its storage management functionality (see Figure 7). The main difference between the storage management approach of Tivoli Storage Manager and other commonly used systems is that Tivoli Storage Manager concentrates on managing data objects instead of managing and controlling backup tapes. Data objects can be files, directories, or raw logical volumes that are backed up from the client systems; they can be objects such as tables or records from database applications, or simply a block of data that a client system wants to store on the server storage.
[Figure 7: The Tivoli Storage Manager storage model: data objects on the client system are copied to the server system and can be relocated within the server storage repository]
To store these data objects on storage devices and to implement storage management functions, Tivoli Storage Manager defines several logical entities to classify the available storage resources. Most important is the logical entity called a storage pool. A storage pool describes a storage resource for one single type of media; for example, a disk partition or a set of tape cartridges. Storage pools are the place where data objects are stored. A storage pool is built up from one or more storage pool volumes; in the case of a tape storage pool, for example, a storage pool volume is a single physical tape cartridge. To describe the methods by which Tivoli Storage Manager can access those physical volumes and place data objects on them, Tivoli Storage Manager uses another logical entity called a device class. A device class is connected to a storage pool and describes how volumes of that storage pool can be accessed. Tivoli Storage Manager organizes storage pools in one or more hierarchical structures. This so-called storage hierarchy can span multiple server instances and is used to implement management functions that migrate data objects automatically (completely transparently to the client) from one storage hierarchy level to another; in other words, from one storage device to another. This function may be used, for example, to cache backup data (for performance reasons) on Tivoli Storage Manager server disk space before moving the data to tape cartridges. The actual location of each data object is tracked within the server database. Tivoli Storage Manager also implements storage management functions for moving data objects from one storage volume to another. As discussed in the previous section, Tivoli Storage Manager uses the progressive backup methodology to back up file level data to the Tivoli Storage Manager storage
repository. The reorganization of the data and storage media for fast recovery happens completely within the server. For this purpose, Tivoli Storage Manager implements functions to relocate data objects from one volume to another and to co-locate data objects that belong together, either at the client system level or at the data group level. Another important storage management function implemented within the Tivoli Storage Manager server is the ability to copy data objects asynchronously and to store them in different storage pools or on different storage devices, either locally at the server system or on another server system. Especially for disaster recovery reasons, it is important to have a second copy of the data available in a secure place, in the event that any storage media or the whole storage repository is lost. This function is fully transparent to the client and can be performed automatically within the Tivoli Storage Manager server.

1.2.2.3 Policy concepts
Basically, a data storage management environment consists of three types of resources: client systems, rules, and data. The client systems contain the data to be managed, and the rules specify how the management must occur; for example, in the case of backup, how many versions should be kept, where they should be stored, and so on. Tivoli Storage Manager policies define the relationships between these three resources. Figure 8 illustrates this policy relationship. Depending on your actual needs for managing your enterprise data, these policies can be very simple or very complex.
[Figure 8: Policy relationships: a policy domain contains nodes and machines and holds a policy set; the policy set contains management classes, whose copy groups hold the rules that are bound to the data]
Tivoli Storage Manager has certain logical entities that group and organize the storage resources and define relationships between them. Client systems, or nodes in Tivoli Storage Manager terminology, are grouped together with other nodes that have the same storage management requirements into a policy domain.
The policy domain links the nodes to a policy set, a collection of storage management rules for different storage management activities. A policy set consists of one or more management classes. A management class contains the rule descriptions, called copy groups, and links them to the data objects to be managed. A copy group is the place where all the storage management parameters, such as the number of stored copies, the retention period, the storage media, and so on, are defined. When data is linked to particular rules, it is said to be bound to the management class that contains those rules. Another way to look at the components that make up a policy is to consider them in the hierarchical fashion in which they are defined: the policy domain contains the policy set, the policy set contains the management classes, and the management classes contain the copy groups and the storage management parameters.

1.2.2.4 Security concepts
Because the storage repository of Tivoli Storage Manager is the place where all the data of an enterprise is stored and managed, security is a vital aspect of Tivoli Storage Manager. To ensure that data can be accessed only by the owning client or an authorized party, Tivoli Storage Manager implements, for authentication purposes, a mutual suspicion algorithm, which is similar to the methods used by Kerberos authentication. Whenever a client (Backup-Archive as well as administrative) wants to communicate with the server, authentication has to take place. This authentication works in both directions: the client has to authenticate itself to the server, and the server has to authenticate itself to the client. To do this, every client has a password, which is stored on the server side as well as on the client side. In the authentication dialog these passwords are used to encrypt the communication.
The passwords themselves are never sent over the network, which prevents them from being intercepted. Only if both sides are able to decrypt the dialog is a communication session established. When the communication has ended, or when a timeout period passes without activity, the session is automatically terminated and a new authentication becomes necessary.
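The mutual suspicion idea can be illustrated with a simplified challenge-response sketch. The names and the hashing scheme here are illustrative assumptions; the actual TSM handshake differs in detail, but it likewise never sends the password itself across the network:

```python
# Simplified challenge-response sketch of mutual authentication with a
# shared password; illustrative only. Only proofs derived from the
# password cross the "network", never the password itself.
import hashlib

def proof(password, challenge):
    # A response that can only be produced by a party knowing the password.
    return hashlib.sha256((password + ":" + challenge).encode()).hexdigest()

def mutual_auth(client_password, server_password):
    server_challenge, client_challenge = "nonce-s1", "nonce-c1"
    # The server checks the client's answer to its challenge, and vice versa.
    client_ok = proof(client_password, server_challenge) == proof(server_password, server_challenge)
    server_ok = proof(server_password, client_challenge) == proof(client_password, client_challenge)
    return client_ok and server_ok   # a session is established only if both hold

print(mutual_auth("secret", "secret"))  # True
print(mutual_auth("secret", "wrong"))   # False
```

Because each side verifies the other, a rogue server cannot trick a client into sending data, and a rogue client cannot read another node's data.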
Tivoli Space Manager provides an HSM client, which interfaces with DMAPI and implements the functionality outlined in Figure 9.
[Figure 9: Tivoli Space Manager HSM: the client migrates inactive data to the server and transparently recalls it on access; policy managed, integrated with backup, reducing cost and disk-full conditions]
Tivoli Space Manager maximizes usage of existing storage resources by transparently migrating data from workstation and file server hard drives, based on size and age criteria, to the Tivoli Storage Manager storage repository. When the migrated data is accessed, Tivoli Space Manager transparently recalls it back onto the local disk. The migration of files and the management of migrated files are controlled by policies; however, user-controlled migration and recall are also possible. Tivoli Space Manager's HSM function is fully integrated with Tivoli Storage Manager operational backup. It is possible to specify that a file not be migrated until it has a backup version in the server storage repository. If a file is migrated and a backup is then taken the next day, Tivoli Storage Manager copies the file within the storage repository and does not require a recall to the client system to back it up again, which would cause multiple transfers of the same file across the network.
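The age- and size-based selection of migration candidates, together with the backup-before-migrate policy just described, might be modeled like this (illustrative only; the function and file names are invented):

```python
# Illustrative selection of HSM migration candidates: files are chosen
# by size and age, and (per the policy described above) only if a
# backup version already exists in the storage repository.
def migration_candidates(files, min_size, min_age_days, backed_up):
    """files maps name -> (size in bytes, age in days)."""
    return [name for name, (size, age) in files.items()
            if size >= min_size and age >= min_age_days and name in backed_up]

files = {"big_old.dat": (10_000_000, 90),
         "big_new.dat": (10_000_000, 1),      # too recently used
         "small_old.dat": (100, 365)}         # too small to be worth migrating
print(migration_candidates(files, min_size=1_000_000, min_age_days=30,
                           backed_up={"big_old.dat", "small_old.dat"}))
# ['big_old.dat']
```

After migration, only a stub remains on the local disk; an access to the stub triggers the transparent recall.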
Tivoli Disaster Recovery Manager automatically captures the information required to recover the Tivoli Storage Manager server after a disaster. It assists in preparing a plan that allows recovery in the most expedient manner. This disaster recovery plan contains the information, scripts, and procedures needed to automate server restoration, and it helps ensure quick recovery of your data after a disaster. Tivoli Disaster Recovery Manager also manages and tracks the movement of off-site media to reduce the time required to recover in the event of a disaster. It is able to track media that are stored on-site, in transit, or off-site in a vault, whether it is a manual or an electronic vault, so your data can be easily located if disaster strikes. Client recovery information can also be captured by Tivoli Disaster Recovery Manager to assist with identifying which clients need to be recovered, in what order, and what is required to recover them.
[Figure: Tivoli Decision Support for Storage Management Analysis: information flows from the Tivoli Storage Manager server through an ODBC driver into a reporting database (RDBMS schemas), which the Discovery Administrator accesses through a second ODBC driver]
The information used by the guide is obtained directly from the Tivoli Storage Manager server through its ODBC interface. The information is then transferred to a relational database, another requirement of Tivoli Decision Support for Storage Management Analysis. The databases supported for feeding this information are DB2, MS SQL Server, and Oracle. The database can reside on the same system as Tivoli Storage Manager or Tivoli Decision Support, or on a separate system. The database is then queried to generate the Tivoli Decision Support reports.
[Figure: Tivoli Data Protection for Lotus Domino: the Domino R5 server, its transaction log, and the Domino R5 databases are backed up through the Domino API and the TSM API]
The function of the Tivoli Data Protection for application solutions is to receive application backup and restore requests and to translate them into Tivoli Storage Manager backup and restore operations. The activity is always initiated from the application, which means that backups and restores can also be done while the application is online. However, it also means that only the level of protection implemented in the application's storage management API can be guaranteed. Table 4 shows the available Tivoli Data Protection for application solutions and the platforms, operating system levels, and application levels they covered at the time this book was published.
Table 4. Tivoli Data Protection for application solutions

TDP solution                           Application level             Operating system
TDP for Lotus Notes                    4.5.x, 4.6.0, 4.6.1, 4.6.3    AIX, NT
TDP for Lotus Domino                   5.0.1                         AIX, NT
TDP for Lotus Domino, S/390 Edition    5.0.1                         OS/390
TDP for MS Exchange                    4.0, 5.0, 5.5                 NT
TDP for MS SQL Server                  6.0, 6.5                      NT
TDP for Informix                       IDS 7, UDO 9                  AIX, Sun Solaris
TDP for Oracle                         7.3.4                         AIX, NT, Sun Solaris, HP-UX
TDP for Oracle                         8                             AIX, HP-UX, Sun Solaris, Tru64 UNIX, NT (Intel)
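The common pattern behind these TDP modules, an application-initiated adapter that translates application requests into storage-manager operations, can be sketched as follows (all class and method names are invented for illustration):

```python
# Illustrative sketch of the Tivoli Data Protection pattern: an adapter
# receives application-initiated backup/restore requests and translates
# them into generic storage-manager send/retrieve operations.
class StorageManagerStub:
    """Stands in for the storage-manager API; names are invented."""
    def __init__(self):
        self.objects = {}
    def send(self, name, data):
        self.objects[name] = data
    def retrieve(self, name):
        return self.objects[name]

class AppDataProtection:
    """Backups are always initiated from the application side."""
    def __init__(self, server):
        self.server = server
    def backup(self, db_name, tables):
        for table, rows in tables.items():
            self.server.send(f"{db_name}/{table}", rows)  # online backup
    def restore(self, db_name, table):
        return self.server.retrieve(f"{db_name}/{table}")

tdp = AppDataProtection(StorageManagerStub())
tdp.backup("sales", {"orders": [1, 2, 3]})
print(tdp.restore("sales", "orders"))  # [1, 2, 3]
```

Because the adapter drives the application's own backup API, the database can stay online; but nothing outside what that API exposes can be protected.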
For latest information on Tivoli Data Protection solutions check the following Tivoli Storage Management web page:
http://www.tivoli.com/products/solutions/storage/
[Figure: An R/3 client/server configuration: database and application servers (S/390 DB2 server on OS/390, RS/6000, RS/6000 SP, AIX, Netfinity with Windows NT or other UNIX) connected by a server LAN, with presentation clients (Windows 95, Windows NT, Macintosh) on a client LAN]
The R/3 Technology Infrastructure is divided into the following three layers:
- Database layer
- Application layer
- Presentation layer
The R/3 system is scalable on each of these layers. This guarantees that the entire R/3 system is portable to all hardware, operating systems, and database management systems within the open systems computing concept. Each of these layers is described in depth in the subsections of this section.
2.1.1.1 Database layer The database layer manages an organization's working data. This includes the master data, transaction data, and also the meta-data maintained in the repository that describes the database structure.
The most common implementation of the database layer is a single database server. The R/3 database is a centralized database; that is, all R/3 data resides in a single database and is not distributed across multiple databases. The centralized database mostly runs on a single server; only very few parallel database implementations partition the single database across multiple servers.
2.1.1.2 Application layer The applications that are based on the DBMS constitute the second layer. These work with data that they fetch from the database layer and write the resulting new data back to this layer. The R/3 system applications are processed in this layer, as are custom-written application enhancements developed using the ABAP Workbench. Applications can be fetched from the database as required, loaded into the application layer, and then run from there.
There can be more than one server hosting the application layer. This is mainly to distribute the application processing workload. A special form of an R/3 application server, called the central instance, is unique per R/3 system and hosts the Message server and the Enqueue server processes. These two processes are used for communication purposes between the R/3 application servers and for the handling of locks on data items for all application servers. The central instance and the database can be grouped together on one server or reside on separate servers. The choice is based on the performance requirements at the database server, as the database server is the most heavily utilized and critical component of an R/3 system.
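The lock handling performed by the enqueue server for all application servers can be sketched as a toy central lock table (names and method signatures are invented; the real enqueue service is far richer):

```python
# Illustrative sketch: a central enqueue service holds the lock table
# for all R/3 application servers, so two servers cannot change the
# same data item concurrently.
class EnqueueServer:
    def __init__(self):
        self.locks = {}  # data item -> owning application server

    def enqueue(self, item, server):
        if self.locks.get(item, server) != server:
            return False             # another server holds the lock
        self.locks[item] = server    # grant (or re-grant) the lock
        return True

    def dequeue(self, item, server):
        if self.locks.get(item) == server:
            del self.locks[item]     # release only the caller's own lock

enq = EnqueueServer()
print(enq.enqueue("MARA-0001", "appserver1"))  # True: lock granted
print(enq.enqueue("MARA-0001", "appserver2"))  # False: item is locked
enq.dequeue("MARA-0001", "appserver1")
print(enq.enqueue("MARA-0001", "appserver2"))  # True: lock was released
```

Centralizing this table in one instance is exactly why the central instance is unique per R/3 system.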
2.1.1.3 Presentation layer The presentation layer, also known as the user interface, is positioned closest to the user. The end user accesses the system by a graphical user interface. This graphical user interface, called the SAPGUI, is used to connect to an application server of the R/3 system.
All transactions are executed through the SAPGUI on the application server, which accesses the database server to retrieve or update the data. In a typical customer environment, there is more than one R/3 system to consider. This is typically a system landscape of several R/3 systems for different purposes. The next section discusses these R/3 systems.
2.1.2.3 Production system The production system only contains released versions of your development work. No development takes place in this R/3 System. The developments and customization parameters can be used in a production system where the actual business processes are reproduced and the real data is entered.
The separation of the development system and the production system allows for developing and testing applications without interfering with production operation. It also allows for a smooth and quick upgrade of the production system, since the results of the lengthy modification adjustment in the development system can be used automatically. Other R/3 systems, such as training or demo systems, may also be needed for the presentation of completed developments. Many installations do not require a separate R/3 system for each of these functions; the development system and the quality assurance system (for preparing production operation) can be the same R/3 system. Figure 13 shows such a complex R/3 system scenario.
[Figure 13: A complex R/3 system landscape: development system D01 and test system T01 feed quality assurance systems QA1, QA2, and QA3, which in turn deliver to production systems P01, P02, and P03; a separate HR landscape consists of development/test system HD1 and production system HP1]
There are three production R/3 systems, P01, P02, and P03, reflecting the company's subsidiaries in the US, Europe, and Asia. Each of these production systems is delivered from its own quality assurance system: QA1, QA2, and QA3. The production R/3 system P01 exchanges data with an R/3 system HP1, which runs the human resources module. All development for the production systems (P01, P02, P03) is done in system D01. In system T01, all development is tested before it is transported to QA1, QA2, and QA3. Development and test for the human resources system is done in system HD1. The data in all these systems has to be managed. An understanding of the methods used to store R/3 data in relational databases is essential for designing data management solutions.
In 2.2, R/3 and relational databases on page 27, we will discuss relational database structures for storing R/3 data and relational database systems (RDBMS) commonly used in an R/3 landscape. The information in this section provides the foundation for the later sections of this chapter and the rest of the book.
[Figure 14: Logical and physical database structures: the database contains the PSAPBTABD tablespace (datafiles btabd.data1 and btabd.data2, 1 GB each) and the PSAPLOADI tablespace (datafile loadi.data1, 1 GB); the datafiles reside on physical volumes 2, 3, and 4]
Figure 14 illustrates the following:
- Each database is logically divided into one or more tablespaces. A tablespace is used to group related logical structures together. For example, tablespaces commonly group all of an application's objects to simplify some administrative operations.
- A tablespace can be online (accessible) or offline (not accessible). A tablespace is normally online so that users can access the information within it. However, a tablespace may sometimes be taken offline to make a portion of the database unavailable while allowing normal access to the remainder of the database. This makes many administrative tasks easier to perform.
- One or more datafiles are explicitly created for each tablespace to physically store the data of all logical structures in the tablespace. The datafiles are stored in a UNIX file system (JFS), which resides on a separate logical volume. The logical volume can span more than one physical volume. A physical volume can be either a physical hard disk or a set of physical hard disks that are viewed as one by AIX.
- The combined size of a tablespace's datafiles is the total storage capacity of the tablespace (the PSAPBTABD tablespace has 2 GB of storage, while the PSAPLOADI tablespace has 1 GB).
- The combined storage capacity of a database's tablespaces is the total storage capacity of the database (3 GB).
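The capacity arithmetic in the last two points can be checked with a short sketch (tablespace and datafile names taken from Figure 14; sizes in GB):

```python
# Tablespace capacity = sum of its datafiles; database capacity = sum
# of its tablespaces, as described in the text above.
tablespaces = {
    "PSAPBTABD": {"btabd.data1": 1, "btabd.data2": 1},
    "PSAPLOADI": {"loadi.data1": 1},
}
tablespace_gb = {ts: sum(dfs.values()) for ts, dfs in tablespaces.items()}
database_gb = sum(tablespace_gb.values())
print(tablespace_gb, database_gb)  # {'PSAPBTABD': 2, 'PSAPLOADI': 1} 3
```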
2.2.1.2 Physical database structures This section explains the physical database structures of a database, including datafiles, log files, and control files. Figure 15 illustrates the directory structure of a database where the data, log and control files are stored.
[Figure 15: The instance home directory /<DB>/<SID>, where <DB> is oracle or db2 and <SID> is the R/3 system identifier. For Oracle, it holds the SAP R/3 active log files (in two copies) and the archive log files; for DB2 UDB, it holds the active log files in log_dir (linked to /db2/<SID>/log_dir) and the archive log files in log_archive]
Figure 15. Database instance directories and files for Oracle and DB2 UDB
Datafiles Every database has one or more physical datafiles. A database's datafiles contain all the database data. The data of logical database structures such as tables and indexes is physically stored in the datafiles allocated for a database.
The characteristics of datafiles are:
- A datafile can be associated with only one database.
- Datafiles can have certain characteristics set to allow them to extend automatically when the database runs out of space.
- One or more datafiles form a logical unit of database storage called a tablespace, as discussed earlier in this chapter.
The Use of Datafiles The data in a datafile is read, as needed, during normal database operation and stored in the memory cache of the database. For example, when a user wants to access some data in a table of a database, if the requested information is not already in the memory cache for the database, it is read from the appropriate datafiles and stored in memory.
Modified or new data is not necessarily written to a datafile immediately. To reduce the amount of disk access and increase performance, data is pooled in memory and written to the appropriate datafiles all at once.
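This deferred-write behavior can be modeled with a small sketch (an illustrative write-back cache, not an actual database buffer manager):

```python
# Illustrative sketch of deferred writes: changes accumulate in a
# memory cache and are flushed to the datafile in batches rather
# than on every modification, reducing disk accesses.
class BufferedDatafile:
    def __init__(self, flush_after=3):
        self.on_disk = {}        # stands in for the datafile
        self.dirty = {}          # modified pages still in memory
        self.flush_after = flush_after
        self.disk_writes = 0

    def write(self, page, value):
        self.dirty[page] = value
        if len(self.dirty) >= self.flush_after:
            self.flush()

    def flush(self):
        self.on_disk.update(self.dirty)  # one batched disk access
        self.disk_writes += 1
        self.dirty.clear()

    def read(self, page):
        # Serve from the memory cache first, then from disk.
        return self.dirty.get(page, self.on_disk.get(page))

df = BufferedDatafile()
for p in range(6):
    df.write(p, f"v{p}")
print(df.disk_writes)  # 2 batched disk writes instead of 6
```

The cost of this optimization is exactly what the log files in the next section protect against: modified data that has not yet reached the datafiles when a failure occurs.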
Log Files Every database has a set of two or more log files. The set of log files for a database is collectively known as the database's log. The primary function of the log is to record all changes made to the data. Should a failure prevent modified data from being permanently written to the datafiles, the changes can be obtained from the log and work is never lost.
Log files are critical in protecting a database against failures. To protect against a failure involving the log itself, some databases like Oracle allow a multiplexed redo log so that two or more copies of the redo log can be maintained on different disks. For other databases, the disk containing log files must be mirrored to protect against failure. Log files can be broadly classified as active or archived log files. An active log file is used by the database to log transaction changes. Active logs are used by crash recovery to prevent a failure (system power, application error) from leaving a database in an inconsistent state. An archived log file is generally used for recovery to a specific point in time. Some databases like DB2 UDB further differentiate archived log files as online or offline depending on whether the archived log files are saved and moved from their default location.
The use of log files The information in a log file is used only to recover the database from a system or media failure that prevents database data from being written to a database's datafiles.
For example, if an unexpected power outage abruptly terminates database operation, data in memory cannot be written to the datafiles and the data is lost. However, any lost data can be recovered when the database is opened, after power is restored. By applying the information in the most recent log files to the database's datafiles, the database is restored to the time at which the power failure occurred. Log files are also used to recover from user error, for example, accidentally deleting a table from the database.
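The roll-forward idea, reapplying logged changes to the restored datafiles, can be sketched as follows (the log record format is an invented simplification):

```python
# Illustrative sketch of roll-forward recovery: every logged change is
# reapplied, in order, to the last known state of the datafiles.
def recover(datafile_state, log):
    """log is a list of (operation, key, value) records."""
    state = dict(datafile_state)
    for op, key, value in log:
        if op == "set":
            state[key] = value
        elif op == "delete":
            state.pop(key, None)
    return state

# The datafiles as of the crash are stale; the log holds the changes
# that were still in memory when power was lost.
stale = {"balance": 100}
log = [("set", "balance", 150), ("set", "order", "A1"), ("delete", "order", None)]
print(recover(stale, log))  # {'balance': 150}
```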
Control files Most databases have a control file. A control file contains entries that specify the physical structure of the database. For example, it contains the following types of information:
- Database name
- Names and locations of the database's datafiles and log files
- Time stamp of database creation
Like the log files, the database allows for copies of the control file for protection.
The use of control files Every time an instance of a database is started, its control file is used to identify the database and log files that must be opened for database operation to proceed. If the physical makeup of the database is altered (for example, a new datafile or log file is created), the database's control file is automatically modified by the database to reflect the change. A database's control file is also used if database recovery is necessary.
The relationships between tablespaces, datafiles, and databases are described in 2.2.1, RDBMS fundamentals on page 27. This section provides a summary of the terms for the database and operating system components that are crucial for R/3 data management.
Tablespace A tablespace provides the link between the logical view of a database as seen by a user and the datafiles that the database uses to store data.

Logs The log files record the changes made to the database. Equivalents in the different RDBMSs are:
Oracle - Redo logs
INFORMIX - Logical logs
Sybase - Transaction logs
DB2 UDB - Log files
Physical backup A physical backup of a database involves saving one or more datafiles of the database. It is referred to as a physical backup because the physical structure of the database (from the operating system perspective) is backed up, regardless of its logical contents.

Logical backup A logical backup of a database unloads (exports) the contents of the database to file system files. The export operation is a logical backup because the RDBMS creates file system files to store the logical units of the database (records), regardless of their physical structure.

Restore The process of restoring data from a backup copy is referred to as a restore. It excludes the process of bringing the database up to date by applying log files.

Recover The term recover includes both the process of restoring datafiles and the process of bringing the data up to date by applying log files.

Raw devices Under UNIX, data can be stored on disk through two techniques: raw devices and the journaled file system. All the UNIX-based RDBMSs allow both techniques. Raw device files are similar to ordinary files in that they have path names that appear in a directory and have the same access protection
as ordinary files, and they can be used in almost every way that ordinary files can be used. However, there is an important difference between the two: an ordinary file is a logical grouping of data recorded on disk, while a raw device file corresponds to a device entity, such as a large section of a disk drive defined as a logical volume at the UNIX level. Block special files are provided for logical volumes and disk devices on the operating system; they are solely for system use in managing file systems, paging devices, and logical volumes, and they reside in the /dev directory.
Journaled file system The journaled file system (JFS) is a hierarchical structure (file tree) of files and directories that uses database journaling techniques to maintain its structural consistency. Each journaled file system resides on a separate logical volume. While the logical volume defines the allocation of disk space down to the physical-partition level and is managed using raw device files, the file system and other higher-level software components, such as the Virtual Memory Manager, provide higher levels of data management. File systems are not a critical requirement in an RDBMS environment, as the RDBMS is responsible for maintaining data integrity and consistency through the use of log files.

Archive The term archive has different meanings: it is used as an equivalent to backup, or to mean a process by which a file is migrated through a storage hierarchy. In the context of a backup procedure defined in Tivoli Storage Manager (TSM), R/3 backups use the archive feature as defined in TSM. In all other references, it indicates a process of storing data or objects that are no longer required for regular use but may need to be retrieved at some future time; this definition is equally applicable to log files and data objects in the database. The space occupied by this data is then released for regular use.
[Table: Supported combinations of operating systems (OS/390, AIX, Compaq Tru64, Dynix, HP-UX, Reliant, Solaris, NT Alpha, NT Intel, OS/400) and database systems (SAP DB/ADABAS, DB2 UDB, DB2 for OS/390, DB2 for OS/400, Informix, MS SQL Server, Oracle) for R/3; for some combinations the application server runs on AIX or NT/Intel (Solaris in future)]
R/3 data protection
The database structures and terminology discussed in the previous section illustrate the critical components of a database. The loss of these components would result in the failure of key business operations that use the R/3 system. This could mean loss of revenue and significant downtime to recover to a working state. Hence, it is important to develop and implement procedures to prevent such failures. This section describes the main causes of data loss in an R/3 system and the hardware and software mechanisms that can be used to protect the R/3 database.
A statement error can also occur if an extensive operation entirely fills up the rollback segment. The reason for such an error is generally incorrect programming. The database administrator does not have to intervene in order to execute a recovery after a statement error.
2.2.4.2 Process errors A process error occurs when a user process is canceled. The database instance is not normally affected by the termination. The database process monitor responds by canceling the database changes made by current transactions and releasing the resources which were used by the process. Work with the database system can then be continued normally.
The database administrator does not have to intervene to carry out a recovery after a process error.
2.2.4.3 Instance error An instance error occurs when the database instance and the corresponding background processes can no longer run. An instance error can result from a hardware problem (such as a power failure) or a software error (for example, a crash of the operating system or of a database background process). An instance error generally results in an immediate abnormal termination of the entire instance. Even if the database system remains active, the data in the database buffer is lost, and the instance can no longer be shut down in the conventional way. Since usually only an abnormal termination is possible, the instance must be recovered. Only transactions that completed normally, that is, committed transactions, can be processed; all others are rolled back.
The database system automatically carries out the recovery of the instance when it is restarted. This is called instance recovery or crash recovery, and uses the entries in the appropriate active log files to do so. The database administrator does not need to intervene during the recovery, provided no database files were changed.
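The redo/undo decision made during this crash recovery can be sketched as follows (the log and commit records are invented simplifications of what a real database keeps in its active logs):

```python
# Illustrative sketch of crash recovery: changes from committed
# transactions found in the active log are redone; changes from
# uncommitted transactions are discarded (rolled back).
def crash_recover(datafiles, active_log):
    """active_log is (entries, committed): a list of (txid, key, value)
    records plus the set of committed transaction ids."""
    entries, committed = active_log
    state = dict(datafiles)
    for txid, key, value in entries:
        if txid in committed:          # redo only committed work
            state[key] = value
    return state                       # uncommitted changes are lost

log = ([("t1", "a", 1), ("t2", "b", 2), ("t1", "c", 3)], {"t1"})
print(crash_recover({}, log))  # {'a': 1, 'c': 3} -- t2 is rolled back
```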
2.2.4.4 User errors A user error occurs when a user deletes data, for example, deletion of a table or program that is required for further system operation, either by mistake or due to lack of knowledge.
Such errors can be corrected if the deleted table or program has been backed up prior to deletion through R/3 export utilities. This backup can be used to restore the state of the object at the time of the export; however, possible database inconsistencies must be taken into account. R/3 also uses versioning for objects in the R/3 data dictionary and repository, so it may be possible to continue working with an earlier version of the object (ideally, one that has not been changed recently) and restore it later. In general, the database-specific export/import tools cannot be used to recover a lost SAP object. The reason is that SAP database tables are often shared system-wide, and an import of such a table would risk overwriting the work of other users.
The objects cannot be recovered by simply restoring an earlier database backup, because the recovery of a lost object would require an incomplete recovery up to the moment the user error occurred. This is also known as a point-in-time recovery and is described in 2.2.6.1, Database recovery procedures on page 41. Any changes made to the tablespace from that moment on would be lost.
Warning
Recovery from user errors is restricted to the point prior to the user error and all database changes made subsequent to the error are lost. None of the mechanisms designed to protect data and provide availability, as described in this book, will provide recovery to a point after the user error. Therefore, it is of paramount importance to train users and implement proper user security procedures in order to prevent user errors.
2.2.4.5 Media errors Media failures fall into two general categories: permanent and temporary.
Permanent media failures are serious hardware problems that cause the permanent loss of data on the disk. Lost data cannot be recovered except by repairing or replacing the failed storage device and restoring backups of the files stored on the damaged device. Temporary media errors are hardware problems that make data temporarily inaccessible; they do not corrupt the data. For example, a disk controller failure necessitates replacement of the disk controller, after which the data on the disk can be accessed again. Another instance of a temporary failure is a power failure on the storage device; when power is returned, the storage device and all associated data are accessible again.
In most cases, the database must be recovered after a permanent media failure. The recovery strategy depends on the type of damage to the database; therefore, the cause of the errors must be analyzed and understood before proceeding with the recovery. Media errors usually mean recovering from the loss of datafiles, active log files, or control files (in the case of Oracle), from errors in the archiving process of active log files, or from a combination of some or all of these.
Note:
To avoid damage to the logs due to disk failure, the active (online) logs should be mirrored. The preferred way to mirror the online logs is to use ORACLE's own built-in functionality. Mirroring of the active logs of DB2 UDB can be achieved by operating system or hardware methods. Figure 17 shows the various files in an R/3 system that need to be protected from failure.
Figure 17. Files in an R/3 system that need to be protected from failure (application files, database files, and mirrored online redo log files such as log_g1_m1.dbf and log_g2_m2.dbf)
Apart from the files described in 2.2.1.2, Physical database structures on page 29, other files to be protected in an R/3 system environment are the R/3 and database executables and related configuration and log files. Finally, the files residing in the R/3 transport directory, which is used to apply and control customizations to the R/3 database, have to be backed up regularly. Data protection can be achieved by building redundancy into the hardware environment, for example, by mirroring the data on disk using RAID 1, or by using RAID 5 technology, which handles disk failures by rebuilding data on a spare disk. A higher level of protection from media errors is achieved by taking backups of the data to offline media and developing fast and reliable recovery procedures, which are described in 2.2.5, R/3 backup on page 36, and 2.2.6, R/3 recovery on page 40.
- State of the database during backup
- Media to be used for backup
- Data security and availability required for the database

These are just guidelines to consider when organizing database backup and administration. Adapting the procedures to individual circumstances requires detailed knowledge. Backups can be started manually by the DBA or operator, or automated at the operating system level, within R/3, or through external backup programs.
2.2.5.1 Backup modes
Two different modes of backup are possible, depending on the state of the database: online and offline.
Online backup
A backup performed with the database running (that is, users can continue to work normally) is classified as an online backup. Although the system is online during the backup, heavy transaction or batch loads can affect the performance of both the backup and R/3.
The prerequisite for an online backup is that a record of the starting point of the backup is maintained, so that a unique restart point is defined from which the recreation of all the files of a tablespace can be carried out in case of an error. Since there are changes to the database during the backup, the database vendor has to supply mechanisms which guarantee that the backup is consistent. For example, the Oracle database freezes the tablespace datafiles during the time they are backed up and records all database changes in the log files instead. When an individual datafile or online tablespace is backed up, Oracle stops recording the occurrence of checkpoints in the headers of the online datafiles that are being backed up. This means the database is in hot backup mode, and when a datafile is restored, it has "knowledge" of the most recent datafile checkpoint that occurred before the online tablespace backup, but not of any datafile checkpoints that occurred during it. As a result, Oracle asks for the appropriate set of log files to apply in the event that recovery is needed. After the datafile copy is completed, Oracle advances the file header to the current database checkpoint. Oracle also provides an alternative mechanism to back up data without the use of hot backup mode, through the Oracle Recovery Manager (RMAN) program (refer to Oracle and SAP documentation for details). The Recovery Manager program issues commands to an Oracle server process to back up and restore a datafile, control file, or archived log. When the Oracle server process reads datafiles, it detects any split blocks and re-reads them to get a consistent block. Hence, tablespaces should not be in hot backup mode when using Recovery Manager to perform online backups.
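The behavior of the datafile header during hot backup mode can be sketched as follows. This is an illustrative model, not Oracle code: the class and method names are invented, and the checkpoint values are placeholders. The point is only that the header checkpoint is frozen while the file is in hot backup mode, so a restored copy knows the earliest log it must apply.

```python
class Datafile:
    """Toy model of a datafile header during an online (hot) backup."""

    def __init__(self, name, checkpoint=0):
        self.name = name
        self.checkpoint = checkpoint   # last checkpoint recorded in the header
        self.in_hot_backup = False

    def begin_backup(self):
        # The header checkpoint is frozen at the value it had when backup began.
        self.in_hot_backup = True

    def record_checkpoint(self, scn):
        # While in hot backup mode, checkpoints are not advanced in the header;
        # the changes are recorded in the log files instead.
        if not self.in_hot_backup:
            self.checkpoint = scn

    def end_backup(self, current_scn):
        # After the copy completes, the header advances to the current checkpoint.
        self.in_hot_backup = False
        self.checkpoint = current_scn


df = Datafile("psapbtabd.data1", checkpoint=100)
df.begin_backup()
df.record_checkpoint(150)                # ignored: file is in hot backup mode
backup_copy_checkpoint = df.checkpoint   # the copy "knows" checkpoint 100
df.end_backup(current_scn=200)
# A restore of the copy would need all logs from checkpoint 100 onward.
```

The restored copy carries the pre-backup checkpoint, which is exactly why the database asks for the set of log files generated during and after the backup.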
Offline backup
During an offline backup, the R/3 system and the database are shut down. As there are no changes to the database during the backup, a consistent backup is assured.
Note:
In a production R/3 system, archive log files are created at a very high rate. It is necessary to monitor the space available in the directory containing the archive logs, as the database suspends operation until more space is made available for log creation.
2.2.5.2 Backup duration
Backups often take a great deal of time, and customer requirements may dictate a very short window of time for backup purposes. There are several methods of reducing backup times, depending on available resources.
Data backups can be performed in parallel if several backup devices are available. Some enterprise storage management programs use multiple sessions and multiple paths to the backup devices to further reduce backup time. Another method of reducing the backup time is to back up to disk. These backups can then be copied from disk to tape at a later time with lower priority. This may be a viable option if the number and speed of the tape devices available is insufficient for parallel backups.

The storage management software and backup devices typically reside on a separate dedicated server to minimize the performance impact on the production server. In such a case, it is important to provide a very fast and reliable network connection between the production server and the storage management server. The amount of data transferred through the network connection should be minimized to provide optimum backup and restore performance. This can be achieved through mirroring capabilities provided by storage subsystem vendors. The data from the production server is copied onto another set of disks in a RAID 1 or RAID 5 configuration. As this copy of the data is made available locally to the storage management server, network bandwidth use is minimized, and backups can be scheduled on the storage management server without affecting the performance or availability of the production server. The concepts behind these techniques are discussed in 2.3, R/3 data availability on page 44, and lab-tested procedures are described in Chapter 10, Split mirror implementation in R/3 on page 253. The ability to make backups of large databases depends on the capacity and maximum throughput of the tape devices, disk access times, maximum throughput of the I/O and system buses, and network bandwidth.
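The parallel-session idea can be sketched as follows. This is a minimal illustration, not storage management software: the device names (`rmt0`, `rmt1`) and file names are placeholders, and the "backup" is a stub. Files are distributed round-robin across the available devices and processed concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

def backup_file(device, filename):
    # A real session would stream the file's contents to the tape device here.
    return (device, filename)

files = ["data1.dbf", "data2.dbf", "data3.dbf", "data4.dbf"]
devices = ["rmt0", "rmt1"]   # one parallel session per backup device

with ThreadPoolExecutor(max_workers=len(devices)) as pool:
    # Round-robin distribution: file i goes to device i mod len(devices).
    jobs = [pool.submit(backup_file, devices[i % len(devices)], f)
            for i, f in enumerate(files)]
    results = [job.result() for job in jobs]
```

With two devices, the elapsed time approaches half that of a single-device backup, provided the I/O path to each device is not the bottleneck.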
2.2.5.3 Backup types
There are several ways to back up a database. Some databases, like Oracle, back up either by copying physical data and log files at the file system level, or by extracting data out of the database and storing it as a set of physical files in a vendor-proprietary format. DB2 UDB provides the latter method for backing up data files, and the former method for backing up archive log files.
Note
Oracle defines the backup by file-level copy as an image copy and the backup by data extraction as a backup set, while DB2 UDB calls the backup by data extraction from the database a backup image. However, most databases classify backup types by the amount (complete, incremental, or partial) and type of data (data files, archive log files, or control files) to be backed up.
Full or complete database backups
A full database backup is a backup of all datafiles and the control files that constitute a database. Backups without the log files are performed when the database is online, while an offline full backup also backs up the active log files while the database is closed and unavailable for use.

Incremental backups
Some backup tools allow an incremental backup, where only the changes that have been made since the last complete backup are saved. An incremental backup strategy saves storage space and time and optimizes the performance of the data backup. This is of special advantage for large databases that have few changes on a daily basis.
To be able to make an incremental backup, a reference backup called the complete backup (level 0) must be made. The complete backup (level 0) of the database backs up all database blocks that have already been used. Incremental backups can then be made. The incremental backup (level 1, cumulative) of the database backs up all database blocks that have changed since the last complete backup (level 0). The incremental backup option is not available for DB2 UDB in an R/3 environment.
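The level 0 / level 1 relationship can be sketched as follows. This is an illustrative simplification with invented block numbers and values; real databases track changed blocks internally, but the selection rule is the same: a cumulative level 1 backup contains exactly the blocks changed since the last level 0 backup.

```python
# All database blocks that have already been used (block number -> contents).
used_blocks = {1: "a", 2: "b", 3: "c", 4: "d"}

# Level 0: back up every used block; this is the reference point.
level0_backup = dict(used_blocks)

# Track which blocks change after the level 0 backup.
changed_since_level0 = set()

def write_block(blk, value):
    used_blocks[blk] = value
    changed_since_level0.add(blk)

write_block(2, "b'")
write_block(4, "d'")

# Level 1 (cumulative): only the blocks changed since the level 0 backup.
level1_backup = {blk: used_blocks[blk] for blk in changed_since_level0}
```

To restore, the level 0 backup is applied first and the level 1 backup on top of it, which is why a level 1 backup alone is useless without its level 0 reference.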
Partial backups
A partial backup is a backup of part of a database. The backup of an individual tablespace's datafiles or the backup of a control file are examples of partial backups. Partial backups are useful only when the database's log is operated in archiving mode, which provides for the use of archive logs for recovery. A variety of partial backups can be taken to accommodate any backup strategy. For example, datafiles and control files can be backed up when the database is open or closed, or when a specific tablespace is online or offline.
2.2.5.4 Backup media and tools
Standard backup media are backup devices that are locally connected to the database server. Depending on other factors, such as backup and restore performance, downtime required for backup and recovery, disaster recovery requirements, storage capacity and performance, and centralized storage management, it is possible to use a variety of backup devices and software.
The basic requirement for a backup can be met by using offline media, such as tape, accessible through tape devices connected to the production server. Tape devices also provide hardware compression, which reduces backup time,
because more data can be written to a single volume. Tape units with hardware compression are very common. For backing up large volumes of data, as is the case in an R/3 environment, a tape library device consisting of multiple tape drives and a large number of tapes is a basic requirement. These tape libraries are usually managed by storage management software, which uses interfaces provided by the application vendor to transparently automate backup procedures. These centralized storage management programs provide media management capabilities and support for backup and restore tools provided by the application vendor, database vendor, or third parties. A detailed description of the various tools used in an R/3 environment is provided in Chapter 3, R/3 data management tools on page 53.
Figure 18. Restore and recovery timeline. Assumption: the system crashes at t1, and the last backup was taken at t0. At t2, the decision is made that a restore is needed. At t3, the restore is completed; now all logs generated in the time interval [t0, t1] are applied. At t4, the database recovery is finished. At t5, the SAP R/3 system is active again.
As mentioned earlier in 2.2.4, Types of errors affecting database operation on page 33, depending on the type of error, a recovery can either be carried out automatically or must be performed by the user or database administrator.
It is therefore important to identify the exact type of error before intervening in any way, in order to act correctly and adequately. The following sections describe the recovery strategies for such errors.
2.2.6.1 Database recovery procedures
The recovery procedures to be applied in order to recover from a media or logical (user) error depend on the type of damage and on the database configuration for recovery.
When recovering a data file or a database, the file or the database is made current, or current to a specific point in time. This process of recovery is also known as "rolling forward". A database is configured for roll-forward recovery if it is running in ARCHIVELOG mode (Oracle) or, in the case of DB2 UDB, if the logretain database configuration parameter is set to "RECOVERY", the user exit database configuration parameter is enabled, or both.
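The configuration rule above can be expressed as a small decision function. This is a hedged sketch: the parameter names mirror the description in the text, not an actual database API, and the returned option names are informal labels.

```python
def recovery_options(oracle_archivelog=False, db2_logretain=None,
                     db2_userexit=False):
    """Which recovery approaches a configuration permits (illustrative)."""
    roll_forward = (oracle_archivelog
                    or db2_logretain == "RECOVERY"
                    or db2_userexit)
    if roll_forward:
        return ["complete recovery", "point-in-time recovery"]
    # Without roll-forward recovery, the only option is to restore a
    # consistent backup; all changes made after that backup are lost.
    return ["restore consistent backup"]
```

For example, `recovery_options(oracle_archivelog=True)` permits both complete and point-in-time recovery, while the default configuration permits only the restore of a consistent backup.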
Note:
If the database is not configured for roll-forward recovery, the only option available to recover from an error is to restore the database from a consistent database backup. The control file and all datafiles are restored from a consistent backup and the database is opened. All changes made subsequent to the backup are lost. The different approaches to roll-forward recovery are discussed in this section.
Complete recovery
A recovery from a failure where there is no loss of data is called a complete recovery. Recovery from user errors, as described in the previous section, is usually not a complete recovery, as all changes made after the error are lost.
A complete recovery from the loss of one or more datafiles, and/or all control files (in the case of Oracle), can be performed if all the required data and log files (both active and archived) are available. For example, if one or more datafiles are lost, a tablespace or datafile recovery should be performed. The tablespaces or datafiles are taken offline, restored from backups, recovered and placed online. No changes are lost and the database can be available during the recovery. If one or more datafiles and/or all control files are lost, and the database is not open, then all lost files are restored from backups, recovered, and the database is opened. No changes are lost, but the database is unavailable during recovery. Figure 19 illustrates a complete recovery scenario.
Figure 19. Complete recovery scenario: (1) a media error occurs; (2) the damaged data files are restored from a complete offline/online backup; (3) the database is recovered using the log files.
Step 1 in Figure 19 is the point where a media failure occurs. Step 2 is to identify the damaged files, take the affected tablespaces offline, and restore only the damaged data files. Step 3 starts the recovery process, which uses the active and/or the archived log files to recover completely. It is recommended to shut down the R/3 System before starting the recovery procedure. Tables are used so intensively in the SAP System that it is generally impossible to set the affected tablespace to OFFLINE without terminating the activities of many users.
Note
The database should be closed for recovery in case of:
- permanent media failure (disk failure)
- loss of any datafiles of the SYSTEM tablespace or the PSAPROLL tablespace, in the case of Oracle
- loss of any containers of the system catalog table space, in the case of DB2 UDB
If an archive log or online log that is required for recovery is also lost, then a point-in-time database or tablespace recovery is required.
Incomplete recovery
A point-in-time or incomplete database recovery is required when an archive log or active log required for recovery is lost, or to recover from user errors.
Data loss due to an incomplete recovery can be avoided by implementing proper backup procedures as recommended by SAP and the database vendor. All datafiles and the control file are restored from backups, which are taken before the point-in-time, and recovered to the point-in-time by using active log files and
archived log files, and the database is opened. All changes made subsequent to the point-in-time are lost. An incomplete recovery produces a version of the database as it was at some time in the past. Incomplete media recovery must either continue on to become a complete media recovery, or be terminated at a desired point-in-time or change level. With Oracle, the database is opened at this point with the open resetlogs operation, which creates a new incarnation of the database. The database must be closed for incomplete media recovery operations. Figure 20 illustrates a point-in-time recovery scenario.
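The essence of a point-in-time recovery can be sketched in a few lines: restore the last complete backup, then roll forward using the logged changes, but stop at the chosen point in time. The table names, timestamps, and values below are invented for illustration; the log is modeled as a time-ordered list of changes.

```python
backup = {"T1": "v0"}                       # state captured by the complete backup
log = [(10, "T1", "v1"),                    # (timestamp, key, new value)
       (20, "T2", "x"),
       (30, "T1", "bad-change")]            # e.g. the user error to be avoided

def point_in_time_recover(backup, log, until):
    db = dict(backup)                       # restore datafiles from the backup
    for ts, key, value in log:              # roll forward using the archived logs
        if ts > until:
            break                           # changes after the point in time are lost
        db[key] = value
    return db

db = point_in_time_recover(backup, log, until=25)
```

Recovering to `until=25` incorporates the legitimate changes at times 10 and 20 but discards the erroneous change at time 30, which is exactly the trade-off the text describes: everything after the chosen point in time is lost.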
Figure 20. Point-in-time recovery scenario: (1) the error is identified; (2) the complete backup is restored without control files and active log files; (3) structural changes are recovered; (4) the database is recovered using the log files.
A logical error occurred during normal database operations (step 1) that was only recognized later. The structure of the database was changed between the error and the last complete backup. The database needs to be recovered to the point in time before the error. In step 2, the last complete backup is restored without control files and online log files. In step 3, the structural change made by a tablespace extension is recovered by creating a new data file for the extended tablespace. Finally, step 4 involves using the archived log files to complete the database recovery. If the database remains open and no tablespaces that contain rollback segments (for Oracle) are lost, point-in-time tablespace recovery can also be used. All changes made to the recovered tablespaces after the point-in-time are lost. The database, excluding the recovered tablespaces, is available during recovery. However, due to the complex nature of tablespace point-in-time recovery (TSPITR), the primary issue to consider when deciding whether or not to perform
TSPITR is the possibility of application-level inconsistencies between tables in recovered and unrecovered tablespaces (due to implicit rather than explicit referential dependencies).
Setting up a standby database involves:
- Restoring a complete backup of the primary database to the standby server
- Transferring all the archived transaction log files to the standby server (synchronously or asynchronously)
- Placing the standby database in recovery mode
- Then continually applying the archived logs of the primary server, as they are created, to the standby database

Figure 21 illustrates the standby database concept.
Figure 21. Standby database concept: the primary host (database open) holds data files, control files, active log files, and an archive log directory; archived log files are copied over the network to the archive log directory of the standby host, which has its own data files, control files, and active log files.
Two identically configured databases operate on two identically configured hosts. The primary (production) database instance is located on the first host; the database is open and fully available for all SQL requests of the R/3 System. The primary database system is also the system which directly executes all database requests. The standby database is a copy of the primary database and is constantly in recovery mode. If a disaster occurs, the standby database is taken out of recovery mode and activated for online use. Once the standby database is activated, it cannot be returned to the standby recovery mode unless it is re-created as another standby database.
Warning
Activating a standby database resets the active logs of the standby database. Hence, after activation, the logs from the standby database and production database are incompatible. The data files, log files, and control files of the primary and standby databases are on separate physical media. Therefore, it is impossible to use the same control file for both the primary and standby databases.
Since all data files are already located on the standby host, costly reloading of the files is avoided. Some log entries may still need to be applied to the files to enable all transactions to be incorporated in the standby instance. This is done by first importing the missing archived log files from the primary instance. The current active log file of the primary instance can then be archived, copied to the standby instance, and imported there. After the takeover, a standby database needs to be set up again (usually on what was the primary host).

Changes to the physical structure of the primary database (creating new files, renaming files, changes to online log and control files) are not automatically incorporated in the standby database in every case. The DBA may need to intervene, depending on the type of change. If it is not possible to incorporate the changes automatically, the recovery process is stopped, and the DBA needs to intervene manually to incorporate the structural change in the standby database. After that, the recovery process needs to be started again. The original names of the primary database need to be retained. All commands in the primary database should be executed with a recoverable option to ensure that these changes are reflected in the standby instance.

The key benefits of this process are:
- It is totally integrated with an application or database. So, for example, if the customer is using an Oracle database, Oracle's utilities can be used along with a backup and restore mechanism to maintain the standby system.
- Restoring to a standby system offers a level of validation before an outage puts the customer at risk of losing critical data. The process of restoring a database and applying logs to it is similar to the steps a company would follow during recovery. By continually restoring and recovering the database by applying log files regularly, the customer can validate the integrity of the data before an outage occurs.
- Using a standby system eliminates the distance limitation imposed on systems duplicated through hardware mirroring.
- Standby systems minimize corruption or errors in logic. In addition to the validation process discussed earlier, customers can purposely keep the standby system at a different point in time from the primary system. In other words, the standby system can be kept 15 minutes or an hour behind, providing a window of opportunity to catch errors before they get applied to the standby database.
- There is very minimal impact on the production system. The log backups at the primary system require far fewer CPU resources than the full backups, which can be performed from the standby system. In addition, because it is practical to do multiple full backups from the standby system daily, there are fewer logs to apply.
- Standby systems are not dependent on any one type of hardware or software. These solutions will work with a multitude of disk subsystems, server platforms, applications, and databases.
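The standby life cycle described above can be sketched as a small state machine. This is a conceptual illustration with invented names and values, not a database feature: the standby applies each shipped archived log while in recovery mode, and activation is a one-way transition, after which it can no longer apply logs.

```python
class StandbyDatabase:
    """Toy model of a standby database kept in recovery mode."""

    def __init__(self, backup):
        self.state = dict(backup)   # restored from a complete primary backup
        self.mode = "recovery"

    def apply_archived_log(self, log):
        # Logs can only be applied while the standby is in recovery mode.
        assert self.mode == "recovery", "activated standby cannot apply logs"
        for key, value in log:
            self.state[key] = value

    def activate(self):
        # One-way transition: after activation the standby cannot return to
        # recovery mode unless it is re-created as a standby database.
        self.mode = "open"


standby = StandbyDatabase(backup={"T1": "v0"})
standby.apply_archived_log([("T1", "v1")])   # logs shipped from the primary
standby.apply_archived_log([("T2", "x")])
standby.activate()                           # disaster: take over production
```

Deliberately delaying the `apply_archived_log` calls models the "keep the standby 15 minutes behind" technique: errors shipped in a log can be caught before they are applied.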
This is different from the RAID-based mirroring solutions, as it also incorporates software components in implementing this solution. The first step in this scenario is the creation of an additional mirror of a RAID 1 or RAID 5 subsystem. The mirrored copy of the data resides on physical storage which is either locally attached to the primary server or connected to a local or remote standby server. Figure 22 illustrates such a scenario.
Figure 22. Additional mirror of a RAID subsystem: the data files and archive logs of the primary host reside on Mirror 1 (RAID 1 or RAID 5); an additional copy, Mirror 2, holds the same data files and archive logs and is attached to the standby host.
2.3.2.1 Remote mirroring
At the simplest conceptual level, remote mirroring is a RAID 1 mirror of one disk device (source) to a second device (target) in a physically separate storage subsystem over ESCON or other high speed communication links. As with RAID 1, if either disk in the mirrored pair fails, the requested data is instantly available from its mirror copy. No disruption to normal operations occurs. Once the disk again becomes available (through repair, replacement, or operational procedure), the newly available member of the pair can be resynchronized with its mate.
The capability of placing data in two locations has a performance impact, particularly for write operations. The impact may be lessened depending on the mode of synchronization. The size of this impact is primarily a function of the write activity rate to mirrored volumes, the average data block size being updated, the distance between sites, the type of link, and, most importantly, the mirroring mode (synchronous or semi-synchronous). Standby server implementations with a remote mirroring solution are also constrained by network latency, as all changes are propagated through a network. This configuration should be carefully evaluated based on the customer's requirements for disaster recovery and OLTP performance, as most often a trade-off has to be made. Some viable alternatives using remote mirroring are based on selective mirroring, for example, by just mirroring the archived log files
remotely using SRDF or PPRC to keep the log files current and reduce the amount of data transferred over the network.
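The write-latency cost of synchronous mirroring can be sketched with back-of-envelope arithmetic. The numbers below are illustrative, not measurements: in synchronous mode, a write completes only after the remote copy acknowledges, so the link round-trip time is added to every write; in semi-synchronous or asynchronous modes the acknowledgement is not awaited.

```python
def write_latency_ms(local_write_ms, link_round_trip_ms, synchronous):
    """Approximate per-write latency under remote mirroring (illustrative)."""
    if synchronous:
        # The write is acknowledged only after the remote site confirms it.
        return local_write_ms + link_round_trip_ms
    # Semi-synchronous/asynchronous: the local write completes immediately;
    # the remote copy catches up in the background.
    return local_write_ms

sync_latency = write_latency_ms(local_write_ms=1.0, link_round_trip_ms=4.0,
                                synchronous=True)
async_latency = write_latency_ms(local_write_ms=1.0, link_round_trip_ms=4.0,
                                 synchronous=False)
```

With a 4 ms link round-trip, every synchronous write is five times slower than the local write alone, which is why the distance between sites and the mirroring mode dominate the performance impact.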
2.3.2.2 Split mirroring
Split mirroring is based on the fact that an additional mirror is split from the rest of the mirrors in the storage subsystem. Because this mirror is "split" from the primary mirrors, it is not known to the operating system running on the primary server. The mirror is then attached to the standby server, where it can be used to back up the data without consuming CPU resources of the production system.
Split mirroring can be used with remote mirroring, that is, a remote mirror is kept synchronized with the primary mirrors using SRDF or PPRC, and when a backup or a separate system image is required, the remote mirror is split from the primary server and connected to the standby server. However, this implies that the mirror is no longer synchronized with the primary and should be re-synchronized as soon as possible for subsequent use. The primary use of split mirroring is to minimize downtime for backups on the production system and to offload the backup processing from the production server to the standby server. It is also useful for creating an independent database which can be used as a reporting system or for upgrades to the R/3 production system. Split mirroring may not be the best solution for disaster recovery, as errors that occur due to an application or system crash can be propagated immediately from the primary mirrors to the third mirror if it is not split from the primary mirrors. In such a case, the customer has to recover from tape. Therefore, it is important to plan and choose the right strategy and tools to implement such advanced features. In Chapter 10, Split mirror implementation in R/3 on page 253, we document the split mirror scenario tested during the production of this book. Due to the constraints of the lab hardware, a split mirror implementation using native AIX tools was tested. However, the procedures to be followed to quiesce the database and back up the data on the standby server can be applied to split mirror configurations using EMC TimeFinder and the ESS's FlashCopy (when available).
Figure 23. The R/3 data archiving process: data objects from the database are collected into archiving objects, which are written to an archive file during an archiving session.
Figure 23 shows the R/3 data archiving process. In an archiving session, the R/3 data objects which meet the archiving criteria (for example, documents which are no longer active in the system and for which a well-defined retention period has been reached) are collected, and archiving objects are built. These archiving objects are then written to an archive file. The data objects are written in a compressed form to the archive file, and the amount of data is reduced by a factor of five compared to the original size of the data objects in the database. After the archive file is written, the objects can be deleted from the database. To make sure that only objects that have been successfully written to the archive file are deleted, R/3 reads the archive file again and deletes all data items found there from the database. An archiving object is the smallest unit that can be archived from the database; it includes not only the R/3 data objects but also a data declaration part, archiving programs, and customizing settings. The archiving programs include at least:
- a Write program, which writes the data objects sequentially to the archive files, and
- a Delete program, which deletes objects that have been successfully written to the archive file from the database
and optionally also:
- a Preprocessing program, which prepares data objects for the archiving session
- a Display program, which displays archived data
- a Postprocessing program, which performs postprocessing actions after the archiving run
- a Reload program, which loads archived objects back into the database
Note:
Reload programs are only available for very few R/3 data objects. Generally, an R/3 data object which was archived and afterwards deleted from the database cannot be reloaded back into the database. Even if a reload program is available, reloading should be carried out with great care to avoid possible database integrity problems. The customizing settings contain object-specific customization parameters for an archiving session. The central component in the R/3 system for controlling the archiving session is the Archive Development Kit (ADK). The ADK handles the archive files: it creates, opens, describes, and closes them. The Archive Link interface of R/3 is a cross-application tool which serves as a communication interface between the R/3 application modules and external archive systems. The vendor of an external archive system can certify its interface implementation to ensure that it meets the SAP Archive Link requirements.
Figure 24. Transferring files through the Archive Link interface: on request of an R/3 module, the ADK (1) generates the archive file, (2) deletes the data from the database, and (3) requests archiving from the external archive system, which (4) transfers the archive file to the archive system and returns a confirmation.
Figure 24 shows the functionality of transferring files by the Archive Link interface to the archiving system.
1. First, the archive file is written by the ADK.
2. The archive file written in step 1 is re-read; all data objects that can be successfully read are deleted from the database.
3. After the deletion run, the ADK informs the external system that the data file can be archived.
4. The external archiving system copies the archive file to the archive storage medium.
5. After completion of this process, the external archive system sends a return code back to the Archive Link interface. If the file was successfully archived, the ADK removes the file from the file system.
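The verify-then-delete rule of steps 1 and 2 can be sketched as follows. This is an illustrative simplification with invented document names: after the archive file is written, it is read back, and only the objects actually found there are deleted from the database.

```python
database = {"doc1": "...", "doc2": "...", "doc3": "..."}

# Step 1: write the qualifying data objects to the archive file.
archive_file = {"doc1": database["doc1"], "doc2": database["doc2"]}

# Step 2: re-read the archive file and delete only the objects found there.
for obj in list(database):
    if obj in archive_file:          # successfully read back from the archive
        del database[obj]
```

An object that failed to reach the archive file (here, `doc3` was never written) survives in the database, so no data can be lost between the write run and the delete run.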
Tivoli Data Protection for R/3 is a backup interface program developed by IBM that is used for ORACLE-based R/3 database backups in connection with Tivoli Storage Manager. CommonStore, an interface program for archiving R/3 data, also developed by IBM, is used to archive R/3 data to external storage management systems; here the focus is Tivoli Storage Manager. This chapter gives a functional overview of Tivoli Data Protection for R/3 and CommonStore and provides some insight into the architecture of these tools. Tivoli Data Protection for R/3 was designed to back up and restore ORACLE databases within an R/3 environment. We will therefore also discuss some other backup utilities for non-ORACLE-based R/3 systems, likewise in connection with Tivoli Storage Manager.
3.1.1 BRBACKUP
BRBACKUP allows online or offline backups of the control file, of data files in individual or all tablespaces, and, if necessary, of the online redo log files. In addition, BRBACKUP saves the profiles and logs that are relevant for the backup. When a backup operation is started, BRBACKUP performs the following actions:
- changes the state of the database (opened or closed), depending on the type of backup wanted (online or offline)
- checks the status of files
- sets the tablespace mode (BEGIN/END BACKUP)
- optimizes the data distribution on the backup media

In connection with BRBACKUP, another tool named BRCONNECT is necessary. It ensures that the database status required for the online/offline backup remains unchanged during the backup.
3.1.2 BRARCHIVE
BRARCHIVE allows you to archive the offline redo log files. Offline redo log files are online redo log files saved to the archiving directory by ORACLE, typically /ORACLE/<SID>/saparch. BRARCHIVE also saves all the logs and profiles of the archiving process. There are several reasons for archiving offline redo log files:
- In case of an error, a consistent database status can only be recovered if all relevant redo log files are available.
- The database system of an R/3 System should be operated in ARCHIVELOG mode (to prevent overwriting of online redo log files which have not been saved).
- To protect the archive directory against overflowing, it has to be emptied regularly.
- An online backup of data files is of no use if the related redo log files are missing. It is therefore necessary to archive the offline redo log files generated during the online backup immediately after running BRBACKUP.

With BRARCHIVE it can be ensured that redo log files are not deleted before they have been archived, and that the same files are archived once or twice (in duplicate). BRARCHIVE allows the database administrator to continually archive offline redo log files. This means that the archiving directory, where ORACLE places the offline redo log files, can be kept free by continually archiving and then deleting the current redo log files.
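The safety rule behind continual archiving can be sketched as follows. This is a conceptual illustration, not BRARCHIVE code; the log file names are placeholders. The invariant is that a redo log may be deleted from the archiving directory only after it has been saved, so the directory can be kept free without ever losing an unsaved log.

```python
archiving_dir = ["arch001.dbf", "arch002.dbf", "arch003.dbf"]
saved = []   # logs safely written to the backup medium

def archive_and_delete(directory):
    """Archive each offline redo log, then delete it from the directory."""
    remaining = []
    for log in directory:
        saved.append(log)   # save the log to the backup medium first
        # The log is deleted (not kept in `remaining`) only because the
        # save above succeeded; an unsaved log would stay in the directory.
    return remaining

archiving_dir = archive_and_delete(archiving_dir)
```

Run continually, this keeps the archiving directory from overflowing, which would otherwise cause the database to suspend operation.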
3.1.3 BRRESTORE
BRRESTORE can be used to restore files of the following type:
- Database data files (tablespace files), control files, and online redo log files saved with BRBACKUP
- Offline redo log files archived with BRARCHIVE
- Profiles and logs (normally not necessary)
You can specify files, tablespaces, complete backups, log sequence numbers of redo log files, or the position of a file on tape. BRRESTORE automatically determines the corresponding backup tape and the position of the needed files on the tape. BRRESTORE checks whether enough free disk space is available to restore the files, and restores the directory and link structure automatically (the directory /ORACLE/<SID> and the sapdata<n> mount points must exist).
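A restore of a complete backup might be invoked as follows (a hedged sketch; <backup_log> stands for the name of the BRBACKUP log that identifies the backup run):

```
# Sketch: restore all files saved in the backup run identified by the
# given BRBACKUP log; BRRESTORE locates the media positions itself.
brrestore -b <backup_log> -m all
```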
3.1.4 SAPDBA
SAPDBA is a database administration tool for ORACLE that integrates the tools BRBACKUP, BRARCHIVE, and BRRESTORE. These tools are embedded into the menu structure of SAPDBA so that they are easy to call. The SAPDBA menu options offered for restoring or recovering are specially designed to reflect the user's point of view. There are two important situations where SAPDBA provides support:
- Recovery of the database to its current state after a media error in several files, for example, in case of a disk failure
- Restoring the entire database to perform a point-in-time recovery or to reset the database to a previous state
SAPDBA evaluates the backup logs and the main log to decide whether the chosen recovery can be performed by using the selected backups. For example, it determines whether any actions that would prevent the recovery have taken place between the time of the backup and the selected recovery end time (point in time). If SAPDBA cannot perform a recovery, it rejects the selected backup or the specified recovery procedure. Data can only be recovered automatically with SAPDBA if BRBACKUP and BRARCHIVE (in native mode or with the BACKINT interface) were used for the backup. In this sense, the SAP tools SAPDBA, BRBACKUP, and BRARCHIVE can be understood as an integrated solution.
Note:
An RMAN backup has to read the database to check whether a block has been used or changed, just as a backup without RMAN does. This means that an incremental backup may not reduce backup time. A significant reduction in backup time is achieved only if the speed of the tape devices is the reason for slow backups.

Other RMAN features are:
- Logical database block errors are recognized automatically during the backup. This ensures that each backup is consistent, and it replaces the weekly check with DB_VERIFY.
- Database blocks that have never been used are not backed up.
- The BEGIN/END BACKUP commands are not needed for online backups, since the blocks are checked for consistency. This significantly reduces the amount of redo log information.

RMAN uses the SBT interface (System Backup to Tape) to back up to tape devices. External backup tools can implement this interface as a dynamic link library (backup library). The R/3 installation delivers the SAP backup library and the SAP backup tool BRBACKUP for use with RMAN. The integration of the ORACLE Recovery Manager into the SAP tool BRBACKUP has the following impact on existing backup strategies and tools:
- The recovery catalog is not used. The backup information is stored in the control file. After the backup, the control file is also backed up. In a restoration, the control file is imported first, followed by the data files.
- The integration of RMAN into BRBACKUP also guarantees integration into the R/3 System (CCMS).
- The BACKINT interface with external tools can still be used.
- The SAPDBA user interface is unchanged; however, it has been extended with new options.
- All previous SAP backup strategies are supported while using RMAN. Exceptions: RMAN is not relevant for offline redo log backups with BRARCHIVE, standby database backups, and split mirror backups.
For more information on ORACLE RMAN, see the ORACLE8 documentation referred to in the bibliography.
For detailed RMAN backup strategies, see the R/3 manuals listed in the bibliography.
(Figure: The BACKINT interface: the SAP database utilities BRBACKUP, BRARCHIVE, BRRESTORE, and SAPDBA access the ORACLE database and communicate through the BACKINT interface with an external backup/restore program.)
This program or executable processes backup, restore, and inquire requests, and executes them using the corresponding backup utility. If the external backup/restore interface program is a client/server program, BACKINT communicates with the client running on the R/3 database server. The BACKINT interface supports three functions:
- backup
- restore
- inquire
In all cases, the mandatory user ID parameter is used as an identifier for the SAP database. After a function has been executed, the interface program always returns an integer value, which indicates whether the call was successful.
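Abstractly, a call to such an interface program has the following shape (a hedged sketch based only on the parameters described above; the actual option letters and file formats are defined in SAP's BACKINT specification):

```
# Sketch of a generic BACKINT invocation: -u names the SAP database
# (user ID), -f selects one of the three functions, -i/-o are the request
# and result files, -p points to the interface program's profile.
backint -u <user_id> -f backup -i <input_file> -o <output_file> -p <profile>
```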
Backup function
The backup function defines a backup request including all the files specified in a list. If the backup request cannot be processed completely, the interface informs the user which files have been backed up successfully (partial backup). The sequence in which the files are saved can be freely determined by the external backup utility. When a backup starts, the external backup/restore interface program first generates a unique backup ID (BID) for the set of files saved in one session, which clearly identifies the backup. This BID is handed over to the appropriate SAP database utility at the end of the backup run.

Restore function
The restore function is used to pass a restore request to the external backup/restore interface program. This request consists of a user ID, backup IDs, a list of files to be restored, and a list of directories where files should
be created. The last parameter is optional. If the backup ID is not set, the last backup of the related file is used. The return information indicates which files have been restored successfully and which backup IDs have been used.
Inquire function
The inquire function provides information about the backups managed by the external backup/restore interface program. This function is called with the user ID (UID), the backup ID (BID), and the file name (the last two parameters are optional). If the BID is not set, a list of available backups (BIDs) that include the specified file is provided. If a file name is not specified, a list of files belonging to the specified BID is generated.
If neither of the two parameters is set, a list of all available backups (BIDs) is generated. If both parameters are specified, the system checks whether this file was saved with the specific BID. In general, the BID does not necessarily identify one backup run. The R/3 database utilities BRBACKUP, BRARCHIVE, BRRESTORE, and SAPDBA use the BACKINT interface, designed and provided by SAP, which enables them to communicate with an external backup utility. Generally, the external backup/restore interface program is implemented and sold by the vendor of the external storage media management software. SAP assumes responsibility for defining BACKINT and guarantees its functionality with respect to BRBACKUP, BRARCHIVE, BRRESTORE, and SAPDBA. For the backup/restore scenarios of the R/3 database in the following sections, we concentrate on a backup/restore interface program for ORACLE for R/3 provided by IBM, named Tivoli Data Protection for R/3. Its functionality and architecture are described in 3.1.7, Tivoli Data Protection for R/3 on page 58, and its setup and usage in Chapter 6, Tivoli Data Protection for R/3 implementation on page 117.
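The four inquire cases described above form a small decision table, which can be sketched in Python (a hypothetical model for illustration; `backups` maps a backup ID to the list of files saved under it, whereas the real interface works over request and result files, not an in-memory dictionary):

```python
# Model of the BACKINT inquire semantics: behavior depends on which of the
# two optional parameters (bid, filename) are supplied.
def inquire(backups, bid=None, filename=None):
    if bid is None and filename is None:
        return sorted(backups)                  # all available BIDs
    if bid is None:
        # all BIDs under which the given file was saved
        return sorted(b for b, files in backups.items() if filename in files)
    if filename is None:
        return list(backups.get(bid, []))       # files saved under this BID
    return filename in backups.get(bid, [])     # was file saved with this BID?
```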
3.1.7.1 Architecture of Tivoli Data Protection for R/3
To better understand the R/3 backup/restore processes in connection with Tivoli Data Protection for R/3, the internal structure of Tivoli Data Protection for R/3 is explained here in more detail.
(Figure: Architecture of Tivoli Data Protection for R/3: the database utilities BRBACKUP, BRARCHIVE, and BRRESTORE, driven by SAPDBA and controlled by a profile and a configuration file, pass lists of database objects to the Tivoli Data Protection for R/3 backup/restore client; it starts Tivoli Data Protection for R/3 agents, each of which transfers data between the ORACLE database and a Tivoli Storage Manager server via the Tivoli Storage Manager API client over a communication interface.)
One instance of Tivoli Data Protection for R/3, known as the Tivoli Data Protection for R/3 PRO, communicates with one of the R/3 database utilities over the BACKINT interface. It:
- Receives and sends commands and messages with lists of database objects over the BACKINT interface
- Reads and interprets the parameters that are specified in the profile
- Sets up the proper environment and controls (starts, stops) the Tivoli Data Protection for R/3 AGENT(s)
Each agent transfers one file (or, in the case of multiplexing, several files; see also 6.5.4, Adaption of Tivoli Data Protection for R/3 on page 150) directly from the ORACLE database to one (of several) Tivoli Storage Manager servers, using the Tivoli Storage Manager API client. The Tivoli Data Protection for R/3 AGENT(s):
- Read and write database objects from and to disk (backup/restore)
- Transfer data to and from the Tivoli Storage Manager API client
- Can run in parallel
The way Tivoli Data Protection for R/3 operates can be customized through keywords and parameters in a profile that is analyzed by Tivoli Data Protection for R/3 before any Tivoli Storage Manager subcommands are processed. By customizing this profile, Tivoli Data Protection for R/3 can be adapted to the specific needs of any system environment. Parameters that Tivoli Data Protection for R/3 modifies during operation, for example, the backup version number (see 3.1.7.2, Overview of Tivoli Data Protection for R/3 functionality on page 61) or the Tivoli Storage Manager password in encrypted format for the appropriate node, are stored in a separate binary configuration file. The Tivoli Data Protection for R/3 PRO gets a list that contains all the names of the data files (tablespace files or offline redo log files, respectively) to be backed up or restored. This temporary list is generated by the appropriate database utility and is handed over as a parameter when Tivoli Data Protection for R/3 is invoked. Figure 27 shows an example of the flow of an R/3 database backup started by BRBACKUP.
(Figure 27: Flow of a backup operation started by BRBACKUP: BRBACKUP passes the list of database objects to the Tivoli Data Protection for R/3 PRO, which distributes the objects among the agents; each agent reads its object from the ORACLE database and transfers it to a Tivoli Storage Manager server. The resulting backup protocol records each file with lines such as "#FILE file_1" followed by "#SAVED TSM___9911121045".)
If the database objects list was successfully created by BRBACKUP, it is sent to the Tivoli Data Protection for R/3 PRO. By means of this list, the Tivoli Data Protection for R/3 PRO distributes the work (which file/object will be read by which Tivoli Data Protection for R/3 AGENT) among the Tivoli Data Protection for R/3 AGENTs. Only the Tivoli Data Protection for R/3 AGENTs communicate with the Tivoli Storage Manager server. This communication is done with Tivoli Storage Manager API function calls. When an agent has finished its job, it sends a completion message to the Tivoli Data Protection for R/3 PRO. The Tivoli Data Protection for R/3 PRO generates a corresponding message for each agent message and writes them to the backup protocol file in the /oracle/<SID>/sapbackup directory. If the backup was successful, the generated message has the format illustrated in Figure 28.
If an error occurs during a backup run (a data file could not be saved), an error message with the prefix #ERROR is generated instead.
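The backup protocol format described above (#FILE/#SAVED pairs on success, #ERROR on failure) can be modeled with a small parser (a hypothetical sketch for illustration; the real protocol files contain additional entries not shown here):

```python
# Parse backup-protocol lines: a "#FILE <name>" line is followed either by
# "#SAVED <backup id>" on success or by an "#ERROR" line on failure.
def parse_backup_protocol(lines):
    results = {}
    current = None
    for line in lines:
        line = line.strip()
        if line.startswith("#FILE"):
            current = line.split(None, 1)[1]
        elif line.startswith("#SAVED") and current is not None:
            results[current] = ("saved", line.split(None, 1)[1])
            current = None
        elif line.startswith("#ERROR") and current is not None:
            results[current] = ("error", None)
            current = None
    return results
```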
3.1.7.2 Overview of Tivoli Data Protection for R/3 functionality
The demands on a backup/restore utility are:
- Automation
- High throughput
- High availability
- Storage management
Tivoli Data Protection for R/3 was designed and developed with these demands in mind. The way Tivoli Data Protection for R/3 operates can be customized through keywords and appropriate parameters within the Tivoli Data Protection for R/3 profile, which is analyzed by Tivoli Data Protection for R/3 before any Tivoli Storage Manager subcommands are processed. All Tivoli Data Protection for R/3 properties, and the functionality itself, are determined and activated within that profile. An overview with detailed descriptions of all Tivoli Data Protection for R/3 profile keywords can be found in Appendix B, Tivoli Data Protection for R/3 profile on page 343.
Parallel backup and restore
Tivoli Data Protection for R/3 optimizes the data throughput for backup and restore in several ways:
- It sorts the database objects in order to balance the load on the resources used for backup. For restore operations, Tivoli Data Protection for R/3 maintains the same sequence for efficient tape transfer.
- Tivoli Data Protection for R/3 is able to establish multiple backup/restore sessions by starting multiple agents. Each agent reads data from and writes data to storage devices in parallel with (and independently of) the others. One session can be established per backup storage device.
- It utilizes multiple communication paths to Tivoli Storage Manager servers to eliminate network-induced bottlenecks.
Files stored on different physical volumes can be backed up in parallel. This increases the throughput of the backup operation and reduces the backup time. The prerequisites are, on the one hand, a parallel read process for the data files from disk and, on the other hand, an equivalent number of write processes (in this case, Tivoli Storage Manager sessions) corresponding to the number of read processes. This functionality is integrated within Tivoli Data Protection for R/3. Each agent session transfers one database object to or from the Tivoli Storage Manager server by using the Tivoli Storage Manager API client functions. Tivoli Data Protection for R/3 optimizes the data transfer with regard to the physical location of the ORACLE objects. Further information can be found in 6.5.4.1, Parallelism on page 150.
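The distribution of database objects among agent sessions can be illustrated with a greedy balancing scheme (a hypothetical sketch; the actual sorting algorithm used by Tivoli Data Protection for R/3 is not documented here): sort the files by size, largest first, and always assign the next file to the least-loaded session.

```python
import heapq

# Assign files (name, size) to n_sessions so that the total bytes per
# session stay roughly balanced; returns one file list per session.
def distribute(files, n_sessions):
    sessions = [(0, i, []) for i in range(n_sessions)]  # (load, id, files)
    heapq.heapify(sessions)
    for name, size in sorted(files, key=lambda f: -f[1]):
        load, i, assigned = heapq.heappop(sessions)
        assigned.append(name)
        heapq.heappush(sessions, (load + size, i, assigned))
    return [assigned for _, _, assigned in sorted(sessions, key=lambda s: s[1])]
```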
Alternate/parallel backup path
Tivoli Data Protection for R/3 provides the possibility to define alternate backup paths. Alternate/parallel backup paths improve the availability of the backup/restore process, reduce network-induced bottlenecks, and increase backup/restore performance. In order to use this option, the Tivoli Storage Manager server must be accessible under more than one network address.
For each communication path (for example, a Tivoli Storage Manager server network address), a set of additional communication parameters has to be defined. These client option data are collected under a logical server name. In a UNIX environment, all client option data can be written into the client system option file, dsm.sys. The communication paths can be used simultaneously to increase data throughput, or to ensure that the backup operation can continue when one or several paths are down. A number of parallel sessions can be specified for each path to adjust to different network speeds and to distribute the load over the network. Appendix C, Alternate/parallel backup paths and backup servers on page 351 gives further information by means of some example scenarios.
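For example, two logical server names addressing the same Tivoli Storage Manager server over two network interfaces could be defined in dsm.sys like this (a sketch with hypothetical stanza and host names):

```
* Two logical servers for one physical Tivoli Storage Manager server,
* reachable over two network addresses (hypothetical names).
SErvername          server_a
   COMMmethod       TCPip
   TCPServeraddress tsmsrv-if1.example.com

SErvername          server_b
   COMMmethod       TCPip
   TCPServeraddress tsmsrv-if2.example.com
```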
Alternate/parallel backup servers
Alternate/parallel backup servers can be specified to increase performance or for disaster recovery purposes. For disaster recovery, the backup data is routed to other Tivoli Storage Manager servers. This function is similar to alternate/parallel backup paths, except that physically different Tivoli Storage Manager servers are used. The backup data is distributed over the configured set of Tivoli Storage Manager servers; however, Tivoli Data Protection for R/3 keeps track of all backups regardless of which server they are stored on.
See also Appendix C, Alternate/parallel backup paths and backup servers on page 351 for further information.
Multiple management classes The Tivoli Storage Manager server manages data by using management classes. Tivoli Data Protection for R/3 distinguishes between BRBACKUP and BRARCHIVE and uses different management classes for objects that are saved with each utility.
With the multiple management classes option Tivoli Data Protection for R/3 is able to use more than one library device at a time.
Multiple redo log copies
To improve availability and disaster recovery capability, Tivoli Data Protection for R/3 provides the possibility to store multiple copies of the same ORACLE offline redo log file on different physical Tivoli Storage Manager volumes (tapes) during a BRARCHIVE run. A prerequisite for using this option is to have defined more than one archive management class, either within one Tivoli Storage Manager server or on different Tivoli Storage Manager servers. In the first case, the management classes must be defined so that they write the data to different storage pools within the Tivoli Storage Manager server; this prevents the first and second copies of the redo log files from being written to the same tape. The second case assumes that an alternate/parallel server scenario is used (see also Appendix C, Alternate/parallel backup paths and backup servers on page 351).
Backup by version Tivoli Data Protection for R/3 has its own backup version control mechanism for keeping a specified number of backups.
Remember:
The Tivoli Storage Manager expiration period for full backups, which is set in the copy group definition of the appropriate Tivoli Storage Manager management class, must be long enough to avoid conflicts with this feature of Tivoli Data Protection for R/3. See also 5.1.2, Data storage policy on page 102. Each database backup version consists of a tablespace backup and the associated offline redo log backups. Every time a full backup completes successfully, Tivoli Data Protection for R/3 increments the version count by 1 and stores the current version number in the Tivoli Data Protection for R/3 configuration file (init<SID>.bki). This value is also assigned to the tablespace files and to all subsequent offline redo log backups. If the number of versions kept in backup storage exceeds the maximum number of backup versions specified in the Tivoli Data Protection for R/3 profile, the oldest versions (together with the corresponding tablespace and redo log files) are deleted until only the specified maximum number of most recent versions remains.
Caution:
Tivoli Storage Manager uses the value of the parameter RETVER (specified during copy group definition) to give files an expiration date. If Tivoli Data Protection for R/3 versioning is used, the Tivoli Storage Manager expiration function has to be bypassed, and vice versa. Only one of these methods should be used to control how long a backup is kept.
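The retention rule described above can be sketched as follows (a hypothetical model of the version bookkeeping, not the actual implementation):

```python
# Keep only the newest max_versions backup versions; everything older is
# deleted together with its tablespace and redo log backups.
def retain_versions(versions, max_versions):
    versions = sorted(versions)
    excess = len(versions) - max_versions
    to_delete = versions[:excess] if excess > 0 else []
    return versions[len(to_delete):], to_delete
```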
System management and reporting
System management and reporting refers to a collection of Tivoli Data Protection for R/3 functions that provide:
- Network management: Tivoli Data Protection for R/3 provides status information on the backup/restore process via SNMP traps. It can also send messages to the Tivoli Storage Manager server (Version 3), which in turn can pass status to a network management tool like Tivoli.
- Reporting: For statistical purposes, Tivoli Data Protection for R/3 provides information about the data volume transferred and the associated elapsed time. In addition, Tivoli Data Protection for R/3 reports which file is currently being processed and calculates an estimated end time for the current backup or restore process. These functions can be used for detailed error analysis on the one hand, and for statistical purposes and throughput statements on the other.
Password handling
To protect backup copies against unauthorized access, Tivoli Storage Manager provides password authentication. If the Tivoli Storage Manager administrator has activated password authentication, the Tivoli Storage Manager client has to identify itself to the Tivoli Storage Manager server with a password. Only after successful password checking can the client perform backup or restore operations.

Improving performance
To ensure good data throughput, and thereby reduce database backup and restore times, the following improvements are embedded within Tivoli Data Protection for R/3:
- Parallelism
- Fast null block compression algorithm
- File multiplexing
- Adjustable disk blocksize (blocksize for reading data files from disk)
- Adjustable Tivoli Storage Manager blocksize (blocksize for sending data to the Tivoli Storage Manager server)
For further information about these performance improvements, see 6.5, Possibilities to improve backup/restore performance on page 147.
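The idea behind null block compression can be illustrated as follows (a hypothetical sketch; the actual algorithm and block format used by Tivoli Data Protection for R/3 are not documented here): database blocks consisting entirely of zero bytes are replaced by a short marker instead of being sent in full.

```python
# Replace all-zero blocks by a ("null", length) marker; other blocks are
# passed through unchanged, so only non-empty data is transmitted.
def compress_null_blocks(data, blocksize):
    out = []
    for off in range(0, len(data), blocksize):
        block = data[off:off + blocksize]
        if block.count(0) == len(block):
            out.append(("null", len(block)))
        else:
            out.append(("data", block))
    return out
```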
3.1.7.3 Integration with R/3
Tivoli Data Protection for R/3 supports all the functions of the BACKINT interface (SAPDBA to external storage management tool) specified by SAP, including the most current ones:
- The util_file_online function. This feature reduces the number of redo logs during online backup: tablespaces are only set in backup mode during the time their files are being backed up.
- Support for ORACLE on raw devices. With R/3 3.0D, SAP supports ORACLE database installations on raw logical volumes. Tivoli Data Protection for R/3 is compatible with this function.
- Certification. Tivoli Data Protection for R/3 is developed under a "Complementary Software Program" (CSP) between SAP AG and IBM, and is certification tested to the BACKINT (BC-BRI) interface. SAP informs CSP partners early in the development cycle about new features/functions. IBM is committed to making these available in Tivoli Data Protection for R/3 together with the new R/3 release.
3.1.7.4 Supported platforms
The supported platforms are listed in Table 5.
Table 5. Supported platforms for Tivoli Data Protection for R/3 version 2.4
Operating system:
- AIX
- Digital
- HP
- SOLARIS
- Windows NT (Intel) with Service Pack 4 and above
- Windows NT (Alpha)
For support of ADSM Version 2 API clients, contact your Tivoli sales representative.
3.1.7.5 Administration Tools for Tivoli Data Protection for R/3
The Administration Tools for Tivoli Data Protection for R/3 are a Web browser based graphical interface that supports and simplifies the customization of Tivoli Data Protection for R/3 and the analysis of R/3 database backup and restore operations.
The objective of the Administration Tools is to assist in the configuration, monitoring, and administration of Tivoli Data Protection for R/3 from local or remote workstations. They give R/3 administrators the possibility to centralize the database backup/restore administration work, especially the observation of Tivoli Data Protection for R/3 database backup/restore actions from all R/3 database servers within the system landscape. Chapter 7, Administration Tools implementation on page 153 gives a detailed overview of the setup process for the Administration Tools and how they can be used in connection with Tivoli Data Protection for R/3.
Architecture of the Administration Tools
The Administration Tools are Java-based background processes. To understand how they work, Figure 29 gives an insight into the structure of the Administration Tools.
(Figure 29: Structure of the Administration Tools: the Administration Tools server communicates over TCP/IP with the Web browser clients and with the Administration Tools slave servers on the R/3 database servers.)
The central point of the Administration Tools architecture is the Administration Tools server. This background process can be installed on any machine within the system landscape. It works as a data collector and data distributor. That means it collects all incoming Administration Tools data from the R/3 database servers and distributes these data to every currently connected Administration Tools client node (user front end), which is a Web browser. It also writes these data to different local files stored on the Administration Tools server. These files serve as a data source for later review of completed backups/restores for performance or error analysis, or to hold different Tivoli Data Protection for R/3 configuration states in the form of a configuration history.

The source of data for the Administration Tools server is the R/3 database server. The data sent to the Administration Tools server can be divided into two groups: performance data and configuration data. Performance data (backup/restore transfer rate, number of started sessions, start time of operation, file names, and so on) are generated by Tivoli Data Protection for R/3 itself and sent directly to the Administration Tools server for further processing. For configuration data, the method is different. Configuration data are all kinds of data stored in the respective profiles and configuration files (keywords and parameters): the Tivoli Data Protection for R/3 profile, the SAPDBA profile, and the Tivoli Storage Manager configuration files. For physical access to these files, the Administration Tools server needs the help of the Administration Tools slave server, a background process that must be installed on each R/3 database server that is to be accessed. If an Administration Tools client node initiates a configuration process, the Administration Tools server routes that request to its counterpart on the R/3 database server side. The Administration Tools slave server reads the desired files and routes the data back to the Administration Tools server, and the Administration Tools server routes them to the client that initiated the operation. The return path for changed data is similar: changed parameters are sent from the client to the Administration Tools server, the server routes these data to the Administration Tools slave server, and the slave server writes the changes back to the appropriate files.

A system landscape in the real world typically unites more than one R/3 system. At least an R/3 test system and an R/3 production system will be used together. Figure 30 shows a possible example.
(Figure 30: Example system landscape: several SAP R/3 database servers, each running Tivoli Data Protection for R/3 and an Administration Tools slave server, are connected over the LAN and backbone to a Tivoli Storage Manager server with its storage media.)
The central Administration Tools instance is the Administration Tools server, which can be installed on any system (UNIX, Windows). A system landscape can contain multiple Administration Tools slave servers, depending on the number of R/3 database servers. The Administration Tools clients run on any of these machines, using a standard Web browser. The communication protocol of the Administration Tools is TCP/IP.
Overview of Administration Tools functionality
The Administration Tools offer a Web browser based graphical interface for database administrators. These tools provide:
- A one-step customizing process for all relevant Tivoli Data Protection for R/3 profiles and configuration files, that is, the Tivoli Data Protection for R/3 profile, the SAPDBA profile, and the Tivoli Storage Manager configuration files
- Dynamic visualization of backup/restore performance
- A single point of control
Administration Tools Configurator: The Administration Tools Configurator allows you to customize all necessary Tivoli Data Protection for R/3 profiles and configuration files from one user interface.
A configuration process guides the user, with a detailed keyword description, through customizing the files. Before configured files are written back to the file system, a keyword parameter consistency check is performed. It prevents wrong settings within the files and gives hints for appropriate corrections. A set of Tivoli Data Protection for R/3 configuration files can be saved on the Administration Tools server machine to create a configuration history. Such a set of configuration files can be retrieved at any time, which makes it easy to switch between Tivoli Data Protection for R/3 configurations.
Administration Tools Performance Monitor: The Administration Tools Performance Monitor collects transmission and status data for Tivoli Data Protection for R/3 backup or restore jobs. These data are displayed while the job is running and, at the same time, are stored on the Administration Tools server machine for replay at a later point in time. Unattended operations, performance bottlenecks, and errors can be reviewed and analyzed in detail. The review function provides several features to support this process:
- Start/stop function
- Fast forward/rewind function
- Select point in time function
- Single step function
Supported platforms
The Administration Tools (Administration Tools server, Administration Tools slave server) are available for the following platforms:
- AIX 4.2/4.3
- DEC 4.0
- HP-UX 10.20/11.0
- SOLARIS 2.5.1/2.6/2.7
- Windows NT 4.0 (at least Service Pack 4)
3.2.1 CONTROL
The administration tool CONTROL is available to manage an ADABAS database system. CONTROL is used to observe and monitor the ADABAS server, to maintain ADABAS database instances, and to execute backup and restore processes. CONTROL supports the following operations:
- Installing the database server
- Loading of system tables
- Starting and shutting down the database server
- Starting and shutting down the remote SQL server
- Monitoring the database server
- Saving the database and the log
- Restoring the database and the log
- Expanding disk capacities of the database server
- Serving as a diagnostic tool where support is required
CONTROL provides an interface for batch operations. The xbackup and xrestore functions are provided primarily to combine the CONTROL backup functions with vendor-specific backup tools, for example, the IBM backup/restore interface for ADABAS, ADINT/ADSM (see 3.2.2, ADINT/ADSM on page 69). Two variants are possible:
- The xbackup and xrestore functions are called under the user interface and thus under the control of the external backup tool.
- The xbackup or xrestore function calls the external backup tool.
Besides this, it is also possible to use the xbackup and xrestore functions as genuine batch interfaces. For more information on CONTROL and backup/restore scenarios for R/3 ADABAS databases, see the ADABAS documentation or the appropriate R/3 manuals.
3.2.2 ADINT/ADSM
ADINT/ADSM version 2.1 is a client/server program that uses the ADSM API client to manage backups and restores of R/3 ADABAS databases in connection with ADSM. With ADINT/ADSM, it is possible to handle R/3 database backups, and it provides the ability to manage backup storage and processing independently of normal R/3 operations. ADINT/ADSM in combination with ADSM provides reliable, high performance, repeatable backup and restore processes to manage large volumes of data more efficiently. It also remains possible to follow R/3 backup/restore procedures and use the integrated SAP database tool CONTROL for backup and restore.
The following backup/restore scenarios are possible in this environment:
- Full database backup / full database backup without checkpoint
- Log backup in database
- Incremental backup / incremental backup without checkpoint
- Database and log restore
- Forward recovery
Other R/3 related files (executables) are backed up using Tivoli Storage Manager standard techniques for file backup and restore, for example, incremental backup, file filtering, and point-in-time recovery.
3.2.2.1 Architecture of ADINT/ADSM
To provide a better understanding of the R/3 backup/restore processes in connection with ADINT/ADSM, the internal structure of ADINT/ADSM is explained here in more detail.
The figure at this point (diagram not reproduced) shows the architecture of ADINT/ADSM: CONTROL invokes the xbackup and xrestore functions through the database utilities; the ADINT/ADSM MASTER reads its profile and configuration file and controls the agents, which transfer data between the ADABAS database and the ADSM backup/restore server through pipes, using the Tivoli Data Protection for R/3 and ADSM API client interface.
The development of ADINT/ADSM was based on the existing Tivoli Data Protection for R/3 interface program. Therefore, the internal structure of ADINT/ADSM is comparable to the structure of Tivoli Data Protection for R/3. The main difference is the connection for the data transfer between the ADABAS database and the ADINT/ADSM agents, which is established with pipes that are created and managed by the ADABAS database kernel. One instance of ADINT/ADSM, known as the ADINT/ADSM MASTER, communicates with one of the R/3 database utilities. It:
- Reads and interprets the parameters that are specified in the profile
- Sets up the proper environment and controls (starts and stops) the ADINT/ADSM AGENT(s)

Each agent transfers a data stream, reading directly from a pipe, from the ADABAS database to one (of several) ADSM servers by using the ADSM API client. The ADINT/ADSM AGENT(s):

- Read and write data blocks from and to a pipe (backup/restore)
- Transfer data to and from the ADSM API client
- Can run in parallel

The way ADINT/ADSM operates can be customized through keywords and parameters in a profile that is analyzed by ADINT/ADSM before any ADSM subcommands are processed. By customizing this profile, ADINT/ADSM can be adapted to the specific needs of any system environment. Parameters that ADINT/ADSM modifies during operation, for example, the backup version number, are stored in a separate binary configuration file.
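The master/agent division of labor described above can be sketched in a few lines. This is an illustrative model only, not ADINT/ADSM code: the block size, the use of one pipe per agent, and the in-memory list standing in for the ADSM API client session are all assumptions.

```python
import os
import threading

BLOCK_SIZE = 4096  # assumed transfer block size, not the real ADINT/ADSM value

def agent(read_fd, sent_blocks):
    """One agent: read data blocks from its pipe and forward them.
    Appending to sent_blocks stands in for the ADSM API client send call."""
    with os.fdopen(read_fd, "rb") as pipe:
        while True:
            block = pipe.read(BLOCK_SIZE)
            if not block:  # end of the data stream
                break
            sent_blocks.append(block)

def master(num_agents, payloads):
    """The MASTER: create one pipe per agent, start the agents in parallel,
    feed each pipe with its data stream, and wait for completion."""
    results = [[] for _ in range(num_agents)]
    threads, write_fds = [], []
    for i in range(num_agents):
        r, w = os.pipe()
        write_fds.append(w)
        t = threading.Thread(target=agent, args=(r, results[i]))
        t.start()
        threads.append(t)
    # In the real product the database kernel writes into the pipes;
    # here the master plays that role for illustration.
    for w, data in zip(write_fds, payloads):
        with os.fdopen(w, "wb") as pipe:
            pipe.write(data)
    for t in threads:
        t.join()
    return results
```

Calling `master(2, [data1, data2])` runs two agents concurrently, each draining its own pipe, which mirrors the parallel-session design described in the text.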
3.2.2.2 Overview of ADINT/ADSM functionality
The functional demands on a backup/restore utility are:

- High throughput
- High availability
- Storage management

ADINT/ADSM was designed and developed with regard to these demands. The way ADINT/ADSM operates can be customized through keywords and appropriate parameters within the ADINT/ADSM profile, which is analyzed by ADINT/ADSM before any ADSM subcommands are processed. All ADINT/ADSM properties, and the functionality itself, are determined and activated within that profile.
Parallel backup and restore
Multiple ADINT/ADSM agents can be started to transfer data in parallel in order to increase the overall throughput of the backup/restore processes. It is recommended to specify as many parallel sessions (multiple agents) as there are physical storage devices available on the ADSM server side.

Version control
ADINT/ADSM has its own backup version control mechanism for keeping a specified number of backups.
Remember:
The expiration period for backups, which is set in the copy group definition of the appropriate ADSM management class, must be long enough so that it does not conflict with this feature in ADINT/ADSM.
Each database backup version consists of a SAVEDATA and associated SAVELOG and/or SAVEPAGES backups. Every time a full backup has successfully completed, the version count is incremented by 1 and is stored in the
ADINT/ADSM configuration file. This value is also assigned to the SAVEDATA files and to all subsequent log backups. If the number of versions kept in backup storage is larger than the specified maximum number of backup versions, the oldest versions are deleted (together with the corresponding incremental backup files and log files) until only the specified maximum number of most recent versions remain. See also the comments for the Tivoli Data Protection for R/3 version mechanism in Backup by version on page 63.
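The version mechanism just described can be illustrated with a small sketch. The data structures and names here are assumptions for illustration only; they do not reflect the actual ADINT/ADSM profile keywords or the binary configuration file format.

```python
MAX_VERSIONS = 3  # assumed profile setting: number of backup versions to keep

def complete_full_backup(config, backups):
    """Record a successful full backup: increment the version counter (as
    stored in the configuration file), tag the new SAVEDATA backup with it,
    and expire the oldest versions beyond the maximum."""
    config["version"] += 1
    backups.append({"version": config["version"],
                    "savedata": True, "logs": [], "incrementals": []})
    # Delete oldest versions (together with their corresponding log and
    # incremental backup files) until only MAX_VERSIONS remain.
    while len(backups) > MAX_VERSIONS:
        backups.pop(0)
    return config["version"]
```

With `MAX_VERSIONS = 3`, running five full backups leaves only versions 3, 4, and 5 in backup storage, while the counter in the configuration file keeps climbing, which matches the behavior described in the text.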
Multiple management classes
In order to take full advantage of the ADINT/ADSM parallel backup and restore capabilities, one management class can be specified for each physical backup device (tape drive).
Furthermore, ADINT/ADSM distinguishes between the several types of ADABAS backups, such as full backups, incremental backups, and log backups, and it is able to use different management classes for objects that are saved with one of these properties. Therefore, ADINT/ADSM can use more than one library device at a time.
Alternate backup server
Alternate servers are used for performance or disaster recovery purposes. In this case, the backup data is routed to other ADSM servers; it is possible to back up the database to physically different ADSM servers. The backup data is distributed over the set of ADSM servers configured in the ADINT/ADSM profile; however, ADINT/ADSM keeps track of all backups regardless of which server they are stored on.

Reporting/Tracing
For statistical purposes, ADINT/ADSM provides information about the number of bytes transferred to or from the server and the effective throughput. ADINT/ADSM can also write a backup/restore protocol/trace file to help analyze problems.
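Distributing backup data over several servers while keeping a central record of where each object went might look like the following sketch. The server names and the round-robin policy are assumptions; the actual distribution is governed by the ADINT/ADSM profile.

```python
import itertools

def distribute(objects, servers):
    """Assign each backup object to one of the configured ADSM servers in a
    round-robin fashion, and keep a catalog of where every object is stored,
    so restores can find each object regardless of which server holds it."""
    catalog = {}
    ring = itertools.cycle(servers)
    for obj in objects:
        catalog[obj] = next(ring)
    return catalog
```

For example, three objects distributed over two servers land on `adsm1`, `adsm2`, `adsm1`, and the catalog answers the "which server has it?" question that ADINT/ADSM's tracking solves.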
The following operating system levels are supported:

- AIX 4.1.5 and above
- Solaris 2.5.1 and above
- Other UNIX flavors, built on inquiry
- Windows NT 4.0 (at least Service Pack 4)

The following API client level is supported: ADSM API client 2.1.0.5 and above.
Built for a distributed computing environment, DB2 UDB Enterprise Edition includes backup and recovery utilities, a control center, and tools, which have been extended by SAP for DB2 customers, for example, the SAP-DB2admin tool for SAP-specific database administration tasks.
DB2 UDB provides the following backup and recovery capabilities:

- Online or offline backup and restore
- Online and offline tablespace-level backup and restore
- Point-in-time roll-forward recovery
- Fast start recovery

R/3 with DB2 UDB supports ADSM and Tivoli Storage Manager in the following areas:

Online/offline backup and restore of the entire database and its tablespaces
The connection to Tivoli Storage Manager for backup and restore purposes is an intrinsic feature of DB2 UDB. All possible backup/restore variants for DB2 databases (online, offline, complete database, or selected tablespaces) are supported using ADSM or Tivoli Storage Manager as
target. These functions can be carried out by using either command line options, out of the R/3 environment, or the DB2 Control Center.

Archive/restore of log files
In an R/3 DB2 environment, log files are archived using the SAP-provided logging user-exit program (see also 3.3.2.2, Log file management concepts on page 74 for log file management concepts within DB2). As a log file data set becomes full, its content is automatically copied, or off-loaded using the program BRARCHIVE (different from the BRARCHIVE for ORACLE offline redo log archiving), to the R/3 archive directory. From there, it can be archived to tape with SAP's standard tool or to third-party systems like ADSM or Tivoli Storage Manager. An archived log file can be retrieved from the Control Center or CCMS for a specified backup. The archive/restore of inactive log files is integrated into the SAP-DB2admin tools. Using the SAP-DB2admin tools, inactive log files are saved either to tape or to ADSM or Tivoli Storage Manager.
3.3.2.2 Log file management concepts
Within an R/3 DB2 environment, a log file can have four different states during its life cycle (see also Figure 32):
Online active
The log file is currently being used to log transactions. The location of the log file is the database LOGPATH. This can be either a file system path or a raw device.

Online retained
The log file is no longer being used, but it contains transactions with data pages that have not yet been written from the buffer pool to disk. The location of the log file is the database LOGPATH and DB2DB6_ARCHIVE_PATH (that is, the log file was archived by the DB2 user-exit).

Offline retained
The log file is no longer being used and does not contain transactions with unwritten data pages. The location of the log file is DB2DB6_ARCHIVE_PATH.

Archived retained
The log file is archived. The location of the log file is ADSM or Tivoli Storage Manager, or tape (that is, it has been saved by BRARCHIVE).
Figure 32 (diagram not reproduced) illustrates this life cycle: log files (S0000005.LOG through S0000008.LOG) move from online active through online retained and offline retained to archived retained on Tivoli Storage Manager or tape, with paths (A) and (B) corresponding to the DB2 user-exit and to BRARCHIVE/BRRESTORE.
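The four states and their transitions can be modeled as a small state machine. This is a simplified sketch: the state names follow the text, but the event names ("log full", "pages flushed", "brarchive") are assumed labels for the triggers, not DB2 terminology.

```python
# (current state, event) -> next state, following the life cycle in the text
TRANSITIONS = {
    ("online active", "log full"): "online retained",        # user-exit archives it
    ("online retained", "pages flushed"): "offline retained",
    ("offline retained", "brarchive"): "archived retained",  # saved to TSM or tape
}

def next_state(state, event):
    """Advance a log file through its life cycle; an event that is not valid
    in the current state leaves the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

Walking a log file through the three events in order ends in the archived retained state; events out of order (for example, trying to run BRARCHIVE on an online active log) have no effect, which reflects that each state change has a single legal trigger.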
(A) As illustrated in Figure 32, the DB2 user-exit program is able to automatically store log files that go offline into the DB2DB6_ARCHIVE_PATH. When requested by a ROLLFORWARD, it retrieves offline retained logs from the DB2DB6_RETRIEVE_PATH.

(B) The BRRESTORE program retrieves archived retained log files from the archive repository and places them in the DB2DB6_RETRIEVE_PATH. BRARCHIVE is used to back up offline log files into an archive; ADSM or Tivoli Storage Manager is the preferred solution. Both BRARCHIVE and BRRESTORE are accessed from the DB2CC.
placed in scan queues. During scanning, description files are generated in order to assign the documents to a certain scan queue. As an alternative to local scanning, the preferred scanning program can be used to scan inbound documents into a defined VisualInfo workbasket. CommonStore supports three different strategies for archiving inbound documents:
Early archiving
In this case, the inbound documents are archived before the business objects are created in the R/3 system. The inbound documents are classified by document type and archived as early as in the mailroom. This way, the advantages of the R/3 Business Workflow can be used, and this strategy is therefore recommended by SAP.
Simultaneous archiving
The business objects are created before the inbound documents are archived. Each inbound document is marked with a unique barcode label. The data is entered into the business object from the paper form of the inbound document, which is then archived in a central archiving location. Afterwards, the archived document is connected to the corresponding R/3 business object; this is done within the so-called barcode link table within R/3.
Figure 33 (diagram not reproduced) shows the CommonStore components: the R/3 application server communicates with the CommonStore server over RFC, the CommonStore DLL communicates with the CommonStore server over sockets, and the CommonStore server connects over TCP/IP to the archive systems Tivoli Storage Manager, VisualInfo, and OnDemand.
The CommonStore server is the most important component of CommonStore; it handles the whole archiving functionality. The CommonStore server uses an RFC connection to communicate with the R/3 application server. The CommonStore client is used for archiving inbound documents and for viewing them. Inbound documents are, for example, invoices received from suppliers; a possible scenario for registering invoices is scanning. The communication between the CommonStore client and the R/3 GUI is handled by OLE calls. The CommonStore DLL handles all communication aspects; it is needed to support the SAP ArchiveLink Viewer (SAP AL-Viewer) and the CommonStore client. The R/3 application server and the CommonStore server share a transfer directory, which is used to hand over files between R/3 and CommonStore. If R/3 archives a file, the file is written to the transfer directory; afterwards, CommonStore reads this file and sends it to the appropriate archive system. A retrieve operation works just the other way around.
3.4.2.1 CommonStore Server
The CommonStore server can likewise be divided up into several different components (see Figure 34).
Figure 34 (diagram not reproduced) shows the CommonStore server components: the central ARCH-PRO process, one or more ARCH-Dispatchers, and ARCH-Agents that connect to the archive systems (Tivoli Storage Manager, VisualInfo, OnDemand), all configured through the CommonStore profile.
ARCH-PRO
The central instance of the CommonStore server is the ARCH-PRO. The ARCH-PRO invokes all other CommonStore server processes and distributes commands: it receives commands, sends them to other CommonStore server processes for execution, and observes the execution of those processes.

All other CommonStore server processes are started and stopped by the ARCH-PRO, which reads its configuration from the settings in the CommonStore profile. The ARCH-PRO communicates with all other CommonStore server processes over sockets. The only exception is the case of asynchronous commands (see the next paragraphs); then, the ARCH-PRO communicates directly with the R/3 application server over the RFC interface.
ARCH-Dispatcher(s)
The ARCH-Dispatchers are the CommonStore server programs that communicate with the R/3 application server over the RFC interface. The number of dispatchers to be started can be configured within the CommonStore profile. When a dispatcher is started, it registers with the SAP Gateway; the appropriate parameters are provided by the CommonStore profile. After a successful registration, the dispatcher is able to receive R/3 commands, which can be synchronous or asynchronous.
In the case of synchronous commands, the dispatcher routes these commands to the ARCH-PRO, which is responsible for their execution. When the execution is done, the ARCH-PRO sends a response message back to the waiting dispatcher. After the dispatcher has processed these messages, they are sent to the R/3 application server.
To prevent a bottleneck or blocking situation between R/3 and the dispatcher process (the dispatcher waits for the response of the ARCH-PRO), it is recommended to use more than one ARCH-Dispatcher, which can be set within the CommonStore profile. In the case of asynchronous commands, the dispatcher also routes the R/3 commands to the ARCH-PRO, but then immediately sends an acknowledgement message back to the application server; the dispatcher does not wait for any ARCH-PRO actions. The ARCH-PRO reports the execution of the asynchronous commands directly to the R/3 application server with RFC calls.
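The difference between the two command paths can be sketched as follows. This is an illustrative model, not CommonStore code; the command dictionary, the `arch_pro_execute` callable, and the `send_to_r3` callable are assumed stand-ins for the ARCH-PRO socket call and the RFC reply, respectively.

```python
def handle_command(cmd, arch_pro_execute, send_to_r3):
    """Dispatcher logic: a synchronous command blocks until the ARCH-PRO
    returns a result, which is then sent to R/3; an asynchronous command is
    acknowledged immediately, and the ARCH-PRO later reports the result to
    the R/3 application server itself over RFC."""
    if cmd["mode"] == "sync":
        response = arch_pro_execute(cmd)   # dispatcher waits here
        send_to_r3(response)
        return response
    else:
        send_to_r3({"ack": cmd["id"]})     # immediate acknowledgement to R/3
        # The ARCH-PRO will execute cmd and call R/3 back directly.
        return None
```

The sketch makes the bottleneck argument from the text visible: while a synchronous command is in `arch_pro_execute`, that dispatcher can serve no other R/3 request, which is why running several dispatchers is recommended.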
ARCH-Agent(s)
The ARCH-Agents are the processes that execute all archive and retrieve requests. All started agents are controlled by the ARCH-PRO, and all agents communicate only with the ARCH-PRO. The number of agents to be started can be configured within the CommonStore profile.
A typical scenario for the use of the CommonStore server is the archiving of reorganization data within R/3. Over time, the volume of data stored in the R/3 database can expand very quickly. This large amount of data then leads to performance bottlenecks (both for the application and for backup/restore performance) and creates the need for additional hardware resources. Therefore, data that is no longer needed by the applications (reorganization data) should be removed from the database. But simply deleting the data is often not possible, since legal and/or business requirements demand at least read access for a given time period. So the data has to be removed from the database and transferred to an external archiving system. This transfer of reorganization data is done by the CommonStore server: the R/3 system sends instructions over the RFC interface to the ARCH-Dispatcher; these instructions are routed to the ARCH-PRO, which distributes them to the corresponding ARCH-Agent(s), where they are processed. CommonStore still allows read access to the archived data, which is now physically stored in an external archive system.
3.4.2.2 CommonStore client
The CommonStore client handles the complete communication of the R/3 GUI with the archiving system on 32-bit Windows systems. It is needed for handling SAP inbound documents, that is, for archiving, retrieving, and viewing them.
The CommonStore client needs a running CommonStore server for processing retrieve and view requests. When the CommonStore client needs different connection data, it contacts the CommonStore server, which in turn sends the needed archive or connection parameters to the R/3 system. The CommonStore client program is used for two purposes:

- Archiving scanned documents in combination with R/3 and OLE. This is done on the scan stations. The CommonStore client supports early, simultaneous, and late archiving.
- Viewing R/3 documents with a customized external viewer (for example, Wang Image Viewer, Kodak Viewer, VisualInfo Viewer, Drag and View) using OLE. This is done on all stations where an R/3 GUI runs and viewing archived documents is needed.

The CommonStore client is available in two variations:

- CommonStore Client/View - only for viewing
- CommonStore Client/Scanstation - for viewing and archiving scanned documents. It includes the functionality of the CommonStore Client/View and, in addition, can handle archiving.

The CommonStore client has been divided into these two variations because the archiving feature is not needed on the viewing stations. The CommonStore Client/View communicates with:

- R/3, using OLE
- The CommonStore server, using sockets

As shown in Figure 33, the CommonStore client makes it possible to use external viewers instead of the standard AL-Viewer provided by the R/3 system. Documents can be retrieved and viewed from the supported archiving systems:

- Tivoli Storage Manager
- VisualInfo
- OnDemand

The CommonStore Client/Scanstation can handle scan queues:

- From the file system
- From the Kofax Ascent Capture Release Module
- From the corresponding VisualInfo workbaskets

The CommonStore Client/Scanstation can archive documents (using the CommonStore server) to the supported archiving systems:

- ADSM
- VisualInfo
- OnDemand

When the early or simultaneous archiving strategy is used, the CommonStore Client/Scanstation archives scanned documents when contacted by the R/3 system over the OLE interface. When the late archiving strategy is used, scanned documents are archived when the user of the scan station initiates this in the CommonStore Client/Scanstation program.
Part 2. Setup
Figure 35 (diagram not reproduced) shows the customer environment: the production database server P01 (RS/6000 S70) and the quality assurance database server Q01 (RS/6000 S70) in an HACMP takeover configuration sharing a storage subsystem, the development database server T01 (RS/6000 H50), and a Tivoli Storage Manager server (RS/6000 H50) with an attached 3575-L32 tape library.
The customer has three R/3 systems:

- The production R/3 system P01
- The quality assurance R/3 system Q01
- The test and development system T01
All of these R/3 servers are running the operating system AIX 4.3.2; all three R/3 systems are of release 4.0B and are running on an ORACLE 8.0.5 database.

For the production R/3 system P01, an IBM 7017-S70 (with 8 processors and 12 GByte of main memory) hosts the database and the central instance. The database of the production R/3 system has a size of 180 GByte; the data is stored on external disk subsystems. The database files are in a RAID 1 mirrored configuration on the storage subsystem, so a total disk capacity of 360 GByte is needed. The R/3 system P01 has four additional application servers of type IBM 7026-H50, with 4 processors and 2 GByte of main memory each.

The quality assurance R/3 system Q01 has the same database size of 180 GByte as the production system, but these data are not mirrored. The R/3 system Q01 has no additional application servers. The database, together with the central instance, resides on an IBM 7017-S70 (with 8 processors and 12 GByte of main memory). The symmetry of the layout of the database and central instance servers for the R/3 systems P01 and Q01 stems from high-availability considerations: the switchover software HACMP for AIX is installed on both nodes. If the (primary) database/central instance server for the R/3 system P01 fails, the quality assurance system Q01 is stopped and the R/3 system P01 is started on the takeover node. Since the storage subsystem is shared between the two nodes, the takeover node can access all disks holding the data of the database of R/3 system P01.

The test and development R/3 system T01 is located on an IBM 7024-H50 (with 4 processors and 2 GByte of main memory). Since this database only contains customizations, its size is 40 GByte.

The application servers of the production R/3 system P01 are connected to the central instance and database by a switched 100 MBit Fast Ethernet.
The graphical user interfaces of the clients are connected to the application servers by an additional 100 MBit Fast Ethernet (which is not shown in Figure 35). All R/3 systems are backed up to a Tivoli Storage Manager server located on an IBM 7024-H50. The Tivoli Storage Manager server has an IBM 3575-L32 tape library with 4 IBM Magstar 3570 C-series tape drives attached. The tape library holds a maximum of 324 tapes with a net capacity of 5 GByte each. Since a compression algorithm is applied during the backup, the amount of file data stored on one cartridge is 10-15 GByte; so, the total capacity of active tapes in the tape library is about 3.2-4.8 TByte. Each tape drive sustains a native rate of 7 MByte/s (with maximum compression, a rate of 15 MByte/s). Since the backup of the R/3 system is written on two drives in parallel, the data transfer between the R/3 database and the Tivoli Storage Manager server uses two 100 MBit Fast Ethernet networks. (One 100 MBit Fast Ethernet has a theoretical upper limit of 12.5 MByte/s.)
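The capacity and throughput figures above follow from simple arithmetic, sketched below. The 10-15 GByte per-cartridge range and the 12.5 MByte/s Fast Ethernet limit are taken from the text; the helper function names are ours.

```python
import math

def library_capacity_tb(tapes, gb_per_tape):
    """Total library capacity in TByte for a given effective per-cartridge
    capacity (after compression)."""
    return tapes * gb_per_tape / 1000.0

def networks_needed(drives, mb_per_s_per_drive, net_limit_mb_per_s=12.5):
    """Number of Fast Ethernet links needed to feed the tape drives at
    their native rate."""
    return math.ceil(drives * mb_per_s_per_drive / net_limit_mb_per_s)

# 324 cartridges at 10-15 GByte each give roughly 3.2-4.9 TByte of capacity,
# and two drives at a native 7 MByte/s (14 MByte/s total) exceed one Fast
# Ethernet's 12.5 MByte/s limit, hence the two networks.
```

Note that 324 x 15 GByte works out to about 4.9 TByte; the text's 4.8 TByte figure rounds this down slightly.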
involves managing multiple R/3 systems (development, quality assurance, and production). Business requirements related to data management can be one or more of the following:

- Data protection
- Availability
- Recovery
- Change management and quality assurance
- Ease of management
- Space management
R/3 system   # of backups to keep
P01          10 generations
Q01          3 generations
T01          10 generations
The main amount of data is due to the backup of the database of the production system. For the production system, the amount of data for saving the archived redo logs is about 2 GB/day, and the redo logs are kept for 10 days. To ensure the existence of one valid copy in case of a tape failure, two copies are saved on two different tapes. Since there is less user activity on the two other R/3 systems, their redo logs can be neglected. The total amount of data that has to be kept for the R/3 systems is about 3 TB. Since the customer would like to have a complete copy of this data on a second set of tapes, the total amount to consider is about 6 TB of data. The second copy of the data is taken out of the library on a daily basis, and the corresponding tapes are transferred to an off-site location. The customer would like to minimize any impact of performing backups on the production system; this includes minimizing the downtime and offloading the processing capacity requirements for a backup to another system.
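The sizing in this section is simple arithmetic, sketched below. The 3 TB total for all backup generations is taken directly from the text; the function name is ours.

```python
def redo_log_storage_gb(gb_per_day, retention_days, copies):
    """Storage needed for archived redo logs kept for a retention period,
    with a given number of redundant tape copies."""
    return gb_per_day * retention_days * copies

# Production redo logs: 2 GB/day kept for 10 days, in two copies -> 40 GB.
redo_gb = redo_log_storage_gb(2, 10, 2)

# All R/3 backup generations amount to about 3 TB; duplicating them on a
# second, off-site set of tapes doubles the total to about 6 TB.
total_tb = 3 * 2
```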
4.2.2 Availability
In a typical R/3 environment, availability considerations are mainly focused on the production database. Data availability requirements for the production system include data protection at the storage subsystem level through a RAID-1/RAID-5 solution to protect from disk failure. The customer also requires a separate server with the same performance as the production system to provide continuous availability for the production system, in case of failure due to hardware (such as CPU, memory, I/O adapters, and network adapters).
4.2.3 Recovery
Recovery needs for the business are the ability to recover from:

- A natural disaster at the primary site
- Hardware failure
- Loss of data due to a disk failure
- Data loss due to user error

There are different interpretations of a disaster, based on the customer's requirements. The most common definition is a natural disaster causing the entire data center to be unavailable. The customer's requirement to recover from such a disaster is to have a remote location duplicating the production environment, so that the business can be back online in the shortest acceptable time frame with no loss of any committed transactions. Recovery from hardware failure needs to focus on the single point of failure - the production database server. Some of the requirements to recover from hardware failure have already been discussed in 4.2.2, Availability on page 88. Recovery from data loss on the production database due to disk failure or user errors requires a database recovery. This implies that a mechanism to quickly build a copy of the production data as it was before the failure needs to be implemented.
The figure at this point (chart not reproduced) projects the database size over the years 2000 through 2004 under the following assumptions: an initial database size of 50 GB, linear data growth of 100 GB/year, and 350 GByte of data not needed after five years.
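The projection behind the growth chart can be reproduced from the stated assumptions (initial size 50 GB, linear growth of 100 GB/year). The interpretation that the 350 GByte of unneeded data is subtracted from the five-year size is ours.

```python
def projected_size_gb(years, initial_gb=50, growth_gb_per_year=100):
    """Database size in GB after a number of years of linear growth."""
    return initial_gb + growth_gb_per_year * years

# After five years the database reaches 550 GB; 350 GByte of that is data
# no longer needed by the applications, and thus a candidate for archiving,
# which would leave about 200 GB in the production database.
after_five = projected_size_gb(5)
remaining = after_five - 350
```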
Reducing the size of the database by means of archiving not only has an impact on future capacity planning for the system, but it also has an impact on system administration.
Business requirement: Data protection
Solution: Hardware redundancy through RAID 1 mirroring or RAID 5 technology. Take regular backups to offline media using IBM's Tivoli Storage Manager or similar products.
References: -

Business requirement: System availability
Solution: System availability is provided through switchover software (HACMP for AIX) and a hot standby server to take over in case of system or network failure. Minimize the downtime for backups by using split mirroring, remote mirroring, or standby database configurations.
References: Chapter 11, R/3 warm standby using Tivoli Storage Manager on page 283; Chapter 10, Split mirror implementation in R/3 on page 253.
Business requirement: Disaster recovery
Solution: Transfer copies of tapes containing backups of the production environment to a remote location. The production environment must be duplicated at the remote location, and disaster recovery procedures must be established and tested using the backed-up data. The time to recover from a disaster can be minimized by keeping as current a copy of the production data as possible through the use of split mirroring, remote mirroring, or standby databases. A local or remote mirroring solution in combination with a standby database provides for recovery up to the last committed transaction; in the case of remote mirroring, however, this may require a trade-off between online transaction performance and disaster recovery. A standby database kept in recovery mode with a delay in applying the archived log files can also help in fast recovery from user errors, although early detection of the user error is crucial to fast recovery.
References: Chapter 11, R/3 warm standby using Tivoli Storage Manager on page 283; Chapter 10, Split mirror implementation in R/3 on page 253.

Business requirement: Change management
Solution: Upgrades to the production system can be performed quickly by cloning the production system through the use of split mirroring. New systems containing a copy of production data can be created through split mirroring techniques; these systems can be used for integration testing and stress testing.
References: Chapter 9, System cloning using Tivoli Storage Manager on page 229; Chapter 10, Split mirror implementation in R/3 on page 253.

Business requirement: Space management
Solution: Data no longer required to be in the production system can be archived by using archiving tools such as IBM's CommonStore or IXOS-Archive.
References: -
Business requirement: Data manageability
Solution: As far as possible, tools and interfaces provided by SAP should be used to manage an R/3 system. Some of these tools provide a transparent interface to external storage management programs like Tivoli Storage Manager. Tivoli Storage Manager provides centralized storage management and automatic volume management through the use of large-capacity tape libraries. Tivoli Data Protection for R/3 provides a transparent interface to the R/3 backup and restore utilities; similar tools exploit the BACKINT interface provided by SAP. The CommonStore product uses SAP R/3's ArchiveLink interface to the SAP Archive Development Kit (ADK); this provides transparent archival of the required R/3 data objects. The IBM BackTools software provides backup performance and monitoring information for an R/3 system.
References: Chapter 5, Tivoli Storage Manager setup on page 99.
The figure at this point (diagram not reproduced) shows the single-site implementation: the production and quality assurance servers in an HACMP takeover configuration sharing a storage subsystem, and a Tivoli Storage Manager server (RS/6000 H50) with an attached 3995-C68 optical library and 3575-L32 tape library.
This solution does not provide for a duplication of the production environment at a second, remote location. The only disaster recovery option available is to keep a copy of the backed-up data off-site for use in case of a disaster. In this scenario, new hardware must be made available before restoring data from the copy of tapes containing the backed-up production data.

Data protection from disk failures is achieved by using a storage subsystem that provides RAID 1 mirroring or RAID 5. The Tivoli Storage Manager server, installed on a dedicated machine, is used for all backups and restores.

System availability is provided by having the quality assurance server act as a hot standby server to the production database server, to prevent a single point of failure (CPU, network, and so on). HACMP for AIX is used to coordinate this switchover. During normal operation, the hot standby server functions as the R/3 quality assurance system.

The downtime required for backups is minimized by implementing a split-mirror solution at the storage subsystem. This involves providing access to the storage subsystem from the Tivoli Storage Manager server. The data on the split disks is accessed by the Tivoli Storage Manager server by making the disks accessible at the operating system level. Once the disks are split from the primary mirror, the production database is free to resume operation in a matter of minutes. These procedures are described in Chapter 10, Split mirror implementation in R/3 on page 253.

An alternative solution for minimizing backup downtime is to use the Tivoli Storage Manager server as a standby database server. The standby solution is discussed in detail in Chapter 11, R/3 warm standby using Tivoli Storage Manager on page 283. The standby database can then be backed up periodically
using the Tivoli Storage Manager residing locally. This solution has the added advantage of providing for fast recovery from user errors.

As the customer requires minimum downtime and minimum data loss, a mirror of the production database's log files is kept current in the storage subsystem and split only in case of a recovery using the standby database. This ensures that all committed transactions are applied successfully to the standby database. The procedures to be followed are discussed in Chapter 11, R/3 warm standby using Tivoli Storage Manager on page 283.

Upgrades to the R/3 system are made on the Tivoli Storage Manager server to minimize the impact of changes and downtime on the production system. The quality assurance system is also refreshed with the latest production database copy at regular intervals, to test new customizations with production data. All this can be accomplished either by restoring the production system from tape using the Tivoli Storage Manager, or by the use of the split mirror; the split-mirror implementation would require additional disk capacity accessible by all three servers. The procedures to be followed are discussed in Chapter 9, System cloning using Tivoli Storage Manager on page 229.

R/3 data is archived from the production database and stored on the 3995-C68 optical library, which has a capacity of 1.3 TB. The IBM CommonStore product extracts the data from R/3 and passes it on to the Tivoli Storage Manager for storage in an optical storage pool. The procedures to be followed are discussed in Chapter 8, CommonStore implementation on page 189.
4.3.2.2 Dual site implementation
This section describes an implementation for a customer who requires true disaster recovery with the bare minimum downtime and data loss. Figure 38 illustrates the updated infrastructure required for this implementation, based on the customer environment discussed in 4.1, Customer environment on page 85.
Figure 38 (diagram not reproduced) shows the dual-site implementation: the primary site with its HACMP takeover configuration and storage subsystem, and a standby site with duplicated servers and its own storage subsystem.
This solution is based on the fact that the customer has dedicated a second, remotely located site for disaster recovery. The second site is configured to take over the production workload within minutes or hours. Therefore, the most critical components of the R/3 production environment (the production database and the Tivoli Storage Manager) are duplicated on similar hardware at the standby site, and the sites are connected over a high speed wide area network (WAN). All the storage components (storage subsystem, optical storage, tape libraries) and the software associated with these two servers are duplicated as well. This solution also provides for the primary site to act as the new standby after a disaster, in a failback configuration. This alternative facilitates a smooth and quick failover from one site to the other.

Data protection from disk failures is achieved by using a storage subsystem that provides RAID 1 mirroring or RAID 5. The Tivoli Storage Manager, installed on a dedicated server, is used for all backups and restores. System availability is provided by having the quality assurance server act as a hot standby to the production database server, preventing a single point of failure (CPU, network, and so on). HACMP for AIX is used to coordinate this switchover. During normal operation, the hot standby server functions as the R/3 quality assurance system.

An initial backup of the production database is taken using a split-mirror solution at the storage subsystem. The data is then backed up to offline media attached to the Tivoli Storage Manager server at the primary site. Once the disks are split from the primary mirror, the production database is free to resume operation in a matter of minutes. These procedures are described in Chapter 10, Split mirror
implementation in R/3 on page 253. A copy of the tapes is then shipped for use by the Tivoli Storage Manager at the remote site. Although it is possible to use a remote mirroring solution with a split-mirror to transfer this data to the remote site, this may impact production system performance, as database changes need to be applied to both the local and remote mirrors before they can be committed. The network bandwidth required for the large volume of data (over 200 GB) that must be transferred is also a critical factor affecting the electronic transfer of data.

An alternative solution for minimizing backup downtime on the production server is to use the standby production database server at the remote site. The standby solution is discussed in detail in Chapter 11, R/3 warm standby using Tivoli Storage Manager on page 283. The standby database is created using the tapes shipped to the Tivoli Storage Manager at the standby site. All further backups of the production database are performed using the servers at the standby site. Copies of the backup are then shipped back to the Tivoli Storage Manager at the primary site. The standby production database is kept current by applying archived log files transferred from the primary site, either by remote mirroring of just the log file directories, or through file transfer using operating system tools. This solution has the added advantage of providing fast recovery from user errors, by spacing out the application of archived log files to the standby database. As the customer requires minimum downtime and minimum data loss, a remote mirror of the production database's log files is kept current in the storage subsystem and split only in case of recovery using the standby database. This ensures that all committed transactions are applied successfully to the standby database. The procedures to be followed are discussed in Chapter 11, R/3 warm standby using Tivoli Storage Manager on page 283.
Upgrades to the R/3 system can be made on the Tivoli Storage Manager servers at either site to minimize the impact of changes and downtime on the production system. The quality assurance system is also refreshed with the latest production database copy at regular intervals to test new customizations with production data. All this can be accomplished either by restoring the production system from tape using the Tivoli Storage Manager, or by use of the split-mirror. This requires additional disk capacity accessible by all three servers at each site. The procedures to be followed are discussed in Chapter 9, System cloning using Tivoli Storage Manager on page 229.

R/3 data is archived from the production database and stored on the 3995-C68 optical library at the primary site. The IBM CommonStore product extracts the data from R/3 and passes it on to the Tivoli Storage Manager for storage in an optical storage pool. There are two 3995-C68s, one at each site, with a capacity of 1.3 TB each. Data which is archived onto optical media is then copied onto a second set of optical media and shipped to the remote location, similar to the production data backups on tape. The procedures to be followed are discussed in Chapter 8, CommonStore implementation on page 189.
[Figure: case study lab environment. The R/3 client capeverde (tr0 9.1.150.54; AIX 4.3.2, SAP R/3 4.0B, Oracle 8.0.4, Tivoli Storage Manager Client, Tivoli Data Protection for SAP, BackTools slave server, CommonStore) and the Tivoli Storage Manager server palana (tr0 9.1.150.113; AIX 4.3.2) are connected by Token Ring; palana attaches a 3570 tape library (rmt1.smc, rmt1, rmt2) and a 7133-B40 SSA disk subsystem (ssa0).]
5.1 Overview
The Tivoli Storage Manager is a client/server program that provides distributed storage management. It provides automated, policy-managed backup, archive, and hierarchical space management (HSM). The Tivoli Storage Manager setup can be divided into two parts: server and client. On the server, you configure the policy definitions and attach the storage devices that you will use. On the client, you set up the configuration and define the Tivoli Storage Manager server that you will use.

The setup in this chapter is based on the case study in Chapter 4, SAP data management case study on page 85. The policy definition is based on SAP recommendations for backup/restore of data. For this scenario we built the policy definition shown in Figure 40 and in Figure 41. Figure 40 shows the policy for our Tivoli Data Protection for R/3 environment, and Figure 41 shows the policy for CommonStore. In the following sections you will find a general explanation of the Tivoli Storage Manager objects, based on our laboratory environment:

- Data storage devices: We show the definition of the storage devices, the association with the storage pools and volumes, and the data storage device hierarchy.
- Policy: We show the definition of the policy and the objects that customize our particular business requirements.
- Clients: We show the definition of the Tivoli Storage Manager clients. Also, we show some recommendations for general business requirements.
[Figure 40 shows, among other objects: management class MC_LOG_TSM1 with copy group (type=archive, retver=9999, destination=R3_DISK_LOG1), storage pool R3_DISK_LOG1 on device class DISK with migration to R3_TAPE_LOG1; management class MC_DATA_TSM with copy group (type=archive, retver=9999, destination=R3_TSM_DATA), storage pool R3_TSM_DATA on tape volumes in library 3570LIB with Drive1 and Drive2.]
Figure 40. Policy Configuration for our case study - Tivoli Data Protection for R/3
[Figure 41: policy configuration for CommonStore. Policy domain SAP_PD_TSM, policy set SAP_PS_TSM, management class MC_CSTORE_TSM with copy group (type=archive, retver=9999, destination=R3_CSTORE); storage pool R3_CSTORE on tape volumes via device class 3570, library 3570LIB with Drive1 and Drive2.]
Storage Pool   Device Class   Usage
R3_DISK_LOG1   DISK           First destination for the first copy of the archive redo log files.
R3_DISK_LOG2   DISK           First destination for the second copy of the archive redo log files.
R3_TAPE_LOG1   3570 (TAPE)    Next storage pool destination for the first copy of the archive redo log files.
R3_TAPE_LOG2   3570 (TAPE)    Next storage pool destination for the second copy of the archive redo log files.
R3_TSM_DATA    3570 (TAPE)    Destination for the data files.
R3_CSTORE      3570 (TAPE)    Destination for the archived data from CommonStore.
You will have noticed that we created two storage pools for the archive redo log files. When you define two different storage pools for the same data, you ensure that the two copies will be on different tape cartridges. This definition was made to satisfy R/3 backup policy requirements, which call for saving two copies of the archive redo log files. For the data files this is not necessary, since you can restore an older version of a data file and apply the archived redo logs to the database. There is an important parameter when defining a storage pool: MAXSCR. It specifies the maximum number of scratch volumes that the server can request for this storage pool. A scratch volume is an empty volume that can be used to store data.
There are two important concepts about storage pools that you should pay attention to: collocation and migration. Migration is the process of moving data from one storage pool to another, normally from disk to tape. This process defines a storage hierarchy in the set of storage devices. For migration, you can define thresholds that trigger the migration process. In Table 9 you can see the storage hierarchy of the storage pools for the archive redo log files. The table shows the next storage pool that will be used when the migration thresholds are reached in the first storage pool. In Figure 40 you can see a diagram of the migration process from the primary storage pool to the next storage pool.
Table 9. Storage hierarchy of the storage pools for archive redo log files
Collocation is a process where the Tivoli Storage Manager server tries to keep the data from a single client node together on as few volumes as possible. It improves performance when you have to restore data, since the data will be concentrated on a small set of volumes and the storage device will not have to change media as often.
Management Class   Usage
MC_LOG_TSM1        First copy of archive redo logs
MC_LOG_TSM2        Second copy of archive redo logs
MC_DATA_TSM        Data files
MC_CSTORE_TSM      Archive data for CommonStore
Tivoli Data Protection for R/3 makes backups of the database. The backup is divided into two types: archive redo logs and data files. SAP recommends, for security reasons, that you back up the archive redo logs twice. In this case, you will have two copies of your archive redo logs, so even if one copy is not readable from the Tivoli Storage Manager server, you can restore the other valid copy.
Note:
The definition of different management classes does not ensure that the two copies of the archive redo log files will be on different tape cartridges; only the definition of different storage pools, associated with different management classes, will ensure it. For data files this is not necessary, since you can use an older version of a data file and recover it with a consistent set of archive redo logs. This is the reason for keeping two copies of the archive redo log files. The configuration of the Tivoli Data Protection for R/3 management classes is based on the SAP recommendation: MC_LOG_TSM1 and MC_LOG_TSM2 are for the two copies of the archive redo logs, and MC_DATA_TSM is for the data file backups. By defining two management classes for the archive redo logs, each associated with a different storage pool, you ensure that the copies will be placed on two different tape cartridges. CommonStore needs only one management class, which we called MC_CSTORE_TSM. The configuration of the copy groups for the management classes has two important parameters set: type=archive and retver=9999. The parameter type is set to archive because both Tivoli Data Protection for R/3 and CommonStore control the versions of the objects that were backed up and archived. In the section Backup by version on page 63, you can find a more detailed explanation of version control. The parameter retver specifies the number of days to keep an archive; in our case 9999 specifies an unlimited number of days. The parameter destination defines the storage pool where the data will be stored.
5.1.3 Clients
A Tivoli Storage Manager client has access to three services on the server: backup/restore, archive/retrieve, and space management. Tivoli Data Protection for R/3 and CommonStore use only the archive/retrieve service for backing up their data, because they have their own version control. For application file systems you can use the backup/restore and archive/retrieve services with the Tivoli Storage Manager backup/archive client. For the Tivoli Storage Manager client setup, you have to register the node in the Tivoli Storage Manager server. After that, you have to set up the configuration files of the client, defining which Tivoli Storage Manager server will be accessed. You have to set a Unix user environment variable that points to the correct path for these configuration files. When you register the client node on the server, you have to assign the client a policy domain, which defines its data storage policy.
For our case study environment we defined two clients: capeverde and cstore. In Table 11 we show their usage:
Table 11. Node definition for our case study
Client Node   Policy Domain   Usage
capeverde     SAP_PD_TSM      This node is used for the Tivoli Storage Manager interactive client and for Tivoli Data Protection for R/3.
cstore        SAP_PD_TSM      This node is used for CommonStore.
5.1.4 Safety
Tivoli Storage Manager has a complementary product, called Tivoli Disaster Recovery Manager (DRM), that can help you prepare a disaster recovery plan. If a disaster occurs, you can use the plan to recover your applications. You can recover at an alternate site and on replacement computer hardware. You can also use the disaster recovery plan for audits to certify the recoverability of the server.
5.2.1.1 General prerequisites We assume that a successful installation of a Tivoli Storage Manager server was already done, which means that the following steps have been completed:
- Tivoli Storage Manager Server 3.7 or higher installed, with the latest PTFs, on an RS/6000 with AIX 4.3.1 or higher
- Options and system files configured
- Initial space for database and recovery log volumes created
- Licenses registered
- Default device class and library configured
- Default storage pool configured
- Default policy configured
Note:
You will find the latest code fixes for Tivoli Storage Manager on FTP servers:
ftp://index.storsys.ibm.com
When setting up the system file (dsm.sys), you should pay particular attention to the communications options. The option COMMMETHOD defines the communication method used to provide connectivity between client and server. This method must also be set correctly on the client; otherwise, you will have communications problems. The options for the communication method are shared memory, TCP/IP, and SNA LU6.2. For more information about Tivoli Storage Manager installation, upgrade, and requirements, consult the books Tivoli Storage Manager AIX Quick Start, GC35-0367, and Tivoli Storage Manager for AIX Administrator's Guide, GC35-0388, and the redbook Getting Started with ADSM: A Practical Implementation Guide, SG24-5416. The default policy and storage pool definitions are used for the backup and restore of file systems. For other business requirements it will be necessary to configure a specific policy and storage pool.
5.2.1.2 Special configurations Before starting the installation, there are important performance issues that should be considered. The choice of the server machine, the network topology, and the use of an RS/6000 SP can all improve the performance of your system by avoiding performance bottlenecks.
Host for Tivoli Storage Manager Server The Tivoli Storage Manager server should be installed on a dedicated machine. This avoids contention for CPU and disk I/O with other applications. In a single SAP system landscape, a single Tivoli Storage Manager server will be sufficient. If you plan to use it to back up and restore other clients, you may consider installing it on a faster machine or using multiple Tivoli Storage Manager servers. Before considering a second server, we recommend upgrading your current server hardware, since it is easier and cheaper to have only one Tivoli Storage Manager server.
Network topology Network topologies such as Ethernet, Token Ring, FDDI, Fast Ethernet, and ATM all work well with Tivoli Storage Manager. Each has its strengths and weaknesses. In general, choose the fastest network topology you can afford, such as FDDI or Fast Ethernet, since it will decrease the time for backup and restore operations. For client backups other than R/3, consider using Tivoli Storage Manager compression; in this case, be prepared for performance bottlenecks on the client machines. Tivoli Storage Manager supports multiple network adapters, increasing server throughput by providing multiple connections to the same network, or to several physically distinct networks, with the same server. RS/6000 SP environment In an RS/6000 SP environment it is faster to use one SP node as the Tivoli Storage Manager server. The use of a High Performance Switch network will improve backup/restore performance on a Tivoli Storage Manager server.
5.2.1.3 Storage devices setup In this part we explain the setup of the storage devices for our case study. In Figure 40 on page 100, two storage devices are shown: disks and a 3570 tape library. Tivoli Storage Manager uses these storage devices by defining a device class for each. You cannot define a device class for disk, because it is already defined internally by Tivoli Storage Manager and named DISK. For the 3570 tapes, or other storage devices such as optical disks and robots, you must define a device class in Tivoli Storage Manager before use.
To define the device class for the 3570 tape library in Tivoli Storage Manager, follow these steps: 1. Define a library for a collection of one or more drives (this could be a robotic device).
For our case study, the library was called 3570LIB, and it was a SCSI device defined at /dev/rmt1.smc. 2. Assign the device drives to the library.
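The library definition itself for step 1 is not reproduced in the text; it would look something like the following (LIBTYPE=SCSI is an assumption based on the SCSI device described above):

tsm> DEFINE LIBRARY 3570LIB LIBTYPE=SCSI DEVICE=/dev/rmt1.smc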
tsm> DEFINE DRIVE 3570LIB DRIVE1 DEVICE=/dev/rmt1 ELEMENT=16 tsm> DEFINE DRIVE 3570LIB DRIVE2 DEVICE=/dev/rmt2 ELEMENT=17
In this example, we define two drives for the library 3570LIB, at /dev/rmt1 and /dev/rmt2. To set the ELEMENT parameter you have to follow the specification of your storage device; in our case we used the configuration shown in the book IBM SCSI Tape Drive, Medium Changer, and Library Device Drivers - Installation and User's Guide, GC35-0154. 3. Define a device class for the library.
We defined a device class, called 3570, for the storage device type 3570 and the library 3570LIB.
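The corresponding command is not shown in the text; it would look something like:

tsm> DEFINE DEVCLASS 3570 DEVTYPE=3570 LIBRARY=3570LIB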
In the storage pool setup, you define the storage hierarchy for your environment. In our case study, we defined a storage hierarchy for the backup of the archive redo logs. See Table 12. First, we defined the tape storage pools that are the final destination for the archive redo log files, R3_TAPE_LOG1 and R3_TAPE_LOG2. We then defined the disk storage pools that receive these files first, R3_DISK_LOG1 and R3_DISK_LOG2, each with its corresponding tape pool defined as the next storage pool.
Table 12. Storage hierarchy of the storage pools for archive redo log files
The next storage pool (tape) is used when the space used in the primary storage pool (disk) reaches the high migration threshold. The Tivoli Storage Manager then migrates data from the primary storage pool to the secondary storage pool until the low migration threshold is reached. In our example, we did not define the high and low migration thresholds for the storage pools; the default values of 90% and 70%, respectively, were used.
Storage Pool   Device Class   Parameters
R3_TAPE_LOG1   3570           MAXSCRATCH=10
R3_DISK_LOG1   DISK           NEXTSTGPOOL=R3_TAPE_LOG1
R3_TAPE_LOG2   3570           MAXSCRATCH=10
R3_DISK_LOG2   DISK           NEXTSTGPOOL=R3_TAPE_LOG2
R3_TSM_DATA    3570           MAXSCRATCH=10
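The DEFINE STGPOOL commands behind this hierarchy are not reproduced in the text; they would look something like the following sketch (pool and device class names are those of our case study):

tsm> DEFINE STGPOOL R3_TAPE_LOG1 3570 MAXSCRATCH=10
tsm> DEFINE STGPOOL R3_DISK_LOG1 DISK NEXTSTGPOOL=R3_TAPE_LOG1
tsm> DEFINE STGPOOL R3_TAPE_LOG2 3570 MAXSCRATCH=10
tsm> DEFINE STGPOOL R3_DISK_LOG2 DISK NEXTSTGPOOL=R3_TAPE_LOG2
tsm> DEFINE STGPOOL R3_TSM_DATA 3570 MAXSCRATCH=10

For the DISK pools, volumes must additionally be created and assigned with DEFINE VOLUME, since disk pools do not use scratch volumes.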
The definition of the thresholds is an important issue when you are defining the policy for your environment. The migration process should not interfere with any backup window of your system landscape, and you have to choose threshold values with this issue in mind. Another way to avoid this interference is to set the high threshold to 100% and force migration with the Tivoli Storage Manager scheduler when no other backup is running. If you decide to use high thresholds, you should pay attention to the size of the disks. There must be enough space to store the archive redo log files; otherwise the database can be forced to stop if your system cannot back up these files. Keeping ample space in the primary storage pool decreases the time needed to recover the files, since primary (disk) pools are usually faster than secondary (tape) pools.
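A forced migration, as mentioned above, can be triggered by temporarily lowering the thresholds. The following sketch (the specific values are illustrative assumptions) shows the idea for the pool R3_DISK_LOG1:

tsm> UPDATE STGPOOL R3_DISK_LOG1 HIGHMIG=0 LOWMIG=0
tsm> UPDATE STGPOOL R3_DISK_LOG1 HIGHMIG=100 LOWMIG=70

The first command starts migration of the disk pool immediately; the second restores the thresholds afterwards. Both commands could be run from administrative schedules (DEFINE SCHEDULE ... TYPE=ADMINISTRATIVE CMD="...") placed in a quiet period outside the backup window.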
5.2.2.2 Policy definition For the Tivoli Data Protection for R/3 policy definition, you have to define the policy domain, the policy set, the management classes, and the copy groups. After the definition phase you need to assign a default management class, then validate and activate the policy set.
For the policy definition for our case study, you can follow all these steps: 1. Policy domain and policy set definition
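The commands for step 1 are not reproduced in the text; they would look something like:

tsm> DEFINE DOMAIN SAP_PD_TSM
tsm> DEFINE POLICYSET SAP_PD_TSM SAP_PS_TSM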
2. Management class definition for data files, archive redo logs, and copy of archive redo logs
tsm> DEFINE MGMTCLASS SAP_PD_TSM SAP_PS_TSM MC_DATA_TSM tsm> DEFINE MGMTCLASS SAP_PD_TSM SAP_PS_TSM MC_LOG_TSM1 tsm> DEFINE MGMTCLASS SAP_PD_TSM SAP_PS_TSM MC_LOG_TSM2
Note:
If you are planning to use this Tivoli Storage Manager server with multiple R/3 systems, it is recommended to use different management classes for each system. At a minimum, the management classes for your production system should be different from those for the development and quality assurance systems.
3. Copy groups definition for data files, archive redo logs, and copy of archive redo logs
tsm> DEFINE COPYGROUP SAP_PD_TSM SAP_PS_TSM MC_DATA_TSM TYPE=ARCHIVE DESTINATION=R3_TSM_DATA RETVER=9999
tsm> DEFINE COPYGROUP SAP_PD_TSM SAP_PS_TSM MC_LOG_TSM1 TYPE=ARCHIVE DESTINATION=R3_DISK_LOG1 RETVER=9999
tsm> DEFINE COPYGROUP SAP_PD_TSM SAP_PS_TSM MC_LOG_TSM2 TYPE=ARCHIVE DESTINATION=R3_DISK_LOG2 RETVER=9999
For the copy group definitions, the parameter type is set to archive and retver is set to unlimited (9999). The reason for this setting is that Tivoli Data Protection for R/3 and CommonStore have their own version control. Another important setting is that the copies of the archive redo logs have their destinations pointing first to storage pools whose device class is DISK. 4. Assigning the default management class
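The command for step 4 is not shown in the text; assuming MC_DATA_TSM is chosen as the default class (any of the defined classes could serve), it would be:

tsm> ASSIGN DEFMGMTCLASS SAP_PD_TSM SAP_PS_TSM MC_DATA_TSM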
tsm> VALIDATE POLICYSET SAP_PD_TSM SAP_PS_TSM tsm> ACTIVATE POLICYSET SAP_PD_TSM SAP_PS_TSM
5.2.2.3 Node definition All Tivoli Storage Manager client nodes must be registered in the Tivoli Storage Manager server. This is the command to register the node capeverde on the Tivoli Storage Manager server palana, for the domain SAP_PD_TSM:
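The command itself is not reproduced in this text; using the domain defined earlier (SAP_PD_TSM), and passwd as the initial password, it would be:

tsm> REGISTER NODE capeverde passwd DOMAIN=SAP_PD_TSM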
If you upgraded from a previous version of ADSM (ADSTAR Distributed Storage Manager) to Tivoli Storage Manager and use more than one drive, you should pay special attention to the new parameter MAXNUMMP. It defines the maximum number of mount points that one node can use; the default value is 1. A node can only use more than one mount point if the parameter is set to the desired number of mount points.
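For example, to allow the node to use both drives of the 3570 library (the value 2 is an assumption matching the two drives defined earlier):

tsm> UPDATE NODE capeverde MAXNUMMP=2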
The password passwd will be used only the first time that you use the Tivoli Storage Manager client, since the parameter PASSWORDACCESS=GENERATE is set in the
Tivoli Storage Manager API client file dsm.sys. For more detailed information, see 5.3.3.2, Password access generate on page 114.
5.2.2.4 Automation The Tivoli Storage Manager provides functions for automating server and client operations. You can schedule any Unix command line or shell script in the Tivoli Storage Manager server. In the Tivoli Storage Manager client node, you must have the scheduler client running. In this section we show the setup to schedule a shell script backup in the server and the client command to start the client scheduler.
Administrators must create schedules with the ACTION=COMMAND parameter to run command files. In our example, using the DEFINE SCHEDULE command, we define a schedule called BACKUP that runs a shell script called /home/tsmadm/backup.sh, which starts brbackup every day at 01:00 a.m. For more information about brbackup automation, see 5.2.2.4, Automation on page 110.
tsm> define schedule sap_pd_tsm backup description="R/3 database backup" action=command objects="/home/tsmadm/backup.sh" starttime=01:00 duration=1 durunits=hours period=1 perunits=days dayofweek=any
After this, the schedule must be associated with the client node CAPEVERDE. Note that a client node can be associated with more than one schedule.
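The association command is not reproduced in the text; it would be:

tsm> DEFINE ASSOCIATION SAP_PD_TSM BACKUP CAPEVERDE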
The procedure to start the client scheduler can be found in 5.3.2, Automation: client scheduler on page 112.
You will have to define the maximum number of scratch volumes that this storage pool is allowed to use. The maximum number of scratch volumes depends on the size of the media, the number of media available, and the amount of archived information. The amount of archived data depends on your CommonStore and R/3 configuration for data archiving. Since initially you do not know exactly how much data will be archived, we suggest that you set the maximum number of scratch volumes to a small number, for example 5, and then analyze the real requirements for archiving your data.
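Such a definition would look something like this sketch (R3_CSTORE and device class 3570 are the names used in our case study; the value 5 follows the suggestion above):

tsm> DEFINE STGPOOL R3_CSTORE 3570 MAXSCRATCH=5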
5.2.3.2 Policy definition The policy definition for CommonStore can be the same as the one defined for Tivoli Data Protection for R/3, as explained in 5.1.2, Data storage policy on page 102. In our case study we use the same policy definition shown for Tivoli Data Protection for R/3. 5.2.3.3 Node definition For CommonStore we recommend using a node name in the Tivoli Storage Manager server different from the node name defined for Tivoli Data Protection for R/3. The reason for a different name is to distinguish between R/3 data for backup and data for archiving, and to keep the two kinds of data separate on a logical and physical level. This node name is only an internal Tivoli Storage Manager server reference; the machine's hostname is the same for both Tivoli Data Protection for R/3 and CommonStore.
The policy domain is the same for Tivoli Data Protection for R/3 as explained in the previous section.
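The registration of this second node is not shown in the text; with passwd as an initial password of your choice, it would be:

tsm> REGISTER NODE cstore passwd DOMAIN=SAP_PD_TSM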
Storage Manager Interactive Client, Tivoli Data Protection for R/3, or CommonStore. The setup for the Tivoli Storage Manager client is done by changing the files dsm.opt and dsm.sys. This setup is shown in the next part, together with the specific Tivoli Storage Manager client setup for Tivoli Data Protection for R/3. Then, for the Tivoli Storage Manager client setup, you can follow this procedure. After this procedure, you have to configure the include/exclude file. This file defines which files the Tivoli Storage Manager client will include or exclude in any backup, archive, or hierarchical storage operation. You can see an example of this file in the appendix; see E.3, Include/Exclude List: inclexcl.lst on page 357. The Tivoli Storage Manager API uses its own environment variables to locate files. This permits API applications to use different files from those that the interactive client uses. For the Unix users that will use this API, you should set the variables listed in Table 13.
Table 13. Tivoli Storage Manager API environment variables

Environment Variable   Usage
DSMI_CONFIG            The fully qualified name of the client options file.
DSMI_DIR               Points to the path containing dsm.sys, dsmtca, the en_US subdirectory, and any other NLS language; the en_US subdirectory must contain dsmclientV3.cat. Example: /usr/tivoli/tsm/client/ba/bin
DSMI_LOG               Points to the directory for the API error log. Example: /tmp
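For a Unix user, setting these variables amounts to three exports; the sketch below assumes the default AIX client install location and the values from Table 13:

```shell
# Assumed default AIX install paths for the TSM API client files:
export DSMI_CONFIG=/usr/tivoli/tsm/client/ba/bin/dsm.opt
export DSMI_DIR=/usr/tivoli/tsm/client/ba/bin
export DSMI_LOG=/tmp
```

These lines would typically go in the profile of the Unix user that runs the API application (for example, the R/3 database administration user).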
Also in the client setup, you should pay particular attention to setting up the system file (dsm.sys). The option COMMMETHOD defines the communication method used to provide connectivity with the server, and it must be configured to match the server's setup.
Note:
The file dsm.sys exists only on Unix machines. For other operating systems, consult the respective documentation.
You can choose to start the client scheduler when the operating system starts, or start it at any appropriate time. For example, an AIX client can include the scheduler command in the /etc/inittab file so that the client scheduler starts when the operating system starts, as you can see in Figure 42.
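A typical way to start the scheduler manually is the following sketch (the nohup form, the inittab label, and the /usr/bin/dsmc path are illustrative assumptions; Figure 42 shows the book's actual example):

nohup dsmc schedule > /dev/null 2>&1 &

or, as an /etc/inittab entry:

tsm::once:/usr/bin/dsmc schedule > /dev/null 2>&1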
5.3.3 Specific API setup for Tivoli Data Protection for R/3
For the Tivoli Storage Manager API setup, you have to configure two files: dsm.opt and dsm.sys. These two files can be found in the path /usr/tivoli/tsm/client/ba/bin. In the file dsm.opt you set the options for the Tivoli Storage Manager client and for Tivoli Data Protection for R/3. You have to set the values shown in Figure 43.
[Figure 43: dsm.opt option values for Tivoli Data Protection for R/3; several options are left at their defaults.]
In dsm.sys, you set up the Tivoli Storage Manager system configuration. An example from our case study is shown in Figure 44. This example shows the server stanza, called palana1, for Tivoli Data Protection for R/3.
SERVERNAME        palana1
COMMMETHOD        TCPIP
TCPPORT           1500
TCPSERVERADDRESS  9.1.150.113
TCPWINDOWSIZE     640
TCPBUFFSIZE       32
TCPNODELAY        Yes
LARGECOMMBUF      Yes
SCHEDMODE         PROMPTED
NODENAME          CAPEVERDE
PASSWORDACCESS    GENERATE
COMPRESSION       OFF
INCLEXCL          /usr/tivoli/tsm/client/bin/inclexcl.lst
In the example dsm.sys file, there are two important parameters: PASSWORDACCESS and COMPRESSION. The next two sections explain their purpose and configuration. The parameter INCLEXCL defines the path of the include/exclude file for the Tivoli Storage Manager client.
5.3.3.1 Data compression The Tivoli Storage Manager client has built-in data compression that can be used to reduce the amount of data to back up (parameter COMPRESSION ON). Another advantage of using data compression is reduced network traffic. The problem with Tivoli Storage Manager data compression is that it is CPU-intensive. To address this, Tivoli Data Protection for R/3 has its own method of data compression for R/3 database backups. This compression is lighter and should be used instead of the Tivoli Storage Manager client data compression. Normally we recommend that you use only the Tivoli Data Protection for R/3 data compression, but if you have a very large and fast CPU you could try running the Tivoli Storage Manager data compression in addition.
If you have tape drives with hardware compression attached to the Tivoli Storage Manager server, you might get better throughput with the hardware compression than you would with the Tivoli Storage Manager client software compression. You should be careful not to compress the data twice. In general, Tivoli Storage Manager client software compression improves performance only when network throughput is quite small.
5.3.3.2 Password access generate When you define a node in the Tivoli Storage Manager server, you have to define a password for authenticating the connection with the server. The parameter PASSWORDACCESS accepts two values: PROMPT and GENERATE. When the value is PROMPT, every time you make a connection with the server you have to enter the configured password. With GENERATE, the authentication process is different. The first time the Tivoli Storage Manager client node connects to the Tivoli Storage Manager server, you need to use the defined password. After this connection, the server generates an encoded password that is stored in a protected place on the client machine. Every new connection with the server is then made using this encoded password.
The first connection to a Tivoli Storage Manager server must be performed, from the root user, by entering the following commands:
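The commands are not reproduced in the text; any client command that contacts the server will initialize the password, for example (an assumption, using the command-line client):

# dsmc query session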
You will be prompted to enter the initial password. For more information on password options, see the book Tivoli Data Protection for R/3, SC33-6379.
5.3.4 Specific API setup for EDM Suite CommonStore for SAP
For the EDM Suite CommonStore for SAP configuration, you have to add a new server stanza, called palana2, to the file dsm.sys. CommonStore uses this stanza to find the Tivoli Storage Manager server with the correct node name. As explained before, this node name is deliberately different from the Tivoli Data Protection for R/3 node name, so that the two kinds of data stored in Tivoli Storage Manager can easily be told apart. An example from our case study is shown in Figure 45.
SERVERNAME         palana2
COMMMETHOD         TCPIP
TCPPORT            1500
TCPSERVERADDRESS   9.1.150.113
TCPWINDOWSIZE      640
TCPBUFFSIZE        32
TCPNODELAY         Yes
LARGECOMMBUF       Yes
SCHEDMODE          PROMPTED
NODENAME           CSTORE
PASSWORDACCESS     GENERATE
COMPRESSION        OFF
Notice in Figure 45 that the parameter PASSWORDACCESS is set to GENERATE in this example too. For more information, refer to 5.3.3.2, Password access generate on page 114. Remember that when you use this parameter and define a node on the Tivoli Storage Manager server, you must first make a connection using the Tivoli Storage Manager client interface to initialize the password and the node on the Tivoli Storage Manager server.
6.1.1 Prerequisites
The Tivoli Data Protection for R/3 code is typically delivered on floppy disks or CD-ROM; alternatively, the Tivoli Data Protection for R/3 files can be downloaded directly from the home page of the Enterprise Solution Development department:
http://www.de.ibm.com/ide/esd.
There are some prerequisites to verify before the backup interface program Tivoli Data Protection for R/3 can be activated. These are:
- R/3 3.0D or higher, based on an ORACLE database
- ADSM 2.1 or higher, or Tivoli Storage Manager 3.7, for server, client, and API (the respective R/3 release notes state which ADSM or Tivoli Storage Manager levels can be used)
- A UNIX operating system at the level supported by R/3 and by the ADSM or Tivoli Storage Manager client
Note:
When the Tivoli Storage Manager Backup/Archive client is installed, ensure that there is a softlink /usr/lib/libApiDS.a pointing to the libApiDS.a file in the installation directory of the Tivoli Storage Manager (/usr/tivoli/tsm/client/api/bin) API.
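The softlink requirement above can be sketched as follows. The sketch runs in scratch directories so it can be tried without root; on the real system, substitute the two /usr paths named in the note.

```shell
# Sketch: /usr/lib/libApiDS.a must point to the libApiDS.a installed
# under /usr/tivoli/tsm/client/api/bin.
apidir=$(mktemp -d)   # stands in for /usr/tivoli/tsm/client/api/bin
libdir=$(mktemp -d)   # stands in for /usr/lib
touch "$apidir/libApiDS.a"
ln -sf "$apidir/libApiDS.a" "$libdir/libApiDS.a"
# Verify that the link exists and resolves to the API library:
[ -L "$libdir/libApiDS.a" ] && echo "softlink OK"
```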
The following sections give more details about the requirements for R/3 and the Tivoli Storage Manager.
6.1.1.1 R/3 database utilities Tivoli Data Protection for R/3 Version 2 Release 4 can be used for R/3 systems with ORACLE databases based on a standard UNIX file system or raw logical volumes.
The following SAPDBA modules at these minimum levels must be installed on the R/3 database server:
- BRBACKUP 3.0D
- BRARCHIVE 3.0D
- BRCONNECT 3.0D
- BRRESTORE 3.0D
You can consult SAP support through the Online Service System (OSS) for information about which functions are compatible with the appropriate R/3 release. The proper modules can be downloaded from the SAP server; information about this can be found in OSS note 19466. A minimum SAPDBA level of 3.0F is required if the R/3 release is 3.0D or higher in combination with ORACLE on raw devices.
6.1.1.2 Tivoli Storage Manager client
The ADSM or Tivoli Storage Manager clients (the Backup/Archive client for file system backups, the API client for interface programs) have to be installed on the R/3 database server machine. At least ADSM client Version 2.1 or Tivoli Storage Manager 3.7 has to be installed. The client software is shipped with the server software.
For the latest PTFs for Tivoli Storage Manager, see 5.2.1.1, General prerequisites on page 105. TCP/IP or shared memory has to be ready for communication between the ADSM or Tivoli Storage Manager client and the ADSM or Tivoli Storage Manager server. After all the prerequisites have been met, the installation process can start. It includes copying the software to the destination directory, customizing the profile, and verifying the installation with a backup and a restore. For backup tests, use the SAP database utilities SAPDBA, BRBACKUP, and BRARCHIVE. For restore and recovery, we recommend that you use only SAPDBA.
6.1.2 Installation
In this section we discuss the installation and customization of Tivoli Data Protection for R/3 using manual setup methods. The configuration process can also be driven by the Administration Tools for Tivoli Data Protection for R/3. Further information can be found in Chapter 7, Administration Tools implementation on page 153.
6.1.2.1 System related installation steps
The installation of Tivoli Data Protection for R/3 should be done as the root user. We assume that all the necessary files reside in a temporary directory on our R/3 database server machine.
The system related installation steps are:
- Rename the executables from the Tivoli Data Protection for R/3 package and move them to the target directory.
- Set ownership and permissions of these files.
- Rename and move the Tivoli Data Protection for R/3 profile and the Tivoli Data Protection for R/3 configuration file to the home directory of the ORACLE database.
- Move the Tivoli Data Protection for R/3 catalog file to the message catalog repository.
All these steps are described in more detail in the paragraphs below.
Executables
Rename the executables, which are backint.xxx, backagent.xxx, and backfm.xxx, and move them to the directory where the R/3 executables SAPDBA, BRBACKUP, BRARCHIVE, and BRRESTORE reside. In most cases the name of this directory is /sapmnt/<SID>/exe, where SID stands for the system identifier; in the test environment that was used, this is TSM.
Ownership and permissions
Set the file ownership and permissions as follows.

chown root.dba /sapmnt/TSM/exe/backint
chown root.dba /sapmnt/TSM/exe/backagent
chown root.dba /sapmnt/TSM/exe/backfm
chmod 4550 /sapmnt/TSM/exe/backint
chmod 4550 /sapmnt/TSM/exe/backagent
chmod 4550 /sapmnt/TSM/exe/backfm
Setting the s-bit allows Tivoli Data Protection for R/3 to work without being invoked by root. For security reasons, only users who are members of the group dba can execute Tivoli Data Protection for R/3. Typically it is executed by the R/3 administrative user, which is called tsmadm in our environment. That user is a member of the dba group by default.
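The effect of mode 4550 can be sketched on a scratch file, so the commands can be tried without root (on the real system the files are /sapmnt/TSM/exe/backint, backagent, and backfm):

```shell
# Sketch: mode 4550 gives the owner read/execute with the s-bit set,
# the group read/execute, and others no access at all.
f=$(mktemp)
chmod 4550 "$f"
perms=$(ls -l "$f" | cut -c1-10)
echo "$perms"    # -r-sr-x--- : s-bit for the owner, r-x for the group dba
rm -f "$f"
```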
Note:
The two users within the R/3 environment, <SID>adm and ora<SID>, are always written in lowercase. In our test environment, these are tsmadm and oratsm.
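The naming rule above can be sketched with the same tr idiom the backup scripts later in this chapter use: the SID stays upper case, the user names are derived in lower case.

```shell
# Sketch: derive the two R/3 user names from the system identifier.
SID=TSM
sid=$(echo "$SID" | tr '[A-Z]' '[a-z]')
echo "${sid}adm"   # -> tsmadm  (R/3 administrative user)
echo "ora${sid}"   # -> oratsm  (ORACLE database user)
```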
Note:
If Tivoli Data Protection for R/3 is not installed in the /sapmnt/<SID>/exe directory it cannot be called by the R/3 database utilities.
Tivoli Data Protection for R/3 configuration and profile
Move and rename the sample Tivoli Data Protection for R/3 profile initSID.utl and the configuration file initSID.bki to the home directory of the ORACLE database, which is /oracle/TSM/dbs. It is necessary to set the appropriate file ownership and permissions for these files.
The Tivoli Data Protection for R/3 profile contains all of the keywords/parameters with which Tivoli Data Protection for R/3 can be configured. For detailed information about this see Appendix B, Tivoli Data Protection for R/3 profile on page 343. The configuration file contains some data only for internal use of Tivoli Data Protection for R/3, such as the ADSM or the Tivoli Storage Manager password in encrypted form.
Note:
Users should never change or delete this configuration file. Such activities will cause Tivoli Data Protection for R/3 to fail or to work incorrectly.
root@capeverde:/tmp>mv initSID.utl /oracle/TSM/dbs/initTSM.utl
root@capeverde:/tmp>chown oratsm.dba /oracle/TSM/dbs/initTSM.utl
root@capeverde:/tmp>chmod ug+rw /oracle/TSM/dbs/initTSM.utl
root@capeverde:/tmp>mv initSID.bki /oracle/TSM/dbs/initTSM.bki
root@capeverde:/tmp>chown oratsm.dba /oracle/TSM/dbs/initTSM.bki
root@capeverde:/tmp>chmod ug+rw /oracle/TSM/dbs/initTSM.bki
Tivoli Data Protection for R/3 message catalog
Tivoli Data Protection for R/3 requires a message catalog for all of the messages that are printed to the screen and/or to a trace file for information and debugging purposes. Therefore, the Tivoli Data Protection for R/3 catalog file backint.cat has to be moved to the message catalog repository, which is /usr/lib/nls/msg/en_US or /usr/lib/nls/msg/En_US depending on how the language environment variable $LANG is set. This can be checked with the command env | grep LANG.
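The choice between the two repositories can be sketched as follows. The move itself is only echoed so the sketch has no side effects; on the real system, run the printed mv command as root.

```shell
# Sketch: derive the message catalog repository from $LANG, as described
# above, then show the mv command that would place backint.cat there.
lang=${LANG:-en_US}
case "$lang" in
  En_US*) msgdir=/usr/lib/nls/msg/En_US ;;
  *)      msgdir=/usr/lib/nls/msg/en_US ;;
esac
echo "mv backint.cat $msgdir/backint.cat"
```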
Having done all these steps, the installation is complete. The next focus is on how to configure the Tivoli Data Protection for R/3 profile: what the configuration possibilities are, and how the installation of Tivoli Data Protection for R/3, with respect to the profile configuration, can be tested.
6.1.2.2 Customizing profile
In this section, we discuss customizing the Tivoli Data Protection for R/3 profile and the SAPDBA profile. The profiles must be customized because, at this point, they are not ready to work with Tivoli Data Protection for R/3; the shipped Tivoli Data Protection for R/3 profile is only an example of how it could look. A description of all possible profile keywords can be found in Appendix B, Tivoli Data Protection for R/3 profile on page 343.
The Tivoli Data Protection for R/3 profile has to be customized with some fundamental entries so that it works for all operations (backup, restore, inquire, delete). No special customizing, for example for better performance, is covered at this point; that is discussed in 6.5, Possibilities to improve backup/restore performance on page 147. We recommend that you adjust the sample profile initSID.utl, which is part of the Tivoli Data Protection for R/3 installation package and was moved in a previous step into the /oracle/TSM/dbs directory. A full description of all the Tivoli Data Protection for R/3 profile keywords can be found in Appendix B, Tivoli Data Protection for R/3 profile on page 343. Figure 46 shows which Tivoli Data Protection for R/3 profile entries had to be set or changed for the lab environment that was used.
Figure 46. Customized Tivoli Data Protection for R/3 profile initTSM.utl
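Figure 46 itself is not reproduced in this extract. Based on the keyword descriptions that follow, a minimal initTSM.utl for this lab environment might look like the sketch below; the server stanza name and the management class names are assumptions, not values from the original figure:

```
BACKUPIDPREFIX   TSM___
MAX_SESSIONS     1
BACKAGENT        /sapmnt/TSM/exe/backagent
CONFIG_FILE      /oracle/TSM/dbs/initTSM.bki
SERVER           server_a           # stanza name from dsm.sys (assumption)
  SESSIONS           1
  BRBACKUPMGTCLASS   MDB            # management class name (assumption)
  BRARCHIVEMGTCLASS  MLOG           # management class name (assumption)
```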
The keywords on the left side of the given Tivoli Data Protection for R/3 profile have the following semantics:
- BACKUPIDPREFIX - Prefix of the backup ID, which is used for communications with SAPDBA. Default is SAP___.
- MAX_VERSIONS - Defines the maximum number of database backup versions to be kept in backup storage. Default is 0, meaning that Tivoli Data Protection for R/3 versioning is disabled.
- MAX_SESSIONS - Total number of parallel sessions established by Tivoli Data Protection for R/3. This number should correspond to the number of simultaneously available tape drives specified for the Tivoli Storage Manager server. Default is 1.
- BACKAGENT - Complete path and program name of the Tivoli Data Protection for R/3 agent program. The default name is backagent.
- CONFIG_FILE - Specifies the configuration file for Tivoli Data Protection for R/3.
- SERVER - Specifies the Tivoli Storage Manager server to be used. This value must correspond to an entry in the dsm.sys file (see 5.3.3, Specific API setup for Tivoli Data Protection for R/3 on page 113).
- SESSIONS - Number of Tivoli Storage Manager sessions that can be started by Tivoli Data Protection for R/3 for the corresponding server. Default is 1.
- BRBACKUPMGTCLASS - Specifies the Tivoli Storage Manager management class that Tivoli Data Protection for R/3 uses when called from BRBACKUP.
- BRARCHIVEMGTCLASS - Specifies the Tivoli Storage Manager management class that Tivoli Data Protection for R/3 uses when called from BRARCHIVE.
The configuration of the SAPDBA profile initTSM.sap, which is part of the R/3 installation, concerns three keywords within that file. These keywords are:
- backup_dev_type - Determines the backup medium that will be used (default is tape).
- backup_type - Identifies the default type of the database backup. This parameter is only used by BRBACKUP (default is offline).
- util_par_file - Specifies where the parameter file required for a backup with an external backup program is located.
For Tivoli Data Protection for R/3 there are three supported combinations of the keywords backup_dev_type and backup_type, as shown in Table 14.
Table 14. SAPDBA profile parameter combinations
Operation                                          backup_dev_type    backup_type
Offline backup                                     util_file          offline
Online backup                                      util_file          online
Online backup with individual tablespace locking   util_file_online   online
For the tests with the environment described in 4.4, Lab hardware and software overview on page 97 (online backups with individual tablespace locking, using the external backup program Tivoli Data Protection for R/3), the SAPDBA profile parameters must be set as shown in Figure 47.
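Figure 47 is not reproduced in this extract. Given Table 14 and the profile location described earlier, the relevant initTSM.sap entries for online backups with individual tablespace locking would be (a sketch):

```
backup_dev_type = util_file_online
backup_type = online
util_par_file = /oracle/TSM/dbs/initTSM.utl
```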
Before starting the test, one additional point should be handled; it depends on the selected password handling alternative. If the so-called manual password handling has been selected (Tivoli Data Protection for R/3 profile setting PASSWORDREQUIRED YES, Tivoli Storage Manager configuration file setting PASSWORDACCESS PROMPT), the user has to provide the passwords for the Tivoli Storage Manager server that were specified when the client node ID was registered at the Tivoli Storage Manager server. The password must be made known initially and again each time the password expiration period within Tivoli Storage Manager for this node is over.
Tivoli Data Protection for R/3 prompts for all required passwords that were set at the appropriate Tivoli Storage Manager servers (depending on how many Tivoli Storage Manager servers were declared in the Tivoli Data Protection for R/3 profile) and checks whether the passwords are valid.
6.1.2.3 Verifying the installation
There is no verification procedure provided with Tivoli Data Protection for R/3. To verify and test the installation, it is recommended to do a small backup procedure (tablespace backup) using SAPDBA and to start a full online backup using BRBACKUP.
It is also strongly recommended to do a restore/recovery of the whole R/3 database. A good preparation for this is first to run a complete offline backup using BRBACKUP. The step by step scenarios for these backup procedures will be shown in 6.4, R/3 backup and recovery using Tivoli Data Protection for R/3 on page 130.
6.2.1 General
Tivoli Data Protection for R/3 provides the set of functions necessary for backup/restore operations within an R/3 ORACLE database environment. Tivoli Data Protection for R/3 is invoked with a set of appropriate parameters by the R/3 database utilities, with one of the functions defined within the SAP BACKINT interface description (see 3.1.6, SAP BACKINT interface for ORACLE databases on page 56). These are:
- Backup
- Inquire
- Restore
In addition, a Tivoli Data Protection for R/3 delete function was implemented. The delete function is used as a part of the versioning control mechanism of Tivoli Data Protection for R/3 (see the section, Backup by version on page 63). The delete function is called only by Tivoli Data Protection for R/3 itself, not by the R/3 database utilities. The following sections describe the functions supported by the SAP BACKINT interface in more detail.
However, it is possible to call Tivoli Data Protection for R/3 directly to back up individual files. See the following example:
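The example itself is not reproduced in this extract. Following the BACKINT interface convention, backint would be called with the profile and the function to perform; the -f backup flag below is our assumption based on the interface description, not a transcript from the original figure:

```
tsmadm@capeverde:/home/tsmadm>backint -p /oracle/TSM/dbs/initTSM.utl -f backup
```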
Note:
The Tivoli Data Protection for R/3 profile initTSM.utl has to be specified with an absolute path, as shown above. The program prompts you to enter one or more filenames. Every successfully backed-up run (a collection of one or more files) gets its own backup ID within Tivoli Storage Manager.
Tivoli Data Protection for R/3 prompts you to enter the inquiry in one of four formats:
1. #NULL - displays all backup IDs saved so far. A typical line of the response could be #BACKUP TSM___9910291715. The backup ID in this case is TSM___9910291715 (#BACKUP does not belong to the backup ID). The first six characters are the user-defined prefix (see 6.1.2.2, Customizing profile on page 120); the next 10 digits represent the date and time when the backup was started (YYMMDDHHMM).
2. BackupID - displays all of the files which belong to this backup ID. A result looks like #BACKUP TSM___9910291316 /oracle/TSM/dbs/initTSM.utl.
3. #NULL Filename - displays all of the backup IDs corresponding to this file. Filename requires an input consisting of the path and name of the file (full filename).
4. BackupID Filename - verifies whether a specific file has been saved under a certain backup ID. Filename requires a full filename as input.
In addition, if the Tivoli Data Protection for R/3 version control mechanism is activated (profile keyword MAX_VERSIONS greater than 0), the inquire function is used to find out which backup ID(s) can be deleted from the Tivoli Storage Manager server. The delete operation is done automatically by the internal Tivoli Data Protection for R/3 delete function.
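The backup ID layout described above (a 6-character user-defined prefix followed by a 10-digit YYMMDDHHMM timestamp) can be sketched by taking the example ID apart:

```shell
# Sketch: split a backup ID into its prefix and timestamp parts.
backupid=TSM___9910291715
prefix=$(echo "$backupid" | cut -c1-6)    # TSM___
stamp=$(echo "$backupid" | cut -c7-16)    # 9910291715
echo "prefix : $prefix"
echo "date   : $(echo "$stamp" | cut -c1-2)-$(echo "$stamp" | cut -c3-4)-$(echo "$stamp" | cut -c5-6) (YY-MM-DD)"
echo "time   : $(echo "$stamp" | cut -c7-8):$(echo "$stamp" | cut -c9-10)"
```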
The restore function is normally called with an input filename. The contents of this file are the backup ID and the corresponding full filenames. However, it is also possible to call Tivoli Data Protection for R/3 directly to restore individual files.
You will be prompted to enter the backup ID and the full filenames of the files to be restored. If the files should be restored to another directory it is necessary to supply a path entry to the input.
Caution:
When a file is restored directly, any existing file with the same name is overwritten without warning. Thus, it is recommended never to restore database files directly, because this could corrupt the database.
Only users with deep knowledge of the restore and recovery mechanisms of ORACLE databases should use this tool. For restore operations, the recommendation is to fall back on the SAP-provided tools, SAPDBA and BRRESTORE.
Tivoli Data Protection for R/3 File Manager uses the standard functions provided by Tivoli Data Protection for R/3 to perform all operations. The Tivoli Data Protection for R/3 File Manager user interface consists of a split character-based window. On the left, all backup IDs found on all Tivoli Storage Manager servers configured in the Tivoli Data Protection for R/3 profile are displayed. To the right of each backup ID, Tivoli Data Protection for R/3 File Manager shows all the files that belong to that backup ID. Individual backup IDs or multiple files can be selected for restore or delete operations. The Tivoli Data Protection for R/3 File Manager has to be started (the user has to be a member of the dba group) with the path and name of the Tivoli Data Protection for R/3 profile.
tsmadm@capeverde:/home/tsmadm>backfm -p /oracle/TSM/dbs/initTSM.utl
Tivoli Data Protection for R/3 File Manager now establishes a connection to all Tivoli Storage Manager servers which were configured in the Tivoli Data Protection for R/3 profile. The next step is an automatic inquire operation on all
backup IDs. Figure 48 shows the Tivoli Data Protection for R/3 File Manager after a finished inquire procedure with a found set of backup IDs.
Figure 48. Tivoli Data Protection for R/3 File Manager - select Backup ID
By using the TAB key, the cursor goes to the right-hand panel and all filenames which belong to the inverted backup ID are displayed as seen in Figure 49.
To perform a restore operation on files, the desired files have to be marked. This can be done either with the F3 function key, to mark all the files that were found, or with the ENTER key, to mark only one desired file. Marked files can be identified by the symbol * in front of the filename. Only the marked files will be restored.
Single backup IDs can be deleted. A delete operation removes the selected backup ID and its corresponding files from the Tivoli Storage Manager server.
Note:
It is not possible to use the Tivoli Data Protection for R/3 File Manager to delete individual files from the Tivoli Storage Manager server. Only complete backup IDs, including the files that belong to them, can be deleted, because a consistent database backup, characterized by a backup ID, would become inconsistent if it were possible to delete one or more files from the set.
For every day, one or more actions (see Figure 51) can be scheduled by double-clicking into a field of a calendar day, as shown in Figure 50.
Command files (for example, shell scripts on UNIX) contain sequences of commands that are run at a scheduled start date and time. In our case the shell script starts the BRBACKUP program, and this program itself starts the database backup with Tivoli Data Protection for R/3.
#!/bin/ksh
# backup.ksh
# Sample shell script to start BRBACKUP
SID=TSM
sid=$(print $SID | tr '[A-Z]' '[a-z]')
su - ${sid}adm -c "/sapmnt/${SID}/exe/brbackup -c -t online"
For information about how to define a Tivoli Storage Manager schedule for the execution of predefined command files, how to activate it, and how to start the schedule process on the R/3 database server, see 5.2.2.4, Automation on page 110 and 5.3.2, Automation: client scheduler on page 112.
root@capeverde:/>crontab -e
For example, the crontab should start the shell script backup.ksh (see Figure 52), which simply uses the SAP database utility BRBACKUP to save the data.
#!/bin/ksh
# backup.ksh
# Sample shell script to start BRBACKUP
brbackup -u system/manager -c -t online
Figure 52. Sample shell script to start BRBACKUP
Thus, the entry in the crontab to start the script backup.ksh (see Figure 52) Monday through Friday at 11:30 p.m. looks as follows in Figure 53.
# Sample crontab entry to be included in the root crontab jobs.
# Save the database files (online backup) scheduled at 11:30 p.m.
# Monday through Friday
30 23 * * 1,2,3,4,5 /usr/bin/su - oratsm -c /oracle/TSM/sapscripts/backup.ksh
Figure 53. Sample crontab entry to start a shell script on a specified schedule
6.4 R/3 backup and recovery using Tivoli Data Protection for R/3
In this section, we show how backup and restore/recovery scenarios work within an R/3 environment in connection with Tivoli Data Protection for R/3. 6.4.1, Backup scenarios on page 130 describes two procedures for starting a backup:
- full online/offline backup with BRBACKUP
- tablespace backup with SAPDBA
6.4.2, Restore/Recovery scenario on page 134 shows a restore of an R/3 tablespace and a recovery of the R/3 database with the help of SAPDBA.
To invoke the BRBACKUP program it is necessary to log on either as tsmadm or as oratsm. An overview of the possible BRBACKUP parameters can be displayed with the command:
tsmadm@capeverde:/home/tsmadm>brbackup -help
To start an online backup of the R/3 database the following command has to be executed:
tsmadm@capeverde:/home/tsmadm>brbackup -c -t online
The parameter -c runs the backup in unattended mode, so no further user input is required during the backup operation. The parameter -t online specifies an online backup. In a similar way, an offline backup can be invoked with BRBACKUP, but it has to be considered whether the R/3 database is running or not. If the database is running, BRBACKUP must first shut down the database instance; in this case, BRBACKUP has to be started with the offline_force parameter.
tsmadm@capeverde:/home/tsmadm>brbackup -c -t offline_force
When the backup is finished, BRBACKUP starts the R/3 database instance again. If the R/3 database instance was stopped beforehand, BRBACKUP can be started with the normal offline parameter.
tsmadm@capeverde:/home/tsmadm>brbackup -c -t offline
During the backup, many messages appear on the screen. Every message has its own message code. By means of this code, the messages are classified into:
- Error messages - the last letter of the message code is an E
- Warning messages - the last letter of the message code is a W
- Information messages - the last letter of the message code is an I
All message codes have a specific prefix, which shows which program generated the message. The different prefixes are:
- ANS/ANR - ADSM or Tivoli Storage Manager server messages
- BKI - Tivoli Data Protection for R/3 messages
- BR - BRARCHIVE, BRBACKUP, or BRRESTORE messages
- ORA - ORACLE database kernel messages
After BRBACKUP has finished, it is recommended to check the backup protocol for unexpected errors. The backup protocol is located in /oracle/TSM/sapbackup (or in /oracle/TSM/saparch if it was a BRARCHIVE run). All of the screen messages are included in this file. All entries for successfully backed up files should be preceded by #SAVED. An error occurred if there are any #ERROR or #NOTFOUND messages. If an error occurred, the first step is to check the content of the protocol described above: find the error messages, and with the use of the appropriate manual, the message code gives you an explanation of the error and a recommendation for a user response.
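The E/W/I classification above can be sketched as a small shell function. BR056I is a real message shown later in this chapter; the other two codes are made-up examples for illustration.

```shell
# Sketch: classify a backup-protocol message code by its last letter.
classify() {
  case "$1" in
    *E) echo error ;;
    *W) echo warning ;;
    *I) echo information ;;
    *)  echo unknown ;;
  esac
}
classify BR056I     # information
classify BKI0001E   # error   (hypothetical code)
classify ANR0440W   # warning (hypothetical code)
```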
6.4.1.2 Tablespace backup with SAPDBA
SAPDBA is a database utility provided by SAP that offers a set of various database administration functions, among them a backup function. This function is not a new implementation of an ORACLE database backup procedure: SAPDBA starts BRBACKUP, which in turn calls Tivoli Data Protection for R/3.
Now we show a backup of the R/3 tablespace PSAPUSER1D. To invoke the SAPDBA program it is necessary to log on as ora<SID>, in our case oratsm. After SAPDBA is started, the screen looks as follows:
8.0.4.4.0   TSM   /oracle/TSM   open   40B, 7 times connected

a - Startup/Shutdown instance        h - Backup database
b - Instance information             i - Backup offline redo logs
c - Tablespace administration        j - Restore/Recovery
d - Reorganization                   k - DB check/verification
e - Export/import                    l - Show/Cleanup
f - Archive mode                     m - User and Security
g - Additional functions             n - SAP Online Help
To do a tablespace backup, the Backup database function has to be started with the h key. The following screen pops up:
______________________________________________________________________________
Backup database
______________________________________________________________________________
                                     Current value

a - Backup function                  Normal backup
b - Parameter file                   initTSM.sap
c - Backup device type               util_file_online
d - Objects for backup               all
e - Backup type                      online
g - Query only                       no
h - Special options ...
i - Standard backup                  yes
j - Backup from disk backup
k - Restart backup
l - Make part. backups compl.
As already shown in 6.1.2.2, Customizing profile on page 120, the SAPDBA profile initTSM.sap has to be customized properly with regard to:
- the backup type (online, offline)
- the backup utility parameter file
To check the backup utility parameter file, go to the Special options ... menu with the h key. (When Tivoli Data Protection for R/3 is used, it should be set to the initTSM.utl file in the /oracle/TSM/dbs directory.) We get the following screen:
a - Confirm backup parameters
d - Backup utility parameter file
e - Enter password interactively
i - Language
If everything is set properly, go back with the q key to the Backup database screen shown above. Now we have to specify the object(s) to be backed up. This can be done by choosing the Objects for backup option with the d key within the Backup database menu. The following screen appears:
Format for the desired objects for backup: <item> or <item>,<item>,...
An <item> can be:
- "all"
- "all_data"
- "sap_dir"
- "ora_dir"
- a tablespace_name
- an ORACLE file id <number> or a range of file ids <number>-<number>
- an absolute file or directory name
Enter objects for backup ==>
We enter the desired name of the tablespace for the backup, in our case psapuser1d. Back on the Backup database screen, we can start the backup procedure by selecting the Start BRBACKUP function (S key). All the other visible parameters that were not discussed should not be changed; these are default parameters set by SAPDBA. For further information about them, see the SAPDBA manual. After the backup procedure is started, BRBACKUP starts Tivoli Data Protection for R/3 to send the datafiles that belong to the tablespace PSAPUSER1D to the Tivoli Storage Manager server. The backup has finished successfully if there is a message on the screen like this:
BR056I End of database backup: bdbhowho.pnf 1999-11-08 11.19.42
BR052I BRBACKUP terminated successfully.
SAPDBA: BRBACKUP executed successfully.
Press <return> to continue ...
All the messages that scroll over the screen during the backup are written to a BRBACKUP log file, which is stored in the /oracle/TSM/sapbackup directory with a unique name as shown in the box above (bdbhowho.pnf). If errors occurred, check the content of this log file for a detailed error analysis. The backup is finished and we can leave SAPDBA now.
After starting SAPDBA the initial screen appears as shown in 6.4.1.2, Tablespace backup with SAPDBA on page 131.
Note:
Restore and recovery operations within SAPDBA can only be done in the so-called expert mode. For information on how to switch to this mode, see the SAPDBA manual.
To do a tablespace restore/recovery, the Restore/Recovery function has to be started with the j key. The following screen pops up:
a - Partial restore and complete recovery
    (Check and repair, redo logs and control files are prerequisites)
b - Full restore and recovery
    (excl. redo logs, control files incl. if required)
c - Reset database (incl. redo logs and control files)
d - Restore one tablespace
e - Restore individual file(s)
q - Return
In our case we will do only a partial restore (datafiles for the tablespace PSAPUSER1D) and afterwards a complete recovery of the R/3 database. Thus, the selection which will be made is Partial restore and complete recovery with the a key. We get the following screen:
                                Status
a - Check database              not finished
b - Find backup files           not finished
c - Restore backup files        not finished
d - Find archive files          not finished
e - Restore archive files       not finished
f - Recover database            not finished
All the functions seen in the previous screen must be done in alphabetical order, or you can use the Automatic recovery function, which is activated with the g key. The Automatic recovery function goes through all of the functions: it starts with Check database and ends with Recover database. If an error occurs during this process, the automatic recovery stops at the function where the error occurred. After the start of the automatic recovery procedure, many screen messages will be seen, most of them only for informational purposes. Sometimes confirmations like Press <return> to continue or a yes or no input are expected. If possible, follow the recommendations of SAPDBA. The first confirmation has to be made after a successful startup of the instance:
************************ SAPDBA - Startup instance TSM ************************
PL/SQL Release 8.0.4.4.0 - Production
Connected.
ORACLE instance started.
Total System Global Area
Fixed Size
Variable Size
Database Buffers
Redo Buffers
Database mounted.
Server Manager complete.
SAPDBA: Instance TSM mounted.
SAPDBA: Database mounted
1.Controlfile /oracle/TSM/sapdata1/cntrl/cntrlTSM.dbf exists
2.Controlfile /oracle/TSM/sapdata2/cntrl/cntrlTSM.dbf exists
3.Controlfile /oracle/TSM/sapdata3/cntrl/cntrlTSM.dbf exists
SAPDBA: All control files are O.K.
Database mounted
Press <return> to continue ...
After pressing the Return key, the check of the database continues:
08.11.99 13:35 --- Check of data file type and structure on disk:
SAPDBA: 30 files exist according to V$DATAFILE. 30 files are regular files.
SAPDBA: All
************ End of checking data file type and structure on disk *************
The check continues with checking the data files and valid links on the disk:
08.11.99 13:39 --- Check of data files and valid links on disk:
SAPDBA: File structure information is also derived from
        '/oracle/TSM/sapreorg/structTSM.log'.
SAPDBA: The following data file(s)/dir structure(s) are missing:
SAPDBA: Datafile '/oracle/TSM/sapdata5/user1d_1/user1d.data1' missing
*************** SAPDBA - Summary: Checking data files on disks ****************
************* End of checking data files and valid links on disk **************
SAPDBA has discovered that one data file (user1d.data1), belonging to the PSAPUSER1D tablespace, is missing. Next, a check for bad entries in the control files is done:
08.11.99 13:42 --- Check of bad entries in V$DATAFILE (controlfile):
SAPDBA: All controlfile entries are valid.
*********** End of checking bad entries in V$DATAFILE (controlfile) ***********
Press <return> to continue ...
SAPDBA now invokes a safe check, because one data file is missing. It is recommended to accept the default instruction by simply pressing the Return key. All default instructions within this process are given in square brackets in the input line.
SAPDBA: Automatic safe check because 1 file is already missing.
SAPDBA: To check the database it has to be closed and mounted again
        because 1 file is already missing. (safe check)
SAPDBA: Database is mounted and will be shut down immediately.
Do you want to continue? (y/n) [y] ==> y
The safe check verifies, among other things, the status of the tablespaces (whether any is in backup mode) and the status of the archiver (whether there is enough space in the /oracle/TSM/saparch directory).
08.11.99 13:52 --- Check of tablespace backup status:
SAPDBA: No tablespace is in backup mode which is O.K.
Press <return> to continue ...
****************** End of checking tablespace backup status *******************
SAPDBA: Archive directory '/oracle/TSM/saparch'
SAPDBA: FS space (303528 KB free) is sufficient (more than 10% free)
SAPDBA: Archiver settings and disk space are O.K.
********************* End of checking status of archiver **********************
Press <return> to continue ...
***************** SAPDBA - Checking status of online redologs *****************
SAPDBA: DB is not open -> Oracle information about redologs is invalid
        Check will be skipped here.
08.11.99 13:56 --- Check of datafiles status (ONLINE,OFFLINE):
SAPDBA: All data files are online.
Press <return> to continue ...
The safe check process ends here with a summary of the check results. Now we will continue with the Find backup files procedure.
********** End of checking data file's ONLINE status via V$DATAFILE ***********
08.11.99 13:57 --- Check of data file's RECOVERY status via V$RECOVER_FILE:
SAPDBA: Data file '/oracle/TSM/sapdata5/user1d_1/user1d.data1'
        not found - restore necessary
SAPDBA: Alert - FILE_MISSING
******* End of checking data file's RECOVERY status via V$RECOVER_FILE ********
********************** SAPDBA - SUMMARY OF CHECK RESULTS **********************
SAPDBA: 1 data file(s) need to be restored and recovered
        0 data file(s) need to be recovered only
SAPDBA: Database mounted
Do you want to
- continue the automatic recovery using function "Find backup files"
or do you want to
- quit?
(c = continue, q = quit) [c] ==> c
SAPDBA informs you that it will use the external backup utility, which in our case is Tivoli Data Protection for R/3, as defined in the SAPDBA profile.
SAPDBA: SAPDBA will use the external backup tool 'your_backup_utility'
        to find and restore files. This information was derived from current
        backup_dev_type 'util_file_online' in BRBACKUP profile.
Press <return> to continue ...
************************* SAPDBA - Find backup files **************************
SAPDBA: Trying to find backups of damaged files using the inquire function
        of your_backup_utility ...
SAPDBA: You are working with your_backup_utility. The utility will need a
        parameter file. You can specify the filename here.
Please, enter the filename [/oracle/TSM/dbs/initTSM.utl] ==>
After an inquiry of the Tivoli Storage Manager server defined in the Tivoli Data Protection for R/3 profile, Tivoli Data Protection for R/3 issues a query to find out whether the missing data file is stored on the Tivoli Storage Manager server.
/usr/sap/TSM/SYS/exe/run/backint -u TSM -f inquire -t file
    -p /oracle/TSM/dbs/initTSM.utl -i /tmp/aaaLK2wqa
Contents of commandfile "/tmp/aaaLK2wqa":
#NULL /oracle/TSM/sapdata5/user1d_1/user1d.data1
-------------------------------------------
End of file - press <return> to continue ...
BKI0005I: Start of backint program at: Mon Nov 8 14:10:38 1999.
#BACKUP TSM___9911081117 /oracle/TSM/sapdata5/user1d_1/user1d.data1
#BACKUP TSM___9911041636 /oracle/TSM/sapdata5/user1d_1/user1d.data1
#BACKUP TSM___9911031210 /oracle/TSM/sapdata5/user1d_1/user1d.data1
#BACKUP TSM___9911021119 /oracle/TSM/sapdata5/user1d_1/user1d.data1
#BACKUP TSM___9911011743 /oracle/TSM/sapdata5/user1d_1/user1d.data1
BKI0020I: End of backint program at: Mon Nov 8 14:10:39 1999.
BKI0021I: Elapsed time: 01 sec.
The find function will scan old backup logs now.
How many days should the find function go back at the most?
(q = quit) [30] ==>
After a successful find, the restore parameters for the backup files have to be specified.
______________________________________________________________________________
Specify restore parameters for backup files
______________________________________________________________________________
                                            Current value
a - BRBACKUP profile                        initTSM.sap
b - No restart of restores:                 Backup determined autom.
g - Backup utility parameter file           /oracle/TSM/dbs/initTSM.utl
i - Language                                English
As shown in the screen above, all parameters look fine, and we can continue with the restore by pressing the q key. SAPDBA shows the parameters again and asks whether to continue with the restore of the backup files. The given default input value (c = continue) should be used.
List of restore parameters                  Current value
BRBACKUP profile                            initTSM.sap
Backup utility parameter file               /oracle/TSM/dbs/initTSM.utl
Language                                    English
Do you want to
- continue the automatic recovery using function "Restore backup files"
or do you want to
- quit?
(c = continue, q = quit) [c] ==> c
The next screen is only for information. Nevertheless, it should be read carefully.
************************ SAPDBA - Restore backup files ************************
SAPDBA: Checking whether restore is about to restore a redo log,
        a control file or an init.ora file ...
SAPDBA: Checking whether restore is about to overwrite files ...
SAPDBA: Checking whether a backup file was specified for each damaged file
        and whether there is a file on a volume backup
        (using your_backup_utility) ...
SAPDBA: Restoring file(s) from the backup that was created
        on 1999-11-08 at 11.17.48 (bdbhowho.pnf) ...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
SAPDBA: BRRESTORE might report warnings. Please, ignore them, most of them
        are considered by SAPDBA using restart intelligence.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Press <return> to continue ...
Now SAPDBA starts BRRESTORE, which in turn starts Tivoli Data Protection for R/3. Here is the last chance to stop the restore (in case data files would be overwritten accidentally). Otherwise, type cont, and Tivoli Data Protection for R/3 will start the restore process.
/usr/sap/TSM/SYS/exe/run/brrestore -b bdbhowho.pnf -d util_file_online -k no
    -l E -p initTSM.sap -r /oracle/TSM/dbs/initTSM.utl
    -m /oracle/TSM/sapdata5/user1d_1/user1d.data1
BR401I BRRESTORE 4.0B (39)
BR169I Value 'util_file_online' of parameter/option backup_dev_type/-t
       ignored for 'brrestore' - 'util_file' assumed.
BR405I Start of database restore: rdbhpmvk.rsb 1999-11-08 14.24.04
BR457W Probably the database must be recovered due to partial restore.
BR280I Time stamp 1999-11-08 14.24.04
BR407I Restore of database: TSM
BR408I BRRESTORE action ID: rdbhpmvk
BR409I BRRESTORE function ID: rsb
BR411I Database file for restore: /oracle/TSM/sapdata5/user1d_1/user1d.data1
BR419I Files will be restored from backup: bdbhowho.pnf 1999-11-08 11.17.48
BR416I 1 file found to restore, size 10.008 MB.
BR421I Backup device type for restore: util_file
BR280I Time stamp 1999-11-08 14.24.04
BR256I Please enter 'cont' to continue, 'stop' to cancel the program:
cont
After finishing the restore, the procedure continues with finding the archive files in order to restore them. In our case, all the necessary archive files are still on disk, so there is no need to find and restore these files. SAPDBA has recognized this and suggests skipping this step.
SAPDBA: Restored backup file "/oracle/TSM/sapdata5/user1d_1/user1d.data1"
        successfully.
SAPDBA: Restore backup files: 1 file(s) successful, 0 file(s) with errors
Do you want to
- continue the automatic recovery using function "Find archive files"
or do you want to
- skip the functions "Find archive files" and "Restore archive files"
or do you want to
- quit?
(c = continue, s = skip, q = quit) [s] ==> s
We have reached the last point in our recovery procedure, the recovery of the R/3 database itself.
Do you want to
- continue the automatic recovery using function "Recover database"
or do you want to
- quit?
(c = continue, q = quit) [c] ==> c
After the recovery is done, SAPDBA tries to mount and open the database. If this succeeds, the recovery has finished successfully.
************************** SAPDBA - Recover database **************************
Oracle Server Manager Release 3.0.4.0.0 - Production
(c) Copyright 1997, Oracle Corporation. All Rights Reserved.
Oracle8 Enterprise Edition Release 8.0.4.4.0 - Production
PL/SQL Release 8.0.4.4.0 - Production
Connected.
Media recovery complete.
Server Manager complete.
SAPDBA: Database successfully recovered using online redo logs.
SAPDBA: Trying to startup database from mount to open ...
Oracle Server Manager Release 3.0.4.0.0 - Production
(c) Copyright 1997, Oracle Corporation. All Rights Reserved.
Oracle8 Enterprise Edition Release 8.0.4.4.0 - Production
PL/SQL Release 8.0.4.4.0 - Production
Connected.
Statement processed.
Server Manager complete.
SAPDBA: Database mounted and opened.
Press <return> to continue ...
SAPDBA: open
*******************************************************************************
*          SAPDBA: Recover database terminated successfully.                  *
*******************************************************************************
SAPDBA: After recovery the DB is open and usually O.K.
        However, there may be a block corruption due to restore
        (network, device) failures or former degradation of hard disks.
        SAPDBA can call DB Verify to verify all blocks - this might take
        about 1 hour per 5 GB.
Do you want to call DB VERIFY (y/n) [n] ==> n
Press <return> to continue ...
Automatic recovery terminated successfully.
Press <return> to continue ...
It is up to the user whether or not to run the DB Verify check for block corruption on the disks. In most cases this takes a long time, so the decision depends on how quickly the system has to be available again. It is not a must; it is the user's choice. The restore/recovery is finished, and the R/3 system should be ready for work.
6.4.2.2 Full recovery
In case of a disaster, a full recovery of the R/3 database has to be started, because the whole R/3 file system is lost. The full recovery differs from the partial recovery in that, besides the data files of the tablespaces, some redo log files or control files may also need to be restored. This means that all of the following must be restored and afterwards recovered:
- Data files
- Redo log files
- Control files
Similar to the partial restore/recovery, it is recommended to use SAPDBA to do a guided recovery of the R/3 database. SAPDBA provides a function to do a full restore/recovery.
a - Partial restore and complete recovery
    (Check and repair, redo logs and control files are prerequisites)
b - Full restore and recovery (excl. redo logs, control files incl. if required)
c - Reset database (incl. redo logs and control files)
d - Restore one tablespace
e - Restore individual file(s)
q - Return
To locate this screen, see 6.4.1.2, Tablespace backup with SAPDBA on page 131. In contrast to the partial restore, we now select the function Full restore and recovery with the b key. At the beginning of the full restore/recovery, SAPDBA recommends backing up the whole database if the database is not damaged. Otherwise, at least a backup of the redo log files is recommended.
SAPDBA: SAPDBA strongly recommends to perform a full backup of the database
        before a full restore and recovery.
        If the database is not damaged (i.e. it can be opened) you can use
        BRBACKUP to back up. Otherwise you should back it up by other means.
        Additionally, SAPDBA recommends to back up all redo logs using the
        following procedure if the database is not damaged:
        - perform as many log switches as online redo log groups
          (ALTER SYSTEM SWITCH LOGFILE;)
        - save all your offline redo logs using BRARCHIVE
        If the database is damaged you should back up the redo logs
        by other means.
Press <return> to continue ...
The next step is to select a full online/offline backup made in the past. This is done with the function Select a full online/offline backup, activated with the A key. As long as this has not been done, the restore/recovery status is disallowed (see the screen below).
DATABASE STATE   : mounted
RESTORE / RECOVER: disallowed (see status)

                                            Current setting
A - Select a full online/offline backup
b - Recover until                           now
c - Show status
D - Restore and recover

q - Return

Please select ==> A
 1 - bdbeejyv.aff  1999-10-21 12.20.41  ALL  offline  db_to_util_file
 2 - bdbejsiq.anf  1999-10-22 14.19.44  ALL  online   db_to_util_file_online
 3 - bdbfefgf.anf  1999-10-26 18.23.49  ALL  online   db_to_util_file_online
 4 - bdbfhuiv.anf  1999-10-27 11.52.45  ALL  online   db_to_util_file_online
 5 - bdbfroqz.anf  1999-10-29 11.38.01  ALL  online   db_to_util_file_online
 6 - bdbghtwj.anf  1999-11-01 17.43.37  ALL  online   db_to_util_file_online
 7 - bdbgljpc.anf  1999-11-02 11.19.32  ALL  online   db_to_util_file_online
 8 - bdbgqlxo.aff  1999-11-03 12.10.24  ALL  offline  db_to_util_file
 9 - bdbgwhhw.aff  1999-11-04 16.36.08  ALL  offline  db_to_util_file
Please, enter the number of the BRBACKUP run that you want to restore (q = quit) [9] ==> 9
We recommend that you use the newest BRBACKUP run for the restore, which in our case is selection 9. After this is done, SAPDBA tries to find archive files using the inquire function of Tivoli Data Protection for R/3. Ensure that the SAPDBA profile initTSM.sap is properly customized (see 6.1.2.2, Customizing profile on page 120). When SAPDBA comes back to the full restore and recovery screen, the restore/recovery status has changed to allowed. With the selection of the Restore and recover function (D key), the restore/recovery procedure can be started.
SAPDBA: This is the last chance to cancel the restore process of all
        tablespaces of your database.
        If you continue now all your tablespace files will be deleted
        and overwritten by an older generation.
        A recovery up to now will be applied to your database after restore.
        The database will be opened.
        If the restore or recovery process fails for some reason your
        database will probably be in an inconsistent or incomplete state.
Do you want to continue? (y/n) [n] ==> y
A warning message pops up, because all tablespace files of the current database (if there are any) will be deleted (overwritten). That might be a problem if there is still an existing R/3 database. In case of a disaster, there are no data files, so the message above will not pop up. All the steps that now follow are similar to the steps in 6.4.2.1, Recovery of one tablespace on page 134. Only the database check that determines which files are missing is dropped, because all data files, all control files, and all necessary redo log files will be restored and afterwards recovered.
6.5.1 General
There are several ways to improve the performance of backup and restore operations within the R/3 system landscape. First, consider which hardware environment will be chosen for the backup/restore scenario. Besides the machines for the R/3 database server, the storage manager, and the storage media, the connection between these systems plays an important role. For large amounts of data, often more than 100 GB, a thin network like Ethernet or Token Ring is not enough and becomes a bottleneck.
Remember:
The overall speed of backup/restore runs is ultimately determined by the weakest link within the backup/restore solution.
Besides the hardware decisions, which have to be made first, there are several ways to adapt the backup software and hardware settings to reach better performance. We distinguish among:
- Network settings
- Tivoli Storage Manager settings
- Tivoli Data Protection for R/3 settings
Performance tuning can be a complex undertaking, because a lot of software and hardware components are involved, and in some cases it can be difficult to find out where the problem is. The following sections will show the most important tuning possibilities. They are not a general solution that guarantees higher backup/restore performance for any system landscape. Rather, we give some suggestions and recommendations: what is important in any case, and what can be tried to reach a higher backup/restore data transfer rate from the R/3 database server to the backup media and the other way around.
Other possibilities to minimize downtimes of R/3 production systems are:
- Warm standby scenarios
- Split mirror backup scenarios
These two alternative solutions are discussed in Chapter 11, R/3 warm standby using Tivoli Storage Manager on page 283, and in Chapter 10, Split mirror implementation in R/3 on page 253.
The no command performs no range checking; therefore, it accepts all values for its variables. If used incorrectly, the command can cause the system to become inoperable. Table 15 shows the appropriate network attributes with the recommended values.
Table 15. Tuning of network settings

Attribute  Value    Description
rfc1323    1        Enables TCP enhancements as specified by RFC 1323,
                    TCP Extensions for High Performance. The default is 0.
                    A value of 1 specifies that all TCP connections will
                    attempt to negotiate the RFC enhancements.
sb_max     1310720  Specifies the maximum buffer size allowed for a socket.
                    The default is 65536 bytes. For good performance, the
                    sb_max value corresponds to the TCPWindowsize setting in
                    the Tivoli Storage Manager configuration file dsm.sys
                    and should be twice the TCPWindowsize (see also Table 17
                    on page 150).
To adapt these values, the root user can invoke the following commands on the appropriate machine:
# no -o rfc1323=1 # no -o sb_max=1310720
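The relationship between sb_max and TCPWindowsize described in Table 15 can be checked with a quick calculation. TCPWindowsize is given in kilobytes, sb_max in bytes; the 640 KB window is the value recommended in Table 17:

```python
# Cross-check: sb_max (bytes) should be twice the TCPWindowsize.
# TCPWindowsize is specified in kilobytes in the Tivoli Storage Manager
# client options; 640 KB is the value recommended in Table 17.
tcp_window_kb = 640
tcp_window_bytes = tcp_window_kb * 1024        # 655360 bytes
sb_max = 2 * tcp_window_bytes                  # 1310720 bytes

print(sb_max)  # 1310720, matching the value recommended for 'no -o sb_max'
```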
If an SP switch is used, the following two values should be adapted as shown in Table 16.
Table 16. Tuning of SP switch buffer pools

Attribute  Value     Description
rpoolsize  10485760  The receive pool is a buffer pool for incoming data.
                     The size is specified in bytes.
spoolsize  10485760  The send pool is a buffer pool for outgoing data.
                     The size is specified in bytes.
The buffer pool settings can be changed using the chgcss command. After the changes, a reboot of the node is required. Further detailed information can be found at:
http://www.rs6000.ibm.com/support/sp/perf
Table 17 shows the corresponding Tivoli Storage Manager configuration file attributes with the recommended values.
Table 17. Tuning Tivoli Storage Manager configuration file attributes

Attribute      Value  Description
TCPBuffsize    32     Specifies the size, in kilobytes, of the buffer used
                      for TCP/IP send requests. This option affects whether
                      Tivoli Storage Manager sends the data directly from the
                      session buffer or copies the data to the TCP buffer. A
                      32 KB buffer size forces Tivoli Storage Manager to copy
                      data to its communication buffer and flush the buffer
                      when it fills.
TCPNODelay     YES    Specifies whether the server should send small amounts
                      of data or allow TCP/IP to buffer the data. Disallowing
                      buffering may improve throughput, but more packets are
                      sent over the network.
TCPWindowsize  640    Specifies the size, in kilobytes, used for the TCP/IP
                      sliding window for the client node. This is the size of
                      the buffer used when sending or receiving data. The
                      range of values is 0 to 2048.
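These options belong in the Tivoli Storage Manager client options file (dsm.sys on UNIX). The following stanza is an illustrative sketch only; the server name and the surrounding stanza layout are examples, not values from our lab setup:

```
* dsm.sys - client options (illustrative server stanza)
SErvername      TSMSERVER        * example server name
   TCPBuffsize     32            * KB, see Table 17
   TCPNODelay      YES
   TCPWindowsize   640           * KB; set sb_max to twice this value in bytes
```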
Parallelism can be applied by:
- Reading data in parallel from multiple disks (parallel sessions) and writing them to multiple tapes (one per session).
- Reading data from multiple disks and folding them into one data stream (multiplexing) to be written to tape. Thus, a high speed tape drive like the 3590 can be used very efficiently by matching the disk read with the tape write performance (provided there is sufficient network bandwidth).
- Transmitting data over parallel network connections (on a session basis).
- Writing data in parallel to multiple tapes; this can also be done by sending them to multiple Tivoli Storage Manager servers.
Tivoli Data Protection for R/3 allows you to utilize several simultaneously running sessions to drastically increase the overall data transfer rate to a Tivoli Storage Manager server, depending on the MAX_SESSIONS profile keyword (or MAX_ARCH_SESSIONS, MAX_BACK_SESSIONS, MAX_RESTORE_SESSIONS).
Depending on the R/3 database utility program, the management class specified in the profile keyword BRBACKUPMGTCLASS will be used when running SAPDBA or BRBACKUP, and the profile keyword BRARCHIVEMGTCLASS in the case of BRARCHIVE.
When running BRBACKUP, the data will usually be written directly to tape drives on the Tivoli Storage Manager server. This means the parameter specified in the MAX_SESSIONS keyword matches the number of tape drives used simultaneously. Here are some recommendations that deserve attention on the Tivoli Storage Manager server side:
- No activation of collocation in the (tape) storage pool
- The proper number of tape drives should be available when BRBACKUP is running
When running BRARCHIVE, either disk or tape storage pools can be utilized. The size of the offline redo log files is much smaller compared to the size of the database tablespace files being backed up with BRBACKUP. The advantage of a disk storage pool is due to its random access nature:
- Several sessions of one BRARCHIVE run can utilize one or two independent disk storage pools
- Several sessions of BRARCHIVE runs on several databases can simultaneously utilize one or two independent disk storage pools
When using tape as primary pools, the same considerations as for the tape setup with BRBACKUP apply. For safety reasons, the Tivoli Data Protection for R/3 profile keyword REDOLOG_COPIES should be used with a minimum value of 2, and the specification of two management classes in BRARCHIVEMGTCLASS is highly recommended.
If a backup copy of an offline redo log file is lost, Tivoli Data Protection for R/3 will, during a restore, automatically get the data to be restored from the storage pool specified by the second management class, provided that this storage pool is separate from the storage pool that encountered the read problem.
6.5.4.2 Compression Tivoli Data Protection for R/3 is able to do a null block compression before sending data over the network to the Tivoli Storage Manager server. The Tivoli Data Protection for R/3 profile keyword RL_COMPRESSION has to be set to YES (default is NO). This compression has been designed especially for database files, since they usually contain large portions of null blocks. The compression is very simple and very fast and requires little CPU power. In contrast, the Tivoli Storage Manager compression is much more sophisticated, but also requires many more CPU resources than the simple null block compression. We recommend that you use the null block compression instead of the Tivoli Storage Manager compression.
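Conceptually, null block compression amounts to run-length encoding of all-zero blocks. The following Python sketch illustrates the idea only; the block size, the output format, and the function name are illustrative assumptions, not the actual Tivoli Data Protection for R/3 implementation:

```python
def null_block_compress(data: bytes, block_size: int = 512) -> list:
    """Illustrative run-length encoding of all-zero blocks.

    Returns a list of (is_null_run, payload) tuples: a run of null blocks
    stores only the count of zero blocks, other blocks are kept verbatim.
    This is a sketch of the idea, not the real TDP for R/3 format.
    """
    out = []
    null_run = 0
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        if block.count(0) == len(block):   # block consists only of null bytes
            null_run += 1
        else:
            if null_run:
                out.append((True, null_run))
                null_run = 0
            out.append((False, block))
    if null_run:
        out.append((True, null_run))
    return out

# A database file with a large empty region collapses to three entries:
# a mixed leading block, a run of 7 null blocks, and a mixed trailing block.
data = b"header" + b"\x00" * 4096 + b"payload"
compressed = null_block_compress(data)
```

The scheme is cheap because it only counts zero bytes per block, which is why it needs far less CPU than the general-purpose Tivoli Storage Manager compression.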
Chapter 6. Tivoli Data Protection for R/3 implementation
6.5.4.3 Multiplexing
In order to increase the data rate to a Tivoli Storage Manager server, Tivoli Data Protection for R/3 can multiplex several files (up to eight) into a joint data stream. To do this, the keyword MULTIPLEXING in the Tivoli Data Protection for R/3 profile has to be set (default is 1).
For example, with MULTIPLEXING 4, each session of Tivoli Data Protection for R/3 reads from four files in parallel and stores the data in a special multiplexed file on the Tivoli Storage Manager server. This multiplexed file consists of a mixture of blocks from all four files. If the average disk read rate is 4.5 MB/s from each disk, the database is backed up with a data rate of 4.5 * 4 = 18 MB/s. If each file is compressible by a factor of two and compression is activated, 4.5 * 4 / 2 = 9 MB/s can be transferred to one Tivoli Storage Manager server. With null block compression, but without multiplexing, we could transfer only 4.5 / 2 = 2.25 MB/s to one Tivoli Storage Manager server. (This example assumes that the Tivoli Storage Manager server and tape drive are fast enough to cope with this data rate.) Multiplexing reduces the number of necessary Tivoli Storage Manager servers (and tape drives). The optimal value for MULTIPLEXING depends strongly on the hardware environment, for example, a fast network (FDDI, Fast Ethernet) and fast tape drives within the tape (media) library, and on the compressibility of the database files. Good values are expected in the multiplexing range from 1 to 4. The example above shows some of the typical dependencies between the total backup rate, the disk transfer rate, the compression ratio, and the Tivoli Storage Manager data rate.
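The arithmetic of the example can be laid out as a small calculation; the rates and factors are the assumed values from the text above:

```python
# Throughput arithmetic for the multiplexing example, assuming a
# 4.5 MB/s read rate per disk and files compressible by a factor of 2.
disk_rate_mb_s = 4.5
multiplexing = 4
compression_factor = 2

# Aggregate read rate from disk with 4-way multiplexing:
raw_rate = disk_rate_mb_s * multiplexing                        # 18.0 MB/s
# Rate on the network after null block compression halves the data:
net_rate = disk_rate_mb_s * multiplexing / compression_factor   # 9.0 MB/s
# Compression alone, without multiplexing:
no_mux_rate = disk_rate_mb_s / compression_factor               # 2.25 MB/s

print(raw_rate, net_rate, no_mux_rate)  # 18.0 9.0 2.25
```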
6.5.4.4 DISKBUFFSIZE
In some environments, disk access at certain block sizes is much faster than at other block sizes. The keyword DISKBUFFSIZE in the Tivoli Data Protection for R/3 profile allows the user to specify a block size for disk I/O in the range from 4096 to 262144. In most cases the default value (131072) can be expected to be close to optimal.
6.5.4.5 ADSMBUFFSIZE
Similarly, the keyword ADSMBUFFSIZE in the Tivoli Data Protection for R/3 profile allows you to specify a block size in the range from 4096 to 262144 for the buffers that are passed to the Tivoli Storage Manager API functions. However, the API functions do not send a large buffer as a whole, but in several smaller pieces over the network, so the default value (131072) for ADSMBUFFSIZE usually does not need to be changed. This parameter is effective only when null block compression is used; in all other cases it has no effect.
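Taken together, the profile keywords discussed in this section might appear in the Tivoli Data Protection for R/3 profile as sketched below. The values shown are illustrative, combining the defaults and recommendations mentioned in the text; the management class names are invented examples:

```
# Tivoli Data Protection for R/3 profile (initTSM.utl) - illustrative sketch
MAX_SESSIONS        2               # parallel sessions to the server
REDOLOG_COPIES      2               # recommended minimum for safety
BRBACKUPMGTCLASS    MDB             # class for BRBACKUP (example name)
BRARCHIVEMGTCLASS   MLOG1 MLOG2     # two classes recommended for redo logs
RL_COMPRESSION      YES             # null block compression (default NO)
MULTIPLEXING        4               # 1..8 files per session (default 1)
DISKBUFFSIZE        131072          # disk I/O block size, default usually fine
ADSMBUFFSIZE        131072          # API buffer size, default usually fine
```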
7.1.1 Prerequisites
The following prerequisites must be fulfilled before the Administration Tools can be invoked. 1. A successful installation of Tivoli Data Protection for R/3. See also 6.1, Setup of Tivoli Data Protection for R/3 on page 117.
Note:
The version of Tivoli Data Protection for R/3 must be 2.4 or higher. 2. Administration Tools require a JAVA Runtime Environment (JRE).
Note:
The version requirement for the JRE is 1.1.6 up to 1.1.8. For AIX 4.3, PTF 8 for JDK 1.1.6 is required as well. The version of the installed JAVA software can be checked with the following command from a command line (this is the same for UNIX and Windows systems):
# java -version
Not all vendors provide the JRE separately. In that case, you can install the JAVA Development Kit (JDK), because the JRE is part of the JDK. The JDK for AIX can be downloaded from:
http://www.software.ibm.com/java/jdk/download
For SOLARIS and Windows NT the necessary JAVA software can be found at:
http://www.javasoft.com/products
For other operating systems, contact the operating system vendor for JRE or JDK availability. 3. On the client machine, a fully JAVA capable Web browser is required.
Note:
The chosen Web browser must support JDK Version 1.1.6 or higher and must be Remote Method Invocation (RMI) capable. You can use Netscape Navigator Version 4.05 or higher. A native Microsoft Internet Explorer does not support RMI, but an RMI patch can solve this problem. The patch can be downloaded at:
http://www.alphaworks.ibm.com/tech/RMI
4. For UNIX systems, an X Window System is required.
After all prerequisites have been verified, you can start the installation process, which includes installing software for the Administration Tools server and for the Administration Tools slave server, by using the JAVA based installer on the R/3 database server machine. To verify the installation, simply start the Administration Tools.
7.1.2 Installation
In this section, we discuss the installation and customization of the Administration Tools. The installation process is divided into two parts: one is the Administration Tools server installation, the other is the Administration Tools slave server installation. Before the installation starts, keep these things in mind:
1. The Administration Tools server can, in principle, be installed on any machine in an R/3 system landscape.
2. The Administration Tools slave server must be installed on the R/3 database server machine.
3. We recommend that you first install the Administration Tools server and then all the necessary Administration Tools slave servers, because the Administration Tools server is the central instance within the Administration Tools scenario, and all slave server installations depend on this instance.
7.1.2.1 Installation of Administration Tools server
Corresponding to our lab environment, the Administration Tools server will be installed on a Windows NT 4.0 PC running a JAVA 1.1.8 environment. To assist the user in the installation of the Administration Tools package, a setup assistant called the installer is provided, which guides you through the installation process.
Note:
It is necessary to have system administrator privileges to install the Administration Tools properly.
The Administration Tools are delivered as a JAVA class file named install.class, and the installation must be started from a temporary directory.
Note:
There is no need to set the environment variable CLASSPATH. However, if this variable is set in the system environment, the current directory, where the file install.class resides, must be included in it.
C:\Temp>java install
Only the name of the class file has to be passed. If only the JRE is installed, the corresponding JRE command has to be used instead. Now the installer becomes active, and we recommend that you follow the instructions on the screen. When you reach the specify ports screen, as shown in Figure 54, it is important to:
- check for free ports in the services file (in most cases, you can use the values given)
- remember the specified ports for later use
After this, specify or correct the hostname of the Administration Tools server machine as shown in Figure 55.
Remember:
Communication between the Administration Tools server and the clients (a Web browser running a JAVA applet) is done with JAVA RMI. An RMI communication requires a clear assignment between the IP address and the alias name of both communication partners involved. Each must be capable of resolving the IP address of the other to an alias name and vice versa. If you use a DHCP service on the Administration Tools server machine, it is possible that this machine has two names, which is a typical DHCP problem. A unique hostname of the Administration Tools server is necessary for the later communication between the clients and this server (the server configuration cannot be changed later). Thus, the recommendation is to use a static IP address instead of a temporary one generated by DHCP. If DHCP is used, be aware that in case of a reboot of the Administration Tools server machine, this machine will get a new IP address and the Administration Tools will not work.
After the setup process has finished successfully, the system impact is:
- On UNIX, a new entry in /etc/inittab to start the Administration Tools automatically (Admt:23:once:+sh Install directory/sadmt.sh). The expression Install directory will be replaced with the real directory name specified during the setup process.
- On Windows NT, a new NT service. For this, setup creates an entry in the Windows NT registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services with the name Admt.
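The forward and reverse resolution requirement can be checked with a small script, run on each machine with the peer's hostname; localhost below is only a stand-in for the real Administration Tools server name:

```python
import socket

def check_resolution(hostname: str) -> bool:
    """Check forward and reverse name resolution, as RMI requires."""
    try:
        ip = socket.gethostbyname(hostname)       # forward: name -> IP address
        name, _, _ = socket.gethostbyaddr(ip)     # reverse: IP address -> name
    except socket.error as err:
        print(f"Resolution problem for {hostname}: {err}")
        return False
    print(f"{hostname} -> {ip} -> {name}")
    return True

check_resolution("localhost")   # substitute the Administration Tools server host
```

If the script reports a resolution problem for the peer's name, fix the name service (or hosts file) entries on both machines before installing.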
7.1.2.2 Installation of Administration Tools slave server In our lab environment, the Administration Tools slave server is installed on the R/3 database server machine (see 4.4, Lab hardware and software overview on page 97), which runs a Java 1.1.6 environment. Most of the installation steps are nearly the same as those described above for the Administration Tools server.
After the start of the installer, it is only necessary to select the desired install option, as shown in Figure 56.
After this finishes, the installer displays the next screen as seen in Figure 57.
Fill out the input fields. In the first field, enter the hostname of the Administration Tools server that you installed earlier. The second field gives a default value, which is the same as the one shown in Figure 54; if the value was not changed during the Administration Tools server setup, it can be used here too, otherwise it has to be adapted. The third field expects the system identifier of the R/3 system on that machine. If more than one R/3 system is installed on this machine, the additional system identifiers can be specified, separated by spaces. To use the configurator of the Administration Tools, you must specify the paths where the configuration files are located. Figure 58 shows this scenario.
All the necessary inputs are now complete, and the guided installation process continues. After the setup process has finished successfully, the system impact is: On UNIX variants, a new entry in /etc/inittab to start the Administration Tools automatically (bksl:23:once:sh Install directory/sbkitSlave.sh), where the expression Install directory is replaced with the real directory name specified during the setup process, in our case the default directory /usr/lpp/BkiTslave. On Windows NT, a new NT service; setup creates an entry in the Windows NT registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services with the name bksl. For both the Administration Tools server and the Administration Tools slave server, you can uninstall the software from the machine: on UNIX systems, start the script Uninstall.sh; on Windows NT systems, run the script Uninstall.cmd. We recommend that you uninstall only with system administrator privileges, so that the uninstaller can remove the entries either from the inittab on UNIX or from the registry on Windows NT systems.
7.1.2.3 Verifying the installation There is no dedicated verification procedure provided with the Administration Tools. To verify and test the installation, we recommend that you simply start the Administration Tools.
For the clients, we will use Netscape 4.6. The Administration Tools running in Netscape appear as seen in Figure 59.
In the Go to box, enter the hostname of the Administration Tools server followed by a colon and the port for the communication (see Figure 54 on page 156). Every time a user starts the Administration Tools, a Logon panel pops up for identification. The first time, no user profiles exist yet; therefore, use the initial account with the default user ID ADMIN and the password admin (note the case).
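As an illustration, assuming the Administration Tools server has the hostname admtools and the port 5126 was specified during setup (both values are hypothetical and must match your own installation), the entry in the Go to box would look like this:

```
admtools:5126
```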
Note:
After you first connect to the Administration Tools with the initial user ID, we recommend that you change the password of the ADMIN user immediately (see 7.2, Using the Administration Tools User Administration on page 161).
Several functions, such as Performance Monitor, System Configuration, and User Administration, are discussed in the following sections.
The left side of the window shows all existing user profiles. A user profile is characterized by the user name and the user ID. To see the properties related to a profile, select the appropriate user profile (mouse click or keyboard). Then the right side of the window shows all of the settings corresponding to this user. You can perform user dependent functions within the user administration window, such as: creating user profiles, deleting user profiles, and changing user profiles.
User accounts in this context are isolated from the operating system user accounts. To make all the new user accounts permanent, leave this window by pressing the OK button.
The user name and user ID combination must be unique. Every change request for a user name or user ID is checked against the existing combinations; if such a combination already exists, the change request is rejected. Typically, changes to user permissions have to be made when new Tivoli Data Protection for R/3 instances are added to the node collection at the right side of the User Permissions panel (see Figure 61). By default, the existing user profiles do not have monitoring or configuration rights for these new instances, so the appropriate settings have to be made for the new nodes and existing user profiles. See also 7.2.1, Create user profiles on page 162 for further explanations.
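The uniqueness rule for the user name and user ID combination can be sketched as follows; the profile data and the helper function are hypothetical and only illustrate how a duplicate combination causes the change request to be rejected:

```python
def apply_change(profiles, old_key, new_name, new_id):
    """Apply a change request to a user profile.

    profiles: set of (user_name, user_id) tuples; old_key is the profile
    being changed.  Returns the updated set, or raises ValueError if the
    new combination already exists (the change request is rejected).
    """
    new_key = (new_name, new_id)
    if new_key != old_key and new_key in profiles:
        raise ValueError("user name/user ID combination already exists")
    updated = set(profiles)
    updated.discard(old_key)
    updated.add(new_key)
    return updated

profiles = {("Admin", "ADMIN"), ("Backup Operator", "BCKOP")}
# Changing the ID is accepted because the new combination is unique.
profiles = apply_change(profiles, ("Backup Operator", "BCKOP"),
                        "Backup Operator", "BCKOP2")
```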
7.3.1 General
First of all, here are some words about the philosophy of the Administration Tools configurator. The configurator was designed as a tool to change, edit, and maintain a Tivoli Data Protection for R/3 configuration file set on R/3 database servers. It is not possible to create new configuration files; the configurator needs an already existing set of all the Tivoli Data Protection for R/3 configuration files: the Tivoli Data Protection for R/3 profile (initTSM.utl), the SAPDBA profile (initTSM.sap), and the Tivoli Storage Manager configuration files (dsm.opt, dsm.sys). This set of files must be loaded into the configurator before any modifications can be done. The SAPDBA profile is provided by the R/3 installation, and the Tivoli Storage Manager configuration files are provided by the Tivoli Storage Manager installation, so there is no need to create them. The Tivoli Data Protection for R/3 installation package also delivers a sample profile named initSID.utl. This file should be used as a basis for further changes, and so it was copied to the /oracle/TSM/dbs directory as described in 6.1.2.1, System related installation steps on page 118. To copy this Tivoli Data Protection for R/3 profile to another R/3 database server, see 7.3.4, Copy configuration on page 173. The Administration Tools configurator provides the following functions: edit configuration, save configuration, copy configuration, and show configuration history.
A mouse click on the System Configuration box starts the configurator as shown in Figure 62.
The left side of the screen contains all available R/3 database servers, that is, servers with a successful Administration Tools slave server installation (see 7.1.2.2, Installation of Administration Tools slave server on page 157). This list can be refreshed at any time by pressing the Refresh List button. Before the real work can start, the configuration files (Tivoli Data Protection for R/3 profile, SAPDBA profile, and Tivoli Storage Manager configuration files) of the selected system must be loaded. This is done by selecting the desired R/3 database server with the mouse pointer, pressing the mouse button, and selecting the function load configuration files from the pop-up window. A panel opens as shown in Figure 63.
By means of the installation settings of the Administration Tools slave server (see Figure 58 on page 159), the configurator knows the path where the configuration files are found. An appropriate file set (one SAPDBA profile, one Tivoli Data Protection for R/3 profile, and the two Tivoli Storage Manager configuration files dsm.opt and dsm.sys) must be selected (mouse or keyboard) and loaded into the configurator. Now, the configuration handling buttons on the right side of the start panel become active. These buttons allow you to edit and save the configuration files, load archived configurations from the Configuration History, or copy a Tivoli Data Protection for R/3 configuration to another server.
Note:
Observe that the colored bar at the bottom of each panel gives a short message about which action to take. Where applicable, this bar also confirms the success or failure of the chosen action.
The left half of this panel, the Configuration Display, shows a tree structure with all the parameters logically arranged in groups, each represented by a graphic symbol (folder). You can navigate through this tree structure either with a single mouse click on the bullet in front of a folder, with a double mouse click on the folder itself, or by pressing the Enter key after having positioned the cursor on a parameter. There are three different types of parameters, each contained in a corresponding folder: SAPDBA related parameters that are relevant to the Tivoli Data Protection for R/3 configuration; Tivoli Data Protection for R/3 related parameters; and Tivoli Storage Manager related parameters that are relevant to the Tivoli Data Protection for R/3 configuration. The right half of the panel in Figure 64, with the headline Parameter Editor, is reserved for editing any parameter selected in the configuration display part of the panel. The next few paragraphs discuss the various editing possibilities provided by the Edit Configuration function. These are: editing a single configuration parameter, editing a complete server parameter list, including new Tivoli Data Protection for R/3 parameters, and creating a new Tivoli Storage Manager server entry.
7.3.2.1 Editing a single configuration parameter You can select a parameter for editing by double clicking on it or by pressing the Enter key after having positioned the cursor on this parameter. An editor panel pops up where you can edit the corresponding parameter value(s). This editor provides you with information about value type(s), value range, and, if appropriate, list(s) of valid values. Additionally, individual help is available for every selected parameter. Figure 65 shows an example of how to change the initTSM.utl file parameter REDOLOG_COPIES to the recommended value 2.
Figure 65. Edit a single Tivoli Data Protection for R/3 profile parameter
Before accepting the changes, the editor performs parameter-related consistency checks. In our case, we have not yet defined a second BRARCHIVE management class, so a message pops up, as seen in Figure 66, to inform you that a problem has been detected.
Now, you can add a second Tivoli Storage Manager management class to the initTSM.utl file parameter BRARCHIVEMGTCLASS which can be found in the subfolder Parameter List for Server: palana1. The steps to change this are similar to the changes made before. The result is shown in Figure 67.
Figure 67. Successfully changed Tivoli Data Protection for R/3 profile parameters
In general, all parameters are displayed within the parameter tree. Those which are not currently specified in one of the configuration files (NULL values) are shown with a gray background color, like ADSMNODE or USE_AT in Figure 67. Mandatory parameters are indicated with an appropriate icon. When you have changed a parameter value, another appropriate icon will indicate this (see also Figure 67).
Note:
To find out which parameters in each of the three subsections (folders) can be edited, press the Help button. Help provides a detailed description of all parameters.
To check whether the current configuration is consistent, press the Check Configuration button. A panel will pop up showing all the results of this consistency check.
7.3.2.2 Editing a complete server parameter list The Tivoli Data Protection for R/3 configurator enables you to create, update, or delete complete Tivoli Data Protection for R/3 related server parameter lists.
You can update or delete a server list by selecting the corresponding folder Parameter List for Server: palana1 with a double click or by pressing the Enter key. After the editor has popped up (see Figure 68), you can modify all of the shown parameter values or delete the server list.
The editor provides information about value type(s), value range, and, if appropriate, list(s) of valid values. Additionally, a help panel is available. Before accepting the changes, the editor performs parameter-related consistency checks; messages pop up if problems are detected. To create a new server list, select the New ServerParam. List entry in the folder Additional, configurable Parameters. The panel looks like the one shown in Figure 68; the only difference is that default values are given in the input boxes. As an important feature, this editor enables you to specify or change the position of a server (Server Position in Config File, see Figure 68) within the array of server parameter lists. This allows you to set the connection order in which the defined Tivoli Storage Manager servers will be contacted by Tivoli Data Protection for R/3.
7.3.2.3 Including new Tivoli Data Protection for R/3 parameters If Tivoli Data Protection for R/3 provides new parameters, for example, in the case of a functional expansion, you can add these new Tivoli Data Protection for R/3 parameters to the configuration tree.
These parameters are, of course, not checked by this configurator version. We strongly recommend that you use this feature only if new Tivoli Data Protection for R/3 parameters have actually been introduced, for example, by a functional expansion that the current configurator version does not yet support.
To create a new Tivoli Data Protection for R/3 parameter, select the New (Standalone) Parameter entry in the folder Additional, configurable Parameters. An editor pops up, where you can specify a parameter name/value pair. All new Tivoli Data Protection for R/3 parameters are arranged in the Tivoli Data Protection for R/3 tree folder Unrecognized Parameters.
7.3.2.4 Creating a new Tivoli Storage Manager server entry Sometimes, a new Tivoli Storage Manager server has to be added to the Tivoli Storage Manager related configuration file dsm.sys. Figure 69 shows how this can be done.
In the tree, the Tivoli Storage Manager File folder contains another folder named New Server Parameter List. By double clicking on the Tivoli Storage Manager Server statement, an editor opens, which enables the user to create a new Tivoli Storage Manager server entry in the dsm.sys file. Within this editor, you can specify the parameters SERVERNAME and TCPSERVERADDRESS. All other parameters related to this Tivoli Storage Manager server can be specified later by using the procedure described in 7.3.2.2, Editing a complete server parameter list on page 169.
Note:
When the save panel is called up, the consistency check shows whether or not the configuration to be saved is error free. The save panel allows you to store a newly edited configuration into the corresponding configuration files (where it becomes active). You can save a configuration as is, or change it and then save it. Figure 70 shows an example.
The frames at the left side of the panel show (from top to bottom) the currently loaded configuration files for R/3, Tivoli Data Protection for R/3, and Tivoli Storage Manager with their path definitions. The individual change destination buttons of these frames allow you to rename the files and/or specify new locations for them. After that, you can specify whether to save the old (original) configuration, for example, for later use, into the Configuration History. If specified (the default setting), provide a brief description of the old configuration. This description serves as a reminder when you look it up later in the history file (with the Show Configuration History function described in 7.3.5, Show configuration history on page 173). If you do not want to save the original configuration, just deselect the Put old configuration to history check box. If it is deselected, the check box text turns red to make you aware that the old configuration will not be saved. Finally, the R/3 database server system ID for which the configuration will be saved has to be selected before the Save button can be pressed. The newly edited configuration is
placed into the profiles/configuration files, whereas the old one is placed into the history file (if the Put old configuration to history check box is selected).
7.3.4 Copy configuration On the left side, all available R/3 database servers are listed. Select the target R/3 database server that should receive a given Tivoli Data Protection for R/3 profile. The currently loaded Tivoli Data Protection for R/3 profile, which is the source, is shown in the file name frame (at the upper right of the panel). Enter a destination directory on the target R/3 database server for the Tivoli Data Protection for R/3 profile, the system ID (SID), and the backup ID prefix of the target system. The system ID and the backup ID prefix will be set in the newly created Tivoli Data Protection for R/3 profile on the target system where applicable. Click Copy (or move the cursor to it and press Enter) to finish the operation.
7.3.5 Show configuration history The Show Configuration History function gives you access to previously saved sets of these files. The advantage is clear: the user gets a properly working set of the necessary configuration files in a very short time. The Administration Tools server is used as storage for these old, historical configuration files: there is a subdirectory called history within the Administration Tools server installation directory. Every system that holds a set of old configuration files gets its own unique directory entry within the history subdirectory; in our example, this is the directory TSM_capeverde. All the old configuration files concerning this system are put in this directory. Every file is saved under a unique name, a timestamp of the form YYYY_MM_DD_hh_mm_ss followed by the real file name (for example, 1999_11_10_14_20_52-initTSM.utl). Figure 72 shows the Configuration History panel after it is started.
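The naming scheme for the history files can be sketched in a few lines of Python; the helper name is our own, only the timestamp-plus-filename format comes from the text above:

```python
from datetime import datetime

def history_name(filename, when):
    """Build the unique history name: a YYYY_MM_DD_hh_mm_ss timestamp
    followed by the real file name."""
    return when.strftime("%Y_%m_%d_%H_%M_%S") + "-" + filename

print(history_name("initTSM.utl", datetime(1999, 11, 10, 14, 20, 52)))
# 1999_11_10_14_20_52-initTSM.utl
```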
The window is divided into three parts. On the upper left, you can see all available configurations stored on the Administration Tools server. With the mouse or keyboard, one configuration can be selected. All corresponding files are then shown in the upper right window. To get some information about the selected configuration, look at the configuration description in the lower half of the window. After selection, such a configuration can be loaded into the System Configurator, either for additional changes or to set the selected system back to that level (see 7.3.3, Save configuration on page 171). If a historical configuration file set is no longer needed, it can also be deleted from the Administration Tools server.
7.4 The Administration Tools performance monitor The performance monitor can be used to observe a current backup or restore operation performed by Tivoli Data Protection for R/3, or to review backup/restore operations that have been done in the past. Above all, the review function is important for analyzing R/3 database backup/restore performance bottlenecks.
7.4.1 Prerequisite
As described in 7.1.1, Prerequisites on page 153, the use of the Administration Tools performance monitor requires Tivoli Data Protection for R/3 version 2.4 or higher. During a backup/restore operation, the performance monitor needs a lot of data to make the running processes visible in a graphical manner, for example: how many sessions are used, what the multiplex level of the data stream is, how many bytes are already saved on the Tivoli Storage Manager server, and so on. This performance data is collected by Tivoli Data Protection for R/3. Therefore, Tivoli Data Protection for R/3 and the performance monitor need a communication connection over ports during the operational phase. On one side, a port definition was made during the Administration Tools server setup (see Figure 54 on page 156). On the other side, Tivoli Data Protection for R/3 also needs an entry point (port) for communication with the Administration Tools server. For this reason, a new Tivoli Data Protection for R/3 profile keyword is defined, named PERF_MONITOR. This keyword expects two parameters: the IP address of the Administration Tools server and a port number. The port number must be the same as the number specified during the Administration Tools server setup. In our case, the Tivoli Data Protection for R/3 profile was complemented with an entry as shown in Figure 73.
Figure 73. The Tivoli Data Protection for R/3 profile, containing (among others) the keywords PERF_MONITOR, BACKUPIDPREFIX, MAX_VERSIONS, MAX_SESSIONS, BACKAGENT, CONFIG_FILE, SERVER, SESSIONS, BRBACKUPMGTCLASS, and BRARCHIVEMGTCLASS
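As a sketch, such a profile entry might look as follows; the IP address and port number are hypothetical values for illustration and must match your own Administration Tools server setup:

```
PERF_MONITOR 9.1.150.99 5126
```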
The current system window always shows the first or currently running server. To see any additional servers, click the downward pointing arrow (at the right). Your first action should be to select (click on) a server in this window, unless a server is currently running and you are interested in its performance (such a server is automatically selected and shows its data on the screen). The operation type frame (at the far right) shows the server's activity as backup, restore, or idle. If the server is idle, you can review past sessions by pressing the review runs button. For every running session (actual or review), a collection of session information appears in the upper part of the performance monitor window, for example: starting time of the operation, elapsed time since start, remaining time to completion, total number of files to be transferred, number of processed files (already saved or restored, respectively), number of active sessions (active Tivoli Data Protection for R/3 agents), whether Tivoli Data Protection for R/3 compression is used, total amount of data to be handled, and the amount of data already saved. On the left side, the processed files window shows the names of the files that have already been saved up to this instant. The upper progress bar shows how much of the overall data volume has been transferred so far, in percent of the total number of bytes. The lower progress bar shows the average transfer rate, meaning the arithmetic mean over all agents involved in the transfer (total data volume divided by the elapsed time). A scale button (MB/s, GB/h) allows you to set the measuring unit to megabytes per second or gigabytes per hour. This allows you to assess
how long the save or restore operation will still take to complete. The scale button is a toggle switch that changes to the other unit each time it is activated. The error and warning messages window shows all messages (for example, Tivoli Storage Manager messages) that may have appeared during a backup/restore run. At the lower part of the window is an area with the values of every running session. This area shows, for each active Tivoli Data Protection for R/3 agent, the names of the files being transferred and the progress of each save or restore operation, either in percent of the total file size or in number of bytes transferred up to that point; the default is percent. By activating the show progress in percent / show progress as total toggle switch, you can determine whether the total number of bytes transferred so far or the percentage completed should be shown. The switch applies to all running Tivoli Data Protection for R/3 agents; its label always indicates the next presentation style (after it is clicked). The data transfer rate of each agent is shown next to it. This rate always applies to the currently transferred set of files (it may differ from the total or average). The clear display button allows you to wipe all data from the various display frames without, however, leaving the panel.
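The relationship between the two units of the scale button can be made explicit with a small conversion sketch; the assumption 1 GB = 1024 MB is ours, since the text does not state the convention:

```python
def mbs_to_gbh(rate_mbs):
    """Convert a transfer rate from MB/s to GB/h, assuming 1 GB = 1024 MB."""
    return rate_mbs * 3600 / 1024

print(round(mbs_to_gbh(5.0), 2))  # 5 MB/s corresponds to about 17.58 GB/h
```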
Select the proper system to monitor within the performance monitor panel at the beginning of any activity (see current system box).
As seen in the figure above, you can view a lot of information about the currently running backup, such as the start time of the backup, the estimated end time, the successfully processed files, the amount of data to be backed up, and so on. The entries represented by session1 and session2 give information about the Tivoli Data Protection for R/3 agents: which tablespace files are being handled, how big a file is, and how much data has already been read and sent to the Tivoli Storage Manager server. As seen in Figure 75, each of the Tivoli Data Protection for R/3 agents reads two data files in parallel. This shows that Tivoli Data Protection for R/3 was configured with multiplex level 2 (in the Tivoli Data Protection for R/3 profile), which means two data files are multiplexed into one data stream to the Tivoli Storage Manager server. For further information, see 6.5.4.3, Multiplexing on page 152. The colored bars in the middle and at the right of the performance monitor window give information about the current transfer rate of each Tivoli Storage Manager session (right bars) and the average overall transfer rate of the backup run (lower middle bar). The upper middle bar is a graphical indicator of the backup progress in percent (in the example above, 54.17 percent of the backup is complete). Every time a backup starts and fills the performance monitor panel with data (see Figure 75), an additional window pops up: a graphical backup performance analyzer (Figure 76).
This representation shows in detail how the backup performance develops during the complete backup run. The analyzer consists of a time axis (hh:mm:ss) and a transfer rate axis. Depending on the unit, which can be set with the transfer rates in button (see Figure 75), the transfer rate axis shows MB/s or GB/h. From time to time, the scale of the graphical analyzer is adjusted automatically. The main use of the graphical performance analyzer is to find performance bottlenecks and errors during a backup. For a detailed explanation of how this is done and what it looks like, see 7.4.4, Analyzing performance bottlenecks on page 181.
7.4.3.2 Reviewing backup sessions The Administration Tools performance monitor allows you to review any Tivoli Data Protection for R/3 actions (backups) made in the past. In most cases, an R/3 database backup cannot be observed at the moment it is running, because:
Backup time is normally scheduled at night. There is more than one R/3 system in an enterprise system landscape. Nevertheless, it should be possible to review every R/3 backup run to check if there was a problem, such as with the Tivoli Storage Manager server, or with the time taken to finish the backup, with respect to the scheduled backup window for the R/3 systems. With the help of the review runs function on the performance monitor, all previous backup runs can be monitored again. After pressing the review runs button (as seen in Figure 74) a window pops up as seen in Figure 77.
Note:
A review run is possible only if the selected server is in the idle state. If the selected server is working, the review runs button is disabled, because the running operation is shown on the screen.
The information for any review run (such as amount of data, start time, and so on) is written into a file during every backup run. This file has a unique name consisting of the system ID, the IP address of the R/3 database server, and the date and time of the backup start (for example, TSM_9.1.150.54_1999_11_12_12_01_08). It is stored in the history subdirectory within the Administration Tools server installation directory. Click an entry (see Figure 77), which represents a backup run with its start date/time stamp and associated file as described above, and then press the review button to start the review of the selected run. The delete button removes a previously selected, obsolete entry from the list. After the review is invoked, all parameters in the performance monitor panel are set to the appropriate values for this run, for example, operation type (backup), compression (on/off), and so on. In addition to the performance monitor panel, the graphical performance analyzer (Figure 76) and a review control panel (Figure 78) pop up.
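The review-run file name given above can be decomposed programmatically; this parser is our own sketch, only the SID, IP address, date, and time layout comes from the example in the text:

```python
def parse_run_name(name):
    """Split a review-run file name of the form
    SID_IPADDRESS_YYYY_MM_DD_hh_mm_ss into its components."""
    parts = name.split("_")
    sid, ip = parts[0], parts[1]
    date = "/".join(parts[2:5])  # YYYY/MM/DD
    time = ":".join(parts[5:8])  # hh:mm:ss
    return sid, ip, date, time

print(parse_run_name("TSM_9.1.150.54_1999_11_12_12_01_08"))
# ('TSM', '9.1.150.54', '1999/11/12', '12:01:08')
```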
A horizontal slider moves from the starting point (left) to the completion point (right) as a visual progress indicator. You can also influence the speed at which the operation is presented by moving the vertical speed slider further up, towards fast, or further down, towards slow. The stop key allows you to halt the run at any point. By clicking the single-step keys ( << or >> ) the run will be retarded or advanced, one step at a time. By clicking the start key |< , you return to the beginning, by clicking the termination key >|, you jump to the end. Alternatively, you can move the progress slide lever in either direction (press and hold the mouse button and move the mouse left or right). All the displays will then follow accordingly.
7.4.4.1 General Backup and restore performance depends on many physical components, such as: the R/3 database server, the Tivoli Storage Manager server, the storage device library (tape, optical), and the type of network (Token Ring, FDDI). This list can easily be continued. Also important is the equipment of all involved hardware devices (memory, hard disks, number of CPUs). The logical view, that is, all involved software components as well as the configuration and settings within the server network, plays a very important role, too. As can be seen, performance bottlenecks can be hidden in various components, and in many cases they are quite hard to find. In the following examples, we investigate performance problems and backup errors on the Tivoli Storage Manager server side and on the Tivoli Data Protection for R/3 side. The performance monitor is designed for these kinds of scenarios; it is not a general system performance analyzing tool. General recommendations for improving network, Tivoli Storage Manager, and Tivoli Data Protection for R/3 performance can be found in 6.5, Possibilities to improve backup/restore performance on page 147.
7.4.4.2 Analyzing examples As already mentioned above, the Administration Tools performance monitor is used to analyze Tivoli Data Protection for R/3 dependent backup scenarios.
Performance or error analysis should not be done during a live backup run. Rather, we recommend that you use the review function, which provides all the necessary analysis options, such as navigating to any point in time of the backup. The following examples can be seen as an analysis recipe; in many cases, they are good entry points for finding out something about performance problems during an R/3 backup run.
Remember:
Be aware that, as mentioned in 7.4.4.1, General on page 181, performance problems can be related to many system hardware and software components. As a general rule, only one configuration parameter should be changed at a time. This procedure makes it easier to pinpoint (performance) problems during backup runs.
Example 1: Check the used backup sessions Description
A backup session is a Tivoli Storage Manager client session that Tivoli Data Protection for R/3 establishes to the Tivoli Storage Manager server. For performance reasons (higher data transfer rate, shorter backup time), if a direct backup to tape drives is used, we recommend that you use as many parallel sessions as there are tape drives available.
Check - Tivoli Storage Manager settings and performance monitor session window
a. In our lab environment, as shown in 5.1, Overview on page 99, the data of our R/3 system is sent directly to a tape drive. To be sure that the defined library has two defined drives, invoke the following command within the Tivoli Storage Manager command line administrative client:
tsm> query drive
Session established with server PALANA1: AIX-RS/6000
Server Version 3, Release 7, Level 1.0
Server date/time: 11/14/99 12:45:10  Last access: 11/14/99 11:32:04
The query result shows that there are two defined tape drives in the 3570 tape library that can be used. b. The analysis of the values of every running session area of the performance monitor panel after the review starts shows (Figure 79) that only one session was established between Tivoli Data Protection for R/3 and the Tivoli Storage Manager server.
Conclusion - Check the Tivoli Data Protection for R/3 profile
The Tivoli Data Protection for R/3 profile allows you to define the number of sessions that can be established between Tivoli Data Protection for R/3 and the Tivoli Storage Manager server during a backup run. The profile keyword is MAX_SESSIONS (for further information, see Appendix B, Tivoli Data Protection for R/3 profile on page 343). In our scenario the parameter was set to 1, which means only one session will be established.
Response - Change the profile parameter MAX_SESSIONS
As checked above, our library provides two tape drives. To reach a higher data transfer rate and a shorter backup time, both drives should be used for an R/3 database backup run. Thus, the Tivoli Data Protection for R/3 profile parameter MAX_SESSIONS is increased to 2. The next time a database backup is started, two sessions will be established between Tivoli Data Protection for R/3 and the Tivoli Storage Manager server, yielding a higher data transfer rate and a shorter backup time.
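The profile change can also be scripted. A minimal sketch follows; the profile name initC21.utl and its location are assumptions, not taken from the book, so substitute your actual Tivoli Data Protection for R/3 profile.

```shell
# Hedged sketch: raise MAX_SESSIONS in a copy of the profile.
profile=/tmp/initC21.utl.example
cat > "$profile" <<'EOF'
# Tivoli Data Protection for R/3 profile (excerpt)
MAX_SESSIONS 1
EOF
# Rule of thumb from the text: one parallel session per available tape drive.
sed 's/^MAX_SESSIONS .*/MAX_SESSIONS 2/' "$profile" > "$profile.new" \
  && mv "$profile.new" "$profile"
grep '^MAX_SESSIONS' "$profile"     # prints: MAX_SESSIONS 2
```

The sed/mv pair is used instead of in-place editing for portability to older UNIX systems.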
Example 2: Check the compression used
Description
Tivoli Data Protection for R/3 provides its own data compression method (null block compression) for compressing the data before it is sent over the network and decompressing it afterwards. The additional CPU load is small.
Check - Performance monitor window, to see whether compression is activated
As seen in Figure 80, compression was deactivated during this database backup.
Conclusion - Check the Tivoli Data Protection for R/3 profile
The Tivoli Data Protection for R/3 profile allows you to define whether the integrated compression algorithm is used. The profile keyword is RL_COMPRESSION (for further information, see Appendix B, Tivoli Data Protection for R/3 profile on page 343). In our scenario, the parameter was set to no, which means the Tivoli Data Protection for R/3 compression is not active.
Response - Change the profile parameter RL_COMPRESSION
To reach a higher data transfer rate and a shorter backup time, the Tivoli Data Protection for R/3 compression algorithm should be used for an R/3 database backup run. When the profile parameter RL_COMPRESSION is set to yes, the benefit in most cases is a higher data transfer rate and a shorter backup time, because the amount of data sent over the network from the R/3 database server to the Tivoli Storage Manager server decreases.
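To see why null block compression pays off, the effect can be imitated with a generic compressor on a zero-filled file. This is only an illustration of the idea, not Tivoli Data Protection's actual algorithm, and the sizes are illustrative.

```shell
# Database files often contain unused, zeroed blocks; run-length style
# compression collapses them almost completely. Illustration with gzip:
dd if=/dev/zero of=/tmp/zeros.dat bs=1024 count=512 2>/dev/null
gzip -cf /tmp/zeros.dat > /tmp/zeros.dat.gz
ls -l /tmp/zeros.dat /tmp/zeros.dat.gz   # compressed file is a tiny fraction
```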
Example 3: Check the multiplexing level used
Description
Tivoli Data Protection for R/3 allows you to multiplex different data files, which are read at the same time from different disks, into one data stream to the Tivoli Storage Manager server. Multiplexing makes sense when fast tapes and fast networks are available, when the database files are compressible, and when the CPU load is not too high.
Check - Performance monitor session window
The analysis of the per-session values in the performance monitor panel after the review start shows (Figure 81) that the multiplexing level for each established Tivoli Storage Manager session is 4.
Conclusion - Check the Tivoli Data Protection for R/3 profile
The Tivoli Data Protection for R/3 profile allows you to define the multiplexing level, which specifies the number of data files multiplexed into one Tivoli Storage Manager data stream. The profile keyword is MULTIPLEXING (for further information, see Appendix B, Tivoli Data Protection for R/3 profile on page 343). In our scenario the parameter was set to 4, which means four data files are multiplexed into the Tivoli Storage Manager data stream.
Response - Change the profile parameter MULTIPLEXING
In our scenario the measured transfer rates are quite good. Normally, a Token Ring network landscape would not have such a good throughput, because the bandwidth of a Token Ring network is small (16 Mbit/s) and many other clients would use this network. If we assume such a multi-user scenario, the network bandwidth would probably be too small for a multiplex
level four R/3 database backup with Tivoli Data Protection for R/3. In this case, the profile parameter MULTIPLEXING should be set to 2 or 1. For network environments with a higher bandwidth (for example, FDDI or Fast Ethernet), a multiplexing level between 2 and 4 should work quite well.
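The bandwidth argument can be checked with rough arithmetic. The 10 GB database size below is an assumed figure, and protocol overhead is ignored; the point is only that multiplexing beyond what the network can carry buys nothing.

```shell
# Time to move 10 GB over Token Ring (16 Mbit/s) vs. Fast Ethernet (100 Mbit/s).
size_gb=10
for net in "TokenRing 16" "FastEthernet 100"; do
  set -- $net   # $1 = name, $2 = bandwidth in Mbit/s
  awk -v gb="$size_gb" -v mbit="$2" -v name="$1" \
    'BEGIN { secs = gb * 1024 * 8 / mbit; printf "%s: ~%.0f minutes\n", name, secs / 60 }'
done
# prints: TokenRing: ~85 minutes
#         FastEthernet: ~14 minutes
```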
Example 4: Utilization of the graphical performance analyzer
Description
The graphical performance analyzer provides a diagram with a plotted function that shows the development of the overall backup transfer rate. This tool is used to detect partial performance bottlenecks.
Check - Graphical performance analyzer window
a. Figure 82 shows an example of a backup transfer rate function with one suspect section.
The dashed circle in Figure 82 marks the problem section; what stands out is the decrease of the data transfer rate in this area. The reasons, in the corresponding time period, could be:
- Higher load on the Tivoli Storage Manager server
- Higher network traffic
- Higher load on the R/3 database server
Combinations of these problem scenarios are also conceivable.
b. Figure 83 shows another example of a backup transfer rate function with one suspect section.
The dashed circle in Figure 83 marks the problem section. Here we have a jump in the function graph that points to a problem with one of the two started sessions; perhaps one session was canceled.
Conclusion
a. We have to investigate the reason for a higher load on the Tivoli Storage Manager server or the R/3 database server, or for higher network traffic. To do this, all jobs running on or scheduled for these machines at the relevant point in time have to be checked. In our case, the R/3 database server has no time-critical jobs running at this time. The schedule of the Tivoli Storage Manager server shows a planned backup window for this time period: all Tivoli Storage Manager clients in our system landscape start their file system backup around this point in time. Thus, all clients send their data over the network (higher network traffic) to the Tivoli Storage Manager server (higher load on the Tivoli Storage Manager server).
b. We have to check the activity log of the Tivoli Storage Manager server in use. All activities within the Tivoli Storage Manager server are written to this log in the form of messages, for example, all Tivoli Storage Manager server process messages, all client-dependent messages, session messages, and so on. There are three different kinds of messages: information messages, warning messages, and error messages. A time frame for which the log is displayed can also be specified.
Response
a. During any database backup, no other schedules should be planned. In our case this means that either the schedule window for the daily client file system backup has to be moved to another time frame, or the R/3 database backup has to be started at another point in time.
b. Log on to the Tivoli Storage Manager server with the administrative command line client and invoke the query actlog command. The start date and start time for printing the activity log can be taken from the performance monitor panel.
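A sketch of what checking the log for error messages might look like. The dsmadmc invocation in the comment uses the query actlog begindate/begintime syntax; the log excerpt itself is invented for illustration (only the message-ID convention is real).

```shell
# The activity log would be retrieved with the administrative client, e.g.:
#   dsmadmc -id=admin -password=xxxxx "query actlog begindate=11/14/1999 begintime=12:00"
# Filtering a saved excerpt for error messages (TSM message IDs end in
# I = information, W = warning, E = error):
cat > /tmp/actlog.sample <<'EOF'
11/14/99 12:01:10  ANR0406I Session 42 started for node C21 (AIX).
11/14/99 12:03:55  ANR0480W Session 42 terminated - connection lost.
11/14/99 12:04:02  ANR8302E I/O error on drive rmt0 (3570).
EOF
grep -E 'AN[RE][0-9]{4}E' /tmp/actlog.sample   # keeps only the error line
```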
Now all messages have to be checked carefully, above all the error messages. Further help and message descriptions can be found in the Tivoli Storage Manager manuals.
In this chapter, we describe the installation and setup of CommonStore for archiving R/3 data with Tivoli Storage Manager. Several steps are carried out at the operating system level and in the R/3 system; these are shown in a step-by-step procedure. Since only the archiving of data is considered here, it is sufficient to install the server component of CommonStore. Only when archiving documents is it necessary to install the CommonStore client on the front-end PCs, for displaying archived documents or print lists.
interface through a well-defined set of Remote Function Calls (RFCs). (The capability to support this set of RFC functions is subject to the SAP ArchiveLink certification process.) For the archival data exchange, both the R/3 system and the archiving system need access to a shared file system. CommonStore then sends data to and receives data from Tivoli Storage Manager using the Tivoli Storage Manager API client. Tivoli Storage Manager in turn archives the data on a storage device (optical disk library, tape library). For all the elements between the selection of the data from the database and the storage of the data on media controlled by Tivoli Storage Manager, setup parameters and customizing procedures have to be defined.
Concerning ADK customizing, this section is limited to the technical settings: where to write and read the archive data, the size of the generated files, and how to store the data. In a production R/3 system, customizing the technical settings is only a small part compared to the application-specific customizing (which involves user departments, R/3 application administration, and auditing/controlling departments).
To interface with CommonStore, the settings for the ArchiveLink interface have to be customized. The maintenance of Archives and Links is absolutely necessary. The Archive specifies where to store the R/3 data or document. (When using Tivoli Storage Manager in conjunction with CommonStore as the archiving system, the archived data of the Archive is physically stored on a device associated with a Management Class of Tivoli Storage Manager.) The Link specifies the mapping between the different R/3 document types and the archives. In addition, further settings, such as logical paths and files and queues, have to be set up. Since the communication between ArchiveLink and CommonStore is made by RFCs, the RFC destination and the CPIC user are maintained in the R/3 system.
On the other hand, some of the parameters set up in the different areas of R/3 customizing must be made known to CommonStore. Therefore, a profile for CommonStore named archint.ini also has to be maintained. Setup steps at the operating system level of the CommonStore server include installing the CommonStore software, creating an administrative user for it, and adapting system and Tivoli Storage Manager settings.
[Figure: the R/3 application modules (FI, CO, SD, MM, PP, ...) feed the ADK; ArchiveLink communicates with the CommonStore server (profile archint.ini) over RFC; archive data is exchanged through shared file systems; CommonStore passes the data through the Tivoli Storage Manager API client (dsm.sys, dsm.opt) to the Tivoli Storage Manager server, which writes it to the storage device; the customization data is stored in the database.]
For communication with a specific client within the R/3 system, the client is assigned to a logical system within an R/3 system landscape. The logical system for the client is stored in the CommonStore server profile parameter LOGICAL_SYSTEM. To communicate with the R/3 instance, a corresponding program ID in an RFC destination in the R/3 system and a CPIC user in the client are required. This R/3 CPIC user is reflected in the USER parameter of the profile. In the R/3 system, one or more Archives are created. Each archive definition contains an Archive ID and directory paths for exchanging the archive files. In the CommonStore server profile, the ARCHIVE, ARCHPATH, and BASEPATH parameters are maintained accordingly. ARCHPATH and BASEPATH point to a directory at the operating system level. A corresponding file system with sufficient space has to be created; this file system must be accessible both to the CommonStore server processes and to the R/3 instances.
Note:
If the R/3 application server and the CommonStore server run in a heterogeneous environment on different operating systems, additional software such as AIX Connections, Samba, or a Windows NT NFS implementation may be necessary to share the ARCHPATH and the BASEPATH between both systems.
On the other hand, for each archive a Tivoli Storage Manager management class definition MGMT_CLASS has to be assigned in the CommonStore server profile. This management class reflects the physical storage destination (in the devices controlled by Tivoli Storage Manager) for the archived data of the R/3 system. Table 19 shows the parameters in the CommonStore server profile related to Tivoli Storage Manager settings. Connecting CommonStore to Tivoli Storage Manager requires the STORAGETYPE ADSM (other storage types would connect CommonStore to VisualInfo or OnDemand). CommonStore connects as node ADSMNODE to the Tivoli Storage Manager SERVER. Before customizing the R/3 system and Tivoli Storage Manager and adapting these settings in the profile archint.ini, the settings have to be carefully planned. The step-by-step procedure for the R/3 customizing related to the settings in Table 18 is described in detail in 8.3, R/3 customizing on page 202. Setting up Tivoli Storage Manager according to the parameters of Table 19 is shown in 5.2.3, Specific EDM Suite CommonStore for SAP setup on page 110. Table 18 contains an overview of the relationships between these parameter settings.
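Since so many archint.ini values must agree with settings maintained elsewhere, it helps to list them in one place before customizing. A small sketch using the example values from this chapter (the scratch file path is an assumption):

```shell
ini=/tmp/archint.ini.example
cat > "$ini" <<'EOF'
PROGID          d1.capeverde
GWHOST          capeverde
GWSERV          3300
LOGICAL_SYSTEM  T90CLNT090
CLIENT          800
USER            cstore
SERVER          palana2
MGMT_CLASS      mc_cstore_tsm
ADSMNODE        CSTORE
EOF
# Pull out exactly the keywords that must match the R/3 customizing (Table 18)
# and the Tivoli Storage Manager settings (Table 19):
grep -E '^(PROGID|GWHOST|GWSERV|LOGICAL_SYSTEM|CLIENT|USER|SERVER|MGMT_CLASS|ADSMNODE)' "$ini"
```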
Table 18. CommonStore server profile and R/3 communication settings

  archint.ini     R/3 setting        Description
  --------------  -----------------  --------------------------------------------
  DESTINATION     SID                R/3 System ID.
  PROGID          Program ID         The name of the RFC server program started by
                                     RFC connection type TCP/IP.
  GWHOST          Gateway host       Gateway host name.
  GWSERV          Gateway service    Gateway service (defined either by port number
                                     or by the services name).
  LU              Host name          Host name or IP address of the R/3 application
                                     server.
  TP              Service sapdpXX    Service name or port number to connect to the
                                     application server, XX reflecting the instance
                                     number of the R/3 application server.
  LOGICAL_SYSTEM  Logical system     System in which applications run on a common
                                     data basis. According to SAP standards, a
                                     client is a member of a logical system.
  CLIENT          Client             A legally and organizationally independent
                                     unit which uses the R/3 system.
  USER            User               Name of the CPIC user in the specified client.
  BASEPATH        Basic path         SAP ArchiveLink: basic path for the archive
                                     server on the R/3 application server. The
                                     ArchiveLink archives objects to this path.
  ARCHPATH        Arch. path         SAP ArchiveLink: standard archive path. The
                                     ArchiveLink retrieves objects from this path.
  ARCHIVE         Archv.             SAP ArchiveLink: Archive System ID.
Table 19. CommonStore Server profile and Tivoli Storage Manager settings

  archint.ini   TSM setting       Description
  -----------   ----------------  ---------------------------------------------
  STORAGETYPE   -                 If STORAGETYPE equals ADSM, use Tivoli Storage
                                  Manager as the archiving system.
  SERVER        Server            Name of the Tivoli Storage Manager server to
                                  connect to (as defined in the client's dsm.sys).
  MGMT_CLASS    Management class  Name of the destination of the archived data on
                                  the Tivoli Storage Manager server.
  ADSMNODE      Node              Client's node name on the Tivoli Storage
                                  Manager server.
Figure 85 summarizes the dependencies of the different important parameters of the CommonStore archint.ini profile. The parameters of archint.ini are shown in the upper left box of the picture. These parameters must reappear in the customized settings of the R/3 system (box on the right; follow the arrow from the archint.ini). The transaction code for maintaining these R/3 settings is shown above the boxes describing the customized settings.
[Figure 85: the archint.ini parameters (PROGID d1.capeverde, GWHOST capeverde, GWSERV sapgw00, LU capeverde, TP sapdp00, LOGICAL_SYSTEM T90CLNT090, CLIENT 800, USER CSTORE, BASEPATH and ARCHPATH /usr/sap/transfer, ARCHIVE A0, STORAGETYPE ADSM, SERVER palana1, MGMT_CLASS mc_cstore_tsm, ADSMNODE CSTORE) mapped to the corresponding R/3 customizing transactions (SM59: Program ID; Client 800; BD54/SCC4: Logical system; SU01: User; FILE/SF01: ARCHIVE_GLOBAL_PATH; OAC0: Basic path, Arch. path, opt. Archive ID) and to the Tivoli Storage Manager settings.]
8.2.1 Prerequisites
Before starting the software installation, satisfy the following prerequisites:
- AIX 4.1.5 or higher on the CommonStore server
- An installed and operational R/3 system, release 3.0E or higher
- An installed and operational R/3 gateway for RFC communication to the CommonStore server. During a proper R/3 installation, a default gateway process is installed for the R/3 system; this default gateway can be used for CommonStore.
- An installed and operational ADSM server Version 2.1 or higher, or an installed and operational Tivoli Storage Manager server Version 3.7 or higher
- An installed and operational ADSM client with the ADSM client API Version 2.1.0.3 or higher additionally installed, or an installed and operational Tivoli Storage Manager client Version 3.7 or higher
- A current installation image of the CommonStore server software
Note:
For more information about the download of the evaluation code, access the Enterprise Data Protection Development Website:
http://www.de.ibm.com/ide/esd.html
# smitty install_selectable_all
4. Insert the actual directory in the INPUT device / directory for software field (or type just a period, .); select /dev/cd0 if the software was shipped on CD.
Install and Update from ALL Available Software

Type or select a value for the entry field.
Press Enter AFTER making all desired changes.

                                                   [Entry Fields]
* INPUT device / directory for software            [.]
5. Choose SOFTWARE to install by pressing F4, select the CommonStore base package for ADSM as the SOFTWARE to install, leave the other items at their default values, and start the installation.
6. Create the CommonStore administrative user with the System Management Interface Tool (SMIT), fastpath mkuser. Since the CommonStore processes have to access data files written by the R/3 system, the CommonStore administrative user has to be a member of the sapsys and dba groups.
# smitty mkuser
and fill in the name (in our example cstore), group membership, and home directory in the SMIT panels. The home directory must be /usr/lpp/commonstore/bin, and /bin/ksh must be used as the login shell. For the other items in the panels, use the default values or adapt them according to your local user policies.
Add a User

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP]                                        [Entry Fields]
* User NAME                                  [cstore]
  User ID                                    []                      #
  ADMINISTRATIVE USER?                       false                   +
  Primary GROUP                              [staff]                 +
  Group SET                                  [sapsys,dba]            +
  ADMINISTRATIVE GROUPS                      []                      +
  ROLES                                      []                      +
  Another user can SU TO USER?               true                    +
  SU GROUPS                                  [ALL]                   +
  HOME directory                             [/usr/lpp/commonstore]
  Initial PROGRAM                            [/bin/ksh]
  User INFORMATION                           []
  EXPIRATION date (MMDDhhmmyy)               [0]
[MORE...37]

F1=Help       F2=Refresh     F3=Cancel    F4=List
Esc+5=Reset   Esc+6=Command  Esc+7=Edit   Esc+8=Image
Esc+9=Shell   Esc+0=Exit     Enter=Do
7. Assign an initial password to the administrative user. You can do so with the System Management Interface Tool (SMIT), fastpath passwd
# smitty passwd
Enter the CommonStore administrative user and assign the initial password to it.
8. Change the ownership of /usr/lpp/commonstore/bin (the home directory of the CommonStore administrative user) to that user.
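Step 8 amounts to a chown. A sketch that runs against a scratch directory instead of the real path (the group name staff is taken from the SMIT panel above; adjust owner and group to your installation):

```shell
dir=/tmp/commonstore-demo/bin        # stand-in for /usr/lpp/commonstore/bin
mkdir -p "$dir"
# Real system: chown cstore:staff /usr/lpp/commonstore/bin
chown "$(id -un)" "$dir"             # stand-in owner so the sketch runs anywhere
ls -ld "$dir"
```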
9. Check whether the CommonStore server executables are within reach of the administrative user's environment. Switch to the CommonStore administrative user and check the PATH variable:
The PATH variable must include the directory /usr/lpp/commonstore/bin. If /usr/lpp/commonstore/bin is not present, add it to the CommonStore administrative user's PATH definition in /usr/lpp/commonstore/.profile.
10. Add the environment variable DSMI_DIR to the .profile of the administrative CommonStore user. DSMI_DIR has to point to the directory where the client option file resides (assuming a client option file was already built, as described in 5.3.1, General prerequisites for client and API setup on page
111). Depending on which API client you are using, add one of the following lines to /usr/lpp/commonstore/.profile. For the Tivoli Storage Manager API client:
export DSMI_DIR=/usr/tivoli/tsm/client/ba/bin
For the ADSM API client:
export DSMI_DIR=/usr/lpp/adsm/bin
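Steps 9 and 10 can be verified from the shell. This sketch writes a stand-in .profile rather than touching the real one:

```shell
profile=/tmp/.profile.example        # stand-in for /usr/lpp/commonstore/.profile
cat > "$profile" <<'EOF'
export PATH=$PATH:/usr/lpp/commonstore/bin
export DSMI_DIR=/usr/tivoli/tsm/client/ba/bin
EOF
. "$profile"
case ":$PATH:" in
  *:/usr/lpp/commonstore/bin:*) echo "PATH ok" ;;
  *)                            echo "PATH missing /usr/lpp/commonstore/bin" ;;
esac
echo "DSMI_DIR=$DSMI_DIR"
```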
8.2.3.2 Checking and adapting operating system settings
11. If you have installed the CommonStore server software on a node that does not run an R/3 gateway process (and you are not distributing the services by the Network Information System (NIS)), check the local entries in /etc/services.
To find out which instance and gateway to connect to, log on to the R/3 system to be archived and check the Gateway Monitor transaction. Access it from the R/3 logon entry screen by first opening the Systems Management entry screen: select Tools->Administration from the menu (Figure 86).
The Systems Administration screen appears; this is the entry screen for a lot of administrative tasks in the R/3 System. To access the Gateway monitor, choose Tools->Administration from the menu as shown in Figure 87.
Another way to access the Gateway monitor (from any R/3 screen) is to use a transaction code. Enter the transaction code SMGW (with /n as prefix) in the command box in the upper part of the SAPGUI and press Enter (Figure 88).
Note:
In the following sections, the menu path is not shown for some of the R/3 transactions. In these cases, the transaction can be reached by its transaction code.
The Gateway monitor base screen appears, as shown in Figure 89. Select Goto->Parameters from the menu and search (as shown in Figure 90) for the contents of the two parameters, gateway host and gateway service, in the Attributes section of the list.
The corresponding entry for the gateway service has to be present in /etc/services, for example:
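By SAP convention, the gateway service sapgwXX listens on port 33XX, so for instance number 00 the entry would look like the following. Verify the port against your own installation; this sketch writes to a scratch file, not the real /etc/services.

```shell
echo 'sapgw00    3300/tcp    # SAP gateway, instance 00' >> /tmp/services.example
grep sapgw00 /tmp/services.example
```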
Both the Gateway host and the Gateway service parameters must match the settings in the CommonStore server profile archint.ini (see 8.3, R/3 customizing on page 202).
12. Create the exchange directory for the files to be archived. The R/3 system writes the archive files to this directory; after the database records have been deleted from the database, the CommonStore processes transfer the data to Tivoli Storage Manager.
Note:
The R/3 system and the CommonStore processes need read and write access to that directory. Therefore, set the user ownership of the directory to the administrative CommonStore user, and the group ownership to the administrative R/3 system group. Since all application servers of the R/3 system may run archiving batch jobs, make sure that all of them can access this directory: distribute it to all application servers by means of the Network File System (NFS), or equivalent tools, and mount it on the remote application servers.
The file space needed in the file system depends on the amount of data/documents that will be archived and retrieved.
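The directory setup of step 12 can be sketched as follows, using a scratch path instead of the real exchange directory; the ownership command and the NFS distribution are shown only as comments, with owner, group, and host names as placeholders.

```shell
dir=/tmp/sap-transfer-demo           # stand-in for e.g. /usr/sap/transfer
mkdir -p "$dir"
chmod 775 "$dir"                     # CommonStore user and R/3 group both need rw
# Real system (placeholder names):
#   chown cstore:sapsys /usr/sap/transfer
#   export the directory via NFS on the database server, then on each
#   application server: mount dbserver:/usr/sap/transfer /usr/sap/transfer
ls -ld "$dir"
```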
#------------------------------------------------------------------------#
# Technical connection parameters for R/3. Similar to the sideinfo
# file provided by SAP.
# Multiple LOGICAL_SYSTEM sections can be specified.
#------------------------------------------------------------------------#
DESTINATION      TSM
PROGID           d1.capeverde
GWHOST           capeverde
GWSERV           3300
LU               capeverde
TP               3200
LOGICAL_SYSTEM   T90CLNT090
CLIENT           800
USER             cstore
The steps for obtaining the gateway parameters of the R/3 system were described in 8.2.3.2, Checking and adapting operating system settings on page 198. All these settings must match the communication settings in the R/3 system. The procedure for setting the corresponding parameters in the R/3 system is described in 8.3.1, R/3 communication settings on page 203. Two profile parameters describe from where the CommonStore server will read the archived data files. At the UNIX level, the creation of these
directories was done in 8.2.3.2, Checking and adapting operating system settings on page 198.
#------------------------------------------------------------------------#
# Path in which R/3 is storing the data for CommonStore. It should
# be shared through all R/3 application servers via NFS mount.
#------------------------------------------------------------------------#
BASEPATH   /usr/sap/transfer
#------------------------------------------------------------------------#
# Path in which CommonStore is storing the data for R/3. It should
# be shared through all R/3 application servers via NFS mount.
#------------------------------------------------------------------------#
ARCHPATH   /usr/sap/transfer
The corresponding values in the R/3 system are defined in 8.3.2.1, Definition of archives on page 213 and 8.3.1.4, Customizing the logical path (client independent) on page 209. For the connection of CommonStore to Tivoli Storage Manager, the settings in the CommonStore server profile must match the definitions in 5.1.3, Clients on page 103 and 5.2.3, Specific EDM Suite CommonStore for SAP setup on page 110.
#------------------------------------------------------------------------#
# Technical connection parameters when ADSM is used as archive.
# The first entry could be ARCHIVE DEFAULT.
# Further entries with explicitly defined archive IDs are needed
# if different ADSM servers and/or different ADSM management classes
# should be used for storing R/3 documents.
#------------------------------------------------------------------------#
ARCHIVE          A0
STORAGETYPE      ADSM
SERVER           palana2
MGMT_CLASS       mc_cstore_tsm
ADSMNODE         CSTORE
LOGICAL_SYSTEM   T90CLNT090
Note:
In the lab example, customizing ArchiveLink and the ADK for R/3 was done in an R/3 4.0B system. For other R/3 releases, transaction screens may differ, and possibly even the contents and functions of the transactions used may be different.
To create a user in the R/3 system, log on to the client (whose data will be archived to the archiving system) and choose Tools->Administration->User Maintenance->Users from the menu, as shown in Figure 86 on page 198 and Figure 91 (or use transaction code SU01).
As shown in Figure 92, the entry screen of the user maintenance transaction appears.
Enter the user name, maintain the address data, specify an initial password, and set the logon data and profile parameters according to Figure 93 and Figure 94:
- User type is CPIC
- Assign authorization profiles SAP_ALL and SAP_NEW
Note:
In principle, it is not necessary to assign the authorization profile SAP_ALL to the user. Depending on the archiving scenarios, the user has to have all authorizations necessary to access and use the data of the different application modules. So, for the sake of simplicity, in this section we assigned the SAP_ALL profile to the user.
Figure 93. Logon data settings for the CommonStore CPIC user in the R/3 system
Figure 94. Profile settings for the Common Store CPIC user in the R/3 system
8.3.1.2 Creating an RFC destination
To create the RFC destination, start from the R/3 entrance screen and choose Tools->Administration->Network->RFC Destinations from the menu (or use transaction code SM59). The R/3 RFC display and maintenance transaction appears (Figure 95).
Choose Create from this screen. Enter the RFC destination name D1.CAPEVERDE, the communication type T (start an external program via TCP), and a description for the destination in the corresponding fields of the next screen (Figure 96).
After pressing Enter, the next screen appears (as shown in Figure 97). Since the CommonStore processes are registered with the gateway process (and are not started out of R/3), the activation type is Registration. Press the Registration button as the Activation type in the technical settings. Then enter the Program ID (as defined in the CommonStore server profile archint.ini in 8.2.4, Customizing of the CommonStore server profile on page 201) into the corresponding box.
Then select Destination->Gateway options from the menu of this screen (Figure 98).
A new subscreen appears (Figure 99). Enter the Gateway host and the Gateway service and click OK.
The subscreen disappears and the RFC destination display/change screen of D1.CAPEVERDE appears. Do not forget to Save the RFC destination.
Note:
If you plan to test the connection to the (newly created) RFC destination, make sure that the CommonStore server processes are started. (See 8.4.2, Starting the CommonStore server on page 220.)
8.3.1.3 Checking/creating the logical system for the client
A logical system is a system in which applications run on a common data basis and may communicate with each other. The logical system is relevant for Application Link Enabling (ALE) and Workflow purposes. So, according to the R/3
standard, a client corresponds to a logical system. Since the ArchiveLink interface can be embedded in Workflow scenarios, the client has to be assigned to a logical system. Enter transaction SCC4 (Figure 100) and choose the client to connect the archiving system to (by double-clicking).
Detailed information about the client parameters is displayed (Figure 101). Check whether there is already a Logical system defined for the client.
If there is one defined, this value has to correspond to the entry in the CommonStore server profile archint.ini.
Note:
In a production system, the value of the Logical system must not be changed once a value (other than the initial empty field) has been set. If a (non-initial) Logical system is changed, you might no longer be able to find some documents in your system. So, avoid changing existing settings of the Logical system.
If there is no Logical system specified for the client, create one with transaction BD54 (or choose an existing one) and assign it to the client. The value of the Logical system has to correspond to the value set in the CommonStore profile archint.ini.
8.3.1.4 Customizing the logical path (client independent)
When accessing an archive file, the ADK does not use a hardcoded definition for path and file name; for all operations the file is accessed using logical path and file definitions. The paths defined in this section are reflected later in the technical settings of the ADK in 8.3.3, R/3 ADK customizing on page 217. In this step, a logical path is defined with transaction FILE (Change View Logical File Paths: Overview in Figure 102).
Note:
Accessing files by logical file and path names also has advantages in a heterogeneous R/3 system environment (for example, application servers running on the operating systems AIX and Windows NT in the same R/3 system). Two syntax groups (UNIX and Windows NT) would be assigned to a logical path so that all application servers can access the path while observing the different naming conventions of the different operating systems.
Access the transaction FILE and modify the global path ARCHIVE_GLOBAL_PATH (Figure 102):
First, select Logical file path definition in the navigation section, then mark ARCHIVE_GLOBAL_PATH in the list of logical file paths. Then select --> Assignment of physical paths to logical paths.
A new screen appears (Figure 103). Double-click the corresponding Syntax Group (UNIX).
In the following screen (Figure 104), enter the path you specified as BASEPATH in the archint.ini profile and add the extension /<FILENAME>.
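What the placeholder does at run time can be pictured with a simple substitution; the file name RA000001.ARCHIVE below is invented for illustration and is not a value from the book.

```shell
physical='/usr/sap/transfer/<FILENAME>'
file='RA000001.ARCHIVE'              # illustrative generated archive file name
printf '%s\n' "$physical" | sed "s/<FILENAME>/$file/"
# prints: /usr/sap/transfer/RA000001.ARCHIVE
```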
Save the entry.
8.3.1.5 Customizing the logical files (client dependent)
Access the transaction Logical file names, client-specific with the transaction code SF01. A screen appears (Figure 105).
Press New entries and add the definitions for the logical file. As shown in Figure 106, first select ARCHIVE_DATA_FILE_WITH_ARCHIVELINK. Make sure that ARCHIVE_GLOBAL_PATH is assigned as the Logical path.
Then, as shown in Figure 107, select ARCHIVE_DATA_FILE. Again, make sure that ARCHIVE_GLOBAL_PATH is assigned as the Logical path.
The base screen of the IMG appears; select the SAP Reference IMG either from the menu (Figure 109) or by pressing F5.
In the SAP Reference IMG, expand Basic Components->Basis Services->ArchiveLink (Figure 110). The customizing settings for ArchiveLink have to be made in the System Settings and the Archive Settings.
Note:
The structure shown for the IMG is valid for R/3 4.0B. The paths and names may change with other R/3 releases. For example, in the R/3 4.5B release, the name Archive Settings changed to Storage System Settings.
8.3.2.1 Definition of archives
The definition of the archive is done in R/3 by transaction OAC0. In this transaction, press the Display->Change button and then add New entries.
Define the optical archive according to the panel shown in Figure 111. Enter the name of the RFC destination defined in 8.3.1.2, Creating an RFC destination on page 205. As protocol, use ARCHPRT. Basic path and Arch. path are the paths for the data exchange with the archiving system.
Note:
Make sure that both paths contain the delimiter / at the end. Activate the RFC radio button.
8.3.2.2 Definition of links
The LinkTable in the R/3 system defines the correlation between the different object and document types and the optical archive to which they are archived. The entries in the LinkTables are client specific, so for different clients the same types may be archived to different optical archives. Since we are dealing with the archiving of data, first check whether there is already an entry available for the REO document type.
Enter transaction SE16 (data browser) and select the table TOAOM. As shown in Figure 112, fill in the criteria on the selection screen:
Set SAP_OBJECT to ARCHIVE
Set AR_OBJECT to ARCHIVE
Press the EXECUTE button.
Figure 112. Selecting the objects for table TOAOM in transaction SE16
If there is such an entry for the DOC_TYPE REO, as shown in Figure 113, the ARCHIV_ID has to be modified in the Link Table. Ensure that the entry is active, that is, that the AR_STATUS is set to X.
If there is no entry in the table TOAOM for the DOC_TYPE REO, then create an entry using the transaction OAC3. Create a new entry by switching to change mode (press the Display -> Change button), then press the New Entries button and enter the settings as shown in Figure 114.
If an entry already exists (as shown in Figure 115), make sure that the Archive setting corresponds to the proper optical archive ID. If not, select the entry with the status X (this is the active one) and change the Archive setting accordingly.
8.3.2.3 Definition of queues
The necessary queues are created using the R/3 transaction OAQI. Figure 116 shows the appearance of this transaction. Keep all the default X settings in the small boxes, assign a queue administrator to the queues, and press the Execute button.
(The queue administrator is the R/3 user in this client who is responsible for administering the queues. For administration purposes, the queue administrator uses the transaction OAM1.)
All of the ADK archive administration can be done from transaction SARA. Figure 117 shows the entry screen of this transaction.
Select Customizing and then Archive Object specific customizing. Figure 118 shows the corresponding screen for the archive-object-specific customizing. The Logical File Name ARCHIVE_DATA_FILE must match the definitions we made before, and the document type has to be ARCHIVE. Both the archiving run and the subsequent deletion run are started automatically: after the archiving run has successfully finished writing the archive file, the deletion run rereads the file and deletes the items from the database. Afterwards, the file is transferred to the archiving system. For the delete program, select suitable variants for both test and production runs.
Note:
During the first implementation tests, the delete program and the connection should not be started automatically. The individual steps of deleting the data and then transferring it to the archive system should be carried out manually (from transaction SARA) to trace their proper execution and the proper connection to the archive system. Also, the generated archive file should be backed up before performing the first deletion run tests.
Note:
Entering or changing such test data should be performed in the R/3 test system only.
After pressing Enter, the next screen appears; since these are test database records, only the Bank name is maintained (Figure 120).
The entry screen of the archive administration was already shown in Figure 117 on page 218. To start an archiving run, enter FI_BANKS as the archiving object and press the Archiving button (see Figure 122). Before the archiving run starts as a background job in the R/3 system, additional parameters have to be maintained.
Definition of a variant for the archiving run
A variant is mandatory, as it contains the arguments for the archiving job:
Create a test variant by entering its name into the box and pressing Maintain. As shown in Figure 123, enter the following settings:
Select the Bank country AD
Set Min. no. of days in the system to 0
Disable the Only with deletion flag checkbox
Disable the Test run checkbox
Enable the Detail log checkbox
Thus, all banks of the Bank country Andorra (AD) will be archived, regardless of how long they have existed in the system and regardless of their deletion flag.
Note:
The variant of the archiving program contains the parameters for the actual run. Depending on the attributes of an archiving object, the variant defines specific selection criteria for the archival data, such as number ranges or retention periods. For real archiving runs, the parameter settings in the variant must be carefully maintained according to the business needs. In general, this maintenance task is not done by the IT department alone, but also involves the user departments and the R/3 application administration.
Press the Continue button (or press F6), enter a description in the following screen, and save the variant. You will return to the main screen for maintaining the archive run parameters.
Definition of the user running the archive job
The default setting for the user name (the R/3 user account under which the archiving run is started) is the queue administrator defined in 8.3.2.3, Definition of queues on page 216. For the archiving run, this default setting can be used.
Maintenance of the start date for the archive run
Press the Start Date button, and a new screen pops up (Figure 124). Select Immediate and then Save.
Maintenance of the spool output parameters
Press the Spool Params button, and a new screen pops up. Select your printer in the Output Device box; the output list of the archive job will be sent to this printer after the archive run is completed.
Start of the archive run
Now, after all the parameters are maintained, the screen looks as shown in Figure 125.
The archive run can now be started by pressing the Execute button. The archiving run consists of three jobs:
The submit job (SUB), which contacts the application to select the data.
The write job (WRI), which writes the archive file to the file system.
The delete job (DEL), which rereads the archive file and deletes the data from the database.
If the Start.automat. checkbox is enabled in the technical customizing of the archive object (as shown in Figure 118 on page 219), then after the completion of the delete job a request is submitted by ArchiveLink to CommonStore to archive the file. As shown in Figure 126 (reached by pressing the Job Overview button), a list of the current archive jobs is displayed.
Alternatively, an overview of the status of the archive run can be obtained by choosing the archive object and Management on the base screen of the transaction SARA. As shown in Figure 127, a screen appears which contains information about all the archive runs done for this archiving object. Expanding the tree and clicking the Archive System button for a selected run item leads to more detailed information about this run.
As shown in Figure 128, this detailed information is separated into information about the Status, Buffer, and Accessibility of the archived file. Each archiving run has a unique ID. The names of the archive files reflect this ID, the archiving object, and sequence information. From the point of view of R/3, the status of this run is complete: all database records belonging to this run are deleted from the database, the archive file has been successfully stored on the archive system, and the archive file is accessible via the ArchiveLink interface. In a production run, this list would contain more than one archive file; for each of them, this status information is displayed.
Table 20. SAP documentation (with product numbers) for R/3 release 4.0B with Oracle and AIX:
R/3 Homogeneous System Copy
R/3 Installation on UNIX - Oracle Database
Check list - Installation Requirements: Oracle
SAP Documentation Guides CD
The correct SAP documentation should be used for each particular environment. This documentation can be found on the SAP Documentation Guides CD, which is part of the R/3 installation package, from SAP technical support, or at this Web site:
http://sapnet.sap.com
In this book, we do not cover an R/3 Heterogeneous System Copy, only a Homogeneous System Copy.
A common R/3 system landscape consists of three instances: development, quality assurance, and production. In the development system, customization and implementation of an R/3 environment are carried out. Quality assurance is the test environment for all changes made in development. The production system is the real user environment of the company. For setting up this R/3 system landscape, basically two options are considered:
In the case of a hardware upgrade, the system copy can be used to transfer an R/3 system from the old machine to a new one. But since you will not change the System Identifier (SID) of your system, you will not need to make a Homogeneous System Copy. The best procedure is to copy all file systems of the R/3 instance and the entire Oracle database to the new server; this transfers the files of the entire R/3 system at the operating system level. Here are some reasons for copying the file systems at the operating system level:
The executables of the R/3 system and the database may have been updated since installation, so they may have a higher release level than the files on the CDs of the R/3 installation package. The file system copy avoids redoing all software upgrades.
There are changes that are executed to fix a problem in the R/3 system. The SAP OSS-Notes contain information about changes that have to be applied in the R/3 system to correct a problem. Sometimes, such a correction is made at the operating system level, and a Homogeneous System Copy will not contain these changes because it is equivalent to a first-time installation.
For these reasons, in a hardware upgrade, we recommend copying the file systems. Copying the file systems, as a system copy procedure in a hardware upgrade, is efficient provided you ensure that all file systems are copied or transferred to the new system.
Test system
When you change any hardware or software that may have an impact on your R/3 system landscape (for example, a hardware or software upgrade), SAP recommends that you make a system copy and apply the changes first on this clone system. With this procedure, you identify problems before applying the changes to your production system. You can also use the system copy to create an exact copy of your production system for tests with the same data (for example, performance tests). This system is usually called Quality Assurance.
Training and demo system
Use this when you need a system to train the R/3 users, or a demo system, and do not want to interfere with any production system.
Standby system: for a database other than Oracle and Informix
If you use a database other than Oracle and Informix, you can use the system copy procedure to create a standby system. This copy procedure is different from the procedure shown in this chapter; you can find more information in the SAP documentation R/3 Homogeneous System Copy (for our case study, see the product number in Table 20 on page 229). In Chapter 11, R/3 warm standby using Tivoli Storage Manager on page 283, we show the procedure to build a standby server environment using an Oracle database.
In the next section, we show the procedure for an R/3 system copy using Tivoli Data Protection for R/3 with Tivoli Storage Manager. Afterwards, we explain some considerations about cloning an R/3 system:
System copy using different target directories: We describe how to restore files to a different destination in the target system.
System copy using online backup and complete recovery in the target system: We explain the procedure to make a system copy using an online backup and a complete recovery of the database on the target system.
System copy with full reorganization: We show the major steps to get a fully reorganized target system.
For the homogeneous system copy procedure shown in this chapter, we used our case study presented in Chapter 4, SAP data management case study on page 85. For this environment, Figure 129 shows the steps of the system copy that are detailed here.
Figure 129. Overview of the system copy for the case study: source node capeverde (database TSM, control file cntrlTSM.dbf) and target node palana (control file cntrlCPY.dbf), each running TDP for SAP R/3 and the TSM API, connected to the TSM server and its tapes.
These are the general steps for the Homogeneous System Copy using Tivoli Data Protection with Tivoli Storage Manager:
1. Planning the Homogeneous System Copy
2. Installation of the R/3 instance and database on the target system
3. Offline backup of the source database system
4. Restore of the source database backup on the target system
5. Activation of the target database
6. Completion of the R/3 system installation
7. Reconfiguration of Tivoli Storage Manager and Tivoli Data Protection
These steps are explained in the following sections.
The system copy procedure presented in this chapter is based on the SAP documentation R/3 Homogeneous System Copy and R/3 Installation on UNIX - Oracle Database; for our case study, see the product number in Table 20 on page 229. You should use this procedure as a complement to those manuals, since we do not explain all steps of a complete installation in this book.
Note:
Before performing the Homogeneous System Copy, make sure you have carefully examined the two manuals mentioned above, as well as the latest versions of the OSS-Notes related to installation and Homogeneous System Copy for your operating system, R/3 release, and database release. For our case study, we used R/3 4.0B with Oracle; the OSS-Note for the Homogeneous System Copy was 86859.
The next section gives a short overview of the planning of a Homogeneous System Copy, followed by its detailed procedural steps.
The two tools mentioned above are used to restore the database backup and must be correctly installed before proceeding with the database restore.
In a system landscape, at least the UNIX group identifier (GID) should be the same for the <sid>adm and ora<sid> users on all participating system nodes. This is necessary because they share an NFS file system (/usr/sap/trans); otherwise, there will be permission problems in this file system.
Choose the system name (CPY).
Create the Oracle and SAP file systems with the default sizes.
If it is not yet created, create the transport directory (/usr/sap/trans). This file system is shared, via the NFS protocol, with the whole R/3 system landscape and is part of the Transport Management System configuration.
Create the installation directory (/tmp/install).
Install the R/3 instance. For the installation of the central instance, we used the script specific to this type of installation (Homogeneous System Copy), CEDBR3CP.sh, instead of the default script CENTRDB.sh. These scripts are described in the SAP documentation R/3 Homogeneous System Copy; for our case study, see the product number in Table 20 on page 229. Start the installation command R3SETUP -f CEDBR3. This parameter (CEDBR3) is used only for the Homogeneous System Copy.
Install the Oracle database server with orainst.
Upgrade Oracle to version 8.0.4.4.
The target SAP system is ready to start the restore of the source database and creation of the new database server instance identified by the new system identifier CPY.
Note:
You must ensure that the version and patch level of the Oracle database are the same on the source and target systems. Before activating the target Oracle database, you must upgrade Oracle to the same patch level. To see which Oracle version a system uses, use Server Manager (svrmgrl); for the latest Oracle database patch level and the upgrade procedure, consult the OSS-Notes for your database version. The procedure to activate the target system is explained in 9.2.5, Activate the target database on page 240. Our version of Oracle is 8.0.4; in this case, the OSS-Note with the latest release is 99311.
Stop the R/3 system and the Oracle database server as the R/3 administrative user tsmadm (<source_sid>adm).
capeverde:tsmadm> stopsap
capeverde:oratsm> svrmgrl
SVRMGR> connect internal
SVRMGR> startup restrict
SVRMGR> quit
Now, change to the working directory and start the script r3copy as administrative database user:
The r3copy script builds an SQL script file to recreate the Oracle database control file using the new target SID CPY. This new control file renames the Oracle instance to the target SID. The screen below shows the options menu of the r3copy script.
---------------------------------------------------------------r3copy - MAIN MENU ---------------------------------------------------------------(a) Source system: Force n log switches before backup where n is the number of redo log groups (b) Source system: generate script CONTROL.SQL (c) Source system: BACKUP CONTROLFILE TO TRACE (manual procedure) ------------------(d) Target system: Create new control files ------------------(q) Quit Please select (a|b|c|d|q):
In the source system, you have to choose the first three options, (a), (b), and (c), in that order. Option (d) is used in the target system to create new control files. Below, we explain these three options:
If the three options of r3copy completed successfully, start the offline backup with Tivoli Data Protection for R/3 to the Tivoli Storage Manager server. First, you have to perform a shutdown normal of the Oracle database instance:
capeverde:oratsm> svrmgrl
SVRMGR> connect internal
SVRMGR> shutdown normal
SVRMGR> quit
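The backup itself is covered in Chapter 6; for illustration only, a BR*Tools invocation on the source node might look like the following. The option values are assumptions, not a prescription from this procedure; -d util_file selects the BACKINT interface used by Tivoli Data Protection for R/3:

```
capeverde:oratsm> brbackup -t offline -d util_file -m all
```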
For detailed information about using Tivoli Data Protection for R/3, see Chapter 6, Tivoli Data Protection for R/3 implementation on page 117. Afterwards, you can restart the source R/3 system:
capeverde:tsmadm> startsap
Set BACKUPIDPREFIX and all settings corresponding to the SERVER statements equal to the settings in the profile of the source system. Figure 130 shows the parameters from the source initTSM.utl file that should be copied to the target initCPY.utl file.
Figure 130. Parameters of source initTSM.utl file that is copied to target initCPY.utl file
In this last step, these parameters must be changed to reflect the final configuration of the Tivoli Data Protection for R/3 for this new system using the target SID CPY.
Tivoli Storage Manager client system options file dsm.sys
Add or change all settings corresponding to the Tivoli Storage Manager server specified in the SERVER parameter (of the init<SID>.utl) to match the parameters in the dsm.sys of the source system. Figure 44 on page 113 shows the dsm.sys file for the source system. The important parameters that you should change, in order to simulate the Tivoli Storage Manager client node capeverde, are shown in Figure 131.
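For illustration, a server stanza in the target node's dsm.sys might have the following shape. The server name, port, and address are assumptions (use the values from the source system's dsm.sys); NODename simulates the source client node capeverde:

```
SErvername       server_a
   COMMMethod            TCPip
   TCPPort               1500
   TCPServeraddress      <tsm_server_address>
   NODename              capeverde
```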
In this last step, these parameters must be changed to reflect the final configuration of the Tivoli Storage Manager for this new client node.
Tivoli Data Protection for SAP password (reflected in init<SID>.bki)
Since we will access data owned by the source system, the target system has to access Tivoli Storage Manager simulating the Tivoli Storage Manager client node of the source system; therefore, we have to use the same password. Copy the file initTSM.bki from the source system to the target system, keeping the file name. If this is the first connection to the Tivoli Storage Manager server, you have to set up the password. This procedure is explained in Chapter 6, Tivoli Data Protection for R/3 implementation on page 117.
Generate symbolic links to the original paths of online logs and data files
In the file system layout on the target machine, the mount points/directories are created with names containing the SID of the target R/3 system. Since the files were backed up with path names containing the SID of the source system, the file manager will restore them to their original paths. As user root, perform the steps according to the following example:
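A minimal sketch of the linking step, assuming source SID TSM and target SID CPY as in the case study and the standard /oracle base directory (both values are assumptions; adapt them to your SIDs):

```shell
# Run as root on the target node. The guard makes the link only if the
# target tree exists and the source path is still free.
ORACLE_BASE="${ORACLE_BASE:-/oracle}"
if [ -d "$ORACLE_BASE/CPY" ] && [ ! -e "$ORACLE_BASE/TSM" ]; then
    # /oracle/TSM/... paths from the backup now resolve into the CPY tree
    ln -s "$ORACLE_BASE/CPY" "$ORACLE_BASE/TSM"
fi
```

With this link in place, the file manager can restore to the original source paths while the files physically land under the target SID's directories.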
After the end of this procedure, you will no longer need this symbolic link and can remove it.
Create a list of the database files
Since the BACKINT/TSM File Manager is able to restore files only and does not take care of directories, all directories containing data files must be created in advance. Also, the BACKINT/TSM File Manager restores the files with the wrong file permissions. We will extract a list of all the database data files from the CONTROL.SQL file; this list can be used later for adapting these settings. As user ora<target_sid>, change to the directory $ORACLE_HOME/sapreorg and issue the command:
palana:oracpy> cd $ORACLE_HOME/sapreorg palana:oracpy> grep $ORACLE_HOME/ CONTROL.SQL | tr -d "'," > dbfiles
The file $ORACLE_HOME/sapreorg/dbfiles contains a list of all the data files and online log files. Create the subdirectories as user ora<target_sid> either manually or by the following short (Korn-Shell) script:
palana:oracpy> cd $ORACLE_HOME/sapreorg
palana:oracpy> ksh
palana:oracpy> cat dbfiles | while read fs; do
> dir=$(dirname $fs)
> mkdir -p $dir
> done
After these preparations, you are able to restore the database files. The procedure for restoring files is described in 6.2.5, Tivoli Data Protection for R/3 File Manager on page 125. For the full restore of the data files in backfm, you should select all files of the offline backup executed on the source system. The set of files includes the Oracle data files and the online redo log files. The Oracle database control file must not be restored, since it will be recreated for the new target instance.
Note:
You must ensure that your target sapdata<n> directories have enough space for the restored files. The sapdata<n> directories are the space allocated for the Oracle data files; these directories are standard in R/3 systems and should not be changed.
Since the BACKINT/TSM File Manager restores the files with the source ownership, the ownership has to be changed to the administrative database user of the target system (ora<target_sid>), oracpy. Below is a suggested procedure: change the ownership as user root, either manually or with the following short Korn shell script:
# cd /oracle/CPY/sapreorg
# ksh
# cat dbfiles | while read fs; do
> chown oracpy $fs
> done
This step should not be performed in the case of a Homogeneous System Copy without a change of the SID on the target system, for example, in a hardware upgrade of a database server. In this case, you can use the same control file as the source system and open the database.
Copy the files CONTROL.SQL and initCPY.ora generated during the r3copy run to the target system. (Copy CONTROL.SQL to /oracle/<TARGET_SID>/sapreorg and init<SID>.ora to /oracle/<TARGET_SID>/dbs, and assign the ownership of the administrative database user and its group to them). See Table 21.
Table 21. List of files that must be transferred from source to target system
Make sure that you have not restored the source control file to the target database (if you have done so, delete it). Create three directories for the new control files in the sapdata1, sapdata2, and sapdata3 file systems (as user ora<target_sid>). The paths for these files were defined in the Oracle initialization file init<target_sid>.ora. For security reasons, you have to ensure that the control files are located on three different disks; in case you lose one disk with a control file, you still have two other copies of it.
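The directory creation could be scripted as follows. The subdirectory name cntrl is an assumption; use the exact locations named by the control_files parameter in init<target_sid>.ora, one per sapdata file system, ideally on separate disks:

```shell
# Sketch; run as user ora<target_sid> on the target node.
ORACLE_BASE="${ORACLE_BASE:-/oracle}"
for n in 1 2 3; do
    mkdir -p "$ORACLE_BASE/CPY/sapdata$n/cntrl" 2>/dev/null \
        || echo "could not create sapdata$n/cntrl (check permissions)"
done
```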
Now the database control files can be generated: execute the SQL script CONTROL.SQL in the Oracle Server Manager, alter the database to ARCHIVELOG mode, and open the database (resetting the logs).
palana:oracpy> svrmgrl
SVRMGR> connect internal
SVRMGR> startup nomount
SVRMGR> @/oracle/CPY/sapreorg/CONTROL.SQL
SVRMGR> alter database archivelog;
SVRMGR> alter database open resetlogs;
SVRMGR> exit
An image of the source database is now running as the instance TARGET_SID (CPY) of the target system. We recommend that an offline backup of this database be made as soon as possible. Before you restart R3SETUP, you must reset the passwords of the database users:
palana:oracpy> svrmgrl SVRMGR> connect system/<password_of_source_system> SVRMGR> alter user sapr3 identified by sap; SVRMGR> alter user system identified by manager; SVRMGR> alter user sys identified by change_on_install; SVRMGR> quit
At this point, the database is completely restored and ready for the continuation of the R/3 installation. For the next installation step, you can keep the database open; you do not need to stop it.
These are some of the tasks executed in this final part by R3SETUP:
Creation of the database user ops$<sid>adm (in our case study, ops$cpyadm)
Configuration of the Oracle listener files listener.ora and tnsnames.ora, and initialization of the listener (tnslsnr)
At the end of its run, the R3SETUP program will start the R/3 system. After the R/3 installation procedure, there are subsequent technical actions to carry out. For actions at the operating system and R/3 level, consult the SAP documentation R/3 Homogeneous System Copy and R/3 Installation on UNIX - Oracle Database; for our case study, see the product number in Table 20 on page 229. In the Oracle database, there are two actions:
Delete the user ops$<source_sid>adm (for our case study, ops$tsmadm)
Delete all entries from the SAP database statistics tables: DBSTATHORA, DBSTAIHORA, DBSTATIORA, and DBSTATTORA.
palana:oracpy> svrmgrl
SVRMGR> connect system/manager
SVRMGR> drop user ops$tsmadm cascade;
SVRMGR> delete sapr3.dbstathora;
SVRMGR> delete sapr3.dbstaihora;
SVRMGR> delete sapr3.dbstatiora;
SVRMGR> delete sapr3.dbstattora;
SVRMGR> commit;
SVRMGR> quit
Note:
For security reasons, you must change the passwords for the database users SAPR3, SYSTEM, and SYS.
Tivoli Data Protection for R/3 profile init<target_sid>.utl
Set BACKUPIDPREFIX to the new target system identifier, and set all settings for the new target system to SID CPY. You have to create another management class for this system on the Tivoli Storage Manager server; then the backup files of this system will not interfere with the source system. See Figure 133.
Figure 133. Parameters of source initTSM.utl file that is copied to target initCPY.utl file
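The separate management class mentioned above could be defined on the Tivoli Storage Manager server along the following lines. The domain, policy set, class, and storage pool names are assumptions, and the policy set must be activated for the change to take effect:

```
tsm> define mgmtclass standard standard mccpy
tsm> define copygroup standard standard mccpy standard type=backup destination=backuppool
tsm> validate policyset standard standard
tsm> activate policyset standard standard
```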
Tivoli Data Protection for SAP file init<SID>.bki
The file initTSM.bki transferred from the source system is no longer necessary, and you should remove it. The new file initCPY.bki, installed with Tivoli Data Protection for R/3, will be used instead from this point on.
Note:
As you have seen, this procedure opens the database with reset logs. You must make a backup of this new system; otherwise, you will not be able to recover it. We recommend that you perform an offline backup of the database and of all application (R/3 and Oracle) file systems of your environment.
3. When the Oracle data files should be redistributed to sapdata<n> directories different from the original paths, for performance reasons.
Here, we show the steps to restore the files into different directories and to reconfigure the CONTROL.SQL file. When you restore a file using the BACKINT/TSM File Manager, you can choose another destination path, since the original files are backed up with their absolute paths. In the example in Table 22, the original path does not exist in the target system; in this case, when we restore this file, we can assign a new path for it. But in the case of many files, it is better to create a symbolic link from the original directory structure pointing to the target directory structure. With an equivalent source directory structure, you do not have to specify another target path for the files: you restore the data files as you would on the source system, but the files will be restored physically into another directory. We show an example below of creating a symbolic link to a new destination directory:
The directory structure for Oracle data files in an R/3 instance is based on having only one data file per directory. When redefining a path using a symbolic link, it is necessary to ensure that every source path has an equivalent path in the target. Using our case study, Figure 134 shows an example of restoring a file into a different destination directory. In this figure, you can also see the changes in the CONTROL.SQL file that reflect the new directory for the data file.
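Such a link could be created as follows, using the example paths from Table 22 (case-study values; adapt them to your SIDs and sapdata layout):

```shell
# Sketch; run as root on the target node. The new destination directory is
# created first; the original source path then resolves to it, so backfm can
# restore with the unchanged source path.
NEWDIR=/oracle/CPY/sapdata501/clud_1
OLDDIR=/oracle/TSM/sapdata5/clud_1
if mkdir -p "$NEWDIR" "$(dirname "$OLDDIR")" 2>/dev/null; then
    [ -e "$OLDDIR" ] || ln -s "$NEWDIR" "$OLDDIR"
fi
```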
Figure 134. Restoring a data file into a different destination directory on target node palana (TDP for SAP R/3 and the TSM API retrieving from the TSM server and its tapes).
When you change the original path, the script that creates the control file (CONTROL.SQL) becomes inconsistent. You have to change this file; otherwise, you will not be able to recreate the new control file. It is very simple: edit the CONTROL.SQL file, find the line with the original path, and change the original path of the data file to the new path, as shown in Table 22.
Table 22. Example of changing the path for one data file in the CONTROL.SQL file
Original path: /oracle/TSM/sapdata5/clud_1/clud.data1
New path: /oracle/CPY/sapdata501/clud_1/clud.data1
In some cases, not only one data file path needs to be changed: all data files may need changes. In that case, the change procedure has to be repeated for every data file path in CONTROL.SQL. The destination paths of the data files in the SQL script CONTROL.SQL must be correct; otherwise, it is not possible to recreate the new Oracle control file. After the changes are made in CONTROL.SQL, the new control file can be created following the procedure in 9.2.5, Activate the target database on page 240. For this procedure, the main task is to ensure that all changed paths are correct.
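The repeated edit can be done in one pass with sed. A sketch, using the Table 22 example paths (case-study values); the result is written to a new file so the original script survives until the change is verified:

```shell
# Rewrite the old data file path to the new one throughout CONTROL.SQL.
if [ -f CONTROL.SQL ]; then
    sed 's|/oracle/TSM/sapdata5/clud_1|/oracle/CPY/sapdata501/clud_1|g' \
        CONTROL.SQL > CONTROL.SQL.new
fi
```

Review CONTROL.SQL.new before replacing CONTROL.SQL with it; when whole sapdata<n> trees move, one substitution per moved directory is needed.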
An SAP note describes the procedure of making changes in the R/3 application for corrections or improvements.
Note:
This procedure should not be executed with the sapdba recover tool. In Figure 135, we show an example of a Homogeneous System Copy using an online backup for our case study. In the steps of the system copy procedure, each step begins with the machine on which it will be executed: Source is the original system and Target is the destination system.
Figure 135. Example of system copy with online backup for the case study (source: saparch directory, sapdata<n> directories, data files, and control file cntrlTSM.dbf; tapes with the backups of the archived redo logs and the online backup of the data files; target: control file cntrlTSM.dbf, archived redo logs, online redo logs, data files, and the SAP R/3 and Oracle directories).
These are the steps to make a Homogeneous System Copy using an online backup:
1. Source: Make an online backup of the database (brbackup).
2. Target: Restore the source online backup files using BACKINT/TSM (backfm). All data files must be restored to their original paths.
Note:
You must have enough space in the target Oracle directories when restoring the Oracle data files.
3. Source: Stop your R/3 system and database (stopsap).
4. Source: In the Oracle database, using Server Manager (svrmgrl), switch the log files to archive all redo logs. The number of log switches depends on the number of redo log groups the Oracle database has. Ensure that you have backed up all archived redo logs written since the beginning of the online backup.
5. Target: Restore all archived redo logs written since the start of the online backup, using the Tivoli Data Protection for R/3 file manager (backfm).
Note:
For this database recovery, you must have, in the directory /oracle/<TARGET_SID>/saparch, all archived redo log files from the start of the online backup on the source system up to the log switch and shutdown.
6. Source: Copy all R/3 and Oracle directories and files to be transferred to the target system. See Table 23.
Table 23. Files and directories that should be copied from source to target
Oracle control files: /oracle/TSM/sapdata1/stdby/cntrlTSM.dbf, /oracle/TSM/sapdata2/stdby/cntrlTSM.dbf, /oracle/TSM/sapdata3/stdby/cntrlTSM.dbf
Oracle profile: /oracle/TSM/dbs/init<sid>.ora
Online and mirrored redo logs: /oracle/TSM/origlogA, /oracle/TSM/origlogB, /oracle/TSM/mirrlogA, /oracle/TSM/mirrlogB
Oracle directories: /oracle/TSM, /oracle/TSM/sapreorg
SAP directories: /sapmnt, /usr/sap/SID, /usr/sap/trans
Note:
In our case study we changed the host name of the target system. In a database server upgrade, it is better to keep the hostname and IP address of the source system: at this step, you can turn off the source system and rename the target system to the source hostname and IP address. With this procedure you will not need to change the name of the database server in the R/3 profiles, for example.

7. Target: Recover the database with Oracle Server Manager (svrmgrl) using the command ALTER DATABASE RECOVER AUTOMATIC DATABASE;
8. Target: You can now open the database with the NORESETLOGS option.
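Steps 7 and 8 could be entered in Server Manager roughly as follows. This is a hedged sketch: the CONNECT INTERNAL and STARTUP MOUNT lines are assumptions based on a typical Oracle 8 recovery session, not taken from the source, and the statements are printed rather than executed.

```shell
# Hypothetical sketch of the svrmgrl session for steps 7 and 8 on the
# target system. CONNECT INTERNAL and STARTUP MOUNT are assumed
# preliminaries; the script only prints the statements.
sql='CONNECT INTERNAL
STARTUP MOUNT
ALTER DATABASE RECOVER AUTOMATIC DATABASE;
ALTER DATABASE OPEN NORESETLOGS;'
printf '%s\n' "$sql"
```

In a real session you would paste these statements into svrmgrl interactively on the target database server.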
Note:
At this point, after opening the database, you can change the system identifier (SID) of the database. In this case, you do not need to execute the r3copy script on the source system, since you already have an exact copy of it; you can execute the r3copy script on the target system, removing the current control files and recreating them with the generated SQL script file CONTROL.SQL.

9. Target: Make an offline backup or, if that is not possible, an online backup, since you did not open the database with RESETLOGS.
10. Target: Start your new R/3 system (startsap).
11. Source: Make an offline backup. This backup is usable as a normal offline backup since you did not open the database with RESETLOGS. However, if you change the path of any data file and later need to recover it, you must recover it to the new path; otherwise the Oracle database server will not be able to find it.
Procedure 1: Standard Homogeneous System Copy and full reorganization
- Advantages: well documented; easy
- Disadvantages: slow

Procedure 2: Export source and import target
- Advantages: fast; tablespace SYSTEM reorganized
Both procedures will provide a complete reorganization of the R/3 data for tables and indexes.
1. Steps using standard Homogeneous System Copy and full reorganization
a. Make a standard Homogeneous System Copy as explained in 9.2, Implementation using Tivoli Storage Manager on page 232.
b. Using the SAP tool sapdba, make a full reorganization of all PSAP% tablespaces.
c. Make an offline backup of the target system. See Figure 136.
2. Steps using Source Export and Target Import
This procedure is fully based on the R/3 tool sapdba, and you should have some experience with this tool before executing the procedure. For more information, see the R/3 documentation BC SAP Database Administration: Oracle, found on the CD R/3 System Release 4.0B Documentation Print Files (Product Number 50031307) in the file /PDF_Bib/En/bcdboradba.pdf.
First, the steps on the source system:
a. In the sapdba tool, use option e (Export/import) and then option a (Export tables and indexes including data) to export all table data and the structure of the indexes. For a fast export and import, choose the parameters commit=no and parallel <> 1; to save space in the sapreorg directory, choose compress dump = yes. These parameters are also used by the import procedure, but they are already configured when all these files are transferred to the target machine.
b. Export the R/3 database structures: constraints, views, and sequences.
c. Build a report of all tablespaces in the database. You can use transaction DB02 in the SAP GUI to obtain this information. The report shows the space statistics for every tablespace assigned in the Oracle database server.

Then, the steps on the target system:
a. Create the database and all PSAP% tablespaces using the tablespace report from the source system. The target tablespaces must be at least the size of the used space in the source system.
b. Transfer the export dump files from the source sapreorg directory to the target sapreorg directory.
Since the SID has changed, you have to change some files: edit the files imp*.sh and change the Oracle environment variable from ORACLE_SID=<source_sid> to ORACLE_SID=<target_sid>.
c. Import the data; the indexes are rebuilt automatically.
d. Import the structures.
e. Make an offline backup.
If the import process fails for any reason, correct the error and restart it. If you have some experience with sapdba, you can edit the file restart.exd, found in the same directory as the dump files, and choose what should be executed again. You can then continue with the installation of the R/3 system.
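The SID change in the imp*.sh files can be scripted. The sketch below demonstrates the edit on a scratch copy; the SIDs (TSM as the source, C11 as the target) and the file name are examples only, and the real files live in the target sapreorg dump directory.

```shell
# Hypothetical sketch: rewrite ORACLE_SID in the generated imp*.sh
# scripts. Demonstrated on a scratch copy in a temporary directory;
# the SIDs TSM -> C11 are examples.
dir=$(mktemp -d)
printf 'ORACLE_SID=TSM\nexport ORACLE_SID\n' > "$dir/imp01.sh"
for f in "$dir"/imp*.sh; do
  sed 's/^ORACLE_SID=TSM$/ORACLE_SID=C11/' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
cat "$dir/imp01.sh"
```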
tools and the logical volume manager (LVM) in AIX. Depending on the characteristics of the storage subsystem, it is possible that what the operating system sees as a physical disk is actually a portion of the disk defined as a logical volume at the storage subsystem level. Figure 137 illustrates the physical-logical relationships required for a host system to access the data stored on the storage subsystem.
Figure 137. Physical-logical relationships between a storage subsystem and AIX: two 18 GB physical disks are divided into storage subsystem logical volumes (one of 8 GB, several of 4 GB), which the operating system sees as hdisk1 through hdisk6, grouped into volume groups and divided into AIX logical volumes.
Each storage subsystem physical disk in this example has a capacity of 18 GB. The total available physical disk capacity is divided into logical volumes, or units, of 8 GB or 4 GB each, which can be created over multiple physical disks. The connection from the host system to the storage subsystem is through a SCSI interface; therefore, each of these logical units is assigned a SCSI logical unit number (LUN) and a SCSI target ID. At the operating system level, these numbers are used to define a physical volume, or hdisk, through the AIX configuration manager (cfgmgr). Each of these hdisks is assigned a physical volume identifier (PVID). The hdisks are then grouped by the AIX Logical Volume Manager (LVM) and assigned to a volume group. The volume group is then partitioned into several AIX logical volumes (see section 2.2.2, RDBMS terminology on page 31 for a description of logical volumes and filesystems). The logical volumes can be spread over multiple hdisks. All files at the operating system level are stored in logical volumes, either as raw device files or filesystem files. Now that the relationships required for a host to access data on the storage subsystem have been established, a procedure for accessing the mirrored copy of the data is required.
Several configuration files are generated and maintained to map this relationship, which include those created by the storage subsystem configuration tools and the AIX operating system. Once the copy is made at the logical volume level of the storage subsystem (at the hdisk level of AIX) using the copy service tools, these configuration files are used to establish the relationship between the copy and the operating system. For example, when a storage subsystem logical volume is to be copied, both the source and target logical volumes are specified through a serial number assigned by the storage subsystem. The copy creates an identical copy of the source volume. Since the target volumes now have the same PVID, these volumes can be made accessible only on a different host system and maintain the original relationships. The configuration files can then be used to bring the target volume online, mount it, and so on, on the new host system.
Note:
When a storage subsystem logical volume is copied, the AIX PVID of the source volume is also copied to the target volume. The target volumes can be accessed either from the same host where the source volumes are being accessed, or from a different host. In order to access such a logical volume from AIX on the same host, the AIX PVIDs of all the copied volumes must be reset through the AIX chdev command before they are accessible. This procedure cannot be used if the AIX logical volumes on the copied volume have a filesystem defined at the AIX level, because the filesystem is currently being used to access the AIX logical volumes on the source volume. Section 10.2, Storage subsystem specific implementation on page 255, will describe the procedures typically used to establish these relationships, and highlight copy features peculiar to each storage subsystem.
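As a sketch, resetting the PVIDs of copied disks might look like the following dry run. The hdisk names are examples, the clear-then-reassign order is an assumption, and the `run` helper only prints each AIX command instead of executing it.

```shell
# Hypothetical dry run: clear and reassign the PVID of each copied
# hdisk so it can be used on the same host. hdisk names are examples;
# `run` prints the AIX commands instead of executing them.
run() { echo "$@"; }
for d in hdisk20 hdisk21; do
  run chdev -l "$d" -a pv=clear   # drop the copied PVID
  run chdev -l "$d" -a pv=yes     # assign a fresh PVID
done
```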
The 7133 SSA disk subsystem in a standalone configuration does not provide hardware RAID functions such as RAID 1 (hardware mirroring) or RAID 5. Therefore, a split mirror implementation using 7133 SSA disk subsystems can only be achieved by mirroring the data through software. On AIX, software mirroring is implemented through the AIX Logical Volume Manager (LVM). This section provides an overview of the LVM tools used to create a split mirror, followed by a set of lab-tested procedures for making a software split mirror copy.
10.2.1.1 AIX Split mirror implementation The set of operating system commands, library subroutines, and other tools that allow you to establish and control logical volume storage is called the Logical Volume Manager (LVM).
The Logical Volume Manager (LVM) controls disk resources by mapping data between a simpler, more flexible logical view of storage space and the actual physical disks. The LVM does this using a layer of device driver code that runs above the traditional disk device drivers. All of the physical volumes in a volume group are divided into physical partitions (PPs) of the same size. Within each volume group, one or more logical volumes (LVs) are defined. Each logical volume consists of one or more logical partitions (LPs), and each logical partition corresponds to at least one physical partition. A logical partition maps to one, two, or three physical partitions, depending on the number of instances of your data you want maintained. Specifying one instance means there is only one copy of the logical volume (the default); in this case, there is a direct mapping of one logical partition to one physical partition. Each instance, including the first, is termed a copy. This is the mechanism used to implement mirroring of data at the AIX logical volume level. An AIX logical volume can be mirrored either when it is created, with the mklv command, or after creation, through the mklvcopy command. The mklvcopy and rmlvcopy commands are used to increase or decrease the number of copies of an AIX logical volume. The next section describes the lab setup for the split mirroring implementation on the 7133 disk subsystem using these LVM commands.
10.2.1.2 Lab setup for split mirroring Our lab environment was described in section 4.4, Lab hardware and software overview on page 97. Figure 138 shows the storage layout for a split mirror setup in our lab.
Figure 138. Storage layout for the split mirror lab setup: capeverde and palana are attached through a Token Ring network to a 7133-B40 disk subsystem; hdisk5 through hdisk16 are available on capeverde, and the shared disks appear as hdisk13 through hdisk16 on capeverde and hdisk9 through hdisk12 on palana.
In the figure, the darkly shaded AIX hdisks, hdisk5 to hdisk16, are connected and available on the capeverde system. Of these hdisks, which are defined on a total of twelve physical disks, four physical disks (shown overlapped in Figure 138) are connected to both capeverde and palana. These physical disks appear as AIX hdisks hdisk13 to hdisk16 (darkly shaded) on capeverde and as hdisk9 to hdisk12 (lightly shaded) on palana. Since the eight hdisks (four on each system) all refer to the four shared physical volumes, they have the same PVIDs on both systems. We will be using these four disks to establish, split, and re-establish the third copy of the logical volumes. Figure 139 shows the volume group and logical volume layout as it exists after initial installation.
Figure 139. Volume group and logical volume layout after initial installation: copy 1 of logical volumes sapdata1 through sapdata6 resides on hdisk5 through hdisk8, copy 2 on hdisk9 through hdisk12; the shared disks are not yet assigned to a volume group.
The volume group ssavg is created on capeverde, and the AIX hdisks hdisk5 through hdisk12 are assigned to the volume group. Six logical volumes are defined, sapdata1 through sapdata6, striped across hdisk5 through hdisk8. These six logical volumes are also mirrored on hdisk9 through hdisk12 by specifying the number of copies as two during the creation of the logical volumes. The shared disks are not assigned to any volume group. We can now describe the procedures to perform the split mirror copy, divided by function as follows:
- Establish logical volume mirror
- Split logical volume mirror
- Re-establish logical volume mirror
The next sections describe the procedures for each of these functions. Scripts used to automate the steps described for these procedures are provided in Appendix A.2, Lab split mirror setup scripts and files on page 339.
10.2.1.3 Establish logical volume mirror This is a one-time preparatory step before splitting a logical volume mirror and involves creating a third copy of the logical volumes to be split. All subsequent operations will split and re-establish this copy and will be described in 10.2.1.4, Split logical volume mirror on page 260, and 10.2.1.5, Re-establish logical volume mirror on page 262.
On the R/3 database server capeverde: 1. The shared AIX hdisks must be assigned to the volume group containing the logical volumes to be copied. This is done with the command:
extendvg -f VolumeGroup [PhysicalVolumes]
The extendvg command adds the shared AIX hdisks specified by the PhysicalVolumes parameter to the volume group specified by the VolumeGroup parameter. 2. Mirror the AIX logical volumes with the command:
mklvcopy LogicalVolume 3 [PhysicalVolumes]
The mklvcopy command adds physical partitions to the logical partitions in the logical volume specified by the LogicalVolume parameter so that a total of three copies exist for each logical partition. In our case, since we already had two copies, one extra copy is added, and the additional physical partitions are stored on the AIX hdisks specified by the PhysicalVolumes parameter. Figure 140 shows the new logical volume layout after the third copy of the logical volumes is established.
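With the lab names from this chapter, the establish step might be scripted as the following dry run. The volume group and disk names (ssavg, sapdata1 through sapdata6, hdisk13 through hdisk16) follow the lab example, and the `run` helper prints each AIX command instead of executing it.

```shell
# Hypothetical dry run of establishing the third mirror copy on
# capeverde. Names follow the lab example; `run` prints the commands.
run() { echo "$@"; }
run extendvg -f ssavg hdisk13 hdisk14 hdisk15 hdisk16
for lv in sapdata1 sapdata2 sapdata3 sapdata4 sapdata5 sapdata6; do
  run mklvcopy "$lv" 3 hdisk13 hdisk14 hdisk15 hdisk16
done
```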
Figure 140. Volume group and logical volume layout after establishing the mirror
10.2.1.4 Split logical volume mirror Once the third mirror is established as described in 10.2.1.3, Establish logical volume mirror on page 258, the third mirror can be split using the set of procedures described in this section.
On the R/3 database server capeverde: 1. Make a list of all logical volumes in the volume groups to be split with the command:
lsvg -l VolumeGroup
The lsvg command lists all the logical volumes for the volume group specified by the VolumeGroup parameter. 2. For each of these logical volumes, create map files containing the mapping between the logical and physical partitions related to the third mirror of the logical volume. This third copy will be the split mirror. The map file will be used to create the logical volume on palana. The map files are created with the command:
lslv -m LogicalVolume > LogicalVolume.map
The lslv command with the -m option lists the physical partitions of each logical partition for all the copies of the logical volume identified by the LogicalVolume parameter. This output is stored in the file LogicalVolume.map. Refer to Figure 176 in Appendix A.2, Lab split mirror setup scripts and files on page 339 for the contents of this file. Since this file contains maps for all the copies, it needs to be filtered to create a map for just the third copy of the logical volume.
3. The third copyset of each logical volume is then removed from the logical volume. In the view of the Logical Volume Manager (LVM), the logical volume now has only two copysets. However, the data of the third copyset is not erased, so as long as no new logical volume is created on those disks, the data in the physical partitions remains. The third copy is removed with the command:
rmlvcopy LogicalVolume 2 [PhysicalVolumes]
The rmlvcopy command reduces the number of copies of the logical volume in the LogicalVolume parameter to 2. The PhysicalVolumes parameter ensures that only the copies residing on the AIX hdisks specified here are removed.
4. The volume group containing the logical volumes is now reduced by the disks defined for the split mirror copy. This is done in order to re-assign these hdisks on palana. The volume group is reduced with the command:
reducevg -f VolumeGroup [PhysicalVolumes]
The reducevg command reduces the volume group specified in the VolumeGroup parameter by the AIX hdisks specified in the PhysicalVolumes parameter.
5. Since the AIX hdisks used for the split mirror can be accessed from both capeverde and palana, they share the same PVIDs; however, the AIX hdisk names are different on each node. Hence, a map of the AIX hdisks to the PVIDs is created on capeverde, which will be used to create a volume group on palana. This file is created with the command:
lspv
The lspv command lists the hdisk name, physical volume identifier (PVID), and volume group information for all hdisks on a system. Since this output covers all the hdisks on capeverde, it needs to be filtered to create a map for just the hdisks to be split.
6. The files created in steps 1, 2, and 5 are now copied over to palana. These files are required in order to make the data accessible from AIX.
On the node running Tivoli Storage Manager (palana):
7. The split hdisks are already available at palana to create a volume group, under different hdisk names but with the same PVIDs. Therefore, the first step is to map the PVIDs obtained from capeverde to the hdisk names on palana.
8. A new volume group with the split hdisks is now created with the command:
mkvg -f -s PPSIZE -y VolumeGroup [PhysicalVolumes]
The mkvg command creates the volume group specified by the VolumeGroup parameter, with the physical partition size specified by the PPSIZE parameter, on the hdisks specified by the PhysicalVolumes parameter. The physical partition size of the volume group on palana should be the same as it was on capeverde. 9. Using the list of logical volumes and the map files copied over in step 6, the new logical volumes are created with the command:
mklv -m LogicalVolume.map -t jfs -y LogicalVolume VolumeGroup PP
The mklv command with the -m option uses the logical-to-physical partition map (the LogicalVolume.map file) for the logical volume in the LogicalVolume parameter. The VolumeGroup parameter specifies the volume group created in step 8, and the size of the logical volume in physical partitions is specified by the PP parameter. The size of the logical volumes on palana should be the same as it was on capeverde. 10. After the creation of the logical volumes in step 9, the filesystems associated with the logical volumes should be mounted using the same mount points they had on capeverde. The filesystems are mounted with the command:
mount -o log=/dev/JFSLOG /dev/LogicalVolume MountPoint
The mount command mounts the logical volume (specified by the LogicalVolume parameter) identified by its device name (/dev/LogicalVolume). The MountPoint parameter specifies the directory where the filesystem should be mounted. The -o log=/dev/JFSLOG option specifies the JFS log logical volume where subsequent filesystem operations are logged. The filesystem is mounted with these options because AIX does not know about the existence of the filesystems, which were created on capeverde. Figure 141 shows the volume group and logical volume layout after a split mirror.
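Steps 1 through 5 on capeverde can be sketched as follows. The awk filters run for real on sample lslv -m and lspv output, but the column layouts and the PVname:PPnum map-file format are assumptions to verify against your AIX release; the LVM commands themselves are only printed by the `run` helper, and all names follow the lab example.

```shell
# Hypothetical sketch of the split sequence on capeverde.
run() { echo "$@"; }   # prints AIX commands instead of executing them
work=$(mktemp -d)

# Step 2: keep only the third-copy map (PVname:PPnum) from lslv -m output
lslv_out='LP    PP1  PV1       PP2  PV2       PP3  PV3
0001  0010 hdisk5    0010 hdisk9    0010 hdisk13
0002  0011 hdisk5    0011 hdisk9    0011 hdisk13'
printf '%s\n' "$lslv_out" | awk 'NR>1 {print $7":"$6}' > "$work/sapdata1.map"

# Steps 3-4: remove the third copy and shrink the volume group
for lv in sapdata1 sapdata2 sapdata3 sapdata4 sapdata5 sapdata6; do
  run rmlvcopy "$lv" 2 hdisk13 hdisk14 hdisk15 hdisk16
done
run reducevg -f ssavg hdisk13 hdisk14 hdisk15 hdisk16

# Step 5: record the hdisk-to-PVID map for the split disks only
lspv_out='hdisk5   000919746edc0000 ssavg
hdisk13  000919746edcabe1 None
hdisk14  000919746edcb3a2 None'
printf '%s\n' "$lspv_out" | awk '$1 ~ /^hdisk1[3-6]$/ {print $1, $2}' > "$work/split.pvids"
```

The two map files would then be copied to palana (step 6) for the mkvg and mklv -m steps.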
Figure 141. Volume group and logical volume layout after a split mirror
10.2.1.5 Re-establish logical volume mirror If the split mirror copy of the logical volumes is required on a regular basis, for example, to backup the production data, this copy needs to be refreshed with the latest data on the primary mirrors. This is done by re-establishing the mirror on the primary node, which in our case is the capeverde system. Before re-establishing the mirror, the volume group and logical volumes on the shared hdisks should be unassigned from palana.
On the node running Tivoli Storage Manager (palana): 1. Unmount all filesystems in the volume groups to be refreshed with the command:
umount FileSystemName
The FileSystemName parameter specifies the name of the filesystem mount point for the filesystem to be unmounted. 2. Deactivate the volume group containing the logical volumes to be re-copied. This is done with the command:
varyoffvg VolumeGroup
The VolumeGroup parameter specifies the name of the volume group to be de-activated.
3. Remove the volume group definition from the operating system with the command:
exportvg VolumeGroup
The exportvg command removes the definition of the volume group specified by the VolumeGroup parameter from the system. Since all system knowledge of the volume group and its contents is removed, an exported volume group can no longer be accessed. The exportvg command does not modify any user data in the volume group. On the R/3 database server (capeverde): 1. The shared AIX hdisks must be re-assigned to the volume group containing the logical volumes to be copied. This is done with the command:
extendvg -f VolumeGroup [PhysicalVolumes]
The extendvg command adds the shared AIX hdisks specified by the PhysicalVolumes parameter to the volume group specified by the VolumeGroup parameter. 2. The AIX logical volumes can now be mirrored with the command:
mklvcopy LogicalVolume 3 [PhysicalVolumes]
The mklvcopy command adds physical partitions to the logical partitions in the logical volume specified by the LogicalVolume parameter so that a total of three copies exist for each logical partition. 3. As the third copy may not have the latest data, we need to refresh it. This is done by synchronizing each logical volume with the command:
syncvg -l LogicalVolume
The syncvg command synchronizes the physical partitions of the logical volume specified by the LogicalVolume parameter. The physical partitions that are synchronized are copies of the original physical partitions of the logical volume. The synchronization is started as a background task.
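The whole refresh cycle can be sketched as the following dry run. The volume group name splitvg on palana and the mount point are assumptions; the other names follow the lab example, and the `run` helper prints each AIX command instead of executing it.

```shell
# Hypothetical dry run of re-establishing the mirror.
run() { echo "$@"; }
# On palana: release the split copy (splitvg is an assumed VG name)
run umount /oracle/TSM/sapdata1
run varyoffvg splitvg
run exportvg splitvg
# On capeverde: re-attach the shared disks and refresh the third copy
run extendvg -f ssavg hdisk13 hdisk14 hdisk15 hdisk16
for lv in sapdata1 sapdata2 sapdata3 sapdata4 sapdata5 sapdata6; do
  run mklvcopy "$lv" 3 hdisk13 hdisk14 hdisk15 hdisk16
  run syncvg -l "$lv"
done
```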
We refer below to the ESS FlashCopy facility. At the time of writing, this function was planned for availability in 2000. The IBM Enterprise Storage Server is a general-purpose high capacity disk system consisting of disk drives attached to a storage server via high-speed serial interfaces. The ESS is powered by two four-way RISC SMP processors, with large cache, and a variety of host attachment options (UltraSCSI, ESCON, FICON, and Fibre Channel). The ESS uses redundant hardware and RAID 5 disk arrays to provide high availability and data protection. It provides for centralized storage operations management through a Web browser interface called the Storage Specialist. The ESS provides copy services through the FlashCopy, Peer-to-Peer Remote Copy, and Extended Remote Copy software.
This section provides an overview of the FlashCopy function used to create an instant copy of the production volumes, followed by a set of procedures for implementing FlashCopy in an AIX environment.
FlashCopy is only possible between disk volumes, or LUNs, which are identified by their ESS internal serial numbers. FlashCopy requires the target volume to be within the same logical subsystem as the source. However, the ESS does not require special devices, like the EMC BCV device, in order to make a copy. See Figure 142 for an illustration of FlashCopy.
Figure 142. Illustration of FlashCopy (source volume, target volume, write time)
When IBM FlashCopy is invoked, the ESS presents a virtual secondary volume to the host almost immediately, often known as a T0 (time-zero) copy. This is accomplished by creating a record of all the data to be copied at a point in time and performing the copy as a background task. The host application can therefore resume access to the data without waiting for the copy to finish.

At the time FlashCopy is started, the target volume is, in a sense, empty. The background copy task copies data from the source to the target, and through the record, the ESS keeps track of which data has been copied. If an application wants to read data from the target that has not yet been copied there, the data is read from the source; otherwise, the read is satisfied from the target volume. Before a track on the source that has not yet been copied can be updated, the track is copied to the target volume; subsequent reads of this track on the target are then satisfied from the target volume. The background copy proceeds until all of the data that resided on the production volume when FlashCopy was invoked has been copied to the secondary volume. The relationship between the source and target volume ends when the copy is complete.

FlashCopy can be used with the NOCOPY option, which suppresses the background copy operation. When NOCOPY is selected, FlashCopy only copies data to the secondary volume that is about to be changed on the production volume. This option is useful for tasks such as backing up a dataset from disk to tape, where it is sufficient to read the data from disk as it exists at a point in time and copy it to tape.
The relationship between the source and target volume when using FlashCopy with the NOCOPY option has to be manually terminated with a WITHDRAW option.
10.2.3.2 Invoking FlashCopy FlashCopy and Peer-to-Peer Remote Copy (PPRC) can be used on UNIX and Windows NT systems through the ESS Specialist Copy Services Web user interface. To enhance automation, a command line interface is provided to allow commands to be issued to the ESS Specialist Web interface from the operating system. Scripts are available to allow the user to issue FlashCopy commands and to execute pre-defined tasks.
The command line interface scripts will be available from the Web site http://www.storage.ibm.com/hardsoft/products/ess/ess.htm. The FlashCopy feature code must be installed on all ESSs that you will use. The scripts are based on CORBA and JAVA technology and have the following software requirements:
- AIX 4.3.3 with JAVA 1.1.7 or above
- Windows NT 4.0 with JAVA 1.1.7 or above
- HP-UX 10.20 with JAVA 1.1.7 or above (additional software may be required; check with the vendor)
- SUN Solaris 2.5.1 with JAVA 1.1.7 or above (additional software may be required; check with the vendor)
Using the command line interface scripts, you can:
- Establish a relationship between AIX hdisks and ESS logical volumes identified by serial number
- Invoke FlashCopy in parallel on multiple source and target volume pairs
- Query the status of the ESS volumes to ensure copy completion
The commands that are provided by the command line interface scripts are:
rsFlashCopy - Establishes or withdraws a FlashCopy between one or more pairs of disks.
rsQuery - Obtains the status of disks and paths. Disks can be specified either by host disk name or by ESS disk serial number.
rsExecuteTask - Executes one or more pre-defined tasks. Tasks can be defined using the ESS Specialist Copy Services Web interface to perform any FlashCopy or PPRC function, and can then be executed in parallel.
rsPrimeServer - Correlates host disk names with ESS disk serial numbers and sends this information to the Copy Services server.
rsList2105s - Lists the mapping between host disk names and ESS disk serial numbers.
The rsFlashCopy command accepts two options:
Establish - This will invoke FlashCopy between the source and target volumes. The FlashCopy completes almost instantaneously; you can choose whether to actually copy the data from source to target by specifying the NOCOPY option.
If NOCOPY is specified, there is no background data movement. The default option is for the data movement to complete as a background task.
Withdraw - This option is used to break the relationship between a source and target volume pair established through a FlashCopy invocation with the NOCOPY option.
The next sections will describe a typical set of procedures using the FlashCopy Establish option to make instant copies in the AIX environment. The set of procedures described here has not been tested and should be used only as a guide. These procedures are based on using the FlashCopy command line interface. Initial configuration of the ESS must be performed before a FlashCopy and includes the following:
- Defining an ESS as the copy services server to store all information
- Defining and allocating target logical volumes on the ESS
- Creating a list of the source and target volume pairs for the FlashCopy
We can now divide the procedures based on function into:
- Establishing FlashCopy
- Re-establishing FlashCopy
- Recovery using FlashCopy
10.2.3.3 Establish FlashCopy These steps are required when the FlashCopy is started for the first time on a list of source and target volume pairs.
On a new node: 1. Invoke FlashCopy: use the rsFlashCopy command for each disk. The sample command specifies hdisk1 as the source and hdisk2 as the target:
rsFlashCopy -s hdisk1 -t hdisk2 my.ess.server
Alternatively, pre-define the establish task for all the disks on the ESS Specialist Web interface, and then execute the task using the command:
rsExecuteTask my.ess.server taskname
2. Make the copied ESS logical volumes available as AIX hdisks with the command:
cfgmgr
3. Import and activate each volume group in the copied volume with the command:
importvg -y VolumeGroup PhysicalVolume
The PhysicalVolume parameter specifies only one physical volume to identify the volume group; any remaining physical volumes (those belonging to the same volume group) are found by the importvg command and included in the import. The -y VolumeGroup parameter specifies the name of the imported volume group.
4. Mount all filesystems in the imported volume groups with the command:
mount FileSystemName
The FileSystemName parameter specifies the name of the filesystem mount point for the filesystem to be mounted.
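The four steps above can be collected into a small shell wrapper. This sketch is untested; the volume group name splitvg, the mount point /sapdata, and the server name are illustrative, and RUN=echo (the default here) performs a dry run that only prints the commands. Set RUN to empty on a real system.

```shell
#!/bin/sh
# Sketch of the FlashCopy establish sequence described above.
# RUN=echo (default) prints the commands instead of executing them.
RUN=${RUN:-echo}

establish_flashcopy() {
    src=$1; tgt=$2; vg=$3; mnt=$4; server=$5
    $RUN rsFlashCopy -s "$src" -t "$tgt" "$server"  # 1. invoke FlashCopy per disk pair
    $RUN cfgmgr                                     # 2. make the target visible as an AIX hdisk
    $RUN importvg -y "$vg" "$tgt"                   # 3. import the copied volume group
    $RUN mount "$mnt"                               # 4. mount its filesystems
}

establish_flashcopy hdisk1 hdisk2 splitvg /sapdata my.ess.server
```

In a real setup the function would be called once per source/target pair from the prepared pair list.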
10.2.3.4 Re-establish a FlashCopy These steps are required to refresh a target volume, copied earlier using FlashCopy, with the current production volume data.
On a new node: 1. Unmount all filesystems in the volume groups to be refreshed, with the command:
umount FileSystemName
The FileSystemName parameter specifies the name of the filesystem mount point for the filesystem to be unmounted. 2. Unassign from the volume groups all AIX hdisks to be copied with the command:
reducevg VolumeGroup [PhysicalVolumes]
The VolumeGroup parameter specifies the name of the volume group, while the PhysicalVolumes parameter specifies all the AIX hdisks contained in VolumeGroup. This command will also remove the volume group. 3. Unconfigure the AIX hdisks to be refreshed with the command:
rmdev -l hdiskName
The rmdev command unconfigures or both unconfigures and undefines the device specified with the device logical name (the -l hdiskName flag). The default action unconfigures the device while retaining its device definition in the Customized Devices object class. 4. Invoke FlashCopy:
rsFlashCopy -s hdisk1 -t hdisk2 my.ess.server
5. Make the copied ESS logical volumes available as AIX hdisks with the command:
cfgmgr
6. Import and activate each volume group in the copy with the command:

importvg -y VolumeGroup PhysicalVolume

The PhysicalVolume parameter specifies only one physical volume to identify the volume group; any remaining physical volumes (those belonging to the same volume group) are found by the importvg command and included in the import. The -y VolumeGroup parameter specifies the name of the imported volume group.
7. Mount all filesystems in the imported volume groups with the command:
mount FileSystemName
The FileSystemName parameter specifies the name of the filesystem mount point for the filesystem to be mounted.
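The refresh sequence can likewise be sketched as a wrapper (untested; splitvg and /sapdata are illustrative, and RUN=echo gives a dry run that only prints the commands):

```shell
#!/bin/sh
# Sketch of the re-establish (refresh) sequence described above.
RUN=${RUN:-echo}   # RUN=echo (default) prints the commands instead of executing them

refresh_flashcopy() {
    src=$1; tgt=$2; vg=$3; mnt=$4; server=$5
    $RUN umount "$mnt"                              # 1. unmount the filesystems
    $RUN reducevg "$vg" "$tgt"                      # 2. remove the hdisk (and the VG)
    $RUN rmdev -l "$tgt"                            # 3. unconfigure the hdisk
    $RUN rsFlashCopy -s "$src" -t "$tgt" "$server"  # 4. take a new FlashCopy
    $RUN cfgmgr                                     # 5. reconfigure the hdisk
    $RUN importvg -y "$vg" "$tgt"                   # 6. re-import the volume group
    $RUN mount "$mnt"                               # 7. remount the filesystems
}

refresh_flashcopy hdisk1 hdisk2 splitvg /sapdata my.ess.server
```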
10.2.3.5 Recover using FlashCopy This is an option for recovery from user error, for example, if a user deleted all the AIX files containing application data. It involves restoring the data from an earlier FlashCopy on the target volume back to the source volume. The source volumes are then made addressable to the operating system, followed by application recovery.
On the primary node:
1. Define and allocate target logical volumes on the ESS. (In this case, the target volumes are actually the source volumes of the previous FlashCopy.)
2. Create a list of the source and target volume pairs for the FlashCopy. (The source and target volumes are reversed.)
3. Save the existing filesystem layout of the logical volumes of the volume groups to be restored.
4. Unmount all filesystems in the volume groups with the command:
umount FileSystemName
The FileSystemName parameter specifies the name of the filesystem mount point for the filesystem to be unmounted. 5. Unassign from the volume groups all AIX hdisks to be copied with the command:
reducevg VolumeGroup [PhysicalVolumes]
The VolumeGroup parameter specifies the name of the volume group, while the PhysicalVolumes parameter specifies all the AIX hdisks contained in VolumeGroup. This command will also remove the volume group. 6. Unconfigure the AIX hdisks to be refreshed with the command:
rmdev -l hdiskName
The rmdev command unconfigures or both unconfigures and undefines the device specified with the device logical name (the -l hdiskName flag). The default action unconfigures the device while retaining its device definition in the Customized Devices object class. 7. Invoke FlashCopy:
rsFlashCopy -s hdisk1 -t hdisk2 my.ess.server
8. Restore the existing filesystem layout saved in step 3. 9. Make the copied ESS logical volumes available as AIX hdisks with the command:
cfgmgr
10. Import and activate each volume group in the copy with the command:
importvg -y VolumeGroup PhysicalVolume
The PhysicalVolume parameter specifies only one physical volume to identify the volume group; any remaining physical volumes (those belonging to the
same volume group) are found by the importvg command and included in the import. The -y VolumeGroup parameter specifies the name of the imported volume group.
11. Mount all filesystems in the imported volume groups with the command:
mount FileSystemName
The FileSystemName parameter specifies the name of the filesystem mount point for the filesystem to be mounted. The application files are now available for application recovery.
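Step 2 above reverses the roles of the earlier pair list: the targets of the original FlashCopy become the sources of the recovery copy. Building the reversed list can be scripted; this sketch assumes a simple two-column pair file (the format is illustrative, not mandated by the FlashCopy interface):

```shell
# Build a reversed pair list for recovery: swap the source and target columns.
# The two-column "source target" file format used here is illustrative.
printf '%s\n' 'hdisk1 hdisk2' 'hdisk3 hdisk4' > /tmp/fc.pairs
awk '{ print $2, $1 }' /tmp/fc.pairs > /tmp/fc.pairs.recovery
cat /tmp/fc.pairs.recovery
```

The recovery FlashCopy is then invoked once per line of the reversed list.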
The Business Continuance Volume (BCV) is the Symmetrix device type used to establish the third mirror, which is addressable by the host. This allows a mirror of a production logical volume to be synchronized and then put to use. While the BCV is established, it is not separately addressable from the host/server; after a split, it can be addressed by another host and mirrored again.

EMC TimeFinder is a combination of host software and Symmetrix microcode. The host software enables customers to alter the state of BCVs and control the sequence in which these alterations occur. The host components also provide the ability to invoke user-written processes to control the usage of BCVs.

Managing the BCVs requires host management utilities to control the creation of a BCV pair with a standard volume and the synchronization of data between the volumes. The Symmetrix Manager Base Component provides a Graphical User Interface (GUI) to define BCV pairs. Once the BCV pairs are defined, the process of establishing, splitting, and re-establishing the BCV is done through command line utilities on UNIX. This process is illustrated in Figure 143.
R/3 Data Management Techniques Using Tivoli Storage Manager
Figure 143. EMC TimeFinder operations on BCVs
As shown in Figure 143, the procedures for the split mirror implementation can be divided based on function as follows:
- Establish a BCV
- Split a BCV
- Re-establish a BCV
- Restore a BCV

The set of procedures described in this section has not been tested and should be used only as a guide. These procedures can be automated through the use of AIX shell scripts incorporating EMC and AIX command line utilities. Implementation of the split mirror in an HACMP environment requires special considerations and is discussed in Appendix A.1, Split mirroring in a HACMP environment on page 335.

The EMC regular logical volumes and the BCV logical volumes will be referred to as the EMC regular disks and BCV disks respectively. To establish, split, re-establish, or restore a BCV pair, the EMC OSMGUI must be used to create a file containing the mirroring relationship between the EMC regular disks and the BCV disks. The content of this file (called SAP.BCV in our example) is shown in Figure 144 below:
# SymmWin BCV File
# Reg Dev Num   BCV Dev Num
1000Exxx        10053xxx
10002xxx        10055xxx
1000Bxxx        10056xxx
1000Dxxx        10057xxx
1000Axxx        10058xxx
10006xxx        10059xxx
10010xxx        1005Axxx
1000Cxxx        1005Cxxx
10009xxx        1005Dxxx
10003xxx        1005Exxx
10007xxx        1005Fxxx
10005xxx        10060xxx
1000Fxxx        10061xxx
10001xxx        10062xxx
1002Bxxx        10065xxx
10014xxx        10066xxx
10013xxx        10067xxx
Figure 144. SAP.BCV - EMC Regular physical disk to BCV physical disk map
This file is passed during a split, establish, re-establish, or restore operation using EMC's bcv command line utility.
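Before handing the map file to the bcv utility, a quick sanity check of its format can catch editing mistakes. A sketch (the sample pairs are shortened from Figure 144; the two-column, comment-prefixed layout is taken from that figure):

```shell
# Sanity-check a SAP.BCV-style map: skip comment lines and count only
# lines that have exactly two device numbers.
cat > /tmp/SAP.BCV <<'EOF'
# SymmWin BCV File
# Reg Dev Num   BCV Dev Num
1000Exxx 10053xxx
10002xxx 10055xxx
EOF

awk '!/^#/ && NF == 2 { n++ } END { print n " pairs" }' /tmp/SAP.BCV
```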
10.2.4.2 Establish BCV This procedure causes all the data on the EMC Regular disks on the primary node to be copied to its BCV disk pair. The BCVs cannot be accessed by the host operating system at this point.
1. Create the BCV logical volumes on the EMC Symmetrix using the EMC OSMGUI tool.
2. Create a list mapping the Regular disks to the BCV disks for the disks to be copied, using the EMC OSMGUI tool (see SAP.BCV).

On the new node:
3. Establish the BCV with the command:
bcv est -f SAP.BCV
10.2.4.3 Split BCV This procedure is used to split the BCV from its regular disk mirror. Once the BCV is split, it is made addressable by the host operating system.
2. Import and activate each volume group in the BCV copy with the command:
importvg -y VolumeGroup PhysicalVolume
The PhysicalVolume parameter specifies only one physical volume to identify the volume group; any remaining physical volumes (those belonging to the same
volume group) are found by the importvg command and included in the import. The -y VolumeGroup parameter specifies the name of the imported volume group.
10.2.4.4 Re-establish BCV This procedure is required if the data on the BCV must be kept current with the data on the primary mirror. It refreshes the BCV mirror with updates from the primary mirror and discards the updates made to the BCV while the two were split, allowing a quick resynchronization of the BCV pair. Before re-establishing the BCV, the configuration enabling the operating system to address the BCV must be deleted.
On the new node:
1. Unmount all filesystems in the BCV volume groups with the command:
umount FileSystemName
The FileSystemName parameter specifies the name of the filesystem mount point for the filesystem to be unmounted. 2. Deactivate all the BCV volume groups with the command:
varyoffvg VolumeGroup
The VolumeGroup parameter specifies the name of the volume group. 3. Delete all the BCV volume group definitions with the command:
exportvg VolumeGroup
The VolumeGroup parameter specifies the name of the volume group whose definition is removed from the system. 4. Re-establish the BCV with the command:
bcv est -f SAP.BCV
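The release-and-re-establish sequence above can be sketched as a wrapper (untested; the volume group name splitvg and mount point /sapdata are illustrative, and RUN=echo gives a dry run that only prints the commands):

```shell
#!/bin/sh
# Sketch of releasing a BCV from AIX and re-establishing the BCV pair.
RUN=${RUN:-echo}   # RUN=echo (default) prints the commands instead of executing them

reestablish_bcv() {
    vg=$1; mnt=$2; map=$3
    $RUN umount "$mnt"        # 1. unmount the filesystems in the BCV volume group
    $RUN varyoffvg "$vg"      # 2. deactivate the volume group
    $RUN exportvg "$vg"       # 3. delete the volume group definition
    $RUN bcv est -f "$map"    # 4. re-establish the BCV pairs from the map file
}

reestablish_bcv splitvg /sapdata SAP.BCV
```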
10.2.4.5 Restore from a BCV This is an option for recovery from user error, for example, if a user deleted all the AIX files containing application data. It involves restoring the data on the BCV (after a split) to the primary mirror. The primary mirrors are then made addressable to the operating system, followed by application recovery.
On the primary node:
1. Create a list mapping the BCV disks to the Regular disks for the disks to be restored (see SAP.BCV).
2. Save the existing filesystem layout of the logical volumes of the volume groups to be restored.
3. Unmount all filesystems in the volume groups with the command:
umount FileSystemName
The FileSystemName parameter specifies the name of the filesystem mount point for the filesystem to be unmounted. 4. Deactivate all the volume groups with the command:
varyoffvg VolumeGroup
The VolumeGroup parameter specifies the name of the volume group.
5. Delete all the volume group definitions with the command:

exportvg VolumeGroup

The VolumeGroup parameter specifies the name of the volume group whose definition is removed from the system. 6. Restore the BCV with the command:
bcv restore -f SAP.BCV
7. Import and activate each volume group in the Regular disk with the command:
importvg -y VolumeGroup PhysicalVolume
The PhysicalVolume parameter specifies only one physical volume to identify the volume group; any remaining physical volumes (those belonging to the same volume group) are found by the importvg command and included in the import. The -y VolumeGroup parameter specifies the name of the imported volume group. 8. Mount all filesystems in the imported volume groups with the command:
mount FileSystemName
The FileSystemName parameter specifies the name of the filesystem mount point for the filesystem to be mounted. The application files are now available for application recovery.
EMC TimeFinder
- Copy at the storage-subsystem level
- Copies EMC Symmetrix logical volumes
- Physical copy of data
- Copy using EMC command line utilities
- Requires mirroring using the special BCV device
- Re-synchronization required
- Fast
- Requires PVID reset only if the copy is accessed on the same system or if configured as an HACMP shared resource
The 7133 disk subsystem implementation of the split mirror is not recommended in a production environment, because as a software implementation it is much slower than a hardware implementation at the storage subsystem level. Also, the fact that the mirror disks are accessible on both systems makes it prone to errors and mismanagement. This setup was used in our lab environment to develop and test the procedures, which could then be applied to the storage subsystem copy implementations.

Comparing the two storage subsystem based copy services, EMC TimeFinder and IBM ESS FlashCopy: EMC TimeFinder requires a special device definition, the BCV, to make a copy. TimeFinder does a full volume copy (physical copy), while FlashCopy presents a virtual copy (T0 copy) almost instantaneously, copying the tracks in the background. This implies that with TimeFinder, the application must wait until all tracks have been copied. To overcome this problem, the copy is performed at an earlier time by establishing the BCV pair, and when the copy is required at a different node, the BCV is split. If the copy must be current, TimeFinder must re-synchronize the BCV by establishing the BCV pair once again. In this case, however, TimeFinder keeps track of changes and copies only the changes to the BCV. With FlashCopy, when the copy
must be updated, there is no need to re-synchronize, as another FlashCopy can be invoked for an instantaneous copy. However, the ESS FlashCopy copy service is currently limited in its usage because there are no command line utilities to automate the copy process transparently to the application.
backup. This enables the CCMS (Computing Center Management System) of the production R/3 system to monitor the backups. The BACKINT interface is also supported in the split mirror configuration. This configuration is transparent to BACKINT, and no additional aspects have to be taken into account. Therefore, Tivoli Data Protection for R/3 can be used with Tivoli Storage Manager for a split mirror backup.

The split mirror disk scenario only includes backups with brbackup. The brarchive backups (used to back up archive log files) are not included, as they do not place any significant load on the production host. As a standard, the backup of the archive log files is done on local or remote tape devices, on disk, or with Tivoli Data Protection for R/3. In the case of an offline backup, the database must be shut down for a few minutes for the splitting of the disks.
2. The Oracle database software must be installed on the backup host. The ORACLE_HOME directory structure must match the SAP standard installation; this is expected by brbackup. The ORACLE_HOME directory is /oracle/<SID>, where SID is the three-character system identifier of the R/3 database instance. Standard directories:
$ORACLE_HOME/dbs
$ORACLE_HOME/bin
$ORACLE_HOME/sapbackup
$ORACLE_HOME/sapreorg
$ORACLE_HOME/saparch
$ORACLE_HOME/sapcheck
$ORACLE_HOME/saptrace
3. The Oracle and brbackup profiles should be available in the $ORACLE_HOME/dbs directory.
4. The parameter backup_type in the profile init<SID>.sap must be set as follows:
backup_type = offline_split|online_split
The online_split option specifies an online backup of the mirror disks. The offline_split option specifies an offline backup of the mirror disks. If the brbackup command is invoked from the command line with one of these options, it overrides the value set in the parameter backup_type in the init<SID>.sap profile. 5. The parameter split_cmd in the profile init<SID>.sap must be set as follows:
Chapter 10. Split mirror implementation in R/3
split_cmd = <split_cmd> [$]

<split_cmd> is a program or shell script (with or without options) called by brbackup to split the mirror disks. At run time, brbackup replaces the optional character $ with the name of a file containing the names of all files to be backed up.
6. Optionally, the parameter resync_cmd in the profile init<SID>.sap can be set to a program or shell script called by brbackup to re-synchronize the split mirror disks.
7. To set up the link between the backup host and the production host, an instance string must be defined in the profile parameter primary_db. The instance string to the production database is required in order to connect the backup host with the production host. The connections (alias names) are defined in the configuration file tnsnames.ora or by the user, as part of the Oracle SQLNET configuration.
8. The directory /usr/sap/<SID>/SYS/exe/run must be available on the backup host and should at least contain the programs brbackup, brconnect, and brtools.
9. The path names of all databases, with access from both hosts, must be identical.
10. The directory $ORACLE_HOME/sapbackup has to be shared by the two nodes. In our example, we used an NFS mount.
11. The ownership of all the files must match (same administrative users with the same user and group IDs) on the primary and on the backup node.
12. The Oracle instance has to be accessible from the remote node with SYSOPER authorization. The SYSOPER authorization has to be granted to the system user, an Oracle password file $ORACLE_HOME/dbs/orapw<SID> has to be present containing the valid database password of the system user, and the parameter remote_login_passwordfile = exclusive has to be set in the parameter file init<SID>.ora.
10.4.2.2 Tivoli Data Protection for R/3 configuration The R/3 backup can be performed with just the brbackup tool. However, when using an external backup tool like Tivoli Storage Manager, the Tivoli Data Protection for R/3 must be installed and configured on the backup host.
1. In $ORACLE_HOME/dbs, the profile init<SID>.utl and the password file for Tivoli Data Protection for R/3, init<SID>.bki, have to be present.
2. The R/3 brtools and the Tivoli Data Protection executables backagent and backint have to be present in /sapmnt/<SID>/exe. The symbolic link /usr/sap/<SID>/SYS/exe/run must point to the directory of these executables.
3. The Tivoli Storage Manager server defined in the SERVER statement of the Tivoli Data Protection for R/3 profile init<SID>.utl must also be defined in the client's dsm.sys. (See 5.2, Server setup on page 104 and 6.1, Setup of Tivoli Data Protection for R/3 on page 117.)
Note:
The commands/scripts called by split_cmd or resync_cmd must meet the following rules:

If the call of the program/script was successful:
- An exit code of 0 is returned to brbackup.
- No messages to stdout or stderr are allowed during operation; only messages to stdout beginning with #INFO are acceptable.

If the call failed:
- An exit code greater than 0 is returned to brbackup.
- Error messages which describe the cause of the error in detail should be output.
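A split_cmd script obeying these rules can be sketched as follows. This is untested; SPLIT_MIRROR stands in for the real split utility (for example, a bcv or FlashCopy wrapper) and defaults here to true so the skeleton can be exercised.

```shell
#!/bin/sh
# Skeleton of a split_cmd script for brbackup: on success, exit 0 and emit
# only #INFO lines on stdout; on failure, exit >0 with diagnostics on stderr.
SPLIT_MIRROR=${SPLIT_MIRROR:-true}   # assumption: replace with the real split utility

split_cmd() {
    filelist=$1   # brbackup substitutes the optional $ with this file name
    if $SPLIT_MIRROR "$filelist" 2>/dev/null; then
        echo "#INFO mirror disks split for file list $filelist"
        return 0                     # success: exit code 0
    else
        echo "split of mirror disks failed for $filelist" >&2
        return 1                     # failure: exit code > 0
    fi
}

split_cmd /tmp/brbackup_files.lst
```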
On EMC Symmetrix: Refer to 10.2.4.2, Establish BCV on page 272. On IBM ESS:
The copy is made only when FlashCopy is called from the brbackup command in the next step. Now, brbackup can be started on the second node with the command:
brbackup -t online_split -c
The brbackup command does the following:
1. Contacts the R/3 database server defined in primary_db and receives a list of all the files to be backed up (specified by the $ option in the init<SID>.sap profile).
2. Sets the tablespaces to be backed up to the status BACKUP using the Oracle SQL statement:
ALTER TABLESPACE ... BEGIN BACKUP
3. Calls the script specified by the split_cmd option. split_cmd is intended to split the mirror and, if required, make the corresponding filesystems accessible. On the 7133 disk subsystem: This script performs the steps described in section 10.2.1.4, Split logical volume mirror on page 260. The scripts developed in our lab are described in Appendix A.2, Lab split mirror setup scripts and files on page 339. Figure 146 shows the storage status after the execution of the split_cmd script.
Figure 146. Storage status after the execution of the split_cmd script
On the IBM ESS: This script should perform the steps described in 10.2.3.3, Establish FlashCopy on page 267. On EMC Symmetrix: This script should perform the steps described in 10.2.4.3, Split BCV on page 272.
4. Resets the tablespaces to the normal status using the Oracle SQL statement:
ALTER TABLESPACE ... END BACKUP
5. Backs up the data files on the split disks to the specified media. On 7133 disk subsystem:
The brbackup command passes the data files through the Tivoli Data Protection for R/3 to the Tivoli Storage Manager for storage. 6. Optionally calls the script specified in the resync_cmd to re-synchronize the split disks with the original disks. On the 7133 disk subsystem: This script performs the procedures described in 10.2.1.5, Re-establish logical volume mirror on page 262. Figure 147 shows the status after the execution of the script.
Figure 147. Storage status after the execution of the resync_cmd script
On the IBM ESS: The re-synchronization is not required at this stage, because the ESS does not physically copy the data from the original disks to the target. For subsequent backups, a new FlashCopy is invoked. Therefore, the brbackup command uses the script specified in the split_cmd for subsequent backups. This script will now perform the procedures specified in 10.2.3.4, Re-establish a FlashCopy on page 268.
On EMC Symmetrix: This script performs the procedures described in 10.2.4.4, Re-establish BCV on page 273.
All steps (step 6 optionally) are carried out under the control of brbackup.
1. As the initial data image, a backup of the production database has to be taken. This may be either an online or an offline backup of the database.
2. The control file needed for the standby database is different from the control file of the production system, but it can be created directly from the production database.
3. To initiate a synchronous starting point for the two systems, the current online logs of the primary have to be archived.

Once these prerequisites are met on the primary database, the standby node has to be prepared so that it can restore the backup and access the standby control file and the archived logs.

4. As the initial set of data, the backup taken from the primary node has to be restored on the standby node.
5. The control file of the standby database has to be present at the location(s) defined in the database configuration file. Therefore, the control file for the standby database generated in step 2 has to be copied to that place.
6. All archived logs (since the start of the backup in step 1) have to be stored in the archive log directory of the standby.
7. The standby database is started on the standby node in the MOUNT STANDBY DATABASE mode; the standby database will not be opened. The archive logs which were generated on the primary database and transferred to the archive log directory of the standby will be used in the recovery process of the standby database.
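The Oracle statements behind steps 2, 3, and 7 can be sketched as follows (untested; the control file path is illustrative, and the commands are issued from svrmgrl as in the other examples in this chapter):

```
On the primary node (steps 2 and 3):
svrmgrl> alter database create standby controlfile as
         '/oracle/<SID>/sapbackup/cntrl<SID>_standby.dbf';
svrmgrl> alter system archive log current;

On the standby node (step 7):
svrmgrl> startup nomount
svrmgrl> alter database mount standby database;
svrmgrl> recover standby database;
```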
To ensure that the standby database stays in sync with the primary, all further archived log files (step 3) have to be accessed on the standby node (step 6) and applied during the database recovery (step 7).
Note:
Changing the physical structure of the primary database during its operation has an important impact on the standby database. In general, the recovery process of the standby database is not able to maintain these structural changes. For example, if a data file is added to a tablespace on the primary node, this change also generates redo information in the logs. While recovering from this archived log on the standby database, a new entry reflecting the data file is made in the control file of the standby. Since at that moment the new data file is not present on the standby machine, recovery will stop. To continue with recovery in such a case, the data file has to be either created on the standby database or copied from the primary to the standby database. Once the data file is present on the standby database, the recovery procedure can be restarted.

In the event of a failure of the primary, the following steps have to be taken into account to activate the standby: If possible, the last logs of the primary database should be archived. If the logs cannot be archived, then these transactions are lost, since the standby database can only recover transactions from archived logs. When the recovery to the latest available archived log is done, the standby database can be activated. After the activation, followed by a shutdown and a restart, the standby database becomes active.
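The activation described above maps to a short svrmgrl sequence on the standby node; a sketch (untested):

```
svrmgrl> recover standby database;
svrmgrl> alter database activate standby database;
svrmgrl> shutdown
svrmgrl> startup
```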
Note:
While activating the standby database, it is opened with the logs reset. This breaks all former relationships with the primary, so all further archive logs of the primary are incompatible. After resetting the logs, there is also no valid backup available for the standby database. As soon as possible, a backup of the new production (former standby) database should be performed.
Two processes of brarchive are running: one on the primary node and one on the standby node. The archive log directory of the standby node has to be accessible to the primary node, for example by mounting it via NFS. On the primary node, brarchive is used to save the archived logs of the primary database to this shared directory. For the brarchive command running on the primary node, the following options are used:
- Save each log file and delete it afterwards from the source directory (-sd)
- Save the archive logs to disk (-d disk) in the directory defined in the parameter archive_copy_dir in the profile init<SID>.sap; the archive_copy_dir points to the local mount point of the archive log directory of the standby database
- Verify that the log was written successfully to the destination (-w)
- Run continuously and wait for new archive logs to process (-f) without any interactive user confirmations (-c)

Since the destination archive_copy_dir is an NFS mount of the archive log filesystem on the standby node, the archived logs of the primary database are now available to the standby database (step 1).
Figure: brarchive running on the primary host (database open) and on the standby host; the standby invocation in the example is brarchive -m 90 -sd -f -c
On the standby node, there is another brarchive process running with the following options:
- Recover the standby database by applying new archived redo logs. In the example mentioned in the figure, an additional time delay is specified, so the redo logs are not applied immediately after they appear in the directory, but after a delay (-m <delay (minutes)>) (step 2).
- If the archived logs of the primary have been successfully applied to the standby database, save them (to the device defined in the init<SID>.sap profile) (step 3) and delete them afterwards from the archive log directory of the standby (-sd) (step 4).
- Run continuously and wait for new archived logs (-f) in unattended mode (-c).
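Putting the options together, the two invocations might look like this (a sketch; the 90-minute delay is only an example value):

```
# primary node: save each archived log to disk (the NFS-mounted standby
# archive directory), verify the write, then delete the source, running
# continuously and unattended
brarchive -sd -d disk -w -f -c

# standby node: apply the logs after a 90-minute delay, then save and
# delete them, running continuously and unattended
brarchive -m 90 -sd -f -c
```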
Note:
While applying logs that reflect structural changes to the primary database, the brarchive process on the standby node terminates. The database administrator is responsible for performing these changes manually on the standby database. Once the new structure is present on the standby database, the brarchive process (and therefore the recovery procedure) can be restarted.
11.1.2.2 brbackup Since the standby database contains a nearly current copy of the primary database, it can also be used as a source for providing the data files for the backup.
To back up the database, brbackup can be started on the standby node using the option -t offline_standby. This option instructs brbackup to connect to the primary database to retrieve a list of the data files which need to be backed up (step 1). Afterwards, the standby database is shut down, brbackup backs up the database files, and the standby database is returned to the state it was in before the backup. See Figure 150.
Figure 150. Offline backup of the standby database (brbackup -t offline_standby)
Figure 151. Setup/operation of the standby database using storage subsystem features.
During the recovery phase of the standby database, only archived logs of the primary can be applied. So, if it is not possible to archive the current online logs on the primary before the final recovery of the standby database, the transactions remaining in the online logs cannot be recovered. But if:
- The layout of the primary and the standby is absolutely the same,
- A valid copy of the online logs of the primary can be accessed by the standby node, and
- The control file and the init<SID>.ora of the primary node can be accessed,

then it is possible to recover to the last committed transaction in the online logs of the primary. (In principle, this scenario is comparable with the recovery scenario described in 9.3.2, System copy with online backup on page 245.) Figure 153 shows such a scenario.

To initiate this process:
- First, the standby database has to be stopped by a shutdown normal.
- An image/copy of the online logs has to be accessible by the standby node at the same location as referenced by the primary's control file.
- The image/copy of the control file(s) of the primary database has to be accessible by the standby node as referenced in the primary's init<SID>.ora.
- The init<SID>.ora has to be replaced by the one of the primary.

After these steps, the standby database can be started:
svrmgrl> startup mount svrmgrl> recover database svrmgrl> alter database open
Figure 153. Recovering the standby database using the online logs and control file of the primary
In such a scenario, the online logs (and the control file) of the primary may be:
- Copied from the primary database to the standby database (if possible after the crash of the primary)
- Accessed by acquiring their disks (formerly accessed by the primary) on the standby; in this case, the disks have to be shared between the two nodes
- Already present on the standby node by accessing a disk containing a valid mirror of them; the mirroring process may be based on storage subsystem features
subsystem feature. This copy can be backed up by the local Tivoli Storage Manager and may be applied by shipping the tapes of the database image and checking them in to the Tivoli Storage Manager on the second site. From the Tivoli Storage Manager on the second site, the database image can be restored to the target node for the standby database. Afterwards, the standby database can be established.
Shipping the archived logs to the standby
The archived logs are shipped to the second site at the storage subsystem level (using PPRC, SRDF, or equivalent). On the standby node they are not accessed directly as the archive log directory: a recovery procedure is established to copy the files from there to the archive log directory and apply them to the database after a given time delay. This time delay has to balance the prevention of logical errors on the one hand against minimizing the downtime needed to recover the standby database to the current timestamp in case of a failure of the primary database on the other.

Minimizing data loss
Since the online logs of the primary database can be transferred to the storage subsystem on the secondary side (using PPRC or SRDF), and therefore an image of the online logs can be made accessible to the standby database, it is possible to recover to the last transactions found in these logs. However, since it is absolutely necessary to ensure that the image of this data on the primary site and on the secondary site are identical, the process for ensuring this strict correctness may have an impact on the performance of writing the online logs, and therefore on the performance of the source database.

Backup of the database
The backup of the database is performed on the standby site as an offline backup of the standby database to the Tivoli Storage Manager on the second site. In the Tivoli Storage Manager, the storage pool is saved to a copy storage pool. The copy storage pool is checked out of the tape library and sent to the primary site. There it is checked into the library of the Tivoli Storage Manager.
Figure 154. Backup data center scenario (split-mirror of data files, active log files, and archive logs between two RAID 1/RAID 5 storage subsystems)
Takeover of the operation to the second site
When the standby database is activated as the new production database, it has to be ensured that a high performance network connection is also present between the R/3 application servers and the database. If this cannot be assured using the WAN communication between the two locations, then the R/3 application servers (including the central instance) are stopped on the primary site and restarted on the re-established servers on the secondary site. In such a case, however, it has to be assured:
That the network connection is sufficient for the load of the users.
That the users (and also other applications/tools) are able to communicate with the R/3 system using their unchanged configuration.

In general, after a takeover, the second site handles the production operation of the R/3 system. Therefore, the layout of the two sites is symmetric, so after the second site has taken over, the same environment and considerations apply as on the first site.
Revert-back
After the first site is back again (or the environment is reestablished at another site), the first site is prepared to take the role of the backup data center. Only in the case of a failure of the current production site is the production operation switched back to the first site.
Figure 155. Libelle DBMIRROR architecture (graphical/text console; TRD Archiver shipping the archive log directory to the standby database with a time delay, for example 6 hours; detection and maintenance of structural changes between the primary and standby database)
Tasks like the activation or the revert-back can also be done by using the central console. Figure 156 gives an impression of the graphical user interface of the Libelle DBMIRROR. In this example, the primary database has failed (so its field turns red), and the standby database is activated by selecting the red cross. After the decision that the standby database shall be activated, the Libelle DBMIRROR asks up to which point the standby database should be recovered; after applying these logs, the former standby database is available as the new production database.
Figure 156. Activation of the standby database
Figure 157. Overview of the standby lab file system structure (primary and standby databases each hold /oracle/TSM/sapdata1 through sapdata6, origlogA/B, and mirrlogA/B; /oracle/TSM/saparch on capeverde is NFS-mounted on palana as /oracle/TSM/arch_stdby)
11.2.1 Setup
For the setup of the standby database, different steps have to be performed both on the primary and on the standby node. Once the prerequisites on the standby node are met, a control file for the standby database is created on the primary node. Afterwards, the database data files and the new control file have to be transferred to the standby node. For accessing the data files on the standby node, the split-mirror procedure as described in 10.2.1, 7133 SSA disk subsystem on page 255 is used. After some additional configuration on the standby node, the standby database can be activated. In the lab environment, the SAP brtools are used on the standby node:
brarchive is used to keep the standby database in sync
brbackup is used to back up the database
Therefore, the brtools have to be transferred to and customized for the standby node. The necessary tasks are described in the following sections.
11.2.1.1 Prerequisites on the standby node
To run the standby database on the standby node, the ORACLE software and the environment are needed as installed and configured on the primary node. This includes that:
The ORACLE software is installed on the standby node at the same release level.
The user ora<sid> exists (using the same settings for user ID (UID), group ID (GID), and environment as ora<sid> on the primary).
Note:
In a typical R/3 ORACLE environment, the environment of the user ora<sid> is set by sourcing the files .dbenv_<hostname>.csh or .dbenv_<hostname>.sh and .sapenv_<hostname>.csh or .sapenv_<hostname>.sh. During the login process, the actual hostname is used to select the environment files to source. If the user directory is copied from the primary node to the standby node, then these files have to be renamed on the standby node according to the hostname of the standby. The user also has to have the same user limits as on the primary node.
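One way to verify the user limits could look as follows. This is only a sketch with assumed file names; in practice the second listing would be captured as ora<sid> on the primary node and copied over, while here it is captured locally to keep the example self-contained.

```shell
# Capture the limits of the ora<sid> user on the standby node.
ulimit -a > /tmp/limits.standby
# In practice this listing is produced on the primary node and copied over;
# it is captured locally here only to keep the sketch self-contained.
ulimit -a > /tmp/limits.primary
# Any difference reported here has to be fixed on the standby node.
diff /tmp/limits.standby /tmp/limits.primary && echo "user limits match"
```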
The file system structure of the ORACLE database and its data files is the same on both nodes. The SAP brtools and their environment are available on the standby node. Tivoli Data Protection for R/3 and the Tivoli Storage Manager client are installed on the standby node. For the following steps of setting up and operating the standby database, it is assumed that these prerequisites are met.
11.2.1.2 Preparations on the primary node
At first, an ORACLE password file is generated. Then the R/3 system and database are stopped on the primary node. The control file for the standby database is created, and then the disks containing the third copy set of the database files are split off. These disks will be accessed on the standby node later on. Afterwards, the R/3 system can be restarted.
297
Creation of an ORACLE password file
For some tasks, it is necessary to administer the primary database from the standby instance; for example, it has to be possible to start up and shut down the primary instance from the standby instance. Therefore, the following prerequisites must be met:
An Oracle password file $ORACLE_HOME/dbs/orapw<SID> has to be generated containing the valid database password of the system user.
# su - oratsm
capeverde:oratsm> cd dbs
capeverde:oratsm> orapwd file=orapwTSM password=manager entries=100
The parameter remote_login_passwordfile = exclusive has to be set in the parameter file init<SID>.ora .
capeverde:oratsm> cat initTSM.ora | grep remote_login_passwordfile
remote_login_passwordfile = exclusive
Generation of the control file Stop of both R/3 system and database: The user <sid>adm issues a stopsap command.
# su - tsmadm
capeverde:tsmadm> stopsap
To reflect the structure of the primary database to the standby database, a control file is created on the primary database. The user ora<sid> connects to ORACLE and issues the ALTER DATABASE CREATE STANDBY CONTROLFILE command.
# su - oratsm
capeverde:oratsm> svrmgrl
Oracle Server Manager Release 3.0.4.0.0 - Production
(c) Copyright 1997, Oracle Corporation. All Rights Reserved.
Oracle8 Enterprise Edition Release 8.0.4.4.0 - Production
PL/SQL Release 8.0.4.4.0 - Production
SVRMGR> connect internal
Connected.
SVRMGR> alter database create standby controlfile as '/oracle/TSM/standby.ctrl';
Statement processed.
Backup of the source database
Splitting the third mirror of the data files
The standby database needs an initial set of data files to operate on. One possibility would be to take a backup of the source database and to restore it on the standby node. In the lab example, the split-mirror procedure (as described in 10.2.1, 7133 SSA disk subsystem on page 255) is used to generate the data files for the standby machine.

Restart of database and R/3 system
# su - tsmadm
capeverde:tsmadm> startsap
11.2.1.3 Access of the database data on the standby node
In the lab environment, access to the initial data for the standby database is gained through the split-mirror procedure. (Another possibility would be to restore an existing backup of the source database on the target node.) In this example, only the data files had been generated by the procedure, so the logical volumes and file systems for the online and offline logs also have to be created.

11.2.1.4 Preparations on the standby node
Preparation for the standby database
Before the standby database can be activated on the standby node, it has to recognize the new control file. On the standby database, three copies of the control file are also stored in the data file directories. To differentiate them from a standard control file, new subdirectories are created for them.
Distribution of the control files into the data directories
As user oratsm, new subdirectories are created in the data directory trees and the control file is copied to them:
palana:oratsm> ksh
palana:oratsm> for i in 1 2 3; do
> mkdir /oracle/TSM/sapdata$i/stdby
> cp /oracle/TSM/standby.ctrl /oracle/TSM/sapdata$i/stdby/cntrlTSM.dbf
> done
Adaption of the init<SID>.ora The configuration file init<SID>.ora (located in /oracle/<SID>/dbs) has to reflect the paths of the control files. The setting of the variable control_files has to be adjusted to the actual paths:
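An adjusted entry could look as follows. This is only a sketch matching the subdirectories created above; the actual number and placement of the control file copies must be verified against the sapdata directories in use.

```
control_files = (/oracle/TSM/sapdata1/stdby/cntrlTSM.dbf,
                 /oracle/TSM/sapdata2/stdby/cntrlTSM.dbf,
                 /oracle/TSM/sapdata3/stdby/cntrlTSM.dbf)
```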
Mount of the standby database
After these preparations, the standby database can be mounted.
palana:oratsm> svrmgrl
SVRMGR> connect internal
Connected.
SVRMGR> startup nomount;
ORACLE instance started.
Total System Global Area   67154144
Fixed Size                    45280
Variable Size              28524544
Database Buffers           38248448
Redo Buffers                 335872
SVRMGR> alter database mount standby database exclusive;
Statement processed.
As the standby database is mounted, it is prepared to sync with the production one by recovering the archived logs of the primary. These logs have to be present in the archive directory of the standby database; they can be applied by issuing the RECOVER STANDBY DATABASE command in the server manager.
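Issuing the command could look like the following sketch (lab host and user names assumed; the archived logs must already be present in the standby archive directory):

```
# su - oratsm
palana:oratsm> svrmgrl
SVRMGR> connect internal
Connected.
SVRMGR> recover standby database;
```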
Preparation for the listener communication
The ORACLE database is accessed by the R/3 system via the SQL*Net configuration. In the case of the takeover to the standby, the R/3 system has to communicate with the standby database instead of the primary. Therefore, the TNS listener configuration also has to be available on the standby node. If the standby database is activated, then the ORACLE TNS listener has to be started on the standby node.
Adapting the listener.ora
The file /oracle/<SID>/network/admin/listener.ora has to be copied from the primary node to the same directory on the standby node. In the ADDRESS section, the entry for the HOST has to be modified to the hostname of the standby node.
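The modified ADDRESS entry on the standby node could then look as follows. This is a sketch of only the relevant fragment, not the complete file; the port and protocol values must be taken from the actual lab listener.ora.

```
LISTENER =
  (ADDRESS_LIST =
    (ADDRESS =
      (PROTOCOL = TCP)
      (HOST = palana)      # hostname of the standby node
      (PORT = 1527)
    )
  )
```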
Adapting the tnsnames.ora
The tnsnames.ora is the configuration file that SQL*Net clients use to resolve the communication path to the database. To communicate with the database via the TNS listener after a switchover to the standby database, it is also necessary to modify the /oracle/tnsnames.ora. (If this file is not shared, this has to be done on all application servers of the R/3 system.)
<oracle_sid> =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(Host = palana)(Port = 1521))
    (CONNECT_DATA = (SID = <oracle_sid>))
  )
TSM.WORLD =
  (DESCRIPTION =
    (SDU = 32768)
    (ADDRESS_LIST =
      (ADDRESS =
        (COMMUNITY = SAP.WORLD)
        (PROTOCOL = TCP)
        (HOST = palana)
        (PORT = 1527)
      )
    )
  )
Note:
The change of the configuration files listener.ora and tnsnames.ora could be avoided if the IP address (corresponding to the IP label in the HOST field) is taken over to the standby database in case of a takeover. In the lab implementation, for the purpose of easily exchanging the configuration for accessing the primary and standby database, the configuration files for both cases were created, reflecting the hostname in the configuration file name. The real configuration file is then built as a symbolic link pointing to one of the two (depending on the actual status).
capeverde:tsmadm> ls -l /oracle/TSM/network/admin/*.ora*
... dba      22 Nov 17 08:54 listener.ora -> listener.ora.capeverde
... dba     736 Oct 13 13:56 listener.ora.capeverde
... system   45 Nov 18 15:12 tnsnames.ora -> tnsnames.ora.capeverde
... dba     686 Nov 15 17:49 tnsnames.ora.capeverde
... dba     680 Nov 16 15:05 tnsnames.ora.palana
11.2.1.5 Setup of brarchive on the primary node
On the primary node, brarchive is used to ship the archived redo logs of the primary to the secondary node. For this purpose, the file system for the archive logs of the standby database is mounted on the primary via NFS. The directory of this mount point has to be reflected in the init<SID>.sap configuration file as archive_copy_dir.
#
# directory for archive log copies
# default: first value of the backup_root_dir parameter
# Oracle Parallel Server:
# archive_copy_dir = $SAPDATA_HOME/saparchglobal
archive_copy_dir = /oracle/TSM/arch_stdby
#
11.2.1.6 Setup of brarchive/brbackup on the standby node
The directory /usr/sap/<SID>/SYS/exe/run must be available on the standby node and should at least contain the programs brbackup, brconnect, and brtools.
On the standby node, the brbackup tool will be used to perform offline backups of the standby database. Since it accesses the primary database to get a list of the data files to back up, this contact information has to be present. In the init<SID>.sap of the standby node, the entry primary_db has to be set according to the listener configuration of the primary.
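In the lab configuration, the entry could look like the following one-line sketch (the value TSM.WORLD is the connect string of the primary as shown in the brbackup parameter listing later in this section; verify against the actual tnsnames.ora):

```
# connect string of the primary database, for brbackup on the standby node
primary_db = TSM.WORLD
```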
11.2.1.7 Setup of Tivoli Data Protection for R/3 on the standby node
Tivoli Data Protection for R/3 also has to be installed on the standby node; in its profile it will have parameter values identical to the primary. (They must be identical because, from the view of the Tivoli Storage Manager, it has to be transparent whether the backup was initiated from the primary or from the standby database.)
The steps to perform the installation are shown in detail in 6.1.2, Installation on page 118. The same considerations as mentioned in 9.2.4, Restore of source database backup on the target system on page 237 apply to the profile settings for Tivoli Data Protection for R/3 to ensure that a backup (suited for the primary database) will be done using the files of the standby node.
11.2.1.8 Preparation for the R/3 system
For the R/3 system, some preparation steps are required as well. After the activation of the standby database, the R/3 system must access the former standby database as the new production one. The hostname running the database is reflected in the SAPDBHOST variable; this entry has to be changed during the takeover scenario. Besides that, the R/3 system needs, for some transactions, access to the performance data of the database. Since after activation of the standby database the database runs on a different node, preparations have to be made.
Creation of a default profile for the takeover scenario
In the lab scenario, in the case of activating the standby database, a modified default profile shall be copied to the right location. These steps are prepared by saving the original profile and also modifying a copy of it to reflect the parameters after the takeover to the standby database.
# su - tsmadm
capeverde:tsmadm> cd /sapmnt/TSM/profile
capeverde:tsmadm> cp DEFAULT.PFL DEFAULT.capeverde
Afterwards, a modified copy of the DEFAULT.PFL (exchanging the SAPDBHOST) is saved as DEFAULT.palana.
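One way to prepare the modified copy could be a simple stream edit. This is a sketch: it assumes SAPDBHOST occupies a single line of the profile, and it creates a small sample profile here only to keep the example self-contained (in the lab the file lives in /sapmnt/TSM/profile).

```shell
# Sample profile standing in for /sapmnt/TSM/profile/DEFAULT.PFL.
printf 'SAPDBHOST = capeverde\nSAPSYSTEMNAME = TSM\n' > DEFAULT.PFL
# Replace the database host with the standby node for the takeover profile.
sed 's/^SAPDBHOST *=.*/SAPDBHOST = palana/' DEFAULT.PFL > DEFAULT.palana
cat DEFAULT.palana
```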
Preparing the standby system for exchanging performance data Since the R/3 system needs to contact the database server to get performance information out of it, this communication has also to be set up. On the standby node, two tools have to be installed. In the R/3 system, an RFC destination has to be created.
Figure 158 illustrates this scenario: on each participating node (either database or application server), there is a saposcol process running at the operating system level, which gathers operating system performance data and writes it into a shared memory segment. This shared memory segment is also accessible to R/3 via the R/3 work processes running at the operating system level. If the database server node has no R/3 processes of its own running, then this information has to be transferred to the R/3 system by other means. Beside the saposcol, an additional tool, rfcoscol, has to be present on the database server. The rfcoscol, which is invoked from the R/3 system by RFC, reads the shared memory information and transfers it to the R/3 system.
Figure 158. Gathering performance data with saposcol and rfcoscol (saposcol writes into shared memory on each node; on the database-only node, rfcoscol reads the shared memory and transfers the data to the SAP R/3 central instance via RFC)
Therefore, the following actions have to be performed. (The procedure and links to further information are also contained in SAP OSS note 20624.)
Preparation of saposcol and rfcoscol on the standby database
The executables of saposcol and rfcoscol have to be transferred from the primary node (which hosts the central instance and therefore has the R/3 software installed) to the corresponding directory /usr/sap/<SID>/SYS/exe/run on the standby node, applying the same permissions they have on the primary node.
# cd /usr/sap/TSM/SYS/exe/run
# rcp capeverde:/usr/sap/TSM/SYS/exe/run/saposcol saposcol
# rcp capeverde:/usr/sap/TSM/SYS/exe/run/rfcoscol rfcoscol
# chmod 4755 saposcol
# chown root:sapsys saposcol
# chmod 4755 rfcoscol
# chown root:sapsys rfcoscol
# ./saposcol
Starting collector (create new process)
#
To activate the rfcoscol program, a work process of the R/3 system contacts the R/3 gateway process. The R/3 gateway process issues a remote shell command to start rfcoscol on the foreign node. To be allowed to do that, a .rhosts file has to be present in the home directory of the <sid>adm user on the standby node. (Both users ora<sid> and <sid>adm had been created on the standby node with the same UID and GID as on the primary.)
# su - tsmadm
palana:tsmadm> ls -al .rhosts
-r--------   1 tsmadm   sapsys
palana:tsmadm> cat .rhosts
capeverde tsmadm
palana:tsmadm>
Creation of a SAPOSCOL RFC destination in the R/3 system
To call rfcoscol out of the R/3 system, an RFC destination has to be created in the R/3 system (by using transaction SM59); this RFC destination then has to be added to SAPOSCOL (by using transaction AL15).
The RFC destination is created similarly to the one created for CommonStore in 8.3.1.2, Creating an RFC destination on page 205, by entering transaction SM59. A new RFC destination SAPOSCOL_<hostname> (where <hostname> reflects the hostname of the server to contact) has to be created as shown in Figure 159 with connection type T (TCP/IP connection).
Figure 159. Creation of the RFC destination for the standby database server
The RFC destination has to be instructed to start the program /usr/sap/TSM/SYS/exe/run/rfcoscol on the target node (host of the standby database), as shown in Figure 160.
To instruct the R/3 system to use the RFC method to access the data of the standby database server, the created RFC destination has to be added. This is done using transaction AL15. The RFC destination has to be added as the SAPOSCOL destination by entering its name in the panel and adding it. See Figure 161.
11.2.2 Operation
If the preparation steps are all completed, then the standby node is able to take over the role of the primary database in case of a failure of the primary one. To keep the standby database data in sync, the archived logs have to be applied. Keeping a nearly current image of the production data also makes it possible to back up the standby database instead of the primary, thereby reducing load and/or downtime on the primary database.
11.2.2.1 Synchronization to the primary database The synchronization is done running brarchive on both nodes.
On the primary node, brarchive is started in unattended mode, waiting for new redo log files to process (-f -c), backing them up to disk (-d disk) (which in this case is the NFS mount of the standby node), verifying the write (-w), and deleting the files afterwards (-sd).
capeverde:oratsm> brarchive -sd -d disk -f -w -c
BR280I Time stamp 1999-11-18 14.18.06
BR008I Offline redo log processing for database instance: TSM
BR009I BRARCHIVE action ID: adbjmqkg
BR010I BRARCHIVE function ID: svd
BR011I 2 offline redo log files found for processing, total size 8.115 MB.
BR112I Files will not be compressed.
BR130I Backup device type: disk
BR106I Files will be saved on disk in directory: /oracle/TSM/arch_stdby
BR127I Fill volume option set - wait for offline redo log files possible.
BR136I Verify option set - double the work to do.
BR126I Unattended mode active - no operator confirmation required.
On the standby node, brarchive is started in unattended mode, waiting for new redo log files to process (-f -c), applying them to the standby database immediately (-m), and saving and deleting them afterwards (-sd).
palana:oratsm> brarchive -m -sd -f -c
BR280I Time stamp 1999-11-18 14.23.32
BR008I Offline redo log processing for database instance: TSM
BR009I BRARCHIVE action ID: adbjmqwu
BR010I BRARCHIVE function ID: svd
BR011I 2 offline redo log files found for processing, total size 2.148 MB.
BR130I Backup device type: util_file
BR109I Files will be saved by backup utility at file level.
BR127I Fill volume option set - wait for offline redo log files possible.
BR141I Apply offline redo log option with delay 0 minutes active.
BR126I Unattended mode active - no operator confirmation required.
BR336I Applying offline redo log file /oracle/TSM/saparch/TSMarch1_19.dbf ...
BR337I Offline redo log file /oracle/TSM/saparch/TSMarch1_19.dbf applied successfully.
BR336I Applying offline redo log file /oracle/TSM/saparch/TSMarch1_20.dbf ...
BR337I Offline redo log file /oracle/TSM/saparch/TSMarch1_20.dbf applied successfully.
BR280I Time stamp 1999-11-18 14.23.41
BR229I Calling backup utility...
BR278I Command output of '/usr/sap/TSM/SYS/exe/run/backint -u TSM -f backup -i /oracle/TSM/sapbackup/.adbjmqwu.lst -t file -p /oracle/TSM/dbs/initTSM.utl -c':
Note:
If any archived logs are applied in which a structural change is recorded, the brarchive process terminates with one of the following ORACLE errors:
ORA-01670
ORA-01157
ORA-01110
In such a case, the database administrator has to apply the new structure to the standby database manually; afterwards, brarchive can be started again.
11.2.2.2 Backup of the standby database The database can be backed up by running brbackup on the standby node with the type offline_standby.
palana:oratsm> brbackup -t offline_split -c
BR051I BRBACKUP 4.0B (39)
BR169I Value 'util_file_online' of parameter/option backup_dev_type/-t ignored for 'offline_standby' - 'util_file' assumed.
BR055I Start of database backup: bdbjgtvn.aff 1999-11-17 09.39.07
BR101I Parameters

oracle_sid        TSM
oracle_home       /oracle/TSM
oracle_profile    /oracle/TSM/dbs/initTSM.ora
sapdata_home      /oracle/TSM
sap_profile       /oracle/TSM/dbs/initTSM.sap
backup_mode       ALL
backup_type       offline_standby
backup_dev_type   util_file
util_par_file     /oracle/TSM/dbs/initTSM.utl
primary_db        TSM.WORLD
11.2.3 Takeover
To activate the standby database, the following steps have to be performed:
11.2.3.1 Stop of the R/3 system and the TNS listener
The R/3 system has to be stopped by the user <sid>adm; afterwards, the continuously running brarchive process and the TNS listener have to be stopped on the primary node.
# su - tsmadm
capeverde:tsmadm> stopsap
capeverde:tsmadm> exit
# su - oratsm
capeverde:oratsm> brarchive -f stop
BR002I BRARCHIVE 4.0B (39)
BR006I Start of offline redo log processing: adbjnpes.fst 1999-11-18 18.57.18
BR026I BRARCHIVE started with option -f will be stopped now.
BR280I Time stamp 1999-11-18 18.57.18
BR256I Please enter 'cont' to continue, 'stop' to cancel the program:
cont
BR027E BRARCHIVE started with option -f has been stopped.
BR007I End of offline redo log processing: adbjnpes.fst 1999-11-18 18.57.49
BR003I BRARCHIVE terminated successfully.
capeverde:oratsm> lsnrctl stop
LSNRCTL for IBM/AIX RISC System/6000: Version 8.0.4.0.0 - Production on 18-NOV-99 18:59:12
(c) Copyright 1997 Oracle Corporation. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=IPC)(KEY=TSM.WORLD))
The command completed successfully
11.2.3.2 Activation of the standby database on the standby node
If necessary, the brarchive process on the standby node has to be stopped first, using the same stop procedure as described before.
Now the standby database can be activated. After the restart, the former standby database is available for operation.
# su - oratsm
palana:oratsm> svrmgrl
SVRMGR> connect internal
SVRMGR> ALTER DATABASE ACTIVATE STANDBY DATABASE;
SVRMGR> SHUTDOWN;
SVRMGR> STARTUP;
SVRMGR> quit
11.2.3.3 Preparation for SQL*Net communication
The standby node has to be prepared to start the TNS listener. As mentioned earlier, the configuration files are already prepared. Now, the configuration file for the listener is linked to the file containing the standby's configuration data. The listener is started afterwards.
# su - oratsm
palana:oratsm> cd /oracle/TSM/network/admin
palana:oratsm> ln -sf tnsnames.ora.palana tnsnames.ora
palana:oratsm> ln -sf listener.ora.palana listener.ora
palana:oratsm> lsnrctl start
LSNRCTL for IBM/AIX RISC System/6000: Version 8.0.4.0.0 - Production on 18-NOV-99 19:19:49
(c) Copyright 1997 Oracle Corporation. All rights reserved.
Starting /oracle/TSM/bin/tnslsnr: please wait...
TNSLSNR for IBM/AIX RISC System/6000: Version 8.0.4.0.0 - Production
System parameter file is /oracle/TSM/network/admin/listener.ora
Log messages written to /oracle/TSM/network/log/listener.log
Listening on: (ADDRESS=(PROTOCOL=ipc)(DEV=6)(KEY=TSM.WORLD))
Listening on: (ADDRESS=(PROTOCOL=ipc)(DEV=10)(KEY=TSM))
Listening on: (ADDRESS=(PROTOCOL=tcp)(DEV=11)(HOST=9.1.150.113)(PORT=1527))
Connecting to (ADDRESS=(PROTOCOL=IPC)(KEY=TSM.WORLD))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for IBM/AIX RISC System/6000: Version 8.0.4.0.0 - Production
Start Date                18-NOV-99 19:20:07
Uptime                    0 days 0 hr. 0 min. 25 sec
Trace Level               off
Security                  OFF
SNMP                      ON
Listener Parameter File   /oracle/TSM/network/admin/listener.ora
Listener Log File         /oracle/TSM/network/log/listener.log
Services Summary...
  TSM has 1 service handler(s)
The command completed successfully
11.2.3.4 Exchange of configuration to access the standby database
After the standby database is activated, the primary node only runs the central instance, and no longer the database server. Therefore, the R/3 system has to access the former standby database as the new production database instead. Configuration files of both R/3 and ORACLE (the SQL*Net client) have to be adapted. Since these files were prepared earlier, they now only have to be linked or copied.
Exchange of tnsnames.ora
The tnsnames.ora (located in /oracle/<SID>/network/admin) provides a kind of name resolution: the connection to the database is made by resolving the instance string to the IP label and TCP/IP port of the listener running on the database node.
The new configuration files were already prepared; now the link is exchanged to point to the configuration for accessing the standby database.
Modification of the SAPDBHOST in the R/3 default profile
The default profile of the R/3 system contains settings that are valid for all application servers of the entire system. One item is SAPDBHOST, which reflects the database server that is contacted by the R/3 work processes. This value has to be changed to the standby node before restarting the R/3 system.
# su - tsmadm
capeverde:tsmadm> cdpro
capeverde:tsmadm> cp DEFAULT.palana DEFAULT.PFL
11.2.3.5 Restart of the R/3 system Now, all preparations are done to restart the R/3 system. Since the database server is located on the standby host (and is already running), only the R/3 system is started.
# su - tsmadm
capeverde:tsmadm> startsap R3
The R/3 system is active again afterwards, and now uses the standby node as the production database server.

Since the standby database on the standby node was activated with a reset of the logs, the relationship between primary and standby is broken, and there is no backup available to recover the new production database. As soon as possible, a backup of the new production database should therefore be taken on the standby node. Since all tools and parameter settings are available there, an online backup (brbackup -t online) followed by a backup of the archived logs (brarchive) should be done.

The procedure to switch to the standby database can be automated by the use of scripts performing the actions mentioned above. As an example, Appendix D, Sample script for activating the standby database on page 355 describes the functionality of a sample script used in the lab environment to automate the takeover to the standby database. However, there is no error handling built into this example script, so for handling a real environment it would need to be expanded.
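The takeover sequence above could be wrapped into a script along the following lines. This is only a sketch, not the Appendix D script itself: host and user names are the lab ones, error handling is omitted as in the lab example, and with DRY_RUN=1 (the default here) the script only prints the plan instead of executing it.

```shell
#!/bin/sh
# Takeover sketch: stop the primary side, activate the standby database,
# switch the SQL*Net and R/3 configuration, restart R/3.
DRY_RUN=${DRY_RUN:-1}    # default: only print the plan
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

# on the primary node (capeverde)
run su - tsmadm -c "stopsap"              # stop the R/3 system
run su - oratsm -c "brarchive -f stop"    # stop continuous log shipping
run su - oratsm -c "lsnrctl stop"         # stop the TNS listener

# on the standby node (palana)
run su - oratsm -c "echo 'ALTER DATABASE ACTIVATE STANDBY DATABASE;' | svrmgrl"
run ln -sf listener.ora.palana /oracle/TSM/network/admin/listener.ora
run ln -sf tnsnames.ora.palana /oracle/TSM/network/admin/tnsnames.ora
run su - oratsm -c "lsnrctl start"

# point R/3 to the new database server and restart
run cp /sapmnt/TSM/profile/DEFAULT.palana /sapmnt/TSM/profile/DEFAULT.PFL
run su - tsmadm -c "startsap R3"
```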
Tivoli Framework
The Tivoli Framework is the backbone of the Tivoli solution and the basis for all Tivoli systems management applications. The Tivoli Framework provides the basic systems management services, such as communications, presentation, and security, that all the Tivoli systems management applications use, thereby ensuring consistency and integration. All Tivoli systems management tasks, regardless of the application or component to be managed, are performed using the Tivoli Desktop, which provides a user interface that is consistent throughout the Tivoli management applications.
Tivoli Distributed Monitoring
Tivoli Distributed Monitoring is the Tivoli application for monitoring systems and applications. It is tightly integrated with the Tivoli Framework and provides monitoring capabilities for a wide range of systems and components. The strength of Distributed Monitoring is that monitoring collections for components can easily be added, thus allowing the monitoring of any kind of component.
Tivoli Netview
Tivoli Netview is Tivoli's network management solution, focused on managing IP-based networks. Netview displays the nodes in the network on a map representing the network topology and the status of the network nodes. When something happens in the network, Netview generates Simple Network Management Protocol (SNMP) traps that are displayed in a central
event window and that can trigger actions or correlations. In that regard, Netview is similar to the Tivoli Enterprise Console. However, Netview is exclusively focused on processing SNMP events, usually events related to the network. Netview alerts can be forwarded to the Tivoli Enterprise Console by the application of simple filter rules. Netview can also receive events created by external sources, such as the servers it is monitoring. These are received in the form of SNMP traps formatted for display in the Netview event window.
Tivoli Enterprise Console
The Tivoli Enterprise Console provides central event display and correlation for the enterprise, regardless of the source of the events. Unlike traditional SNMP managers, the Enterprise Console uses event adapters that can convert any kind of event stream into Enterprise Console events, which can then be processed by the Enterprise Console. Event adapters are available for a wide range of systems and applications. The major strength of the Enterprise Console is that events from any kind of system can be displayed and, more importantly, correlated in one place. This allows, for example, correlating a network event that comes from Netview with an application event coming from R/3 and triggering an action as a result of the correlation.
Tivoli Software Distribution
Tivoli Software Distribution provides a simple and reliable service to distribute software in the enterprise across platforms and networks. Software distribution has such features as fan-out and bandwidth optimization. Software is grouped in file packages that software distribution can then automatically distribute to the desired targets. Many applications require distribution of components or data across the network. This function can be provided by software distribution. For example, the Tivoli Manager for R/3 provides utilities that assist in the creation of file packages. This means that the SAPGUI component can be automatically deployed to a large number of presentation clients.
Tivoli Workload Scheduler (Maestro)
Tivoli Workload Scheduler, also known as Maestro, is the Tivoli product for enterprise-wide job scheduling. Maestro is a full-function scheduling application that is available on several platforms; currently, these include all flavors of UNIX (for example, AIX and HP-UX) as well as Windows 95, 98, and NT. It can be integrated with Tivoli using the Tivoli Plus for Maestro product, which allows Maestro to be managed from the Tivoli Desktop. Tivoli Maestro for R/3: Tivoli Maestro has been certified by SAP with the Business Application Programming Interface (BAPI) for scheduling R/3. It enables batch job coordination between R/3 and other environments.
Tivoli Database Management Products
The Tivoli Database Management products allow for the seamless management of RDBMS components with Tivoli. Similar to the Tivoli Manager for R/3, they use the Tivoli Framework and core applications to manage a particular type of application, in this case, RDBMS servers. The products include:
- Tivoli Manager for DB2
- Tivoli Manager for Oracle
- Tivoli Manager for Sybase

Chapter 12. Tivoli integration of R/3 data management
315
Tivoli Global Enterprise Manager
Tivoli Global Enterprise Manager is Tivoli's solution for managing applications and systems from a business perspective. Once an application is instrumented for Global Enterprise Manager, Tivoli allows management of this component in the wider context of a business system.
Tivoli User Administration
Tivoli User Administration extends the capabilities of the Tivoli environment to allow management of user accounts on UNIX, Windows NT, and Novell NetWare platforms. Additionally, and of most relevance to this book, you can also manage R/3 user accounts, all from a single location. Tivoli User Administration provides your network computing environment with the following features:
- Centralized and GUI-based control of administration tasks
- Consistent administrative policy definition
- Automation of repetitive administration tasks
- Parallel operations performed on many users and systems
- Delegation of administrative tasks to other administrators
- Configuration error reduction via a profile-based methodology
- Single-action user management to synchronize logins and passwords
Tivoli Global Sign-On
Global Sign-On provides a secure and easy-to-use solution that grants users access to the computing resources they are authorized to use with just one logon. Global Sign-On is primarily designed for large enterprises consisting of multiple applications and systems within heterogeneous, distributed computing environments. Global Sign-On removes the need for end users to manage multiple user IDs and passwords.
Tivoli Security Management (Lockdown Module)
Tivoli Security Management provides maximum flexibility along with improvements in administrative reliability and efficiency. It achieves this by abstracting from the security model of the individual platforms: it still manages the platform-specific security data, but it achieves cross-platform security modelling through role-based security. There are significant benefits to this form of abstraction, but it makes the implementation of the security module more difficult. An ideal Tivoli Security Management implementation design would include the identification of all job tasks in an organization, as well as the IT resource access requirements needed to fulfill each of those tasks. This data would then form the basis of the Tivoli Security Management role and resource data, and would require that all personnel also be defined in Tivoli Security Management groups. In a large organization, this can take a long time.
Tivoli Security Management was the first security product designed from the ground up to manage access control in a consistent fashion in a distributed environment using a role-based security model. With role-based security, it is possible to determine what resources people have access to based on the job tasks, or roles, that they need to perform. In Tivoli Security Management, these resources can be of different types, including files, printers, programs, or TCP services. They can also reside on different platform types, such as Windows NT, different flavors of UNIX, and OS/390 systems protected by the OS/390 Security Server Resource Access Control Facility (RACF).

Once all the resources that are required to complete a task have been identified, they can all be listed in a role and the relevant access rights given. Tivoli security groups can then be formed around a job title, and those groups can be given all the roles they need for that position. Once it is configured, an administrator does not have to be concerned with granting a user access to resources of many different types on many different systems. Instead, the administrator adds the user to the Tivoli security group, and all the access rights are granted through the role relationships during the next security profile distribution. The time saved in administration once Tivoli Security Management is in place can be significant.
Tivoli Storage Manager
Tivoli Storage Manager (TSM) is Tivoli's enterprise backup/restore and archive/retrieve solution, available on a wide range of platforms. TSM provides integration with several databases and applications, such as DB2, Oracle, Lotus Notes, R/3, and so forth.
Tivoli Plus for Tivoli Storage Manager provides the interface to the Tivoli Management Environment. This interface is explained in this book in 12.3, Tivoli Storage Manager integration into Tivoli Framework on page 322.
Tivoli Output Manager
Tivoli Output Manager, formerly known as Destiny, is the Tivoli product for enterprise-wide output control. The output environment is an ever-changing and diverse environment containing different printing devices (PostScript, PCL, encapsulated PostScript, plotters, line printers), different printer cartridge fonts in printing devices, facsimile machines, Web servers, different mail gateways (X.400, PROFS, cc:Mail, Lotus Notes, Microsoft Exchange, Microsoft Mail, Microsoft Outlook, SMTP), and global differences in paper sizes across distribution centers. Companies are starting to look at enterprise output managers for coordination, routing paths, delivery, and, above all, security of documents. Tivoli Output Manager is positioned to do this with an easy-to-use interface and a rule engine that delivers documents reliably across the enterprise. Enterprise applications, such as R/3, rely on the output environment to deliver critical daily, weekly, and month-end reports to a single end user or to groups of management teams. It is the responsibility of the enterprise output manager to orchestrate and deliver these reports according to the business rules defined by the process engineers. Tivoli Output Manager provides the following:
- Centralized output management
- Controlled access to output resources
- Routed output resources
Tivoli Decision Support

Tivoli Decision Support is a multidimensional query and reporting tool combined with simple query and reporting software, targeted at help desks and customer support operations. The product is specifically aimed at helping technologists and business users better understand their internal help desk and external customer support operations. To meet the specific needs of help desk and customer support analysis, Tivoli Decision Support includes two primary elements:
- A combination of multidimensional analysis and reporting as well as simple query and reporting tools; specifically, Tivoli has OEMed Cognos' PowerPlay offering and Seagate's Crystal Reports.
- Templates, known as Decision Support Guides, that assist administrators in selecting which questions to ask and in locating the data that will answer these questions.

These facilities are the backbone that allows user organizations, either alone or working with Tivoli, to quickly incorporate focused decision support capabilities into their own operations. The templates are a case in point: Decision Support Guides are currently available for Call Center Performance, Relationship Management, Knowledge Management, Storage Management Analysis, and Service Level Management. Decision Support offers a number of business advantages. The major ones are as follows:
- End users can access information themselves. Decision Support allows anyone, from support analysts to CEOs, to directly view the summary and detailed information that is relevant to them. Technical staff are relieved of the burden of trying to guess what information would be valuable to each user, as users are able to get exactly what is required the first time. Decision Support also eliminates a number of manual and time-consuming processes, such as the traditional white board, the manual mechanism found in most call centers for displaying performance data.
- Business processes are changed for the better.
Decision Support enables users to show, from a quantitative perspective, which business activities were successful and which were not. This enables the easy identification and correction of the most critical problem areas, which in turn leads to improved call handling and support dispatching capabilities. By discovering which activities and questions consume the most time and energy, it is possible to allocate resources appropriately as well as improve procedures and documentation.
- Decision Support eliminates the need to build and maintain a separate historical database for reporting. The ability to run against production data reduces the reporting overhead on the technical staff and increases the timeliness of reaction.
Performance Analysis and Reporting
Tivoli provides a set of reporting capabilities for performance data with a Decision Support guide for Tivoli Application Performance Management. Organizations spend numerous hours and resources gathering performance statistics, analyzing the data to determine needs and trends, and creating the multidimensional graphs that evaluate their performance and needs. Tivoli automates these functions in a solution that uses collected performance data to track the performance levels of the target system. The following list provides sample questions found in the Decision Support guide for application performance:
- What is an acceptable response time?
- Which users and locations are impacted?
- Which component of the end-to-end application environment is affected?
- Which application servers or application modules are affected?
- What performance trends may cause problems in the future?

With expertise in targeted applications, you can create relationships between the various performance indicators to determine trends across related data points. For example, a performance module for the R/3 environment may provide multi-level reports on transaction execution, workload analysis, and resource consumption. However, because of the comparatively non-distributed nature of PeopleSoft transaction execution, implementing the same metrics and rules for PeopleSoft might not be as efficient. To provide a comprehensive solution across implementations of various ERP solutions and applications, middleware and underlying database metrics also need to be monitored and measured. Irrespective of the application, the performance module must use the latest object-oriented techniques, data storage, and graphical interfaces. This allows the generation of specialized reports addressing application performance, regardless of whether it encompasses a single server or a whole enterprise.
Tivoli Manager for MQSeries
The Tivoli Manager for MQSeries is an enterprise-level management solution that brings Tivoli's life-cycle management capabilities to organizations using MQSeries across host and distributed environments. The most powerful aspect of Tivoli's MQSeries management approach is that MQSeries is managed in the context of the applications and business systems that utilize it. This provides IT managers with the information they need to understand the business-level impact of MQSeries availability and performance. It is the only MQSeries management solution that manages MQSeries as an integral part of a larger business solution. It is also the only MQSeries management product that manages the entire application stack, from low-level network and operating system resources up through the applications that utilize MQSeries for messaging. Tivoli Manager for MQSeries provides centralized management for distributed MQSeries networks that span geographic distances and heterogeneous systems.
Tivoli Manager for R/3
Tivoli Manager for R/3 consolidates the management of multiple R/3 systems that use different versions of the R/3 application. It is capable of managing the monitoring and operational tasks for R/3 test, development, and production systems from a centralized location. It captures and reports events from all R/3 systems and can forward these to the Tivoli Enterprise Console. The Manager for R/3 enables the integration of information regarding R/3 availability and performance with information about the operating system, network, database, and other R/3-related components. It provides an overall picture of how the R/3 environment is performing and helps operators quickly identify and resolve the cause of any availability or performance problems. Detailed descriptions of this Tivoli module are given in 12.2, Tivoli Manager for R/3 on page 321.
Tivoli Application Performance Manager
Tivoli Application Performance Manager is Tivoli's performance management solution. It combines the best parts of four standard measurement techniques to provide an accurate view of end-user application performance, and it integrates tightly with Tivoli's existing resource management, alerting, reporting, and analysis solutions.

In recent years, two-tier client/server applications have become increasingly important. The number of servers has increased from a few mainframes to dozens or even hundreds of smaller systems. TCP/IP and its dynamic routing have become the standard network protocol, and performance of the endpoint, usually the user's PC that has replaced the old terminals, has become an important factor. These factors have all made the task of managing performance increasingly difficult. However, because of the relatively low cost of adding more servers or new LAN segments, organizations have managed the problem in the past by simply adding more resources to overcome performance drops.

As the underlying infrastructure becomes more robust, application architectures are evolving to become more complex. Today, three- and even four-tier architectures are not unusual. Users are accustomed to being able to move large blocks of data without being aware of exactly how much data is involved. The number of servers organizations have is steadily increasing from dozens to hundreds and even to thousands, and the number of clients is often many times greater than the number of servers. Organizations are finding it more difficult to know where to deploy resources and what effect that will have on the performance of each application. The gradual degradation of infrastructure performance is one of the biggest problems that organizations face.
The traditional areas of performance measurement are as follows:
Application instrumentation
This involves modifying applications so that they make calls to the Application Response Measurement (ARM) API. The ARM API can be implemented in a measurement agent, which clocks the elapsed time from the start to the end of a transaction. This enables the measurement of actual business transactions without any excess overhead, and it means that the application instrumentation can be controlled by the developers. Using ARM is the least invasive technique: response can be measured without sending any extra traffic across the network to be processed by servers, and measurements can be collected without traffic interception and event processing. Because application developers control the measurements, it is easy to ensure accurate measurement of user transactions. Even if the transaction itself changes as a result of enhancements, the instrumentation can be updated at the same time.
Transaction simulation
This method does not measure the real application; instead, a script or program acts as a proxy for it. The proxy is executed periodically, and it passes its measurements to an agent, preferably by making ARM API calls. The results can be used to approximate the end-user experience.
Tivoli Manager for R/3 proactively identifies potential problems before they affect R/3 users, using a comprehensive collection of monitors and events. The information is collected from several sources, including R/3's SAPOSCOL, Tivoli Distributed Monitoring, and information available through the R/3 Computing Center Management System (CCMS). The important information from across the R/3 landscape and supporting components is correlated by the Tivoli Enterprise Console. This means that it is possible to prevent many R/3 alert situations from happening and to automate actions for specified problems using the rules mechanism. Monitors included with Tivoli Manager for R/3 provide information about CPU utilization, CPU load average, available physical memory, paging, swap space, disk utilization and response time, and all other values displayed by the R/3 transaction ST06. Tivoli Manager for R/3 also displays information concerning the R/3 memory subsystem, including roll and page area statistics. Other monitors provide information on buffer utilization, allocated memory, database accesses, directory entry information, hit ratio, and all other values displayed by R/3 transaction ST02. For the DB2 database on OS/390, Tivoli tracks additional values, including buffer pool hit ratios, shortages and page information, and deadlocks and lock timeouts.
Secure and Efficient Operations
With Tivoli's unique policy-based management approach, you can securely delegate routine activities to junior personnel. Tivoli's single-action management and automation saves you time by simplifying operational tasks. Tivoli Manager for R/3 provides a variety of operational tasks as well as the ability to easily add additional tasks to further increase administrator and operator efficiency. Companies can also save time and accelerate the rollout of the SAPGUI to hundreds or thousands of end-user workstations by utilizing Tivoli Manager for R/3.
Tivoli Manager for R/3 builds on core Tivoli management applications and supports the Tivoli approach to unified, comprehensive enterprise management. This approach allows R/3 to be managed as part of the enterprise, with the same tools that are used to manage the entire enterprise. Operators can easily manage R/3 and other enterprise components from one point and without the need for R/3 expertise. Figure 163 shows the components of Tivoli Manager for R/3.
12.3.1 Overview
Tivoli Storage Manager has an optional module, called Tivoli Plus for Tivoli Storage Manager, that interfaces with the Tivoli Management Environment.
The Tivoli Storage Manager Plus Module allows centralized distribution and management of Tivoli Storage Manager across a multi-platform network. This module provides the following features:
- Icons for launching Tivoli Storage Manager
- Subscription lists for clients and servers
- Tivoli Storage Manager file packages used with Tivoli Software Distribution
- Tivoli Storage Manager monitors used with Tivoli Distributed Monitoring
- Tivoli Enterprise Console events and rule sets customized for Tivoli Storage Manager
- Tivoli Storage Manager tasks and jobs

In the next section, we explain the setup procedure for Tivoli Plus for Tivoli Storage Manager.
The following prerequisites apply:
- TME 10 Distributed Monitoring must be installed and configured
- TME 10 Enterprise Console (TEC) must be installed and configured
- TSMPlus for Tivoli must be installed
- The Tivoli Storage Manager server must be installed and configured

Additional information on how to integrate Tivoli Storage Manager into the Tivoli Framework can be found in the book TME 10 Tivoli/Plus TSM Users Guide, GC31-8405.
12.3.2.2 Tivoli Plus Setup
During the TSMPlus installation procedure, the TSM server to be used is defined so that the correct server can be identified. After the installation, two setup procedures should be executed: the setup of resource monitoring, and the setup of TSM tasks and jobs.
Figure 164 presents the steps to start up Tivoli Plus for Tivoli Storage Manager. All Tivoli/Plus modules are kept in a collection under the TivoliPlus icon on the TME 10 desktop. When TSMPlus for Tivoli is installed on a TSM server, the installation process identifies the location of the TSM binary directories and creates an icon for launching the TSM storage software from within TME 10. Each TSM server has its own TSM icon for launching the TSM application from that server site. For example, in Figure 164 on page 324, the server name is demo1.
Also in Figure 164, there are two logical entry points for the TSM software, TSM Client and TSM Start Server:
- With the TSM Client option, a dialog is displayed for the authentication procedure to log in to the TSM server and launch the TSM client interface.
- With the TSM Server option, the dsmserv daemon process can be initialized. No GUI is displayed.
Resource Monitoring
TSMPlus for Tivoli provides the ability to monitor critical resources with TME 10 Distributed Monitoring. The monitors for TME 10 Distributed Monitoring are predefined to let you manage different aspects of the operating system, such as processes or the TSM server, that are crucial to the continued availability of the TSM application. These monitors let you quickly identify and respond to potential problems so that system downtime is avoided.
Using TSMPlus for Tivoli Monitors
TSMPlus for Tivoli has Distributed Monitoring profile managers (subscription lists) that have monitors associated with them. These profile managers are used to distribute the monitors to the client or server subscription lists. The profile manager icons are:
- TSM Central Monitors
- TSM Remote Monitors
- HSM Space Monitors
- HSM Recall Monitors
- TSM Client Monitors
- Log File Size Monitors
- Database Size Monitors

Before the TSMPlus for Tivoli monitors are operational, you must distribute them to their subscription lists. To distribute a monitor, select the Distribute... option from the monitor's pop-up menu to display the Distribute Profiles dialog, and then click on the Distribute Now button. The TSM Central Monitors are distributed to the TSM server or servers (there can be more than one TSM server within a TMR). The TSM Remote Monitors are distributed to the machines that are used to detect network collisions. The other monitors are distributed to their default subscription lists. To view the default subscription lists, double-click on the icon that pertains to the subscription list in which you are interested:
- TSM Servers
- TSM Clients
- HSM Recall Servers
- HSM Space Monitor Servers

To modify the default subscription list of a specific monitor, select the Subscribers... option from the monitor icon's pop-up menu.
Adding or Deleting a Monitor
Although the TSMPlus for Tivoli monitors come predefined to monitor resources specific to TSM, you can add monitors from the TME 10 Distributed Monitoring collection or delete monitors as you wish. You can also edit existing monitors to perform different actions under different conditions. Selecting the options on a TSM monitor's pop-up menu displays the usual TME 10 Distributed Monitoring windows and dialogs:
- Properties... Displays the functions for adding, deleting, and editing monitors.
- Distribute... Displays the functions for distributing monitors to those managed nodes (machines) and profiles included on the monitor's subscription list.
- Subscribers... Displays the subscription list that specifies to which machines the monitor is distributed. You can modify this list.
Editing the Monitors
After adding the TSM monitors to the TME 10 Distributed Monitoring Indicator Collection, you can edit them to specify various options. These options include how often you want to monitor a system resource and how the application responds when a problem arises. For more information on the options that you can monitor using Distributed Monitoring, see the TME 10 Distributed Monitoring User's Guide.
Table 26 provides the context and authorization role required for this task:
Table 26. Context and authorization role required on Tivoli Management Environment
Activity          Context                          Required Role
Edit a monitor    Distributed Monitoring Profile   admin
Here are the steps needed to edit a monitor:
a. Select Properties from one of the monitor icons' pop-up menus to display the Sentry Profile Properties window.
b. Select the monitor you want to edit, for example, the host status monitor.
c. If a monitor has been previously distributed, press the Disable Selected button to disable the monitor. Save the changes and redistribute the disabled monitors.
d. Press the Edit Monitor... button to display the Edit Sentry Monitor dialog.
e. The host that you chose when you added the monitor is displayed. If you want to select a different host, press the Hosts... button to display the Host Name dialog. Select the host you want to monitor, and press the Set & Close button to set your choice and return to the Edit Sentry Monitor dialog.
f. Select the response level to define or modify. The following steps can be repeated for each response level you want to set:
   1. Choose the selected response level in the Response level option menu. This threshold option indicates the conditions that must be met to trigger response-level actions. These options are monitor-specific and function in different ways depending on the sources being monitored.
   2. Select a trigger option from the trigger when pop-up menu. The threshold options available depend on the source being monitored and are defined by the monitoring collection.
   3. Press the Set Message Styles button to use the default format of the TME 10 Distributed Monitoring messages.
   4. Press the Set Distributed Actions button to set the distributed actions.
   5. Press the Set Monitoring Schedule button to set the monitoring schedule if you do not want to accept the default hourly monitoring.
   6. Choose the next response level and repeat the subprocedures in step (f) until you have set the thresholds and responses for all the response levels required.
g. Press the Change & Close button to confirm the changes and close the Edit Sentry Monitor dialog.
You do not need to press the Change & Close button until you have set the thresholds and responses for all required response levels.
h. Select the Save option from the Profile menu of the Sentry Profile Properties window to save your changes.

When you have finished editing your monitors, distribute the changes to begin monitoring your resources. For more information, see Distributing a TME 10
Use the Sentry Indicators icon to view the status of the monitored resources. The thermometer on the Sentry Indicators icon rises as the status of a monitored resource becomes more urgent. Open the TSM Monitors icon to view the status of the TSM servers. For each monitored resource, the monitor reports only the most urgent status received within a recent time frame. The monitor reports are organized so that the most urgent status level appears at the top of the report. For more information on viewing the status of a monitored resource and on the TME 10 Distributed Monitoring indicator collection, see the TME 10 Distributed Monitoring User's Guide. The following list shows the resources that TSMPlus for Tivoli monitors:
- Host Availability
- Network Collisions Per Packet
- TSM Database Percentage Used
- TSM Log File Percentage Used
Tasks and Jobs for TSM
TSMPlus for Tivoli provides many tasks for running TSM jobs on multiple machines and operating systems. These tasks are ready to be executed as soon as you have installed your TSMPlus for Tivoli module. Simply double-click on the task icon to execute the TSM job. You can also modify the default execution characteristics of a particular job, such as where the output of a job is displayed and on which machines the job will run. You may also specify whether a job runs serially on each machine, in parallel on all machines, or staged in groups of machines.
In addition, you can filter the types of events to be enabled for logging. For example, you might enable only severe messages to the event server receiver and one or more specific messages, by number, to another receiver. Figure 165 shows a possible configuration in which both server and client messages are filtered by the event rules and logged to a set of specified receivers.
As shown in Figure 165, there are six possible receivers for the logs, each one based on its own event rules. In this chapter, we focus only on the Tivoli Event Console; for more information about the others, consult the TME 10 Tivoli/Plus TSM Users Guide, GC31-8405.
Controlling Event Logging
To control event logging, do the following:
- Enable logging of events, by type, to specific receivers. You can specify that the logging of events be enabled or disabled for specific receivers. The following sections describe the types of receiver, any setup that may be required, and examples of the ENABLE EVENTS command and the applicable server options.
- Begin or end logging by receiver (see below). You can issue the BEGIN EVENTLOGGING and END EVENTLOGGING commands to begin and end logging for one or more receivers.

You can enable or disable specific events or groups of events by receiver by issuing the ENABLE EVENTS and DISABLE EVENTS commands. When you enable or disable events, you can specify the following:
- A message number or an event severity
- Events for one or more client nodes
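To make the command flow concrete, here is an illustrative sketch of these administrative commands as they might be entered at the TSM server console (the receiver name CONSOLE is a standard TSM receiver, but the message number ANE4991 and the node name polaris are hypothetical examples, not taken from this book):

```
enable events console severe,error
disable events console ANE4991 nodename=polaris
begin eventlogging console
end eventlogging console
```

The same pattern applies to any other receiver, such as the activity log or the Tivoli receiver described later in this section.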
For example, to enable event logging to a user exit for server messages with a severity of WARNING, enter:
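The command figure itself did not survive in this copy; assuming standard TSM 3.7 syntax (USEREXIT is the receiver name, WARNING the severity), it would look roughly like this:

```
enable events userexit warning
```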
Logging Events to the Tivoli Event Console
TSM includes the Tivoli receiver, a Tivoli Enterprise Console (T/EC) adapter for sending TSM events to the T/EC. This section describes what you must do to set up Tivoli as a receiver for event logging:
1. The file ibmadsm.baroc, which is distributed with the server, defines the TSM event classes to the Tivoli Enterprise Console. Before the events are displayed on a Tivoli console, you must import ibmadsm.baroc into an existing rule base, or create a new rule base and activate it. To import ibmadsm.baroc into an existing rule base:
a. From the TME desktop, click on the RuleBase icon to display the pop-up menu, select Import, and specify the location of the ibmadsm.baroc file.
b. Select Compile from the pop-up menu.
c. Select Load from the pop-up menu and choose Load, but activate only when server restarts from the resulting dialog.
d. Shut down the event server and restart it.
To create a new rule base, do the following:
a. From the TME desktop, open the Event Server Rules Bases window by double-clicking on the EventServer icon.
b. Select the Create->RuleBase menu.
c. Optionally, copy the contents of an existing rule base into the new rule base by selecting the Copy pop-up menu item from the rule base to be copied.
d. Click on the RuleBase icon to display the pop-up menu.
e. Select Import and specify the location of the ibmadsm.baroc file.
f. Select Compile from the pop-up menu.
g. Select Load from the pop-up menu and choose Load, but activate only when server restarts from the resulting dialog.
h. Shut down the event server and restart it.
2. Define an event source and an event group:
a. From the TME desktop, select Source from the EventServer pop-up menu. From the resulting dialog, define a new source whose name is TSM.
b. From the TME desktop, select Event Groups from the EventServer pop-up menu. From the resulting dialog, define a new event group for TSM and a filter that includes the event classes IBMTSMSERVER_EVENT and IBMTSMCLIENT_EVENT.
c. From the event console icon, select the Assign Event Group pop-up menu item and assign the new event group to the event console.
d. Double-click on the event console icon to start the configured event console.
3. Enable events for logging to the Tivoli receiver. For example, to enable all severe and error server events, enter:
4. In the server options file (dsmserv.opt), specify the location of the host on which the Tivoli server is running. For example, to specify a Tivoli server at the IP address 9.114.22.345:1555, enter the parameters shown in Figure 166.
5. Begin event logging for the Tivoli receiver. You do this in one of two ways. To begin event logging automatically at server startup, specify the following server option:
For details about the server options shown, see the book Tivoli Storage Manager for AIX Administrator's Reference v3.7, GC35-0369.
#-------------------------------------------------------------------------
# Shall BACKINT/TSM send error/status information to a network
# management program via SNMP traps?
# Default: none.
#-------------------------------------------------------------------------
#SNMPTRAP Hostname  community  level
SNMPTRAP  capeverde 9.1.150.0  0
Figure 167. Example for initSID.utl
Part 4. Appendices
# SymmWin BCV File
# Reg Dev Num  BCV Dev Num
1000Exxx  10053xxx
10002xxx  10055xxx
1000Bxxx  10056xxx
1000Dxxx  10057xxx
1000Axxx  10058xxx
10006xxx  10059xxx
10010xxx  1005Axxx
1000Cxxx  1005Cxxx
10009xxx  1005Dxxx
10003xxx  1005Exxx
10007xxx  1005Fxxx
10005xxx  10060xxx
1000Fxxx  10061xxx
10001xxx  10062xxx
1002Bxxx  10065xxx
10014xxx  10066xxx
10013xxx  10067xxx
Figure 168. SAP.MAP - EMC regular physical disk to BCV physical disk map
The Reg Dev Num column in the SAP.MAP file refers to the regular (source) logical volumes, while the BCV Dev Num column refers to the BCV/target logical volumes of the storage subsystem. A similar file is required for the IBM ESS to map the source and target logical volumes.
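To make the source-to-target relationship concrete, the following is an illustrative Python sketch (not part of the product tooling) that parses a SAP.MAP-style file into a dictionary keyed by the regular device number. The file layout is taken from Figure 168; the function name is our own.

```python
# Illustrative sketch: parse a SAP.MAP-style file pairing a regular
# (source) device number with its BCV (target) device number per line.

def parse_sap_map(text):
    """Return a dict mapping regular device numbers to BCV device numbers."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip header comment lines
            continue
        reg, bcv = line.split()
        mapping[reg] = bcv
    return mapping

sample = """\
# SymmWin BCV File
# Reg Dev Num  BCV Dev Num
1000Exxx 10053xxx
10002xxx 10055xxx
"""
print(parse_sap_map(sample)["1000Exxx"])   # 10053xxx
```

An equivalent parser would apply to the ESS mapping file mentioned above, since it carries the same source/target pairing.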
INQ.NODENAME - file containing the logical-to-physical relationship between the storage subsystem logical volumes and the AIX physical volumes (hdisks). In the case of EMC, two files are generated by the EMC utility inq - one for all EMC logical volumes on the regular disks on the primary node, and another for the BCV disks on the new node. Figure 169 shows the sample contents of this file.
----------------------------------------------------------------------
DEVICE         :VEND :PROD      :REV  :SER NUM  :CAP(kb)
----------------------------------------------------------------------
/dev/rhdisk103 :EMC  :SYMMETRIX :5267 :10000001 :8838720
/dev/rhdisk104 :EMC  :SYMMETRIX :5267 :10000101 :8838720
/dev/rhdisk203 :EMC  :SYMMETRIX :5267 :10000111 :8838720
/dev/rhdisk201 :EMC  :SYMMETRIX :5267 :10000201 :8838720
Figure 169. Output of the EMC inq command.
The DEVICE column shows the AIX hdisk device names, and the SER NUM column displays the logical volume serial number assignment corresponding to the AIX hdisks. A similar file is required for the IBM ESS.
HDISK.MAP - The INQ.NODENAME files and the SAP.MAP file are parsed to update this file, which contains the relationship between the AIX hdisks, the storage subsystem logical volume serial numbers, and the SCSI IDs. This file is used to create the logical partition map file (LVNAME.MAP, described later). This file is updated in case of disk reconfiguration (using AIX cfgmgr) following recovery, or an HACMP cluster resource resynchronization. Figure 170 shows the layout of this file.
Figure 170. Map of AIX hdisks (source to target), volume serial number and SCSI ID
In the above file, the hdisks in the first column represent the hdisks residing on the source logical volumes whose serial numbers are defined in column 3, with SCSI ID 010. Similarly, the hdisks in the second column reside on logical volumes whose serial numbers are specified in column 5, with SCSI ID 011. A similar file is required for the IBM ESS configuration. The files described below - VOLUME_GROUP.CFG, VGNAME.CFG, and LVNAME.MAP - are AIX specific and are required for the creation of volume groups and AIX logical volumes on specified AIX hdisks after a copy. VGNAME.CFG is also required for mounting and unmounting file systems. These files are useful in automating the pre- and post-split mirror procedures.
VOLUME_GROUP.CFG - file storing the volume group information for volume groups participating in the split. This file contains a list of all the AIX hdisks and their physical partition size and major number, for each volume group. This information is used to maintain consistency in the PVIDs of the copied hdisks when mounted on the new node. Figure 171 shows the contents of this file.
sapvg   16 40 n hdisk103
data1vg 16 41 n hdisk104 hdisk303 hdisk301
data2vg 16 42 n hdisk106 hdisk304 hdisk105
Figure 171. Volume group information
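As an illustration of how such a file can be consumed by automation scripts, here is a hedged Python sketch (our own, not the redbook's scripts) that parses VOLUME_GROUP.CFG-style lines. The assumed layout per line, inferred from Figure 171, is: volume group name, physical partition size, major number, a one-character flag (its meaning is not documented in this excerpt), then the member hdisks.

```python
# Illustrative sketch: parse VOLUME_GROUP.CFG-style lines (layout
# assumed from Figure 171: name, PP size, major number, flag, hdisks...).

def parse_volume_groups(text):
    vgs = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 5:
            continue  # skip blank or malformed lines
        name, pp_size, major, flag, *hdisks = fields
        vgs[name] = {"pp_size": int(pp_size), "major": int(major),
                     "flag": flag, "hdisks": hdisks}
    return vgs

sample = """\
sapvg 16 40 n hdisk103
data1vg 16 41 n hdisk104 hdisk303 hdisk301
data2vg 16 42 n hdisk106 hdisk304 hdisk105
"""
vgs = parse_volume_groups(sample)
print(vgs["data1vg"]["hdisks"])   # ['hdisk104', 'hdisk303', 'hdisk301']
```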
VGNAME.CFG - file storing the distribution of logical volumes and file system mount points within a volume group for all the volume groups participating in the split. Figure 172 shows the layout of this file.
Figure 172. Logical volume to filesystem mount point mapping for a volume group
LVNAME.MAP - a file for each AIX logical volume to be copied, which stores the disk information of the AIX logical volume data distribution assigned to one or more AIX physical volumes (hdisks). This map file is used to rebuild the AIX logical volume on the new node after a split. Figure 173 shows the contents of this file.
Column one in the figure refers to the logical partition number; column two refers to the physical partition location on AIX hdisk hdisk5 (column three). A second copy of the logical partition is stored in the physical partition specified in column four on hdisk9 (column five). The next section describes the split mirror procedure in an HACMP environment.
A.1.2.1 Establish BCV/FlashCopy
1. Create the source to target logical volume map - see file SAP.BCV.
On the primary node:
2. Create the logical volume to AIX physical volume (hdisk) map - see file INQ.NODENAME.
3. Create the map of AIX hdisks (source to target), logical volume serial number, and SCSI ID - see file HDISK.MAP.
4. Create the volume group to AIX hdisk map - see file VOLUME_GROUP.CFG.
5. Create the volume group to AIX logical volume and mount point map - see file VGNAME.CFG.
6. Create the AIX logical volume to AIX physical partition map - see file LVNAME.MAP.
7. Copy the map files to the new node.
On the new node:
8. Invoke the Establish BCV/FlashCopy command.
9. For EMC only, invoke the BCV split mirror command (for ESS, skip this step).
10. Make the copy accessible to AIX:
For each volume group in VOLUME_GROUP.CFG:
a. Change the AIX hdisk PVIDs.
b. Create the volume group and activate it.
For each AIX logical volume in VGNAME.CFG:
c. Create the AIX logical volume using LVNAME.MAP.
d. Mount the AIX logical volumes.
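The rebuild steps above can be sketched as a small generator of AIX command strings. This is a hedged illustration only: the actual environment uses the split.sh/access.sh scripts described later, the map-file naming (<lv>.map) is an assumption, and the commands are printed rather than executed.

```python
# Hedged sketch: build the AIX command sequence that the post-split
# steps describe (mkvg, mklv with a partition map file, mount).
# Inputs and map-file names are hypothetical.

def rebuild_commands(vg, pp_size, hdisks, lvs):
    """lvs: list of (lv_name, n_partitions, fs_type, mount_point) tuples."""
    cmds = ["mkvg -y %s -s %d %s" % (vg, pp_size, " ".join(hdisks))]
    for name, n_parts, fs_type, mount in lvs:
        # -m rebuilds the LV on the exact physical partitions in the map file
        cmds.append("mklv -y %s -t %s -m %s.map %s %d"
                    % (name, fs_type, name, vg, n_parts))
        if mount != "N/A":
            cmds.append("mount %s" % mount)
    return cmds

cmds = rebuild_commands("ssavg", 16, ["hdisk5", "hdisk9"],
                        [("sapdata1lv", 250, "jfs", "/oracle/TSM/sapdata1")])
print(cmds[0])   # mkvg -y ssavg -s 16 hdisk5 hdisk9
```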
A.1.2.2 Re-establish BCV/FlashCopy These steps are required for refreshing the target volume which was copied earlier using FlashCopy, with the current production volume data.
On the new node:
1. Unmount and delete the volume group definitions on the target volumes. For all volume groups to be refreshed in VOLUME_GROUP.CFG:
a. Unmount all file systems using VGNAME.CFG.
b. Deactivate all the volume groups to be refreshed.
c. Delete all the volume group definitions.
2. Follow the steps in section A.1.2.1, Establish BCV/FlashCopy on page 338.
Figure 174. Scripts for performing the split mirror in the lab environment
Since brbackup is started as user ora<sid>, the user ora<sid> was added to the system group. As a member of the system group, ora<sid> is able to perform all the necessary tasks on the volume groups, logical volumes, and file systems.
The script /usr/local/bin/do_split_mirror.sh is started out of brbackup on the second node; it forks a process that calls /usr/local/bin/split.sh on the primary node with the name of the volume group as argument. A prerequisite for running do_split_mirror.sh is the existence of a file /usr/local/tmp/<vg>_pvid, which contains the physical partition size of the volume group and a list of all disks to be split off, referenced by their physical volume identifier (PVID). Figure 175 shows the content of the file /usr/local/tmp/ssavg_pvid from the lab environment.
Split the mirror on the primary node
On the primary node, /usr/local/bin/split.sh accesses a configuration file /usr/local/tmp/<vg>_pvid. This file describes the disks of the volume group that will be split off; Figure 175 shows its content. At the beginning, there is an entry for the physical partition size of the volume group (the volume group has to be created on the second node with the same physical partition size, or else the map files created later are not usable). Then there is a list of the disks, referenced by their physical volume identifier. These disks will be split off from the first node and will form the volume group on the secondary node; it has to be ensured that these disks are accessible by both nodes. Since the hdisk numbering schemes may differ between the primary and secondary node, the disks are referenced by their (unique) physical volume identifier (PVID).
The script split.sh examines the volume group for logical volumes that have a mirror set on these disks and stores the mapping of the physical partitions of each mirror set by creating a map file for each of these logical volumes. The map files will be used later (on the second node) for rebuilding the logical volumes on the former location of their copy sets. To identify the correct disk on the other node, the PVID is again used instead of the hdisk name. Figure 176 shows an excerpt of the contents of such a map file. The rows list, in sequential order, the locations of the physical partitions of the copy set of the logical volume. Each row contains two entries (separated by a colon): the disk's physical volume identifier and the number of the physical partition on the disk.
00064790055e2394:0103
00064790055e1a5e:0104
00064790055e1eed:0104
00064790055e15ac:0103
00064790055e2394:0104
00064790055e1a5e:0105
00064790055e1eed:0105
00064790055e15ac:0104
00064790055e2394:0105
00064790055e1a5e:0106
Figure 176. Example of a logical volume map file
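The map-file format just described can be parsed with a few lines of Python; this is an illustrative sketch of our own (the lab environment's scripts are ksh), assuming each whitespace-separated entry has the form "<PVID>:<physical partition>" in logical partition order.

```python
# Illustrative sketch: read a logical volume map file as in Figure 176,
# where each entry is "<PVID>:<physical partition number>".

def parse_lv_map(text):
    entries = []
    for token in text.split():
        pvid, pp = token.split(":")
        entries.append((pvid, int(pp)))   # int() tolerates leading zeros
    return entries

sample = "00064790055e2394:0103 00064790055e1a5e:0104"
print(parse_lv_map(sample))
# [('00064790055e2394', 103), ('00064790055e1a5e', 104)]
```

On the second node, such a list (with each PVID translated back to the local hdisk name) is exactly what is needed to rebuild the logical volume on the same physical partitions.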
The copy sets on these disks will be removed from the logical volume (however, this does not erase the actual data in the physical partitions on the disk). To keep track of the logical volumes having a copy set on these disks, another file, <vg>_lv, is created. (On the second node, all the logical volumes found in this file will be accessed.) Figure 177 shows an example of this file: all the logical volumes of the /oracle/<SID>/sapdataX filesystems had a copy set on these disks; they have a size of 250 logical partitions and are of type jfs. Also, a copy set of the jfslog loglv00 was on these disks.
sapdata6lv 250 jfs    /oracle/TSM/sapdata6
sapdata5lv 250 jfs    /oracle/TSM/sapdata5
sapdata4lv 250 jfs    /oracle/TSM/sapdata4
sapdata3lv 250 jfs    /oracle/TSM/sapdata3
sapdata2lv 250 jfs    /oracle/TSM/sapdata2
sapdata1lv 250 jfs    /oracle/TSM/sapdata1
loglv00      1 jfslog N/A
Figure 177. Content of <vg>_lv in the lab example
After all logical volume copies are removed from the disks, the disks themselves are removed from the volume group. After all important files are copied to the second node, the split.sh script finishes. Now do_split_mirror.sh calls the script access.sh on the second node to access the file systems there.
Accessing the disks on the second node
access.sh first creates a volume group using the partition size and disks found in the file <vg>_pvid. (Since all the disks in <vg>_pvid are referenced by their PVID, the corresponding hdisk is determined first.) Then all logical volumes mentioned in <vg>_lv are created. They are created using the partition maps, so each logical volume is built on exactly the same physical partitions its copy set had occupied before. (Of course, the PVIDs mentioned in these files are also replaced by the corresponding hdisks first.) Afterwards, the file systems are mounted.
When these steps are done, brbackup continues performing the real backup. After the backup is finished, and if a resync_cmd is defined in init<SID>.sap, brbackup calls this command to re-establish the mirror. In the lab environment this task is performed by the script do_resync.sh.
Unmounting the file systems on the second node
do_resync.sh calls the script umount.sh. All the file systems mentioned in the file <vg>_lv are unmounted first; then the volume group is varied off and all information about the volume group is deleted from the local Object Data Manager (ODM).
Reestablishing the copy set on the primary node
do_resync.sh calls the script resync.sh on the primary node. The disks are added to their original volume group; then the additional copy set is reestablished in its old location using the map file. Afterwards, a resynchronization process runs in the background.
A few keywords are required in any case, but most are optional. Each of the optional keywords has a preset default value.
This parameter specifies the block size for the buffers passed to the Tivoli Storage Manager API functions. The valid range for n is from 4096 to 262144. Inappropriate values will be adjusted automatically. This parameter is employed only in the multi-threaded versions of Tivoli Data Protection for R/3 and only when Tivoli Data Protection for R/3 compression is switched on.
ADSMNODE ORACLE_SID
If specified, ORACLE_SID must be registered with the Tivoli Storage Manager server as a Tivoli Storage Manager node. With this option, a different node name can be assigned to the ORACLE database system. It should be used if several R/3 ORACLE database systems in the network environment have the same name, for example, <SID>, and all use the same Tivoli Storage Manager server. If not specified, the default is the hostname. This name must be registered with Tivoli Storage Manager by a Tivoli Storage Manager administrator.
Remember:
Path defines the complete path and program name of the Tivoli Data Protection for R/3 Agent program.
BACKEND program_name [parameter list ...]
Specifies a program program_name that is called by Tivoli Data Protection for R/3 after the backup function has completed and before program control is returned to the SAP backup utility. The parameter program_name is either a fully qualified file name or simply a file name. In the latter case the default search path is used to find the program. If not specified, no backend processing is done. Example (for UNIX):
This sends a message to a remote user when the backup has finished.
BACKUPIDPREFIX 6charstring SAP___
Specifies a 6-character prefix that is used to build a backup identifier for each archived object. The total length of the backup ID is 16 characters; Tivoli Data Protection for R/3 automatically fills the remaining 10 characters with a time stamp. The backup ID is needed by the SAP backup utilities; it is stored in the SAP backup protocols and in the description field of the archived data objects in Tivoli Storage Manager. The default is SAP___ (___ = 3 underscores). An example of a backup ID is SAP___9911181100, which reads: 18th November, 1999, time 11:00.
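The 6 + 10 = 16 character structure can be illustrated with a short Python sketch. Note that the YYMMDDHHMM timestamp layout is inferred from the single example SAP___9911181100 above; the product's exact timestamp format is an assumption here, and the function is our own.

```python
# Illustrative sketch (assumed timestamp layout YYMMDDHHMM, inferred
# from the example SAP___9911181100): build a 16-character backup ID.
from datetime import datetime

def make_backup_id(prefix, when):
    if len(prefix) != 6:
        raise ValueError("prefix must be exactly 6 characters")
    return prefix + when.strftime("%y%m%d%H%M")   # 6 + 10 = 16 chars

bid = make_backup_id("SAP___", datetime(1999, 11, 18, 11, 0))
print(bid)        # SAP___9911181100
print(len(bid))   # 16
```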
BATCH YES/NO
Specify NO if Tivoli Data Protection for R/3 is running with an operator standing by. Specify YES if Tivoli Data Protection for R/3 is running in unattended mode. In this mode Tivoli Data Protection for R/3 terminates the run if operator intervention is required. The default for the BATCH parameter is YES for the backup run and NO for the restore run if the BATCH parameter is commented out in the Tivoli Data Protection for R/3 profile.
BRARCHIVEMGTCLASS management_class [management_class...]
Specifies the Tivoli Storage Manager management class(es) Tivoli Data Protection for R/3 uses when called from BRARCHIVE. Each parameter string can consist of up to 16 characters. If not specified, Tivoli Data Protection for R/3 uses the Tivoli Storage Manager default management class.
Note:
The number of different BRARCHIVE management classes specified must be greater than or equal to the number of redo log copies specified (keyword REDOLOG_COPIES).
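This constraint is simple enough to check mechanically; the following is a minimal sketch of our own (the product performs its own validation), using the management class names from the examples later in this appendix.

```python
# Minimal sketch of the constraint: number of BRARCHIVE management
# classes must be >= REDOLOG_COPIES. Not product code.

def check_archive_classes(mgmt_classes, redolog_copies):
    if len(mgmt_classes) < redolog_copies:
        raise ValueError(
            "need at least %d BRARCHIVEMGTCLASS entries, got %d"
            % (redolog_copies, len(mgmt_classes)))

check_archive_classes(["mlog1", "mlog2"], 2)   # two classes, two copies: OK
```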
Remember:
Specifies the Tivoli Storage Manager management class(es) Tivoli Data Protection for R/3 uses when called using BRBACKUP. The parameter string can consist of up to 16 characters. If not specified, Tivoli Data Protection for R/3 uses the Tivoli Storage Manager default management class.
Remember:
Specifies the configuration file for Tivoli Data Protection for R/3 to store all variable parameters such as passwords, date of last password change, and the current version number. The path statement specifies the full path and the name of the file. This keyword is required.
DISKBUFFSIZE n (default 131072)
This parameter specifies the block size for disk I/O. The valid range is from 4096 to 262144. Inappropriate values will be adjusted automatically. If not specified, the default value is 131072 (128 KB). In most cases this parameter has little influence on performance; however, in some hardware environments it is recommended to perform disk I/O using a specific block size.
END
Specifies the end of the parameter definitions. Tivoli Data Protection for R/3 stops searching the file for keywords when END is encountered.
EXITONERROR YES/NO/NUMBER
This parameter specifies whether Tivoli Data Protection for R/3 exits on a backup or restore error during a BRBACKUP/BRRESTORE run. NO means do not exit if an error occurs. YES means exit if one file cannot be backed up. If a number is specified as an argument, Tivoli Data Protection for R/3 counts the number of errors (not warnings or retries) and exits after the specified number of errors. This parameter applies only to BRBACKUP/BRRESTORE runs; a BRARCHIVE run always exits after the first error. This parameter is ignored if the BATCH parameter is set to NO.
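The three-valued semantics just described can be summarized in a few lines. This sketch is our own reading of the rules above, not the product's implementation:

```python
# Sketch of the EXITONERROR decision (assumed semantics, per the text):
# NO never exits, YES exits on the first error, a number exits once
# that many errors have accumulated.

def should_exit(setting, error_count):
    if setting == "NO":
        return False
    if setting == "YES":
        return error_count >= 1
    return error_count >= int(setting)   # numeric threshold

print(should_exit("5", 4))   # False
print(should_exit("5", 5))   # True
```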
FILE_RETRIES n (default 3)
This parameter specifies the number of retries when a file could not be saved or restored.
FRONTEND program_name
Specifies a program program_name that is called by Tivoli Data Protection for R/3 in a backup run before the connection to the Tivoli Storage Manager server is established. The parameter program_name is either a fully qualified file name or simply a file name. In the latter case the default search path is used to find the program. If not specified, no frontend processing is done. Example (for UNIX):
The parameter server_name specifies the name of the Tivoli Storage Manager server to send log messages to. The name must match one of the servers listed in a SERVER statement. This parameter must be specified. The parameter verbosity may be any one of the following: ERROR, WARNING, or DETAIL. This value determines which messages are sent. The default value is WARNING, which means that error and warning messages are sent. ERROR sends only error messages. DETAIL sends all message types: errors, warnings, and informational messages. Note that this feature is available only with Tivoli Storage Manager client and server Version 3 or higher. If there is no LOG_SERVER statement in the profile, log messages are not sent to any of the Tivoli Storage Manager servers.
MAX_SESSIONS n (default 1)
Specifies the total number of parallel Tivoli Storage Manager client sessions that Tivoli Data Protection for R/3 establishes. For a direct backup/restore on tape drives, keep the following in mind: the number of sessions must be equal to or
lower than the number of tape drives available for the backup. For performance reasons, it is recommended to use as many parallel sessions as there are tape drives available.
Note:
Ensure that the mount limit parameter in the device class is set to the number of available tape drives. If you upgraded from a previous version of ADSM (ADSTAR Distributed Storage Manager) to Tivoli Storage Manager and use more than one drive, pay special attention to the new parameter MAXNUMMP. It defines the maximum number of mount points one node can use, and its default value is 1. A node can only use more than one mount point if this parameter is set to the desired number of mount points.
Remember:
This parameter must correlate to the sum of the numbers of sessions specified in the SERVER statements.
MAX_ARCH_SESSIONS, MAX_BACK_SESSIONS, MAX_RESTORE_SESSIONS
These keywords have the same function as the MAX_SESSIONS parameter, but are more specific: they define the number of parallel sessions used for the BRARCHIVE, BRBACKUP, and BRRESTORE functions, respectively. If MAX_SESSIONS is specified together with one or more of these parameters, the more specific parameters override MAX_SESSIONS. If MAX_SESSIONS is not specified, all of them must be specified.
MAX_VERSIONS n (default 0)
The parameter n defines the maximum number of database backup versions to be kept in backup storage. The default setting for this value is 0, meaning that versioning is disabled.
Caution:
Be aware that if you are using versioning, you must use the same initSID.bki file for BRBACKUP and BRARCHIVE, to avoid an unexpected loss of data.
MULTIPLEXING n (default 1)
Specifies the number of files that are multiplexed into one Tivoli Storage Manager server data stream. The allowed range is from 1 to 8. The optimal value depends strongly on the actual hardware environment. Simply speaking, multiplexing makes sense when fast tapes and fast networks are available, when the database files compress well, and when the CPU load is not too high. We expect optimal values in the range from 1 to 4. If not specified, the default value of 1 means no multiplexing.
Note:
This parameter is available only in the multi-threaded version of Tivoli Data Protection for R/3.
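Conceptually, multiplexing interleaves blocks from up to n files into a single stream, so a fast tape drive is never starved by one slow file. The round-robin sketch below only illustrates that idea; the product's actual on-tape format and scheduling are not documented here.

```python
# Conceptual sketch of multiplexing (illustration only, not the
# product's stream format): interleave blocks from up to n files,
# round-robin, into one output stream.

def multiplex(files, n):
    """files: list of lists of blocks; n: multiplexing factor (1..8)."""
    n = max(1, min(n, 8))
    stream = []
    for i in range(0, len(files), n):          # take files n at a time
        cursors = [list(f) for f in files[i:i + n]]
        while any(cursors):                    # round-robin over the group
            for c in cursors:
                if c:
                    stream.append(c.pop(0))
    return stream

print(multiplex([["a1", "a2"], ["b1"]], 2))   # ['a1', 'b1', 'a2']
```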
PASSWORDREQUIRED NO (default is YES)
Specifies whether Tivoli Storage Manager requires a password to be supplied by the Tivoli Storage Manager client. This depends on the Tivoli Storage Manager installation. For more information, see the Tivoli Storage Manager administrator's manuals. If not specified, the default is PASSWORDREQUIRED YES.
Remember:
If set, Tivoli Data Protection for R/3 sends data for monitoring backup/restore runs to the Administration Tools Server. Address specifies the Administration Tools Server machine (domain name or IP address). A port must also be given, over which the communication is handled.
REDOLOG_COPIES n (default 1)
Specifies the number of copies Tivoli Data Protection for R/3 stores for each processed ORACLE redo log.
Note:
The number of different BRARCHIVE management classes specified (keyword BRARCHIVEMGTCLASS ) must be larger than or equal to the number of redo log copies specified.
REPORT YES/2 (default NO)
If set to YES, Tivoli Data Protection for R/3 produces some additional information, for example, transferred files. If set to 2, Tivoli Data Protection for R/3 generates an additional summary report containing detailed backup/restore and performance statistics. This summary is displayed at the end of the whole run. The output is sent to stdout, which is normally the console.
RETRY n (default 3)
If Tivoli Data Protection for R/3 did not get a connection to the Tivoli Storage Manager server, it retries n times before it terminates with a connection failure message.
RL_COMPRESSION YES (default is NO)
If set to YES, Tivoli Data Protection for R/3 performs a null block compression of the data before it is sent over the network. Although null block compression introduces a small additional CPU load, in most cases it improves throughput.
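The idea behind null block compression is that database files often contain long runs of all-zero blocks, which can be replaced by a short run marker. The sketch below is a conceptual illustration only (the block size and wire format of the product are not documented here):

```python
# Conceptual sketch of null-block compression (illustration only):
# runs of all-zero blocks become a ("Z", count) marker; other blocks
# pass through as ("D", data).

BLOCK = 4  # tiny block size for demonstration

def compress_null_blocks(data):
    out = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        if block == b"\x00" * len(block):
            if out and out[-1][0] == "Z":
                out[-1] = ("Z", out[-1][1] + 1)   # extend the zero run
            else:
                out.append(("Z", 1))
        else:
            out.append(("D", block))
    return out

print(compress_null_blocks(b"\x00" * 8 + b"data"))
# [('Z', 2), ('D', b'data')]
```

This also shows why the gain depends on the data: files with few zero blocks see almost no reduction, only the extra CPU cost.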
SERVER server_name
Denotes the name of the Tivoli Storage Manager server to which a path with the subsequent definitions will be established. For alternate paths, each path must have its own (logical) server name, even if they refer to one and the same real server (same TCP/IP address for all server names). For alternate servers, there must be a different TCP/IP address for each of the different (real) Tivoli Storage Manager servers.
Remember:
The parameter n specifies the number of parallel sessions Tivoli Data Protection for R/3 can start per server.
Remember:
If set, Tivoli Data Protection for R/3 sends error or status information to a network management program on a given machine (hostname) using SNMP traps. The community and level determine which information is sent (error, warning, status).
TCPWAIT s
Causes Tivoli Data Protection for R/3 to wait s seconds before a new process is started. This gives the server enough time to accept the next process. The parameter s can be set up to around 10 seconds.
TRACE ON (default OFF)
If set to ON, Tivoli Data Protection for R/3 writes trace information into a file called BKItrace in the directory of the caller of Tivoli Data Protection for R/3. Between 50 and 100 KB of disk space per call of Tivoli Data Protection for R/3 are required.
Note:
This parameter should only be used if your Tivoli Data Protection for R/3 support asks you to.
TRACEFILE path (default stdout)
Specifies the trace file in which Tivoli Data Protection for R/3 stores all trace information (if TRACE ON); path specifies the full path and name of the file.
Note:
In an actual trace the string %BID% will be replaced by the backup ID.
TRACEMAX n (default 100)
The parameter n defines the maximum size of the trace file in KB.
Note:
If the trace information exceeds the specified limit, it is written in wrap-around fashion.
USE_AT days
Specifies on which days to use a corresponding Tivoli Storage Manager server. The days are numbered from 0 (Sunday) to 6 (Saturday). If not specified, the default is to use the Tivoli Storage Manager server on all days.
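The USE_AT evaluation can be pictured with a short Python sketch of our own (the data layout is hypothetical): given the current weekday, pick the SERVER statements that are eligible.

```python
# Sketch of USE_AT evaluation (illustration only): weekday is 0=Sunday
# ... 6=Saturday; use_at of None means "all days" (USE_AT not specified).

def eligible_servers(servers, weekday):
    """servers: list of (name, use_at) where use_at is a set of day
    numbers or None."""
    return [name for name, use_at in servers
            if use_at is None or weekday in use_at]

# Mirrors the disaster recovery example later in this appendix:
# server_a on Monday-Thursday, server_b on Friday only.
servers = [("server_a", {1, 2, 3, 4}), ("server_b", {5})]
print(eligible_servers(servers, 5))   # ['server_b']
```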
If only one path is used, SESSIONS must be equal to MAX_SESSIONS, the parameter identifying the total number of parallel sessions to be used (equivalent to number of tape drives).
Tivoli Data Protection for R/3 tries to communicate with the Tivoli Storage Manager server using the first path in the profile. If this proves successful, Tivoli Data Protection for R/3 starts the number of parallel sessions as specified for this path; if the attempt was unsuccessful, this path is skipped, and Tivoli Data Protection for R/3 continues with the next path. This continues until as many sessions are active as were specified in the total session number. If this number is never reached (for example, because several paths were inactive), Tivoli Data Protection for R/3 terminates the backup job.
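The path-selection behavior just described can be summarized as a small allocation loop. This is our own sketch of the stated rules, not the product's code; path reachability is modeled as a simple flag.

```python
# Sketch of the alternate-path logic (assumed from the text): walk the
# SERVER statements in profile order, start each reachable path's
# sessions until MAX_SESSIONS is reached; fail if it never is.

def allocate_sessions(paths, max_sessions):
    """paths: list of (name, sessions, reachable) in profile order."""
    active = []
    for name, sessions, reachable in paths:
        if len(active) >= max_sessions:
            break
        if not reachable:
            continue   # skip inactive path, try the next one
        for _ in range(min(sessions, max_sessions - len(active))):
            active.append(name)
    if len(active) < max_sessions:
        raise RuntimeError("could not reach %d sessions" % max_sessions)
    return active

# Token Ring path down, Ethernet path up (as in the first example below):
print(allocate_sessions([("server_a", 2, False), ("server_b", 2, True)], 2))
# ['server_b', 'server_b']
```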
MAX_SESSIONS      2               # 2 tape drives
SERVER            server_a        # Token Ring
ADSMNODE          T01
SESSIONS          2
PASSWORDREQUIRED  YES
BRBACKUPMGTCLASS  mdata
BRARCHIVEMGTCLASS mlog1 mlog2
# USE_AT          1 2 3 4 5 6 7
SERVER            server_b        # Ethernet
ADSMNODE          T01
SESSIONS          2
PASSWORDREQUIRED  YES
BRBACKUPMGTCLASS  mdata
BRARCHIVEMGTCLASS mlog1 mlog2
# USE_AT          1 2 3 4 5 6 7
Backup is normally performed using the Token Ring LAN (SERVER statement server_a). If the Token Ring is down, the backup is still performed using the Ethernet connection (SERVER statement server_b), although data transfer might take longer. If path 1 is active, Tivoli Data Protection for R/3 will start the 2 sessions defined in the SERVER statement for path 1. Since MAX_SESSIONS is also 2, no more sessions will be started. If path 1 is inactive, Tivoli Data Protection for R/3 will start 2 sessions on path 2. Since this equals the MAX_SESSIONS definition as well, the backup will be executed using path 2.
a. Server_a, TCP/IP address xxx.xxx.xxx.xxx
b. Server_b, TCP/IP address yyy.yyy.yyy.yyy
c. Each of these Tivoli Storage Manager servers has a tape library with two tape drives.
An R/3 server is connected to the FDDI network. The definitions in the Tivoli Data Protection for R/3 profile could be as shown in Figure 180.
MAX_SESSIONS
SERVER            server_a        # FDDI
ADSMNODE          T01
SESSIONS          2
PASSWORDREQUIRED  YES
BRBACKUPMGTCLASS  mdata
BRARCHIVEMGTCLASS mlog1 mlog2
# USE_AT          1 2 3 4 5 6 7
SERVER            server_b        # FDDI
ADSMNODE          T01
SESSIONS          2
PASSWORDREQUIRED  YES
BRBACKUPMGTCLASS  mdata
BRARCHIVEMGTCLASS mlog1 mlog2
# USE_AT          1 2 3 4 5 6 7
R/3 database backups will be done every day on both Tivoli Storage Manager systems.
MAX_SESSIONS
SERVER            server_a        # FDDI
ADSMNODE          T01
SESSIONS          4
PASSWORDREQUIRED  YES
BRBACKUPMGTCLASS  mdata
BRARCHIVEMGTCLASS mlog1 mlog2
USE_AT            1 2 3 4
SERVER            server_b        # FDDI
ADSMNODE          T01
SESSIONS          4
PASSWORDREQUIRED  YES
BRBACKUPMGTCLASS  mdata
BRARCHIVEMGTCLASS mlog1 mlog2
USE_AT            5               # for disaster recovery
Normal backups are to be performed with server_a, which is local to the R/3 database server. Every Friday (USE_AT = 5) a disaster recovery backup should be stored on a remote Tivoli Storage Manager server (server_b).
-rwxr--r--   1 root   system   ...   switch_standby.sh
-rwxr--r--   1 root   system   ...   activate_standby.sh
The script switch_standby.sh will be executed as user root on the primary node. The script performs the following actions:
- The R/3 system on the primary node will be stopped
- The online logs of the database will be archived
- The TNS Listener will be stopped
- The brarchive run on the primary node will be stopped
- The script activate_standby.sh will be invoked on the standby node
activate_standby.sh performs the following actions:
- brarchive will be stopped
- The standby database will be activated
- The standby database will be shut down and restarted
- The TNS Listener configuration will be adapted
- The TNS Listener will be started
- The file tnsnames.ora will be adapted
- The R/3 system will be restarted
************************************************************************
* Tivoli Storage Manager
* Sample Client User Options file for AIX and SunOS
************************************************************************
SErvername   server_a
Replace      On
Tapeprompt   No
DOM          /usr/sap /sapmnt/TSM /usr/sap/trans /oracle/TSM
Figure 183. Example User Option File for Tivoli Storage Manager Client
************************************************************************
* Tivoli Storage Manager
* Sample Client System Options file for AIX and SunOS
************************************************************************
SErvername         server_a
  COMMmethod       TCPip
  TCPPort          1500
  TCPServeraddress loopback
  TCPBuffsize      32
  TCPWindowsize    24
  Compression      Off
  InclExcl         /usr/tivoli/tsm/client/ba/bin/inclexcl.sample
Figure 184. Example System Options File for Tivoli Storage Manager Client
*--------------------------------------------------------------------------
* inclexcl.lst:
* Sample include/exclude list
* Task:
* Include/Exclude list of files and directories for TSM incremental backups
*--------------------------------------------------------------------------
* ***** NOTE ***** NOTE ***** NOTE *****
* This file is intended only as a model and should be
* carefully tailored to the needs of the specific site.
*--------------------------------------------------------------------------
* For all AIX systems
exclude /unix
exclude /.../core
exclude /u/.../.*sh_history
exclude /home/.../.*sh_history
* Note: It is recommended to perform system backups on a regular
* basis (e.g. using "smit mksysb"). Consequently, you can exclude
* at least the following directories (which make up about 30 MB).
exclude /usr/games/.../*
exclude /usr/bin/.../*
exclude /usr/lbin/.../*
exclude /usr/mbin/.../*
exclude /usr/sbin/.../*
*--------------------------------------------------------------------------
* For those using AFS, exclude the cache file system or file
* exclude /usr/vice/cache/*
* exclude /var/vice/cache/*
* or
* exclude /afscfs
*--------------------------------------------------------------------------
* This stuff is either not worthwhile to be included or should be backed up
* using the SAP utilities BRBACKUP/BRARCHIVE.
exclude /oracle/TSM/saparch/.../*
* exclude /oracle/TSM/sapbackup/.../*
* exclude /oracle/TSM/sapreorg/.../*  (There may be important scripts
*                                      located there, check it out and decide.)
exclude /oracle/TSM/sapdata*/.../*
exclude /oracle/TSM/sapraw*/.../*
*--------------------------------------------------------------------------
* With the above include/exclude list we implicitly include everything not
* excluded above. Especially for R/3 this means including:
* /sapmnt/TSM     > 270 MB
* /usr/sap        >  14 MB
* /oracle/stage   >  89 MB
* /oracle/TSM     >  90 MB
* and AIX related > 220 MB
*--------------------------------------------------------------------------
* Attn. Depending on your R/3 release
* and your database layout it might be
* necessary to add additional
* exclude /oracle/TSM/sapdata......
* statements. Check the BRBACKUP run
* for additional files/directories you
* want to exclude.
*--------------------------------------------------------------------------

Figure 185. Example include/exclude list file for the Tivoli Storage Manager client
F.1 Preparation
1. Get the SAP license
2. Acquire the installation kit (CDs, documentation, and R/3 Notes)
3. Check the system configuration
4. Determine the size of the source database (transaction DB02)
5. Determine table modifications
6. Plan the structure of the target database (tablespaces/dbspaces, etc.)
7. VLDB: Do you have to move tables into extra tablespaces/dbspaces?
8. Plan and create the file systems
9. Make common technical preparations in the source system
10. Deactivate cron jobs in the source system
Network Configuration: The requirements for the R/3 System network configuration are described in the manuals Integration of R/3 Server in TCP/IP Networks and SAP Software in PC Networks. If these requirements are not adhered to, you may encounter restrictions or problem situations when working with the R/3 System.
Tape drive:

An Exabyte 8mm drive with hardware compression (capacity 5 GB) is recommended. You can test the drive /dev/rmt0 as follows (the tape device is always rmt0 unless more than one tape drive exists):
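The test command itself is not reproduced in this copy; a common way to exercise a tape drive (a hedged sketch, with a scratch file as fallback target so the commands can be rehearsed without a drive) is to write a small archive and read it back:

```shell
# Verify a tape drive by writing a small tar archive and listing it back.
# TAPE defaults to /dev/rmt0 (the usual first drive on AIX); if no such
# device exists, fall back to a scratch file for a dry run.
TAPE=${TAPE:-/dev/rmt0}
[ -e "$TAPE" ] || TAPE=/tmp/tapetest.tar

echo "tsm tape test" > /tmp/tapetest.txt
tar -cf "$TAPE" /tmp/tapetest.txt   # write the test archive
tar -tf "$TAPE"                     # list it back; tapetest.txt should appear
```

If the listing shows the test file, the drive can write and read.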
CD drive:
The drive must be ISO 9660 compatible. Many CD drives can be configured, but not all can be mounted; try to mount it.
Disks
For data security reasons, distribution over at least 3 disks is required (5 or more are recommended). Display the available disks:
> lspv
(Disks marked none in the third column are unused.) Display the free space on a disk:
lspv -p <disk_name>
lslpp -l bos.rte
NFS
lslpp -l bos.net.nfs.
The daemons rpc.mountd and either biod or nfsd must then have the status active.
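On AIX, the status of these daemons can be listed with the System Resource Controller; this transcript is illustrative (output omitted):

```text
# lssrc -g nfs
```

Subsystems of the nfs group (biod, nfsd, rpc.mountd) should be reported as active.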
NLS
lslpp -l bos.loc.*
Output should show at least one ISO8859-1 locale like de_DE, en_US, fr_FR, en_GB, es_ES etc. (depending on the language(s) you want to install for your R/3 System) e.g.:
bos.loc.iso.en_US.
Motif is installed after operating system upgrades only if explicitly requested. Check this as follows:
lslpp -l X11.motif.lib
Printer
> lpstat -t
Keyboard
on the directly connected console. You can select your keyboard under Motif by setting a language environment (LANG) for which an NLS component is installed. The settings take effect after a reboot.

Network

Test the network connection to the database server:
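The network test listing is missing from this copy; a minimal sketch of such a check (DBHOST is a placeholder for the database server's host name) is:

```shell
# Basic reachability check toward the database server.
DBHOST=${DBHOST:-localhost}   # placeholder; set to the real DB server host

# Name resolution must succeed before R/3 can reach the database.
getent hosts "$DBHOST" || echo "name lookup failed - check /etc/hosts or DNS"

# Basic IP reachability (ICMP may be blocked; a failure here is a hint,
# not proof of a broken network).
ping -c 2 "$DBHOST" || echo "ping failed - check routing and firewall rules"
```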
Component        Description
bos.rte          Base Operating System Runtime
bos.adt          Base Application Development
bos.data         Base Operating System Data
bos.sysmgt       System Management
bos.diag.rte     Hardware Diagnostics Database
bos.msg.en_US    Base OS Runtime Messages - U.S. English
bos.net.nfs      Network File System
bos.net.tcp      TCP/IP
perfagent        Performance Agent
Component           Description
bos.loc.iso.en_US   Base System Locale Code Set - U.S. English
bos.loc.iso.de_DE   Base System Locale Code Set - German
bos.iconv.de_DE     Base Level Fileset (required for Local Code Set)
bos.iconv.com       Base Level Fileset (required for Local Code Set)
devices.*           Device Drivers for all installed Hardware
printers.rte        Printer Backend (if Printer installed)
X11.base            AIXwindows Runtime
X11.apps            AIXwindows Applications
X11.motif           AIXwindows Motif
X11.fnt.iso1        AIXwindows Latin 1 Fonts
X11.loc.en_US       AIXwindows Locale - U.S. English
X11.msg.en_US       AIXwindows Messages - U.S. English
X11.Dt              AIXwindows Desktop
xlC.rte             C Set ++ for AIX Application Runtime, Version 3.1.4.8 or higher
                    C for AIX Compiler (license key will be provided by DKIBMVM1 (SPCMENU))
An entry for the Tivoli Storage Manager server to connect to has to be set in the server file dsm.sys. The server file has to be present in the directories corresponding to DSM_DIR and DSMI_DIR. In the client options file dsm.opt, a NODENAME for the DB2 database has to be specified, and PASSWORDACCESS has to be set to GENERATE.
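Put together, the settings described above correspond to option files along these lines (server name and address are placeholders, not values from this installation):

```text
* dsm.sys (server stanza found via DSM_DIR / DSMI_DIR)
SERVERNAME         TSMSRV1
  COMMMETHOD       TCPIP
  TCPSERVERADDRESS tsmserver.example.com
  PASSWORDACCESS   GENERATE

* dsm.opt (client options file)
SERVERNAME         TSMSRV1
NODENAME           capeverde
```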
The location of the Tivoli Storage Manager password file has to be changed; therefore, an entry for PASSWORDDIR has to be set in the server file dsm.sys. If no PASSWORDDIR is specified in dsm.sys, a password file <nodename> is created in the default location /etc/security/adsm during setup of the DB2 ADSM connection. This password file can be accessed by the root and db2<sid> users; for security reasons, the access authorizations for this directory must not be changed. Since the user <sid>adm has to be able to invoke a backup, that user also needs read access to the file. Therefore, the password file has to be moved to another location (specified in PASSWORDDIR) where the access permissions can be adapted. As user root, the directory corresponding to PASSWORDDIR has to be created first.
# grep PASSWORDDIR /usr/tivoli/tsm/client/api/bin/dsm.sys
PASSWORDDIR /etc/tsm
# mkdir /etc/tsm
# chmod 755 /etc/tsm
The Tivoli Storage Manager password has to be set now by the user root using the DB2 UDB tool /db2/<SID>/sqllib/adsm/dsmapipw. When the password is set (and the passwordfile is written to PASSWORDDIR ), the authorization has to be changed so that the user <sid>adm is able to read the file.
# chmod 644 /etc/tsm/PALANA
Note:
The procedure shown here was examined while performing a homogeneous system copy at a customer site. Because the SAP DB2admin tools were not available on the source node, the redirected restore was done not with the SAP DB2admin tools brdb6brt and brdb6rst, but with a procedure based on the intrinsic DB2 UDB backup/restore commands. We did not build up an R/3 - DB2 UDB system in the lab environment; in the following example we refer to:

SOURCE_SID as TSM, located on node capeverde
TARGET_SID as CPY, located on node palana

We also assume that both DB2 UDB systems are registered with the node names capeverde and palana at the Tivoli Storage Manager server. During the system copy, the following steps have to be done:
Actions on the source system

1. Backup of the source system
On the source system, an offline backup of the DB2 UDB database is done to Tivoli Storage Manager. R/3 has to be stopped; afterwards, the backup can be started from the DB2 UDB command line.
# su - tsmadm
capeverde:tsmadm> stopsap
capeverde:tsmadm> exit
#
# su - db2tsm
capeverde:db2tsm> db2 BACKUP DATABASE TSM TO ADSM WITH 2 BUFFERS BUFFER 512
2. Retrieving the structural information from the source database

During the redirected restore on the target node, the tablespace containers are stored in a different directory structure than on the source system. Therefore, as one step of the redirected restore procedure, the new location of all tablespace containers has to be adapted to the new filesystem structure using the SET TABLESPACE CONTAINERS command. Since an R/3 system has a lot of tablespaces with even more containers, the necessary command file is generated from the source database using the example script get_db2_containers.sh. The script can be accessed from CD ????/ downloaded from ????. In the sample environment, a temporary filesystem /db2/<SOURCE_SID>/syscopy was created and the script was stored there.
capeverde:db2tsm> cd /db2/TSM/syscopy
capeverde:db2tsm> ls -l
-rwxr--r--   1 db2tsm   system   2227 Nov 23 10:31 get_db2_containers.sh
To dump the container layout of the source database, the script is started as user db2<source_sid>, with the output redirected to a file. The script can also be run while R/3 and the DB2 UDB database are active; it only has to be ensured that the information about the current structure is also valid for the structure of the database backup.
capeverde:db2tsm> cd /db2/TSM/syscopy
capeverde:db2tsm> ./get_db2_containers.sh >TSM.layout
The file TSM.layout contains a list of SET TABLESPACE CONTAINERS commands referencing the actual locations of the tablespace containers on the source node. All file names have to be adapted to their locations on the target node (by replacing the <SOURCE_SID> with the <TARGET_SID>):
capeverde:db2tsm> cd /db2/TSM/syscopy
capeverde:db2tsm> cat TSM.layout | sed s/TSM/CPY/g >CPY.layout
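Note that the global sed substitution also rewrites any other occurrence of the string TSM in the file. A slightly more targeted variant (a sketch; it assumes the containers live under /db2/<SID>/, demonstrated here on a synthetic layout line) anchors the SID to its path position:

```shell
# Build a one-line synthetic layout file (the real one comes from
# get_db2_containers.sh) and rewrite only the /db2/<SID>/ path component.
cat > /tmp/TSM.layout <<'EOF'
SET TABLESPACE CONTAINERS FOR 3 USING (FILE '/db2/TSM/sapdata1/btabd.c000' 25600);
EOF

sed 's|/db2/TSM/|/db2/CPY/|g' /tmp/TSM.layout > /tmp/CPY.layout
cat /tmp/CPY.layout   # the FILE clause now points below /db2/CPY/
```

Using | as the sed delimiter avoids escaping the slashes in the path.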
Note:
By editing the path names in the file, the tablespace containers can be redistributed to other locations. If a tablespace has a lot of containers, it is also possible to redefine them as a smaller number of larger containers (the size information in the file is given in 4 KB blocks). For performance reasons, all tablespace containers of one tablespace should be of the same size.
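The 4 KB block arithmetic can be checked quickly before editing the file. This sketch sums the page counts of a synthetic two-container line (file names and sizes are invented for the example):

```shell
# Two containers of 25600 pages each; at 4 KB per page this is 200 MB total.
cat > /tmp/layout.txt <<'EOF'
SET TABLESPACE CONTAINERS FOR 3 USING (FILE '/db2/CPY/sapdata1/btabd.c000' 25600, FILE '/db2/CPY/sapdata2/btabd.c001' 25600);
EOF

# Pull out every page count (a number followed by ',' or ')') and sum it.
awk '{ while (match($0, /[0-9]+[,)]/)) { pages += substr($0, RSTART, RLENGTH-1); $0 = substr($0, RSTART+RLENGTH) } }
     END { printf "%d pages = %d MB\n", pages, pages*4/1024 }' /tmp/layout.txt
# → 51200 pages = 200 MB
```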
Actions on the target system

3. Installation of the DB2 UDB database software and the Tivoli Storage Manager client, and start of the R/3 installation
We assume that all prerequisites are met and all steps are done according to the procedures in the SAP installation and homogeneous system copy guides. It is also assumed that the Tivoli Storage Manager client is installed and that all necessary customizing has been done, both in the Tivoli Storage Manager client and in DB2 UDB, so that DB2 UDB can use Tivoli Storage Manager. The R/3 installation for the homogeneous system copy has to be performed up to the point where, in a standard installation, the database load would be started. At that point, the installation process is paused.

4. Preparation of the DB2 UDB - Tivoli Storage Manager communication

On the target node, DB2 UDB has to restore a backup which is owned by the database user of the source node. To access this backup, some configuration settings have to be changed. These changes affect both the Tivoli Storage Manager API client configuration and settings in DB2 UDB.

Modification of the client server file dsm.sys: The environment variable DSMI_CONFIG points to that file. The entry for the actual Tivoli Storage Manager server has to be changed in dsm.sys. Since the backup taken on the source node has to be restored, the node name has to be set to the node name of the source node. Also, PASSWORDACCESS has to be changed to PROMPT.
Modification of the DB2 UDB configuration: DB2 UDB holds some Tivoli Storage Manager settings in the database configuration. A prerequisite for being able to set them is the existence of a database <TARGET_SID>. If it does not exist yet, the database has to be created on the target node first:
# su - db2cpy
palana:db2cpy> db2 CREATE DATABASE CPY
If the database for <TARGET_SID> exists on the target node, the following parameters can be adjusted:

ADSM_NODENAME - the Tivoli Storage Manager node name of the source database.
ADSM_OWNER - the administrative DB2 UDB user of the source database.
ADSM_PASSWORD - the Tivoli Storage Manager password corresponding to the node name of the source database.
ADSM_MGMTCLASS - the name of the management class to which the backup of the source database was stored. This parameter only has to be set if a management class different from the default management class is used.
These parameters can be maintained using the DB2 UDB command UPDATE DB CFG FOR <TARGET_SID> USING <PARAMETER> <VALUE>;
# su - db2cpy
palana:db2cpy> db2 UPDATE DB CFG FOR CPY USING ADSM_NODENAME capeverde
palana:db2cpy> db2 UPDATE DB CFG FOR CPY USING ADSM_OWNER db2tsm
palana:db2cpy> db2 UPDATE DB CFG FOR CPY USING ADSM_PASSWORD <password>
5. Redirected restore of the database

The file containing the layout of the tablespace containers has to be copied to the target node. Using the layout file, a script file for the complete restore procedure can be built. If everything is well prepared, this script runs the complete redirected restore without any further user intervention. The command file has the following contents: at the top, the RESTORE command has to be issued:
RESTORE DATABASE TSM FROM ADSM INTO CPY WITH 2 BUFFERS BUFFER 512 REPLACE EXISTING REDIRECT;
In the middle section of the command file, all SET TABLESPACE CONTAINERS commands have to appear, so the layout file is copied into that part.
Note:
During the phase of the redirected restore in which DB2 UDB processes the SET TABLESPACE CONTAINERS commands, the files corresponding to the tablespace containers are created in the target directories. The user db2<target_sid> has to be able to create these files, so the user limits concerning file sizes and so on have to be sufficient. At the end of the command file, a RESTORE DATABASE CONTINUE command has to appear:
RESTORE DATABASE CPY CONTINUE;
After the command file is created, the redirected restore can be started using the -f option of the db2 command line interface, which specifies that all commands are read from a command file. The further options are: -v for verbose output, -t to indicate that the db2 commands are terminated by a semicolon (;), and -z to write the log output to a file.
# su - db2cpy
palana:db2cpy> cd /db2/CPY/syscopy
palana:db2cpy> cat header.cmd
RESTORE DATABASE TSM FROM ADSM INTO CPY WITH 2 BUFFERS BUFFER 512 REPLACE EXISTING REDIRECT;
palana:db2cpy> cat foot.cmd
RESTORE DATABASE CPY CONTINUE;
palana:db2cpy> cat header.cmd CPY.layout foot.cmd >restore.cmd
palana:db2cpy> db2 -tvf restore.cmd -z restore.log
Note:
It is very important to keep the terminal session from which the restore was started active and undisturbed. This session is the only window from which error handling for this restore can be invoked. The redirected restore first accesses the Tivoli Storage Manager server and gets information about the files of the backup; afterwards, all tablespace containers are created in the filesystem, and only when all of them are present is the actual restore from Tivoli Storage Manager initiated.
Note:
Depending on the size of the database, the creation of the tablespace containers can take a long time. A session to the Tivoli Storage Manager server is opened at the beginning of the restore, but there is no further communication from DB2 UDB on this session until all tablespace containers are created and the restore continues. This session must not be canceled due to a timeout, so it has to be ensured that the COMMTIMEOUT parameter of the Tivoli Storage Manager server is set sufficiently high. To be on the safe side, increase the COMMTIMEOUT of the Tivoli Storage Manager server using the SETOPT command:
dsmadmc>SETOPT COMMTIMEOUT 65535
Note:
After all the data is restored, DB2 UDB generates the log files. The default path for the log files immediately after the restore is /db2/<TARGET_SID>/db2<target_sid>/NODE0000/SQL00001/SQLOGDIR. During the restore, this path is created as a directory in the filesystem /db2/<TARGET_SID>; it is different from the path assigned to the logs for production operation, /db2/<TARGET_SID>/log_dir. If the log files cannot be stored in this directory (for example, due to free-space problems), the restore fails, all restored files are immediately deleted from the filesystem without any further warning, and the restore has to be started from the beginning. To avoid this, it has to be ensured that there is sufficient space for the log files in this directory. One possibility is to increase the size of the /db2/<TARGET_SID> filesystem in advance. Another possibility is to set a symbolic link for SQLOGDIR pointing to a directory with sufficient free space (of course, the user db2<target_sid> has to be able to read and write to that directory). This link has to be set up during the actual restore from Tivoli Storage Manager, that is, after all tablespace containers are redefined and before the backed-up data is restored. After the restore is complete, the path for the log files has to be changed to the dedicated path /db2/<TARGET_SID>/log_dir, and the parameters for the Tivoli Storage Manager have to be reset to an empty value.
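The symbolic-link workaround can be sketched as follows (paths follow the pattern described above; BASE defaults to a scratch directory so the commands can be rehearsed, and would be /db2/CPY on the real target node):

```shell
# Redirect SQLOGDIR to a filesystem with enough free space for the logs.
BASE=${BASE:-/tmp/syscopy-demo/db2/CPY}     # real value: /db2/CPY
LOGPARENT="$BASE/db2cpy/NODE0000/SQL00001"  # created by the restore itself

mkdir -p "$BASE/biglogs" "$LOGPARENT"
rm -rf "$LOGPARENT/SQLOGDIR"                  # drop the default log directory
ln -s "$BASE/biglogs" "$LOGPARENT/SQLOGDIR"   # point it at the roomy filesystem
ls -ld "$LOGPARENT/SQLOGDIR"
```

Remember that the link only matters during the restore window described above; afterwards the log path is switched to /db2/<TARGET_SID>/log_dir anyway.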
# su - db2cpy
palana:db2cpy> db2 UPDATE DB CFG FOR CPY USING NEWLOGPATH /db2/CPY/log_dir
palana:db2cpy> db2 UPDATE DB CFG FOR CPY USING ADSM_NODENAME ''
palana:db2cpy> db2 UPDATE DB CFG FOR CPY USING ADSM_OWNER ''
palana:db2cpy> db2 UPDATE DB CFG FOR CPY USING ADSM_PASSWORD ''
Also, the entries in the dsm.sys for PASSWORDACCESS and NODENAME have to be changed to their old values.
PASSWORDACCESS GENERATE
NODENAME       palana
Afterwards, the log sequence number can be changed by switching the LOGRETAIN mode and the USEREXIT parameter. First, the settings have to be deactivated:
# su - db2cpy
palana:db2cpy> db2 UPDATE DB CFG FOR CPY USING LOGRETAIN OFF
palana:db2cpy> db2 UPDATE DB CFG FOR CPY USING USEREXIT OFF
Then all connections to the database have to be closed, and afterwards the settings have to be reactivated:
# su - db2cpy
palana:db2cpy> db2 UPDATE DB CFG FOR CPY USING LOGRETAIN ON
palana:db2cpy> db2 UPDATE DB CFG FOR CPY USING USEREXIT ON
Note:
Immediately after the value for LOGRETAIN is switched to on, a backup of the DB2 UDB database has to be done. Until a backup is performed, DB2 UDB only allows access to the database if the value for LOGRETAIN is switched back to off. After the backup, the necessary database steps are completed.

6. Resuming the R/3 installation and performing the post-installation tasks

The R/3 installation process can be restarted afterwards. After the R/3 installation process is completed, the additional post-installation tasks have to be performed as described in the SAP manual Homogeneous System Copy.
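Following the pattern of the backup command used on the source system, this required backup would look like the following (a hedged transcript, not taken from the original):

```text
# su - db2cpy
palana:db2cpy> db2 BACKUP DATABASE CPY TO ADSM WITH 2 BUFFERS BUFFER 512
```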
Note:
If the SAP DB2admin tools are present on both nodes, the homogeneous system copy can be done using these tools as described in the SAP documentation. Steps 1, 2, and 5 are then done in a different way, but the other considerations are still valid.

1. The backup of the source database would be done using brdb6brt:
brdb6brt -s TSM -nb 2 -bs 512 -bpt ADSM
The <time stamp> of the backup can be extracted from the TSM.rti file.
The following terms are trademarks of the International Business Machines Corporation in the United States and/or other countries:
IBM AIXwindows AT CT DB2 Universal Database Enterprise Storage Server FICON MQSeries Netfinity OS/390 PROFS RAMAC RISC Systems/6000 S/390 SP2 VisualInfo XT AIX AS/400 C Set ++ DB2 EDMSuite ESCON Magstar MVS/ESA OS/2 OS/400 RACF RETAIN RS/6000 SP System/390 VM/ESA 400
The following terms are trademarks of other companies: Tivoli, Manage. Anything. Anywhere., The Power To Manage., Anything. Anywhere., TME, NetView, Cross-Site, Tivoli Ready, Tivoli Certified, Planet Tivoli, and Tivoli Enterprise are trademarks or registered trademarks of Tivoli Systems Inc., an IBM company, in the United States, other countries, or both. In Denmark, Tivoli is a trademark licensed from Kjøbenhavns Sommer - Tivoli A/S. C-bus is a trademark of Corollary, Inc. in the United States and/or other countries. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and/or other countries. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States and/or other countries. PC Direct is a trademark of Ziff Communications Company in the United States and/or other countries and is used by IBM Corporation under license. ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United States and/or other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. SET and the SET logo are trademarks owned by SET Secure Electronic Transaction LLC. Other company, product, and service names may be trademarks or service marks of others.
Book Title                                                                Publication Number

General Topics
Using Tivoli to Manage a Large Scale SAP R/3 Environment                  SG24-5500
Tivoli Storage Manager / ADSM Concepts                                    SG24-4877
Getting Started with ADSM: A Practical Implementation Guide               SG24-5416
ADSM Version 3.7 Technical Guide                                          SG24-5477
Using ADSM Hierarchical Storage Management                                SG24-4631
Client Disaster Recovery: Bare Metal Restore                              SG24-4880
Planning for IBM Remote Copy                                              SG24-2595
P/DAS and Enhancement to the IBM 3990-6 and RAMAC Array Family            SG24-4724
TME 10 Cookbook for AIX: Systems Management and Networking Applications   SG24-4867

Specific Server Books
A Practical Guide to Implementing ADSM on AS/400                          SG24-5472
Windows NT Backup and Recovery with ADSM                                  SG24-2231
Using TSM in a Clustered NT Environment                                   SG24-5742
ADSM Server-to-Server Implementation and Operation                        SG24-5244
ADSM Server for Windows NT Configuration and Recovery Examples            SG24-4878
ADSM/6000 on 9076 SP2                                                     GG24-4499
ADSM for OS/2: Advanced Topics                                            SG24-4740
ADSM/VSE Implementation Guide                                             SG24-4266
Implementing the Enterprise Storage Server in Your Environment            SG24-5420

ADSM with Other Products
A Practical Guide to Network Storage Manager                              SG24-2242
Using ADSM to Back Up Databases                                           SG24-4335
Using ADSM to Back Up Lotus Notes                                         SG24-4534
Using ADSM to Back Up and Recover Microsoft Exchange Server               SG24-5266
Using ADSM to Back Up OS/2 LAN Server and Warp Server                     SG24-4682
Backup, Recovery, and Availability with DB2 Parallel Edition on RISC/6000 SG24-4695
ADSM Operation and Management with TME10                                  SG24-2214
CD-ROM Title                                                              Collection Kit Number
System/390 Redbooks Collection
Networking and Systems Management Redbooks Collection
Transaction Processing and Data Management Redbooks Collection
Lotus Redbooks Collection                                                 SK2T-8039
Tivoli Redbooks Collection                                                SK2T-8044
AS/400 Redbooks Collection                                                SK2T-2849
Netfinity Hardware and Software Redbooks Collection                       SK2T-8046
RS/6000 Redbooks Collection (BkMgr)                                       SK2T-8040
RS/6000 Redbooks Collection (PDF Format)                                  SK2T-8043
Application Development Redbooks Collection                               SK2T-8037
IBM Enterprise Storage and Systems Management Solutions                   SK3T-3694
Book Title                                                                                 Publication Number
Tivoli Data Protection for Informix V3R7: Installation and Users Guide                     SH26-4095
Tivoli Data Protection for Lotus Domino for UNIX V1R1: Installation and Users Guide        SH26-4088
Tivoli Data Protection for Lotus Domino for Windows NT V1R1: Installation and Users Guide  GC26-7320
Tivoli Data Protection for Lotus Notes on Windows NT Installation and Users Guide          SH26-4065
Tivoli Data Protection for Lotus Notes on AIX Installation and Users Guide                 SH26-4067
Tivoli Data Protection for Microsoft Exchange Server Installation and Users Guide          SH26-4071
Tivoli Data Protection for Microsoft SQL Server Installation and Users Guide               SH26-4069
Tivoli Data Protection for Oracle Backup on AIX Installation and Users Guide               SH26-4061
Tivoli Data Protection for Oracle Backup on Sun Solaris Installation and Users Guide       SH26-4063
Tivoli Data Protection for Oracle Backup on HP-UX Installation and Users Guide             SH26-4073
Tivoli Data Protection for Oracle Backup on Windows NT Installation and Users Guide        SH26-4086
Tivoli Decision Support 2.1 Installation Guide (October 1999)                              GC32-0438
Tivoli Decision Support 2.1 Administrator Guide (October 1999)                             GC32-0437
Tivoli Decision Support 2.1 Users Guide (October 1999)                                     GC32-0436
General Topics
IBM CommonStore Server Installation Guide
IBM SCSI Tape Drive, Medium Changer, and Library Device Drivers Installation and Users Guide
R/3 Homogeneous System Copy
R/3 Installation on UNIX - Oracle Database
Check list - Installation Requirements: Oracle
SAP Documentation Guides CD
51002956
51007113
e-mail address usib6fpl@ibmmail.com Contact information is in the How to Order section at this site: http://www.elink.ibmlink.ibm.com/pbl/pbl
Fax Orders

United States (toll free)   1-800-445-9269
Canada                      1-403-267-4455
Outside North America       Fax phone number is in the How to Order section at this site: http://www.elink.ibmlink.ibm.com/pbl/pbl
This information was current at the time of publication, but is continually subject to change. The latest information may be found at the redbooks Web site.
Index Numerics
7133 Disk subsystem 7133 SSA 253 279 Archive 14 Archive Development Kit (ADK) 189 Archive Link interface 189 archive packages 14 archive redo log 101 archive redo logs and data file 103 archive system 217 Archive/Restore log files 74 archive/retrieve, use by commonstore ARCHIVE_GLOBAL_PATH 209 archived data size 110 Archived retained 74 ArchiveLink 208 archiving 89 archiving object 219 ARCH-PRO 78 AS/400 backup 10 Asynchronous Backup 253 authentication 17 automating server and client 110 availability considerations 88
A
Accessing files by logical file 209 ADABAS 68 ADINT/ADSM 69, 72 ADINT/ADSM AGENT(s) 71 adjustable disk blocksize 64 adjustable Tivoli Storage Manager blocksize 64 ADK 202 administration authority 161 Administration Tools 153 Configurator 163 delivery 155 Installation 154 overview 67 Performance Monitor 175 Save Configuration 171 server 157 slave server 157 User Administration 161 verification procedure 159 administration tools server 154 Administration Tools Configurator 68 Administration Tools for Tivoli Data Protection for R/3 65 Administration Tools Performance Monitor 68 Administration Tools Server 66 Administration Tools server Copy Configuration 173 user profiles 162 Administration Tools slave server 67 Administration Tools, architecture 65 Administration Tools,setup 153 administrative database user 235 ADSM (ADSTAR Distributed Storage Manager) 5 ADSM, upgrade from 109 AIX chdev 255 AIX logical volume manager 254 AIX Split mirror implementation 256 alternate backup path 62 Alternate backup server 72 Analyzing performance bottlenecks 181 API 104 API setup 111 API setup for EDM Suite CommonStore for SAP 114 Application instrumentation 320 Application protection 3 ARCH-Dispatcher 78 archint.ini 192, 194, 201 architectural component, Tivoli Storage Manager 6 Architecture of ADINT/ADSM 70 architecture, Tivoli Storage Manager 3 ARCHIVE 192
B
BACKAGENT 121 BACKINT 277 BACKINT (BC-BRI) 65 Backup by version 63 backup sessions, reviewing 179 backup sets 14 Backup/Archive client, description 7 BACKUPIDPREFIX 121 Basis customizing 217 bootable diskettes 22 both-sides verification 17 BRARCHIVE 118 BRARCHIVEMGTCLASS 168 brbackup 276 BRBACKUPMGTCLASS 121 BRMS/400 10 Business Continuance Volume (BCV) 270 business requirements 86
C
case study 101 central instance 234 Change management 88 Checklist - Installation Requirements Oracle 233 client platform 7 client scheduler 112 client software 6 cluster LIC 264 collocation 102 co-locate 16 committed transactions 94 COMMMETHOD 105 Common Object Request Broker Architecture (CORBA)
313 CommonStore 103, 104, 189 architecture 76 ArchiveLink interface 190 functionality 76 installation and setup 189, 195 Overview and planning 189 Planning the installation 191 profile parameters 193 Server 190 server processes 192 server profile 201 Commonstore 94 CommonStore administrative user 196 CommonStore client 77, 79 CommonStore Server 77 Comparison of split-mirror implementations 274 compression 183 Computing Center Management System (CCMS) 321 CONFIG_FILE 121 configuration authority 161 configuration data 66 configuration files, Tivoli Storage Manager 66 configuration history 68, 173 Configurator 153 consistency check 68 CONTROL, administration tool 69 Controlling Event Logging 328 Copy Access 274 copy groups 17, 103 CPIC user 190, 203 Creating test data 253
E
Early archiving 76 EDM Suite CommonStore 110 EDMSuite CommonStore for SAP 75 EMC split mirror implementation 270 EMC Symmetrix 253 EMC TimeFinder 270 encapsulation of key services 313 Enterprise Console 315 Enterprise protection 3 Enterprise Storage Server 264 Enterprise storage server specialist 264 enterprise-wide solution 4 Error messages 131 ESS FlashCopy error logs 264 implementation 264 Invoking 266 NOCOPY option 265 started for the first time 267 target volume 265 viewing and updating the configuration ESS FlashCopy commands rsExecuteTask 266 rsFlashCopy 266 rsList2105s 266 rsPrimeServer 266 rsQuery 266 Establish logical volume mirror 258 event adapters 315 Event Analysis 19 event filter 10 event logging 10 event receivers 10 Executables 119 exploiting SAN infrastructure 6 Extended Remote Copy 263 external library manager interface 10
D
data availability 4 Data compression 114 data management APIs 10 data management functions 6 Data Mining and Data Warehousing 253 data objects 14 Data protection 87 Data Protection for application solutions 21 Data protection, disk failures 93 Data storage 101 DB2 Control Center 73 DB2 Control Center (DB2CC) 73 DB2 SmartGuides 73 DB2 UDB, backup and recovery 73 Decision Support Guides 318 DEFINE SCHEDULE 110 Definition of queues 216 delete program 217 DESTINATION 191 Destiny 317 device classes 101 Disaster Recovery 5 disaster recovery for Windows NT 22 Disaster Recovery Manager 18 disaster recovery plan 19
F
fan-out 315 fast null block compression algorithm 64 Fibre Channel 4 file multiplexing 64 file system 217 Flash Copy 263 FlashCopy options 266 FlashCopy, overview 264 Framework 314 Full+Incremental, comparison 14
G
Gateway host 207 General server setup 104 graphical performance analyzer 186 Graphical User Interface 104
HACMP 93, 95 hardware compression 114 Hardware Upgrade while cloning 230 hdisk 254 help desk usage 7 Hierarchical Space Management 5, 17 High Performance Switch 106 Homogeneous or Heterogeneous System Copy 229 Homogenous System Copy 229 hot standby server 93
MAX_SESSIONS 121, 183 MAX_VERSIONS 121 maxscr 101 methods of establishing a copy 253 migrate 15 mirroring 253 monitoring authority 161 Monitoring R/3 backup 177 Multiple management classes 62, 72 Multiple redo log copies 62 multiple versions 14 multiplex level 2 178 multiplexing level 184 multi-session clients 7 mutual suspicion algorithm 17
N
Network Information System (NIS) 198 network management 63 Network topology 106 network-induced bottlenecks, reduction 62 node registration 103 Node definition 109 CommonStore 111 number of backup versions 13
I
IBM 7017-S70 86 IBM Enterprise Storage Server 253 implementation CommonStore 189 Inquire function 124 Installation and setup of CommonStore Server 195 Installation of the R/3 instance 232 installation steps, TDP for R/3 118 installing and customizing TDP for R/3 117 Instant Archive 14 instant copy, with ESS 264 instantaneous copy 253 integration into Tivoli Framework, Tivoli Storage Manager 322 Integration with R/3, TDP 64 Integrity requirements 89 Invoke FlashCopy 267
O
ODBC drivers 7 Off-line backup of source database system 235 Offline retained 74 off-site media tracking 19 Online active 74 Online retained 74 Online/Offline backup with BRBACKUP 130 Operational Backup and Restore of Data 5 Oracle 276 Oracle and R/3 277 overview of SAP system cloning 229 Ownership and permissions 119
J
JDK 153 JRE 153
L
LAN-free backup solutions 6 LAN-free recovery 14 Late archiving 76 Legal requirements 89 log file management concepts 74 Logging Events to the Tivoli Event Console 329 logical server name 62 logical system 207 logical volume manager (LVM) 254 logical volume mirror 258 Lotus Domino 20
P
Parallel backup and restore 71 parallelism 64 password 17 Password handling 64 Peer-to-Peer Remote Copy 263, 266 peer-to-peer servers 6 Performance Analysis 19 Performance Analysis and Reporting 318 performance data 66 performance improvements with TDP for R/3 64 Performance monitor session 185 Performance Monitor. 153 physical partition 256 physical-logical relationships 254 Planning a Homogeneous System Copy 232 Planning checklist 90
M
Maestro 315 management classes 17
Planning Homogeneous System Copy 233 point-in-time backups 7 Policy concepts 16 policy definition 108 policy domain 16 policy set 17 policy-based management 321 Prerequisites 117 procedure of System copy 233 PROGID 191 progressive backup methodology 7, 13, 15 PTFs for Tivoli Storage Manager 118
Q
quality assurance 88 queue administrator 216
R
R/3 ADABAS database administration tools 68 R/3 ADK customizing 217 R/3 central instance and database 234 R/3 customizing 202 R/3 database backup procedure, split mirror 276 R/3 database utilities BRBACKUP 118 R/3 DB2 UDB database administration 72 R/3 gateway process 198 R/3 scheduler 127 R/3 split mirror backup using Tivoli storage manager 276 R/3 System Copy 230 R/3 System Landscape setup 230 R/3 Transport Management System (TMS) 230 R3SETUP -f CEDBR3 235 RAID 5 93 RAID-1/RAID-5 88 raw device backup 7 raw logical volumes 14 Recovery using FlashCopy 269 Re-establish logical volume mirror 258 Registration 206 Remote Function Calls (RFC) 190 remote location 88 Remote Method Invocation (RMI) 154 remote mirroring 96 Reporting/Tracing 72 reports 20 Restore from a BCV 273 retrieve 14 review control panel 180 reviewing completed backups 181 RFC destination 205 role-based security 316
S
SAN 7 SAN management 3 SAP ArchiveLink 75 SAP certification 64
SAP KERNEL ORACLE 235 SAPDBA profile 66, 120, 153 SAPDBA, BRBACKUP 118 SAPGUI 199, 315 SAPOSCOL 321 scale button 176 scratch volume 101 second site 95 Security concepts 17 SERVER 121 server hierarchies 6 server parameter list 169 server profile archint.ini 191 SESSIONS 121 Setup R/3 System Landscape 230 Simple Network Management Protocol (SNMP) 314 Simultaneous archiving 76 single point of control. 67 slice, dice 19 SMGW 199 SNA LU6.2 105 source database system 235 space to store the archive redo log 108 Split logical volume mirror 258 split mirror backup procedure 279 Split mirror overview 253 split-mirror 93, 94 split-mirror implementations, comparisons 274 splitting the mirror 253 spool output 223 SQL SELECT statements 7 SSSIs ABC OpenVMS 10 standby database server 93 Storage Area Network (SAN) 4 storage hierarchy 7, 15 Storage Management Analysis 17 Storage Management Analysis, Tivoli Decision Support for 318 storage policy 102 storage pool 15 Storage pool definition 107 storage pool volumes 15 storage pools 101 storage repository 7 storage requirements 87 Storage Resource Management 5 storage subsystem features 253 Storage subsystem specific implementation 7133 SSA disk 255 Support for ORACLE in raw devices 64 Supported platforms Administration Tools 68 Symmetrix 270 syntax groups 209 System availability 95 System cloning 229 System Configuration 164 system landscape, real world 67 System management and reporting 63 Systems Administration 198
T
Tablespace backup with SAPDBA 131
target database, activation 232
target SAP system 235
TCP/IP 105
Technical requirements 89
thresholds 108
Tivoli Application Performance Manager 320
Tivoli Data Protection for applications 20
Tivoli Data Protection for R/3
   File Manager 125
   parameters 170
Tivoli Data Protection for R/3 configuration 278
Tivoli Data Protection for R/3 message catalog 120
Tivoli Data Protection for Workgroups 22
Tivoli Database Management Products 315
Tivoli Decision Support 318
Tivoli Decision Support Discovery Guides 19
Tivoli Decision Support for Storage Management Analysis 19
Tivoli Desktop 314
Tivoli Disaster Recovery Manager 18
Tivoli Disaster Recovery Manager (DRM) 104
Tivoli Distributed Monitoring 314
Tivoli Enterprise 3
Tivoli Enterprise Console 315
Tivoli Enterprise framework 10
Tivoli Framework 313
Tivoli Global Enterprise Manager 316
Tivoli Global Sign-On 316
Tivoli Maestro for R/3 315
Tivoli Management Environment 313
Tivoli Management Environment 10 (TME 10) 323
Tivoli Manager for DB2 316
Tivoli Manager for MQSeries 319
Tivoli Manager for Oracle 316
Tivoli Manager for R/3 319, 321
Tivoli Manager for Sybase 316
Tivoli NetView 314, 330
Tivoli Output Manager 317
Tivoli Plus for Tivoli Storage Manager 322
Tivoli Plus Setup 323
Tivoli Security Management (Lockdown Module) 316
Tivoli Software Distribution 315
Tivoli Storage Manager 317
   API 104
   architecture 6
   base concepts 12
   schedule 128
   server 6
   servers 62
Tivoli Storage Manager server 6
Tivoli Storage Manager setup 99
Tivoli Storage Manager supported environment 11
Tivoli Storage Manager, introduction to 5
Tivoli systems management 313
Tivoli User Administration 316
Tivoli Workload Scheduler (Maestro) 315
TME 10 Distributed Monitoring 325
Transaction simulation 321
U
UNIX based clients 7
UNIX crontab 129
use of storage resources 14
used backup sessions 182
user controlled migration and recall 18
util_file_online feature 64
V
Validating and activating the policy set 109
Version control 71
virtual volumes 7
Vital Record Retention Archive and Retrieval 5
Vital records 14
W
web client interface 7
Web user interfaces 7
Withdraw 267
Workflow 207
Workgroup protection 3, 4
working directory 235
writing data to different storage pools 63
SG24-5743-00
ISBN 0738416371