
Front cover

End-to-End Scheduling
with Tivoli Workload
Scheduler 8.1
Plan and implement your end-to-end
scheduling environment with TWS 8.1
Model your environment using
realistic scheduling scenarios
Learn the best practices
and troubleshooting

Vasfi Gucer
Stefan Franke
Finn Bastrup Knudsen
Michael A Lowry

ibm.com/redbooks

International Technical Support Organization


End-to-End Scheduling with
Tivoli Workload Scheduler 8.1
May 2002

SG24-6022-00

Take Note! Before using this information and the product it supports, be sure to read the
general information in Notices on page xix.

First Edition (May 2002)


This edition applies to Tivoli Workload Scheduler V 8.1.
Comments may be addressed to:
IBM Corporation, International Technical Support Organization
Dept. JN9B Building 003 Internal Zip 2834
11400 Burnet Road
Austin, Texas 78758-3493
When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the
information in any way it believes appropriate without incurring any obligation to you.
Copyright International Business Machines Corporation 2002. All rights reserved.
Note to U.S. Government Users: Documentation related to restricted rights. Use, duplication, or disclosure is subject to
restrictions set forth in GSA ADP Schedule Contract with IBM Corp.

Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
The team that wrote this redbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Notice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Comments welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Job scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Introduction to Tivoli Workload Scheduler for z/OS. . . . . . . . . . . . . . . . . . . 3
1.2.1 Overview of Tivoli Workload Scheduler for z/OS . . . . . . . . . . . . . . . . 3
1.2.2 Tivoli Workload Scheduler for z/OS architecture . . . . . . . . . . . . . . . . 3
1.3 Introduction to Tivoli Workload Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.1 Overview of Tivoli Workload Scheduler . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.2 Tivoli Workload Scheduler architecture. . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Benefits of integrating TWS for z/OS and TWS . . . . . . . . . . . . . . . . . . . . . 5
1.5 Summary of enhancements in Tivoli Workload Scheduler 8.1 . . . . . . . . . . 6
1.5.1 Enhancements to Tivoli Workload Scheduler for z/OS . . . . . . . . . . . . 6
1.5.2 Enhancements to Tivoli Workload Scheduler . . . . . . . . . . . . . . . . . . . 7
1.5.3 Enhancements to the Job Scheduling Console . . . . . . . . . . . . . . . . . 8
1.6 The terminology used in this book. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.7 Material covered in this book. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Chapter 2. End-to-end TWS architecture and components. . . . . . . . . . . . 15
2.1 Tivoli Workload Scheduler architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.1 The Tivoli Workload Scheduler network . . . . . . . . . . . . . . . . . . . . . . 18
2.1.2 Tivoli Workload Scheduler workstation types . . . . . . . . . . . . . . . . . . 21
2.1.3 Tivoli Workload Scheduler topology . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.1.4 Tivoli Workload Scheduler components . . . . . . . . . . . . . . . . . . . . . . 24
2.1.5 Tivoli Workload Scheduler database objects . . . . . . . . . . . . . . . . . . 25
2.1.6 Tivoli Workload Scheduler plan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.1.7 Other Tivoli Workload Scheduler features . . . . . . . . . . . . . . . . . . . . 34
2.1.8 Making the Tivoli Workload Scheduler network fail-safe. . . . . . . . . . 37
2.2 Tivoli Workload Scheduler for z/OS architecture. . . . . . . . . . . . . . . . . . . . 39


2.2.1 Tivoli Workload Scheduler for z/OS configuration. . . . . . . . . . . . . . . 40


2.2.2 Tivoli Workload Scheduler for z/OS database objects . . . . . . . . . . . 45
2.2.3 Tivoli Workload Scheduler for z/OS plans. . . . . . . . . . . . . . . . . . . . . 50
2.2.4 Other Tivoli Workload Scheduler for z/OS features . . . . . . . . . . . . . 57
2.3 End-to-end scheduling architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.3.1 How end-to-end scheduling works . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.3.2 Tivoli Workload Scheduler for z/OS end-to-end components . . . . . . 66
2.3.3 Tivoli Workload Scheduler for z/OS end-to-end configuration . . . . . 70
2.3.4 Tivoli Workload Scheduler for z/OS end-to-end database objects . . 71
2.3.5 Tivoli Workload Scheduler for z/OS end-to-end plans . . . . . . . . . . . 73
2.3.6 How to make the TWS for z/OS end-to-end fail-safe . . . . . . . . . . . . 78
2.3.7 Tivoli Workload Scheduler for z/OS end-to-end benefits . . . . . . . . . 80
2.4 Job Scheduling Console and related components . . . . . . . . . . . . . . . . . . 81
2.4.1 A brief introduction to the Tivoli Management Framework . . . . . . . . 82
2.4.2 Job Scheduling Services (JSS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
2.4.3 Connectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Chapter 3. Planning, installation, and configuration of the TWS 8.1 . . . . 91
3.1 Planning end-to-end scheduling for TWS for z/OS . . . . . . . . . . . . . . . . . . 92
3.1.1 Preventive Service Planning (PSP). . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.1.2 Software ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.1.3 Tracker agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.1.4 Workload Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.1.5 System documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.1.6 EQQPDF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.1.7 Sharing hierarchical file system (HFS) cluster . . . . . . . . . . . . . . . . . 95
3.1.8 TCP/IP considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
3.1.9 Script repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.1.10 Submitting user ID. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.2 Installing Tivoli Workload Scheduler for z/OS . . . . . . . . . . . . . . . . . . . . . 100
3.2.1 Executing EQQJOBS installation aid . . . . . . . . . . . . . . . . . . . . . . . 102
3.2.2 Defining Tivoli Workload Scheduler for z/OS subsystems . . . . . . . 106
3.2.3 Allocate end-to-end datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
3.2.4 Create and customize the work directory . . . . . . . . . . . . . . . . . . . . 109
3.2.5 Create the started task procedures for TWS for z/OS . . . . . . . . . . 109
3.2.6 Define end-to-end initialization statements . . . . . . . . . . . . . . . . . . . 110
3.2.7 Initialization parameter used to describe the topology . . . . . . . . . . 115
3.2.8 Reflecting a distributed environment in TWS for z/OS . . . . . . . . . . 122
3.2.9 Verify the installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
3.3 Installing the TWS for z/OS TCP/IP server for JSC usage . . . . . . . . . . . 131
3.3.1 Controlling access to OPC using Tivoli Job Scheduling Console . . 132
3.4 Planning end-to-end scheduling for TWS . . . . . . . . . . . . . . . . . . . . . . . . 133
3.4.1 Network planning and considerations . . . . . . . . . . . . . . . . . . . . . . . 134


3.4.2 Backup domain manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135


3.4.3 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
3.4.4 FTA naming conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
3.5 Installing TWS in an end-to-end environment . . . . . . . . . . . . . . . . . . . . . 141
3.5.1 Installing multiple instances of TWS on one machine . . . . . . . . . . . 142
3.6 Installing and configuring Tivoli Framework . . . . . . . . . . . . . . . . . . . . . . 144
3.6.1 Installing Tivoli Management Framework 3.7B . . . . . . . . . . . . . . . . 145
3.6.2 Upgrade to Tivoli Management Framework 3.7.1 . . . . . . . . . . . . . . 145
3.7 Install Job Scheduling Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
3.7.1 Installing the connectors (for TWS and TWS for z/OS connector) . 146
3.7.2 Creating connector instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
3.7.3 Creating TMF Administrators for Tivoli Workload Scheduler . . . . . 148
3.8 Planning for Job Scheduling Console availability . . . . . . . . . . . . . . . . . . 153
3.9 Installing the Job Scheduling Console . . . . . . . . . . . . . . . . . . . . . . . . . . 156
3.9.1 Hardware and software prerequisites . . . . . . . . . . . . . . . . . . . . . . . 157
3.9.2 Installing the JSC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
3.10 The Tivoli Workload Scheduler security model . . . . . . . . . . . . . . . . . . . 160
3.10.1 The security file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
3.10.2 Sample security file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
3.11 Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Chapter 4. End-to-end implementation scenarios and examples. . . . . . 169
4.1 The rationale behind the conversion to end-to-end scheduling. . . . . . . . 171
4.2 Description of our environment and systems . . . . . . . . . . . . . . . . . . . . . 172
4.3 Implementing TWS for z/OS end-to-end scheduling from scratch . . . . . 174
4.3.1 The configuration and topology for the end-to-end network . . . . . . 176
4.3.2 Customize the TWS for z/OS plan program topology definitions . . 177
4.3.3 Customize the TWS for z/OS engine . . . . . . . . . . . . . . . . . . . . . . . 186
4.3.4 Restart the TWS for z/OS engine . . . . . . . . . . . . . . . . . . . . . . . . . . 186
4.3.5 Define fault tolerant workstations in the engine database. . . . . . . . 187
4.3.6 Activate the fault tolerant workstation definitions . . . . . . . . . . . . . . 188
4.3.7 Create jobs, user definitions, and job streams for verification tests 190
4.3.8 Verification test of the end-to-end installation . . . . . . . . . . . . . . . . . 194
4.3.9 Some general experiences from the test environment . . . . . . . . . . 200
4.4 Migrating TWS for z/OS tracker agents to TWS for z/OS end-to-end . . . 201
4.4.1 Migration benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
4.4.2 Migration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
4.4.3 Migration checklist. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
4.4.4 Migration actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
4.4.5 Migrating backwards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
4.4.6 TWS for z/OS JCL variables in connection with TWS parameters . 211
4.5 Conversion from TWS network to TWS for z/OS managed network. . . . 216
4.5.1 Illustration of the conversion process . . . . . . . . . . . . . . . . . . . . . . . 216


4.5.2 Considerations before doing the conversion . . . . . . . . . . . . . . . . . . 219


4.5.3 The conversion process from TWS to TWS for z/OS . . . . . . . . . . . 221
4.5.4 Some guidelines to automate the conversion process . . . . . . . . . . 226
4.6 TWS for z/OS end-to-end fail-over scenarios . . . . . . . . . . . . . . . . . . . . . 230
4.6.1 Configure TWS for z/OS backup engines . . . . . . . . . . . . . . . . . . . . 230
4.6.2 Configure DVIPA for the TWS for z/OS end-to-end server . . . . . . . 232
4.6.3 Configure backup domain manager for primary domain manager . 233
4.6.4 Switch to TWS for z/OS backup engine . . . . . . . . . . . . . . . . . . . . . 234
4.6.5 Switch to TWS backup domain manager . . . . . . . . . . . . . . . . . . . . 236
4.6.6 HACMP configuration guidelines for TWS workstations . . . . . . . . . 246
4.6.7 Configuration guidelines for High Availability environments . . . . . . 250
4.7 Backup and maintenance guidelines for end-to-end FTAs . . . . . . . . . . . 251
4.7.1 Backup of the TWS agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
4.7.2 Stdlist files on TWS fault tolerant agents . . . . . . . . . . . . . . . . . . . . 253
4.7.3 Auditing log files on TWS fault tolerant agents . . . . . . . . . . . . . . . . 255
4.7.4 Files in the TWS schedlog directory . . . . . . . . . . . . . . . . . . . . . . . . 257
4.7.5 Monitoring file systems on TWS agents . . . . . . . . . . . . . . . . . . . . . 258
4.7.6 Central repositories for important TWS files . . . . . . . . . . . . . . . . . . 259
Chapter 5. Using The Job Scheduling Console . . . . . . . . . . . . . . . . . . . . 263
5.1 Using the Job Scheduling Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
5.2 Starting the Job Scheduling Console . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
5.3 Job Scheduling Console fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . . 271
5.4 TWS for z/OS-specific JSC enhancements. . . . . . . . . . . . . . . . . . . . . . . 276
5.4.1 Submit job streams to TWS for z/OS current plan . . . . . . . . . . . . . 276
5.4.2 JSC text editor to display and modify JCLs in current plan . . . . . . . 282
5.4.3 JSC read-only text editors for job logs and operator instructions . . 289
5.4.4 New restart possibilities in JSC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
5.5 TWS-specific JSC enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
5.5.1 Editing a job stream instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
5.5.2 Re-Submitting a job stream instance . . . . . . . . . . . . . . . . . . . . . . . 302
5.5.3 New run cycle options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
5.6 General JSC enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
5.6.1 The filter row . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
5.6.2 Sorting list results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
5.6.3 Non-modal windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
5.6.4 Copying jobs from one job stream to another . . . . . . . . . . . . . . . . . 319
5.7 Common Default Plan Lists in JSC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
5.7.1 Example showing how to use common list of job instances . . . . . . 323
5.7.2 When to use the TWS for z/OS legacy interfaces . . . . . . . . . . . . . . 332
Chapter 6. Troubleshooting in a TWS end-to-end environment . . . . . . . 333
6.1 Troubleshooting for Tivoli Workload Scheduler for z/OS . . . . . . . . . . . . 334


6.1.1 Using keywords to describe a problem . . . . . . . . . . . . . . . . . . . . . . 334


6.1.2 Searching the software-support database . . . . . . . . . . . . . . . . . . . 335
6.1.3 Problem-type keywords. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6.1.4 Problem analysis procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
6.1.5 Abnormal termination (ABEND or ABENDU) procedure . . . . . . . . . 338
6.1.6 The diagnostic file (EQQDUMP) . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
6.1.7 Trace information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
6.1.8 System dump dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
6.1.9 LOOP procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
6.1.10 Message (MSG) procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
6.1.11 Performance (PERFM) procedure . . . . . . . . . . . . . . . . . . . . . . . . 343
6.1.12 WAIT procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
6.1.13 Preparing a console dump . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
6.1.14 Dump the failing system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
6.1.15 Information needed for all problems . . . . . . . . . . . . . . . . . . . . . . . 346
6.1.16 Performing problem determination for tracking events . . . . . . . . . 347
6.2 Troubleshooting TWS for z/OS end-to-end solution . . . . . . . . . . . . . . . . 354
6.2.1 End-to-end working directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
6.2.2 The standard list directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
6.2.3 The standard list messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
6.2.4 Diagnose and fix problems with unlinked workstations . . . . . . . . . . 359
6.2.5 Symphony renew option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
6.2.6 UNIX System Services diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . 363
6.2.7 TCP/IP server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
6.2.8 Tivoli Workload Scheduler for z/OS connector . . . . . . . . . . . . . . . . 366
6.2.9 Job Scheduling Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
6.2.10 Trace for the Job Scheduling Console . . . . . . . . . . . . . . . . . . . . . 372
6.3 TWS troubleshooting checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
6.3.1 FTAs not linking to the master . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
6.3.2 Batchman not up or will not stay up (batchman down) . . . . . . . . . . 376
6.3.3 Jobs not running . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
6.3.4 Jnextday is hung or still in EXEC state . . . . . . . . . . . . . . . . . . . . . . 378
6.3.5 Jnextday in ABEND state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
6.3.6 FTA still not linked after Jnextday . . . . . . . . . . . . . . . . . . . . . . . . . . 379
6.4 A brief introduction to the TWS 8.1 tracing facility. . . . . . . . . . . . . . . . . . 380
Chapter 7. Tivoli NetView integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
7.1 Tivoli NetView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
7.1.1 Network management ABCs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
7.1.2 NetView as an SNMP manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
7.2 What the integration provides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
7.3 How the integration works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
7.4 Integration architecture and our environment . . . . . . . . . . . . . . . . . . . . 386

7.5 Installing and customizing the integration software . . . . . . . . . . . . . . . . . 388


7.5.1 Installing mdemon on the NetView Server . . . . . . . . . . . . . . . . . . . 389
7.5.2 Installing magent on managed nodes . . . . . . . . . . . . . . . . . . . . . . . 391
7.5.3 Customizing the integration software . . . . . . . . . . . . . . . . . . . . . . . 392
7.6 Tivoli Workload Scheduler/NetView operation . . . . . . . . . . . . . . . . . . . . 397
7.6.1 Discovery of the Tivoli Workload Scheduler Network . . . . . . . . . . . 397
7.6.2 Tivoli Workload Scheduler maps. . . . . . . . . . . . . . . . . . . . . . . . . . . 397
7.6.3 Menu actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
7.6.4 Discovery of Maestro process information . . . . . . . . . . . . . . . . . . . 404
7.6.5 Monitoring job schedules from NetView . . . . . . . . . . . . . . . . . . . . . 411
7.6.6 Tivoli Workload Scheduler/NetView configuration files . . . . . . . . . . 413
7.6.7 Tivoli Workload Scheduler/NetView events . . . . . . . . . . . . . . . . . . 416
7.6.8 Configuring TWS to send traps to any SNMP manager . . . . . . . . . 419
Chapter 8. The future of TWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
8.1 The future of end-to-end scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
8.2 Planned enhancements to Tivoli Workload Scheduler for z/OS . . . . . . . 422
8.3 Planned enhancements to Tivoli Workload Scheduler . . . . . . . . . . . . . . 423
8.3.1 Security enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
8.3.2 Serviceability enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
8.4 Planned enhancements to Job Scheduling Console . . . . . . . . . . . . . . . . 424
Appendix A. Centrally stored OPC controllers to FTAs. . . . . . . . . . . . . . 427
Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
Software prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
JCL and fault tolerant agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
FTP considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
EQQJBLIB concatenation considerations . . . . . . . . . . . . . . . . . . . . . . . . . 430
Member names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
Installing TWSXPORT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
FTP script file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
Security implications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
Using the TWSXPORT utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
Selecting the METHOD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
Defining the JCL libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
Defining the workstations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
Selecting members to transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
TWSXPORT program outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Error codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Return code 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Return code 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439


Return code 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
REXX members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
Flow of the program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
TWSXPORT program code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
Sample job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445

Appendix B. Connector reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447


Setting the Tivoli environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
Authorization roles required . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
Working with TWS for z/OS connector instances. . . . . . . . . . . . . . . . . . . . . . 448
The wopcconn command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
Working with TWS connector instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
The wtwsconn.sh command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
Useful Tivoli Framework commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
Appendix C. Merging TWS for z/OS and TWS databases . . . . . . . . . . . . . 453
Alternatives to consider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
Option 1: Keep both TWS for z/OS and TWS engines . . . . . . . . . . . . . . . . . . 454
Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Option 2: Move TWS schedules into TWS for z/OS . . . . . . . . . . . . . . . . . . . . 457
Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
Option 3: Move TWS for z/OS schedules into TWS . . . . . . . . . . . . . . . . . . . . 459
Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459

Appendix D. Additional material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461


Locating the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
Using the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
System requirements for downloading the Web material . . . . . . . . . . . . . 462
How to use the Web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
Referenced Web sites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
IBM Redbooks collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469


Figures
2-1  Tivoli Workload Scheduler network with only one domain . . . . . . . . . . 18
2-2  Tivoli Workload Scheduler network with three domains . . . . . . . . . . . . 19
2-3  A multi-tiered Tivoli Workload Scheduler network . . . . . . . . . . . . . . . . . 21
2-4  TWS network with different manager/agent types . . . . . . . . . . . . . . . . . 23
2-5  Tivoli Workload Scheduler inter-process communication . . . . . . . . . . . 25
2-6  Creating a new 24-hour plan in Tivoli Workload Scheduler . . . . . . . . . . 30
2-7  The distribution of the plan (Symphony file) in a TWS network . . . . . . . 31
2-8  Detailed description of the steps of the Jnextday script . . . . . . . . . . . . . 33
2-9  TWS for z/OS - Two sysplex environments and stand-alone systems . 42
2-10 APPC server with remote panels and PIF access to TWS for z/OS . . . 44
2-11 JSC connection to Tivoli Workload Scheduler for z/OS . . . . . . . . . . . . . 45
2-12 Job Scheduling Console display of dependencies between jobs . . . . . 48
2-13 The long-term plan extension process . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2-14 The current plan extension process . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2-15 Tivoli Workload Scheduler for z/OS automatic recovery and restart . . . 58
2-16 Tivoli Workload Scheduler for z/OS security . . . . . . . . . . . . . . . . . . . . . 62
2-17 Tivoli Workload Scheduler for z/OS end-to-end scheduling . . . . . . . . . 65
2-18 Tivoli Workload Scheduler for z/OS inter-process communication . . . . 67
2-19 Link between workstation names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
2-20 Creation of Symphony file in TWS for z/OS plan programs . . . . . . . . . . 74
2-21 Symphony file distribution from TWS for z/OS server to TWS agents . . 76
2-22 Fail-safe configuration with standby engine and TWS backup DM . . . . 79
2-23 One TWS for z/OS connector and one TWS connector instance . . . . . 84
2-24 An example with two connector instances of each type . . . . . . . . . . . . 86
3-1  TWS domain manager connection to end-to-end server . . . . . . . . . . . . 97
3-2  EQQJOBS primary panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3-3  Server-related input panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
3-4  Generate end-to-end skeletons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
3-5  Relationship between new initialization members . . . . . . . . . . . . . . . . 112
3-6  The topology member . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
3-7  The topology definitions for server and plan programs . . . . . . . . . . . . 116
3-8  Domrec syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
3-9  CPUREC syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3-10 USRREC Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
3-11 End-to-end environment example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
3-12 Defining FTW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
3-13 Setting the fault tolerant flag . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
3-14 JOBREC statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

3-15 Job definition example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
3-16 Backup domain manager within a network . . . . . . . . . . . . . . . . . . . . . 136
3-17 End-to-end topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
3-18 Naming the workstations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
3-19 Example of two separate TWS engines on one computer . . . . . . . . . . 143
3-20 Create Administrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
3-21 Create Administrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
3-22 Login names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
3-23 Set TMR roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
3-24 Tivoli Administrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
3-25 JSC connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
3-26 Installation splash screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
3-27 JSC login window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
3-28 Example security setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
3-29 Example of a TWS security check . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
3-30 The dumpsec command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
4-1  Our configuration for TWS for z/OS end-to-end scheduling . . . . . . . . 174
4-2  List with all our fault tolerant workstations defined in TWS for z/OS . . 188
4-3  FTWs in the Tivoli Workload Scheduler for z/OS plan . . . . . . . . . . . . . 189
4-4  Right clicking one of the workstations shows a pop-up menu . . . . . . . 189
4-5  Pop-up menu to set status of workstation . . . . . . . . . . . . . . . . . . . . . . 190
4-6  Job streams used for verification test . . . . . . . . . . . . . . . . . . . . . . . . . 191
4-7  Task (job) definition for the test job stream used for verification . . . . . 192
4-8  The jobs in the daily housekeeping job stream . . . . . . . . . . . . . . . . . . 195
4-9  Jobs in F100DWTESTSTREAM and F101DWTESTSTREAM . . . . . . 196
4-10 Browse Job Log from TWS for z/OS JSC . . . . . . . . . . . . . . . . . . . . . . 199
4-11 Pop-up window when browsing a job log that is not on the engine . . . 199
4-12 The Browse Job Log pop-up window . . . . . . . . . . . . . . . . . . . . . . . . . . 200
4-13 A classic tracker agent environment . . . . . . . . . . . . . . . . . . . . . . . . . . 205
4-14 Domain topology example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
4-15 Process of distributing TWS for z/OS JCL variables to TWS parameters 212
4-16 The FXXXDWPARMUPDATE job stream with FTW jobs . . . . . . . . . . 214
4-17 The parameters in the F100 workstation parameter database . . . . . . 215
4-18 TWS distributed network with a master domain manager . . . . . . . . . . 217
4-19 TWS for z/OS network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
4-20 TWS for z/OS managed end-to-end network . . . . . . . . . . . . . . . . . . . 219
4-21 Job stream added to plan in TWS for z/OS . . . . . . . . . . . . . . . . . . . . . 234
4-22 Jobs in F101DWTSTSTREAM after switch to backup engine . . . . . . . 236
4-23 Status for workstations before the switch to F101 . . . . . . . . . . . . . . . . 237
4-24 Status of all Domains list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
4-25 Right click the DM100 domain to get pop-up window . . . . . . . . . . . . . 239
4-26 The Switch Manager - Domain search pop-up window . . . . . . . . . . . . 239
4-27 JSC Find Workstation Instance pop-up window . . . . . . . . . . . . . . . . . 240

4-28 The result from Find Workstation Instance . . . . . . . . . . . . . . . . . . . . . 241
4-29 Switch Manager - Domain pop-up window with selected FTA . . . . . . . 241
4-30 Status for the workstations after the switch to F101 . . . . . . . . . . . . . . 242
4-31 Status of the workstations after the switch to F101 . . . . . . . . . . . . . . . 243
4-32 Status for workstations after the TWS for z/OS replan program . . . . . 245
4-33 HACMP failover of a TWS master to a node also running an FTA . . . 247
4-34 Output from the cleanup.sh when run from TWS for z/OS . . . . . . . . . 255
5-1  Starting the Job Scheduling Console (Windows desktop) . . . . . . . . . . 265
5-2  Job Scheduling Console password prompt (Windows desktop) . . . . . 266
5-3  Job Scheduling Console release level notice (Windows desktop) . . . . 267
5-4  Properties window for the JSC shortcut . . . . . . . . . . . . . . . . . . . . . . . . 268
5-5  Initial information to user first time the JSC is started . . . . . . . . . . . . . 268
5-6  Pre-customized preference file can be specified . . . . . . . . . . . . . . . . . 269
5-7  The JSC starting window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
5-8  The main Job Scheduling Console window . . . . . . . . . . . . . . . . . . . . . 271
5-9  JSC Default Database and Plan Lists for TWS for z/OS instance . . . . 273
5-10 JSC Default Database and Default Plan Lists for a TWS instance . . . 274
5-11 How to create new lists in the Job Scheduling Console . . . . . . . . . . . 275
5-12 Submit TWS for z/OS job stream to current plan in TWS for z/OS . . . 276
5-13 Submitting a job stream to current plan . . . . . . . . . . . . . . . . . . . . . . . . 277
5-14 Search result for job streams starting with TWSCD* . . . . . . . . . . . . . . 278
5-15 JSC Submit Job Stream window filled with job stream information . . . 278
5-16 Jobs in TWSDISTPARJUPD for TWS for z/OS current plan . . . . . . . . 279
5-17 Release all held jobs in JSC with just one release command . . . . . . . 280
5-18 JSC Job Stream Instance Editor - Adding job stream to current plan . 280
5-19 Submit job stream from JSC database job stream list view . . . . . . . . . 281
5-20 The JSC Submit Job Stream...window . . . . . . . . . . . . . . . . . . . . . . . . 282
5-21 Edit JCL for a z/OS job in the Job Scheduling Console . . . . . . . . . . . . 283
5-22 The Job Scheduling Console Edit JCL window . . . . . . . . . . . . . . . . . . 284
5-23 The Job Scheduling Console File Import/Export pull-down menu . . . . 285
5-24 The Job Scheduling Console Save window . . . . . . . . . . . . . . . . . . . . . 286
5-25 Select Edit JCL for the job where the JCL should be imported . . . . . . 287
5-26 JSC Edit JCL window after deleting records of all JCLs . . . . . . . . . . . 287
5-27 JSC Open window used when importing a file . . . . . . . . . . . . . . . . . . 288
5-28 The Edit JCL window with the import JCL . . . . . . . . . . . . . . . . . . . . . . 289
5-29 The Job Scheduling Console Browse Job Log... entry . . . . . . . . . . . . 290
5-30 JSC pop-up window when controller does not have copy of job log . . 291
5-31 The Job Scheduling Console Browse Job Log window . . . . . . . . . . . . 292
5-32 The Job Scheduling Console Browse Operator Instruction entry . . . . 293
5-33 The JSC Browse Operator Instruction window . . . . . . . . . . . . . . . . . . 293
5-34 The JSC Restart and Cleanup entry . . . . . . . . . . . . . . . . . . . . . . . . . . 295
5-35 The Job Scheduling Console Restart and Cleanup window . . . . . . . . 295
5-36 The Job Scheduling Console Step Restart window . . . . . . . . . . . . . . . 296

5-37 Rerun selection for job stream in Job Scheduling Console . . . . . . . . . 297
5-38 The JSC Job Stream Instance Editor . . . . . . . . . . . . . . . . . . . . . . . . . 298
5-39 The JSC pop-up window with rerun actions for a job in a job stream . 299
5-40 The Job Scheduling Console Rerun Impact List . . . . . . . . . . . . . . . . . 300
5-41 A job stream instance list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
5-42 The Job Stream Instance Editor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
5-43 Submitting a job into an existing job stream instance . . . . . . . . . . . . . 302
5-44 Specifying job that is to be submitted into job stream instance . . . . . . 302
5-45 Resubmitting a job stream instance . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
5-46 Specifying an alias for a resubmitted job stream instance . . . . . . . . . . 303
5-47 The job stream has been resubmitted, with a new name . . . . . . . . . . 304
5-48 The Job Stream Editor window in graph mode . . . . . . . . . . . . . . . . . . 305
5-49 The Job Stream Editor window in run cycle mode . . . . . . . . . . . . . . . . 306
5-50 Simple Run Cycle window with freedays rule menu displayed . . . . . . 307
5-51 Rule page of Simple Run Cycle window; 10th of April is selected . . . . 308
5-52 Rule page of Weekly Run Cycle window . . . . . . . . . . . . . . . . . . . . . . . 309
5-53 Rule page of Calendar Run Cycle window; LOTTA selected . . . . . . . 310
5-54 An exclusive weekly run cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
5-55 An inclusive calendar run cycle; the 10th of April is included . . . . . . . 312
5-56 Freedays calendar for job stream set to UK-HOLS calendar . . . . . . . . 313
5-57 Enabling the filter row . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
5-58 A job streams list with the filter row displayed but no filters set . . . . . . 315
5-59 Editing a filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
5-60 A job streams list displayed with a filter for ACC set . . . . . . . . . . . . . . 316
5-61 A job streams list with a filter for the name ACC set, but disabled . . . 317
5-62 A list of jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
5-63 A list of job definitions sorted in descending order by name . . . . . . . . 318
5-64 The Clear Sort function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
5-65 Copying a job from a job stream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
5-66 Pasting a job into a job stream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
5-67 Window appears when you paste a job into a job stream . . . . . . . . . . 321
5-68 The job ACCJOB01 has been copied to the new job stream . . . . . . . . 321
5-69 Common Default Plan Lists group and default lists in the group . . . . . 323
5-70 Creating a new entry in the Common Default Plan List group . . . . . . . 324
5-71 JSC Properties window used to create a common job instance list . . 325
5-72 The Status pop-up window where we check the error code . . . . . . . . 326
5-73 Unchecking the engines that should not be part of the common list . . 327
5-74 The periodic refresh is activated and set to 120 seconds (2 minutes) . 328
5-75 JSC Properties window with the filter specification and the result . . . . 329
5-76 The Detach button . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
5-77 A detached JSC window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
5-78 Pop-up window shown when clicking a TWS for z/OS job in error . . . 331
5-79 Pop-up windows shown when right clicking a TWS job in error . . . . . 332

6-1  Link the workstation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
6-2  Displaying the Symphony run number . . . . . . . . . . . . . . . . . . . . . . . . . 360
6-3  Setting status to active . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
6-4  JSC log on error message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
6-5  Connector link failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
6-6  Disabled instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
6-7  Framework failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
6-8  Allocation error message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
7-1  Our distributed TWS environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
7-2  Selecting Maestro - Unison Software Inc. (c) . . . . . . . . . . . . . . . . . . . . 393
7-3  Configuring the TWS map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
7-4  NetView Root map with TWS icon added . . . . . . . . . . . . . . . . . . . . . . 395
7-5  Loaded MIBs by default . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
7-6  Loading Maestro MIB file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
7-7  NetView Root map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
7-8  TWS main map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
7-9  TWS map showing all workstations . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
7-10 Launching Maestro menu from Tools . . . . . . . . . . . . . . . . . . . . . . . . . 401
7-11 Stopping conman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
7-12 Locate function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
7-13 Locating tividc11 node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
7-14 tividc11 on the map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
7-15 tividc11 map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
7-16 Tivoli Workload Scheduler processes . . . . . . . . . . . . . . . . . . . . . . . . . 408
7-17 Show Parent dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
7-18 Issuing Start command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
7-19 All processes up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
7-20 Job abended on tividc11 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
7-21 Status changed on tividc11 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
A-1  Syntax of the WSTASTART control statement . . . . . . . . . . . . . . . . . . 437


Tables
3-1  PSP upgrade and subset ID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3-2  Product and components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3-3  TWS for z/OS installation tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3-4  End-to-end example JCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3-5  End-to-end skeletons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
3-6  New initialization member . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
3-7  Performance-related localopts parameter . . . . . . . . . . . . . . . . . . . . . . 137
3-8  If possible, choose user IDs and port numbers that are the same . . . . 144
3-9  Authorization roles required for connector actions . . . . . . . . . . . . . . . . 151
3-10 Tivoli Workload Scheduler connector and engine combinations . . . . . 155
4-1  Migration checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
4-2  The AutoTrace library file must exist in /usr/lib on both systems . . . . . 247
6-1  Keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6-2  Socket error codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
6-3  Tracking events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
6-4  Problem determination of tracking events . . . . . . . . . . . . . . . . . . . . . . 348
6-5  Files and directory structure of UNIX System Services . . . . . . . . . . . . 356
6-6  Trace levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
6-7  Tracelevel values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
6-8  Tracedata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
7-1  TWS traps enabled by default . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
7-2  TWS traps not enabled by default . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
A-1  Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
A-2  REXX members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
B-1  Setting the Tivoli environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
B-2  Authorization roles required for working with connector instances . . . 448
B-3  How to manage TWS for z/OS connector instances . . . . . . . . . . . . . . 449
8-1  How to manage TWS for z/OS connector instances . . . . . . . . . . . . . . 451


Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrates programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to IBM for the purposes of
developing, using, marketing, or distributing application programs conforming to IBM's application
programming interfaces.


Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
IBM
PAL
Perform
PowerPC
RACF
Redbooks
Redbooks(logo)

RS/6000
S/390
Sequent
SP
Tivoli
Tivoli Enterprise
Tivoli Enterprise Console

Tivoli Management
Environment
TME
VTAM
z/OS

The following terms are trademarks of International Business Machines Corporation and Lotus Development
Corporation in the United States, other countries, or both:
Lotus

Notes

Word Pro

The following terms are trademarks of other companies:


ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United
States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun
Microsystems, Inc. in the United States, other countries, or both.
C-bus is a trademark of Corollary, Inc. in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure
Electronic Transaction LLC.
Other company, product, and service names may be trademarks or service marks of others.


Preface
The beginning of the new century sees the data center with a mix of work,
hardware, and operating systems previously undreamt of. Today's challenge is to
manage disparate systems with minimal effort and maximum reliability. People
experienced in scheduling traditional host-based batch work must now manage
distributed systems, and those working in the distributed environment must take
responsibility for work running on the corporate OS/390 system.
This redbook considers how best to provide end-to-end scheduling using Tivoli
Workload Scheduler 8.1, with both its distributed (previously known as Maestro)
and mainframe (previously known as OPC) components.
In this book, we provide the information needed to install the necessary TWS 8.1
software components and configure them to communicate with each other.
In addition to technical information, we will consider various scenarios that may
be encountered in the enterprise and suggest practical solutions. We will
describe how to manage work and dependencies across both environments
using a single point of control.

The team that wrote this redbook


This redbook was produced by a team of specialists from around the world
working at the International Technical Support Organization, Austin Center.
Vasfi Gucer is a Project Leader at the International Technical Support
Organization, Austin Center. He worked for IBM Turkey for 10 years and has
been with the ITSO since January 1999. He has more than nine years of
experience in the areas of systems management and networking hardware and
software on mainframe and distributed platforms. He has worked on various
Tivoli customer projects as a Systems Architect in Turkey and the U.S. Vasfi is
also an IBM Certified Senior IT Specialist.
Stefan Franke is a Customer Support Engineer of IBM Global Services based in
the Central Region Support Center in Mainz, Germany. He is a member of the
EMEA TWS Level 2 Support Team. In 1992 he began to support z/OS System
Management software. Since 1994 he has worked mainly in TWS Support. His areas
of expertise include installation and tuning, defect and non-defect problem
determination, and on-site customer support.


Finn Bastrup Knudsen is an Advisory IT Specialist in Integrated Technology Services (ITS) in IBM Global Services in Copenhagen, Denmark. He has 12 years of experience working with TWS for z/OS (OPC) and four years of experience working with TWS (Maestro). Finn primarily does consultation and services at customer sites, as well as TWS for z/OS and TWS training. He is a certified Tivoli Instructor in TWS for z/OS and TWS. He has worked at IBM for 13 years. His areas of expertise include TWS for z/OS and TWS.
Michael A Lowry is an IBM Certified Consultant and Instructor currently working
for IBM in Stockholm, Sweden. Michael does support, consulting, and training for
IBM customers, primarily in Europe. He has 10 years of experience in the IT
services business and has worked for IBM since 1996. Michael studied
engineering and biology at the University of Texas in Austin, his hometown.
Before moving to Sweden, he worked in Austin for Apple, IBM, and the TWS
Support Team at Tivoli Systems. He has five years of experience with Tivoli
Workload Scheduler and has extensive experience with IBM's network and
storage management products. He is also an IBM Certified AIX Support
Professional.
The team wants to express special thanks to TWS Product Manager Warren Gill.
Also thanks to the following people for their contributions to this project:
International Technical Support Organization, Austin Center
Budi Darmawan and Julie Czubik
IBM Belgium
Rudy Segers
IBM Italy
Fabio Benedetti, Maria Pia Cagnetta, Andrea Capasso, Riccardo Colella,
Rossella Donadeo, Pietro Iannucci, Antonio Gallotti, Francesca Guccione,
Franceo Mossotto, Stefano Proietti
IBM UK
Bob Watling
IBM USA
Henry Daboub, Robert Haimowitz, Tina Lamacchia, Jose Villa
In addition, we also want to thank Dean Harrison from Consignia/UK for his
valuable contributions to the redbook.


Notice
This publication is intended to help Tivoli specialists implement an end-to-end
scheduling environment with TWS 8.1. The information in this publication is not
intended as the specification of any programming interfaces that are provided by
Tivoli Workload Scheduler 8.1. See the PUBLICATIONS section of the IBM
Programming Announcement for Tivoli Workload Scheduler 8.1 for more
information about what publications are considered to be product documentation.

Comments welcome
Your comments are important to us!
We want our Redbooks to be as helpful as possible. Send us your comments
about this or other Redbooks in one of the following ways:
Use the online Contact us review redbook form found at:
ibm.com/redbooks

Send your comments in an Internet note to:


redbook@us.ibm.com

Mail your comments to the address on page ii.


Chapter 1. Introduction
In this chapter, we introduce the Tivoli Workload Scheduler 8.1 suite and
summarize the new functions that are available in this version. TWS 8.1 is an
important milestone in the integration of OPC- and Maestro-based scheduling
engines.
This chapter contains the following:
An overview of job scheduling
An overview of Tivoli Workload Scheduler
An overview of Tivoli Workload Scheduler for z/OS
New functions in Tivoli Workload Scheduler 8.1
A description of the material covered in this book
An introduction to the terminology used in this book


1.1 Job scheduling


Scheduling is the nucleus of the data center. Orderly, reliable sequencing and
management of process execution is an essential part of IT management. The IT
environment consists of multiple strategic applications, such as SAP R/3 and
Oracle, payroll, invoicing, e-commerce, and order handling, running across
different operating systems and platforms. Legacy systems must be maintained
alongside newer systems.
Workloads are increasing, accelerated by electronic commerce. Staffing and
training requirements increase, and too many platform experts are needed.
There are too many consoles and no overall point of control. 24x7 availability is
essential and must be maintained through migrations, mergers, acquisitions, and
consolidations.
Dependencies exist between jobs in different environments. For example, a customer can fill out an order form on their Web browser that will trigger a UNIX job that acknowledges the order, an AS/400 job that orders parts, a z/OS job that debits the customer's bank account, and a Windows NT job that prints a document and address labels. Each job must run only after the job before it has completed.
The Tivoli Workload Scheduler 8.1 suite provides an integrated solution for
running this kind of complicated workload. The Job Scheduling Console provides
a centralized control point with a single interface to the workload regardless of
the platform or operating system where the jobs run.
The Tivoli Workload Scheduler 8.1 suite includes Tivoli Workload Scheduler and
Tivoli Workload Scheduler for z/OS, as well as the Job Scheduling Console.
Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS can be used separately or can be combined to make end-to-end scheduling work.


1.2 Introduction to Tivoli Workload Scheduler for z/OS


Tivoli Workload Scheduler for z/OS (TWS for z/OS) has been scheduling and
controlling batch workloads in data centers since 1977. Originally called
Operations Planning and Control (OPC), the product has been extensively
developed and extended to meet the increasing demands of customers
worldwide. An overnight workload consisting of 100,000 production jobs is not
unusual, and TWS for z/OS can easily manage this kind of workload.

1.2.1 Overview of Tivoli Workload Scheduler for z/OS


The TWS for z/OS databases contain information about the work that is to be run,
when it should run, and the resources that are needed and available. This
information is used to calculate a forward forecast called the long term plan. Data
center staff can check this to confirm that the desired work is being scheduled
when required. The long term plan usually covers a time range of four to twelve
weeks. A second plan is produced that uses the long term plan and databases as
input. The current plan usually covers 24 hours and is a detailed production
schedule. TWS for z/OS uses this to submit jobs to the appropriate processor at
the appropriate time. All jobs in the current plan have TWS for z/OS status codes
that indicate the progress of work. When a job's predecessors are complete,
TWS for z/OS considers it ready for submission. It checks that all requested
resources are available, and when these conditions are met, it causes the job to
be submitted.

1.2.2 Tivoli Workload Scheduler for z/OS architecture


TWS for z/OS consists of a controller and one or more trackers. The controller
runs on a z/OS system. The controller manages the TWS for z/OS databases
and the long term and current plans. The controller schedules work and causes
jobs to be submitted to the appropriate system at the appropriate time.
Trackers are installed on every system managed by the controller. The tracker is
the link between the controller and the managed system. The tracker submits
jobs when the controller instructs it to do so, and it passes job start and job end
information back to the controller.
The controller can schedule jobs on z/OS systems as well as distributed
systems. The TWS for z/OS controller uses fault tolerant workstations for job
scheduling on distributed systems. A fault tolerant workstation is actually a Tivoli
Workload Scheduler fault tolerant agent.


The main method of accessing the controller is via ISPF panels, but several
other methods are available including Program Interfaces, TSO commands, and
the Job Scheduling Console.
The Job Scheduling Console (JSC), a Java GUI interface, was introduced in
Tivoli OPC Version 2.3. The current version of JSC has been updated with
several new specific TWS for z/OS functions. The JSC provides a common
interface to both TWS for z/OS and TWS.
For more information on TWS for z/OS architecture, see Chapter 2, End-to-end
TWS architecture and components on page 15.

1.3 Introduction to Tivoli Workload Scheduler


Tivoli Workload Scheduler is descended from the Unison Maestro program. Unison Maestro was developed by Unison Software on Hewlett-Packard's MPE operating system. It was then ported to UNIX and Windows. In its various manifestations, TWS has a 15-year track record. During the processing day, the TWS production control programs manage the production environment and automate most operator activities. TWS prepares jobs for execution, resolves
interdependencies, and launches and tracks each job. Because jobs begin as
soon as their dependencies are satisfied, idle time is minimized and throughput
improves significantly. Jobs never run out of sequence, and, if a job fails, TWS
handles the recovery process with little or no operator intervention.

1.3.1 Overview of Tivoli Workload Scheduler


There are two basic aspects to job scheduling in TWS: The database and the
plan. The database contains all the definitions for scheduling objects, such as
jobs, job streams, resources, and workstations. It also holds statistics of job and
job stream execution, as well as information on the user ID that created an object
and when an object was last modified. The plan contains all job scheduling
activity planned for a period of one day. In TWS, the plan is created every 24
hours and consists of all the jobs, job streams, and dependency objects that are
scheduled to execute for that day. All job streams for which you have created a
run cycle are automatically scheduled and included in the plan. As the day goes
by, the jobs and job streams that do not execute successfully can be rolled over
into the next day's plan.


1.3.2 Tivoli Workload Scheduler architecture


TWS consists of a master domain manager that contains the centralized
database files used to document scheduling objects. It creates the production
plan at the start of each day and performs all logging and reporting for the
network.
All communications to agents are routed through the domain manager, which is
the management hub in a domain. The network can be managed by a mix of
agents. Fault tolerant agents are capable of resolving local dependencies and launching their jobs even if a network interruption causes a loss of communication with their domain managers, because each one is given a set of scheduling instructions at the beginning of every processing day.
Tivoli Workload Scheduler Version 7.0 introduced a new Java GUI, the Job
Scheduling Console (JSC). The current version of JSC has been updated with
several new specific TWS functions. The JSC provides a common interface to
both TWS and TWS for z/OS.
For more on TWS architecture, see Chapter 2, End-to-end TWS architecture
and components on page 15.

1.4 Benefits of integrating TWS for z/OS and TWS


Both TWS for z/OS and TWS have individual strengths. While an enterprise
running z/OS and distributed systems could schedule and control work using
only one of these tools, a complete solution requires TWS for z/OS and TWS to
work together.
The TWS for z/OS long term plan gives peace of mind by showing the workload forecast for weeks into the future. TWS fault tolerant agents keep scheduling work even when they lose communication with their domain manager. TWS for z/OS
manages huge numbers of jobs through a sysplex of connected z/OS systems.
TWS extended agents can control work on applications, such as SAP R/3 and
Oracle.
Data centers that need to schedule and control significant amounts of both host
z/OS and distributed work will be most productive when they get their TWS for
z/OS and TWS systems connected and working cooperatively.
This redbook will show you how to achieve this.


1.5 Summary of enhancements in Tivoli Workload Scheduler 8.1
This section describes enhancements to:
Tivoli Workload Scheduler for z/OS
Tivoli Workload Scheduler
Tivoli Job Scheduling Console

1.5.1 Enhancements to Tivoli Workload Scheduler for z/OS


The following are enhancements to Version 8.1 of Tivoli Workload Scheduler for
z/OS.

Restart and cleanup


Dataset management has been extended to improve the flexibility of job and step
restart. One does not have to rely on the exclusive use of the data store; this
removes the delay in normal JES processing of system datasets. Now the data
store can run at a very low priority, if so desired.

Job durations in seconds


Schedule plans can now record job durations in seconds, giving finer, second-by-second control over scheduling.

Integration with Tivoli Business Systems Manager


Tivoli Business Systems Manager is the solution to unifying the management of
business systems. Tivoli Workload Scheduler for z/OS has been enhanced to
support monitoring from Tivoli Business Systems Manager. From Tivoli Business
Systems Manager, one can monitor the following:
Status changes to jobs
Addition of jobs to the plan
Alert conditions

Integration with Removable Media Manager


Removable Media Manager (RMM) works with restart and cleanup to assist with
the management of datasets. Removable Media Manager verifies that a dataset
exists on a volume and then takes user-requested actions on the dataset.
Consequently, Tivoli Workload Scheduler for z/OS works with Removable Media
Manager to properly mark and expire datasets, as needed.


Tivoli Workload Scheduler end-to-end


End-to-end scheduling is based on the possibility of connecting a Tivoli Workload
Scheduler domain manager, and its underlying agents and domains, to the Tivoli
Workload Scheduler for z/OS engine. The engine is seen by the distributed
network as the master domain manager. The Tivoli Workload Scheduler domain
manager acts as the broker for the distributed network and has the task of
resolving all dependencies. OPC tracker agents can then be replaced by more
reliable, fault tolerant, and scalable agents.

Minor enhancements
EQQAUDIT is now installed during the installation of Tivoli Workload Scheduler
for z/OS. One can access EQQAUDIT from the main menu of Tivoli Workload
Scheduler for z/OS ISPF panels. The batch command interface tool (BCIT) is
now installed as part of the regular installation of Tivoli Workload Scheduler for
z/OS.

1.5.2 Enhancements to Tivoli Workload Scheduler


The following are enhancements to Version 8.1 of Tivoli Workload Scheduler.

Multiple holiday calendars


The freedays calendar (where a freeday is the opposite of a workday) now extends the role of the holidays calendar, as it allows users to customize the meaning of workdays within Tivoli Workload Scheduler. With this new function,
you can define and associate as many calendars as you need to each job stream
you create.

Freeday rule
The freeday rule introduces the concept of the run cycle that is already used in
Tivoli Workload Scheduler for z/OS. It consists of a number of options (or rules)
that determine when a job stream should actually be run if its schedule falls on a
freeday.

Integration with Tivoli Business Systems Manager


The integration with Tivoli Business Systems Manager for Tivoli Workload
Scheduler provides the same functionality as with Tivoli Workload Scheduler for
z/OS.


Performance improvements
The new performance enhancements will be particularly appreciated in Tivoli
Workload Scheduler networks with many workstations, massive scheduling
plans, and complex relations between scheduling objects. The improvements are
in the following areas:
Daily plan creation: Jnextday runs faster and, consequently, the master
domain manager can start its production tasks sooner.
Daily plan distribution: the Tivoli Workload Scheduler administrator can now
enable the compression of the Symphony file so that the daily plan can be
distributed to other nodes earlier.
I/O optimization: Tivoli Workload Scheduler performs fewer accesses to the
files and optimizes the use of system resources. The improvements reflect:
Event files: The response time to the events is improved so the message
flow is faster.
Daily plan: The access to the Symphony file is quicker for both read and
write. The daily plan can therefore be updated in a shorter time than it was
previously.

Installation improvements
On Windows NT, the installation of netman is no longer a separate process. It is
now part of the installation steps of the product.

Linux support
Version 8.1 of Tivoli Workload Scheduler adds support for the following Linux
platforms:
Linux for INTEL as master domain manager or fault-tolerant agent
Linux for S/390 as fault-tolerant agent

1.5.3 Enhancements to the Job Scheduling Console


The Job Scheduling Console Feature Level 1.2 is delivered with the Workload
Scheduler suite or either of its components. The following are the latest
enhancements.

Usability enhancements
The following usability enhancements are featured:
Improved tables for displaying list views. Users can now sort and filter table
contents by right clicking the table. They can also automatically resize a
column by double clicking its header.


Message windows now directly display the messages. Users no longer have
to click the Details button to read the message. For error messages, the error
code is also displayed in the window title.
The addition of jobs to a job stream can also be done from within the Timeline
view, as well as from the Graph view, of the Job Stream Editor.
New jobs are now automatically positioned within the Graph view of the Job
Stream Editor. Users are no longer required to click their mouse on the background of the view to open the job's properties window.
A new editor for job stream instances is featured. This editor is similar to the
Job Stream Editor for the database and enables users to see and work with
all the job instances contained in a specified job stream. From it users can
modify the properties of a job stream instance and its job instances, and the
dependencies between the jobs. The job stream instance editor does not
include the Timeline and Run cycle views. The job instance icons also display
the current status of the job.

Graphical enhancements
The following graphical enhancements are featured:
Input fields have changed to conform to Tivoli Presentation Services norms. Mandatory fields have a yellow background. Fields containing input with syntax errors display a white cross on a red background.
A new Hyperbolic view graphically displays all the job dependencies of every
single job in the current plan.

Non-modal windows
The properties windows of scheduling objects are now non-modal. This means
that you can have two or more properties windows open at the same time. This
can be particularly useful if you need to define a new object that is in turn required for another object's definition.

Common view
The Common view provides users with the ability to list job and job stream
instances in a single view and regardless of their scheduling engine, thus
furthering integration for workload scheduling on the mainframe and the
distributed platforms. The Common view is displayed as an additional selection
at the bottom of the tree view of the scheduling engines.


Tivoli Workload Scheduler for z/OS-specific enhancements


The Job Scheduling Console now supports the following Tivoli Workload
Scheduler for z/OS functions:
Submit job streams. Users can select a specific job stream from the database
and submit it directly to the current plan. They can choose to have the job
stream run immediately upon submission or to be put on hold and, eventually,
edited on the fly before it is submitted. The possibility to modify the start and
deadline times is also provided.
A text editor to display and modify job control languages (JCLs). The editor
provides import and export functions, so that users can store a JCL as a
template and then reuse it for other JCLs. It also includes functions to copy,
cut, and paste JCL text. The JCL editor displays information on the current
JCL, such as the current row and column, the job name, the workstation
name, and who last updated it.
Read-only text editors to visualize:
The logs produced by job instance runs.
The operator instruction associated with a job instance.
The possibility to restart a job after it has run. Users can now:
Restart a job instance with the option to choose which step must be first,
which must be last, and which must be included or excluded.
Rerun a job instance, executing all the steps of the job instance.
Clean the list of datasets used by the selected job instance.
Display the list of datasets cleaned by a previous clean-up action.
The possibility to rerun an entire job stream instance. This function opens a
job stream instance editor with a set of reduced functionalities where users
can select the job instance that will be the starting point of the rerun. When
the starting point is selected, an impact list is displayed that shows all the
possible job instances that will be impacted from this action. For every job
instance within the current job stream instance, users can perform a clean-up
action and display its results.

Tivoli Workload Scheduler-specific enhancements


The Job Scheduling Console now allows the operator to select an old plan file for
viewing instead of the current plan file. Once an old plan has been selected, it is
possible to view the old job streams, jobs, and other scheduling object instances
that are in the old plan file. These objects appear just as they did when the old
plan file was archived (at the end of its production day). This functionality was


available in the old gconman program through the Set Symphony function, and is
available in the command line conman program via the setsym command. It is
useful for quickly seeing how things were (e.g., what ran and when it ran) on a
specific day in the past.
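
For reference, the equivalent command-line sequence is an interactive conman session along these lines (a sketch only; the number passed to setsym selects one of the archived plan files reported by listsym, and the names shown are examples):

   conman
   %listsym
   %setsym 1
   %ss @#@

Here listsym lists the archived plan files, setsym points the current conman session at one of them, and ss displays the job streams as they appeared in that old plan.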

1.6 The terminology used in this book


The Tivoli Workload Scheduler 8.1 suite comprises two somewhat different
software programs, each with its own history and terminology. For this reason,
there are sometimes two different and interchangeable names for the same
thing. Other times, a term used in one context can have a different meaning in
another context. To help clear up this confusion, we now introduce some of the
terms and acronyms that will be used throughout the book. In order to make the
terminology used in this book internally consistent, we adopted a system of
terminology that may be a bit different than that used in the product
documentation. So please take a moment to read through this list, even if you are
already familiar with the products.

TWS 8.1 suite: The suite of programs that includes Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS. These programs are used together to make end-to-end scheduling work. Sometimes called just Tivoli Workload Scheduler (TWS).

TWS: Tivoli Workload Scheduler. This is the version of TWS that runs on UNIX, OS/400, and Windows operating systems, as distinguished from TWS for z/OS, a somewhat different program. Sometimes called TWS Distributed. TWS is based on the old Maestro program.

TWS for z/OS: Tivoli Workload Scheduler for z/OS. This is the version of TWS that runs on z/OS, as distinguished from TWS (by itself, without the "for z/OS" specification). TWS for z/OS is based on the old OPC program.

Master: The top level of the TWS or TWS for z/OS scheduling network. Also called the master domain manager, because it is the domain manager of the MASTERDM (top-level) domain.

Domain manager: The agent responsible for handling dependency resolution for subordinate agents. Essentially an FTA with a few extra responsibilities.

Fault tolerant agent: An agent that keeps its own local copy of the plan file, and can continue operation even if the connection to the parent domain manager is lost. Also called an FTA. In TWS for z/OS, FTAs are referred to as fault tolerant workstations.

Scheduling engine: A TWS engine or TWS for z/OS engine.

TWS engine: The part of TWS that does actual scheduling work, as distinguished from the other components related primarily to the user interface (for example, the TWS connector). Essentially the part of TWS that is descended from the old Maestro program.

TWS for z/OS engine: The part of TWS for z/OS that does actual scheduling work, as distinguished from the other components related primarily to the user interface (e.g., the TWS for z/OS connector). Essentially the controller plus the server.

TWS for z/OS controller: The part of the TWS for z/OS engine that is based on the old OPC program.

TWS for z/OS server: The part of TWS for z/OS that is based on the UNIX TWS code. Runs in UNIX System Services (USS) on the mainframe.

JSC: Job Scheduling Console. This is the common graphical user interface (GUI) to both the TWS and TWS for z/OS scheduling engines.

Connector: A small program that provides an interface between the common GUI (Job Scheduling Console) and one or more scheduling engines. The connector translates to and from the different languages used by the different scheduling engines.

JSS: Job Scheduling Services. Essentially a library used by the connectors.

TMF: Tivoli Management Framework. Also called just the Framework.

1.7 Material covered in this book


We will cover the following material in the subsequent chapters.
Chapter 2, End-to-end TWS architecture and components on page 15 gives a detailed overview of the TWS 8.1 suite, including an in-depth explanation of how all the pieces fit in relation to one another and work together. This chapter is
divided into four main parts:
Tivoli Workload Scheduler for z/OS 8.1 architecture
Tivoli Workload Scheduler 8.1 architecture
End-to-end scheduling architecture (combining both TWS and TWS for
z/OS into one big picture)
Description of how Job Scheduling Console fits into this picture
Chapter 3, Planning, installation, and configuration of the TWS 8.1 on
page 91 includes guidelines for intelligent use of existing computer resources
in your TWS network, as well as step-by-step instructions for installing each
of the required components. This chapter is divided into sections devoted to
each component of the complete installation:
Planning an end-to-end scheduling installation for TWS for z/OS
Installing Tivoli Workload Scheduler for z/OS 8.1
Planning end-to-end scheduling for TWS
Installing Tivoli Workload Scheduler 8.1 in an end-to-end environment
Installing the Tivoli Management Framework as a stand-alone TMR server
just for TWS
Installing Job Scheduling Console 1.2
Chapter 4, End-to-end implementation scenarios and examples on
page 169 shows implementation examples for different TWS for z/OS and
TWS environments. The following implementation scenarios are covered:
Upgrading from previous versions
Maintenance
Fail-over scenarios
Chapter 5, Using The Job Scheduling Console on page 263. This chapter
gives best practices for using JSC in an end-to-end scheduling environment.
Chapter 6, Troubleshooting in a TWS end-to-end environment on page 333
covers best practices for troubleshooting and has the following sections:
Troubleshooting Tivoli Workload Scheduler for z/OS 8.1 (including
end-to-end troubleshooting)
Troubleshooting Tivoli Workload Scheduler 8.1
Chapter 7, Tivoli NetView integration on page 381 covers the steps
necessary to successfully integrate Tivoli NetView with TWS in order to
process job statuses.


Chapter 2. End-to-end TWS architecture and components
This chapter describes the end-to-end scheduling architecture using Tivoli
Workload Scheduler and Tivoli Workload Scheduler for z/OS.
In this chapter, the following topics are discussed:
Tivoli Workload Scheduler architecture
Tivoli Workload Scheduler for z/OS architecture
End-to-end scheduling architecture
Job Scheduling Console and related components
If you are unfamiliar with Tivoli Workload Scheduler terminology and architecture, you can start with the section on Tivoli Workload Scheduler architecture to get a better understanding of how Tivoli Workload Scheduler works. If you are already
familiar with Tivoli Workload Scheduler but would like to learn more about Tivoli
Workload Scheduler for z/OS, you can start with the section on Tivoli Workload
Scheduler for z/OS architecture. If you are already familiar with both TWS and
TWS for z/OS, skip ahead to the third section where we describe how both
programs work together when configured as an end-to-end network.


The Job Scheduling Console, its components, and its architecture, are described
in the last topic. In this topic we describe the different components used to
establish a Job Scheduling Console environment.


2.1 Tivoli Workload Scheduler architecture


Tivoli Workload Scheduler's scheduling features help you plan every phase of
production. During the processing day, the Tivoli Workload Scheduler production
control programs manage the production environment and automate most
operator activities. Tivoli Workload Scheduler prepares jobs for execution,
resolves interdependencies, and launches and tracks each job. Because jobs
start running as soon as their dependencies are satisfied, idle time is minimized,
and throughput improves significantly. Jobs never run out of sequence, and, if a
job fails, Tivoli Workload Scheduler handles the recovery process with little or no
operator intervention.
Tivoli Workload Scheduler is composed of three major parts:
Tivoli Workload Scheduler engine
Also called the scheduling engine. The engine is installed on every computer that participates in a Tivoli Workload Scheduler network. The engine is a complete Tivoli Workload Scheduler installation, which means all Tivoli Workload Scheduler services and components are installed on the computer. During installation, the engine is configured for the role that the computer is going to play within the Tivoli Workload Scheduler scheduling network, such as master domain manager, domain manager, or fault tolerant agent. The configuration of the engine role is done in two places: in the parameter files (localopts and globalopts, illustrated after this list), and in the database definition for the Tivoli Workload Scheduler workstation that represents the engine on the physical computer.
Tivoli Workload Scheduler connector
Maps Job Scheduling Console commands to the Tivoli Workload Scheduler
engine. The Tivoli Workload Scheduler connector runs on the master and on
any of the fault tolerant agents (FTA) that you will use as backup machines for
the master workstation. The connector requires the Tivoli Management Framework, configured as a Tivoli server or Tivoli managed node.
Job Scheduling Console (JSC)
A Java-based graphical user interface (GUI) for the Tivoli Workload
Scheduler suite. The Job Scheduling Console runs on any machine from
which you want to manage Tivoli Workload Scheduler plan and database
objects. It provides, through the Tivoli Workload Scheduler connector,
conman and composer functionality. The Job Scheduling Console does not need to be installed on the same machine as the Tivoli Workload Scheduler engine or connector. You can use the Job Scheduling Console
from any machine as long as it has a TCP/IP link with the machine running
the Tivoli Workload Scheduler connector.
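
For illustration, a few typical entries from the two engine parameter files are sketched below. The option values are examples only, and the full set of options and their defaults is described in the product reference:

   # localopts - options local to this workstation (excerpt)
   thiscpu        =FTA1
   merge stdlists =yes
   nm port        =31111

   # globalopts - network-wide options kept on the master (excerpt)
   master         =MASTER
   start          =0600
   carryforward   =yes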


In the next sections we provide an overview of the Tivoli Workload Scheduler network and workstations, the topology used to describe the Tivoli Workload Scheduler architecture, the two basic aspects of job scheduling in Tivoli Workload Scheduler (the databases and the plan), and the terminology used in Tivoli Workload Scheduler.

2.1.1 The Tivoli Workload Scheduler network


A Tivoli Workload Scheduler network is made up of the workstations, or CPUs,
on which jobs and job streams are run.
A Tivoli Workload Scheduler network contains at least one Tivoli Workload
Scheduler domain, the master domain, in which the master domain manager is
the management hub. It is the master domain manager that manages the
databases and it is from the master domain manager that you define new objects
in the databases. Additional domains can be used to divide a widely distributed
network into smaller, locally managed groups.
In a single domain configuration, the master domain manager maintains
communications with all of the workstations (fault tolerant agents) in the Tivoli
Workload Scheduler network (see Figure 2-1).

Figure 2-1 Tivoli Workload Scheduler network with only one domain


Using multiple domains reduces the amount of network traffic by reducing the
communications between the master domain manager and the other computers
in the network. In Figure 2-2 we have a Tivoli Workload Scheduler network with
three domains.

Figure 2-2 Tivoli Workload Scheduler network with three domains

In a multi-domain configuration, the master domain manager communicates with the workstations in its domain and with the subordinate domain managers. The
subordinate domain managers, in turn, communicate with the workstations in
their domains and subordinate domain managers. Multiple domains also provide
fault-tolerance by limiting the problems caused by losing a domain manager to a
single domain. To limit the effects further, you can designate backup domain
managers to take over if their domain managers fail.
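
Domains such as DomainA and DomainB are themselves defined as scheduling objects in the database. A composer definition might look roughly like the following (the description is an illustrative assumption):

   DOMAIN DOMAINA
    DESCRIPTION "Domain for the European branch offices"
    PARENT MASTERDM
   END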
Before the start of each new day, the master domain manager creates a plan for
the next 24 hours. This plan is placed in a production control file, named
Symphony. Tivoli Workload Scheduler is then restarted in the network, and the
master domain manager sends a copy of the Symphony file to each of its
automatically linked agents and subordinate domain managers. The domain
managers, in turn, send copies of the Symphony file to their automatically linked
agents and subordinate domain managers.


Once the network is started, scheduling messages like job starts and
completions are passed from the agents to their domain managers, through the
parent domain managers to the master domain manager. The master domain
manager then broadcasts the messages throughout the hierarchical tree to
update the Symphony files of domain managers and fault tolerant agents running
in full status mode.
It is important to remember that Tivoli Workload Scheduler does not limit the
number of domains or levels (the hierarchy) of your Tivoli Workload Scheduler
network. The number of domains or levels in your Tivoli Workload Scheduler
network should be based on the topology of the physical network where you
want to implement the Tivoli Workload Scheduler network. See Section 3.4.1,
Network planning and considerations on page 134 for more information on how
to design a Tivoli Workload Scheduler network.
Figure 2-3 on page 21 shows an example of a multi-domain Tivoli Workload
Scheduler four-tier network, that is, a network with four levels:
1. Master domain manager, MASTERDM
2. DomainA and DomainB
3. DomainC, DomainD, DomainE, FTA1, FTA2, and FTA3
4. FTA4, FTA5, FTA6, FTA7, FTA8, and FTA9


Figure 2-3 A multi-tiered Tivoli Workload Scheduler network

2.1.2 Tivoli Workload Scheduler workstation types


For most cases, workstation definitions refer to physical workstations. However,
in the case of extended and network agents, the workstations are logical
definitions that must be hosted by a physical Tivoli Workload Scheduler
workstation.
Tivoli Workload Scheduler workstations can be of the following types:
Master domain manager (MDM)
The master domain manager in the topmost domain of a Tivoli Workload
Scheduler network. It contains the centralized database files used to
document scheduling objects. It creates the plan at the start of each day, and
performs all logging and reporting for the network. The plan is distributed to
all subordinate domain managers and fault tolerant agents.


Backup master
A fault tolerant agent or domain manager capable of assuming the
responsibilities of the master domain manager for automatic workload
recovery. The copy of the plan on the backup master is updated with the
same reporting and logging as the master domain manager plan.
Domain manager
The management hub in a domain. All communications to and from the
agents in a domain are routed through the domain manager. The domain
manager can resolve dependencies between jobs in its subordinate agents.
The copy of the plan on the domain manager is updated with reporting and
logging from the subordinate agents.
Backup domain manager
A fault tolerant agent capable of assuming the responsibilities of its domain
manager. The copy of the plan on the backup domain manager is updated
with the same reporting and logging information as the domain manager plan.
Fault tolerant agent (FTA)
A workstation capable of resolving local dependencies and launching its jobs
in the absence of a domain manager. It has a local copy of the plan generated
in the master domain manager. In Tivoli Workload Scheduler for z/OS, fault tolerant agents are referred to as fault tolerant workstations.
Standard agent
A workstation that launches jobs only under the direction of its domain
manager.
Extended agent
A logical workstation definition that enables you to launch and control jobs on
other systems and applications, such as Peoplesoft; Oracle Applications;
SAP; and MVS, JES2, and JES3.
Network agent
A logical workstation definition for creating dependencies between jobs and
job streams in separate Tivoli Workload Scheduler networks.
Job Scheduling Console client
Any workstation running the graphical user interface from which schedulers
and operators can manage Tivoli Workload Scheduler plan and database
objects. Actually this is not a workstation in the Tivoli Workload Scheduler
network; the Job Scheduling Console client is where you work with the Tivoli
Workload Scheduler database and plan.
Figure 2-4 on page 23 shows a Tivoli Workload Scheduler network with some of
the different workstation types.


Figure 2-4 TWS network with different manager/agent types
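
Each of these workstation types is ultimately described to Tivoli Workload Scheduler by a workstation definition in the database. A fault tolerant agent such as FTA1 in DomainA might be defined with composer syntax roughly like the following (the host name, port number, and option values are illustrative assumptions):

   CPUNAME FTA1
    DESCRIPTION "AIX fault tolerant agent"
    OS UNIX
    NODE fta1.acme.com
    TCPADDR 31111
    DOMAIN DOMAINA
    FOR MAESTRO
     TYPE FTA
     AUTOLINK ON
     FULLSTATUS OFF
    END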

2.1.3 Tivoli Workload Scheduler topology


Localized processing is key for choosing how to set up Tivoli Workload Scheduler domains for an enterprise. The idea is to separate or localize the enterprise's scheduling needs based on a common set of characteristics.
Common characteristics are things such as geographical locations, business
functions, and application groupings. Grouping related processing can limit the
amount of interdependency information that needs to be communicated between
domains. The benefits of localizing processing in domains are:
Decreased network traffic. Keeping processing localized to domains
eliminates the need for frequent interdomain communications.
Provides a convenient way to tighten security and simplify administration.
Security and administration can be defined at, and limited to, the domain
level. Instead of network-wide or workstation-specific administration, you can
have domain administration.
Network and workstation fault tolerance can be optimized. In a multiple
domain Tivoli Workload Scheduler network, you can define backups for each
domain manager, so that problems in one domain do not disrupt operations in
other domains.


In Section 3.4.1, Network planning and considerations on page 134, you can
find more information on how to configure a Tivoli Workload Scheduler network
based on your particular distributed network and environment.

2.1.4 Tivoli Workload Scheduler components


Tivoli Workload Scheduler uses several manager processes to efficiently
segregate and manage networking, dependency resolution, and job launching.
These processes communicate among themselves through the use of message
queues. Message queues are also used by the Tivoli Workload Scheduler
command line interface program, conman (Console Manager), and JSC to
integrate operator commands into the batch process.
A computer running Tivoli Workload Scheduler has several active TWS
processes. They are started as a system service, by the StartUp command, or
manually from the Job Scheduling Console. The following are the main
processes:
netman: The network listener program. netman accepts the initial connection from the remote mailman, spawns a new writer process, and then hands the connection over to the writer process.

writer: The network writer process that passes incoming messages from a remote workstation to the local mailman process.

mailman: The primary message management process that sends and receives inter-workstation messages.

batchman: The production control process. Working from Symphony, the plan, it runs job streams, resolves dependencies, and directs jobman to launch jobs.

jobman: The job management process that launches and tracks jobs under the direction of batchman.
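
These processes are normally brought up and taken down with the StartUp script and conman commands, as sketched below (run from the TWS home directory; exact paths depend on where the product is installed):

   StartUp          Starts the netman network listener (normally run at system boot)
   conman start     Starts mailman, batchman, and jobman for the current plan
   conman stop      Stops the production processes; netman continues to listen
   conman shut      Stops netman as well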

The Tivoli Workload Scheduler processes and their intercommunication via message files are shown in Figure 2-5 on page 25.


Figure 2-5 Tivoli Workload Scheduler inter-process communication

2.1.5 Tivoli Workload Scheduler database objects


Scheduling with Tivoli Workload Scheduler includes the capability to do the
following:
Schedule jobs across a network.
Group jobs into job streams according to, for example, function or application.
Set limits on the number of jobs that can run concurrently.
Create job streams based on day of the week, specified dates and times, or
by customized calendars.
Ensure correct processing order by identifying dependencies such as
successful completion of previous jobs, availability of resources, or existence
of required files.
Set automatic recovery procedures for unsuccessful jobs.
Forward incomplete jobs to the next plan for the next day (the next 24-hour
plan).
This is accomplished by defining scheduling objects in the Tivoli Workload
Scheduler databases residing on the master domain manager. Scheduling
objects are combined in Tivoli Workload Scheduler so they represent the
workload you want to be handled by Tivoli Workload Scheduler.


Scheduling objects are elements used to define your Tivoli Workload Scheduler
network and production. Scheduling objects include jobs, job streams,
dependencies, calendars, workstations, prompts, resources, parameters, and
users.
Scheduling objects can be created, modified, or deleted by using the Job
Scheduling Console or the command line interface, the composer program.
Workstation: Also referred to as CPU. Usually an individual computer on which jobs and job streams are run. Workstations are defined in the Tivoli Workload Scheduler database as a unique object. A workstation definition is required for every computer that executes jobs or job streams in the Tivoli Workload Scheduler network. When creating the workstation definition you specify the workstation type (fault tolerant agent, standard agent, or extended agent) as well as the function (for example domain manager or backup domain manager). Workstation definitions are also used to implement your Tivoli Workload Scheduler network or topology. When defining the workstation you name the domain that the workstation should be part of, or you create a new domain containing the workstation.

Workstation class: A group of workstations. Any number of workstations can be placed in a class. Job streams and jobs can be assigned to execute on a workstation class. This makes replication of a job or a job stream across many workstations easy.

Job: A script or command, run on the user's behalf, and run and controlled by Tivoli Workload Scheduler. A job definition includes a pointer (path) to the executable on the workstation (the local machine) where the job is going to be executed, the logon that it will execute as, and what to do if the executable fails (stop, rerun, or continue).

Job stream: Also referred to as schedule. A mechanism for grouping jobs by function or application on a particular day and time. A job stream definition includes a set of jobs, the job dependencies, dependencies on the job stream, and for which production day(s) the job stream will be selected for the plan. A run cycle specifies the days that a job stream is scheduled to run. Run cycles are defined as part of a job stream and may include calendars that were previously defined. There are three types of run cycles: A simple run cycle, a weekly run cycle, and a calendar run cycle. Each type of run cycle can be included or excluded. That is, each run cycle can define the days when a job stream is included in the plan, or when the job stream is specifically excluded from the plan.

Calendar: Contains a list of scheduling dates. Each calendar can be assigned to multiple job streams. Calendars are used to specify run days for the job streams. A calendar can be used as an inclusionary or exclusionary run cycle.

Prompt: An object that can be used as a dependency for jobs and job streams. A prompt must be answered affirmatively for the dependent job or job stream to launch. There are two types of prompts: Predefined and ad hoc. An ad hoc prompt is defined within the properties of a job or job stream and is unique to that job or job stream. A predefined prompt is defined in the Tivoli Workload Scheduler database and can be used by any job or job stream.

Resource: An object representing either physical or logical resources on your system. Once defined in the Tivoli Workload Scheduler database, resources can be used as dependencies for jobs and job streams. For example, you can define a resource named tapes with a unit value of two, then define jobs that require two available tape drives as a dependency. Jobs with this dependency cannot run concurrently because each time a job is run the tapes resource is in use. (A sample composer definition is shown after this list.)

Parameter: A parameter is used to substitute values into your jobs and job streams. When using a parameter in a job script, the value is substituted at runtime. In this case, the parameter must be defined on the workstation where it will be used. Parameters cannot be used when scripting extended agent jobs. Parameters can be used to pass variable values (dates, results, etc.) from one job to another. This way the parameters serve as a repository for the jobs.

User: For Windows only, the user name specified in a job definition's Logon field must have a matching user definition. The definitions furnish the user passwords required by Tivoli Workload Scheduler to launch jobs.
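
For example, the tape-drive resource and a predefined prompt like those described above could be defined with composer in a form similar to the following sketch (object names and texts are illustrative, and the exact section keywords should be checked against your composer reference):

   $RESOURCE
   FTA1#TAPES 2 "Tape drives on FTA1"

   $PROMPT
   BACKUPOK "Has the nightly backup completed? Reply YES to continue."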


Defining objects in the Tivoli Workload Scheduler databases


Scheduling is the process of defining objects in the Tivoli Workload Scheduler
database.
Scheduling can be accomplished either through the Tivoli Workload Scheduler command line, using the composer program to access the databases and the conman program to work with the plan, or through the Tivoli Job Scheduling Console.
Scheduling includes the following tasks:
Defining and maintaining workstations
Defining and maintaining job streams
Defining and maintaining other scheduling objects
Starting and stopping production processing
Viewing and modifying jobs and job streams

Defining scheduling objects


Scheduling objects are workstations, workstation classes, domains, jobs, job streams, resources, prompts, calendars, and parameters. Scheduling objects are managed with the composer program or through the Job Scheduling Console and are stored in the Tivoli Workload Scheduler database.
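
From the command line, typical composer operations follow this pattern (a hedged sketch; the file name is an example):

   composer "create /tmp/cpus.txt from cpu=@"
   composer "add /tmp/cpus.txt"
   composer "display sched=@#@"

The first command extracts all workstation definitions to a text file, the second loads new or changed definitions back into the database, and the third displays every job stream definition.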

Creating job streams


Tivoli Workload Scheduler's primary processing task is to run job streams. A job stream is an outline of batch processing consisting of a list of jobs. Although job streams can be defined from the product's command line interface, using the Job Stream Editor of the Job Scheduling Console is the recommended way to create and modify job streams. The Job Stream Editor is for working with the jobs and the follows dependencies between the jobs, as well as the run cycles of the job stream. The job stream properties window is for specifying time restrictions, resource dependencies, file dependencies, and prompt dependencies at the job stream level.

Dependencies
A condition that must be met in order to launch a job or job stream. Note that
dependencies can be created on a job stream level or a job level.
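
Defined from the command line, a simple job stream with a run cycle, a start time, and a job-level follows dependency might look roughly like the following composer sketch (workstation and job names are illustrative assumptions):

   SCHEDULE FTA1#PAYROLL
   ON WEEKDAYS
   AT 0600
   CARRYFORWARD
   :
   PAYEXTRACT
   PAYUPDATE FOLLOWS PAYEXTRACT
   END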


Setting job recovery


When defining a job, take into account the possibility that in some instances the job may not complete successfully. It is possible to define a recovery
option and recovery actions when defining the job. One of the following recovery
options is possible:
Not continuing with the next job. This stops the execution of the job stream
and puts it in the stuck state. This is the default action.
Continuing with the next job.
Running the job again
It is possible to define recovery jobs, for example, to back up or restore a
database. This recovery job can then be run before the failing job is run again.
Furthermore, you can define prompts to be issued when the job fails or when the
rerun job fails.
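
A recovery job and a rerun option might be attached to a job definition along these lines (a sketch only; script paths, logons, the recovery prompt text, and the exact keyword spelling are assumptions to verify against your composer reference):

   $JOBS
   FTA1#RESTOREDB
    SCRIPTNAME "/usr/local/scripts/restore_db.sh"
    STREAMLOGON dbadmin
    DESCRIPTION "Recovery job that restores the database"
    RECOVERY STOP

   FTA1#PAYUPDATE
    SCRIPTNAME "/usr/local/scripts/pay_update.sh"
    STREAMLOGON dbadmin
    RECOVERY RERUN AFTER FTA1#RESTOREDB ABENDPROMPT "pay_update failed - restore and rerun?"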

2.1.6 Tivoli Workload Scheduler plan


The Tivoli Workload Scheduler plan is created daily and it always covers 24
hours. This 24-hour period is also referred to as a production day in Tivoli
Workload Scheduler. The plan start and end time does not have to conform to
the actual calendar day. It normally will be offset. For example, the default plan in
Tivoli Workload Scheduler runs from 6:00 a.m. to 5:59 a.m. the next day.
When it is time (5:59 a.m. in this example) to create a new 24-hour plan, Tivoli
Workload Scheduler executes a program that selects, from the databases found
on the master domain manager, the job streams that are to run in the next
24-hour plan. Tivoli Workload Scheduler looks at the run cycles in the job
streams in the database, and if the run cycle specifies that the job stream is
going to run in the next 24-hour period, this job stream together with all the
objects used in the job stream (jobs, prompts, resources, etc.) will be copied into
the plan file. Then another program includes the uncompleted schedules from
the previous 24-hour plan into the current 24-hour plan and logs all the previous day's statistics into an archive.
The plan-creating process is carried out with the Tivoli-supplied job, Jnextday.
Jnextday reads the Tivoli Workload Scheduler databases and creates the plan
(see Figure 2-6 on page 30).


Figure 2-6 Creating a new 24-hour plan in Tivoli Workload Scheduler
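
The Jnextday job is normally run from the final job stream described later in this chapter. A simplified sketch of such a definition, modeled on the sample shipped with the product, is shown below (the workstation name MASTER and the 0559 start time are assumptions for this example):

   SCHEDULE MASTER#FINAL
   ON EVERYDAY
   AT 0559
   CARRYFORWARD
   :
   JNEXTDAY
   END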

A copy of the Symphony file is sent to all subordinate domain managers and to
all the fault tolerant agents in the same domain.
The subordinate domain managers distribute their copy to all the fault tolerant
agents in their domain and to all the domain managers that are subordinate to
them, and so on down the line. This enables fault tolerant agents throughout the
network to continue processing even if the network connection to their domain
manager is down. From the Job Scheduling Console or the command line
interface, the operator can view and make changes in the day's production by
making changes in the Symphony file.
The distribution of the Symphony file from master domain manager, to domain
managers and their subordinate agents is shown in Figure 2-7 on page 31.


Figure 2-7 The distribution of the plan (Symphony file) in a TWS network

Tivoli Workload Scheduler processes monitor the Symphony file and make calls
to the operating system to launch jobs as required. The operating system runs
the job, and in return informs Tivoli Workload Scheduler whether the job has
completed successfully or not. This information is entered into the Symphony file
to indicate the status of the job. This way the Symphony file is continuously being
updated to reflect the work that needs to be done, the work in progress, and the
work that has been completed.
It is important to remember that the Symphony file contains copies of the objects
read from the Tivoli Workload Scheduler databases. This way any updates,
changes, or modifications made to objects in the Symphony file will not be
reflected in the Tivoli Workload Scheduler databases.

Plan creation in detail

The processing day of Tivoli Workload Scheduler begins at the time defined by the start global option parameter (defined in the master domain manager's globalopts configuration file), which is set by default to 6:00 a.m. To turn over a new day, preproduction setup is performed for the upcoming day, and post-production logging and reporting is performed for the day just ended.
Pre- and post-production processing can be fully automated by adding the
Tivoli-supplied final job stream, or a user-supplied equivalent, to the Tivoli
Workload Scheduler database along with other job streams.


The final job stream is placed in the plan every day, and results in running a job
named Jnextday prior to the start of a new day. The major steps are depicted in
Figure 2-8 on page 33. The job performs the following tasks:
The scheduler program runs (Step 1 in Figure 2-8 on page 33). The
scheduler reads the system date and selects job streams for the new day's
plan (read from the mastsked database). Only the job streams whose run
cycles include that date are selected. The job streams are placed in the
production schedule file (prodsked).
The compiler program runs (Step 2 in Figure 2-8 on page 33). The compiler
creates an interim plan file (Symnew). The compiler does this by importing
into the Symnew file all the scheduling objects (jobs, prompts, resources, and
NT users) referenced by the job streams in the production schedule file.
Prints preproduction reports.
Stops Tivoli Workload Scheduler (stops and unlinks from the subordinate
agents).
The stageman program runs (Step 3 in Figure 2-8 on page 33). This program
carries forward into the new plan incomplete job streams from the old plan.
Only incomplete job streams whose Carry Forward option is set are carried
forward. Stageman then archives the old plan file in the schedlog directory
and installs the new plan (Symphony). A duplicate copy of the plan called
Sinfonia is created for distribution to the agents.
Starts Tivoli Workload Scheduler for the new day.
Prints post-production reports for the previous day.
Logs job statistics for the previous day.
These steps are run on the master domain manager.
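
In outline, the Jnextday script on the master runs roughly the following sequence
of programs. This is a simplified sketch: the real script passes file names and
options to each program and adds error checking.

   schedulr       # select job streams for the new day and write prodsked
   compiler       # build the interim plan file (Symnew) from prodsked
   reptr          # print preproduction reports
   conman stop    # stop TWS and unlink the subordinate agents
   stageman       # carry forward, archive the old plan, create Symphony and Sinfonia
   conman start   # start TWS for the new day
   reptr          # print post-production reports for the previous day
   logman         # log job statistics for the previous day
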
When Tivoli Workload Scheduler starts up again on the master it automatically
links to the subordinate agents in the network. The agents receive the updated
Sinfonia file from the master, save the Sinfonia file as their new local Symphony
file, and resume processing.


Figure 2-8 Detailed description of the steps of the Jnextday script

Running job streams in the plan


Depending on their run cycle definition, job streams are taken from the Tivoli
Workload Scheduler database and automatically inserted in the daily production
plan. Actually it is a copy of the job stream and related objects that are placed in
the daily production plan, the Symphony file.
While the job stream is in the plan, and as long as it has not completed, it can still
be modified in any of its components. That is, you can modify the job stream
properties, the properties of its jobs, their sequence, the workstation or resources
they use, and so on, in order to handle last-minute contingencies. The
best way to do this is by means of the job stream instance editor of the Job
Scheduling Console, where the term instance implies a scheduling object that
has been included in the current plan.
Note that since all objects are copied from the Tivoli Workload Scheduler
database to the daily plan, changes made to any of these objects in the plan will
not be reflected in the objects in the database.
You can also hold, release, or cancel a job stream, as well as change the
maximum number of jobs within the job stream that can run concurrently. You
can change the priority previously assigned to the job stream and release the job
stream from all its dependencies.
Last minute changes to the current production plan include the possibility of
submitting jobs and job streams that are already defined in the Tivoli Workload
Scheduler database but were not included in the plan. You can also submit jobs
that are being defined ad hoc. These jobs are submitted to the current plan but
are not stored in the database.


Furthermore, you can rerun completed or in-error jobs as well as job streams in
the plan.
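
From the conman command line, for example, such last-minute actions might look
like the following sketch (the workstation, job stream, and job names are
illustrative):

   conman "sbs FTA1#ADHOC"          # submit a job stream from the database into the plan
   conman "sbj FTA1#CLEANUP"        # submit a single job defined in the database
   conman "rr FTA1#DAILY.JOB1"      # rerun a job that completed or ended in error

The same actions are also available from the Job Scheduling Console.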

Monitoring instances in the plan


Monitoring is done by listing plan objects (instances) in the Job Scheduling
Console. Using lists, you can see the status of all or of subsets of the following
objects in the current plan:
Job stream instances.
Job instances.
Domains.
Workstations.
Resources.
File dependencies, where a file dependency is when a job or job stream
needs to verify the existence of one or more files before it can begin
execution.
Prompt dependencies, where a prompt dependency is when a job or job
stream needs to wait for an affirmative response to a prompt before it can
begin execution.
You can also use these lists for managing some of these objects. You can, for
instance, reallocate resources, link or unlink workstations, kill jobs, or switch a
domain manager.
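
The equivalent actions can also be performed from the conman command line, for
example (workstation and job names are illustrative):

   conman "link FTA2"               # link a workstation
   conman "unlink FTA2"             # unlink a workstation
   conman "kill FTA1#DAILY.JOB1"    # kill a running job instance
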
Additionally, you can monitor the daily plan with Tivoli Business Systems
Manager, an object-oriented systems management application that provides
monitoring and event management of resources, applications and subsystems,
and is integrated with Version 8.1 of Tivoli Workload Scheduler.
Network managers can use Tivoli Workload Scheduler/NetView, a NetView
application, to monitor and diagnose Tivoli Workload Scheduler networks from a
NetView management node. This includes a set of submaps and symbols to view
Tivoli Workload Scheduler networks topographically and determine the status of
the job scheduling activity and critical Tivoli Workload Scheduler processes on
each workstation. Menu actions are provided to start and stop Tivoli Workload
Scheduler processing, and to run conman on any workstation in the network. In
Chapter 7, Tivoli NetView integration on page 381, we will give examples of
using this integration.

2.1.7 Other Tivoli Workload Scheduler features


Besides the basic functions in Tivoli Workload Scheduler you also have features
like reporting, auditing, security, and time zones.


Reporting
As part of the preproduction and post-production processes, reports are
generated that show summary or detail information about the previous or next
production day. These reports can also be generated ad hoc. The available
reports are:
Job details listing
Prompt listing
Calendar listing
Parameter listing
Resource listing
Job history listing
Job histogram
Planned production schedule
Planned production summary
Planned production detail
Actual production summary
Actual production detail
Cross reference report
In addition, during production, a standard list file (STDLIST) is created for each
job instance launched by Tivoli Workload Scheduler. Standard list files contain
header and trailer banners, echoed commands, and errors and warnings. These
files can be used to troubleshoot problems in job execution.
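
Most of these reports map to command-line report utilities shipped with Tivoli
Workload Scheduler. The mapping below is indicative only; check the reference
manual for the exact command options on your system:

   rep1      # job details listing
   rep2      # prompt listing
   rep3      # calendar listing
   rep4a     # parameter listing
   rep4b     # resource listing
   rep7      # job history listing
   rep8      # job histogram
   reptr     # planned and actual production reports
   xref      # cross-reference report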

Auditing
An auditing option helps track changes to the database and the plan.
For the database, all user modifications are logged, although the delta (the actual
content) of the modification is not recorded. If an object is opened and saved, the
action is logged even if no modification has been made.
For the plan, all user modifications to the plan are logged. Actions are logged
whether they are successful or not.
Audit files are logged to a flat text file on individual machines in the Tivoli
Workload Scheduler network. This minimizes the risk of audit failure due to
network issues and allows a straightforward approach to writing the log. The log
formats are the same for both plan and database in a general sense. The logs
consist of a header portion, which is the same for all records; an action ID; and a
section of data, which varies according to the action type. All data is kept in clear
text and formatted to be readable and editable with a text editor such as vi or
Notepad.
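
Auditing is switched on through two global options. A sketch of the relevant
globalopts entries follows (0 disables logging, 1 enables it; verify the option
names against the reference manual for your release):

   plan audit level     = 1
   database audit level = 1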

Options and security


The Tivoli Workload Scheduler option files determine how Tivoli Workload
Scheduler runs on your system. Several performance, tuning, security, logging,
and other configuration options are available.

Setting Global and Local options


Global options are defined on the master domain manager and apply to all
workstations in the Tivoli Workload Scheduler network. Global options are
entered in the globalopts file with a text editor. Changes can be made at any
time, but they do not take effect until Tivoli Workload Scheduler is stopped and
restarted. Global options are used to:
Set the name of the master domain manager.
Determine if expanded database format is used (expanded database format
allows usage of long object names).
Determine whether uncompleted job streams will be carried forward from the
old to the new production control file.
Define the start time of the Tivoli Workload Scheduler processing day.
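
For example, a minimal globalopts file might look like the following sketch.
The values are illustrative, and the option names should be verified against the
reference manual for your release:

   # globalopts on the master domain manager
   company          = ITSO
   master           = MASTER      # name of the master domain manager workstation
   start            = 0600        # start time of the TWS processing day
   carryforward     = yes         # carry forward uncompleted job streams
   expanded version = yes         # use the expanded (long object names) database format
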
Local options are entered with a text editor into a file named localopts, which
resides in the Tivoli Workload Scheduler user's home directory. The local options
are defined on each workstation and apply only to that workstation. Local options
are used to:
Specify the name of the local workstation.
Prevent the launching of jobs executed by root in UNIX.
Prevent unknown clients from connecting to the system.
Specify a number of performance options.
Specify a number of logging preferences.
Specify date format and command line prompts.
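
For example, a few typical localopts entries on a fault tolerant agent might look
like this sketch (values are illustrative; check the option names and defaults for
your installation):

   # localopts on workstation FTA1
   thiscpu        = FTA1     # name of this workstation
   jm no root     = yes      # prevent launching of jobs as root (UNIX)
   nm ipvalidate  = full     # reject connections from unknown clients
   nm port        = 31111    # netman listening port
   merge stdlists = yes      # logging preference
   date format    = 1        # date format used by the command line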

Setting security
Security is accomplished with the use of a security file that contains one or more
user definitions. Each user definition identifies a set of users, the objects they are
permitted to access, and the types of actions they can perform.


A template file is installed with the product. The template must be edited to
create the user definitions, and compiled and installed with a utility program to
create a new operational security file. After it is installed, further modifications
can be made by creating an editable copy with another utility.
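
For example, on UNIX the usual cycle is to dump, edit, and recompile the file
(the path is illustrative):

   dumpsec > /tmp/Security.txt      # create an editable copy of the operational file
   vi /tmp/Security.txt             # edit the user definitions
   makesec /tmp/Security.txt        # compile and install the new security file

A much simplified user definition inside the file might look like this sketch:

   USER MAESTRO
     CPU=@+LOGON=maestro
   BEGIN
     JOB       CPU=@   ACCESS=@
     SCHEDULE  CPU=@   ACCESS=@
     RESOURCE  CPU=@   ACCESS=@
     PROMPT            ACCESS=@
     CALENDAR          ACCESS=@
   END
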
Each workstation in a Tivoli Workload Scheduler network has its own security
file. An individual file can be maintained on each workstation, or a single security
file can be created on the master domain manager and copied to each domain
manager, fault tolerant agent, and standard agent.

Using time zones


Tivoli Workload Scheduler supports time zones. Enabling time zones provides
the ability to manage your workload on a global level. Time-zone
implementation also allows for easy scheduling across multiple time zones and
for jobs that need to run in the dead zone. The dead zone is the gap between the
Tivoli Workload Scheduler start of day time on the master and the time on the
fault tolerant agent in another time zone. For example, if an eastern master with
a Tivoli Workload Scheduler start of day of 6 a.m. initializes a western agent with
a 3-hour time-zone difference, the dead zone for this agent is between the hours
of 3 a.m. and 6 a.m. Previously, special handling was required to run jobs in this
time period. Now when specifying a time zone with the start time on a job or job
stream, Tivoli Workload Scheduler runs them as expected.
Once enabled, time zones can be specified in the Job Scheduling Console or
composer for start and deadline times within jobs and job streams.
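
Time zones are enabled through a global option; a sketch of the globalopts entry
is shown below (the option name is as I recall it for this release and should be
verified on your installation):

   Timezone enable = yes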

2.1.8 Making the Tivoli Workload Scheduler network fail-safe


Being prepared for network problems, as well as problems with the operating
systems, makes recovery easier. In particular, you should make sure that you have
a backup master for your master domain manager and backup domain
managers for the management hub domain managers in your Tivoli Workload
Scheduler network. The Tivoli Workload Scheduler network can be made more
fail-safe by performing the following actions:
Designate fault tolerant agents in the domain to be a backup master domain
manager and backup domain managers.
Make certain that the Full Status and Resolve Dependencies modes are
selected in the workstation definition for the backup managers. To make sure
that the Symphony files on the backup managers are updated with the same
information as the domain manager, the Full Status and Resolve
Dependencies modes must be selected on the workstation definition.


Ensure that the domain managers (including the master domain manager)
have full status and resolve dependencies turned on. This is important if you
need to resort to long-term recovery, where the backup master generates a
Symphony file (runs Jnextday). If those flags are not enabled, the former
master domain manager shows up as a regular fault tolerant agent after the
first occurrence of Jnextday. During normal operations, the Jnextday job
automatically turns on the full status and resolve dependency flags for the
master domain manager, if they are not already turned on. When the new
master runs Jnextday, it does not recognize the former master domain
manager as a backup master unless those flags are enabled. The former
master does not have an accurate Symphony file when the time comes to
switch back. Treat all domain manager workstation definitions as if they
were backup domain manager definitions for the new domain managers. This
ensures true fault tolerance.
For the standby master domain manager, it may be necessary to transfer files
between the master domain manager and its standby. For this reason, the
computers must have compatible operating systems. Do not combine UNIX
with Windows NT computers, and in UNIX, do not combine big-endian with
little-endian computers.
Terminology note: Endian refers to which bytes are most significant in
multi-byte data types. In big-endian architectures, the left-most bytes
(those with a lower address) are most significant. In little-endian
architectures, the right-most bytes are most significant. Mainframes and
most RISC-based systems, including AIX, are big-endian. Intel-based
systems generally use little-endian format. PowerPC uses both.

On a daily basis, following start-of-day processing on the master domain
manager, make copies of the TWShome\mozart and TWShome\..\unison\network
directories, and the TWShome\Sinfonia file. The copies can then be moved to the
standby master domain manager, if necessary (a command sketch is shown at
the end of this section).
Note: For a UNIX master domain manager, if the TWShome/mozart and
../unison/network directories on the current master domain manager are
reasonably static, they can be copied to the standby beforehand. During
normal operation, they are hidden when you mount the current master
domain manager's directories on the standby. If it becomes necessary to
switch to the standby, simply unmounting the current master domain
manager's directories will make the standby's copies accessible.
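
As a sketch of both practices on UNIX (the host name, installation path, and the
BACKUP workstation name are illustrative; MASTERDM is the default name of the
master domain):

   # after Jnextday completes on the master, copy the databases and Sinfonia to the standby
   cd /opt/tws/twsuser
   tar cf - mozart ../unison/network Sinfonia | ssh standby-host "cd /opt/tws/twsuser && tar xf -"

   # if the master fails, promote the backup with conman's switchmgr command
   conman "switchmgr MASTERDM;BACKUP"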


2.2 Tivoli Workload Scheduler for z/OS architecture


Tivoli Workload Scheduler for z/OS expands the scope for automating your data
processing operations. It plans and automatically schedules the production
workload. From a single point of control, it drives and controls the workload
processing at both local and remote sites. By using Tivoli Workload Scheduler for
z/OS to increase automation, you use your data processing resources more
efficiently, have more control over your data processing assets, and manage
your production workload processing better.
Tivoli Workload Scheduler for z/OS is composed of three major features:
The Tivoli Workload Scheduler for z/OS agent feature
The agent is the base product in Tivoli Workload Scheduler for z/OS. The
agent is also called a tracker. It must run on every operating system in your
z/OS complex on which Tivoli Workload Scheduler for z/OS controlled work
runs. The agent records details of job starts and passes that information to
the engine, which updates the plan with statuses.
The Tivoli Workload Scheduler for z/OS engine feature
One z/OS operating system in your complex is designated the controlling
system and it runs the engine. The engine is also called the controller. Only
one engine feature is required, even when you want to establish standby
engines on other z/OS systems in a sysplex.
The engine manages the databases and the plans and causes the work to be
submitted at the appropriate time and at the appropriate system in your z/OS
sysplex or on another system in a connected z/OS sysplex or z/OS system.
The Tivoli Workload Scheduler for z/OS end-to-end feature
This feature makes it possible for the Tivoli Workload Scheduler for z/OS
engine to manage a production workload in a Tivoli Workload Scheduler
distributed environment. You can schedule, control, and monitor jobs in Tivoli
Workload Scheduler from the Tivoli Workload Scheduler for z/OS engine with
this feature.
The end-to-end feature is covered in Section 2.3, End-to-end scheduling
architecture on page 63.
The workload on other operating environments can also be controlled with the
open interfaces provided with Tivoli Workload Scheduler for z/OS. Sample
programs using TCP/IP or a Network Job Entry/Remote Spooling
Communications Subsystem (NJE/RSCS) combination show you how you can
control the workload on environments that at present have no scheduling
feature.


Besides these major parts, the Tivoli Workload Scheduler for z/OS product also
contains the Tivoli Workload Scheduler for z/OS connector and the Job
Scheduling Console (JSC).
Tivoli Workload Scheduler for z/OS connector
Maps the Job Scheduling Console commands to the Tivoli Workload
Scheduler for z/OS engine. The Tivoli Workload Scheduler for z/OS
connector requires the Tivoli Management Framework, configured as a
Tivoli server or a Tivoli managed node.
Job Scheduling Console
A Java-based graphical user interface (GUI) for the Tivoli Workload
Scheduler suite.
The Job Scheduling Console runs on any machine from which you want to
manage Tivoli Workload Scheduler for z/OS engine plan and database
objects. It provides, through the Tivoli Workload Scheduler for z/OS
connector, functionality similar to the Tivoli Workload Scheduler for z/OS
legacy ISPF interface. You can use the Job Scheduling Console from any
machine as long as it has a TCP/IP link with the machine running the Tivoli
Workload Scheduler for z/OS connector.
The same Job Scheduling Console can be used for Tivoli Workload
Scheduler and Tivoli Workload Scheduler for z/OS.
In the next topics we will provide an overview of Tivoli Workload Scheduler for
z/OS configuration, the architecture, and the terminology used in Tivoli Workload
Scheduler for z/OS.

2.2.1 Tivoli Workload Scheduler for z/OS configuration


Tivoli Workload Scheduler for z/OS supports many configuration options using a
variety of communication methods:
The controlling system (the controller or engine)
Controlled z/OS systems
Remote panels and program interface applications
Job Scheduling Console
Scheduling jobs that are in a distributed environment using Tivoli Workload
Scheduler (described in Section 2.3, End-to-end scheduling architecture on
page 63)


The controlling system


The controlling system requires both the agent and the engine. One controlling
system can manage the production workload across all of your operating
environments.
The engine is the focal point of control and information. It contains the controlling
functions, the dialogs, the databases, the plans, and the scheduler's own batch
programs for housekeeping, etc. Only one engine is required to control the entire
installation, including local and remote systems.
Since Tivoli Workload Scheduler for z/OS provides a single point of control for
your Tivoli Workload Scheduler for z/OS production workload, it is important to
make this system fail-safe. This way you minimize the risk of having any outages
in your production workload in case the engine or the system with the engine
fails. To make your engine fail-safe you can start backup engines (hot standby
engines) on other systems in the same sysplex as the active engine. If the active
engine or the controlling system fails, Tivoli Workload Scheduler for z/OS can
automatically transfer the controlling functions to a backup system within a
parallel sysplex. Through Cross Coupling Facility (XCF), Tivoli Workload
Scheduler for z/OS can automatically maintain production workload processing
during system failures. The standby engine can be started on several z/OS
systems in the sysplex.
Figure 2-9 on page 42 shows an active engine with two standby engines running
in one sysplex. When an engine is started on a system in the sysplex, it will
check whether there is already an active engine in the sysplex. If there are no active
engines, it becomes the active engine. If there is an active engine, it becomes a
standby engine. The engine in the example has connections to eight agents: three in the
sysplex, two remote, and three in another sysplex. The agents on the remote
systems and in the other sysplexes are connected to the active engine via
ACF/VTAM connections.


Figure 2-9 TWS for z/OS - Two sysplex environments and stand-alone systems

Controlled z/OS systems


An agent is required for every controlled z/OS system in a configuration. This
includes, for example, local controlled systems within shared DASD or sysplex
configurations.
The agent runs as a z/OS subsystem and interfaces with the operating system
(through JES2 or JES3, and SMF), using the subsystem interface and the
operating system exits. The agent monitors and logs the status of work, and
passes the status information to the engine via shared DASD, XCF, or
ACF/VTAM.
You can exploit z/OS and the cross-system coupling facility (XCF) to connect
your local z/OS systems. Rather than being passed to the controlling system via
shared DASD, work status information is passed directly via XCF connections.
XCF lets you exploit all of the production workload restart facilities and the hot
standby function in TWS for z/OS.


Remote systems
The agent on a remote z/OS system passes status information about the
production work in progress to the engine on the controlling system. All
communication between Tivoli Workload Scheduler for z/OS subsystems on the
controlling and remote systems is done via ACF/VTAM.
Tivoli Workload Scheduler for z/OS lets you link remote systems using
ACF/VTAM networks. Remote systems are frequently used locally on premises
to reduce the complexity of the data processing installation.

Remote panels and program interface applications


ISPF panels and program interface (PIF) applications can run in a different z/OS
system than the one where the active engine is running. Dialogs and PIF
applications send requests to and receive data from a Tivoli Workload Scheduler
for z/OS server that is running on the same z/OS system where the target engine
is running, via advanced program-to-program communications (APPC). The
APPC server will communicate with the active engine to perform the requested
actions.
Using an APPC server for ISPF panels and PIF gives users the freedom to run
ISPF panels and PIF on any system in a z/OS enterprise as long as this system
has advanced program-to-program communication with the system where the
active engine is started. This also means that you do not have to make sure that
your PIF jobs always run on the z/OS system where the active engine is started.
Furthermore, using the APPC server makes it transparent to panel users and PIF
programs if the engine is moved to its backup engine.
The APPC server is a separate address space, started and stopped either
automatically by the engine, or by the user via the z/OS start command. There
can be more than one server for an engine. If the dialogs or the PIF applications
run on the same z/OS system where the target engine is running, the server may
not be involved. As shown in Figure 2-10 on page 44, it is possible to run the
Tivoli Workload Scheduler for z/OS dialogs and PIF applications from any system
as long as the system has an ACF/VTAM connection to the APPC server.


Figure 2-10 APPC server with remote panels and PIF access to TWS for z/OS

Job Scheduling Console


The Job Scheduling Console (JSC or JS Console) is another way to work with
the Tivoli Workload Scheduler for z/OS databases and current plan. The Job
Scheduling Console provides a graphical user interface and connects to the
Tivoli Workload Scheduler for z/OS engine via a Tivoli Workload Scheduler
for z/OS TCP/IP server task.
The TCP/IP server is a separate address space, started and stopped either
automatically by the engine or by the user via the z/OS start and stop command.
There can be more than one TCP/IP server for an engine.


Figure 2-11 JSC connection to Tivoli Workload Scheduler for z/OS

2.2.2 Tivoli Workload Scheduler for z/OS database objects


Scheduling with Tivoli Workload Scheduler for z/OS includes the capability to do
the following:
Schedule jobs across multiple systems, both locally and remotely.
Group jobs into job streams according to, for example, function or application,
and define advanced run cycles based on customized calendars for the job
streams.
Set workload priorities and specify times for the submission of particular work.
Base submission of work on availability of resources.
Automatically tailor jobs based on dates, date calculations, and so on.
Ensure correct processing order by identifying dependencies such as
successful completion of previous jobs, availability of resources, and time of
day.
Define automatic recovery and restart for jobs.
Forward incomplete jobs to the next production day.
This is accomplished by defining scheduling objects in the Tivoli Workload
Scheduler for z/OS databases managed by the active engine and shared by the
standby engines. Scheduling objects are combined in the Tivoli Workload
Scheduler for z/OS databases so they represent the workload you want to be
handled by Tivoli Workload Scheduler for z/OS.


The Tivoli Workload Scheduler for z/OS databases contain information about the
work that is to be run, when it should be run, and the resources that are needed
and available. This information is used to calculate a forward forecast called the
long-term plan.
Scheduling objects are elements used to define your Tivoli Workload Scheduler
for z/OS workload. Scheduling objects include job streams (jobs and
dependencies as part of job streams), workstations, calendars, periods, operator
instructions, resources, and JCL variables.
All these scheduling objects can be created, modified, or deleted by using the
legacy Tivoli Workload Scheduler for z/OS ISPF panels. Job streams,
workstations, and resources can be managed from the Job Scheduling Console
as well.
Job streams

A job stream is a description of a unit of production work. It
includes a list of jobs (related tasks) associated with that unit of
work. For example, a payroll job stream might include a
manual task in which an operator prepares a job, several
computer-processing tasks in which programs are run to read
a database, update employee records, and write payroll
information to an output file; and a print task that prints
paychecks.
Tivoli Workload Scheduler for z/OS schedules work based on
the information you provide in your job stream description. A
job stream can include the following:
A list of the jobs (related tasks) associated with that unit of
work, such as:
Data entry
Job preparation
Job submission or started-task initiation
Communication with the NetView program
File transfer to other operating environments
Printing of output
Post-processing activities, such as quality control or
dispatch
Other tasks related to the unit of work that you want to
schedule, control, and track
A description of dependencies between jobs within a job
stream and between jobs in other job streams


Information about resource requirements, such as exclusive
use of a dataset
Special operator instructions that are associated with a job
How, when, and where each job should be processed
Run policies for that unit of work; that is, when it should be
scheduled or, alternatively, the name of a group definition
that records the run policy
Workstations

When scheduling and processing work, Tivoli Workload
Scheduler for z/OS considers the processing requirements of
each job. Some typical processing considerations are:
What human or machine resources are required for
processing the work; for example, operators, processors, or
printers?
When are these resources available?
How will these jobs be tracked?
Can this work be processed somewhere else if the resources
become unavailable?
You can plan for maintenance windows in your hardware and
software environments. Tivoli Workload Scheduler for z/OS
enables you to perform a controlled and incident-free
shutdown of the environment, preventing last-minute
cancellation of active tasks. You can choose to reroute the
workload automatically during any outage, planned or
unplanned.
Tivoli Workload Scheduler for z/OS tracks jobs as they are
processed at workstations and dynamically updates the plan
with real-time information on the status of jobs. You can view or
modify this status information online using the workstation
ready lists in the dialog.

Dependencies

In general, every data processing related activity must occur in
a specific order. Activities performed out of order will, at the
very least, create invalid output; in the worst case your
corporate data will be corrupted. In any case, the result is
costly reruns, missed deadlines, and unsatisfied customers.
You can define dependencies for jobs when a specific
processing order is required. When Tivoli Workload Scheduler
for z/OS manages the dependent relationships for you, the
jobs are always started in the correct order every time they are
scheduled. A dependency is called internal when it is between
two jobs in the same job stream, and external when it is
between two jobs in different job streams.
You can work with job dependencies graphically from the Job
Scheduling Console (see Figure 2-12).

Figure 2-12 Job Scheduling Console display of dependencies between jobs

Calendars

Tivoli Workload Scheduler for z/OS uses information about
when your departments work and when they are free, so
that job streams are not scheduled to run on days when
processing resources are not available (for example, Sundays
and holidays). This information is stored in a calendar. Tivoli
Workload Scheduler for z/OS supports multiple calendars for
enterprises where different departments have different work
days and free days (different groups within a business operate
according to different calendars).
The multiple calendar function is critical if your enterprise has
installations in more than one geographical location (for
example, with different local or national holidays).

Resources

Tivoli Workload Scheduler for z/OS lets you serialize work
based on the status of any data processing resource. A typical
example is a job that uses a dataset as input, but must not start
until the dataset is successfully created and loaded with valid
data. You can use resource serialization support to send
availability information about a data processing resource to
Tivoli Workload Scheduler for z/OS. To accomplish this, Tivoli
Workload Scheduler for z/OS uses resources (also called special resources).
Resources are typically defined to represent physical or logical
objects used by jobs. A resource can be used to serialize
access to a dataset or to limit the number of file transfers on a
particular network link. The resource does not have to
represent a physical object in your configuration, although it
often does.
Tivoli Workload Scheduler for z/OS keeps a record of the state
of each resource and its current allocation status. You can
choose to hold resources in case a job allocating the resources
ends abnormally. You can also use the Tivoli Workload
Scheduler for z/OS interface with the Resource Object Data
Manager (RODM) to schedule jobs according to real resource
availability. You can subscribe to RODM updates in both local
and remote domains.
Tivoli Workload Scheduler for z/OS lets you subscribe to
dataset activity on z/OS systems. The dataset triggering
function of Tivoli Workload Scheduler for z/OS automatically
updates special resource availability when a dataset is closed.
You can use this notification to coordinate planned activities or
to add unplanned work to the schedule.
Periods

Tivoli Workload Scheduler for z/OS uses business processing
cycles, or periods, to calculate when your job streams should
be run; for example, weekly or every 10th working day. Periods
are based on the business cycles of your customers.
Tivoli Workload Scheduler for z/OS supports a range of
periods for processing the different job streams in your
production workload. There are several predefined periods in
Tivoli Workload Scheduler for z/OS that can be used when
defining run cycles for your job streams, such as week, month,
year, and all the Julian months (January through December).
When you define a job stream, you specify when it should be
planned using a run cycle, which can be:
A rule with a format such as:
ONLY the SECOND TUESDAY of every MONTH
EVERY FRIDAY in the user-defined period SEMESTER1

Where the words in capitals are selected from lists of ordinal
numbers, names of days, and common calendar intervals or
period names, respectively.
Or it can be a combination of period and offset. For example,
an offset of 10 in a monthly period specifies the tenth day of
each month.
Operator instructions

You can specify an operator instruction to be associated with a
job in a job stream. This could be, for example, special running
instructions for a job or detailed restart information in case a
job abends and needs to be restarted.

JCL variables

JCL variables are used to do automatic job tailoring in Tivoli
Workload Scheduler for z/OS. There are several predefined
JCL variables, such as current date, current time, planning
date, day number of week, and so on. Besides these predefined
variables, you can also define your own specific or unique
variables in Tivoli Workload Scheduler for z/OS, so your locally
defined variables can be used for automatic job tailoring as
well.

2.2.3 Tivoli Workload Scheduler for z/OS plans


Tivoli Workload Scheduler for z/OS plans your production workload schedule. It
produces both high-level (long-term plan) and detailed plans (current plan or
plan). Not only do these plans drive the production workload, but they can also
show you the status of the production workload on your system at any specified
time. You can produce trial plans to forecast future workloads. These trial plans
can, for example, be used to simulate the effects of changes to your production
workload, calendar, and installation.
Tivoli Workload Scheduler for z/OS builds the plans from your description of the
production workload, that is, the objects you have defined in the Tivoli Workload
Scheduler for z/OS databases.

The plan process


First the long-term plan is created, which shows the job streams that should be
run each day in a period, usually for one or two months. Then a more detailed
current plan is created. The current plan is used by Tivoli Workload Scheduler for
z/OS to submit and control jobs and job streams.


Long term planning


The long-term plan is a high-level schedule of your anticipated production
workload. It lists, by day, the instances of job streams to be run during the period
of the plan. Each instance of a job stream is called an occurrence. The long-term
plan shows when occurrences are to run, as well as the dependencies that exist
between the job streams. You can view these dependencies graphically on your
terminal as a network to check that work has been defined correctly. The plan
can assist you in forecasting and planning for heavy processing days. The
long-term-planning function can also produce histograms showing planned
resource use for individual workstations during the plan period.
You can use the long-term plan as the basis for documenting your service level
agreements. It lets you relate service level agreements directly to your
production workload schedules so that your customers can see when and how
their work is to be processed.
The long-term plan provides a window into the future. How far into the future is up
to you: from one day to four years. Normally the long-term plan goes two to three
months into the future. You can also produce long-term plan simulation reports
for any future date. Tivoli Workload Scheduler for z/OS can automatically extend
the long-term plan at regular intervals. You can print the long-term plan as a
report, or you can view, alter, and extend it online using the legacy ISPF dialogs.
The long term plan extension is performed by a Tivoli Workload Scheduler for
z/OS program. This program is normally run as part of the daily Tivoli Workload
Scheduler for z/OS housekeeping job stream. By running this program on
workdays and letting the program extend the long-term plan by one working day,
you ensure that the long-term plan is always up to date (see Figure 2-13 on
page 52).


Figure 2-13 The long-term plan extension process

This way the long-term plan always reflects changes made to job streams, run
cycles, and calendars, since these definitions are re-read by the program that
extends the long term plan. The long term plan extension program reads job
streams (run cycles), calendars, and periods and creates the high level long term
plan based on these objects.

Current plan
The current plan, or simply the plan, is the heart of Tivoli Workload Scheduler for
z/OS processing: In fact, it drives the production workload automatically and
provides a way to check its status. The current plan is produced by running
batch jobs that extract from the long-term plan the occurrences that fall within the
specified period of time, together with the job details. In effect, the current plan
selects a window from the long-term plan and makes the jobs ready to be
run: They are actually started subject to restrictions such as
dependencies, resource availability, and time dependencies.
Job streams and related objects are copied from the Tivoli Workload Scheduler
for z/OS databases to the current plan occurrences. Since the objects are copied
to the current plan dataset, any changes made to these objects in the plan will
not be reflected in the Tivoli Workload Scheduler for z/OS databases.
The current plan is a rolling plan that can cover several days. The extension of
the current plan is performed by a Tivoli Workload Scheduler for z/OS program.
This program is normally run on workdays as part of the daily Tivoli Workload
Scheduler for z/OS housekeeping job stream scheduled on workdays (see
Figure 2-14 on page 53).


Figure 2-14 The current plan extension process

Extending the current plan by one workday means that the plan can cover more
than one calendar day. If, for example, Saturday and Sunday are defined as free
days (in the calendar used by the run cycle for the housekeeping job stream),
then when the current plan extension program is run on Friday afternoon, the
plan will go to Monday afternoon. A common method is to cover one to two days
with regular extensions each shift.
Production workload processing activities are listed by minute in the plan. You
can either print the current plan as a report, or view, alter, and extend it online
using the legacy ISPF dialogs.
Note: Changes made to a job stream's run cycle, for example, changing a job
stream from running on Mondays to running on Tuesdays, will not immediately
be reflected in the long-term plan or the current plan. To have such changes
reflected in the long-term plan and current plan, you must first run a modify all
or extend of the long-term plan and then extend or replan the current plan. Due
to this, it is good practice to run the extend of the long-term plan by one
working day (shown in Figure 2-13 on page 52) before the extend of the current
plan as part of normal Tivoli Workload Scheduler for z/OS housekeeping.


Running job streams and jobs in the plan


Tivoli Workload Scheduler for z/OS automatically:
Starts and stops started tasks
Edits z/OS job JCL statements before submission
Submits jobs in the specified sequence to the target operating
environment, every time
Tracks each scheduled job in the plan
Determines the success or failure of the jobs
Displays status information and instructions to guide workstation operators
Provides automatic recovery of z/OS jobs when they end in error
Generates processing dates for your job stream run cycles using rules, such
as:
Every second Tuesday of the month
Only the last Saturday in June, July, and August
Every third workday in the user-defined PAYROLL period
Starts jobs with regard to real resource availability
Performs dataset cleanup in error and rerun situations for the z/OS workload
Tailors the JCL for step restarts of z/OS jobs and started tasks
Dynamically schedules additional processing in response to activities that
cannot be planned
Provides automatic notification when an updated dataset is closed, which can
be used to trigger subsequent processing
Generates alerts when abnormal situations are detected in the workload

Automatic workload submission


Tivoli Workload Scheduler for z/OS automatically drives work through the
system, taking into account work that requires manual or program-recorded
completion (program-recorded completion refers to situations where the status of
a scheduler-controlled job is set to Complete by a user-written program). It also
promotes the optimum use of resources, improves system availability, and
automates complex and repetitive operator tasks. Tivoli Workload Scheduler for
z/OS automatically controls the submission of work according to:
Dependencies between jobs
Workload priorities

Specified times for the submission of particular work
Availability of resources
By saving a copy of the JCL for each separate run, or occurrence, of a particular
job in its plans, Tivoli Workload Scheduler for z/OS prevents the unintentional
reuse of temporary JCL changes, such as overrides.

Job tailoring
Tivoli Workload Scheduler for z/OS provides automatic job-tailoring functions,
which enable jobs to be automatically edited. This can reduce your dependency
on time-consuming and error-prone manual editing of jobs. Tivoli Workload
Scheduler for z/OS job tailoring provides:
Automatic variable substitution
Dynamic inclusion and exclusion of inline job statements
Dynamic inclusion of job statements from other libraries or from an exit
For jobs to be submitted on a z/OS system, these job statements will be z/OS
JCL.
Variables can be substituted in specific columns, and you can define verification
criteria to ensure that invalid strings are not substituted. Special directives
supporting the variety of date formats used by job stream programs let you
dynamically define the required format and change them multiple times for the
same job. Arithmetic expressions can be defined to let you calculate values such
as the current date plus four work days.
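
A sketch of what such tailoring can look like in a z/OS job follows. The job and
program names are illustrative; OYMD1 is one of the supplied date variables, and
the SETVAR directive computes a temporary variable (here, the current date plus
four work days):

   //*%OPC SCAN
   //*%OPC SETVAR TVAR=(OYMD1+4WD)
   //DAILYEX  JOB (ACCT),'DAILY EXTRACT'
   //STEP1    EXEC PGM=EXTRACT
   //SYSIN    DD *
     RUNDATE=&OYMD1.
     DUEDATE=&TVAR.
   /*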

Manual control and intervention


Tivoli Workload Scheduler for z/OS lets you check the status of work and
intervene manually when priorities change or when you need to run unplanned
work. You can query the status of the production workload and then modify the
schedule if needed.

Status inquiries
With the legacy ISPF dialogs or with the Job Scheduling Console, you can make
queries online and receive timely information on the status of the production
workload.
Time information that is displayed by the dialogs can be in the local time of the
dialog user. Using the dialogs, you can request detailed or summary information
on individual job streams, jobs, and workstations, as well as summary
information concerning workload production as a whole. You can also display
dependencies graphically as a network at both job stream and job level.


Status inquiries:
Provide you with overall status information that you can use when considering
a change in workstation capacity or when arranging an extra shift or overtime
work.
Help you supervise the work flow through the installation; for instance, by
displaying the status of work at each workstation.
Help you decide whether intervention is required to speed the processing of
specific job streams. You can find out which job streams are the most critical.
You can also check the status of any job stream, as well as the planned and
actual times for each job.
Enable you to check information before making modifications to the plan. For
example, you can check the status of a job stream and its dependencies
before deleting it or changing its input arrival time or deadline. See Modifying
the current plan on page 56 for more information.
Provide you with information on the status of processing at a particular
workstation. Perhaps work that should have arrived at the workstation has not
arrived. Status inquiries can help you locate the work and find out what has
happened to it.

Modifying the current plan


Tivoli Workload Scheduler for z/OS makes status updates to the plan
automatically, using its tracking functions. However, it lets you change the plan
manually to reflect unplanned changes to the workload or to the operations
environment, which often occur during a shift. For example, you may need to
change the priority of a job stream, add unplanned work, or reroute work from
one workstation to another. Or you may need to correct operational errors
manually. Modifying the current plan may be the best way to handle these
situations.
You can modify the current plan online. For example, you can:
Include unexpected jobs or last-minute changes to the plan. Tivoli Workload
Scheduler for z/OS then automatically creates the dependencies for this
work.
Manually modify the status of jobs.
Delete occurrences of job streams.
Graphically display job dependencies before you modify them.
Modify the data in job streams, including the JCL.
Respond to error situations by:
Rerouting jobs

Rerunning jobs or occurrences
Completing jobs or occurrences
Changing jobs or occurrences
Change the status of workstations by:
Rerouting work from one workstation to another
Modifying workstation reporting attributes
Updating the availability of resources
Changing the way resources are handled
Replan or extend the current plan.
In addition to using the dialogs, you can modify the current plan from your own
job streams using the program interface or the application programming
interface. You can also trigger Tivoli Workload Scheduler for z/OS to dynamically
modify the plan using TSO commands or a batch program. This enables
unexpected work to be added automatically to the plan.
It is important to remember that the current plan contains copies of the objects
read from the TWS for z/OS databases. This means that changes made to
current plan instances will not be reflected in the corresponding database
objects.

2.2.4 Other Tivoli Workload Scheduler for z/OS features


In the following sections we investigate other Tivoli Workload Scheduler for z/OS
features.

Automatically controlling the production workload


Tivoli Workload Scheduler for z/OS automatically drives the production workload
by monitoring the flow of work and by directing the processing of jobs so that it
follows the business priorities established in the plan.
Through its interface to the NetView program or its management-by-exception
ISPF dialog, Tivoli Workload Scheduler for z/OS can alert the production control
specialist to problems in the production workload processing. Furthermore, the
NetView program can automatically trigger Tivoli Workload Scheduler for z/OS to
perform corrective actions in response to these problems.


Recovery and restart


Tivoli Workload Scheduler for z/OS provides automatic restart facilities for your
production work. You can specify the restart actions to be taken if work initiated
by Tivoli Workload Scheduler for z/OS ends in error (see Figure 2-15). You can
use these functions to predefine automatic error-recovery and restart actions for
jobs and started tasks. The scheduler's integration with the NetView for OS/390
program allows it to automatically pass alerts to NetView for OS/390 in error
situations. Use of the z/OS cross-system coupling facility (XCF) enables Tivoli
Workload Scheduler for z/OS to continue processing when system failures occur.

Figure 2-15 Tivoli Workload Scheduler for z/OS automatic recovery and restart

Recovery of jobs and started tasks


Automatic recovery actions for failed jobs are specified in user-defined control
statements. Parameters in these statements determine the recovery actions to
be taken when a job or started task ends in error.

Restart and cleanup


You can use restart and cleanup to catalog, uncatalog, or delete datasets when a
job ends in error or when you need to rerun a job. Dataset cleanup takes care of
JCL in the form of in-stream JCL, in-stream procedures, and cataloged
procedures on both local and remote systems. This function can be initiated
automatically by Tivoli Workload Scheduler for z/OS or manually by a user
through the panels. Tivoli Workload Scheduler for z/OS will reset the catalog to
the status it was in before the job ran, for both generation dataset groups
(GDGs) and for DD allocated datasets contained in JCL. In addition, restart and
cleanup supports the use of Removable Media Manager in your environment.
Restart at both the step- and job-level is also provided in the Tivoli Workload
Scheduler for z/OS legacy ISPF panels and in the JSC. It manages resolution of
generation data group (GDG) names, JCL containing nested INCLUDEs or
PROCs, and IF-THEN-ELSE statements. Tivoli Workload Scheduler for z/OS also
automatically identifies problems that can prevent a successful restart and
suggests the best restart step.
You can browse the job log or request a step-level restart for any z/OS job or
started task even when there are no catalog modifications. The job-log browse
functions are also available for the workload on other operating platforms, which
is especially useful for those environments that do not provide a System Display
and Search Facility (SDSF)-like facility.
These facilities are available to you without the need to make changes to your
current JCL. Tivoli Workload Scheduler for z/OS gives you an enterprise-wide
dataset cleanup capability on remote agent systems.

Production workload restart


Tivoli Workload Scheduler for z/OS provides a production workload restart, which
can automatically maintain the processing of your work if a system or connection
fails. Scheduler-controlled production work for the unsuccessful system is
rerouted to another system. Because Tivoli Workload Scheduler for z/OS can
restart and manage the production workload, the integrity of your processing
schedule is maintained, and service continues for your customers.
Tivoli Workload Scheduler for z/OS exploits the VTAM Model Application
Program Definition feature and the z/OS-defined symbols to ease the
configuration task in a sysplex environment, giving the user a single system
view of the sysplex.
Starting, stopping, and managing your engines and agents does not require you
to know which sysplex the z/OS image is actually running on.

z/OS Automatic Restart Manager support


All the scheduler components are enabled to be restarted by the Automatic
Restart Manager (ARM) of the z/OS operating system, in the case of program
failure.


Automatic status checking


To track the work flow, Tivoli Workload Scheduler for z/OS interfaces directly with
the operating system, collecting and analyzing status information about the
production work that is currently active in the system. Tivoli Workload Scheduler
for z/OS can record status information from both local and remote processors.
When status information is reported from remote sites in different time zones,
Tivoli Workload Scheduler for z/OS makes allowances for the time differences.

Status reporting from heterogeneous environments


The processing on other operating environments can also be tracked by Tivoli
Workload Scheduler for z/OS. You can use supplied programs to communicate
with the engine from any environment that can establish communications with a
z/OS system.

Status reporting from user programs


You can pass status information about production workload processing to Tivoli
Workload Scheduler for z/OS from your own user programs through a standard
supplied routine.

Additional job-completion checking


If required, Tivoli Workload Scheduler for z/OS provides further status checking
by scanning SYSOUT and other print datasets from your processing when the
success or failure of the processing cannot be determined by completion codes.
For example, Tivoli Workload Scheduler for z/OS can check the text of system
messages or messages originating from your user programs. Using information
contained in job completion checker (JCC) tables, Tivoli Workload Scheduler for
z/OS determines what actions to take when it finds certain text strings. These
actions can include:
Reporting errors
Re-queuing SYSOUT
Writing incident records to an incident dataset

Managing unplanned work


Tivoli Workload Scheduler for z/OS can be automatically triggered to update the
current plan with information about work that cannot be planned in advance. This
allows Tivoli Workload Scheduler for z/OS to control unexpected work. Because
Tivoli Workload Scheduler for z/OS checks the processing status of this work,
automatic recovery facilities are also available.


Interfacing with other programs


Tivoli Workload Scheduler for z/OS provides a program interface (PIF). Using
this interface, you can automate most actions that you can perform online
through the dialogs. This interface can be called from CLISTs, user programs,
and via TSO commands.
The application programming interface (API) lets your programs communicate
with Tivoli Workload Scheduler for z/OS from any compliant platform. You can
use Common Programming Interface for Communications (CPI-C), advanced
program-to-program communication (APPC), or your own logical unit (LU) 6.2
verbs to converse with Tivoli Workload Scheduler for z/OS through the API. You
can use this interface to query and update the current plan. The programs can be
running on any platform that is connected locally, or remotely through a network,
with the z/OS system where the engine runs.

Management of critical jobs


Tivoli Workload Scheduler for z/OS exploits the capability of the Workload
Manager (WLM) component of z/OS to ensure that critical jobs are completed on
time. If a critical job is late, Tivoli Workload Scheduler for z/OS favors it using the
existing Workload Manager interface.

Security
Today, data processing operations increasingly require a high level of data
security, particularly as the scope of data processing operations expands and
more people within the enterprise become involved. Tivoli Workload Scheduler
for z/OS provides complete security and data integrity within the range of its
functions. It provides a shared central service to different user departments even
when the users are in different companies and countries. Tivoli Workload
Scheduler for z/OS provides a high level of security to protect scheduler data and
resources from unauthorized access. With Tivoli Workload Scheduler for z/OS,
you can easily organize, isolate, and protect user data to safeguard the integrity
of your end-user applications (see Figure 2-16 on page 62). Tivoli Workload
Scheduler for z/OS can plan and control the work of many user groups, and
maintain complete control of access to data and services.


Figure 2-16 Tivoli Workload Scheduler for z/OS security

Audit trail
With the audit trail, you can define how you want Tivoli Workload Scheduler for
z/OS to log accesses (both reads and updates) to scheduler resources. Because
it provides a history of changes to the databases, the audit trail can be extremely
useful for staff that works with debugging and problem determination.
A sample program is provided for reading audit-trail records. The program reads
the logs for a period that you specify and produces a report detailing changes
that have been made to scheduler resources.

System Authorization Facility (SAF)


Tivoli Workload Scheduler for z/OS uses the system authorization facility, a
function of z/OS, to pass authorization verification requests to your security
system, for example RACF. This means that you can protect your scheduler data
objects with any security system that uses the SAF interface.

Protection of data and resources


Each user request to access a function or to access data is validated by SAF.
The following is some of the information that can be protected:
Calendars and periods
Job stream names or job stream owner, by name
Workstation, by name
Job stream-specific data in the plan


Operator instructions
JCL
To support distributed, multi-user handling, Tivoli Workload Scheduler for z/OS
lets you control the level of security you want to implement, right down to the
level of individual records. You can define generic or specific RACF resource
names to extend the level of security checking.
If you have RACF Version 2 Release 1 installed, you can use the Tivoli Workload
Scheduler for z/OS reserved resource class (IBMOPC) to manage your Tivoli
Workload Scheduler for z/OS security environment. This means that you do not
have to define your own resource class by modifying RACF and restarting your
system.
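To illustrate (the resource name AD.PAYROLL and the user group SCHEDOPS below are invented; the actual fixed resources and subresources that you can protect are listed in the Tivoli Workload Scheduler for z/OS customization documentation), profiles in the IBMOPC class could be defined with standard RACF commands along these lines:

SETROPTS CLASSACT(IBMOPC)
RDEFINE IBMOPC AD.PAYROLL UACC(NONE)
PERMIT AD.PAYROLL CLASS(IBMOPC) ID(SCHEDOPS) ACCESS(UPDATE)

Whether UPDATE or READ access is appropriate depends on the function being protected.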

Data integrity during submission


Tivoli Workload Scheduler for z/OS can ensure the correct security environment
for each job it submits, regardless of whether the job is run on a local or a remote
system. Tivoli Workload Scheduler for z/OS lets you create tailored security
profiles for individual jobs or groups of jobs.

2.3 End-to-end scheduling architecture


In the two previous sections, Section 2.1, Tivoli Workload Scheduler architecture on page 17, and Section 2.2, Tivoli Workload Scheduler for z/OS architecture on page 39, we discussed the architectures of Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS. In this section we bring the two products together and describe the Tivoli Workload Scheduler for z/OS end-to-end architecture.

2.3.1 How end-to-end scheduling works


End-to-end scheduling is based on the ability to directly connect a Tivoli
Workload Scheduler domain manager, and its underlying agents and domains, to
the Tivoli Workload Scheduler for z/OS engine. The engine is seen by the
distributed network as the master domain manager.
Tivoli Workload Scheduler for z/OS also creates the plan for the distributed
network and sends it to the domain manager. The domain manager sends a copy
of the plan to each of its linked agents and subordinate domain managers for
execution.


The Tivoli Workload Scheduler domain manager acts as the broker for the
distributed network by resolving all dependencies for the subordinate managers
and agents. It sends its updates (in the form of events) to Tivoli Workload
Scheduler for z/OS so that it can update the plan accordingly. Tivoli Workload
Scheduler for z/OS handles its own jobs and notifies the domain manager of all
the status changes of the Tivoli Workload Scheduler for z/OS jobs that involve
the Tivoli Workload Scheduler plan. In this configuration, the domain manager
and all the distributed agents recognize Tivoli Workload Scheduler for z/OS as
the master domain manager and notify it of all the changes occurring in their own
plans. At the same time, the agents are not permitted to interfere with the Tivoli Workload Scheduler for z/OS jobs, since those jobs are viewed as running on the master, which is the only node in charge of them.
With this version of Tivoli Workload Scheduler for z/OS, the fault tolerant agents replace the Tivoli OPC tracker agents, making scheduling possible on the distributed platforms with more reliable, fault tolerant, and scalable agents.
In Figure 2-17 on page 65 you can see a Tivoli Workload Scheduler network
managed by a Tivoli Workload Scheduler for z/OS engine. This is accomplished
by connecting a Tivoli Workload Scheduler domain manager directly to the Tivoli
Workload Scheduler for z/OS engine. Actually if you compare Figure 2-2 on
page 19 with Figure 2-17 on page 65, you will see that the Tivoli Workload
Scheduler network that is connected to Tivoli Workload Scheduler for z/OS is a
Tivoli Workload Scheduler network managed by a Tivoli Workload Scheduler
master domain manager. When connecting this Tivoli Workload Scheduler
network to the Tivoli Workload Scheduler for z/OS engine, the former Tivoli
Workload Scheduler master domain manager is changed to domain manager for
DomainZ (Z was chosen because this domain manager is intermediary between
the Tivoli Workload Scheduler distributed network and the Tivoli Workload
Scheduler for z/OS engine). The new master domain manager is the Tivoli
Workload Scheduler for z/OS engine.


[Figure 2-17 shows the end-to-end topology: the MASTERDM domain on z/OS contains the TWS for z/OS engine and server, acting as master domain manager; below it, DomainZ contains domain manager DMZ on AIX; subordinate domains DomainA and DomainB contain domain managers DMA and DMB and fault tolerant agents FTA1 through FTA4 on AIX, HPUX, OS/400, Windows 2000, and Solaris.]

Figure 2-17 Tivoli Workload Scheduler for z/OS end-to-end scheduling

Tivoli Workload Scheduler for z/OS also allows you to access job streams
(schedules in Tivoli Workload Scheduler) and add them to the current plan in
Tivoli Workload Scheduler for z/OS. In addition, you can build dependencies
among Tivoli Workload Scheduler for z/OS job streams and Tivoli Workload
Scheduler jobs. From Tivoli Workload Scheduler for z/OS, you can monitor and
control the distributed agents.
In the Tivoli Workload Scheduler for z/OS current plan, you can specify jobs to
run on workstations in the Tivoli Workload Scheduler network. The TWS for z/OS
engine passes the job information to the Symphony file in the TWS for z/OS
server, which in turn passes the Symphony file to the TWS domain manager
(DMZ) to distribute and process. In turn, TWS reports the status of running and
completed jobs back to the current plan for monitoring in the Tivoli Workload
Scheduler for z/OS engine.


2.3.2 Tivoli Workload Scheduler for z/OS end-to-end components


To run Tivoli Workload Scheduler for z/OS end-to-end scheduling, you must have a
Tivoli Workload Scheduler for z/OS server started task dedicated to end-to-end
scheduling. It is also possible to use the same server to communicate with the
Job Scheduling Console. Tivoli Workload Scheduler for z/OS uses TCP/IP
for this communication.
The Tivoli Workload Scheduler for z/OS engine uses the end-to-end server to
communicate events to the distributed agents. The end-to-end server will start
multiple tasks and processes using the z/OS UNIX System Services (USS).
The Tivoli Workload Scheduler for z/OS end-to-end scheduling engine is
comprised of three major components:
The Tivoli Workload Scheduler for z/OS controller: Manages database
objects, creates plans with the workload, and executes and monitors the
workload in the plan.
The Tivoli Workload Scheduler for z/OS server: Acts as the Tivoli Workload
Scheduler master domain manager. It receives a part of the current plan from
the Tivoli Workload Scheduler for z/OS engine, which contains the jobs and job
streams to be executed in the Tivoli Workload Scheduler network. The server
is the focal point for all communication to and from the Tivoli Workload
Scheduler network.
The Tivoli Workload Scheduler primary domain manager: Serves as the
communication hub between the Tivoli Workload Scheduler for z/OS server
and the distributed Tivoli Workload Scheduler network. The domain manager
is connected directly to the Tivoli Workload Scheduler for z/OS master
domain manager in USS.

In Tivoli Workload Scheduler for z/OS 8.1.0, it is only possible to connect one
focal point domain manager directly to the Tivoli Workload Scheduler for z/OS
server (this domain manager is also called the primary domain manager).
It is possible to designate a backup domain manager for the focal point Tivoli
Workload Scheduler domain manager.

Detailed description of the communication


The communication between the Tivoli Workload Scheduler for z/OS controller
and the Tivoli Workload Scheduler for z/OS server is shown in Figure 2-18 on
page 67.


[Figure 2-18 shows the inter-process communication: in the TWS for z/OS controller, the GS, NMM, EM, and WA tasks exchange events with the end-to-end task (sender and receiver subtasks) through the outbound and inbound queues; in the TWS for z/OS server in USS, the output translator, input translator, job log retrievers (spawned as necessary), and the netman, writer, mailman, and batchman processes communicate through the NetReq.msg, Mailbox.msg, Intercom.msg, and tomaster.msg message files, and with the remote mailman and writer of the connected domain manager over TCP/IP.]

Figure 2-18 Tivoli Workload Scheduler for z/OS inter-process communication

Tivoli Workload Scheduler for z/OS server processes and tasks


The Tivoli Workload Scheduler for z/OS server uses the following processes and
tasks for end-to-end scheduling (see Figure 2-18):
netman: Replicates the Tivoli Workload Scheduler netman process. It starts at system startup and monitors the NetReq.msg queue and the Tivoli Workload Scheduler TCP/IP port (default 31111). When it receives a request, it starts the writer or mailman process. The request to start or stop mailman comes from the output translator via the NetReq.msg queue; the request to start or stop writer comes from mailman on the Tivoli Workload Scheduler domain manager via the TCP/IP port.

writer: Replicates the Tivoli Workload Scheduler writer process. It is started by netman on request from mailman of the connected Tivoli Workload Scheduler domain manager. Writer writes the events that it receives from the remote mailman into Mailbox.msg.


mailman: Replicates the Tivoli Workload Scheduler mailman process. Its main tasks are:
Routing events. It reads the events stored in the Mailbox.msg queue and sends them either to the controller, writing them in tomaster.msg, or to the remote writer on the Tivoli Workload Scheduler domain manager.
Establishing the connection with the domain manager by calling the remote netman to start writer.
Sending the Symphony file to the directly connected Tivoli Workload Scheduler primary domain manager when a new Symphony file is created.

batchman: Updates the Symphony file and resolves dependencies at the master level. It replicates the functionality of the Tivoli Workload Scheduler batchman process to a limited extent.

job log retriever: Receives from each distributed agent the log of a job executed on that agent. After the job log retriever has received the log, it sizes the log according to the Tivoli Workload Scheduler for z/OS specifications, translates it from UTF-8 to the Tivoli Workload Scheduler for z/OS EBCDIC codepage, and queues it in the inbound queue of the controller. The retrieval of a job log is a lengthy operation, and users may request several logs at the same time. For this reason, a subtask is started for each job log retrieval. The subtasks are temporary and terminate after the logs are queued in the inbound queue. If you are using the Tivoli Workload Scheduler for z/OS panel interface, a message notifies you when the job log is received.

Events from server to controller


Input translator: Translates the events read from the tomaster.msg file into the Tivoli Workload Scheduler for z/OS format (including UTF-8 to EBCDIC translation) and writes them to the inbound queue.

Receiver subtask: A subtask of the end-to-end task that runs in the Tivoli Workload Scheduler for z/OS controller. It receives events from the inbound queue and queues them to the Event Manager (EM) task. The events have already been filtered and elaborated by the input translator.


Events from controller to server


Sender subtask: A subtask of the end-to-end task in the Tivoli Workload Scheduler for z/OS controller. It receives events for changes in the engine plan that relate to Tivoli Workload Scheduler agents. The Tivoli Workload Scheduler for z/OS tasks that can change the current plan are: General Service (GS), Normal Mode Manager (NMM), Event Manager (EM), and Workstation Analyzer (WA). The NMM sends events to the sender task when the plan is extended or replanned, for synchronization purposes.

Output translator: Receives the events in Tivoli Workload Scheduler for z/OS format from the outbound queue and elaborates them to activate the correct Tivoli Workload Scheduler function. It also translates event names from the Tivoli Workload Scheduler for z/OS EBCDIC codepage to UTF-8. The output translator interacts with three different components, depending on the type of the event:
It starts a job log retriever subtask if the event is to retrieve the log of a job from a Tivoli Workload Scheduler agent.
It queues an event in NetReq.msg if the event is to start or stop mailman.
It queues events in Mailbox.msg for the other events that are sent to update the Symphony file on the Tivoli Workload Scheduler agents (for example, events for a job that has changed status, events for manual changes to jobs or workstations, or events to link or unlink workstations).

TWS for z/OS datasets and files used for end-to-end


The Tivoli Workload Scheduler for z/OS server and controller use the following datasets and files for end-to-end scheduling (an illustrative sketch of the corresponding DD statements follows this list):

EQQTWSIN: Sequential dataset used to queue events sent by the server to the controller (the inbound queue). It must be defined in the Tivoli Workload Scheduler for z/OS controller and server started task procedures.

EQQTWSOU: Sequential dataset used to queue events sent by the controller to the server (the outbound queue).


Symphony: HFS file containing the active copy of the plan used by the distributed Tivoli Workload Scheduler agents. This file is not shown in Figure 2-18 on page 67.

Sinfonia: HFS file containing the distribution copy of the plan used by the distributed Tivoli Workload Scheduler agents. This file is not shown in Figure 2-18 on page 67.

NetReq.msg: HFS file used to queue requests for the netman process.

Mailbox.msg: HFS file used to queue events sent to the mailman process.

Intercom.msg: HFS file used to queue events sent to the batchman process.

tomaster.msg: HFS file used to queue events sent to the input translator process.

EQQSCLIB: Partitioned dataset used as a repository for the definitions of the jobs running on distributed agents. This dataset is not shown in Figure 2-18 on page 67. The EQQSCLIB dataset is described in Section 2.3.4, Tivoli Workload Scheduler for z/OS end-to-end database objects on page 71.

EQQSCPDS: VSAM dataset containing a copy of the current plan used by the daily plan batch programs to create the Symphony file. This dataset is not shown in Figure 2-18 on page 67. The end-to-end plan creation process is described in Section 2.3.5, Tivoli Workload Scheduler for z/OS end-to-end plans on page 73.
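As an illustrative sketch (the dataset names are invented; use your installation's naming conventions and the allocation attributes documented in the installation guide), the two event queue datasets might appear in the controller and server started task procedures as:

//EQQTWSIN DD DISP=SHR,DSN=TWS.INST.TWSIN
//EQQTWSOU DD DISP=SHR,DSN=TWS.INST.TWSOU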

2.3.3 Tivoli Workload Scheduler for z/OS end-to-end configuration


The topology of the distributed Tivoli Workload Scheduler network that is
connected to the Tivoli Workload Scheduler for z/OS engine is described in
parameter statements for the Tivoli Workload Scheduler for z/OS server and
for the Tivoli Workload Scheduler for z/OS programs that handle the long-term
plan and the current plan.
Parameter statements are also used to activate the end-to-end subtasks in the
Tivoli Workload Scheduler for z/OS controller.


The parameter statements used to describe the topology are described in great detail in Section 3.2.6, Define end-to-end initialization statements on page 110. There you will also find an example of how to reflect a specific TWS network topology in the TWS for z/OS server and plan programs using the TWS for z/OS topology parameter statements.
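To give a first impression of how these statements fit together, the following is a minimal, illustrative sketch for a network like the one in Figure 2-17. All host names, directories, user IDs, and passwords are invented, only a few of the available keywords are shown, and the exact syntax should be taken from Section 3.2.6:

TOPOLOGY TPLGYMEM(TPLGINFO)            /* member holding DOMREC/CPUREC */
         USRMEM(USRINFO)               /* member holding USRREC        */
         BINDIR('/usr/lpp/TWS')        /* TWS code in USS              */
         WRKDIR('/var/TWS/inst')       /* work directory in USS        */
         HOSTNAME(twsc.example.com)    /* host of the end-to-end server*/
         PORTNUMBER(31111)             /* netman TCP/IP port           */

DOMREC   DOMAIN(DOMAINZ) DOMMGR(FDMZ) DOMPARENT(MASTERDM)

CPUREC   CPUNAME(FDMZ)                 /* fault tolerant workstation   */
         CPUNODE(dmz.example.com)
         CPUDOMAIN(DOMAINZ)
         CPUTYPE(FTA)
         CPUOS(AIX)
         CPUAUTOLNK(ON)

USRREC   USRCPU(FTA3)                  /* a Windows 2000 workstation   */
         USRNAM(twsuser)
         USRPSW(password)

Similar DOMREC and CPUREC entries would be added for DomainA, DomainB, and their domain managers and agents.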

2.3.4 Tivoli Workload Scheduler for z/OS end-to-end database objects
To be able to run any workload in the Tivoli Workload Scheduler distributed
network, you must define some database objects related to the Tivoli Workload
Scheduler workload in Tivoli Workload Scheduler for z/OS engine databases.
The Tivoli Workload Scheduler for z/OS end-to-end related database objects are:
Tivoli Workload Scheduler for z/OS fault tolerant workstations: A fault tolerant workstation is a computer workstation configured to schedule jobs on distributed agents. The workstation must also be defined in the server CPUREC initialization statement (see Figure 2-19 on page 73).
Tivoli Workload Scheduler for z/OS job streams, jobs, and dependencies: Job streams and jobs to run on Tivoli Workload Scheduler distributed agents are defined like other job streams and jobs in Tivoli Workload Scheduler for z/OS. To run a job on a Tivoli Workload Scheduler distributed agent, the job is simply defined on a fault tolerant workstation. Dependencies between Tivoli Workload Scheduler distributed jobs are created exactly the same way as other job dependencies in the Tivoli Workload Scheduler for z/OS engine. This is also the case when creating dependencies between Tivoli Workload Scheduler distributed jobs and Tivoli Workload Scheduler for z/OS mainframe jobs. Some of the Tivoli Workload Scheduler for z/OS mainframe-specific options will not be available for Tivoli Workload Scheduler distributed jobs.
Tivoli Workload Scheduler for z/OS resources: Only global resources are supported and can be used for Tivoli Workload Scheduler distributed jobs. This means that the resource dependency is resolved by the TWS for z/OS engine (controller) and not locally on the distributed agent. For a job running on a distributed agent, the usage of resources causes the loss of fault tolerance. Only the engine determines the availability of a resource and consequently lets the distributed agent start the job. Thus, if a job running on a distributed agent uses a resource, the following occurs:
When the resource is available, the engine sets the state of the job to started and the extended status to waiting for submission.

The engine sends a release-dependency event to the distributed agent.
The distributed agent starts the job.
If the connection between the engine and the distributed agent breaks, the operation does not start on the distributed agent even if the resource becomes available.
Note: When you monitor a job on a fault tolerant agent by means of the Tivoli Workload Scheduler interfaces, you will not be able to see the resource used by the job. Instead you see the job OPCMASTER#GLOBAL.SPECIAL_RESOURCES. The dependency on OPCMASTER#GLOBAL.SPECIAL_RESOURCES is set by the engine. When monitoring by means of the Tivoli Workload Scheduler for z/OS interfaces, you can see the resources as expected.
Every job that has special resource dependencies has a dependency on this job. When the engine allocates the resource for the job, the dependency is released (the engine sends a release event for the specific resource to the agent through the distributed network).
The task associated with the distributed agent job, defined in Tivoli Workload Scheduler for z/OS: A special partitioned dataset, EQQSCLIB, allocated in the Tivoli Workload Scheduler for z/OS engine started task procedure, is used to store the job or task definitions for distributed agent jobs. For every distributed agent job definition in Tivoli Workload Scheduler for z/OS there must be a corresponding member in the EQQSCLIB dataset. The members of EQQSCLIB contain a JOBREC statement that describes the path to the job or the command to be executed, and the user under which the job or command is executed.
Example for a UNIX script:
JOBREC JOBSCR(/Tivoli/tws/scripts/script001_accounting) JOBUSR(userid01)
Example for a UNIX command:
JOBREC JOBCMD(ls) JOBUSR(userid01)
It is not possible to use Tivoli Workload Scheduler for z/OS JCL variables or automatic recovery statements in the task definition for distributed agent jobs, because the task definition is placed in a separate library and contains only the location of the script (the path), not the actual script (JCL).
The JOBREC definitions are read by the Tivoli Workload Scheduler for z/OS plan programs when the new current plan is produced, and are placed as part of the job definition in the Symphony file.


If a TWS distributed job stream is added to the plan in TWS for z/OS, the JOBREC definition is read by TWS for z/OS, copied to the Symphony file on the TWS for z/OS server, and sent (as events) by the server to the TWS agent Symphony files via the directly connected TWS domain manager.
It is important to remember that the EQQSCLIB members contain only a pointer (the path) to the job that is going to be executed. The actual job (the JCL) is placed locally on the distributed agent or workstation, in the directory defined by the JOBREC JOBSCR definition.
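As a further illustration (the path and user name are hypothetical, and the quoting of the Windows path is shown only as an example), an EQQSCLIB member for a job on a Windows fault tolerant workstation could look like this:

JOBREC JOBSCR('C:\TWS\scripts\backup.cmd') JOBUSR(twsuser)

For jobs on Windows workstations, the user named in JOBUSR must also be defined, with its password, in a USRREC statement (see Section 2.3.3).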

Figure 2-19 Link between workstation names

2.3.5 Tivoli Workload Scheduler for z/OS end-to-end plans


When scheduling jobs in the Tivoli Workload Scheduler environment, current
plan processing also includes the automatic generation of the Symphony file that
goes to the Tivoli Workload Scheduler for z/OS server and Tivoli Workload
Scheduler subordinate domain managers as well as fault tolerant agents.


If the end-to-end feature is activated in Tivoli Workload Scheduler for z/OS, the current plan program will read the topology definitions described in the TOPOLOGY, DOMREC, CPUREC, and USRREC initialization statements (see Section 2.3.3, Tivoli Workload Scheduler for z/OS end-to-end configuration on page 70) and the script library (EQQSCLIB) as part of the planning process. Information from the initialization statements and the script library will be used to create a Symphony file for the Tivoli Workload Scheduler distributed agents (see Figure 2-20).
The Tivoli Workload Scheduler for z/OS current plan program is normally run on workdays in the engine, as described in Section 2.2.3, Tivoli Workload Scheduler for z/OS plans on page 50.

[Figure 2-20 shows the current plan extension and replan process: it reads the databases (workstations, job streams, resources), the old current plan, the script library, and the topology definitions; it (1) removes completed job streams, (2) adds detail for the next day, and (3) extracts the TWS plan from the current plan, adding the topology (domains, workstations) and the task definitions (path and user) for distributed TWS jobs; the output is the new current plan and the new Symphony file.]

Figure 2-20 Creation of Symphony file in TWS for z/OS plan programs

Note that creating the plan, extracting plan objects related to the distributed
agents, and building the related Symphony file, do not involve Jnextday or any of
the Jnextday processes described in Section 2.1.6, Tivoli Workload Scheduler
plan on page 29. The process is handled by Tivoli Workload Scheduler for z/OS
planning programs.

Detailed description of the Symphony creation


See Figure 2-18 on page 67 for a description of the tasks and processes involved
in the Symphony creation.


1. The Tivoli Workload Scheduler for z/OS Normal Mode Manager (NMM) sends an event to the output translator to stop the Tivoli Workload Scheduler network. In the meantime, the plan program has started producing the Tivoli Workload Scheduler for z/OS plan and the Symphony file with workstation information.
2. The output translator stops the Tivoli Workload Scheduler for z/OS server in USS and stops processing incoming events. An End Sync event is added to the inbound queue. The output translator starts to stop all the Tivoli Workload Scheduler agents.
3. The Event Manager (EM) processes all the events on the inbound queue until the Sync Stop event is found, then notifies NMM that the Tivoli Workload Scheduler network has been stopped.
4. When the plan program has produced the new plan, NMM waits until EM has finished processing events. The NMM then applies the job tracking events received while the new plan was being produced. It then takes a backup of the new current plan on the Tivoli Workload Scheduler for z/OS current plan dataset (CP1, CP2) and the Symphony Current Plan (SCP) dataset. NMM sends a CP Ready Sync event to the output translator to separate events from the old plan and events from the new plan.
5. The Tivoli Workload Scheduler for z/OS mainframe schedule is resumed.
6. The plan program starts producing the Symphony file, starting from SCP.
7. When the Symphony file has been created, the plan program ends and NMM notifies the output translator that the new Symphony file is ready.
8. The output translator copies the new Symphony file (the Symnew file) into the Symphony and Sinfonia files. A Symphony OK (or NOT OK) Sync event is sent to the Tivoli Workload Scheduler for z/OS engine, which logs a message in the engine message log indicating whether the Symphony file has been switched.
9. The Tivoli Workload Scheduler for z/OS server master is started in USS and the input translator starts to process new events. As in Tivoli Workload Scheduler distributed, mailman and batchman process the events that were left in the local event files and start distributing the new Symphony file to the whole Tivoli Workload Scheduler network.

When the Symphony file is created by the Tivoli Workload Scheduler for z/OS plan programs, it (or more precisely, the Sinfonia file) is distributed to the directly connected Tivoli Workload Scheduler domain manager, which in turn distributes the Symphony (Sinfonia) file to its subordinate domain managers and fault tolerant agents (see Figure 2-21 on page 76).


[Figure 2-21 shows the distribution flow: in the MASTERDM domain on z/OS, the TWS plan is extracted from the TWS for z/OS plan on the master domain manager; the TWS plan is sent to domain manager DMZ (AIX) in DomainZ, and from there it is distributed to the subordinate domain managers DMA and DMB in DomainA and DomainB and to the fault tolerant agents FTA1 through FTA4 on AIX, HPUX, OS/400, Windows 2000, and Solaris.]

Figure 2-21 Symphony file distribution from TWS for z/OS server to TWS agents

The Symphony file is generated:


Every time the Tivoli Workload Scheduler for z/OS plan is extended or
replanned
When a Symphony renew batch job is submitted (from Tivoli Workload
Scheduler for z/OS legacy ISPF panels, option 3.5)

The Symphony file contains:


Jobs to be executed on Tivoli Workload Scheduler distributed agents
z/OS (mainframe) jobs that are predecessors to Tivoli Workload Scheduler
distributed jobs
Job streams that have at least one job in the Symphony file

After the Symphony file is created and distributed to the Tivoli Workload
Scheduler distributed agents, the Symphony file is updated by events:
When job status changes
When jobs or job streams are modified


When jobs or job streams for the Tivoli Workload Scheduler distributed
agents are added to the plan in the Tivoli Workload Scheduler for z/OS
engine.

If you look at the Symphony file locally on a Tivoli Workload Scheduler distributed agent, from the Job Scheduling Console, or using the Tivoli Workload Scheduler command line interface to the plan (conman), you will see that:
The Tivoli Workload Scheduler workstation has the same name as the related fault tolerant workstation defined in Tivoli Workload Scheduler for z/OS. OPCMASTER is the hard-coded name for the master domain manager workstation, which represents the Tivoli Workload Scheduler for z/OS engine.
The name of the job stream (or schedule) is the hexadecimal representation
of the occurrence (job stream instance) token (internal unique and invariant
identifier for occurrences). The job streams are always defined on the
OPCMASTER workstation (having no dependencies, this does not reduce
fault tolerance).
Using this hexadecimal representation for the job stream instances makes it
possible to have several instances for the same job stream, since they have
unique job stream names. Therefore it is possible to have a plan in the Tivoli
Workload Scheduler for z/OS engine and a distributed Symphony file that
spans more than 24 hours.
The job name has the form <T>_<opnum>_<applname> or <T>_<opnum>_<reuse>_<applname>, where:
<T> is J for normal jobs or P for jobs that represent pending predecessors.
<opnum> is the operation number for the job in the job stream.
<reuse> is incremented when the same operation is recreated; if it is 0, it is omitted.
<applname> is the occurrence (job stream) name.
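For example (all names invented), operation number 020 in job stream PAYDAILY would typically appear in the Symphony file as job J_020_PAYDAILY, belonging to a job stream instance on OPCMASTER whose name is a hexadecimal token such as 0AF3B2E6C1D40001. As an illustrative sketch, you could list such jobs from the local Symphony file of a fault tolerant agent with the conman command line, for instance:

conman "sj FTA1#@.@"

where FTA1 is an invented fault tolerant workstation name and the wildcards select all job streams and jobs on that workstation.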
In normal situations the Symphony file is automatically generated as part of the Tivoli Workload Scheduler for z/OS plan process. Because the topology definitions are read and built into the Symphony file by the Tivoli Workload Scheduler for z/OS plan programs, situations can occur during regular operation where you need to renew (or rebuild) the Symphony file from the Tivoli Workload Scheduler for z/OS plan:
When you make changes to the script library or to the definitions of the TOPOLOGY statement
When you add or change information in the plan, such as workstation definitions


To have the Symphony file rebuilt or renewed, you can use the Symphony
Renew option of the Daily Planning menu (option 3.5 in the legacy TWS for z/OS
ISPF panels).
This renew function can also be used to recover from error situations like:
There is a non-valid job definition in the script library.
The workstation definitions are incorrect.
An incorrect Windows NT user name or password is specified.

In Common errors for jobs on fault tolerant workstations on page 196, we describe how you can correct several of these error situations without redistributing the Symphony file. It is worthwhile to become familiar with these alternatives before you start redistributing a Symphony file in a heavily loaded production environment.

2.3.6 How to make the TWS for z/OS end-to-end fail-safe


Basically three error situations can occur when running a Tivoli Workload
Scheduler for z/OS end-to-end network:
1. The Tivoli Workload Scheduler for z/OS engine can fail due to a system or
task outage.
2. The Tivoli Workload Scheduler for z/OS server can fail due to a system or
task outage.
3. The primary (focal point) domain manager directly connected to the Tivoli
Workload Scheduler for z/OS server can fail due to a system or task outage.
To avoid an outage of the end-to-end workload managed in the Tivoli Workload
Scheduler for z/OS engine and server and in the Tivoli Workload Scheduler
domain manager, you should consider:
Using a standby engine for the Tivoli Workload Scheduler for z/OS engine
Making sure that your Tivoli Workload Scheduler for z/OS server can be
reached if the Tivoli Workload Scheduler for z/OS engine is moved to one of
its standby engines (TCP/IP configuration in your enterprise)
Using a standby domain manager for the primary Tivoli Workload Scheduler
domain manager

An example of a fail-safe end-to-end network with a Tivoli Workload Scheduler for z/OS standby engine and a Tivoli Workload Scheduler backup domain manager is shown in Figure 2-22 on page 79.


[Figure 2-22 shows a fail-safe configuration: in the MASTERDM domain, a z/OS sysplex hosts the active TWS for z/OS engine and server plus two standby engines; DomainZ contains domain manager DMZ on AIX and a backup domain manager (an FTA) on AIX; the subordinate domains DomainA and DomainB contain domain managers DMA and DMB and fault tolerant agents FTA1 through FTA4 on AIX, HPUX, OS/400, Windows 2000, and Solaris.]

Figure 2-22 Fail-safe configuration with standby engine and TWS backup DM

If the primary domain manager fails it will be possible to switch to the backup
domain manager. Since the backup domain manager has an updated Symphony
file and knows the subordinate domain managers and fault tolerant agents, the
backup domain manager will be able to take over the responsibilities of the
domain manager. This switch can be performed without any outages in the
workload management.
If the switch to the backup domain manager is going to be active across the Tivoli
Workload Scheduler for z/OS plan extension, you must change the topology
definitions in the Tivoli Workload Scheduler for z/OS DOMREC initialization
statements. The backup domain manager fault tolerant workstation is going to be
the focal point domain manager for the Tivoli Workload Scheduler distributed
network.


Example 2-1 shows how to change the name of the fault tolerant workstation in
the DOMREC initialization statement, if the switch to the backup domain
manager should be effective across Tivoli Workload Scheduler for z/OS plan
extension (See Section 4.6.5, Switch to TWS backup domain manager on
page 236 for more information).
Example 2-1 DOMREC initialization statement
DOMREC DOMAIN(DOMAINZ) DOMMGR(FDMZ) DOMPARENT(MASTERDM)
Should be changed to:
DOMREC DOMAIN(DOMAINZ) DOMMGR(FDMB) DOMPARENT(MASTERDM)
Where FDMB is the name of the fault tolerant workstation where the backup
domain manager is running.

If the Tivoli Workload Scheduler for z/OS engine or server fails it will be possible
to let one of the standby engines in the same sysplex take over. This takeover
can be accomplished without any outages in the workload management.
The Tivoli Workload Scheduler for z/OS server must follow the Tivoli Workload
Scheduler for z/OS engine. That is, if the Tivoli Workload Scheduler for z/OS
engine is moved to another system in the sysplex, the Tivoli Workload Scheduler
for z/OS server must be moved to the same system in the sysplex.

2.3.7 Tivoli Workload Scheduler for z/OS end-to-end benefits


The benefits that can be gained from using the Tivoli Workload Scheduler for
z/OS end-to-end scheduling are the following:
Connecting Tivoli Workload Scheduler fault tolerant agents to Tivoli Workload
Scheduler for z/OS.
Scheduling on additional operating systems.
Synchronizing work in mainframe and distributed environments.
The ability for Tivoli Workload Scheduler for z/OS to use multi-tier architecture
with Tivoli Workload Scheduler domain managers.
Extended planning capabilities, such as the use of long-term plans, trial
plans, and extended plans, also for the distributed Tivoli Workload Scheduler
network.

Extended plans also mean that the current plan can span more than 24 hours.
Powerful run-cycle and calendar functions.

80

End-to-End Scheduling with Tivoli Workload Scheduler 8.1

Improved usage of resources (keep resource if job ends in error).


Extended usage of hostname or DNS names instead of dotted IP addresses.

Considerations
Implementing the Tivoli Workload Scheduler for z/OS end-to-end also implies
some limitations:
Usage of JCL variables is not possible in end-to-end scripts (we show a workaround in Section 4.4.6, TWS for z/OS JCL variables in connection with TWS parameters on page 211).
It is not possible to define Tivoli Workload Scheduler for z/OS auto-recovery for jobs on Tivoli Workload Scheduler for z/OS fault tolerant workstations.
Windows users' passwords are defined directly (without any encryption) in the Tivoli Workload Scheduler for z/OS server initialization parameters.
Some Tivoli Workload Scheduler functions are not available on Tivoli
Workload Scheduler distributed agents. For example:

Tivoli Workload Scheduler prompts


Tivoli Workload Scheduler file dependencies
Dependencies on job stream level
Repeat job every 10 minutes (repeat range on job level)
Some of these limitations have been addressed in the next release of Tivoli
Workload Scheduler for z/OS end-to-end solution.

2.4 Job Scheduling Console and related components


The Job Scheduling Console can be run on almost any platform. Using the JSC,
an operator can access both TWS and TWS for z/OS scheduling engines. In
order to communicate with the scheduling engines, the Job Scheduling Console
requires several additional components to be installed:
The Tivoli Management Framework
The Job Scheduling Services (JSS)
The TWS connector, TWS for z/OS connector, or both

The Job Scheduling Services and the connectors must be installed on top of the
Tivoli Management Framework (TMF). Together, the Tivoli Management
Framework, the Job Scheduling Services, and the connector provide the
interface between JSC and the scheduling engine.


The Job Scheduling Console is installed locally on your computer.

2.4.1 A brief introduction to the Tivoli Management Framework


The Tivoli Management Framework (TMF or the Framework) provides the
foundation on which the Job Scheduling Services and connectors are installed.
The Framework also performs access verification when a Job Scheduling
Console user logs in. The Tivoli Management Environment (TME) uses the
concept of Tivoli Management Regions (TMRs). There is a single server for each
TMR called the TMR server. One can think of this server as being analogous to
the TWS master server. The TMR Server contains the Tivoli object repository (a
database used by the TMR). Managed nodes (MNs) are semi-independent
agents installed on other nodes in the network. Managed nodes are roughly
analogous to TWS fault tolerant agents. For more information about the Tivoli
Management Framework, see the Tivoli Framework 3.7.1 User's Guide,
GC31-8433.

2.4.2 Job Scheduling Services (JSS)


The Job Scheduling Services component provides a unified interface in the Tivoli
Management Framework to different job scheduling engines. JSS does not do
anything on its own; it requires additional components called connectors in order
to connect to job scheduling engines. The Job Scheduling Services must be
installed on either the TMR server or a managed node.

2.4.3 Connectors
Connectors are the components that allow the Job Scheduling Services to talk
with different types of scheduling engines. When working with a particular type of
scheduling engine, the Job Scheduling Console communicates with the
scheduling engine via the Job Scheduling Services and the connector. A different
connector is required for each type of scheduling engine. A connector can only
be installed on a computer where the Tivoli Management Framework and Job
Scheduling Services have already been installed.
There are two types of connector for connecting to the two types of scheduling
engines in the Tivoli Workload Scheduler 8.1 suite:
Tivoli Workload Scheduler for z/OS connector
Tivoli Workload Scheduler connector


The JSS communicates with the engine via the connector of the appropriate
type. When working with a TWS for z/OS engine, the JSC communicates via the
TWS for z/OS connector. When working with a TWS engine, the JSC
communicates via the TWS connector.
The two types of connectors function somewhat differently: The TWS for z/OS
Connector communicates over TCP/IP with the TWS for z/OS engine running on
a mainframe (MVS or z/OS) computer. The TWS connector performs direct
reads and writes of the TWS plan and database files on the same computer as
where the TWS connector runs.
A connector instance must be created before the connector can be used. Each
type of connector can have multiple instances. A separate instance is required
for each engine that will be controlled by JSC.
We will now discuss each type of connector in more detail.

Tivoli Workload Scheduler for z/OS connector


The TWS for z/OS connector can be instantiated on any TMR server or managed
node. The TWS for z/OS connector instance communicates via TCP with the
Tivoli Workload Scheduler for z/OS TCP/IP server. You might, for example, have
two different TWS for z/OS engines that both need to be accessible from the Job
Scheduling Console. In this case, you would install one connector for working
with one TWS for z/OS engine, and another connector for communicating with
the other TWS for z/OS engine. When a TWS for z/OS connector instance is created, the IP address (or host name) and TCP port number of the TWS for z/OS engine's TCP/IP server are specified. The TWS for z/OS connector uses these two pieces of information to connect to the Tivoli Workload Scheduler for z/OS engine. See Figure 2-23 on page 84.

Tivoli Workload Scheduler connector


The TWS connector must be instantiated on the host where the TWS engine is
installed, so that it can access the plan and database files locally. This means
that the Tivoli Management Framework must be installed (either as a TMR server
or managed node) on the server where the TWS engine resides. Usually, this
server is the TWS master domain manager. But it may also be desirable to
connect with JSC to another domain manager or to a fault tolerant agent. If
multiple instances of TWS are installed on a server, it is possible to have one
TWS connector instance for each TWS instance on the server. When a TWS
connector instance is created, the full path to the TWS home directory
associated with that TWS instance is specified. This is how the TWS connector
knows where to find the TWS databases and plan. See Figure 2-23 on page 84.
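As an illustrative sketch only (the utilities are delivered with the connectors, but the exact flags can differ by level, so verify the syntax in the installation documentation; the instance names, path, host name, and port number below are invented), connector instances are typically created from the command line on the Framework node along these lines:

wtwsconn.sh -create -n ENG-A -t /opt/tws/twsuser
wopcconn -create -e ENG-1 -a twsc.example.com -p 425

The first command would create a TWS connector instance pointing at the TWS home directory of the local engine; the second would create a TWS for z/OS connector instance pointing at the host name and port of the TWS for z/OS TCP/IP server.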


Connector instances
We will now give a couple of examples of how connector instances might be
installed in the real world.

One connector instance of each type


In Figure 2-23 there are two connector instances: one TWS for z/OS connector instance (ENG-1) and one TWS connector instance (ENG-A):
The Tivoli Workload Scheduler for z/OS connector instance ENG-1 is associated with Engine 1, a Tivoli Workload Scheduler for z/OS engine running in a remote sysplex. Communication between the connector instance and the remote scheduling engine is conducted over a TCP connection.
The Tivoli Workload Scheduler connector instance ENG-A is associated with Engine A, a Tivoli Workload Scheduler engine installed on the same AIX server. The ENG-A connector instance reads from and writes to the plan (the Symphony file) of the TWS engine.

[Figure 2-23 shows one connector instance of each type: a z/OS sysplex hosts the TWS for z/OS master domain manager (Engine 1) and its server; an AIX server hosts the Tivoli Management Framework (TMR server or managed node, with JSS installed), the oserv process, the TWS for z/OS connector instance ENG-1, and the TWS connector instance ENG-A; ENG-1 connects over TCP/IP to the Engine 1 server, while ENG-A performs local read/write of the plan of the TWS domain manager instance (Engine A) installed on the same AIX server; Job Scheduling Console sessions on desktop, laptop, and UNIX workstations connect to the oserv over TCP/IP.]

Figure 2-23 One TWS for z/OS connector and one TWS connector instance


The names of the connector instances are arbitrary. We chose the names we did
because the names convey some information about the scheduling engine with
which the connector instances are associated.
Tip: TWS connector instances must be created on the server where the TWS
engine is installed. This is because the TWS connector needs to be able to
have access locally to the TWS engine (specifically, to the plan and database
files). This limitation obviously does not apply to TWS for z/OS connector
instances because the TWS for z/OS connector communicates with the
remote TWS for z/OS engine over TCP/IP.

In this example, Engine A is a TWS domain manager whose master is Engine 1,


a TWS for z/OS engine. It is common to install a TWS connector instance on a
domain manager that is under the control of a TWS for z/OS master. This allows
the operator to get information about jobs running on the domain manager
directly from the domain manager itself, rather than having to retrieve the
information through the TWS for z/OS engine.

Two connector instances of each type


It is often necessary to have multiple connector instances of each type installed
on the same server. For example, one instance for a production environment and
one instance for a test environment. Figure 2-24 on page 86 shows an example
of this.
In Figure 2-24 on page 86 there are four connector instances: two TWS for z/OS connector instances (ENG-1 and ENG-2) and two TWS connector instances (ENG-A and ENG-B).


[Figure 2-24 shows two connector instances of each type: two z/OS sysplexes each host a TWS for z/OS master domain manager and its server (Engine 1 and Engine 2); an AIX server hosts the Tivoli Management Framework (TMR server or managed node, with JSS installed), the oserv process, two TWS for z/OS connector instances (ENG-1 and ENG-2) that connect over TCP/IP to the two sysplexes, and two TWS connector instances (ENG-A and ENG-B) that perform local read/write of the plan and databases of two separate TWS instances installed on the AIX server: Engine A, a TWS master domain manager with its own databases and plan, and Engine B, a TWS domain manager (plan only) linked to Engine 2; Job Scheduling Console sessions connect to the oserv over TCP/IP.]

Figure 2-24 An example with two connector instances of each type

Each of the connector instances is associated with a different Tivoli Workload


Scheduler for z/OS engine or Tivoli Workload Scheduler engine:
ENG-1 is associated with Engine 1 running in one sysplex.
ENG-2 is associated with Engine 2 running in a different sysplex.
ENG-A is associated with Engine A running on the AIX server.
ENG-B is associated with Engine B running on the AIX server.

In the above example, Engine 2 and Engine B are part of an end-to-end


scheduling environment where Engine 2 is the master domain manager and
Engine B is a subordinate domain manager.
Engine 1 and Engine A are independent of each other and not a part of the
end-to-end scheduling environment.


In this example, the ENG-A connector instance reads from and writes to the databases and the plan of the Engine A TWS engine. The ENG-B connector instance reads from and writes to the plan of the Engine B TWS engine; there are no databases associated with Engine B because it is a domain manager, not a master.
Each TWS engine is installed as a distinct instance on the same AIX server. For details about installing multiple TWS instances on the same computer, see Section 3.5.1, Installing multiple instances of TWS on one machine on page 142.
The two Tivoli Workload Scheduler for z/OS engines could instead be running on
two different nodes in the same sysplex. In many environments, there will be only
one Tivoli Workload Scheduler for z/OS engine. In these environments, only one
TWS for z/OS connector instance is needed.
Note: One connector instance must be created for each TWS for z/OS engine
or TWS engine that is to be accessed from the Job Scheduling Console.

It is also possible to have more than one TWS for z/OS connector instance
associated with the same TWS for z/OS engine, but this is not particularly useful.
You might wonder why one would need a separate TWS master domain
manager (Engine A) that is not part of the end-to-end environment (not under the
control of the TWS for z/OS master domain manager). You might also wonder
why one would need a TWS for z/OS engine (Engine 1) that is independent of
the end-to-end environment. In some situations, it might be best to have multiple
TWS networks, each with its own master. The divisions between TWS networks
might be based on organizational or functional divisions. For example, the
Accounting Department might want to use a TWS for z/OS engine as its master,
while the Engineering Department wants to use a TWS master. Or the
organization might choose to test the end-to-end scheduling environment
thoroughly before putting it into production. During the testing phase, it would be
necessary to have both networks up and running at the same time. You could
imagine that in the above example, Engine 2 and Engine B are the new
end-to-end scheduling networks, while Engine 1 and Engine A are parts of
independent TWS for z/OS and TWS networks.
There are many good reasons to have multiple separate scheduling networks.
Tivoli Workload Scheduler gives you a great deal of flexibility in this regard.

The connector programs


These are the programs that run behind the scenes to make the connectors work.
Each program and its function is described below.


Programs of the TWS for z/OS connector


The programs that comprise the TWS for z/OS connector are located in
$BINDIR/OPC.
opc_connector
The main connector program, which contains the implementation of the main connector methods (basically, all the methods that are required to connect to the TWS for z/OS engine and retrieve data from TWS for z/OS). It is implemented as a threaded daemon: it is started automatically by the Tivoli Framework at the first request that it should handle, and it stays active until no request has been received for a long time. Once it is started, it starts a new thread for each JSC request that requires data from a specific TWS for z/OS engine.
opc_connector2
A small connector program that contains the implementation of small methods that do not require data from TWS for z/OS. This program is implemented per method: the Tivoli Framework starts the program when one of the methods it implements is called, the process performs the action for that method, and then it terminates. This is useful for methods (like the ones called by JSC when it starts and asks for information from all of the connectors) that can be handled in isolation and for which it is not worthwhile to keep a process active.

Programs of the TWS connector


The programs that comprise the TWS connector are located in
$BINDIR/Maestro.
maestro_engine
The maestro_engine program performs authentication when a user logs in via the Job Scheduling Console. It also starts and stops the TWS engine. It is started by the Tivoli Management Framework (specifically, the oserv program) when a user logs in from JSC. It terminates after 30 minutes of inactivity.

maestro_plan
The maestro_plan program reads from and writes to the TWS plan. It also handles switching to a different plan. The program is started when a user accesses the plan. It terminates after 30 minutes of inactivity.

maestro_database
The maestro_database program reads from and writes to the TWS database files. It is started when a JSC user accesses a database object or creates a new object definition. It terminates after 30 minutes of inactivity.


job_instance_output
The job_instance_output program retrieves job standard list files. It is started when a JSC user runs the Browse Job Log operation. It starts up, retrieves the requested stdlist file, and then terminates.

maestro_x_server
The maestro_x_server program provides an interface to certain types of extended agents, such as the SAP R/3 extended agent (r3batch). It starts up when a command is run in JSC that requires execution of an agent method. It runs the X-agent method, returns the output, and then terminates.


Chapter 3. Planning, installation, and configuration of the TWS 8.1
In this chapter, we provide details on how to plan, install, and configure an
end-to-end scheduling environment using the Tivoli Workload Scheduler 8.1
suite (Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler).
This chapter is divided into sections that discuss the following topics:
Planning an end-to-end scheduling environment
Installing Tivoli Workload Scheduler for z/OS
Installing Tivoli Workload Scheduler
Installing Tivoli Management Framework for use with Job Scheduling Console
Installing the Job Scheduling Console
Configuring the end-to-end scheduling environment

In this chapter, we assume that no scheduling products exist in our example environment, so we provide general guidelines for planning, installing, and customizing from scratch. You might already have an existing TWS for z/OS, TWS for z/OS tracker agents, or TWS in your environment. We cover these different types of environments with detailed scenarios in Chapter 4, End-to-end implementation scenarios and examples on page 169.


3.1 Planning end-to-end scheduling for TWS for z/OS


As part of planning, the hardware and software requirements are important factors to take care of. For the latest hardware and software requirements for end-to-end scheduling with TWS for z/OS, refer to Appendix E of the Tivoli Workload Scheduler for z/OS V8R1 Installation Guide, SH19-4543.
Tip: With every release, Tivoli Workload Scheduler integrates more and more of the traditional host and distributed environments. This means that the staff responsible for planning, installation, and operation of scheduling must cooperate across department boundaries and must understand the entire scheduling environment, both mainframe and distributed. Let us give you an example:

Tivoli Workload Scheduler for z/OS administrators must be familiar with the
domain architecture and the meaning of fault tolerant in order to understand
that the script is no longer located in the job repository database. This is
essential when it comes to reflecting the topology in Tivoli Workload Scheduler
for z/OS.
On the other hand, people who are in charge of Tivoli Workload Scheduler
need to know the Tivoli Workload Scheduler for z/OS architecture to
understand the new planning mechanism and Symphony file creation.
In conclusion, we recommend that all involved people (mainframe and
distributed scheduling) get familiar with both scheduling environments, which
are described throughout the book.

3.1.1 Preventive Service Planning (PSP)


The Program Directory provided with your Tivoli Workload Scheduler for z/OS
distribution tape may include technical information that is more recent than the
information provided in this section. In addition, the Program Directory describes
the program temporary fix (PTF) level of the Tivoli Workload Scheduler for z/OS
licensed program that is shipped. It is an important document when you install
Tivoli Workload Scheduler for z/OS. The Program Directory contains instructions
for unloading the software and information about additional maintenance for your
level of the distribution tape. Before you start installing Tivoli Workload Scheduler
for z/OS, check the preventive service planning bucket for recommendations
added by the service organizations after your Program Directory was produced.
The PSP includes a recommended service section that includes high impact or
pervasive (HIPER) APARs. Ensure that the corresponding PTFs are installed
before you start to customize a Tivoli Workload Scheduler for z/OS subsystem.


Important: If you plan to migrate the Tivoli Workload Scheduler for z/OS
agent to 8.1, but stay on a pre-8.1 release of the engine, you need to install
the tolerance APAR PQ52935.

Table 3-1 gives the PSP information for Tivoli Workload Scheduler for z/OS.
Table 3-1 PSP upgrade and subset ID
Upgrade      Subset     Description
TWSZOS810    HWSZ100    Agent
             JWSZ102    Engine
             JWSZ1A4    Engine English NLS
             JWSZ101    TCP/IP communication
             JWSZ103    End-to-end enabler
             JWSZ1C0    Agent enabler

3.1.2 Software ordering


Tivoli Workload Scheduler has been packaged into three products that can be ordered independently of each other. Table 3-2 shows each product and its included components.
Table 3-2 Product and components
The table maps the following components to the three orderable products (TWS 8.1 for z/OS, TWS 8.1, and the TWS 8.1 suite): z/OS scheduler (OPC), including the agent (z/OS tracker); tracker agent enabler; extended agent for MVS and OS/390; end-to-end enabler; TWS distributed (Maestro); and the Job Scheduling Console.

The end-to-end enabler (FMID JWSZ103) is used to populate the base binary directory in an HFS during System Modification Program/Extended (SMP/E) installation. These files can be shared by different z/OS engines.
Important: If you want to use the end-to-end scheduling solution, you need to
order the Tivoli Workload Scheduler 8.1 suite, because it contains all the
necessary end-to-end components.

3.1.3 Tracker agents


Attention: As part of Tivoli's strategy, the tracker agent code is no longer
shipped with the product, even though you can still run tracker agents under
Tivoli Workload Scheduler for z/OS until you are ready to migrate the tracker
agents to fault tolerant agents.

If you install Tivoli Workload Scheduler for z/OS into the same SMP/E zone
where a prior Tivoli Workload Scheduler for z/OS release is already installed,
it will remove any tracker agent workstation code, and any further downloads
or maintenance activities will then be impossible. So either choose a different
SMP/E zone or take a backup of your SEQQEENU dataset.
If you plan to migrate from tracker agents to fault tolerant agents, you must be
aware of the following restriction in the current release, Tivoli Workload
Scheduler 8.1: you cannot use automatic recovery and variable substitution when
running scripts on an FTA. This restriction is planned to be removed in the next release.
In Section 4.4.6, TWS for z/OS JCL variables in connection with TWS
parameters on page 211, we show you a way to use TWS for z/OS JCL
variables in connection with TWS parameters, to circumvent the variable
substitution restriction.

3.1.4 Workload Monitor


Because the Job Scheduling Console replaces WLM/2, Workload Monitor/2 is
not shipped as part of the TWS for z/OS code.


3.1.5 System documentation


The system documentation is not shipped in hardcopy form with TWS for z/OS
8.1. The books have been converted to PDF format and are now part of the Tivoli
Workload Scheduler for z/OS product. Because the tracker agents are being
replaced by the Tivoli Workload Scheduler fault tolerant agents and WLM/2 is not
included in the TWS for z/OS product, the following books have been removed:
Tivoli Operations Planning and Control V2R3 Tracker Agents for AIX, UNIX,
VMS, and OS/390, SH19-4484
Tivoli Operations Planning and Control V2R3 Tracker Agents for OS/2 and
Windows NT, SH19-4483
Tivoli Operations Planning and Control V2R3 Tracker Agents for OS/400
Installation and Operation, SH19-4485
Tivoli Operations Planning and Control V2R3 Workload Monitor/2 Users
Guide, SH19-4482

3.1.6 EQQPDF
Member EQQPDF of dataset SEQQMISC contains the latest Tivoli Workload
Scheduler for z/OS documentation corrections in PDF format. You need to
download the member (in binary) with the extension .pdf. Then you can use the
Adobe Acrobat Reader to view it. EQQPDF will be updated regularly via program
temporary fixes, so make sure to always have the latest documentation
available.

3.1.7 Sharing hierarchical file system (HFS) cluster


Tivoli Workload Scheduler code has been ported into UNIX System Services
(USS) on z/OS. The end-to-end server accesses this code in a hierarchical file
system cluster. This cluster can be shared within a sysplex only starting from
OS/390 Version 2 Release 9. Take this into consideration when you are planning
your hot standby strategy. Sharing an HFS cluster is explained in detail in
Section 3.2, Installing Tivoli Workload Scheduler for z/OS on page 100.


Terminology note: An HFS dataset is an OS/390 dataset that contains a
POSIX-compliant hierarchical file system, which is a collection of files and
directories organized in a hierarchical structure that can be accessed using
the OS/390 UNIX system services (USS).

3.1.8 TCP/IP considerations


The end-to-end environment uses TCP/IP as the only protocol to communicate
between the end-to-end server and the domain manager. Even though the fault
tolerant architecture has the advantage that the distributed environment can
continue its processing during a network failure, you must consider what can
happen when the z/OS engine fails or must be moved within the sysplex, for
example, due to maintenance activities. In this case the domain manager buffers
its tracking events in the file tomaster.msg until the link to the end-to-end server
is reestablished.
The end-to-end server is a started task that must be running on the same system
as the z/OS engine. The server accesses the EQQTWSIN and EQQTWSOU
event queues in order to translate the events for the engine, and handles the
connection to the domain manager. Be aware that during a link outage, the
dependencies between jobs in Tivoli Workload Scheduler for z/OS and jobs in
the distributed network cannot be resolved.
Another major issue to consider is that when a host or system failure occurs, a
hot standby z/OS engine by default uses a different TCP/IP stack than the failing
one and therefore gets a different IP address. The same happens when you
simply start the z/OS engine on another system within the sysplex. Since the
domain manager would still try to link to the end-to-end server at its old location,
you must take the required actions to successfully reestablish the connection.
Figure 3-1 on page 97 illustrates the connection between the end-to-end server
and the domain manager.


The figure shows a z/OS sysplex with the active engine on wtsc63.itso.ibm.com
and standby engines on wtsc64.itso.ibm.com and wtsc65.itso.ibm.com, connected
to the AIX domain manager on eastham.itso.ibm.com. Below this domain manager
are a fault tolerant agent acting as backup domain manager (AIX), further domain
managers on AIX, Solaris, and HPUX, and fault tolerant agents on AIX, Solaris,
OS/400, Windows NT, Windows 2000, and HPUX.

Figure 3-1 TWS domain manager connection to end-to-end server

Whenever the end-to-end server starts, it looks in the topology member to find
its hostname and port number. If you have not defined a hostname, the system
takes the one from the TCP/IP profile. The port and hostname are also inserted
into the Symphony file and distributed to the domain manager when either a
current plan batch job or a Symphony renew is initiated. The domain manager in
turn uses this information to link back to the server.
If the z/OS engine fails on wtsc63.itso.ibm.com (see Figure 3-1), the standby
engine on either wtsc64 or wtsc65 can take over all the engine functions. Which
engine takes over depends on how the standby engines are configured.
Since the domain manager on eastham knows wtsc63 as its master domain
manager, the link would fail no matter which system (wtsc64 or wtsc65) takes
over the engine. One solution to this problem is to send a new Symphony file
(redistribute the Symphony file) from the engine that has taken over to the
primary domain manager. Redistributing the Symphony file on the new engine
recreates the Symphony file and adds the new z/OS hostname to it. The domain
manager can use this information to reconnect to the engine (or, more precisely,
the server) on the new z/OS system.


Since redistribution of the Symphony file can be disruptive in a heavily loaded
scheduling environment, we will explain three alternative strategies that can be
used to solve the reconnection problem.
For all three alternatives, the topology member is used to specify the hostname
and port number for the TWS for z/OS server task. The hostname is copied to
the Symphony file when the Symphony file is redistributed or the TWS for z/OS
current plan is extended or replanned. The distributed primary domain manager
uses the hostname read from the Symphony file to connect to the server. To
keep the hostname unique after a failover (where the engine is moved to
one of its backup engines), and to gain additional flexibility, we define a logical
hostname (TWSCTP). This hostname is logical because it is not related to the
physical host or z/OS system where the server runs; it could be called an alias.

Usage of the host file on the domain manager


The different IP addresses of the systems where the engine can be active are
defined in the hosts file (/etc/hosts) on the primary domain manager:
9.12.6.8   wtsc63.itso.ibm.com  TWSCTP
9.12.6.9   wtsc64.itso.ibm.com
9.12.6.10  wtsc65.itso.ibm.com

As explained earlier, TWSCTP is not a hostname related to a physical z/OS
system. It is simply a freely chosen name defined in the server topology member.
If the server is moved to the WTSC64 (wtsc64.itso.ibm.com) system, you only
have to edit the hosts file on the primary domain manager, so TWSCTP now
points to the new system:
9.12.6.8 wtsc63.itso.ibm.com
9.12.6.9 wtsc64.itso.ibm.com TWSCTP
9.12.6.10 wtsc65.itso.ibm.com

This change takes effect dynamically (the next time the domain manager tries to
reconnect to the server). The change must be carried out by editing a local file on
the domain manager.

Usage of stack affinity on the z/OS system


Another possibility is to use stack affinity. Stack affinity provides the ability to
define which specific TCP/IP instance the application should bind to. If you are
running in a multiple stack environment, where each system has its own TCP/IP
stack, the application can be forced to use a specific stack, even if it runs on
another system. Stack affinity uses the _BPXK_SETIBMOPT_TRANSPORT
environment variable.


Tivoli Workload Scheduler APAR PQ55837 (not yet closed at the time of writing)
implements the ability to use this variable by adding a new DD statement to the
end-to-end server started task procedure. This statement points to a fixed-length,
80-byte record sequential dataset or PDS member that contains the environment
variables to be initialized before the end-to-end processes are started.
Example:

//STDENV   DD  DISP=SHR,DSN=MY.FILE.PARM(STDENV)

This member can be used to set the stack affinity using the following
environment variable:
_BPXK_SETIBMOPT_TRANSPORT=xxxxx

Where xxxxx indicates the TCP/IP stack the server should bind to.
One disadvantage of stack affinity is that if the stack or the sysplex member must
be shut down, you must intervene manually.
For more information see the z/OS V1R2 Communications Server: IP
Configuration Guide, SC31-8775.

Usage of Dynamic Virtual IP Addressing (DVIPA)


DVIPA, introduced with OS/390 V2R8, in general, gives you the ability to assign
a specific virtual IP address to your application. This virtual address is
independent of any specific TCP/IP stack within the sysplex. Even if your
application has to be moved to another system in case of a failure or
maintenance, the application can be reached under the same virtual IP address.
Usage of DVIPA is the most flexible way to be prepared for application or system
failure. We recommend defining the following Tivoli Workload Scheduler for z/OS
components to use DVIPA:
End-to-end server task
TCP/IP server, used for JSC

The following example illustrates how a DVIPA definition looks:


VIPADYNAMIC
viparange define 255.255.255.248 9.12.6.104
ENDVIPADYNAMIC
PORT
5000 TCP TWSJSC BIND 9.12.6.106
31281 TCP TWSCTP BIND 9.12.6.107

In this example, DVIPA automatically assigns started task TWSCTP, which
represents our end-to-end server task and listens on port 31281, to IP address
9.12.6.107.


You can find more details in Section 1.3.2 of the z/OS V1R2 Communications
Server: IP Configuration Guide, SC31-8775.
In addition, the redbook TCP/IP in a Sysplex, SG24-5235, provides useful
installation information.

3.1.9 Script repository


Fault tolerant agent technology requires that scripts be located on the physical
machine where they run. This is important for your planning when it comes to
migrating from tracker agents to end-to-end solutions. Strategies for migrating
from tracker agents to fault tolerant workstations are described in detail in
Migrating TWS for z/OS tracker agents to TWS for z/OS end-to-end on
page 201.

3.1.10 Submitting user ID


The way the user ID that submits scripts is defined has changed. Instead of
using the job submit exit EQQUX001, the user ID is now part of the JOBREC
statement. For scripts running on Windows NT, the user ID and password must
be defined with the USRREC keyword. See also Initialization parameter used to
describe the topology on page 115.

3.2 Installing Tivoli Workload Scheduler for z/OS


In this chapter we guide you through the installation process of Tivoli Workload
Scheduler for z/OS, focusing on the end-to-end feature. We do not duplicate the
complete installation of the base product, which is described in detail in the
Tivoli Workload Scheduler for z/OS V8R1 Installation Guide, SH19-4543.
Table 3-3 shows the necessary installation tasks and the sections that describe
them.
Table 3-3 TWS for z/OS installation tasks (each task applies to the z/OS engine, the z/OS
agent, the end-to-end server, or a combination of these components)

1. Execute the EQQJOBS installation aid - Section 3.2.1, Executing EQQJOBS
   installation aid on page 102.
2. Define subsystems in SYS1.PARMLIB - Section 3.2.2, Defining Tivoli Workload
   Scheduler for z/OS subsystems on page 106.
3. Allocate end-to-end datasets (EQQPCS06) - Section 3.2.3, Allocate end-to-end
   datasets on page 107.
4. Create and customize the work directory (EQQPCS05) - Section 3.2.4, Create and
   customize the work directory on page 109.
5. Create JCL procedures for Tivoli Workload Scheduler for z/OS - Section 3.2.5,
   Create the started task procedures for TWS for z/OS on page 109.
6. Define end-to-end initialization statements - Section 3.2.6, Define end-to-end
   initialization statements on page 110.
7. Verify your installation - Section 3.2.9, Verify the installation on page 130.

3.2.1 Executing EQQJOBS installation aid


EQQJOBS is a CLIST-driven ISPF dialog that can help you install Tivoli
Workload Scheduler for z/OS. EQQJOBS assists in the installation of the engine
and agent by building a batch-job JCL that is tailored to your requirements. To
make EQQJOBS executable, allocate these libraries to the DD statements in
your TSO session:
SEQQCLIB to SYSPROC
SEQQPNL0 to ISPPLIB
SEQQSKL0 and SEQQSAMP to ISPSLIB

We will show you how to use the EQQJOBS installation aid as follows:

1. To invoke EQQJOBS, enter the TSO command EQQJOBS from an ISPF
   environment. The primary panel seen in Figure 3-2 appears.

EQQJOBS0 ------------------ EQQJOBS application menu ------------------
Select option ===>

1 - Create sample job JCL
2 - Generate OPC batch-job skeletons
3 - Generate OPC Data Store samples
4 - Exit from the EQQJOBS dialog

Figure 3-2 EQQJOBS primary panel

You only need to select options 1 and 2 for end-to-end specifications. We do
not step through the whole EQQJOBS dialog here; instead we show you only
the panels related to end-to-end (the referenced panel names are indicated in
the top left corner of the panels, as shown in Figure 3-2).
2. Select option 1 and enter the necessary values in panel EQQJOBS8 (see
   Figure 3-3 on page 103).


EQQJOBS8 ------------------- Create sample job JCL ---------------------
Command ===>

  end-to-end FEATURE:              ===> Y                    (Y= Yes ,N= No)
  HFS Installation Directory       ===> /usr/lpp/TWS/TWS810
  HFS Work Directory               ===> /tws/twsctpwrk
  User for OPC address space       ===> TWSRES1
  Refresh CP group                 ===> TWS810

  RESTART AND CLEANUP (DATA STORE) ===> Y                    (Y= Yes ,N= No)
  Reserved destination             ===> OPC
  Connection type                  ===> XCF                  (SNA/XCF)

Figure 3-3 Server-related input panel

The following definitions are important:


End-to-end FEATURE
Specify Y if you want to inter-operate with Tivoli Workload Scheduler fault
tolerant workstations.
HFS installation directory
Specify the HFS path where SMP/E has installed the Tivoli Workload
Scheduler for z/OS binaries and files related to the end-to-end enabler
feature. This directory contains the bin directory. The default path is
/usr/lpp/TWS/TWS810. The installation directory is created by SMP/E job
EQQISMKD and populated by applying the end-to-end feature
(JWSZ103).
HFS work directory
Specify where the subsystem-specific HFS files are. Replace /inst with a
name that uniquely identifies your subsystem. Each subsystem that will
use fault tolerant workstations must have its own work directory. This
directory is the base directory where the end-to-end processes have their
working files (Symphony, event files, traces).


Important: To successfully configure end-to-end scheduling in a sysplex
environment, make sure that the work directory is available to all systems in
the sysplex. In case of a takeover, the new server will be started on another
system in the sysplex and must be able to access the work directory to continue
processing.

Starting from OS/390 2.9, you can use the shared HFS and see the same
directories from different systems. Some directories (usually /var, /dev, /etc,
and /tmp) are system specific. This means that those paths are logical links
pointing to different directories. When you specify the work directory, make
sure that it is not on a system-specific filesystem. Or, should this be the case,
make sure that the same directories on the filesystem of the other systems are
pointing to the same directory. For example, you can use /u/TWS; that is not
system-specific. Or you can use /var/TWS on system SYS1 and create a
symbolic link /SYS2/var/TWS to /SYS1/var/TWS so that /var/TWS will point to
the same directory on both SYS1 and SYS2.
If you are using OS/390 Version 2.6, Version 2.7, or Version 2.8, you need to
create an HFS dataset. Then you should mount it in Read/Write mode on
/var/TWS, and use it for the work directories. When you start the server on
another system, unmount the filesystem from the first system and mount it on
the new one. The filesystem can be mounted in R/W mode only on one
system at a time.
We recommend that you create a separate HFS cluster for the working
directory, mounted in Read/Write mode. This is because the working directory
is application specific and contains application-related data; a separate cluster
also makes your backups easier. The size of the cluster depends on the size of
the Symphony file and how long you want to keep the stdlist files. We recommend
allocating 2 GB of space.
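
As a minimal sketch of the two approaches described in the note above (the HFS
dataset name OMVS.TWSC.WRKDIR is a placeholder), the work directory can be made
reachable under the same path like this. A TSO command mounts the work directory
HFS dataset in Read/Write mode:

   MOUNT FILESYSTEM('OMVS.TWSC.WRKDIR') MOUNTPOINT('/var/TWS') TYPE(HFS) MODE(RDWR)

and, when using the shared HFS, a USS symbolic link makes /var/TWS on SYS2 resolve
to the directory owned by SYS1:

   ln -s /SYS1/var/TWS /SYS2/var/TWS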

User for OPC address space


This information is used to create the procedure to build the HFS directory
with the right ownership. In order to run the end-to-end feature correctly,
the ownership of the work directory and the files contained in it must be
assigned to the same user ID as RACF associates with the Server Started
Task. In the User for OPC address space field, specify the RACF user ID
used for the server address space. This is the name specified in the
started-procedure table (in RACF).


Refresh CP group
This information is used to create the procedure to build the HFS directory
with the right ownership. In order to create the new Symphony file, the
user ID used to run the daily planning batch job must belong to the group
that you specify in this field. Also make sure that the user ID associated
with the server address space (the one specified in the User for OPC
address space field) belongs to this group or has this group as a
supplementary group.
As you can see in Figure 4-3 on page 133, we defined RACF user ID
TWSRES1 for the end-to-end server started task. User TWSRES1 belongs
to RACF group TWS810. Therefore all users of the RACF group TWS810
and its supplementary group get access to create the Symphony file.
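
For illustration only, the corresponding RACF setup could be sketched as follows;
the GID value and the user ID BATCHUSR (a placeholder for the user that runs the
daily planning batch jobs) are examples, not values from this installation:

   ADDGROUP TWS810 OMVS(GID(9999))
   CONNECT  TWSRES1  GROUP(TWS810)
   CONNECT  BATCHUSR GROUP(TWS810)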

Tip: The Refresh CP group field can be used to give access to the HFS files as
well as to protect the HFS directory from unauthorized access.

3. Press Enter to generate the installation job control language (JCL). Table 3-4
   lists the sample members that are created with end-to-end modifications.
Table 3-4 End-to-end example JCL

Member      Description
EQQCON      Sample started task procedure for a z/OS engine and agent in the
            same address space.
EQQCONO     Sample started task procedure for the z/OS engine only.
EQQCONP     Sample initial parameters for a z/OS engine and agent in the same
            address space.
EQQCONOP    Sample initial parameters for a z/OS engine only.
EQQPCS05    Creates the working directory in HFS used by the end-to-end server task.
EQQPCS06    Allocates datasets for integration with the end-to-end feature.
EQQSER      Sample started task procedure for a server task.
EQQSERV     Sample initialization parameters for a server task.


4. Now go back to the EQQJOBS primary panel and choose option 2 to generate
   the skeletons. Make your necessary entries until panel EQQJOBSA appears
   (Figure 3-4).

EQQJOBSA -------------- Generate OPC batch-job skeletons ---------------
Command ===>

Specify if you want to use the following optional features:

  end-to-end FEATURE:                                        (Y= Yes ,N= No)
  (To interoperate with TWS fault tolerant workstations)

  RESTART AND CLEAN UP (DATA STORE):                         (Y= Yes ,N= No)
  (To be able to retrieve joblog, execute dataset clean up actions
   and step restart)

Figure 3-4 Generate end-to-end skeletons

5. Specify Y for the end-to-end FEATURE, if you want to inter-operate with Tivoli
Workload Scheduler fault tolerant workstations.
6. Press Enter and the new skeleton member will be created (see Table 3-5).
Table 3-5 End-to-end skeletons

Member      Description
EQQSYRES    Tivoli Workload Scheduler Symphony renew.

3.2.2 Defining Tivoli Workload Scheduler for z/OS subsystems


The subsystem for the z/OS engines and agents must be defined in the active
subsystem-name-table member of SYS1.PARMLIB. It is advisable to install at
least two Tivoli Workload Scheduler for z/OS controlling systems, one for testing
and one for your production environment.
Note: It is recommended that you install the agent and the z/OS engine in
separate address spaces.

To define the subsystems, update the active IEFSSNnn member in
SYS1.PARMLIB. The name of the subsystem initialization module for Tivoli
Workload Scheduler for z/OS is EQQINITE. Include records as shown in
Example 3-1 on page 107.


Example 3-1 subsystem definition record (IEFSSNnn member of SYS1.PARMLIB)

SUBSYS SUBNAME(subsystem name)      /* TWS for z/OS subsystem */
       INITRTN(EQQINITE)
       INITPARM('maxecsa,E')

3.2.3 Allocate end-to-end datasets


Member EQQPCS06 in your installation JCL allocates the following VSAM and
sequential datasets needed for end-to-end scheduling:
End-to-end script library (EQQSCLIB)
End-to-end input and output events datasets (EQQTWSIN and EQQTWSOU)
Current plan backup copy dataset to create Symphony (EQQSCPDS)

Let us explain these datasets in more detail.

End-to-end script library (EQQSCLIB)


This script library dataset includes members containing the commands or the job
definitions for fault tolerant workstations. It is required in the z/OS engine if you
want to use the end-to-end scheduling feature. See the Tivoli Workload
Scheduler for z/OS V8R1 Customization and Tuning, SH19-4544, for details
about the JOBREC statement.
Tip: Do not compress members in this PDS. For example, do not use the ISPF
PACK ON command because Tivoli Workload Scheduler for z/OS does not use
ISPF services to read it.

EQQTWSIN and EQQTWSOU


EQQTWSIN and EQQTWSOU are the input and output event datasets. These
datasets are required by every Tivoli Workload Scheduler for z/OS address
space that uses the end-to-end feature. They record the descriptions of events
related to operations running on fault tolerant workstations and are used by
both the end-to-end enabling task and the translator process in the scheduler's
TCP/IP server. The datasets are device-dependent and can have only primary
space allocation. Do not allocate any secondary space. They are automatically
formatted by Tivoli Workload Scheduler for z/OS the first time they are used.


Note: An SD37 abend code is produced when Tivoli Workload Scheduler for
z/OS formats a newly allocated dataset. Ignore this error.

EQQTWSIN and EQQTWSOU are wrap-around datasets. In each dataset, the
header record is used to track the number of records read and written. To avoid
the loss of event records, each dataset uses a mechanism whereby a writer task
will not write any new records until all the existing ones have been read; the
task resumes writing as soon as all the existing records have been read. The
amount of space that you define for each dataset requires some attention.
Because the two datasets are also used for job log retrieval, the limit for the job
log length is half the maximum number of records that can be stored in the input
events dataset. 25 cylinders are sufficient for most installations. When the record
length is 120, the space allocation must be at least 1 cylinder.
There must be sufficient space in each dataset to hold 1000 records (the
maximum number of job log records will be 500). Use this as a reference also if
you plan to define a record length greater than 120 for the datasets. You can
define a LRECL from 120 to 32000 bytes. Values outside of this range will cause
the end-to-end task to terminate. The datasets must be unblocked and the block
size must be the same as the logical record length. For good performance,
define the datasets on a device with plenty of availability. If you run programs that
use the RESERVE macro, try to allocate the datasets on a device that is not, or
only slightly, reserved. Initially you may need to test your system to get an idea of
the number and types of events that are created at your installation. Once you
have gathered enough information, you can then reallocate the datasets. Before
you reallocate a dataset, ensure that the current plan is entirely up-to-date. You
must also stop the event writer and any event reader that uses the datasets.
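
For illustration only (member EQQPCS06 generates the real allocation; the dataset
names and unit are placeholders), an allocation that follows these rules could look
like this:

   //ALLOCTWS JOB (ACCT),'ALLOC E2E EVENTS',CLASS=A,MSGCLASS=X
   //ALLOC    EXEC PGM=IEFBR14
   //EQQTWSIN DD  DSN=TWS.INST.TWSIN,DISP=(NEW,CATLG),
   //             UNIT=SYSDA,SPACE=(CYL,(25)),
   //             DCB=(RECFM=F,LRECL=120,BLKSIZE=120)
   //EQQTWSOU DD  DSN=TWS.INST.TWSOU,DISP=(NEW,CATLG),
   //             UNIT=SYSDA,SPACE=(CYL,(25)),
   //             DCB=(RECFM=F,LRECL=120,BLKSIZE=120)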
Tip: Do not move these datasets once they have been allocated. They contain
device-dependent information and cannot be copied from one type of device
to another, or moved around on the same volume. An end-to-end event
dataset that is moved will be re-initialized. This causes all events in the
dataset to be lost. If you have DFHSM or a similar product installed, you
should specify that E2E event datasets are not migrated or moved.

EQQSCPDS
EQQSCPDS is the current plan backup copy dataset to create the Symphony
file. During the creation of the current plan, the SCP dataset is used as a CP
backup copy for the production of the Symphony file. The primary and alternate
CP datasets (CP1 and CP2) are used in a flip-flop manner; that is, Tivoli


Workload Scheduler for z/OS copies the active CP to the inactive dataset and
then uses this newly copied dataset as the active CP. The active dataset is called
the CP logical file. Updates to the CX file are made in the data space. During the
current plan backup process, the data space is refreshed to DASD.

3.2.4 Create and customize the work directory


To install the end-to-end feature, you need to allocate the HFS files that the
feature uses. Then, on every Tivoli Workload Scheduler for z/OS engine that will
be using this feature, run the EQQPCS05 sample to create the HFS directories
and files. In order to do this you need to be a user with a USS (UNIX System
Services) user ID equal to 0, or with user read authority to the RACF Facility
Class BPX.SUPERUSER.
This job runs a configuration script residing in the installation directory to create
the working directory with the right permissions. The configuration script creates
several files and directories in the working directory.
Tip: If you execute this job in a sysplex that cannot share the HFS (prior to
OS/390 V2R9) and get messages like "cannot create directory", take a closer
look at which system the job actually ran on. Without system affinity, any
member that has an initiator started for the right class can execute the job, so
you must add a /*JOBPARM SYSAFF statement, as shown in the sketch below.
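
A sketch of adding system affinity to the job (the job name, accounting information,
and system name SC63 are placeholders):

   //TWSPCS05 JOB (ACCT),'CREATE WORK DIR',CLASS=A,MSGCLASS=X
   /*JOBPARM SYSAFF=SC63
   //*  ... steps generated by EQQPCS05 follow here ...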

3.2.5 Create the started task procedures for TWS for z/OS
Perform this task for a z/OS agent, engine, and server. You must define a started
task procedure or batch job for each Tivoli Workload Scheduler for z/OS address
space. The EQQJOBS dialog generates several members in the output library
that you specified. These members contain started task JCL that is tailored with
the values you entered in the dialog. Tailor these members further, according to
the datasets you require. See also Table 3-4 on page 105. The Tivoli Workload
Scheduler for z/OS server with TCP/IP support requires access to the C
language runtime library (either as STEPLIB or as LINKLIST). If you have
multiple TCP/IP stacks or a TCP/IP started task with a name different than
TCPIP, then you need to have a SYSTCPD DD card pointing to a TCP/IP dataset
containing the TCPIPJOBNAME parameter.
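
For example, a SYSTCPD DD card in the server procedure might look like the following
sketch; the dataset and member names are installation-specific:

   //SYSTCPD  DD DISP=SHR,DSN=TCPIP.TCPPARMS(TCPDATA)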


You must have a server dedicated for end-to-end scheduling. You can use the
same server also to communicate with the Tivoli Job Scheduling Console. The
z/OS engine uses the server to communicate events to the distributed agents.
The server will start multiple tasks and processes using the UNIX System
Services.
Recommendations: The z/OS engine and server use TCP/IP services.
Therefore it is necessary to define an OMVS (USS) segment for their user IDs.
There is no special authorization necessary; the user ID only needs to be
defined to USS, with any UID.

We recommend dividing the JSC and end-to-end server functions into two
separate started tasks. This has the advantage that you do not need to stop
all the server processes when the JSC server needs to be shut down.
The server started task (STC) is important for handling JSC and end-to-end
communication. We recommend setting the server STC as non-swappable
and giving it the same dispatching priority as the z/OS engine.
The end-to-end server needs the Init calendar statement to start successfully.

3.2.6 Define end-to-end initialization statements


The end-to-end feature introduces new initialization statements that are required
to run it successfully. They are primarily used to describe the topology of the
distributed environment, which must be known by the z/OS engine and the server.
In addition, the Tivoli Workload Scheduler for z/OS daily plan batch jobs, such as
the current plan extend, need this information to build the Symphony file. We go
through every new statement in detail and give you an example of how to reflect
your planned topology on the z/OS engine.
Table 3-6 New initialization member

Initialization member   Description
TPLGYSRV                Activates end-to-end in the TWS for z/OS engine.
TPLGYPRM                Activates end-to-end in the TWS for z/OS server and batch
                        jobs (plan jobs).
TOPOLOGY                Specifies all the parameters for end-to-end.
DOMREC                  Defines domains in a distributed TWS network.
CPUREC                  Defines agents in a TWS distributed network.
USRREC                  Specifies user ID and password for NT users.

You can find more information in Tivoli Workload Scheduler for z/OS V8R1
Customization and Tuning, SH19-4544.


In order to start end-to-end integration, you must set the parameters for the
engine (controller) as follows: in the PARMLIB member (DD name EQQPARM)
of the Tivoli Workload Scheduler for z/OS engine (controller) started task
procedure, customize the TPLGYSRV parameter in the OPCOPTS statement
by providing the name of the server started task used as the end-to-end server.
This activates the end-to-end subtasks in the Tivoli Workload Scheduler for
z/OS engine.
For the Tivoli Workload Scheduler for z/OS server and the Tivoli Workload
Scheduler for z/OS plan programs to recognize the Tivoli Workload Scheduler
network you must customize the PARMLIB members (DD name is EQQPARM)
for the Tivoli Workload server and plan programs as follows:
1. Create a member containing a description of the Tivoli Workload Scheduler
   network topology with domains and fault tolerant workstations. The topology
   is described using the CPUREC and DOMREC initialization statements.
2. If you have any fault tolerant workstations on Windows machines and you
want to run jobs on these workstations with the authority of another user, you
must create a member containing the users and passwords for Windows NT
users. The Windows users are described using USRREC initialization
statements.
3. Create a member containing end-to-end integration information, using the
   TOPOLOGY initialization statement. The TOPOLOGY initialization statement
   is used to define parameters related to the Tivoli Workload Scheduler topology,
   such as the port number for netman, the installation path in USS, and the path
   to the server work directory in USS.
   The TPLGYMEM parameter specifies the name of the member with the
   CPUREC and DOMREC initialization statements (item 1 above). The
   USRMEM parameter specifies the name of the member with the USRREC
   initialization statements (item 2 above).
4. Customize the TPLGYPRM parameter in the SERVOPTS and BATCHOPT
statements.
The TPLGYPRM parameter specifies the name of the member with the
TOPOLOGY definition (item 3 above).
Figure 3-5 on page 112 illustrates the relationship between the new initialization
statements and members.


OPCOPTS   TPLGYSRV(server name)
SERVOPTS  TPLGYPRM(TPLGPARM)
BATCHOPT  TPLGYPRM(TPLGPARM)
               |
               v
EQQPARM(TPLGPARM):
   TOPOLOGY BINDIR()
            WRKDIR()
            PORTNUMBER(31111)
            TPLGYMEM(TPLGINFO) ---> EQQPARM(TPLGINFO): DOMREC ... DOMREC ...
                                                       CPUREC ... CPUREC ...
            USRMEM(USRINFO)    ---> EQQPARM(USRINFO):  USRREC USRCPU() USRNAM() USRPSW()
                                                       USRREC ...

Figure 3-5 Relationship between new initialization members

Next let us explain each of these new statements.

TPLGYSRV(server_name)
Specify this keyword if you want to activate the end-to-end feature in the z/OS
engine. If you specify this keyword the Tivoli Workload Scheduler Enabler task is
started. The specified server name is that of the server that handles the events to
and from the distributed agents. Please note that only one server can handle
events to and from the distributed agents. This keyword is defined in OPCOPTS.


Tip: If you want to let the z/OS engine start and stop the end-to-end server, use
the server keyword in the OPCOPTS parmlib member.

TPLGYPRM(member name/TPLGPARM)
Specify this keyword if you want to activate the end-to-end feature in the server.
The specified member name is a member of the PARMLIB in which the
end-to-end options are defined by the TOPOLOGY statement. TPLGYPRM is
defined in the SERVOPTS statement of the server started task and in the
BATCHOPT statement.
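
Putting these keywords together, a minimal sketch of the end-to-end related entries
(all other required keywords of these statements are omitted here, and the member
name TPLGPARM follows Figure 3-5) could look like this:

   OPCOPTS  TPLGYSRV(TWSCTP)     /* engine: activate end-to-end, name of server STC  */
   SERVOPTS TPLGYPRM(TPLGPARM)   /* server: member containing the TOPOLOGY statement */
   BATCHOPT TPLGYPRM(TPLGPARM)   /* plan batch jobs: same TOPOLOGY member            */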

TOPOLOGY
This statement includes all the parameters related to the end-to-end feature.
TOPOLOGY is defined in the member of the EQQPARM library as specified by
the TPLGYPRM parameter in the BATCHOPT and SERVOPTS statements.
Figure 3-6 shows you the syntax of the topology member.

Figure 3-6 The topology member

Topology parameters
The following sections describe the topology parameters.


BINDIR(directory name)
Specifies the base hierarchical file system (HFS) directory where the binaries,
catalogs and the other files are installed on the HFS and shared among
subsystems.

HOSTNAME(hostname /IP address/ local hostname)


Specifies the hostname or the IP address that will be used by the distributed
agent to connect to the z/OS engine of Tivoli Workload Scheduler for z/OS. The
default is the hostname returned by the operating system.
Note: Please keep in mind that in Section 3.1.8, TCP/IP considerations on
page 96, we showed you how to define this value more flexibly within a
sysplex.

LOGLINES(number of lines/100)
Specifies the maximum number of lines that the job log retriever returns for a
single job log. The default value is 100. In all cases, the job log retriever does not
return more than half the number of records existing in the input queue.

CODEPAGE(host system codepage/IBM-037)


Specifies the name of the host code page and applies to the end-to-end feature.
The value is used by the connector to convert data submitted from the Tivoli Job
Scheduling Console. You can provide the value IBM-xxx, where xxx is the
EBCDIC code page number. The default value, IBM-037, defines the EBCDIC
code page for US English, Portuguese, and Canadian French.
For a complete list of available code pages, refer to Tivoli Workload Scheduler
for z/OS V8R1 Customization and Tuning , SH19-4544.

PORTNUMBER(port/31111)
Defines the TCP/IP port number used to communicate with the distributed
agents. The default value is 31111. The accepted values are from 0 to 65535.
Note that the port number must be unique within a Tivoli Workload Scheduler
network. If you change the value, you also need to restart the Tivoli Workload
Scheduler for z/OS server and refresh the Symphony file.

TPLGYMEM(member name, TPLGINFO)


Specifies the PARMLIB member where the domain and CPU definitions specific
for the end-to-end are. The default value is TPLGINFO.

TRCDAYS(days/14)
Specifies the number of days the trace files on HFS are kept before being
deleted. The default value is 14. Specify 0 if you do not want the trace files to be
deleted.


Recommendation: Monitor the size of the trace files in your working
directory to prevent the HFS cluster from becoming full. The trace files are the
files in the stdlist directory. They contain internal logging information that may
be useful for troubleshooting. You should consider deleting them at regular
intervals.

USRMEM(member name/USRINFO)
Specifies the PARMLIB member where the user definitions are. This keyword is
optional; it applies only for Windows FTAs. The default value is USRINFO.

WRKDIR(directory name)
Specifies the location of the HFS files of a subsystem. Each Tivoli Workload
Scheduler for z/OS subsystem using the end-to-end feature must have its own
WRKDIR.
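
As a sketch, a TOPOLOGY member using values referred to elsewhere in this chapter
(installation directory, work directory, the logical hostname TWSCTP, and port 31281)
might look like this; adjust the values to your installation:

   TOPOLOGY BINDIR('/usr/lpp/TWS/TWS810')   /* HFS installation directory           */
            WRKDIR('/tws/twsctpwrk')        /* subsystem-specific work directory    */
            HOSTNAME(TWSCTP)                /* logical hostname (hosts alias/DVIPA) */
            PORTNUMBER(31281)               /* port used by the end-to-end server   */
            TPLGYMEM(TPLGINFO)              /* member with DOMREC/CPUREC statements */
            USRMEM(USRINFO)                 /* member with USRREC statements        */
            TRCDAYS(14)                     /* keep trace files for 14 days         */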

3.2.7 Initialization parameter used to describe the topology


With the last three parameters, DOMREC, CPUREC, and USRREC, listed in
Table 3-6 on page 110, you define the topology of the distributed environment in
Tivoli Workload Scheduler for z/OS. The defined topology is used for TWS for
z/OS server and plan jobs.
In Figure 3-7 on page 116 you can see how the distributed TWS network
topology is described using CPUREC and DOMREC initialization statements for
the TWS for z/OS server and plan programs. The TWS for z/OS fault tolerant
workstations are mapped to physical TWS agents or workstations using the
CPUREC statement. The DOMREC statement is used to describe the domain
topology in the distributed TWS network.
Note that the domain manager of DomainZ is directly connected to the Tivoli
Workload Scheduler for z/OS server, and that the Tivoli Workload Scheduler for
z/OS engine is the master domain manager (in the MASTERDM domain) for DomainZ.
Note that the USRREC parameters are not depicted in Figure 3-7 on page 116.


The TWS for z/OS controller defines the fault tolerant workstations FDMZ, FDMA,
and FDMB; the TWS for z/OS server (and plan programs) map them to the
distributed network with the following statements:

CPUREC CPUNAME(FDMZ) CPUDOMAIN(DOMAINZ)      (domain manager of DomainZ, AIX)
CPUREC CPUNAME(FDMA) CPUDOMAIN(DOMAINA)      (domain manager of DomainA, AIX)
CPUREC CPUNAME(FDMB) CPUDOMAIN(DOMAINB)      (domain manager of DomainB, HPUX)

DOMREC DOMAIN(DOMAINZ) DOMMNGR(FDMZ) DOMPARENT(MASTERDM)
DOMREC DOMAIN(DOMAINA) DOMMNGR(FDMA) DOMPARENT(DOMAINZ)
DOMREC DOMAIN(DOMAINB) DOMMNGR(FDMB) DOMPARENT(DOMAINZ)

In the distributed network, DomainZ is managed by domain manager DMZ; DomainA
is managed by DMA with FTA1 (AIX) and FTA2 (OS/400); DomainB is managed by DMB
with FTA3 (Windows 2000) and FTA4 (Solaris).

Figure 3-7 The topology definitions for server and plan programs

Let us take a detailed look at the DOMREC, CPUREC, and USRREC parameters.

DOMREC
This statement starts a domain definition. You must specify one DOMREC for
each domain in the Tivoli Workload Scheduler network, with the exception of the
master domain. The domain name used for the master domain is MASTERDM.
The master domain is made up of the z/OS engine, which acts as master domain
manager. The CPU name used for the master domain manager (that is the Tivoli
Workload Scheduler for z/OS engine) in the Tivoli Workload Scheduler network is
OPCMASTER. You must specify one domain, child of MASTERDM, where the
domain manager is a real fault tolerant workstation. If you do not define this
domain, Tivoli Workload Scheduler for z/OS will try to find a domain definition
that can function as a child of master domain. DOMREC is defined in the
member of the EQQPARM library, as specified by the TPLGYMEM keyword in
the TOPOLOGY statement. Figure 3-8 describes the DOMREC syntax.


Figure 3-8 DOMREC syntax

DOMAIN(domain name)
The name of the domain, consisting of up to 16 characters starting with a letter. It
can contain dashes and underscores.

DOMMNGR(domain manager name)


The Tivoli Workload Scheduler workstation name of the domain manager. It must
be a fault tolerant agent running in full status mode.

DOMPARENT(parent domain)
The name of the parent domain. You can specify MASTERDM for one domain
only.

CPUREC
This statement begins a Tivoli Workload Scheduler workstation (CPU) definition.
You must specify one CPUREC for each workstation in the Tivoli Workload
Scheduler network, with the exception of the z/OS engine that acts as master
domain manager. The user must provide a definition for each workstation of
Tivoli Workload Scheduler for z/OS that is defined into the database as a Tivoli
Workload Scheduler fault tolerant agent. CPUREC is defined in the member of
the EQQPARM library, as specified by the TPLGYMEM keyword in the
TOPOLOGY statement. Figure 3-9 on page 118 explains the CPUREC syntax.


Figure 3-9 CPUREC syntax

CPUNAME(cpu name)
The name of the Tivoli Workload Scheduler workstation consisting of up to four
alphanumerical characters, starting with a letter.


CPUOS(operating system)
The host CPU operating system related to the Tivoli Workload Scheduler
workstation. The valid entries are AIX, HPUX, POSIX, UNIX, WNT, and OTHER.

CPUNODE(node name)
The node name or the IP address of the CPU. Fully-qualified domain names up
to 52 characters are accepted.

CPUTCPIP(port number, 31111)


The TCP port number of netman on this CPU. It comprises five numbers and, if
omitted, uses the default value, 31111.

CPUDOMAIN(domain name)
The name of the Tivoli Workload Scheduler domain of the CPU.

CPUHOST(cpu name)
The name of the host CPU of the agent. It is required for extended agents. The
host is the Tivoli Workload Scheduler CPU with which the extended agent
communicates and where its access method resides.
Note: The host cannot be another extended agent.

CPUACCESS(access method)
The name of the access method. It is valid for extended agents and it must be
the name of a file that resides in the <twshome>/methods directory on the host
CPU of the agent.

CPUTYPE(SAGENT, XAGENT, FTA)


The CPU type specified as one of the following:
FTA: Fault tolerant agent, including domain managers and backup domain
managers.
SAGENT: Standard agent
XAGENT: Extended agent

The default type is FTA.


Note: If the extended-agent workstation is manually set to Link, Unlink, Active,


or Offline, the command is sent to its host CPU.

CPUAUTOLNK(OFF/ON)
Autolink is most effective during the initial start-up sequence of each plan. Then a
new Symphony file is created and all the workstations are stopped and restarted.
For a fault tolerant agent or standard agent, specify On so that, when the domain
manager starts, it sends the new production control file (Symphony) to start the
agent and open communication with it. For the domain manager, specify On so
that when the agents start they open communication with the domain manager.
Specify Off to initialize an agent when you submit a link command manually from
the Modify Current Plan dialogs.
Note: If the x-agent workstation is manually set to Link, Unlink, Active, or
Offline, the command is sent to its host CPU.

CPUFULLSTAT(ON/OFF)
This applies only to fault tolerant agents. If you specify Off for a domain manager,
the value is forced to On. Specify On for the link from the domain manager to
operate in Full Status mode. In this mode, the agent is kept updated about the
status of jobs and job streams running on other workstations in the network.
Specify Off for the agent to receive status information about only the jobs and
schedules on other workstations that affect its own jobs and schedules. This can
improve the performance by reducing network traffic. To keep the production
control file for an agent at the same level of detail as its domain manager, set
CPUFULLSTAT and CPURESDEP (see below) to On. Always set these modes to
On for backup domain managers.

CPURESDEP(ON/OFF)
This applies only to fault tolerant agents. If you specify Off for a domain manager,
the value is forced to On. Specify On to have the agents production control
process operate in Resolve All Dependencies mode. In this mode, the agent
tracks dependencies for all its jobs and schedules, including those running on
other CPUs.


Note: CPUFULLSTAT must also be On so that the agent is informed about the
activity on other workstations.

Specify Off if you want the agent to track dependencies only for its own jobs and
schedules. This reduces CPU usage by limiting processing overhead. To keep
the production control file for an agent at the same level of detail as its domain
manager, set CPUFULLSTAT (see above) and CPURESDEP to On. Always set
these modes to On for backup domain managers.

CPUSERVER(server ID)
This applies only to fault tolerant and standard agents. Omit this option for
domain managers. This keyword can be a letter or a number (A-Z or 0-9) and
identifies a server (mailman) process on the domain manager that sends
messages to the agent. The IDs are unique to each domain manager, so you can
use the same IDs for agents in different domains without conflict. If more than 36
server IDs are required in a domain, consider dividing it into two or more
domains. If a server ID is not specified, messages to a fault tolerant or standard
agent are handled by a single mailman process on the domain manager.
Entering a server ID causes the domain manager to create an additional
mailman process. The same server ID can be used for multiple agents. The use
of servers reduces the time required to initialize agents, and generally improves
the timeliness of messages. As a guide, additional servers should be defined to
prevent a single mailman from handling more than eight agents.

CPULIMIT(value, 99)
Specifies the number of jobs that can run at the same time in a CPU. The default
value is 99 and the accepted values are from 0 to 99.

CPUTZ(timezone/UTC)
Specifies the local time zone of the fault tolerant workstation. It must match the
timezone in which the agent will run. For a complete list of valid time zones,
please refer to Appendix A of the Tivoli Workload Scheduler 8.1 Reference
Guide, SH19-4556.
If the timezone does not match that of the agent, the message AWS22010128E is
displayed in the batchman stdlist (log file) of the distributed agent. The default is
UTC (universal coordinated time).

USRREC
This statement defines the passwords for the Tivoli Workload Scheduler users
installed on Windows. USRREC is defined in the member of the EQQPARM
library as specified by the USERMEM keyword in the TOPOLOGY statement.
The USRREC syntax is seen in Figure 3-10 on page 122.


Figure 3-10 USRREC Syntax

USRCPU(cpuname)
The Tivoli Workload Scheduler workstation on which the user can launch jobs. It
consists of four alphanumerical characters, starting with a letter.

USRNAM(logon ID)
The user name. It can include a domain name and can consist of up to 47 characters.
Note that Windows NT user names are case-sensitive. The user must be able to
log on to the computer on which Tivoli Workload Scheduler has launched jobs,
and must also be authorized to log on as batch. If the user name is not unique in
Windows NT, it is considered to be either a local user, a domain user, or a trusted
domain user, in that order.

USRPWD(password)
The user password. It can consist of up to 31 characters and must be enclosed in
single quotation marks. Do not specify this keyword if the user does not need a
password. You can change the password every time you create a Symphony file
(for example when creating a CP extension).
Attention: The password is not encrypted. You must take care that the parmlib
dataset is RACF-protected to avoid misuse.

3.2.8 Reflecting a distributed environment in TWS for z/OS


We show you how to define your planned distributed environment with the new
initialization statements. As an example, we assume that your topology looks like
Figure 3-11 on page 123.


The example topology consists of the z/OS sysplex (MASTERDM) with the active
engine and end-to-end server plus two standby engines, and the following
distributed network: DomainZ with domain manager DMZ on eastham.ibm.com (AIX);
DomainA with domain manager DMA on kopenhagen.ibm.com (AIX), FTA1 on
finn.ibm.com (AIX), and FTA2 on lowry.ibm.com (OS/400); and DomainB with domain
manager DMB on mainz.ibm.com (HPUX), FTA3 on michaela.ibm.com (Windows 2000),
and FTA4 on ami.ibm.com (Solaris).

Figure 3-11 End-to-end environment example

First we need to describe the domain topology with the DOMREC statement.
Example 3-2 Domain definitions
DOMREC DOMAIN(DOMAINZ)          /* Domain name for 1st domain   */
       DOMMNGR(FDMZ)            /* Eastham - Domain Manager     */
       DOMPARENT(MASTERDM)      /* TWS for z/OS Master Domain   */
DOMREC DOMAIN(DOMAINA)          /* Domain name for A-domain     */
       DOMMNGR(FDMA)            /* Kopenhagen - Domain Manager  */
       DOMPARENT(DOMAINZ)       /* DMZ is Domain Manager        */
DOMREC DOMAIN(DOMAINB)          /* Domain name for B-domain     */
       DOMMNGR(FDMB)            /* Mainz - Domain Manager       */
       DOMPARENT(DOMAINZ)       /* DMZ is Domain Manager        */

The master domain (MASTERDM) is always the z/OS engine; therefore you
must define it in the DOMPARENT parameter. The DOMMNGR keyword
represents the name of the domain manager workstation.
Now we must define a CPUREC statement for every workstation in the network.


Example 3-3 Workstation (CPUREC) definitions


CPUREC CPUNAME(FDMZ)                /* Domain manager for DMZ       */
       CPUOS(AIX)                   /* AIX operating system         */
       CPUNODE(EASTHAM.IBM.COM)     /* IP address of system         */
       CPUTCPIP(31111)              /* TCP port number of NETMAN    */
       CPUDOMAIN(DOMAINZ)           /* The TWS domain name for CPU  */
       CPUTYPE(FTA)                 /* This is a FTA CPU type       */
       CPUAUTOLNK(ON)               /* Autolink is on               */
       CPUFULLSTAT(ON)              /* Full status on for DM        */
       CPURESDEP(ON)                /* Resolve dep. on for DM       */
       CPULIMIT(20)                 /* Number of jobs in parallel   */
       CPUTZ(CST)                   /* Time zone for this CPU       */
/*-------------------------------------------------------------------*/
CPUREC CPUNAME(FDMA)
       CPUOS(AIX)
       CPUNODE(KOPENHAGEN.IBM.COM)
       CPUTCPIP(31111)
       CPUDOMAIN(DOMAINA)
       CPUTYPE(FTA)
       CPUAUTOLNK(ON)
       CPUFULLSTAT(ON)
       CPURESDEP(ON)
       CPULIMIT(20)
       CPUTZ(CST)
/*-------------------------------------------------------------------*/
CPUREC CPUNAME(FDMB)
       CPUOS(HPUX)                  /* HP UX FTA                    */
       CPUNODE(MAINZ.IBM.COM)
       CPUTCPIP(31111)
       CPUDOMAIN(DOMAINB)
       CPUTYPE(FTA)
       CPUAUTOLNK(ON)
       CPUFULLSTAT(ON)
       CPURESDEP(ON)
       CPULIMIT(20)
       CPUTZ(CST)
/*-------------------------------------------------------------------*/
CPUREC CPUNAME(FTA1)
       CPUOS(AIX)
       CPUNODE(FINN.IBM.COM)
       CPUTCPIP(31111)
       CPUDOMAIN(DOMAINA)
       CPUTYPE(FTA)
       CPUAUTOLNK(ON)
       CPUFULLSTAT(OFF)             /* No full status for this FTA  */
       CPURESDEP(OFF)               /* Don't resolve dependencies   */
       CPULIMIT(20)
       CPUTZ(CST)
/*-------------------------------------------------------------------*/
CPUREC CPUNAME(FTA2)
       CPUOS(OTHER)                 /* This is an OS/400 FTA        */
       CPUNODE(LOWRY.IBM.COM)
       CPUTCPIP(31111)
       CPUDOMAIN(DOMAINA)
       CPUTYPE(FTA)
       CPUAUTOLNK(ON)
       CPUFULLSTAT(OFF)
       CPURESDEP(OFF)
       CPULIMIT(20)
       CPUTZ(CST)
/*-------------------------------------------------------------------*/
CPUREC CPUNAME(FTA3)
       CPUOS(WNT)                   /* WIN2000 FTA                  */
       CPUNODE(MICHAELA.IBM.COM)
       CPUTCPIP(31111)
       CPUDOMAIN(DOMAINB)
       CPUTYPE(FTA)
       CPUAUTOLNK(ON)
       CPUFULLSTAT(OFF)
       CPURESDEP(OFF)
       CPULIMIT(20)
       CPUTZ(CST)
/*-------------------------------------------------------------------*/
CPUREC CPUNAME(FTA4)
       CPUOS(UNIX)                  /* Solaris FTA                  */
       CPUNODE(AMI.IBM.COM)
       CPUTCPIP(31111)
       CPUDOMAIN(DOMAINB)
       CPUTYPE(FTA)
       CPUAUTOLNK(ON)
       CPUFULLSTAT(OFF)
       CPURESDEP(OFF)
       CPULIMIT(20)
       CPUTZ(CST)

FTA1 and FTA4 do not need to have CPUFULLSTAT and CPURESDEP set to On,
because dependency resolution within the domain is the task of the domain
manager. This improves performance by reducing network traffic.
Note: CPUOS(WNT) applies for all Windows platforms.

Because FTA3 runs on a Windows 2000 machine, you must provide the user ID
and password with a USRREC statement in the member specified by the USRMEM keyword.


Example 3-4 Usrrec


USRREC USRCPU(FTA3)             /* The CPU workstation for user */
       USRNAM(michael)          /* The user name                */
       USRPSW('texas26')        /* Password for user name       */

After these customization steps, you can start the z/OS engine. Check the
engine and server logs for any syntax errors or messages. All z/OS messages
are prefixed with EQQ. See also the Tivoli Workload Scheduler for z/OS V8R1
Messages and Codes, SH19-4548.
If the z/OS engine uses the end-to-end feature to schedule on distributed
environments using distributed agents, check that the messages shown in
Example 3-5 appear in the z/OS engine EQQMLOG.
Example 3-5 Server messages
EQQZ005I OPC SUBTASK end-to-end ENABLER IS BEING STARTED
EQQZ085I OPC SUBTASK end-to-end SENDER IS BEING STARTED
EQQZ085I OPC SUBTASK end-to-end RECEIVER IS BEING STARTED
EQQG001I SUBTASK end-to-end ENABLER HAS STARTED
EQQG001I SUBTASK end-to-end RECEIVER HAS STARTED
EQQG001I SUBTASK end-to-end SENDER HAS STARTED

When the end-to-end server is started, check that the messages shown in
Example 3-6 appear in the z/OS engine EQQMLOG.
Example 3-6 Server messages continued
EQQPH33I THE END-TO-END PROCESSES HAVE BEEN STARTED
EQQPT01I Program "/usr/lpp/TWS/TWS810/bin/translator" has been started, pid is 100663307
EQQPT01I Program "/usr/lpp/TWS/TWS810/bin/netman" has been started, pid is 83886093

If a Symphony file has been created and is active, the messages shown in
Example 3-7 will appear.
Example 3-7 Symphony creation messages
EQQPT20I Input Translator waiting for Batchman is started
EQQPT21I Input Translator finished waiting for Batchman
Otherwise, if the Symphony file is not present or a new one must be
produced, this message will follow:
EQQPT22I Input Translator thread stopped until new Symphony will be available


The end-to-end event datasets need to be formatted the first time the z/OS
engine is started with the end-to-end feature in use or after the end-to-end event
datasets (EQQTWSIN and EQQTWSOU) have been reallocated. The messages
shown in Example 3-8 will appear in the z/OS engine EQQMLOG before the
messages stating that the sender and receiver tasks have started.
Example 3-8 Formatting messages
EQQW030I A DISK dataset WILL BE FORMATTED, DDNAME = EQQTWSIN
EQQW030I A DISK dataset WILL BE FORMATTED, DDNAME = EQQTWSOU
EQQW038I A DISK dataset HAS BEEN FORMATTED, DDNAME = EQQTWSIN
EQQW038I A DISK dataset HAS BEEN FORMATTED, DDNAME = EQQTWSOU

Also, the following messages may appear in the server EQQMLOG:


EQQPT56W The /DD:EQQTWSIN queue has not been formatted yet
EQQPT56W The /DD:EQQTWSOU queue has not been formatted yet

Defining fault tolerant workstations


The workstations you have defined via the CPUREC keyword must also be
defined in the TWS for z/OS workstation database before they can be activated
in the TWS for z/OS plan. The workstations are defined the same way as any
other workstation in TWS for z/OS, except they need a special flag: fault
tolerant. This flag indicates to TWS for z/OS that these workstations should be
treated as fault tolerant workstations (FTWs).
Of course, you need to run at least a replan of the current plan in TWS for z/OS to
activate the workstation definitions in the current plan. We will show you how to
define an FTW via the Job Scheduling Console.
1. Put the cursor over the TWS instance icon and right click. Select the icon
New Workstation (see Figure 3-12 on page 128).


Figure 3-12 Defining FTW

2. Enter the name of the workstation and a suitable description. Mark the Fault
Tolerant check box (Figure 3-13) and save the workstation definition in the
TWS for z/OS database.

Figure 3-13 Setting the fault tolerant flag

3. To activate the workstation definition in the TWS for z/OS current plan, you
should extend or replan the plan. When doing this, TWS for z/OS will create a
Symphony file and distribute it to the domain manager. If the Symphony file is
successfully created and distributed, all your defined FTWs should be linked
and active.
To submit work to the FTWs in order to check their status, you need to complete
the job definitions, which is described next.

Job definitions
A new dataset called Scriptlib has been introduced in this version of TWS for
z/OS. The scriptlib dataset holds all definitions related to fault tolerant
workstations. See also the end-to-end script library (EQQSCLIB) in
Section 3.2.3, Allocate end-to-end datasets on page 107. A new statement
JOBREC within the scriptlib defines the fault tolerant workstation job properties.
You must specify JOBREC for each member of the SCRIPTLIB. For each job this
statement specifies the script or the command to execute and the user that must
run the script or command.
The syntax of the JOBREC command is as in Figure 3-14.

Figure 3-14 JOBREC statement

JOBREC is defined in the members of the EQQSCLIB library, as specified by the
EQQSCLIB DD statement of the z/OS engine and the daily planning JCL.

Parameters
The following are descriptions of the parameters.
JOBSCR (script name)

Specifies the name of the shell script or executable file to be run for the job. If
the script includes more than one word, it must be enclosed within quotation
marks.
JOBCMD (command name)

Specifies the name of the shell command to run the job. If the command
includes more than one word, it must be enclosed within quotation marks.
JOBUSR (user name)

Specifies the name of the user submitting the specified script or command.
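
To make this concrete, a minimal SCRIPTLIB member might look like the sketch
below. The member name, script path, and user are hypothetical examples; only
the JOBREC, JOBSCR, and JOBUSR keywords come from the syntax just described.

   JOBREC JOBSCR('/opt/scripts/daily_report.sh')
          JOBUSR(tws)

The member name ties the definition to the corresponding job, as illustrated
by Figure 3-15 below.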


Figure 3-15 Job definition example

Figure 3-15 shows the relationship between the job definition and the member
name in the script library. In this way you can define your jobs for all the FTAs.

3.2.9 Verify the installation


Now you have completed all the setup steps necessary to create the Symphony file
from the z/OS engine; in addition, you can submit and monitor the workload.
There are several ways to create the file:
Run the current plan extend job.
Run the replan job.
Submit a Symphony Renew (option 3.5).
Tip: You cannot run any of these jobs directly through the JSC. Instead, you can
use the JSC to define a job stream containing the plan job, and that job stream
can then be submitted from the JSC.


No matter which option you choose, after the job finishes all of your FTW
workstations should be linked and active. If not, the following might be the cause:
The Symphony file has not been created successfully. See the server log for
the appropriate messages.
The DOMREC or CPUREC definitions are wrong.
TCP/IP-related failures.

Please refer to Chapter 6, Troubleshooting in a TWS end-to-end
environment on page 333 for additional information.
Submit and monitor your test jobs to check if they finished successfully. Check
also the stdlist directory in the Tivoli Workload Scheduler home directory.
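
As a quick local check, you can also query the workstation status from the
domain manager or an FTA with conman, assuming you are logged in there as the
TWS user of that agent:

   conman "sc @!@"

The showcpus (sc) command with the @!@ argument lists all workstations in all
domains together with their run and link state, so the fault tolerant
workstations should appear as linked once the Symphony file has been distributed.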

3.3 Installing the TWS for z/OS TCP/IP server for JSC
usage
The TCP/IP server can be used for JSC connections and by the end-to-end
server. We recommend running a separate server instance dedicated to each
purpose. The TCP/IP server is needed by the connector to access the engine
subsystem. The access uses the Tivoli Workload Scheduler for z/OS program
interface (PIF).
You can find an example of the started task procedure in the installation member
EQQSER, generated by the EQQJOBS installation aid. For prior releases, you
must install the enhancement APARs PQ21320 or PQ21321. After they are
installed, you get the same Job Scheduling Console functionality as with the
Tivoli Workload Scheduler for z/OS 8.1 release.
First, you have to modify the JCL of EQQSER in the following way:
Make sure that the C runtime library is concatenated in the server JCL
(CEE.SCEERUN).
If you have more than one TCP/IP stack or the name of the procedure that
was used to start the TCPIP address space is different from TCPIP, introduce
the SYSTCPD DD card pointing to a dataset containing the TCPIPJOBNAME
parameter (see DD SYSTCPD in the TCP/IP manuals).
Customize the server parameters file (see the EQQPARM DD statement in
the server JCL). The installation member EQQSERP already contains a
template. For example, you can provide the parameters shown in Example 3-9
in the server parameters file.
Example 3-9 TCP/IP server parameter

SERVOPTS SUBSYS(TWSA)          /* TWS engine the server connects */
         USERMAP(MAP1)         /* Eqqparm member for usermap     */
         PROTOCOL(TCPIP)       /* Communication protocol         */
         PORTNUMBER(3111)      /* Port the server connects       */
         CODEPAGE(IBM-037)     /* Name of the host codepage      */
INIT     CALENDAR(DEFAULT)     /* Name of the TWS calendar       */

The SUBSYS, PROTOCOL(TCPIP), and CALENDAR parameters are mandatory for using
the Tivoli Job Scheduling Console. Make sure that the port you intend to use is
not reserved by another application. Reserving the port in the PORT section of
the related TCP/IP profile also documents the assignment for other users, as
sketched below.
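
For example, a reservation in the TCP/IP profile might look like the following
sketch; the started task name TWSCSRV is an assumed name for your server
procedure, and 3111 is the port used in Example 3-9:

   PORT
       3111 TCP TWSCSRV    ; TWS for z/OS TCP/IP server for JSC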
For information about these parameters, refer to Tivoli Workload Scheduler for
z/OS V8R1 Customization and Tuning , SH19-4544.

3.3.1 Controlling access to OPC using Tivoli Job Scheduling Console


The Tivoli Framework performs a security check when a user tries to use the Job
Scheduling Console, checking the user ID and password. The Tivoli Framework
associates each user ID and password to an administrator. Tivoli Workload
Scheduler for z/OS resources are currently protected by RACF. The Job
Scheduling Console user should only have to enter a single user ID/password
combination and not provide two levels of security checking (at the Tivoli
Framework level and then again at the Tivoli OPC level). The security model is
based on having the Tivoli Framework security handle the initial user verification
while at the same time obtaining a valid corresponding RACF user ID. This
makes it possible for the user to work with the security environment in OS/390.
OS/390 security is based on a table mapping the Tivoli administrator to an RACF
user ID. When a Tivoli Framework user tries to initiate an action on OS/390, the
Tivoli administrator ID is used as a key to obtain the corresponding RACF user
ID. The server uses the RACF user ID to build the RACF environment to access
Tivoli Workload Scheduler for z/OS services; so, the Tivoli Administrator must
relate, or map, to a corresponding RACF user ID.
There are two ways of getting the RACF user ID:
The first way is by using the RACF Tivoli-supplied predefined resource class,
TMEADMIN.

Consult the Implementing Security in Tivoli Workload Scheduler for z/OS


section in the Tivoli Workload Scheduler for z/OS V8R1 Customization and
Tuning, SH19-4544, for the complete setup of the TMEADMIN RACF class.
The other way is to use a new OPC Server Initialization Parameter to define a
member in the file identified by the EQQPARM DD statement in the server
startup job.


This member contains all the associations for a TME user with a RACF user
ID. You should set the parameter USERMAP in the SERVOPTS Server
Initialization Parameter to define the member name.

USERMAP(USERS)
The member, USERS, of the Initialization Parameter dataset could contain the
entries shown in Example 3-10, with the same logic as the TMEADMIN class.
Example 3-10 Usermap member entries
USER 'DORIS@ITSO-REGION' RACFUSER(DORIS6) RACFGROUP(TIVOLI)
USER 'VERA@ITSO-REGION' RACFUSER(VERA21) RACFGROUP(TIVOLI)

For example, the TMF user VERA@ITSO-REGION maps to the RACF user ID
VERA21 in the group TIVOLI. The authorization level for using the plan and the
database of Tivoli Workload Scheduler for z/OS is defined in that user's RACF
definitions. A TMF user is also connected to a certain authorization level within the
Tivoli Framework to perform actions within the management region. In our case
this is used to restrict the usage of the connector.
Tips:
Every RACF user who has update authority to this usermap table may get
access to the z/OS engine OPC subsystem. To maintain a high level of
security, the usermap table should be protected.
To manage the various levels of access, we recommend assigning one
TMF UID to one RACF UID.

3.4 Planning end-to-end scheduling for TWS


In this section we discuss how to plan end-to-end scheduling for Tivoli
Workload Scheduler. We show you how to configure your environment in a
way that fits your requirements. In addition, we point out special
considerations that apply to the end-to-end solution with Tivoli Workload
Scheduler for z/OS.


3.4.1 Network planning and considerations


Before you install Tivoli Workload Scheduler, you need to be familiar with the
various possible configurations. Each configuration has specific benefits and
disadvantages. Here are some guidelines that help you find the right choice:
How large is your Tivoli Workload Scheduler network? How many computers
does it hold? How many applications and jobs does it run?

The size of your network will help you decide whether to use a single domain
or the multiple domain architecture. If you have a small number of computers,
or a small number of applications to control with Tivoli Workload Scheduler,
there may not be a need for multiple domains.
How many geographic locations will be covered in your Tivoli Workload
Scheduler network? How reliable and efficient is the communication between
locations?

This is one of the primary reasons for choosing a multiple domain


architecture. One domain for each geographical location is a common
configuration. If you choose single domain architecture, you will be more
reliant on the network to maintain continuous processing.
Do you need centralized or decentralized management of Tivoli Workload
Scheduler?

A Tivoli Workload Scheduler network, with either a single domain or multiple


domains, gives you the ability to manage Tivoli Workload Scheduler from a
single node, the master domain manager. If you want to manage multiple
locations separately, you can consider the installation of a separate Tivoli
Workload Scheduler network at each location. Note that some degree of
decentralized management is possible in a stand-alone Tivoli Workload
Scheduler network by mounting or sharing file systems.
Do you have multiple physical or logical entities at a single site? Are there
different buildings, and several floors in each building? Are there different
departments or business functions? Are there different applications?

These may be reasons for choosing a multi-domain configuration. For


example, a domain for each building, department, business function, or each
application (manufacturing, financial, engineering).
Do you run applications, like SAP R/3, that will operate with Tivoli Workload
Scheduler?

If they are discrete and separate from other applications, you may choose to
put them in a separate Tivoli Workload Scheduler domain.
Would you like your Tivoli Workload Scheduler domains to mirror your
Windows NT domains? This is not required, but may be useful.


Do you want to isolate or differentiate a set of systems based on performance


or other criteria? This may provide another reason to define multiple Tivoli
Workload Scheduler domains to localize systems based on performance or
platform type.
How much network traffic do you have now? If your network traffic is
manageable, the need for multiple domains is less important.
Do your job dependencies cross system boundaries, geographical
boundaries, or application boundaries? For example, does the start of Job1
on workstation3 depend on the completion of Job2 running on workstation4?

The degree of interdependence between jobs is an important consideration


when laying out your Tivoli Workload Scheduler network. If you use multiple
domains, you should try to keep interdependent objects in the same domain.
This will decrease network traffic and take better advantage of the domain
architecture.
What level of fault-tolerance do you require? An obvious disadvantage of the
single domain configuration is the reliance on a single domain manager. In a
multi-domain network, the loss of a single domain manager affects only the
agents in its domain.

3.4.2 Backup domain manager


Each domain has a domain manager and, optionally, one or more backup
domain managers. A backup domain manager (Figure 3-16 on page 136) must
be in the same domain as the domain manager it is backing up. The backup
domain managers must be fault tolerant agents running the same product
version of the domain manager they are supposed to replace, and must have the
Resolve Dependencies and Full Status options enabled in their workstation
definitions. If a domain manager fails during the production day, you can use
either the Job Scheduling Console, or the switchmgr command in the console
manager command line (conman), to switch to a backup domain manager. A
switch manager action can be executed by anyone with start and stop access to
the domain manager and backup domain manager workstations. A switch
manager operation stops the backup manager, then restarts it as the new
domain manager, and converts the old domain manager to a fault tolerant agent.
The identities of the current domain managers are carried forward in the
Symphony file from one processing day to the next, so any switch remains in
effect until you switch back to the original domain manager.
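
For example, a switch for the MASTERDM domain shown in Figure 3-16 could be
issued from conman as sketched below; the workstation name BDM1 for the backup
domain manager is hypothetical, and the same action is also available from the
Job Scheduling Console:

   conman "switchmgr MASTERDM;BDM1"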


(The figure shows the MASTERDM domain with a master domain manager and a
backup domain manager, both on AIX, above two subordinate domains, DomainA and
DomainB, each with its own AIX domain manager (DMA and DMB) and fault tolerant
agents FTA1 through FTA4 on HP-UX, OS/400, Windows 2000, and Solaris.)
Figure 3-16 Backup domain manager within a network

3.4.3 Performance considerations


Tivoli Workload Scheduler 8.1 has some performance improvements over the
previous version. The new performance enhancements will be particularly
appreciated in Tivoli Workload Scheduler networks with many workstations,
massive scheduling plans, and complex relations between scheduling objects.
The improvements are in the following areas:
Daily plan creation: Jnextday runs faster, and consequently the master
domain manager can start its production tasks sooner.
Daily plan distribution: The Tivoli Workload Scheduler administrator can now
enable the compression of the Symphony file so that the daily plan can be
distributed to other nodes earlier.
I/O optimization: Tivoli Workload Scheduler performs fewer accesses to the
files and optimizes the use of system resources. The improvements reflect:

Event files: The response time to the events is improved so the message
flow is faster.
Daily plan: The access to the Symphony file is quicker for both read and
write. The daily plan can therefore be updated in a shorter time than it was
previously.


If you suffer from poor performance and have already isolated the bottleneck at
the Tivoli Workload Scheduler side, you may want a closer look at the new
localopts parameter.
Table 3-7 Performance-related localopts parameters

Syntax                          Default value
mm cache mailbox=yes/no         no
mm cache size=bytes             32
sync level=low/medium/high      high
wr enable compression=yes/no    no

mm cache mailbox
Use this option to enable mailman to use a reading cache for incoming
messages. In that case not all messages are cached, but only those not
considered essential for network consistency. The default is No.

mm cache size
Specify this option if you also use mm cache mailbox. The default is 32 bytes.
Use the default for small and medium networks. Use larger values for large
networks. Avoid using a large value on small networks. The maximum value is
512 (higher values are ignored).

sync level
Specify the speed of write accesses on disk. This option affects all mailbox
agents and is applicable to UNIX workstations only. Values can be:
Low: Lets the operating system handle it.
Medium: Makes updates after a transaction has completed.
High: Makes updates every time data is entered.

The default is high.

wr enable compression
Use this option on fault tolerant agents. Specify if the FTA can receive the
Symphony file in compressed form from the master domain manager. The
default is no.
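
As an illustration, the performance-related section of a workstation's localopts
file might be tuned as follows. The option names and defaults come from
Table 3-7; the non-default values shown here are only examples and should be
chosen according to the size of your network:

   # performance-related options in TWShome/localopts
   mm cache mailbox      = yes
   mm cache size         = 512
   sync level            = medium
   wr enable compression = yes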


3.4.4 FTA naming conventions


Each FTA represents a physical machine within a Tivoli Workload Scheduler
network. Depending on how large you plan your distributed environment to be,
and how much it may grow in the future, it makes sense to think about a naming
convention strategy. A good convention helps you easily identify a workstation's
usage, such as where it is located, the business unit it belongs to, and so on.
This becomes even more important because the length of the name is limited.
Note: The name of any workstation in Tivoli Workload Scheduler for z/OS, and
therefore of the entire end-to-end solution, is limited to four characters. The
name must be alphanumeric, and the first character must be alphabetic or national.

Figure 3-17 shows a typical end-to-end topology. It consists of a domain


manager, a backup domain manager, and several FTAs.

(The figure shows the z/OS sysplex, with the active engine and server plus two
standby engines, as MASTERDM at the top. DomainZ, at tier 1, contains the AIX
domain manager and an AIX backup domain manager. DomainA and DomainB, at
tier 2, have their own domain managers on AIX and HP-UX, and the fault tolerant
agents at tier 3 run on AIX, OS/400, Windows 2000, and Solaris.)
Figure 3-17 End-to-end topology


To take advantage of the four-character names, a convention could look like this:
First character
Use F, in general, to identify the workstation as an FTA.
Second character
Use it to indicate the hierarchical level (tier) on which the workstation
resides. The domain manager of the top domain can be assigned level 1; the
FTAs in this example belong to tier 3.
Third and fourth characters
Reserve the last two characters for numbering the workstations, which allows a
high number of uniquely named machines.
Of course this is only an example and cannot cover all your specific
requirements; it simply demonstrates that naming needs careful consideration.
Using this convention, the names of the workstations are
shown in Figure 3-18 on page 140.


(The figure shows the same topology as Figure 3-17 with the naming convention
applied: F100 is the DomainZ domain manager and F101 its backup at tier 1;
F201 and F202 are the DomainA and DomainB domain managers at tier 2; and
F301, F302, F303, and F304 are the fault tolerant agents at tier 3.)
Figure 3-18 Naming the workstations

For better differentiation you can also use the workstation description field that
allows you to use up to 32 characters. See also Example 3-11 on page 141.
Tip: The hostname, in conjunction with the workstation name, provides you
with an easy way to illustrate your configured environment.


Example 3-11 Workstation description field

Work station                          T  R   Last update
name description                             user      date
F100 EASTHAM    PDM in domain Z       C  A   TWSRES1   02/02/07
F101 CHATHAM    BDM in domain Z       C  A   TWSRES1   02/02/07
F201 TOKYO      DM  in domain A       C  A   TWSRES1   02/02/07
F202 DELHI      DM  in domain B       C  A   TWSRES1   02/02/07
F301 ISTANBUL   FTA in domain A       C  A   TWSRES1   02/02/07
F302 MAINZ      FTA in domain A       C  A   TWSRES1   02/02/07
F303 COPENHAGEN FTA in domain B       C  A   TWSRES1   02/02/07
F304 STOCKHOLM  FTA in domain B       C  A   TWSRES1   02/02/07

3.5 Installing TWS in an end-to-end environment


In this section, we describe how to install Tivoli Workload Scheduler in an
end-to-end environment.
Important: Maintenance releases of TWS are made available approximately
every three months. Check ftp://ftp.tivoli.com/support/patches/ for
updates before installing. We recommend that you install the latest available
release of the software.

The latest available releases at the time of this writing, and the locations
where they can be found, are listed below.
Tivoli Workload Scheduler: 8.1-TWS-0001:
ftp://ftp.tivoli.com/support/patches/patches_8.1/8.1-TWS-0001

Job Scheduling Console and related components: 1.2-JSC-0001:


ftp://ftp.tivoli.com/support/patches/patches_1.2/1.2-JSC-0001

Installing a TWS agent in an end-to-end environment is not very different from


installing TWS where TWS for z/OS is not involved. Follow the installation
instructions in the Tivoli Workload Scheduler 8.1 Planning and Installation Guide,
SH19-4555. The main differences to keep in mind are that in an end-to-end
environment, the master domain manager is always the TWS for z/OS engine
(known by the TWS workstation name OPCMASTER), and the local workstation
name of the fault tolerant workstation is limited to four characters.


3.5.1 Installing multiple instances of TWS on one machine


As mentioned in Chapter 2, End-to-end TWS architecture and components on
page 15, there are often good reasons to install multiple instances of the Tivoli
Workload Scheduler engine on the same machine. If you plan to do this, there
are some important considerations to be made. Careful planning before
installation can save you a considerable amount of work later.
The following items must be unique for each instance of the TWS engine
installed on a computer:
The TWS user name and ID associated with the instance
The home directory of the TWS user and the parent directory of this directory
The component group (in /usr/unison/components)
The netman port number (set by the nm port option in the localopts file)

First of all, the user name and ID must be unique. There are many different ways
of naming these users. Choose user names that make sense to you. It may
simplify things to create a group called TWS and make all TWS users members
of this group. This would allow you to add group access to files to grant access to
all TWS users. When installing TWS on UNIX, the TWS user is specified by the
-uname option of the UNIX customize script. It is very important to specify the
TWS user. If the TWS user is not specified, the customize script will choose the
default user name maestro. Obviously, if you plan to install multiple TWS engines
on the same computer, they cannot both be installed as the user maestro.
Second, the home directory and the parent directory of the home directory must
be unique. When TWS is first installed, most of the files are installed into the
home directory of the TWS user. However, another directory called unison is
created in the parent directory of the TWS user's home directory. The unison
directory is a relic of the days when Unison Software's Maestro program (the
direct ancestor of TWS) was one of several programs that all shared some
common data. The unison directory was where the common data
were stored. Important information is still stored in this directory, including the
workstation database (cpudata), the NT user database (userdata), and the
security file. In order to keep two different TWS engines completely separate, it is
necessary to add an extra level to the directory hierarchy to ensure that each
TWS engine has its own unison directory.
Figure 3-19 on page 143 should give you an idea of how two TWS engines might
be installed on the same computer. You can see that each TWS engine has its
own separate TWS directory and unison directory.


(The figure shows a directory tree in which each engine has its own tws and
unison directories:

/tivoli/TWS
    TWS-A  (TWS Engine A)
        unison
            network (cpudata, userdata)
            Security
        tws
            bin
            mozart (mastsked, jobs)
    TWS-B  (TWS Engine B)
        unison
            network (cpudata, userdata)
            Security
        tws
            bin
            mozart (mastsked, jobs)
)
Figure 3-19 Example of two separate TWS engines on one computer

Example 3-12 shows the /etc/passwd entries corresponding to the two TWS
users.
Example 3-12 Excerpt from /etc/passwd showing two different TWS users
tws-a:!:31111:9207:TWS Engine A User:/tivoli/TWS/TWS-A/tws:/usr/bin/ksh
tws-b:!:31112:9207:TWS Engine B User:/tivoli/TWS/TWS-B/tws:/usr/bin/ksh

Note that each TWS user has a unique name, ID, and home directory.
Next, the component group for each Tivoli Workload Scheduler engine must be
unique. The component group is a name used by TWS programs to keep each
engine separate. The name of the component group is up to you. It can be
specified using the -group option of the UNIX customize script. It is important to
specify a different component group name for each instance of the TWS engine
installed on a computer. Component groups are stored in the file
/usr/unison/components. This file contains two lines for each component group.
Example 3-13 on page 144 shows the components file corresponding to the two
TWS engines.


Example 3-13 Sample /usr/unison/components file

netman   1.8.1.1   /tivoli/TWS/TWS-A/tws   TWS-Engine-A
maestro  8.1       /tivoli/TWS/TWS-A/tws   TWS-Engine-A
netman   1.8.1.1   /tivoli/TWS/TWS-B/tws   TWS-Engine-B
maestro  8.1       /tivoli/TWS/TWS-B/tws   TWS-Engine-B

The component groups are called TWS-Engine-A and TWS-Engine-B. For each
component group, the version and path of netman and of maestro (the TWS
engine) are listed. In this context, maestro refers simply to the TWS home
directory.
Finally, because netman listens for incoming TCP link requests from other TWS
agents, it is important that netman have a unique listening port for each TWS
engine on a computer. This port is specified by the nm port option in the TWS
localopts file. It is necessary to shut down netman and start it again for changes
made to the netman options to take effect.
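
For the two engines used in this chapter, the setting might look like this (one
localopts file per engine; the port numbers match Table 3-8 below):

   # in /tivoli/TWS/TWS-A/tws/localopts
   nm port = 31111

   # in /tivoli/TWS/TWS-B/tws/localopts
   nm port = 31112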
In our test environment, we chose the user ID of each TWS engine to be the same
number as its netman port. This makes the numbers easier to remember and
simplifies troubleshooting. Table 3-8 shows the names and numbers we used in our
testing.
Table 3-8 If possible, choose user IDs and port numbers that are the same

Component group    User name    User ID    Netman port
TWS-Engine-A       tws-a        31111      31111
TWS-Engine-B       tws-b        31112      31112
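
Putting the pieces together, the installation of the two engines might be invoked
roughly as shown below. Treat this strictly as a sketch: the -uname and -group
options are the ones discussed above, but the media path is assumed and the
customize script takes additional options (for example, the location of the
installation tar file), so follow the Tivoli Workload Scheduler 8.1 Planning and
Installation Guide for the exact invocation.

   # run as root, once per engine, from the directory holding the
   # installation files (path assumed)
   cd /cdrom/tws81
   ./customize -uname tws-a -group TWS-Engine-A
   ./customize -uname tws-b -group TWS-Engine-B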

3.6 Installing and configuring Tivoli Framework


As we have already discussed, the new Job Scheduling Console interface
requires that a few other components be installed in order for it to communicate
with the scheduling engines. If you are still not sure how all of the pieces fit
together, now might be a good time to take a closer look at Section 2.4, Job
Scheduling Console and related components on page 81.
In the first few parts of this section, we outline the steps required to get a TMR
server installed and up and running, and to get Job Scheduling Services and the
connectors installed. These instructions are primarily intended for environments
that do not already have a TMR server, or where a separate one will be installed
for Tivoli Workload Scheduler.


In the last few parts of this section, we discuss in more detail the steps specific to
end-to-end scheduling: Creating connector instances and TMF administrators.
The Tivoli Management Framework is quite easy to install. If you already have
the Framework installed in your organization, it is not necessary to install the
TWS-specific components (the JSS and connectors) on a node in your existing
Tivoli Managed Region. You may prefer to install a stand-alone TMR server
solely for the purpose of providing the connection between the Tivoli Workload
Scheduler suite and its interface, the Job Scheduling Console. If your existing
TMR is busy with other operations, such as monitoring or software distribution,
you might want to consider installing a separate stand-alone TMR server for
TWS. If you decide to install JSS and the connectors on an existing TMR server
or managed node, you can skip past the first few parts of this section.

3.6.1 Installing Tivoli Management Framework 3.7B


The first step is to install Tivoli Management Framework Version 3.7B. For
instructions, refer to Chapter 3 of the Tivoli Framework 3.7.1 Installation Guide,
GC32-0395.

3.6.2 Upgrade to Tivoli Management Framework 3.7.1


Version 3.7.1 of Tivoli Management Framework is required by Job Scheduling
Services 8.1, so if you do not already have Version 3.7.1 of the Framework
installed, you will need to upgrade to Version 3.7.1.

3.7 Install Job Scheduling Services


Follow the instructions in Chapter 2 of the Tivoli Job Scheduling Console Users
Guide, SH19-4552. As we discussed in the architecture chapter, Job Scheduling
Services is simply a library used by the Framework, and it is a prerequisite of the
connectors.
The following are the hardware and software prerequisites for the Job
Scheduling Services.
Software

Tivoli Management Framework: Version 3.7.1 for Microsoft Windows, AIX,


HP-UX, and Sun Solaris. Version 3.7B for Linux.
Hardware

CD-ROM drive for installation.

Approximately 4 MB of free disk space.


Tivoli Job Scheduling Services is supported on the following platforms:


Microsoft Windows

Windows NT 4.0 with Service Pack 5 or Service Pack 6a

Windows 2000 Professional, Server, and Advanced Server
with Service Pack 1 or Service Pack 2
IBM AIX 4.3.3, 4.3.3s, 5.1

HP-UX PA-RISC 11.0, 11i

Sun Solaris 2.7, 2.8

Linux Red Hat 7.1

3.7.1 Installing the connectors (for TWS and TWS for z/OS connector)
Follow the installation instructions in Chapters 3 and 4 of the Tivoli Job
Scheduling Console Users Guide, SH19-4552.
When installing the TWS connector, we recommend that you do not click the
Create Instance check box. Create the instances after the connector has been
installed.
The following are the hardware and software prerequisites for the Tivoli
Workload Scheduler for z/OS connector:
Software

Tivoli Management Framework: Version 3.7.1 for Microsoft Windows, AIX,

HP-UX, and Sun Solaris. Version 3.7B for Linux.
Tivoli Workload Scheduler for z/OS 8.1, or Tivoli OPC 2.1 or later.
Tivoli Job Scheduling Services 1.2.
TCP/IP network communications.
A Workload Scheduler for z/OS user account is required for proper
installation. You can create the account beforehand, or have the setup
program create it for you.
Hardware

CD-ROM drive for installation.

Approximately 3 MB of free disk space for the installation. In addition, the


Workload Scheduler for z/OS connector produces log files and temporary
files, which are placed on the local hard drive.
For more information please look into the Tivoli Workload Scheduler Job
Scheduling Console Release Notes.


3.7.2 Creating connector instances


As we discussed in Chapter 2, End-to-end TWS architecture and components
on page 15, the connectors are the bits that tell the Framework how to
communicate with the different types of scheduling engine.
To control the workload of the entire end-to-end scheduling network from the
TWS for z/OS controller, it is necessary to create a TWS for z/OS connector
instance to connect to that controller.
It may also be a good idea to create a TWS connector instance on a fault tolerant
agent or domain manager. Sometimes the status may get out of sync between
an FTA or DM and the TWS for z/OS controller. When this happens, it is helpful
to be able to connect directly to that agent and get the status directly from there.
Retrieving job logs (standard lists) is also much faster through a direct
connection to the FTA than through the TWS for z/OS controller.

Creating a TWS for z/OS connector instance


You have to create at least one Tivoli Workload Scheduler for z/OS connector
instance for each z/OS controller that you want to access with the Tivoli Job
Scheduling Console. This is done using the wopcconn command.

Example of creating a TWS for z/OS connector instance


In our test environment, we wanted to be able to connect to a Tivoli Workload
Scheduler for z/OS controller running on a mainframe with the hostname twscjsc.
On the mainframe, the TWS for z/OS TCP/IP server listens on TCP port 5000.
Yarmouth is the name of the TMR-managed node where we created the
connector instance. We called the new connector instance twsc. Here is the
command we used:
wopcconn -create -h yarmouth -e TWSC -a twscjsc -p 5000

The result of this will be that when we use JSC to connect to Yarmouth, a new
connector instance called TWSC appears in the Job Scheduling list on the left
side of the window. We can access the TWS for z/OS scheduling engine by
clicking that new entry in the list.
It is also possible to run wopcconn in interactive mode. To do this, just run
wopcconn with no arguments.
Refer to Appendix B, Connector reference on page 447 for a detailed
description of the wopcconn command.

Creating a TWS connector instance


Remember that a TWS connector instance must have local access to the TWS
engine with which it is associated. This is done using the wtwsconn.sh command.


Example of creating a TWS connector instance


In our test environment, we wanted to be able to use JSC to connect to a TWS
engine on the host yarmouth. Yarmouth has two TWS engines installed, so we
had to be sure to specify that the path of the TWS engine we specify when
creating the connector is the path to the right TWS engine. We called the new
connector instance TWS-A, to reflect that this connector instance would be
associated with the TWS-A engine on this host (as opposed to the other TWS
engine, TWS-B). Here is the command we used:
wtwsconn.sh -create -h yarmouth -n Yarmouth-A -t /tivoli/TWS/TWS-A/tws

The result of this will be that when we use JSC to connect to yarmouth, a new
connector instance called TWS-A appears in the Job Scheduling list on the left
side of the window. We can access the TWS-A scheduling engine by clicking that
new entry in the list.
Refer to Appendix B, Connector reference on page 447 for a detailed
description of the wtwsconn.sh command.

3.7.3 Creating TMF Administrators for Tivoli Workload Scheduler


When a user logs into the Job Scheduling Console, the Tivoli Management
Framework checks to make sure that the user's login is listed in an existing TMF
administrator.

TMF Administrators for TWS


A Tivoli Management Framework administrator must be created for the TWS
user. Additional TMF administrators can be created for other users who will
access TWS using JSC.

TMF Administrators for TWS for z/OS


The Tivoli Workload Scheduler for z/OS TCP/IP server associates the Tivoli
administrator to a RACF user. If you want to be able to identify each user
uniquely, one Tivoli Administrator should be defined for each RACF user. If
operating system users corresponding to the RACF users do not already exist on
the TMR server or on a managed node in the TMR, you must first create one OS
user for each Tivoli administrator that will be defined. These users can be
created on the TMR server or on any managed node in the TMR. Once you have
created those users, you can simply add those users' logins to the TMF
administrators you create.


Important: When creating users or setting their passwords, disable any option
that requires the user to set his password the first time he logs in. If the
operating system requires a user to change his password the first time he logs
in, he will have to do this before he will be able to login via the Job Scheduling
Console.

Creating TMF Administrators


We recommend that a Tivoli Administrator be created for every user who wants
to use the JSC.
Perform the following steps from the Tivoli desktop to create a new TMF
administrator:
1. Double click the Administrators icon and select Create -> Administrator as
shown in Figure 3-20.

Figure 3-20 Create Administrator

2. Enter the Tivoli Administrator name you want to create.


3. Select Set Logins to specify the login name. This field is important because it
will be used to determine the UID with which many operations are performed.


It also represents a UID at the operating system level. See Figure 3-21 on
page 150.

Figure 3-21 Create Administrator

4. Type in the login name and press Enter. Then select Set & Close
(Figure 3-22).


Figure 3-22 Login names

5. Enter the name of the group. This field is used to determine the GID under
which many operations are performed. Select Set & Close.
The TMR roles you assign to the administrator will depend on the actions the
user will need to perform.
Table 3-9 Authorization roles required for connector actions

An Administrator with this role...    Can perform these actions
User                                  Use the instance
                                      View instance settings
Admin, senior, or super               Use the instance
                                      View instance settings
                                      Create and remove instances
                                      Change instance settings
                                      Start and stop instances


6. Click the Set TMR Roles icon and add the role or roles desired (Figure 3-23).

Figure 3-23 Set TMR roles

7. Now select Set & Close to finish your input. This brings you back to the
Administrators desktop (Figure 3-24 on page 153).


Figure 3-24 Tivoli Administrator

3.8 Planning for Job Scheduling Console availability


Since the legacy GUIs gconman and gcomposer are no longer included with
TWS, the Job Scheduling Console fills the role of those programs as the primary
interface to Tivoli Workload Scheduler. Staff who work only with the JSC and are
not familiar with the command line interface (CLI) depend on continuous JSC
availability. This requirement must be taken into consideration when planning for
a backup domain manager. We therefore recommend that there be a Tivoli
Workload Scheduler connector instance on the backup domain manager. This
guarantees JSC access without interruption.


(The figure shows the end-to-end network - the z/OS master domain manager
MASTERDM, the AIX domain manager DMZ with a backup domain manager in DomainZ,
and DomainA and DomainB with domain managers DMA and DMB and fault tolerant
agents FTA1 through FTA4 on AIX, HP-UX, OS/400, Windows 2000, and Solaris -
together with the Job Scheduling Console connections into it.)
Figure 3-25 JSC connections

To run the latest version of the Job Scheduling Console (Version 1.2), the
software requirements have been changed so that you need Tivoli Framework
Version 3.7.1 for AIX, HP-UX, and Microsoft Windows. If you are using Linux,
Version 3.7B is required.

Compatibility and migration considerations


The Job Scheduling Console Feature Level 1.2 can work with Tivoli Workload
Scheduler Version 7.0 and Tivoli Workload Scheduler Connector Version 7.0,
and with Operations Planning and Control 2.3 and Operations Planning and
Control Connector Version 1.1 (the Connector 1.1 refresh level was distributed
in June 2000 with Job Scheduling Console Feature Level 1.1). However, there
are some limitations:
Only Tivoli Workload Scheduler 7.0 and Tivoli Workload Scheduler for z/OS
Version 2 functions are supported from the Job Scheduling Console.
New features of the Job Scheduling Console Feature Level 1.2 do not work
with versions of schedulers and connectors prior to Version 8.1.

Table 3-10 on page 155 shows the only supported combinations of Workload
Scheduler Connectors and engines.


Table 3-10 Tivoli Workload Scheduler connector and engine combinations

Console feature level   Connector version   Engine version   Supported
1.2                     7.0                 7.0              Yes
1.2                     8.1                 8.1              Yes
1.2                     7.0                 8.1              No
1.2                     8.1                 7.0              No
1.1                     7.0                 7.0              Yes
1.1                     8.1                 8.1              Yes
1.1                     8.1                 7.0              No
1.1                     7.0                 8.1              No

Notes:
The engine can be a fault tolerant agent or master.
Previous versions of the connector and engine must be the same.
There are limitations when using the Job Scheduling Console Feature
Level 1.1 if the database objects have been modified using a Job
Scheduling Console 1.2 with the new functions.

Satisfy the following requirements before installing:

Software
The following is required software.
Tivoli Management Framework: Version 3.7.1 for Microsoft Windows, AIX,
HP-UX, and Sun Solaris. Version 3.7B for Linux.
Tivoli Workload Scheduler 8.1.
Tivoli Job Scheduling Services 1.2.
TCP/IP network communications.
A Workload Scheduler user account is required for proper installation. You
can create the account beforehand, or have the setup program create it for
you.


Hardware
The following is required hardware.
CD-ROM drive for installation.
Approximately 100 MB of free disk space for domain managers and fault
tolerant agents. Approximately 40 MB for standard agents. In addition, the
Workload Scheduler produces log files and temporary files, which are placed
on the local hard drive. The amount of space required depends on the
number of jobs managed by Workload Scheduler and the amount of time you
choose to retain log files.
128 MB RAM and 128 MB swap space for domain managers and fault
tolerant agents. Standard agents requires less.

3.9 Installing the Job Scheduling Console


Tivoli Workload Scheduler for z/OS is shipped with the latest version of the Job
Scheduling Console, Version 1.2. We recommend that you use this version
because it contains the best functionality and stability.
The JSC can be installed on the following platforms:
Windows

NT 4.0 (Service Pack 5 or 6a)

2000 Professional, Server, and Advanced Server (Service Pack 1 or 2)
98
Millennium Edition

IBM AIX

4.3.3
4.3.3s
5.1
Sun Solaris

2.7
2.8
HP-UX PA-RISC

11.0
11i
Linux

Red Hat 7.1


3.9.1 Hardware and software prerequisites


The following are the hardware and software prerequisites for the Job
Scheduling Console.

For use with Tivoli Workload Scheduler for z/OS


Software:
Tivoli Workload Scheduler for z/OS connector Version 1.2.
Tivoli Workload Scheduler for z/OS Version 8.1 or OPC 2.1 or later.
Tivoli Job Scheduling Services 1.2.
TCP/IP network communication.
Java Runtime Environment Version 1.3.

Hardware:
CD-ROM drive for installation.
70 MB disk space for full installation, or 34 MB for customized (English base)
installation plus approximately 4 MB for each additional language.

For use with Tivoli Workload Scheduler


Software:
Tivoli Workload Scheduler connector Version 7.0 or 8.1.
Tivoli Workload Scheduler Version 7.0 or 8.1.
Tivoli Job Scheduling Services 1.2.
TCP/IP network communication.
Java Runtime Environment Version 1.3.
Note: The versions of the scheduler and the connector that you use must be
the same.

Hardware:
CD-ROM drive for installation.
70 MB disk space for full installation, or 34 MB for customized (English base)
installation plus approximately 4 MB for each additional language.

For Workload Scheduler for z/OS, you should migrate the Job Scheduling
Services and the connector to the latest level. The Workload Scheduler for z/OS
connector can support any Operations Planning and Control V2 release level as
well as Workload Scheduler for z/OS Version 8.1.


3.9.2 Installing the JSC


The following steps describe how to install the JSC.
1. Insert the Tivoli Job Scheduling Console CD-ROM into the system CD-ROM
drive or mount the CD-ROM from a drive on a remote system. For this
example, the CD-ROM drive is drive F.
2. Perform the following steps to run the installation command:
On Windows:

From the Start menu, select the Run option to display the Run
dialog.
In the Open field, enter F:\Install.

On AIX:

Type the following command:

jre -nojit -cp install.zip install

If that does not work, try:

jre -nojit -classpath [path to]classes.zip:install.zip install

If that does not work either, on sh-like shells, try:

cd [directory where install.zip is located]
CLASSPATH=[path to]classes.zip:install.zip
export CLASSPATH
java -nojit install

Or, for csh-like shells, try:

cd [directory where install.zip is located]
setenv CLASSPATH [path to]classes.zip:install.zip
java -nojit install

On Sun Solaris:

Change to the directory where you downloaded install.zip before
running the installer.
Enter sh install.bin.

The splash window, shown in Figure 3-26 on page 159 is displayed.


Figure 3-26 Installation splash screen

To complete the installation perform the following to start the JSC, depending on
your platform.
On Windows

Depending on the shortcut location that you specified during installation, click
the JS Console icon or select the corresponding item in the Start menu.
On Windows 95 and Windows 98

You can also start the JSC from the command line. Just type runcon from the
\bin\java subdirectory of the installation path.
On AIX

Type ./AIXconsole.sh.
On Sun Solaris

Type ./SUNconsole.sh.
A Tivoli Job Scheduling Console start-up window is displayed, as shown in
Figure 3-27 on page 160.


Figure 3-27 JSC login window

Enter the following information and click the OK button to proceed:


1. User name: This is the user name of the person who has permission to use
the Tivoli Workload Scheduler for z/OS connector instances.
2. Password: This is the password for the Tivoli Framework administrator.
3. Host Machine: This is the name of the Tivoli-managed node that runs the
Tivoli Workload Scheduler for z/OS connector.

3.10 The Tivoli Workload Scheduler security model


In this section, we offer an overview of how security is implemented in Tivoli
Workload Scheduler. For more details, see the Tivoli Workload Scheduler 8.1
Planning and Installation Guide, SH19-4555.


When a user attempts to display a list of defined jobs, submit a new job stream,
add a new resource, or any other operation related to the Tivoli Workload
Scheduler plan or databases, TWS performs a check to verify that the user is
authorized to perform that action.
(The figure depicts the groups as nested boxes:
TWS and root users: have full access to all areas.
Operations group: can manage the whole workload, but cannot create job streams;
has no root access.
Applications manager: can document jobs and schedules for the entire group and
manage some production.
Application user: can document own jobs and schedules.
General user: has display access only.)

Figure 3-28 Example security setup

TWS users have different roles within the organization. The TWS security model
you implement should reflect these roles. You can think of the different groups of
users as nested boxes, as in the figure above. The largest box represents the
highest access, granted to only the TWS user and the root user. The smaller
boxes represent more restricted roles, with correspondingly restricted access.
Each group represented by a box in the figure would have a corresponding
stanza in the security file. TWS programs and commands read the security file to
determine whether the user has the access required to perform an action.

3.10.1 The security file


Each workstation in a TWS network has its own security file. The files can be
maintained independently on each workstation, or you can keep a single
centralized security file on the master and copy it periodically to the other
workstations in the network.
At installation time, a default security file is created that allows unrestricted
access to only the TWS user (and, on UNIX workstations, the root user). If the
security file is accidentally deleted, the root user can generate a new one.


If you have one security file for a network of agents, you may wish to make a
distinction between the root user on a fault tolerant agent and the root user on
the master domain manager. This is possible. For example, you can restrict local
users to performing operations affecting only the local workstation, while
permitting the master root user to perform operations that affect any workstation
in the network.
A template file named TWShome/config/Security is provided with the software.
During installation, a copy of the template is installed as TWShome/Security and
a compiled copy is installed as TWShome/../unison/Security.

Security file stanzas


The security file is divided into one or more stanzas. Each stanza limits the
access at three different levels:
User attributes appear between the USER and BEGIN statements and
determine whether a stanza applies to the user attempting to perform an
action.
Object attributes are listed, one object per line, between the BEGIN and END
statements. Object attributes determine whether an object line in the stanza
matches the object the user is attempting to access.
Access rights appear to the right of each object listed, after the ACCESS
statement. Access rights are the specific actions the user is allowed to take
on the object.

The steps of a security check


The steps of a security check reflect the three levels listed above:
1. First identify the user who is attempting to perform an action.
2. Next determine the type of object being accessed.
3. Then determine if the requested access is granted to that object.

Step 1: Identify the user


When a user attempts to perform any TWS action, the security file is searched
from the top to the bottom in order to find a stanza whose user attributes match
the user attempting to perform the action. If no match is found in the first stanza,
the user attributes of the next stanza are searched. Once a stanza is found
whose user attributes match that user, that stanza is selected for the next part of
the security check. If no stanza in the security file has user attributes that match
the user, access is denied.


Step 2: Determine the type of object being accessed


Once the user has been identified, the stanza that applies to that user is
searched, top-down, for an object attribute that matches the type of object the
user is trying to access. Only that particular stanza (between the BEGIN and
END statements) is searched for a matching object attribute. If no matching
object attribute is found, access is denied.

Step 3: Determine if the requested access is granted to that object


If an object attribute corresponding to the object the user is attempting to access
is located, the final step is that the access rights following the ACCESS
statement on that line in the file are searched for the action that the user is
attempting to perform. If this access right is found, then access is granted. If the
access right is not found on this line, then the rest of the stanza is searched for
other object attributes (other lines) of the same type and step 3 is repeated for
each of these.
Figure 3-29 illustrates the steps of the security check algorithm.

(The figure shows a network with master Sol and fault tolerant agents Venus and
Mars. A user with logon ID johns, working on Venus, issues the command:
   conman release mars#weekly.cleanup
The security file on Sol contains:
   USER JohnSmith CPU=@+LOGON=johns
   BEGIN
   JOB CPU=@ ACCESS=DISPLAY,RELEASE,ADD,...
   END
The check follows the three steps: 1) find the user, 2) find the object,
3) find the access right.)

Figure 3-29 Example of a TWS security check


3.10.2 Sample security file


Here are some things to note about the security file stanza, as seen in
Example 3-14:
mastersm is an arbitrarily chosen name for this group of users.
The example security stanza above would match a user that logs on to the
master (or to the Framework via JSC), where the user name (or TMF
Administrator name) is maestro, root, or Root_london-region.
These users have full access to jobs, jobs streams, resources, prompts, files,
calendars, and workstations.
The users have full access to all parameters except those whose names
begin with r.

For NT user definitions (userobj), the users have full access to objects on all
workstations in the network.
Example 3-14 Sample security file
###########################################################
# Sample Security File
###########################################################
# (1) APPLIES TO MAESTRO OR ROOT USERS LOGGED IN ON THE
#     MASTER DOMAIN MANAGER OR FRAMEWORK.
user mastersm cpu=$master,$framework +logon=maestro,root,Root_london-region
begin
# OBJECT    ATTRIBUTES               ACCESS CAPABILITIES
#-------------------------------------------------------
job                                  access=@
schedule                             access=@
resource                             access=@
prompt                               access=@
file                                 access=@
calendar                             access=@
cpu                                  access=@
parameter   name=@ ~ name=r@         access=@
userobj     cpu=@ + logon=@          access=@
end

Creating the security file


To create user definitions, edit the template file TWShome/Security. Do not
modify the original template in TWShome/config/Security. Then use the makesec
command to compile and install a new operational security file. After it is installed
you can make further modifications by creating an editable copy of the
operational file with the dumpsec command.


The dumpsec command


The dumpsec command takes the security file, generates a text version of it, and
sends that to stdout. The user must have display access to the security file.
Synopsis
dumpsec -v | -u
dumpsec security-file

Description

If no arguments are specified, the operational security file (../unison/Security)


is dumped. To create an editable copy of a security file, redirect the output of
the command to another file, as shown in Example of dumpsec and
makesec on page 166.
Arguments

-v displays command version information only.


-u displays command usage information only.
Security-file specifies the name of the security file to dump.

Figure 3-30 The dumpsec command

The makesec command


The makesec command essentially does the opposite of what the dumpsec
command does. The makesec command takes a text security file, checks its
syntax, compiles it into a binary security file, and installs the new binary file as
the active security file. Changes to the security file take effect when Tivoli
Workload Scheduler is stopped and restarted. Programs affected are:
Conman
Composer
Tivoli Workload Scheduler connectors


Simply exit the programs. The next time they are run, the new security definitions
will be recognized. Tivoli Workload Scheduler connectors must be stopped using
the wmaeutil command before changes to the security file will take effect for
users of JSC. The connectors will automatically restart as needed.
The user must have modify access to the security file.
Note: On Windows NT, the connector processes must be stopped (using the
wmaeutil command) before the makesec command will work correctly.
Synopsis
makesec -v | -u
makesec [-verify] in-file

Description

The makesec command compiles the specified file and installs it as the
operational security file (../unison/Security). If the -verify argument is
specified, the file is checked for correct syntax, but it is not compiled and
installed.
Arguments

-v displays command version information only.


-u displays command usage information only.
-verify checks the syntax of the user definitions in the in-file only. The file is
not installed as the security file. (Syntax checking is performed
automatically when the security file is installed.)
in-file specifies the name of a file or set of files containing user definitions.
A file name expansion pattern is permitted.

Example of dumpsec and makesec


The following example creates an editable copy of the operational security file in
a file named tempsec, modifies the user definitions with a text editor, then
compiles tempsec and replaces the operational security file.
dumpsec >tempsec
vi tempsec
(Here you would make any required modifications to the tempsec file)
makesec tempsec
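
Before replacing the operational file, you can use the -verify argument described
above to catch syntax errors in the edited copy without installing anything. For
example, continuing with the same tempsec file:

makesec -verify tempsec
(correct any reported syntax errors, then install)
makesec tempsec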


Note: Add the Tivoli Administrator to the TWS security file after you have
installed the Tivoli Management Framework and Tivoli Workload Scheduler
connector.

Configuring TWS security for the Tivoli Administrator


In order to use the Job Scheduling Console on a master or on an FTA, the Tivoli
Administrator user(s) must be defined in the security file of that master or FTA.
The $framework variable can be used as a user attribute in place of a specific
workstation. This indicates a user logging in via the Job Scheduling Console.
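
As an illustration only, a user definition for a hypothetical Tivoli Administrator
logon named tmeadmin could follow the same pattern as Example 3-14 (the group
name jscadmins and the logon name are placeholders, and the access rights shown
are deliberately wide open, like the sample file):

#APPLIES TO A TIVOLI ADMINISTRATOR LOGGED IN VIA THE JSC
user jscadmins cpu=$framework +logon=tmeadmin
begin
job                             access=@
schedule                        access=@
resource                        access=@
prompt                          access=@
file                            access=@
calendar                        access=@
cpu                             access=@
parameter                       access=@
userobj     cpu=@ +logon=@      access=@
end

In a production security file you would normally restrict these access
capabilities to what your administrators actually need.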

3.11 Maintenance
The Tivoli maintenance strategy for Tivoli Workload Scheduler introduces a new
way to maintain the product more effectively and easily. On a quarterly basis,
Tivoli provides updates with recent patches, offered as a fix pack that is
similar to a maintenance release. This fix pack can either be downloaded from the
support site ftp.tivoli.com/support/patches or shipped on a CD. Ask your local
Tivoli support for more details.


Chapter 4. End-to-end implementation scenarios and examples
In this chapter we describe several different implementation scenarios and
examples for Tivoli Workload Scheduler for z/OS end-to-end scheduling.
First we describe the rationale behind the conversion to TWS for z/OS
end-to-end scheduling.
Next we describe and show four different implementation, migration, conversion,
and fail-over scenarios:
1. Implementing end-to-end scheduling with TWS distributed fault tolerant
agents in a TWS for z/OS environment with no distributed job scheduling.
You can use this scenario as an example of implementing end-to-end
scheduling if you have not used OPC tracker agents or Tivoli Workload
Scheduler before.
2. Migrating from OPC tracker agents to TWS for z/OS end-to-end fault tolerant
workstations.
3. Conversion from a TWS-managed network to a TWS for z/OS managed
network.
4. Tivoli Workload Scheduler for z/OS end-to-end fail-over scenarios:
   Switch to TWS for z/OS backup engine.
   Switch to TWS backup domain manager.
   HACMP configuration guidelines for TWS fault tolerant workstations.
   Configuration guidelines for TWS fault tolerant workstations in other High
   Availability (HA) environments.
Finally we describe some important backup and maintenance guidelines that you
should consider when implementing your end-to-end scheduling environment.


4.1 The rationale behind the conversion to end-to-end scheduling

As described in Section 2.3.7, "Tivoli Workload Scheduler for z/OS end-to-end
benefits" on page 80, you can gain several benefits by using Tivoli Workload
Scheduler for z/OS end-to-end scheduling. To review:
You have the ability to use fault tolerant agents. This makes distributed job
scheduling far less dependent on network connection problems and poor network
performance.
You can schedule workload on additional operating systems such as Linux and
Windows 2000.
You get seamless synchronization of work in mainframe and distributed
environments.
Making dependencies between mainframe jobs and jobs in distributed
environments is straightforward, using the same terminology and known
interfaces.
Tivoli Workload Scheduler for z/OS can use a multi-tier architecture with
Tivoli Workload Scheduler domain managers.
You get extended planning capabilities, such as the use of long-term plans,
trial plans, and extended plans, also for the distributed Tivoli Workload
Scheduler network. Extended plans means that the current plan can span more
than 24 hours.
The powerful run-cycle and calendar functions in Tivoli Workload Scheduler for
z/OS can be used for distributed Tivoli Workload Scheduler jobs.

Besides these benefits, using Tivoli Workload Scheduler for z/OS end-to-end
scheduling also makes it possible to:
Reuse or reinforce the procedures and processes that are established for the
Tivoli Workload Scheduler for z/OS mainframe environment. The operators,
planners, and administrators trained and experienced in managing Tivoli
Workload Scheduler for z/OS workload can reuse their skills and knowledge for
the distributed jobs managed by Tivoli Workload Scheduler for z/OS end-to-end
scheduling.
Extend the disciplines established to manage and operate workload scheduling
in mainframe environments to the distributed environment.
Extend the contingency procedures established for the mainframe environment to
the distributed environment.


Basically, when we look at end-to-end scheduling in this redbook, we consider
scheduling in the enterprise (mainframe and distributed) where the Tivoli
Workload Scheduler for z/OS engine is the master.
Another possibility, when looking at end-to-end in the enterprise (mainframe and
distributed), is having the Tivoli Workload Scheduler engine as the master. In
this case Tivoli Workload Scheduler manages the end-to-end scheduling in your
enterprise. Scheduling in the mainframe environment from a Tivoli Workload
Scheduler master can be managed using the Tivoli Workload Scheduler MVS
and OS/390 extended agent to:
Launch and monitor JES jobs and check status for JES jobs not launched by
Tivoli Workload Scheduler.
Launch and monitor CA-7 jobs and check status for CA-7 jobs not launched
by Tivoli Workload Scheduler.
Launch and monitor Tivoli Workload Scheduler for z/OS job streams (jobs)
and check status for Tivoli Workload Scheduler for z/OS jobs (operations) not
launched by Tivoli Workload Scheduler.

Installation and usage of the Tivoli Workload Scheduler MVS and OS/390
extended agent are described in the redbook End-to-End Scheduling with OPC
and TWS Mainframe and Distributed Environment, SG24-6013.

4.2 Description of our environment and systems


In this section we describe the environment we used for the end-to-end
scenarios.
The environment used for the end-to-end scenarios contains the following
systems:
A mainframe environment with three z/OS Version 1, Release 3.0 (1.3)
systems in one sysplex. The three z/OS systems have the SMF IDs:
   SC63 (z/OS 1.3)
   SC64 (z/OS 1.3)
   SC65 (z/OS 1.3)
When we reference one of these systems in the scenarios, these SMF IDs
will be used to identify a particular z/OS system in the sysplex.
A distributed environment with three AIX Version 4, Release 3.3 (4.3.3)
systems, four Windows 2000 systems, and one Windows NT system. The
systems have the following hostnames:
   eastham (AIX system)
   chatham (AIX system)
   yarmouth (AIX system)
   paris (Windows 2000 system)
   tokyo (Windows 2000 system)
   delhi (Windows 2000 system)
   istanbul (Windows 2000 system)
   central (Windows NT system)
We will use these hostnames in the scenarios when we reference any of these
systems.
All the systems are connected using TCP/IP connections. Figure 4-1 on
page 174 shows our configuration.


Figure 4-1 Our configuration for TWS for z/OS end-to-end scheduling

4.3 Implementing TWS for z/OS end-to-end scheduling from scratch

In this section we describe how we implemented end-to-end scheduling with
TWS fault tolerant agents in a TWS for z/OS environment with no distributed job
scheduling.


This implementation is straightforward and consists of the following major steps:
1. Design the topology (for example, domain hierarchy and number of domains) for
the distributed TWS network where TWS for z/OS is going to do the workload
scheduling.
Use the guidelines in Section 3.4.1, Network planning and considerations
on page 134 when designing the topology.
2. Install the TWS for z/OS engine, agents, and end-to-end server tasks in the
host environment.
Refer to the Tivoli Workload Scheduler for z/OS V8R1 Installation Guide,
SH19-4543, and Section 3.2, Installing Tivoli Workload Scheduler for z/OS
on page 100.
Note: If you run on a previous release of OPC, you should also migrate
from this release to Tivoli Workload Scheduler for z/OS 8.1 as part of the
installation. Migration steps are described in the Tivoli Workload Scheduler
for z/OS V8R1 Installation Guide, SH19-4543. Migration is performed with
a standard program supplied with TWS for z/OS.

3. Customize the TWS for z/OS server and plan program topology definitions for
the distributed agents.
4. Customize the TWS for z/OS engine for end-to-end scheduling.
5. Restart the TWS for z/OS engine and verify that:
The engine starts without any errors.
The engine starts the TWS for z/OS server task (if configured to do so).
The TWS for z/OS server starts without any errors.
6. Install the TWS distributed workstations (fault tolerant agents).
Refer to the Tivoli Workload Scheduler 8.1 Planning and Installation Guide,
SH19-4555, and Section 3.5 Installing TWS in an end-to-end environment
on page 141.
7. Remember to specify OPCMASTER as the name of the master domain
manager.
8. Start the netman process on the installed agents using the StartUp
command (a command-line sketch follows this list).
9. Define fault tolerant workstations in the workstation database on the TWS for
z/OS engine.


10. Activate the fault tolerant workstation definitions by running the extend or
replan plan program. Verify that the workstations are active and linked.
11.Create jobs, user definitions, and job streams for the jobs to be executed on
the fault tolerant workstations.
12.Do a verification test of the TWS for z/OS end-to-end scheduling. The
verification test is used to verify that the TWS for z/OS engine can schedule
and track jobs on the fault tolerant workstations.
In the verification test, it should also be verified that it is possible to browse
the job log for the completed jobs run on the fault tolerant workstations.
See Chapter 3, Planning, installation, and configuration of the TWS 8.1 on
page 91 for a detailed description of the installation steps for Tivoli Workload
Scheduler for z/OS and Tivoli Workload Scheduler.
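
As a hedged illustration of step 8, netman can be started on each UNIX agent
with the StartUp script from the TWS home directory; the user ID and path shown
are the ones used later in this chapter and may differ in your installation:

su - tws-e                  # the TWS user created at installation
cd /tivoli/TWS/E/tws-e      # TWShome on our UNIX agents
./StartUp                   # starts the netman listener (port 31281 in our setup)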

4.3.1 The configuration and topology for the end-to-end network


Based on our test environment and the machines described in Section 4.2,
Description of our environment and systems on page 172, we decided to
establish the initial end-to-end topology as shown in Figure 4-1 on page 174.
The z/OS sysplex with systems SC63, SC64, and SC65 is the master domain
(MASTERDM).
The eastham machine is the primary domain manager for the distributed
network, and this primary domain is named DM100. Eastham, chatham, and
paris are all in the DM100 domain. Chatham is the backup domain manager for
eastham.
Yarmouth, tokyo, delhi, istanbul, and central are in a domain subordinate to
the primary domain DM100. This subordinate domain is named DM200. Yarmouth is
the domain manager for the DM200 domain.
The engine and the server are normally started on the SC64 system. Standby
engines are started on SC63 and SC65.
We define the following started task procedure names in z/OS:

   TWST      For the Tivoli Workload Scheduler for z/OS agent
   TWSC      For the Tivoli Workload Scheduler for z/OS engine
   TWSCTP    For the end-to-end server
   TWSCJSC   For the Job Scheduling Console server

The distributed TWS fault tolerant workstations are configured with the home
directory:
   /tivoli/TWS/E/tws-e (on UNIX systems)
   C:\TWS\E\tws-e\ (on Windows systems)
and with the user ID tws-e on both UNIX and Windows systems.
We have two instances in the Job Scheduling Console:
1. TWSC pointing to the TWS for z/OS engine
This instance can be used to work with the TWS for z/OS database and plan
from the JSC.
2. TWSC-F100-Eastham pointing to the primary domain manager (F100)
This instance can be used to work with the Symphony file (the distributed
plan) on the F100 primary domain manager.
Note: Having a JSC instance pointing to the primary domain manager is a
good approach. We use this instance several times to check the link status
for workstations, and compare the job status in the TWS for z/OS plan with
the job status in the Symphony file on the primary domain manager.

We did not install a JSC instance pointing to the backup domain manager
(F101). But you may want to install a JSC pointing to the backup domain
manager, since it can be activated as the primary domain manager.

4.3.2 Customize the TWS for z/OS plan program topology definitions
When the TWS for z/OS engine and agents are installed and up and running in
the sysplex, the next step is to create the end-to-end server started task
procedure and customize the topology definitions.
The started task procedure for the TWS for z/OS end-to-end server task,
TWSCTP, is shown in Example 4-1.
Example 4-1 Started task procedure
//TWSCTP   EXEC PGM=EQQSERVR,REGION=6M,TIME=1440
//*********************************************************************
//* STARTED TASK PROCEDURE FOR THE TWS FOR Z/OS SERVER TASK
//* -------------------------------------------------------
//*********************************************************************
//SYSTCPD  DD DISP=SHR,DSN=TCPIPMVS.&SYSNAME..TCPPARMS(TCPDATA)
//EQQMLIB  DD DISP=SHR,DSN=EQQ.SEQQMSG0
//EQQMLOG  DD SYSOUT=*
//EQQPARM  DD DISP=SHR,DSN=EQQUSER.TWS810.PARM(TWSCTP)
//EQQTWSIN DD DISP=SHR,DSN=EQQUSER.TWS810.TWSIN
//EQQTWSOU DD DISP=SHR,DSN=EQQUSER.TWS810.TWSOU
//SYSMDUMP DD DISP=MOD,DSN=EQQUSER.TWS810.SYSDUMPS
//EQQDUMP  DD DISP=SHR,DSN=EQQUSER.TWS810.EQQDUMPS
//*

We use a dedicated work HFS for the TWSCTP server. The work HFS for the
TWS for z/OS end-to-end server task is mounted as read/write on all three
systems in the sysplex (SC63, SC64, and SC65). The work HFS is allocated with
the attributes shown in Example 4-2.
Example 4-2 HFS work allocation
dataset name: OMVS.TWS810.TWSCTP.HFS
Block size . . . . . . : 4096
Total blocks . . . . . : 256008
Mount point: /tws/twsctpwrk

The HFS is initialized with the EQQPCS05 job described in Section 3.2.4,
Create and customize the work directory on page 109.
The HFS file can be created with a job that contains an IEFBR14 step, as in
Example 4-3.
Example 4-3 HFS file creation
//USERHFS EXEC PGM=IEFBR14
//D1      DD DISP=(,CATLG),DSNTYPE=HFS,
//           SPACE=(CYL,(prispace,secspace,1)),
//           DSN=OMVS.TWS810.TWSCTP.HFS
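
The allocated HFS dataset must then be mounted read/write at the work directory
mount point on each of the three systems. The following is a sketch of the TSO
MOUNT command, using the names from Example 4-2 (in production you would
normally also add a corresponding MOUNT statement to the BPXPRMxx parmlib member
so that the mount is restored after an IPL):

MOUNT FILESYSTEM('OMVS.TWS810.TWSCTP.HFS') MOUNTPOINT('/tws/twsctpwrk') TYPE(HFS) MODE(RDWR)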

Standards for the end-to-end topology definitions


We decided to use the following initial standards for the topology:
The TWS for z/OS server and fault tolerant agents must use port 31281 for IP
linking.
Note: It is important to use a unique port that is not used by other
programs, including Tivoli Workload Scheduler and the Tivoli Workload
Scheduler for z/OS end-to-end server.
Domain names start with DM followed by a number: DMxxx, where xxx is a
number.
   The primary domain manager name is DM100.
   The subordinate domain managers will have the names DM200, DM300,
   and so on.
   The name does not imply the level or tier of the domain.
TWS for z/OS fault tolerant workstation names:
   Start with F (to indicate it is a fault tolerant workstation).
   The second character is used to distinguish domains:
      F1xx = Fault tolerant workstation in the primary domain.
      All fault tolerant workstations in this domain will start with F1; xx can
      be any combination of numbers or characters.
      F2xx = Fault tolerant workstation in the next domain.
      All fault tolerant workstations in this domain will start with F2; xx can
      be any combination of numbers or characters.
      F3xx = Fault tolerant workstation in a third domain.
      All fault tolerant workstations in this domain will start with F3; xx can
      be any combination of numbers or characters.
   The second character does not indicate the domain manager's tier level in
   the network. Since we are using numbers and characters we can define
   36 domains and 1296 fault tolerant workstations in every domain. The
   total number of fault tolerant workstations is 46656 (36*1296).
TWS for z/OS fault tolerant workstation description:
   We put the hostname in the first 10 characters of the description field.
   In the remaining part of the description field we wrote the name of the
   domain and the type of the fault tolerant workstation.

Define the topology initialization statements


With these standards and naming conventions we define the topology
initialization statements used by the TWS for z/OS end-to-end server and plan
programs. The process is as follows:
1. Define the SERVOPTS and INIT initialization statements for the TWSCTP
Tivoli Workload Scheduler for z/OS end-to-end server task pointed to by the
EQQPARM DD-card in the TWSCTP started task procedure:
//EQQPARM DD DISP=SHR,DSN=EQQUSER.TWS810.PARM(TWSCTP)
The initialization statements are shown in Example 4-4.


Example 4-4 Initialization statements
/*********************************************************************/
/* Parameter specifications for TCP/IP end-to-end server.            */
/*                                                                   */
/*********************************************************************/
SERVOPTS ARM(YES)              /* Use ARM to restart if fail  */
         CODEPAGE(IBM-037)     /* Use US codepage             */
         PROTOCOL(TCPIP)       /* This is a TCP/IP server     */
         SUBSYS(TWSC)          /* TWSC is our engine          */
         TPLGYPRM(TOPOLOGY)    /* Member with topology defs.  */
         PORTNUMBER(6000)      /* The portno. is not used if  */
                               /* it is an E2E only server.   */
                               /* But if it isn't spec. then  */
                               /* it will default to 425 !!   */
/*-------------------------------------------------------------------*/
/* CALENDAR parameter is mandatory for the server when using TCP/IP  */
/* server (also if the server is dedicated to end-to-end!)           */
/*-------------------------------------------------------------------*/
INIT     CALENDAR(DEFAULT)     /* Must be spec. for IP server */
/*-------------------------------------------------------------------*/

2. Create the topology member named in the TWSCTP SERVOPTS
TPLGYPRM(TOPOLOGY) initialization statement.
The TOPOLOGY initialization statements are shown in Example 4-5.
Example 4-5 TOPOLOGY initialization statements
/**********************************************************************/
/* TOPOLOGY: End-to-End options                                        */
/**********************************************************************/
TOPOLOGY BINDIR('/usr/lpp/TWS/TWS810')   /* The TWS for z/OS inst. dir  */
         WRKDIR('/tws/twsctpwrk')        /* The TWS for z/OS work dir   */
         CODEPAGE(IBM-037)               /* Codepage for translator     */
         HOSTNAME(TWSCTP)                /* Logical hostname for server */
         PORTNUMBER(31281)               /* Port for end-to-end netman  */
         TPLGYMEM(TPDOMAIN)              /* Mbr. with domain+FTA descr. */
         USRMEM(TPUSER)                  /* Mbr. with Windows user+pw   */
         LOGLINES(100)                   /* Lines sent by joblog retr.  */
         TRCDAYS(10)                     /* Days to keep stdlist files  */

Note: TRCDAYS defines how many days the files in the stdlist directory are
kept. At midnight netman creates a new directory, ccyy.mm.dd, in the stdlist
directory. TRCDAYS specifies how old these directories can become before
they are deleted.
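
For illustration, with WRKDIR('/tws/twsctpwrk') and TRCDAYS(10) the dated log
directories might look like the following after a few days (the stdlist location
under the work directory and the dates shown are examples, not output from our
system):

ls /tws/twsctpwrk/stdlist
2002.05.01  2002.05.02  2002.05.03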

3. Create the domain and fault tolerant agent definitions. This is done in the
member named by the TPLGYMEM(TPDOMAIN) keyword of the TOPOLOGY statement. The
domain and fault tolerant agent definitions are used to define the topology
that we have outlined in Figure 4-1 on page 174.
We use the definitions shown in Example 4-6 on page 181.


Example 4-6 Domain and fault tolerant agent definitions


/**********************************************************************/
/* DOMREC: Defines the domains in the distributed Tivoli Workload     */
/*         Scheduler network                                          */
/**********************************************************************/
/*--------------------------------------------------------------------*/
/* Specify one DOMREC for each domain in the distributed network,     */
/* with the exception of the master domain (whose name is MASTERDM    */
/* and consists of the TWS for z/OS controller).                      */
/*--------------------------------------------------------------------*/
DOMREC   DOMAIN(DM100)             /* Domain name for 1st domain      */
         DOMMNGR(F100)             /* Eastham FTA - domain manager    */
         DOMPARENT(MASTERDM)       /* Domain parent is MASTERDM       */
DOMREC   DOMAIN(DM200)             /* Domain name for 2nd domain      */
         DOMMNGR(F200)             /* Yarmouth FTA - domain manager   */
         DOMPARENT(DM100)          /* Domain parent is DM100          */
/**********************************************************************/
/* CPUREC: Defines the workstations in the distributed Tivoli         */
/*         Workload Scheduler network                                 */
/**********************************************************************/
/*--------------------------------------------------------------------*/
/* You must specify one CPUREC for each workstation in the TWS        */
/* network, with the exception of the OPC Controller, which acts as   */
/* Master Domain Manager                                              */
/*--------------------------------------------------------------------*/
CPUREC   CPUNAME(F100)             /* Domain manager for DM100        */
         CPUOS(AIX)                /* AIX operating system            */
         CPUNODE(EASTHAM.ITSC.AUSTIN.IBM.COM) /* IP address of CPU    */
         CPUTCPIP(31281)           /* TCP port number of NETMAN       */
         CPUDOMAIN(DM100)          /* The TWS domain name for CPU     */
         CPUTYPE(FTA)              /* This is a FTA CPU type          */
         CPUAUTOLNK(ON)            /* Autolink is on for this CPU     */
         CPUFULLSTAT(ON)           /* Full status on for DM           */
         CPURESDEP(ON)             /* Resolve dependencies on for DM  */
         CPULIMIT(20)              /* Number of jobs in parallel      */
/*       CPUSERVER( )                 Not allowed for domain mngr.    */
         CPUTZ(CST)                /* Time zone for this CPU          */
CPUREC   CPUNAME(F101)             /* Backup Domain Mngr. for DM100   */
         CPUOS(AIX)                /* AIX operating system            */
         CPUNODE(CHATHAM.ITSC.AUSTIN.IBM.COM) /* IP address of CPU    */
         CPUTCPIP(31281)           /* TCP port number of NETMAN       */
         CPUDOMAIN(DM100)          /* The TWS domain name for CPU     */
         CPUTYPE(FTA)              /* This is a FTA CPU type          */
         CPUAUTOLNK(ON)            /* Autolink is on for this CPU     */
         CPUFULLSTAT(ON)           /* Full status on for Backup DM    */
         CPURESDEP(ON)             /* Resolve dep. on for Backup DM   */
         CPUSERVER(1)              /* Start extra server Mailman p.   */
         CPULIMIT(20)              /* Number of jobs in parallel      */
         CPUTZ(CST)                /* Time zone for this CPU          */
CPUREC   CPUNAME(F102)             /* Fault Tolerant Agent in DM100   */
         CPUOS(WNT)                /* Windows operating system        */
         CPUNODE(PARIS.ITSC.AUSTIN.IBM.COM) /* IP address of CPU      */
         CPUTCPIP(31281)           /* TCP port number of NETMAN       */
         CPUDOMAIN(DM100)          /* The TWS domain name for CPU     */
         CPUTYPE(FTA)              /* This is a FTA CPU type          */
         CPUAUTOLNK(ON)            /* Autolink is on for this CPU     */
         CPUFULLSTAT(OFF)          /* Full status off for this CPU    */
         CPURESDEP(OFF)            /* Resolve dep. off for this CPU   */
         CPUSERVER(1)              /* Start extra server Mailman p.   */
         CPULIMIT(10)              /* Number of jobs in parallel      */
         CPUTZ(CST)                /* Time zone for this CPU          */
CPUREC   CPUNAME(F200)             /* Domain manager for DM200        */
         CPUOS(AIX)                /* AIX operating system            */
         CPUNODE(YARMOUTH.ITSC.AUSTIN.IBM.COM) /* IP address of CPU   */
         CPUTCPIP(31281)           /* TCP port number of NETMAN       */
         CPUDOMAIN(DM200)          /* The TWS domain name for CPU     */
         CPUTYPE(FTA)              /* This is a FTA CPU type          */
         CPUAUTOLNK(ON)            /* Autolink is on for this CPU     */
         CPUFULLSTAT(ON)           /* Full status on for DM           */
         CPURESDEP(ON)             /* Resolve dep. on for DM          */
         CPULIMIT(99)              /* Jobs in parallel, 99 is default */
/*       CPUSERVER( )                 Not allowed for domain mngr.    */
         CPUTZ(CST)                /* Time zone for this CPU          */
CPUREC   CPUNAME(F201)             /* Fault Tolerant Agent in DM200   */
         CPUOS(WNT)                /* Windows operating system        */
         CPUNODE(TOKYO.ITSC.AUSTIN.IBM.COM) /* IP address of CPU      */
         CPUTCPIP(31281)           /* TCP port number of NETMAN       */
         CPUDOMAIN(DM200)          /* The TWS domain name for CPU     */
         CPUTYPE(FTA)              /* This is a FTA CPU type          */
         CPUAUTOLNK(ON)            /* Autolink is on for this CPU     */
         CPUFULLSTAT(OFF)          /* Full status off for this CPU    */
         CPURESDEP(OFF)            /* Resolve dep. off for this CPU   */
         CPULIMIT(20)              /* Jobs in parallel                */
         CPUSERVER(2)              /* Start extra server Mailman p.   */
         CPUTZ(CST)                /* Time zone for this CPU          */
CPUREC   CPUNAME(F202)             /* Fault Tolerant Agent in DM200   */
         CPUOS(WNT)                /* Windows operating system        */
         CPUNODE(DELHI.ITSC.AUSTIN.IBM.COM) /* IP address of CPU      */
         CPUTCPIP(31281)           /* TCP port number of NETMAN       */
         CPUDOMAIN(DM200)          /* The TWS domain name for CPU     */
         CPUTYPE(FTA)              /* This is a FTA CPU type          */
         CPUAUTOLNK(ON)            /* Autolink is on for this CPU     */
         CPUFULLSTAT(OFF)          /* Full status off for this CPU    */
         CPURESDEP(OFF)            /* Resolve dep. off for this CPU   */
         CPULIMIT(20)              /* Jobs in parallel                */
         CPUSERVER(2)              /* Start extra server Mailman p.   */
         CPUTZ(CST)                /* Time zone for this CPU          */
CPUREC   CPUNAME(F203)             /* Fault Tolerant Agent in DM200   */
         CPUOS(WNT)                /* Windows operating system        */
         CPUNODE(ISTANBUL.ITSC.AUSTIN.IBM.COM) /* IP address of CPU   */
         CPUTCPIP(31281)           /* TCP port number of NETMAN       */
         CPUDOMAIN(DM200)          /* The TWS domain name for CPU     */
         CPUTYPE(FTA)              /* This is a FTA CPU type          */
         CPUAUTOLNK(ON)            /* Autolink is on for this CPU     */
         CPUFULLSTAT(OFF)          /* Full status off for this CPU    */
         CPURESDEP(OFF)            /* Resolve dep. off for this CPU   */
         CPULIMIT(20)              /* Jobs in parallel                */
         CPUSERVER(2)              /* Start extra server Mailman p.   */
         CPUTZ(CST)                /* Time zone for this CPU          */
CPUREC   CPUNAME(F204)             /* Fault Tolerant Agent in DM200   */
         CPUOS(WNT)                /* Windows operating system        */
         CPUNODE(CENTRAL.ITSC.AUSTIN.IBM.COM) /* IP address of CPU    */
         CPUTCPIP(31281)           /* TCP port number of NETMAN       */
         CPUDOMAIN(DM200)          /* The TWS domain name for CPU     */
         CPUTYPE(FTA)              /* This is a FTA CPU type          */
         CPUAUTOLNK(ON)            /* Autolink is on for this CPU     */
         CPUFULLSTAT(OFF)          /* Full status off for this CPU    */
         CPURESDEP(OFF)            /* Resolve dep. off for this CPU   */
         CPULIMIT(20)              /* Jobs in parallel                */
         CPUSERVER(2)              /* Start extra server Mailman p.   */
         CPUTZ(CST)                /* Time zone for this CPU          */
CPU is the same as workstation.
The name of DM100's parent domain must be MASTERDM.
CPUFULLSTAT (full status information) and CPURESDEP (resolve dependency
information) are set to On for the primary domain manager (F100), the backup
domain manager (F101), and the subordinate domain manager (F200). This way we
ensure that the Symphony file on these managers is updated with the same
reporting and logging information. The backup domain manager (F101) will then
be able to take over the responsibilities of the primary domain manager
(F100). The subordinate domain manager (F200) will be able to resolve
dependencies for its subordinate workstations.
The CPULIMIT value specifies how many jobs can be run in parallel on the
workstation. The default value is 99. CPULIMIT should be based on the capacity
and performance of the workstation.
These definitions will take effect after a TWS for z/OS plan extend, plan
replan, or redistribution of the Symphony file.
We did not install a TWS fault tolerant workstation on F204 in our
environment. It is possible to create the CPUREC entry to prepare for a future
installation of F204. Since we know that F204 is going to be part of our
configuration, we decided to create the CPUREC definition in advance.
Tip: Defining CPUSERVER is a good way to optimize your Tivoli Workload
Scheduler distributed network and minimize the load on the domain
manager mailman process. By specifying CPUSERVER we ask the
domain manager to start extra mailman processes to manage the
communication with the workstations. The domain manager starts one
mailman process per distinct CPUSERVER character or number.

In our example the primary domain manager (F100) will have one mailman
process for its own communication with MASTERDM and an extra
mailman process for communication with the F101 and F102 fault tolerant
agents (specified by CPUSERVER(1) in the F101 and F102 CPUREC
definitions).

4. Create user and password definitions for Windows fault tolerant workstations.
This is done in the member named by the USRMEM(TPUSER) keyword of the
TOPOLOGY statement. See Example 4-7.
Example 4-7 USRREC
/*********************************************************************/
/* USRREC: Windows users password definitions                        */
/*********************************************************************/
/*-------------------------------------------------------------------*/
/* You must specify at least one USRREC for each Windows workstation */
/* in the distributed TWS network.                                   */
/*-------------------------------------------------------------------*/
USRREC   USRCPU(F102)            /* The F102 Windows workstation */
         USRNAM(tws-e)           /* The user name                */
         USRPSW('chuy5')         /* Password for user name       */
USRREC   USRCPU(F201)            /* The F201 Windows workstation */
         USRNAM(tws-e)           /* The user name                */
         USRPSW('chuy5')         /* Password for user name       */
USRREC   USRCPU(F202)            /* The F202 Windows workstation */
         USRNAM(tws-d)           /* The user name                */
         USRPSW('chuy5')         /* Password for user name       */
USRREC   USRCPU(F203)            /* The F203 Windows workstation */
         USRNAM(tws-e)           /* The user name                */
         USRPSW('chuy5')         /* Password for user name       */
USRREC   USRCPU(F204)            /* The F204 Windows workstation */
         USRNAM(tws-e)           /* The user name                */
         USRPSW('chuy5')         /* Password for user name       */


Note: It is not necessary to define these user IDs in the z/OS security
product, for example, RACF. These user IDs are not validated on the
mainframe.

5. Verify the initialization statements for the plan programs.
The plan programs must know the topology of the distributed TWS network.
This information is used by the plan programs when the Symphony file is
created. The plan programs read the topology initialization statements and
incorporate the domain and workstation information into the Symphony file.
The initialization statements for the plan programs are pointed to by the
EQQPARM DD-card in the plan programs' JCL:
//EQQPARM DD DISP=SHR,DSN=EQQUSER.TWS810.PARM(BATCHOPT)
We verified that the BATCHOPT definitions contain the TPLGYPRM statement:
TPLGYPRM(TOPOLOGY)          /* Member with topology defs. */
The member specified in TPLGYPRM must be the same member name specified in the
TWSCTP server initialization statement.
Tip: You should make sure that the user IDs that are going to run the TWS
for z/OS plan programs have the same RACF access rights to the work
HFS files as the TWS for z/OS server user ID. See Section 3.2.1,
Executing EQQJOBS installation aid on page 102.

6. Verify the JCL used for the plan programs.
The script library dataset (EQQSCLIB DD-card) and the current plan backup copy
used for Symphony creation (EQQSCPDS DD-card) must be defined in the JCL used
for the current plan programs.
We have the following two dataset definitions in the JCL for the plan
programs:
//EQQSCPDS DD DISP=SHR,DSN=EQQUSER.TWS810.SCP
//EQQSCLIB DD DISP=SHR,DSN=EQQUSER.TWS810.SCRPTLIB


Note: It is important to remember to update the JCL for the plan programs used
in the job streams that perform the daily TWS for z/OS housekeeping.

4.3.3 Customize the TWS for z/OS engine


Next we customize the TWS for z/OS engine for end-to-end scheduling. We
activate the end-to-end feature in the TWS for z/OS server and ask the TWS for
z/OS engine to start and stop the server task automatically.
1. To activate the end-to-end feature and start the Tivoli Workload Scheduler
Enabler task in the engine we specify:
TPLGYSRV(TWSCTP)          /* Activate the end-to-end server */
TWSCTP is the name of our end-to-end server that handles the events to and
from the distributed Tivoli Workload Scheduler agents.
2. To let the TWS for z/OS engine start and stop the end-to-end server task we
specify:
SERVERS(TWSCTP,TWSCJSC)   /* Start these server tasks       */

Tip: It is possible to have more than one server in the SERVERS()
statement. In our example we have two TCP/IP servers: one for the TWS
for z/OS end-to-end server (TWSCTP) and one for our TWS for z/OS Job
Scheduling Console server (TWSCJSC).

TPLGYSRV and SERVERS are specified in the engine OPCOPTS
initialization statement. OPCOPTS is defined in the member of the
EQQPARM library as specified by the PARM parameter on the JCL EXEC
statement in the engine started task procedure.
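
Pulled together, the relevant part of our engine OPCOPTS member could look like
the following sketch (all other OPCOPTS keywords from an existing configuration
are left out here):

OPCOPTS TPLGYSRV(TWSCTP)          /* Activate the end-to-end feature   */
        SERVERS(TWSCTP,TWSCJSC)   /* Servers started/stopped by engine */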

4.3.4 Restart the TWS for z/OS engine


To activate the server task, the topology definitions, and the definitions in
the TWS for z/OS engine, we restart the Tivoli Workload Scheduler for z/OS
engine. When the TWS for z/OS engine is started again, we verify that:
1. There are no errors in the TWS for z/OS message log related to end-to-end
server definitions.
2. The end-to-end feature is started in the engine. This is indicated by the
messages shown in Example 4-8 on page 187.


Example 4-8 end-to-end feature is started


EQQZ005I OPC SUBTASK E2E ENABLER   IS BEING STARTED
EQQZ085I OPC SUBTASK E2E SENDER    IS BEING STARTED
EQQG001I SUBTASK E2E SENDER HAS STARTED
EQQG001I SUBTASK E2E RECEIVER HAS STARTED
EQQZ085I OPC SUBTASK E2E RECEIVER  IS BEING STARTED
EQQG001I SUBTASK E2E ENABLER HAS STARTED

3. The end-to-end server task is started by the engine. The engine issues a start
command:
S TWSCTP

4. The TWS for z/OS end-to-end server task (TWSCTP) is started with no
errors. You should see messages as shown in Example 4-9.
Example 4-9 TWSCTP is started
EQQPH00I SERVER TASK HAS STARTED
EQQPH09I THE SERVER IS USING THE TCP/IP PROTOCOL
EQQPH33I THE END-TO-END PROCESSES HAVE BEEN STARTED
EQQZ024I Initializing wait parameters
EQQPT01I Program "/usr/lpp/TWS/TWS810/bin/translator" has been started,
         pid is 65795
EQQPT01I Program "/usr/lpp/TWS/TWS810/bin/netman" has been started,
         pid is 65796
EQQPT22I Input Translator thread stopped until new Symphony will
         be available

4.3.5 Define fault tolerant workstations in the engine database


The final step to make the link between the TWS for z/OS engine and the
distributed TWS fault tolerant agents is to define fault tolerant workstations in the
TWS for z/OS engine database.
The fault tolerant workstations are defined the same way as computer
workstations in TWS for z/OS. The only difference is that the Fault Tolerant field
is checked. See Defining fault tolerant workstations on page 127 for more
information.
We define fault tolerant workstations for all the distributed TWS workstations
(F100-F204) in the TWS for z/OS engine workstation database (see Figure 4-2
on page 188).

Chapter 4. End-to-end implementation scenarios and examples

187

Figure 4-2 List with all our fault tolerant workstations defined in TWS for z/OS

Tip: We used the standard to put the hostname for the fault tolerant
workstation as the first characters in the Description field for the workstation.
This way we can easily relate the four character workstation name to the
physical machine it points to.

The description field in Figure 4-2 looks a little strange in the Job Scheduling
Console because it does not use fixed-width fonts. In the legacy ISPF panels it
will look more orderly.

4.3.6 Activate the fault tolerant workstation definitions


The fault tolerant workstation definitions can be activated in the TWS for z/OS
plans either by running the replan or the extend plan programs in the TWS for
z/OS engine.
We run the replan program and verify that the Symphony file is created in the
server. We also verify that the fault tolerant workstations become available and
have a linked status in the TWS for z/OS plan.
In the TWSCTP server message log we get the messages shown in
Example 4-10.
Example 4-10 TWSCTP server message log
EQQPT30I Starting switching Symphony
EQQPT31I Symphony successfully switched
EQQPT20I Input Translator waiting for Batchman is started
EQQPT21I Input Translator finished waiting for Batchman
EQQPT23I Input Translator thread is running

The fault tolerant workstations are checked using the Job Scheduling Console
(see Figure 4-3). The Fault Tolerant column indicates that a workstation is
fault tolerant. The Linked column indicates whether the workstation is linked.
The Status column indicates whether the mailman process is up and running on
the fault tolerant workstation.

Figure 4-3 FTWs in the Tivoli Workload Scheduler for z/OS plan

The F204 workstation is Not Available, since we have not installed a TWS fault
tolerant workstation on this machine. We have prepared for a future installation
of the F204 workstation, by creating the CPUREC definitions (see Define the
topology initialization statements on page 179).
Tip: If the workstation does not link as it should, the cause can be that the
writer process has not initialized correctly or that the run number for the
Symphony file on the fault tolerant workstation is not the same as the run
number on the master. If you select the unlinked workstation and right-click,
you will get a pop-up menu, as shown in Figure 4-4. Then click Link to try to
link the workstation.

Figure 4-4 Right clicking one of the workstation shows a pop-up menu

You can check the Symphony run number and the Symphony status in the
legacy ISPF using option 6.6.


Tip: If the workstation is Not Available/Offline the cause can be that the
mailman, batchman, and jobman processes are not started on the fault
tolerant workstation. You can right click the workstation to get the pop-up
menu, shown in Figure 4-4 on page 189, and then click Set Status.... This will
give you a new panel (see Figure 4-5 on page 190), where you can try to
activate the workstation by clicking the Active radio button. This action will try
to start the mailman, batchman and jobman processes on the fault tolerant
workstation by issuing a conman start command on the agent.

Figure 4-5 Pop-up menu to set status of workstation

See Chapter 6, Troubleshooting in a TWS end-to-end environment on page 333


for more information on how to handle link and initialization problems.
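
The same checks and recovery actions can also be driven from a command line on
the domain manager or on the agent itself. The following is a hedged sketch
using standard conman commands (run as the TWS user; the workstation name is
one of the ones used in this chapter):

conman "sc @!@"      # show status, link flags, and Symphony run number of all workstations
conman "link F102"   # from the domain manager: retry the link to a single workstation
conman start         # on the agent itself: start mailman, batchman, and jobman locally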

4.3.7 Create jobs, user definitions, and job streams for verification tests

To verify that we can run jobs on the fault tolerant workstations we did a
verification test. For the verification test we need job streams and jobs to
run on the fault tolerant workstations.
For every fault tolerant workstation defined in the TWS for z/OS engine, we did
the following:
1. Create a job stream with jobs dedicated to run on the fault tolerant
workstation.
2. Define the task (job or script) member in the SCRPTLIB library for all defined
jobs.


3. Define the user and password for the task on the Windows fault tolerant
workstation.

Step 1. Create job streams and jobs for FTWs


We create one job stream per fault tolerant workstation, as shown in Figure 4-6.
We define a run-cycle, so the test job streams will run on workdays.
The F204DWTESTSTREAM job stream is not active (pending); its Active attribute
is set to No (Figure 4-6). We have created this job stream so that we are
prepared to test the F204 fault tolerant workstation when TWS is installed on
the F204 machine.

Figure 4-6 Job streams used for verification test

For every job stream we defined four fault tolerant workstation tasks (called FTW
Task in the JSC). We start the job stream with a dummy start job (called General
job in JSC) and end it with a dummy end job.
Tip: Since it is not possible to add dependencies at the job stream level in TWS
for z/OS (as it is in TWS), dummy start and dummy end general jobs are a
workaround for this TWS for z/OS limitation. When using dummy start and dummy
end general jobs you can always uniquely identify the start point and the end
point for the jobs in the job stream.

The jobs and dependencies are defined, as shown in Figure 4-7 on page 192,
where the F100DWTESTSTREAM is used as an illustration.


Figure 4-7 Task (job) definition for the test job stream used for verification

Step 2. Define task member in SCRPTLIB library for jobs


For every UNIX job in the test job streams we add a member to the SCRPTLIB
library.
Example 4-11 shows a F100J001 job script definition (used in the
F100DWTESTSTREAM).
Example 4-11 F100J001 job script definition
EDIT       EQQUSER.TWS810.SCRPTLIB(F100J001) - 01.33        Columns 00001 00072
Command ===>                                                  Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 /* Definition for F100J001 job to be executed on F100 machine         */
000002 /*                                                                    */
000003 JOBREC JOBSCR('/tivoli/TWS/scripts/japjob1') JOBUSR(tws-e)
****** **************************** Bottom of Data ****************************


Tip: The japjob1 script is installed with TWS in the <twshome>/demo
directory. It is a simple job that waits 15 seconds. This can be used on UNIX
as well as on Windows machines. It is good for testing purposes. We changed
the japjob1 script to wait 60 seconds instead of 15 seconds.

For every Windows job in the test job streams we add a member to the
SCRPTLIB library.
Example 4-12 shows a F102J001 job script definition (used in the
F102DWTESTSTREAM).
Example 4-12 F102J001 job script definition
EDIT       EQQUSER.TWS810.SCRPTLIB(F102J001) - 01.33        Columns 00001 00072
Command ===>                                                  Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 /* Definition for F102J001 job to be executed on F102 machine         */
000002 /*                                                                    */
000003 JOBREC JOBSCR('C:\TWS\E\scripts\japjob1.cmd') JOBUSR(tws-e)
****** **************************** Bottom of Data ****************************

Tip: We have placed all the job scripts in one common directory,
/tivoli/TWS/scripts, on UNIX systems and in C:\TWS\E\scripts on Windows
systems. This makes management and backup much easier.

The SCRPTLIB members can be reused in several job streams and on different
fault tolerant workstations of the same type (UNIX, Windows, and so on). If
you, for example, have a job (script) that is scheduled on all your UNIX
systems, you can create one SCRPTLIB member for this job and define the job in
several job streams on the associated fault tolerant workstations. This
requires, though, that the script is placed in the same directory on all your
systems, which is another good reason to have all the job scripts placed in
the same directories across your systems.

Step 3. Define user and password for task on Windows FTW


Since we will run the test jobs under the user IDs already specified in the
TPUSER parameter member (named by the USRMEM(TPUSER) keyword; see "Define the
topology initialization statements" on page 179), we do not need to define
anything more.


Tip: If we are going to define a new job, F203J011 on F203, that runs under
the user ID NTPROD, then we:

1. Add the new USRREC to the USRMEM definition:
USRREC   USRCPU(F203)       /* The F203 Windows workstation */
         USRNAM(NTPROD)     /* The user name                */
         USRPSW('prodpw')   /* Password for user name       */

2. Create a F203J011 member in the SCRPTLIB library:
/* Definition for F203J011 job to be executed on the F203 */
/* machine                                                */
JOBREC JOBSCR('C:\TWS\E\scripts\backup.cmd') JOBUSR(NTPROD)

4.3.8 Verification test of the end-to-end installation


Finally we verify the end-to-end installation. The verification is done by
performing the following checks and tests:
1. Verify that the TWS for z/OS plan programs are executed successfully and
that the end-to-end job streams are added to the plan.
2. Verify that the fault tolerant workstations are active and linked.
3. Verify that the jobs on the fault tolerant workstations are executed without
errors.
4. Check that it is possible to browse the job log for all the jobs on the
fault tolerant workstations.
It is possible to do the verification in several ways. Since we would like to
verify that our daily TWS for z/OS plan programs handled the topology
definitions correctly, we also verify the outcome of these programs.
Another way to do the verification is, for example, to add the job streams with
jobs on the fault tolerant workstations directly to the plan and verify that they are
executed correctly.

Step 1. Verify the TWS for z/OS plan programs and job streams
Our daily TWS for z/OS housekeeping job stream contains jobs to handle the
long-term plan and jobs to handle the current plan in TWS for z/OS (see
Figure 4-8 on page 195).


Figure 4-8 The jobs in the daily housekeeping job stream

We check the output from these jobs and verify that the topology information
pointed to by the EQQPARM initialization statement is read without any errors.
In all the plan jobs shown in Figure 4-8 we find EQQMLOG messages indicating
that the topology information is OK.
Example 4-13 Topology information
EQQZ013I NOW PROCESSING PARAMETER LIBRARY MEMBER TOPOLOGY
.........
EQQZ016I RETURN CODE FOR THIS STATEMENT IS: 0000
EQQZ014I MAXIMUM RETURN CODE FOR PARAMETER MEMBER TOPOLOGY IS: 0000
EQQZ013I NOW PROCESSING PARAMETER LIBRARY MEMBER TPDOMAIN
........
         RETURN CODE FOR THIS STATEMENT IS: 0000
         MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPDOMAIN IS: 0000
         CPU F100 IS SET AS DOMAIN MANAGER OF FIRST LEVEL
         CPU F200 SET AS DOMAIN MANAGER
         NOW PROCESSING PARAMETER LIBRARY MEMBER TPUSER
         MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPUSER IS: 0000

In the EQQMLOG for the current plan extend job, TWSCCPE, we also see the
message:
EQQ3087I THE SYMPHONY FILE HAS BEEN SUCCESSFULLY CREATED
which indicates that the Symphony file was created without errors.
Finally we verify that the end-to-end server, TWSCTP, activates the new
Symphony file. We see the messages shown in Example 4-14 in the TWSCTP
EQQMLOG.
Example 4-14 TWSCTP EQQMLOG
EQQPT30I Starting switching Symphony
EQQPT22I Input Translator thread stopped until new Symphony will be available
EQQPT31I Symphony successfully switched
EQQPT20I Input Translator waiting for Batchman is started
EQQPT21I Input Translator finished waiting for Batchman
EQQPT23I Input Translator thread is running

Note: It can take a few minutes from when the end-to-end server starts
switching the Symphony file (message EQQPT30I) until the input translator
thread is running again (message EQQPT23I). In our test environment we saw
times in the range of one to two minutes.

Step 2. Verify the status of the fault tolerant workstations


We did the same verification as described in Section 4.3.6, Activate the fault
tolerant workstation definitions on page 188.
All the fault tolerant workstations became active and linked after the current
plan extend job.

Step 3. Verify that FTW jobs are executed without any errors
This verification is straightforward. We check the status of the test job
streams and jobs dedicated to run on the fault tolerant workstations. In
Figure 4-9 we have listed the jobs in two of the test job streams:
F100DWTESTSTREAM and F101DWTESTSTREAM. All jobs completed successfully. This
is also the case for the test job streams dedicated to run on the F102, F200,
F201, F202, and F203 fault tolerant workstations.

Figure 4-9 Jobs in F100DWTESTSTREAM and F101DWTESTSTREAM

Common errors for jobs on fault tolerant workstations


If you are adding a job stream to the current plan in TWS for z/OS (using JSC or
option 5.1 from legacy ISPF) and get the error message:
EQQM071E A JOB definition referenced by this occurrence is wrong


This shows that there is an error in the definition for one or more jobs in the job
stream and the job stream is not added to the current plan. If you look in the
EQQMLOG for the TWS for z/OS engine you will find messages like:
EQQM992E WRONG JOB DEFINITION FOR THE FOLLOWING OCCURRENCE:
EQQZ068E JOBRC IS AN UNKNOWN COMMAND AND WILL NOT BE PROCESSED
EQQZ068I FURTHER STATEMENT PROCESSING IS STOPPED

In our example the F100J011 member in EQQSCLIB looks like:


JOBRC JOBSCR('/tivoli/TWS/scripts/japjob1') JOBUSR(tws-e)

We have a typo error: JOBRC should be JOBREC. The solution to this problem
is simply to correct the typo error and try to add the job stream again. The job
stream must be added to the TWS for z/OS plan again, because the job stream
was not added the first time (due to the typo error).
Note: You will get similar error messages in the EQQMLOG for the plan
programs, if the job stream is added during plan extension. The error
messages issued by the plan program are:
EQQZ068E JOBRC IS AN UNKNOWN COMMAND AND WILL NOT BE PROCESSED
EQQZ068I FURTHER STATEMENT PROCESSING IS STOPPED
EQQ3077W BAD MEMBER F100J011 CONTENTS IN EQQSCLIB

Please note that the plan extension program will end with return code 0.
Another common error is a misspelled name for the script or the user (in the
JOBREC, JOBSCR, or JOBUSR definition) in the FTW job.
If we have a JOBREC definition with a typo error like:
/* Definition for F100J010 job to be executed on F100 machine */
/*                                                            */
JOBREC JOBSCR('/tivoli/TWS/scripts/jabjob1') JOBUSR(tws-e)

Here the typo error is in the name of the script. It should be japjob1 instead of
jabjob1. This typo will result in an error with the error code FAIL, when the job is
run. The error will not be caught by the plan programs or when you add the job
stream to the plan in TWS for z/OS.
It is possible to correct this error easily using the following steps:
1. First correct the typo in the member in the SCRPTLIB.
2. Then add the same job stream again to the plan in TWS for z/OS.


Tip: Before adding the job stream again, you can modify the input arrival time
of the existing job stream occurrence to the input arrival time minus one
minute. This way you can re-add the job stream and give it the same input
arrival time as the original job stream. Using this approach, TWS for z/OS
will automatically resolve all dependencies (if any) correctly.

This way of handling typo errors in the JOBREC definitions is actually the
same as performing a rerun from a TWS master. The job stream must be re-added
to the TWS for z/OS plan to have TWS for z/OS send the new JOBREC definition
to the fault tolerant workstation agent. Remember that when you do an extend
or replan of the TWS for z/OS plan, the JOBREC definitions are built into the
Symphony file. By re-adding the job stream we ask TWS for z/OS to send the
re-added job stream, including the new JOBREC definition, to the agent.
If you have defined the wrong password for a Windows user ID in the USRREC
topology definition or the password has been changed on the Windows machine,
the FTW job will end with an error and the error code FAIL.
To solve this problem you have two choices:
Change the wrong USRREC definition and redistribute the Symphony file
(using option 3.5 from legacy ISPF).

This approach can be disruptive if you are running a huge batch load on
FTWs and are in the middle of a batch peak.
Another possibility is to log into the primary domain manager (the domain
manager directly connected to the TWS for z/OS server) and then alter the
password. This can be done either using conman or using a JSC instance
pointing to the primary domain manager. When you have changed the
password you simply rerun the job in error.

The USRREC definition should still be corrected so it will take effect the next
time the Symphony file is created.
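
As an illustration of the second choice, the password held in the Symphony file
can be changed with the conman altpass command on the primary domain manager,
after which the failed job is simply rerun. The lines below are a sketch (the
password is a placeholder; verify the exact altpass quoting rules in the conman
reference for your level):

conman
%altpass F102#tws-e;"newpassword"
%exit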

Step 4. Test possibility to browse job log for all FTW jobs
The final verification is to test that it is possible to browse the job log for all FTW
jobs in the test job streams.
Note: The browse job log function used for FTW jobs does not use Tivoli
Workload Scheduler for z/OS Data Store to retrieve job logs for FTW jobs.

The test is done from a list with the FTW jobs in the JSC. To browse the job log
right click the FTW job and select Browse Job Log... in the pop-up menu (see
Figure 4-10 on page 199).


Figure 4-10 Browse Job Log from TWS for z/OS JSC

If the job log has not been retrieved before, we will receive a pop-up window
with an information message (see Figure 4-11). The message says that the job
log is not on the TWS for z/OS engine, but that the engine has requested a
copy of the job log from the remote FTW.

Figure 4-11 Pop-up window when browsing a job log that is not on the engine


We try to browse the job log again after some seconds, and the JSC shows a
new pop-up window with the job log, as shown in Figure 4-12.
The arrows on the right side of the pop-up window can be used to scroll up and
down in the job log. The arrows on the bottom of the pop-up window can be used
to scroll left and right in the pop-up window.

Figure 4-12 The Browse Job Log pop-up window

4.3.9 Some general experiences from the test environment


Based on the different tests we performed in our Tivoli Workload Scheduler for
z/OS end-to-end environment, we experienced the following:
The initialization of the distributed network when doing plan extension,
replan, or redistribution of the Symphony file takes some time. The Symphony
file is sent to the primary domain manager and its subordinate domain managers
and fault tolerant agents. Furthermore, every agent is stopped and started to
activate its local copy of the new Symphony file.


Be patient and give the distributed network some time to initialize.
Status changes for FTW jobs can take some time. An FTW job can stay in the
status Started, with the extended status "Job is added to job queue", for some
seconds.
This is normal behavior for the TWS network and is due to the fact that events
may pass through several TWS domain managers to reach their final destination.
Furthermore, the TWS processes have timers that define the intervals at which
they check job statuses.

4.4 Migrating TWS for z/OS tracker agents to TWS for z/OS end-to-end

In this section we describe how to migrate from a TWS for z/OS tracker agent
scheduling environment to a TWS for z/OS end-to-end scheduling environment
with TWS fault tolerant agents. We show you the benefits of migrating to fault
tolerant workstations and provide a step-by-step migration procedure.

4.4.1 Migration benefits


If you plan to migrate to the end-to-end solution you can gain the following
advantages:
The use of fault tolerant technology allows you to continue scheduling without
a continuous connection to the z/OS engine.
The multi-tier architecture enables you to configure your distributed
environment according to logical and geographic needs through the domain
topology.
The monitoring of workload can be separated, based on a dedicated distributed
view.
High availability configuration through:
   The support of Microsoft Service Guard.
   The workstation configuration uses hostnames instead of numeric IP
   addresses.
   You can change workstation addresses without recycling the z/OS engine.
New supported platforms:
   Windows 2000.
   Linux.
   Linux z/OS.
   Other third party access methods (for example, Tandem).
Open extended agent interface, which allows you to write extended agents for
non-supported platforms.
User ID and password definitions for NT workstations are easier to implement
than the impersonation support.
You use the same SAP r3batch interface as the Tivoli Workload Scheduler for
z/OS tracker agents.
TBSM support lets you integrate the entire end-to-end environment.
You do not have to touch your planning-related definitions, such as run
cycles, periods, and calendars.

4.4.2 Migration planning


Before starting the migration process, you should consider the following issues:
Tivoli Workload Scheduler 8.1 does not provide a fully automatic
migration because of the architectural differences between tracker agents and
fault tolerant workstations. Future releases are planned to add migration
tools to help with this task.
You may choose not to migrate your whole tracker agent environment at the
same time. For better planning, we recommend first deciding which part of your
tracker environment is most eligible for migration. This allows you to migrate
smoothly to the new fault tolerant agents. The proper decision can be based
on:

Agents belonging to a certain business unit
Agents running at a specific location or time zone
Agents having dependencies on Tivoli Workload Scheduler for z/OS job
streams
Agents used for testing purposes
The tracker agent topology is not based on any domain manager structure
as used in the Tivoli Workload Scheduler end-to-end solution. Therefore, you
need to plan a topology configuration that suits your needs. Guidelines for
finding the best configuration are detailed in
Section 3.4.1, Network planning and considerations on page 134.


You must be aware of the limitations of this version when moving to the
fault tolerant agents. Refer to Considerations on page 81 for a detailed
discussion of these limitations.
Tip: We recommend that you do not migrate the most critical workload
in your environment first. The migration process requires some handling and
experience; therefore, a good starting point could be to first migrate a test
tracker agent with test scripts. If this is successful, you can continue with less
critical production job streams and finally the most important ones.

4.4.3 Migration checklist


To guide you through the migration, we provide you with a step-by-step checklist.
See Table 4-1.
Table 4-1 Migration checklist
Migration actions                                               Page
1) Install Tivoli Workload Scheduler end-to-end solution.       Installing Tivoli Workload Scheduler end-to-end solution on page 204
2) Install fault tolerant agents on each tracker agent you      Installing fault tolerant agents on page 204
   want to migrate.
3) Define your end-to-end topology.                             Define your end-to-end environment on page 204
4) Transfer scripts to the local machine.                       Transferring scripts to the local machine on page 206
5) Create the script library member in the script library       Create the script library member on page 207
   dataset.
6) Define user ID and password for Windows FTW.                 Define user ID and password for Windows FTWs on page 208
7) Modify the workstation name inside the job instances.        Modify workstation name inside the job instances on page 209
8) Consider parallel testing.                                   Parallel testing on page 210
9) Perform the cutover.                                         Perform the cutover on page 210

4.4.4 Migration actions


We explain each of the migration actions listed in Table 4-1 in detail.


Installing Tivoli Workload Scheduler end-to-end solution


The Tivoli Workload Scheduler end-to-end solution is mandatory, and its
installation and configuration are detailed in Section 3.2, Installing Tivoli Workload
Scheduler for z/OS on page 100.

Installing fault tolerant agents


After you determine which tracker agents you want to migrate, you need to install a
fault tolerant agent on each tracker agent machine. This allows you to migrate a
mixed environment of tracker agents and fault tolerant workstations in a more
controlled way. Both environments might coexist until you decide to perform the
cutover. Cutover means finally switching to the fault tolerant agent after the
testing phase. Installing the fault tolerant agents is explained in great detail in
Section 3.5, Installing TWS in an end-to-end environment on page 141.
Note: If you use alternate workstations for your tracker agents, be aware that
this function is not available for fault tolerant agents. As part of the fault tolerant
technology, an FTW cannot be an alternate workstation.

Define your end-to-end environment


Here you need to define your Tivoli Workload Scheduler topology. The topology
is based on the DOMREC and CPUREC keywords within the topology member.
After you have designed the topology that suits your environment, you need to
reflect it in Tivoli Workload Scheduler for z/OS and create the fault tolerant
workstations in the database.
Tips:
If you decide to define a topology with domain managers, you should
define backup domain managers, too.
To better distinguish the fault tolerant workstations, follow a consistent
naming convention.

We now illustrate an example of a possible domain topology, and the major
difference between a tracker agent topology and an end-to-end topology. First we
show a classic tracker agent environment. This environment consists of multiple
tracker agents on various operating platforms and does not follow any domain
topology (Figure 4-13 on page 205).


Figure 4-13 A classic tracker agent environment

Figure 4-14 on page 206 shows the existing environment adapted to the domain
topology. Now it consists of three domains with one backup domain manager.


Figure 4-14 Domain topology example
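As a sketch, the domain topology shown in Figure 4-14 could be described with DOMREC and CPUREC statements along the following lines. The host names, port numbers, and operating system values are illustrative, and only a subset of the CPUREC keywords is shown; see the topology member descriptions referenced in this book for the complete syntax.

DOMREC DOMAIN(DOMAINZ)  DOMMNGR(DMZ)  DOMPARENT(MASTERDM)
DOMREC DOMAIN(DOMAINA)  DOMMNGR(DMA)  DOMPARENT(DOMAINZ)
DOMREC DOMAIN(DOMAINB)  DOMMNGR(DMB)  DOMPARENT(DOMAINZ)
CPUREC CPUNAME(DMZ)              /* First-level (primary) domain manager */
       CPUDOMAIN(DOMAINZ)
       CPUNODE(dmz.example.com)
       CPUTCPIP(31182)
       CPUOS(AIX)
       CPUTYPE(FTA)
       CPUAUTOLNK(ON)
       CPUFULLSTAT(ON)
       CPURESDEP(ON)
CPUREC CPUNAME(FTA1)             /* Fault tolerant agent in DomainA */
       CPUDOMAIN(DOMAINA)
       CPUNODE(fta1.example.com)
       CPUTCPIP(31182)
       CPUOS(UNIX)
       CPUTYPE(FTA)
       CPUAUTOLNK(ON)
       CPUFULLSTAT(OFF)
       CPURESDEP(OFF)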

During the migration process both environments coexist. This means that on
every machine a tracker agent and a fault tolerant workstation are installed.

Transferring scripts to the local machine


Fault tolerance means, among other things, that the scripts of the distributed
environment are placed on the local machine where they are executed. The
distributed Symphony file does not contain the scripts themselves, only the path
where the fault tolerant agent can find and execute them. This is different from how the
tracker agent works. In a tracker agent environment, the z/OS
engine accesses the central script repository (the EQQJOBLIB) and submits either
the entire script or the path information over the TCP/IP link to the tracker agent
workstation. If the link is broken, there is no way to transfer the workload and the
submission stops. Transferring the scripts to the local machine can be
accomplished by using the file transfer program (FTP). In Appendix A, Centrally
stored OPC controllers to FTAs on page 427, we give you a convenient
automated method for transferring your scripts to your FTAs.
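As a simple manual sketch of such a transfer, a script stored in a z/OS library member can be pulled down to the fault tolerant agent with FTP. The host name, user ID, dataset name, and target path below are illustrative:

# Run on the fault tolerant agent (UNIX); adjust code page translation if needed
ftp -n mvshost <<EOF
user TWSUSER MYPASSWD
ascii
get 'EQQUSER.TWS810.JOBLIB(CLEANUP)' /tivoli/TWS/scripts/cleanup.sh
quit
EOF
chmod 755 /tivoli/TWS/scripts/cleanup.sh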


Please note that the transfer of scripts requires the following considerations:
To keep things consistent and to simplify maintenance activities, use standard script
directories on all fault tolerant agents.
Variable substitution is not supported in the current release. As a
workaround for this limitation, refer to Section 4.4.6, TWS for z/OS
JCL variables in connection with TWS parameters on page 211.
Automatic recovery statements should be commented out, since they are not
supported in the current release. The reason to comment them out rather
than remove them is that, in case of a migration fallback, you will be able to reuse
them.
Be aware that the z/OS engine and the distributed operating systems use
different code pages. This means that after transmission, characters might be
misinterpreted. To resolve this issue, adjust the code page translation of your
transfer program.
Tip: If you still want to use a single script repository, you can transfer
all scripts to one script library on a domain manager or on a dedicated
machine within the TWS network. This machine distributes the scripts to the
local machines as an initial load, and acts as a focal point for all of the scripts
within the network. The script transfer can be accomplished, for example, with a
software distribution product such as Tivoli Software Distribution. Script
changes are performed only within this dedicated library. After the
modifications are done, they can be distributed to the local fault tolerant
agents.

Create the script library member


You need to create a new script library member for each script to specify the job
definition.
Tips:
To simplify this task we recommend using the same member names as in
the joblib. This has the big advantage that you do not have to change the job
names in the job stream definitions during migration.
ISPF numbering in columns 73 to 80 does not cause job execution to fail, as
it does in the tracker agent environment.
The job definition keywords like jobrec, jobscr, jobcmd, and jobusr are not
case-sensitive. The definitions within the keywords are case-sensitive. Take
this into consideration if jobs fail without any apparent reason.


Next, syntax and validation checking of the job definitions takes place. In case of
a syntax problem, a common message that Tivoli Workload Scheduler for z/OS
issues is: EQQM07E Job definition referenced by this occurrence is wrong.
If the joblib member contains only a command or the invocation of a local script
(where the script already resides on the local machine), then you can write this
directly in the script library member, following the scriptlib syntax.

Example
The job library contains a member with the syntax shown in Example 4-15.
Example 4-15 Sample script in job library
#!/bin/ksh
#Sample script
/tivoli/TWS/scripts/cleanup.sh

The script resides on the local machine; the invocation path can be taken over
into the script library member. The submitting user ID is either the user ID of the
tracker agent, or is given by submit exit EQQUX001. Example 4-16 shows the
same script in the script library member.
Example 4-16 Sample script in script library syntax
/* This scriptlibrary job calls the cleanup script */
JOBREC JOBSCR('/tivoli/TWS/scripts/cleanup.sh') JOBUSR(tws-e)

The invocation path is included in the JOBSCR keyword. The submitting user ID is
now part of the definition.
Note: The job submit exit EQQUX001 is not called for fault tolerant
workstations. You can still use the exit unchanged.

For more information about the job definition refer to Job definitions on
page 129.

Define user ID and password for Windows FTWs


For each user running jobs on Windows fault tolerant agents, define a new
USRREC statement to provide the Windows user ID and password. USRREC is
defined in the member of the EQQPARM library as specified by the USERMEM
keyword in the TOPOLOGY statement.
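As a sketch, a USRREC entry could look like the following; the workstation name, user ID, and password are illustrative:

USRREC USRCPU(F102)
       USRNAM(tws)
       USRPSW('tivoli00')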


Important: Because the passwords are not encrypted, we strongly
recommend that you protect the dataset containing the USRREC definitions with
your security product.
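For example, with RACF the dataset could be protected along these lines; the dataset profile and group names are hypothetical:

ADDSD  'TWS810.TOPOLOGY.PARMLIB' UACC(NONE)
PERMIT 'TWS810.TOPOLOGY.PARMLIB' ID(TWSADM) ACCESS(READ)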

If you use the impersonation support for NT tracker agent workstations, it does
not interfere with the USRREC definitions. The impersonation support assigns a
user ID based on the user ID from exit EQQUX001. Since the exit is not called for
fault tolerant workstations, impersonation support is not used anymore.

Modify workstation name inside the job instances


At this point you may distribute the Symphony file via the daily plan batch job or
the Symphony Renew option in order to check whether all FTWs are linked and active.
At this point the Symphony file does not contain any information that would start
any workload on the fault tolerant agents. Job instance definitions in
the Tivoli Workload Scheduler for z/OS database and plans are still pointing to
the tracker agent workstations.
In order to submit workload to the distributed environment, you need to change
the workstation name in your existing job definitions to the new FTW, or add newly
defined job streams to the current plan for testing purposes.
Notes:
Changing the workstation within a job instance from a tracker agent to a
fault tolerant workstation via the Job Scheduling Console is not possible.
We have already raised this issue with development. The change can be
performed via the legacy GUI (ISPF) or the batch loader program.
Be aware that changes to the workstation affect the job stream database
only. If you want to take this modification into the plans, you need to run a
long-term plan (LTP) modify batch job and a current plan extend or replan
batch job.
The highest acceptable return code for FT workstations is 0. Consider this
if you are not able to save your job stream definitions.


Parallel testing
Parallel testing offers you the ability to take over production job streams, with the
exception that they run only test scripts. This extends your testing
capabilities in that you are able to monitor how your tracker agent workload
behaves in the end-to-end environment. The best way to accomplish this is to
run this task on your Tivoli Workload Scheduler for z/OS test system, with
identical end-to-end solution definitions. Follow these steps to run parallel
testing:
1. Create a new scriptlib member that contains the entry shown in Example 4-17.
Example 4-17 Japjob1 test script
/* Sample Test job */
JOBREC JOBSCR('/tivoli/TWS/scripts/japjob1') JOBUSR(tws-e)

The japjob1 script is installed with TWS in the <twshome>/demo directory. It
is just a simple job that waits 15 seconds. It can be used on UNIX and
Windows machines. For Windows jobs you need to change the JOBSCR
keyword to the appropriate path where japjob1 resides.
2. Change the job definitions within the job stream database.
a. Change the job name to the new member name that contains the japjob1
definitions.
b. Change the workstation name of the tracker agents to the FTW name.
c. For large changes you can use the batch loader program.
3. Use either the JSC submit function to add the job stream to the current plan,
or run the daily plan batch job.
Important: If you decide to run parallel testing on your production system, be
aware that an LTP modify batch job overwrites the plan definitions.

Perform the cutover


After you have finished your test period, you can carry the workload over into the
Tivoli Workload Scheduler production system. For this you need to carry your
modifications from the job stream database into the LTP and the current plan.
The right moment to accomplish this task is when tracker agent production
activity is low, to avoid any contention.

4.4.5 Migrating backwards


Perform the following steps for every distributed agent before you begin a
backward migration:
1. Install the tracker agent on the same machine that the FTW resides on.


2. Define a new destination in the controller's ROUTOPTS initialization
statement and restart the z/OS engine.
3. Make a duplicate of the workstation definition of the computer. Define the new
workstation as Computer Automatic instead of Fault Tolerant and specify the
destination you defined in the ROUTOPTS keyword. In this way, the same
computer will be used both as a fault tolerant workstation and a tracker agent,
making the migration smoother.
4. Copy the scripts from the fault tolerant workstation repository to the Tivoli
Workload Scheduler JOBLIB. As an alternative, copy the scripts to a local
directory that can be accessed by the tracker agent and create a JOBLIB
member to execute the script. You can accomplish this by using FTP.
5. If you have disabled the job submit exit EQQUX001, you need to enable it
again. The same applies for the impersonation support.
6. Modify the workstation name inside the operation. Remember to change the
JOBNAME if the member in the JOBLIB has a name different from the
member of the script library.
7. Perform the cutover.

4.4.6 TWS for z/OS JCL variables in connection with TWS parameters
In this section we present a way to relate TWS for z/OS JCL
variables to TWS parameters. This way you can transfer values from TWS for
z/OS JCL variables to TWS parameters. The TWS parameters can be
referenced in jobs (scripts) executed locally on the TWS workstation by using the
TWS parms command.
This is a workaround for the current limitation in TWS for z/OS, where you cannot
use TWS for z/OS JCL variables in jobs on fault tolerant workstations. With this
workaround, we show you a solution where you do not lose the fault tolerance.
The process used when distributing TWS for z/OS JCL variables to parameters on
fault tolerant workstations is shown in Figure 4-15 on page 212.


Figure 4-15 Process of distributing TWS for z/OS JCL variables to TWS parameters

The process, as shown in Figure 4-15, is as follows:

1. An IEBGENER job is run on the TWS for z/OS system. The JCL used in the
IEBGENER job is shown in Example 4-18.
Example 4-18 IEBGENER job
/*%OPC SCAN
//%OJOBNAME. JOB (TWSRES1),'TWS 810',
//             CLASS=A,MSGCLASS=X,
//             MSGLEVEL=(1,1)
//*********************************************************************
//* Job used to update FPARMUPD in SCRPTLIB with TWS for z/OS JCL
//* variables
//*********************************************************************
//* If you need to do some date calculations put your SETVAR
//* (and maybe your SETFORM directives) here before the variables are
//* referenced on the IEBGENER SYSUT1.
//*%OPC SETVAR TVAR1=(OYMD1+1WD)
//*%OPC SETVAR TVAR2=(OYMD1+10WD)
//*-------------------------------------------------------------------
//IEBGEN   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT2   DD DISP=SHR,DSN=EQQUSER.TWS810.SCRPTLIB(FPARMUPD)
//SYSUT1   DD DATA,DLM=AA
/* Definition for FPARMUPD job to be executed on all UNIX machines   */
/*                                                                   */
JOBREC JOBSCR('/tivoli/TWS/scripts/setparms.sh parm1=%TVAR1
parm2=%TVAR2
parm3=Stefan
parm4=%ODAY
parm5=Michael
parm6=Finn
parm7=%OYMD1')
JOBUSR(tws-e)
AA

The TWS for z/OS JCL variables are substituted by TWS for z/OS when the
job is submitted. The definitions on the SYSUT1 DD-card will be placed in the
member FPARMUPD in the dataset EQQUSER.TWS810.SCRPTLIB, pointed
to by the SYSUT2 DD-card (our SCRPTLIB library).
2. The updated FPARMUPD member defines a job on fault tolerant workstations in
a job stream (the FTW jobs column in Figure 4-15 on page 212).
After the IEBGENER job is executed, the FPARMUPD member will look like
Example 4-19 (the job was executed on March 4, 2002).
Example 4-19 Definition for FPARMUPD job
/* Definition for FPARMUPD job to be executed on all UNIX machines   */
/*                                                                   */
JOBREC JOBSCR('/tivoli/TWS/scripts/setparms.sh parm1=020305
parm2=020318
parm3=Stefan
parm4=1
parm5=Michael
parm6=Finn
parm7=020304')
JOBUSR(tws-e)

3. When the fault tolerant workstation jobs are executed on the local
workstation, they call a script named setparms.sh. In our example this script is
called with seven parameters (parm1=, parm2=, ..., parm7=).
The setparms.sh script is shown in Example 4-20.
Example 4-20 setparms.sh script
#!/bin/ksh
TWS_HOME=/tivoli/TWS/E/tws-e
for p in $*
do
  PARM_NAME=$( echo ${p} | cut -d= -f1 )
  PARM_VALUE=$( echo ${p} | cut -d= -f2 )
  PARM_CMD="${TWS_HOME}/bin/parms -c $PARM_NAME ${PARM_VALUE}"
  echo "Running command: ${PARM_CMD}"
  ${PARM_CMD}
done


Note that the setparms.sh script will loop through all the parameters supplied
in the JOBSCR definition. This way it is possible to specify more than seven
parameters. The length of the script defined in JOBSCR can be up to 4096
characters.
The parameters are added to the TWS parameter database by the parms
command:
PARM_CMD="${TWS_HOME}/bin/parms -c $PARM_NAME ${PARM_VALUE}"

The first time the parms command is run on a workstation, it will create the
parameters database if it does not already exist.
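Downstream job scripts can then read the values back locally with the same parms command. A hypothetical consumer script could look like this:

#!/bin/ksh
# Read back the parameters set by setparms.sh (parms without -c prints the value)
TWS_HOME=/tivoli/TWS/E/tws-e
RUNDATE=$(${TWS_HOME}/bin/parms parm1)
NEXTDATE=$(${TWS_HOME}/bin/parms parm2)
echo "Processing window: ${RUNDATE} to ${NEXTDATE}"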
We define the IEBGENER job in its own job stream. The job stream has the same
run-cycle as the daily housekeeping flow. The IEBGENER job is made a
predecessor to the current plan extend job in the daily housekeeping job stream.
This way the FPARMUPD member will be updated right before the TWS for z/OS
plan is extended and a new Symphony file is created. The parameters will then
contain the JCL variable values for today's production.
The fault tolerant workstation jobs (FPARMUPD-10, FPARMUPD-15 and
FPARMUPD-20) are defined in the same job stream, FXXXDWPARMUPDATE
(see Figure 4-16).

Figure 4-16 The FXXXDWPARMUPDATE job stream with FTW jobs


The FXXXDWPARMUPDATE job stream has the same run-cycle as the daily
housekeeping flow. The DUMMYSTR-5 job is made a successor to the current
plan extend job in the daily housekeeping job stream. This way the parameters
will be updated on the local workstation right after the daily plan job is completed.
The FXXXDWPARMUPDATE job stream must run right after the current plan
extend job. The Symphony file is updated and distributed to the distributed
network by the current plan job. The FPARMUPD job that was updated by our
IEBGENER job will be read into the Symphony file by the current plan job. When
the Symphony file is distributed to the workstations, the local parameter
database will be updated with our JCL variable values.
The FXXXDWPARMUPDATE job stream (and its jobs) must be executed before
any other job streams (or jobs) that use the parameters. These job streams
should have the FXXXDWPARMUPDATE jobs as predecessors.
In Figure 4-17 you can see the parameters after the FPARMUPD job has been
executed on the F100 workstation (remember that the IEBGENER job was
executed on March 4, 2002).

Figure 4-17 The parameters in the F100 workstation parameter database


To view the parameter database, we use the JSC instance pointing to our
primary domain manager (TWSC-F100-Eastham in Figure 4-17 on page 215). In
this instance we have created a parameter database list and called it
Parameters. This way we can check the parameters created or updated locally
on the primary domain manager.
Note: We experienced some minor problems with the parms command after
updating TWS with patch 8-1-TWS-0001. After applying the patch, it was not
possible to issue the parms command. We did find a workaround, however:
build the jobs database on the workstation.

The problem was reported in APAR IY28084.

4.5 Conversion from TWS network to TWS for z/OS managed network
In this section we will outline the guidelines for how to convert a TWS network to
a TWS for z/OS managed network.
The distributed TWS network is managed by a TWS master domain manager.
The TWS master domain manager manages the databases and the plan.
Converting the TWS managed network to a TWS for z/OS managed network
means that we are moving the responsibility for database and plan management
from the TWS master domain manager to the TWS for z/OS engine.

4.5.1 Illustration of the conversion process


In Figure 4-18 on page 217 we have a distributed TWS network. The database
management and daily planning are carried out by the TWS master domain
manager.


Figure 4-18 TWS distributed network with a master domain manager

In Figure 4-19 on page 218, we have a TWS for z/OS managed network. The
database management and daily planning are carried out by the TWS for z/OS
engine.


Figure 4-19 TWS for z/OS network

The conversion process is to change the TWS master domain manager to the
primary domain manager and then connect it to the TWS for z/OS engine (new
master domain manager). The result of the conversion is a new end-to-end
network managed by the TWS for z/OS engine (see Figure 4-20 on page 219).


Figure 4-20 TWS for z/OS managed end-to-end network

4.5.2 Considerations before doing the conversion


Before you start to convert your TWS managed network to a TWS for z/OS
managed network, you should evaluate the pros and cons of doing the
conversion.
The pros and cons of doing the conversion will differ from installation to
installation. Some installations will gain significant benefits from doing the
conversion, while other installations will gain fewer benefits. Based on the
outcome of this evaluation, you should be able to make the right decision for
your specific installation and its current usage of TWS as well as TWS for z/OS.


We have outlined the rationale for conversion to TWS for z/OS end-to-end
scheduling. See The rationale behind the conversion to end-to-end scheduling
on page 171. Some important aspects of the conversion you should consider
are:
How is your TWS and TWS for z/OS organization structured today?

Do you have two organizations working independently of each other?
Do you have two groups of operators and planners to manage TWS and
TWS for z/OS?
Or do you have one group of operators and planners that manages both
the TWS and TWS for z/OS environments?
Do you spend considerable resources keeping a high skill level for both
products, TWS and TWS for z/OS?
How integrated is the workload managed by TWS and TWS for z/OS?

Do you have dependencies between jobs in TWS and jobs in TWS for z/OS?
Or do most of the jobs in one scheduler (TWS or TWS for z/OS) run
independently of jobs in the other scheduler (TWS or TWS for z/OS)?
Have you already managed to resolve dependencies between jobs in TWS
and in TWS for z/OS in an efficient way?
The current usage of TWS-specific functions that are not available in TWS for
z/OS.

How intensive is the usage of prompts, file dependencies, and repeat
range (run a job every 10 minutes) in TWS?
Can these TWS-specific functions be replaced by TWS for z/OS specific
functions, or should they be handled in another way?
Does this require some locally developed tools, programs, or workarounds?
How intensive is the usage of TWS job recovery definitions?
Is it possible to handle these TWS recovery definitions in another way
when the job is managed by TWS for z/OS?
Does this require some locally developed tools, programs, or workarounds?
Will TWS for z/OS give you some of the functions you are missing in TWS
today?

Extended planning capabilities, a long-term plan, a current plan that spans
more than 24 hours?
Better handling of carry-forward job streams?
Powerful run-cycle and calendar functions?


Which platforms or systems are going to be managed by the TWS for z/OS
end-to-end scheduling?
What kind of integration do you have between TWS and, for example, SAP
R/3, PeopleSoft, or Oracle Applications?
Is a partial conversion of some jobs from the TWS-managed network to the TWS
for z/OS managed network an option?
Partial conversion example: 15 percent of your TWS-managed jobs or workload is
directly related to the TWS for z/OS jobs or workload. This means the TWS
jobs are either predecessors to TWS for z/OS jobs or successors to TWS for
z/OS jobs, and the current handling of these TWS and TWS for z/OS
inter-dependencies is not effective or stable with the solution you have today.

Converting these 15 percent of the jobs to TWS for z/OS managed scheduling
using the end-to-end solution will stabilize the dependency handling and make
the scheduling more reliable. Note that this requires two instances of TWS
workstations (one for TWS and one for TWS for z/OS).
Effort to convert TWS database object definitions to TWS for z/OS database
object definitions.

Will it be possible to convert the database objects with reasonable resources


and within a reasonable time frame?

4.5.3 The conversion process from TWS to TWS for z/OS


Conversion from a TWS-managed network to a TWS for z/OS-managed network
is a process that contains several steps. In the following description we assume
that we have an active TWS for z/OS environment as well as an active TWS
environment. Furthermore, we assume that the TWS for z/OS end-to-end server
is installed and ready for use. The conversion process mainly contains the
following steps or tasks:
1. Plan the conversion and establish new naming standards.
2. Install new TWS workstation instances dedicated to communicate with the
TWS for z/OS server.
3. Define the topology of the TWS network in TWS for z/OS and define
associated TWS for z/OS fault tolerant workstations.
4. Create JOBSCR members (in the SCRPTLIB dataset) for all TWS-managed
jobs that should be converted.
5. Convert the database objects from TWS format to TWS for z/OS format.
6. Educate planners and operators in the new TWS for z/OS server functions.


7. Test and verify the conversion and finalize for production.


The sequencing of these steps may be different in your environment, depending
on the strategy you follow when doing the conversion.
Let's take a look at these conversion steps in more detail.

Step 1. Planning the conversion


The conversion from TWS-managed scheduling to TWS for z/OS-managed
scheduling can be a major project and may require considerable resources,
depending on the current size and usage of the TWS environment. Planning the
conversion is an important task; it can be used to estimate the effort required to
do the conversion as well as to detail the different conversion steps.
In the planning phase you should try to identify special usage of TWS functions
or facilities that are not easily converted to TWS for z/OS. Furthermore, you
should try to outline how these functions or facilities should be handled when
scheduling is done by TWS for z/OS.
Part of planning is also establishing the new naming standards for all or some of
the TWS objects that are going to be converted. Some examples:
Naming standards for the fault tolerant workstations in TWS for z/OS

Names for workstations can be up to 16 characters in TWS (if you are using
expanded databases in TWS). In TWS for z/OS, workstation names can be
up to four characters. This means you have to establish a new naming
standard for the fault tolerant workstations in TWS for z/OS.
Naming standards for job names

In TWS you can specify job names with lengths of up to 40 characters (if you
are using expanded databases in TWS). In TWS for z/OS, job names can be
up to eight characters. This means that you have to establish a new naming
standard for jobs on fault tolerant workstations in TWS for z/OS.
Adoption of the existing TWS for z/OS object naming standards

You probably already have naming standards for job streams, workstations,
job names, resources, and calendars in TWS for z/OS. When converting
TWS database objects to the TWS for z/OS databases, you need to adapt the
TWS for z/OS naming standard.
Access to the objects in the TWS for z/OS database and plan

Access to TWS for z/OS database and plan objects is protected by your
security product (for example, RACF). Depending on the naming standards
for the imported TWS objects, you may need to modify the definitions in your
security product.


Is the current TWS network topology suitable, and can it be implemented
directly in a TWS for z/OS server?

Maybe the current TWS network topology, as it is implemented today, needs
some adjustments to be optimal. If your TWS network topology is not
optimal, it should be reconfigured when implemented in TWS for z/OS
end-to-end.

Step 2. Install TWS workstation instances for TWS for z/OS


With TWS workstation instances we mean installation and configuration of a new
TWS engine. This engine should be configured to be a domain manager, fault
tolerant agent, or a backup domain manager, according to the TWS production
environment you are going to mirror. Following this approach you will have two
instances on all the TWS managed systems:
1. One old TWS workstation instance dedicated to the TWS master.
2. One new TWS workstation instance dedicated to the TWS for z/OS engine
(master). Remember to use different port numbers.
By creating dedicated TWS workstation instances for TWS for z/OS scheduling,
you can start testing the new environment without disturbing the distributed TWS
production environment. This also makes it possible to do partial conversion,
testing, and verification without interfering with the TWS production environment.
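As a sketch, the TCP port that the new instance's netman listens on is set with the nm port entry in its localopts file; the port number below is illustrative, and the old instance keeps its original port:

# localopts of the new workstation instance dedicated to TWS for z/OS
nm port =31182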
You can choose different approaches for the conversion:
Try to group your TWS job streams and jobs into logical and isolated groups
and then convert them group by group.
Convert all job streams and jobs, run some parallel testing and verification,
and then do the switch from TWS-managed scheduling to TWS for
z/OS-managed scheduling in one final step.

The suitable approach differs from installation to installation. Some installations
will be able to group job streams and jobs into isolated groups, while others will
not. You have to decide the strategy for the conversion based on your
installation.


Note: If you decide to reuse the TWS distributed workstation instances in your
TWS for z/OS managed network, this is also possible. You may decide to
move the distributed workstations one by one (depending on how you have
grouped your job streams and how you are doing the conversion). When a
workstation is going to be moved to TWS for z/OS, you simply change the port
number in the localopts file on the TWS workstation. The workstation will then
be active in TWS for z/OS at the next plan extension, replan, or redistribution
of the Symphony file (remember to create the associated DOMREC and
CPUREC definitions in the TWS for z/OS initialization statements).

Step 3. Define topology of TWS network in TWS for z/OS


The topology for your TWS distributed network can be implemented directly in
TWS for z/OS. This is done by creating the associated DOMREC and CPUREC
definitions in the TWS for z/OS initialization statements.
To activate the topology definitions, create the associated definitions for fault
tolerant workstations in the TWS for z/OS workstation database. TWS for z/OS
extend or replan will activate these new workstation definitions.
If you are using dedicated TWS workstation instances for TWS for z/OS scheduling, you
can create the topology definitions in an early stage of the conversion process.
This way you can:
Verify that the topology definitions are correct in TWS for z/OS.
Verify that the dedicated fault tolerant workstations are linked and available.
Start getting some experience with the management of fault tolerant
workstations and a distributed TWS network. Implement monitoring and
handling routines in your automation application on z/OS.

Step 4. Create JOBSCR members for all TWS-managed jobs


TWS-managed jobs that should be converted to TWS for z/OS must be defined
in the SCRPTLIB dataset. For every active job defined in the TWS database you
define a member in the SCRPTLIB dataset containing:
Name of the script or command for the job (defined in the JOBREC
JOBSCR() or the JOBREC JOBCMD() specification)
Name of the user ID that the job should execute under (defined in the
JOBREC JOBUSR() specification)
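For example, a job that runs a single command rather than a script could be defined as follows; the command and user ID are illustrative:

/* SCRPTLIB member for a command-only job */
JOBREC JOBCMD('ls -ltr /tivoli/TWS/scripts') JOBUSR(tws-e)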


Note: If the same job script is going to be executed on several systems (it is
defined on several workstations in TWS) you only need to create one member
in the SCRPTLIB dataset. This job (member) can be defined on several fault
tolerant workstations in several job streams in TWS for z/OS. It requires that
the script is placed in a common directory (path) across all systems.

Step 5. Convert database objects from TWS to TWS for z/OS


TWS database objects that should be converted are job streams, resources, and
calendars. These TWS database objects probably cannot be converted directly
to TWS for z/OS. In this case you must amend the TWS database objects to
TWS for z/OS format and create the corresponding objects in the respective
TWS for z/OS databases.
Pay special attention to object definitions like:
Run-cycles for job streams and the usage of calendars in TWS
Usage of local (workstation-specific) resources in TWS (local resources are
converted to global resources resolved by the TWS for z/OS master)
Jobs defined with a repeat range (for example, run every 10 minutes) in job
streams
Job streams defined with dependencies at the job stream level
Jobs defined with TWS recovery actions

For these object definitions you have to design alternative ways of handling for
TWS for z/OS.

Step 6. Education for planners and operators


Some of the handling of distributed TWS jobs in TWS for z/OS will be different
from the handling in TWS. Also some specific fault tolerant workstation features
will be available in TWS for z/OS.
You should plan for the education of your operators and planners so they have
knowledge of:
How to define jobs and job streams for the TWS fault tolerant workstations
Specific rules to be followed for scheduling objects related to fault tolerant
workstations
How to handle jobs and job streams on fault tolerant workstations
How to handle resources for fault tolerant workstations
The implications of doing, for example, Symphony redistribution


How TWS for z/OS end-to-end scheduling works (engine, server, domain
managers, etc.)
How the TWS network topology has been adopted in TWS for z/OS

Step 7. Test and verify conversion and finalize for production


After testing your approach for the conversion, doing some trial conversions, and
testing the conversion carefully, it is time to do the final conversion to TWS for
z/OS.
The goal is to reach this final conversion and switch from TWS scheduling to
TWS for z/OS scheduling within a reasonable time frame and with a reasonable
level of errors. If the period when you are running TWS and TWS for z/OS is
going to be too long, your planners and operators must handle two environments
in this period. This is not effective and can cause some frustration for both
planners and operators.
The key to a successful conversion is good planning, testing, and verification.
When you are comfortable with the testing and verification it is safe to do the final
conversion and finalize for production.
TWS for z/OS will then handle the central and the distributed workload, and you
will have one focal point for your workload. The converted TWS production
environment can be stopped.

4.5.4 Some guidelines to automate the conversion process


If you have a large TWS scheduling environment, doing manual conversion will
be too time-consuming. In this case you should consider trying to automate some
or all of the conversion from TWS to TWS for z/OS.
One obvious place to automate is the conversion of TWS database objects to
TWS for z/OS database objects. Although this is not a trivial task, some
automation can be implemented. Automation requires some locally developed
tools or programs to handle conversion of the database objects.
Some guidelines to help automate the conversion process are:
Create text copies of all the TWS database objects by using the composer
create command. See Example 4-21.
Example 4-21 TWS object creation
composer create calendars.txt    from CALENDARS
composer create workstations.txt from CPU=@
composer create jobdef.txt       from JOBS=@#@
composer create jobstream.txt    from SCHED=@#@
composer create parameter.txt    from PARMS
composer create resources.txt    from RESOURCES
composer create prompts.txt      from PROMPTS
composer create users.txt        from USERS=@#@

These text files are a good starting point when trying to estimate the effort and
time for conversion from TWS to TWS for z/OS.
Use the workstations.txt file when creating the topology definitions (DOMREC
and CPUREC) in TWS for z/OS.

Creating the topology definitions in TWS for z/OS based on the
workstations.txt file is quite straightforward. The task can be automated by
coding a program (script or REXX) that reads the workstations.txt file and
converts the definitions to DOMREC and CPUREC specifications. Be aware
that TWS CPU class definitions cannot be converted directly to similar
definitions in TWS for z/OS.
Use the jobdef.txt file when creating the SCRPTLIB members.

In the jobdef.txt file you have the workstation name for the script (used in the job
stream definition), the script name (goes into the JOBREC JOBSCR()
definition), the stream logon (goes into the JOBREC JOBUSR() definition), the
description (can be added as comments in the SCRPTLIB member), and the
recovery definition. The recovery definition needs special consideration
because it cannot be converted to TWS for z/OS auto-recovery. Here you
need to make some workarounds. Usage of TWS CPU class definitions also
needs special consideration. The job definitions using CPU classes probably
have to be copied to separate workstation (CPU) specific job definitions in
TWS for z/OS. The task can be automated by coding a program (script or
REXX) that reads the jobdef.txt file and converts each job definition to a
member in the SCRPTLIB. If you have many TWS job definitions, a
program that helps automate this task can save a considerable amount of
time.
The users.txt file (if you have Windows NT/2000 jobs) is converted to
USRREC initialization statements on TWS for z/OS.

Be aware that the passwords for the user IDs are encrypted in the users.txt file,
so you cannot automate the conversion right away. You need to get each
password as it is defined on the Windows workstations, and type it in the
USRREC USRPSW() definition.
The jobstream.txt file is used to generate corresponding job streams in TWS
for z/OS. The calendars.txt file is used in connection with the jobstream.txt file
when generating run-cycles for the job streams in TWS for z/OS. It could be
necessary to create additional calendars in TWS for z/OS.


When doing the conversion, you should notice that:


Some of the TWS job stream definitions cannot be converted directly to
TWS for z/OS job stream definitions, for example, prompts, workstation-specific
resources, file dependencies, and jobs with a repeat range.
For these definitions you have to analyze the usage and find other ways to
implement similar functions when using TWS for z/OS.
Some of the TWS job stream definitions need to be amended to TWS for
z/OS definitions. For example:

Dependencies at the job stream level (use dummy start and dummy end
jobs in TWS for z/OS for job stream dependencies).
Note that such dependencies also include dependencies on prompts, files,
and resources.

TWS job and job stream priority (0 to 101) must be amended to TWS
for z/OS priority (1 to 9). Furthermore, priority in TWS for z/OS is
always on job stream level (it is not possible to specify priority on job
level).

Job stream run-cycles (and calendars) must be converted to TWS for


z/OS run-cycles (and calendars).

Description texts longer than 24 characters are not allowed for job streams
or jobs in TWS for z/OS. If you have TWS job streams or jobs with more
than 24 characters of description text, you should consider adding this text
as TWS for z/OS operator instructions.
If you have a large number of TWS job streams, manual handling of job
streams can be too time-consuming. The task can be automated to a certain
extent by coding a program (script or REXX).
A good starting point is to code a program that identifies all areas where you
need special consideration or action. Use the output from this program to
estimate the effort of doing the conversion. Furthermore, the output can be
used to identify and group the TWS functions in use for which special
workarounds must be performed when converting to TWS for z/OS.
The program can be further refined to handle the actual conversion,
performing the following steps:
Read all the text files.
Analyze the job stream and job definitions.
Create corresponding TWS for z/OS job streams with amended run-cycles
and jobs.


Generate a file with TWS for z/OS batch loader statements for job streams
and jobs (batch loader statements are TWS for z/OS job stream definitions
in a format that can be loaded directly into the TWS for z/OS databases).
The batch loader file can be sent to the z/OS system and used as input to the
TWS for z/OS batch loader program. The TWS for z/OS batch loader will read
the file (dataset) and create the job streams and jobs defined in the batch
loader statements.
The resources.txt file is used to define the corresponding resources in TWS
for z/OS.

Remember that local (workstation-specific) resources are not allowed in TWS
for z/OS. This means that the TWS workstation-specific resources will be
converted to global special resources in TWS for z/OS.
The TWS for z/OS engine is directly involved when resolving a dependency on
a global resource. A fault tolerant workstation job needs to interact with the
TWS for z/OS engine to resolve a resource dependency. This can jeopardize
the fault tolerance in your network.
The usage of parameters in the parameters.txt file needs to be analyzed.

What are the parameters used for?


Are the parameters used for date calculations?
Are the parameters used to pass information from one job to another job
(using the TWS parms command)?
Are the parameters used as parts of job definitions, for example, to specify
where the script is placed?
Depending on how you use the TWS parameters, there are different
approaches when converting to TWS for z/OS. Unless you use parameters as
part of TWS object definitions, you usually do not have to do any conversion;
parameters will still work after the conversion.
You do have to copy the parameter database to the home directory of the
TWS fault tolerant workstations. The parms command can still be used locally
on the fault tolerant workstation, when managed by TWS for z/OS.
In Section 4.4.6, TWS for z/OS JCL variables in connection with TWS
parameters on page 211, we show you how to use TWS parameters in
connection with TWS for z/OS JCL variables. This is a way to pass values for
TWS for z/OS JCL variables to TWS parameters so that they can be used
locally on the fault tolerant workstation.
In Appendix C, Merging TWS for z/OS and TWS databases on page 453,
you can find some further considerations about merging TWS for z/OS and
TWS.


4.6 TWS for z/OS end-to-end fail-over scenarios


In this section we will describe how to make the TWS for z/OS end-to-end
environment fail-safe and plan for system outages. Furthermore, we show some
fail-over example scenarios.
To make your TWS for z/OS end-to-end environment fail-safe, you have to:
Configure TWS for z/OS backup engines (also called hot standby engines) in
your sysplex.

If you do not run a sysplex, but have more than one z/OS system with shared
DASD, then you should make sure that the Tivoli Workload Scheduler for
z/OS engine can be moved from one system to another without any
problems.
Configure your z/OS systems to use VIPA.

VIPA is used to make sure that the TWS for z/OS end-to-end server always
gets the same IP address no matter which z/OS system it is run on. VIPA
assigns a system-independent IP address to the TWS for z/OS server task.
If you cannot use VIPA, you should consider other ways
of assigning a system-independent IP address to the Tivoli Workload
Scheduler for z/OS server task. This can, for example, be a hostname file,
DNS, or stack affinity.
Configure a backup domain manager for the primary domain manager.

Refer to the TWS for z/OS end-to-end configuration, shown in Figure 4-1 on
page 174, for the fail-over scenarios.
When the environment is configured to be fail-safe, the next step is to test that
the environment actually is fail-safe. We did the following fail-over tests:
Switch to the TWS for z/OS backup engine.
Switch to the TWS backup domain manager.

Finally we describe some High Availability configuration guidelines for TWS


domain managers and fault tolerant agents in High Availability Cluster
Multi-Processing (HACMP) and other HA environments.

4.6.1 Configure TWS for z/OS backup engines


To make sure that the TWS for z/OS engine will be started, either as active
engine or standby engine, we specify:
OPCOPTS  OPCHOST(PLEX)

In the initialization statements for the TWS for z/OS engine (pointed to by the
member of the EQQPARM library as specified by the parm parameter on the JCL
EXEC statement), OPCHOST(PLEX) means that the engine has to start as the
controlling system. If there is already an active engine in the XCF group, the
startup for the engine continues as a standby engine.
Note: OPCOPTS OPCHOST(YES) must be specified if you start the engine
with an empty checkpoint dataset. This is the case, for example, the first time you
start a newly installed engine or after you have migrated from a previous release of
TWS for z/OS.

OPCHOST(PLEX) is valid only when the XCF group and member have been
specified. Also, this selection requires that TWS for z/OS is running on
z/OS/ESA Version 4 Release 1 or later. Since we are running z/OS 1.3 we can
use the OPCHOST(PLEX) definition. We specify the following in the XCF
group and member definitions for the engine:
XCFOPTS  GROUP(TWS810)
         MEMBER(TWSC&SYSNAME.)
/*       TAKEOVER(SYSFAIL,HOSTFAIL)    Do takeover manually !!        */

Tip: We use the z/OS sysplex-wide SYSNAME variable when specifying the
member name for the engine in the sysplex. Using z/OS variables this way we
can have common TWS for z/OS parameter member definitions for all our
engines (and agents as well).

When, for example, the engine is started on SC63,
MEMBER(TWSC&SYSNAME) will be MEMBER(TWSCSC63).
You must have unique member names for all your engines (active and standby)
running in the same sysplex. We ensure this by using the SYSNAME variable.


Tip: We have not activated the TAKEOVER(SYSFAIL,HOSTFAIL) parameter
in XCFOPTS because we do not want the engine to switch automatically to
one of its backup engines in case the active engine fails or the system fails.

Since we have not specified the TAKEOVER parameter, we are doing the
switch to one of the backup engines manually. The switch is done by issuing
the following modify command on the z/OS system where you want the
backup engine to take over:
F TWSC,TAKEOVER

Where TWSC is the name of our TWS for z/OS backup engine started task
(same name on all systems in the sysplex).
The takeover can, for example, be managed by SA/390. This way SA/390 can
integrate the switch to a backup engine with other automation tasks in the
engine or on the system.
We did not define a TWS for z/OS APPC server task for the Tivoli Workload
Scheduler for z/OS panels and PIF programs, as described in Remote panels
and program interface applications on page 43, but it is strongly recommended
to use a TWS for z/OS APPC server task in sysplex environments where the
engine can be moved to different systems in the sysplex. If you do not use the
TWS for z/OS APPC server task you must log off and then log on to the system
where the engine is active. This can be avoided by using the TWS for z/OS
APPC server task.

4.6.2 Configure DVIPA for the TWS for z/OS end-to-end server
To make sure that the engine can be moved from SC64 to either SC63 or SC65,
Dynamic VIPA is used to define the IP address for the server task. This DVIPA IP
address is defined in the profile dataset pointed to by the PROFILE DD-card in the
TCPIP started task.
The VIPA definition used to define logical sysplex-wide IP addresses for the TWS
for z/OS end-to-end server, the engine, and the JSC server is shown in Example 4-22.
Example 4-22 The VIPA definition
VIPADYNAMIC
  viparange define 255.255.255.248 9.12.6.104
ENDVIPADYNAMIC
PORT
  424   TCP TWSC     BIND 9.12.6.105
  5000  TCP TWSCJSC  BIND 9.12.6.106
  31281 TCP TWSCTP   BIND 9.12.6.107

In Example 4-22 on page 232 the first column under PORT is the port number,
the third column is the name of the started task, and the fifth column is the logical
sysplex-wide IP address.
Port 424 is used for the TWS for z/OS tracker agent IP address, port 5000 for the
TWS for z/OS JSC server task, and port 31281 is used for the TWS for z/OS
end-to-end server task.
With these VIPA definitions we have made a relation between the port number,
the started task name, and the logical IP address that can be used sysplex-wide.
The TWSCTP hostname and the 31281 port number used for the TWS for z/OS
end-to-end server are defined in the TOPOLOGY HOSTNAME(TWSCTP)
initialization statement used by the TWSCTP server and the TWS for z/OS plan
programs.
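As a sketch, the relevant part of the TOPOLOGY statement looks like the following; other keywords (for example, the USERMEM keyword mentioned earlier) are omitted, so verify the full statement against your initialization statement reference:

TOPOLOGY HOSTNAME(TWSCTP)     /* Sysplex-wide DVIPA hostname for the server */
         PORTNUMBER(31281)    /* Port reserved for the end-to-end server    */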
When the TWS for z/OS engine creates the Symphony file, the TWSCTP
hostname and 31281 port number will be part of the Symphony file. The primary
domain manager (F100) and the backup domain manager (F101) will use this
hostname when they establish outbound IP connections to the TWS for z/OS
server. The backup domain manager only establishes outbound IP connections
to the TWS for z/OS server if it is going to take over the responsibilities for the
primary domain manager.
Tip: Dynamic VIPA redirects the inbound connection from mailman on the
primary domain manager (F100) to IP address 9.12.6.107. But the outbound
connection from the TWS for z/OS server mailman will use the IP address for
the z/OS system. The local IP address for outbound connections is
determined by the routing table on the z/OS system.

If nm ipvalidate is set to full in the localopts file on the primary domain
manager (F100) or the backup domain manager (F101), the outbound
connection will not be established when the engine is moved to one of the
backup engines in the sysplex. In this case you should use static VIPA
definitions to force the outbound connection to be established on the IP
address defined in VIPA.

4.6.3 Configure backup domain manager for primary domain manager
We configure the F101 fault tolerant agent to be the backup domain manager for
the F100 primary domain manager.


The F101 fault tolerant agent can be configured to be the backup domain
manager simply by specifying:
CPUFULLSTAT(ON)    /* Full status on for Backup DM  */
CPURESDEP(ON)      /* Resolve dep. on for Backup DM */

For the CPUREC definition for the F101 workstations, see Define the topology
initialization statements on page 179.
With CPUFULLSTAT (full status information) and CPURESDEP (resolve
dependency information) set to On, the Symphony file on F101 is updated with
the same reporting and logging information as the Symphony file on F100. The
backup domain manager will then be able to take over the responsibilities of the
primary domain manager.

4.6.4 Switch to TWS for z/OS backup engine


In this scenario we cancel the TWSCTP server task and the TWSC engine task
on SC64 and ask the backup engine on SC63 to take over. We cancel the
TWSCTP task first, to bring it down outside the control of TWSC.
Before we bring down the TWSCTP server and the TWSC engine, we start a
job stream on F101 with four jobs that run while we are switching to the backup
engine on SC63.
The steps in the scenario:
1. Start a F101DWTESTSTREAM job stream.
2. Bring down the TWSCTP server task and the TWSC engine task on SC64.
3. Activate the backup TWSC engine on the SC63 system.
4. Verify that the jobs in the F101DWTESTSTREAM job stream are tracked
correctly in the new engine.

Step 1. Start a F101DWTESTSTREAM job stream


The F101DWTESTSTREAM job stream is added to the TWS for z/OS plan and
the first FTW job is running (see Figure 4-21).

Figure 4-21 Job stream added to plan in TWS for z/OS


Step 2. Bring down the TWSCTP server and TWSC engine on SC64

Bring down the TWSCTP server and the TWSC engine on SC64. The two tasks are cancelled by issuing the following commands on the z/OS master console:
C TWSCTP
C TWSC

When canceling the TWSCTP server task we get several BPXP018I messages:
BPXP018I THREAD 1058D09000000000, IN PROCESS 16908339, ENDED
WITHOUT BEING UNDUBBED WITH COMPLETION CODE 00222000,
AND REASON CODE 00000000.


The OPCMASTER workstation gets an unlinked status on the F100 domain manager (checked with the JSC instance pointing to the F100 domain manager).

Step 3. Activate the backup TWSC engine on the SC63 system


The backup TWSC engine on the SC63 system notices that the active TWSC engine on SC64 is gone and issues the following messages in EQQMLOG:
EQQZ127W THE OPC HOST IN XCF GROUP TWS810 HAS FAILED.
EQQZ127I A TAKEOVER BY THE STANDBY HOST TWSCSC63 MUST BE INITIATED
EQQZ127I BY THE OPERATOR.

We initiate the takeover in the TWSC backup engine on the SC63 system by issuing the following modify command:
F TWSC,TAKEOVER

The TWSC engine on SC63 does the takeover and issues the following messages:
EQQZ048I AN OPC MODIFY COMMAND HAS BEEN PROCESSED.
EQQZ129I TAKEOVER IN PROGRESS

MODIFY TWSC,TAKEOVER

As part of the takeover, the TWSC engine on SC63 issues a start command for
the TWSCTP end-to-end server task on SC63.

Step 4. Verify the jobs in F101DWTESTSTREAM


We verify that the jobs in the F101DWTESTSTREAM job stream are tracked
correctly (see Figure 4-22 on page 236).


Figure 4-22 Jobs in F101DWTESTSTREAM after switch to backup engine

As seen in Figure 4-22, all jobs in the F101DWTESTSTREAM job stream are tracked correctly (Status = Successful).
Note: The scheduling and tracking of F101* jobs is done by the F101 fault tolerant agent. The F101 fault tolerant agent carried on with its work while we made the shift to the backup engine.

It can take some time before the job status is updated in the z/OS plan after the switch to a backup engine. The mailman process on the primary domain manager by default waits 600 seconds before it attempts to link to an unlinked workstation (specified with the mm retrylink option in the localopts file).
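As a hedged sketch, the relevant localopts entry on the domain manager might look like the following; 600 is the shipped default, and you should verify the exact option name against the localopts file delivered with your release:

# Seconds to wait before retrying the link to an unlinked workstation (illustrative)
mm retrylink = 600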

4.6.5 Switch to TWS backup domain manager


This scenario is divided into two parts:
A short-term switch to the backup manager
By a short-term switch we mean that we have switched back to the original domain manager before the current plan is extended or replanned.
A long-term switch
By a long-term switch we mean that the switch to the backup manager will be effective across the current plan extension or replan.

Short-term switch to the backup manager


In this scenario we issue a switchmgr command on the F101 backup domain
manager. We verify that the F101 takes over the responsibilities of the old
primary domain manager.
The steps in the short-term switch scenario are:
1. Issue the switch command on the F101 backup domain manager.
2. Verify that the switch is done.


Step 1. Issue switch command on F101 backup domain manager


Before we do the switch, we check the status of the workstations from a JSC
instance pointing to the primary domain manager (see Figure 4-23).

Figure 4-23 Status for workstations before the switch to F101

Notice from Figure 4-23 that F100 is MANAGER (see the CPU Type column) for
the DM100 domain. F101 is FTA (see the CPU Type column) in the DM100
domain.
To simulate that the F100 primary domain manager is down or unavailable due to
a system failure, we issue the switch manager command on the F101 backup
domain manager. The switch manager command is initiated from the conman
command line on F101:
conman switchmgr "DM100;F101"

Where DM100 is the domain and F101 is the fault tolerant workstation we are
going to switch to.
The F101 fault tolerant workstation responds with the message shown in
Example 4-23.
Example 4-23 Message from F101
TWS for UNIX (AIX)/CONMAN 8.1 (1.36.1.3)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
Corp.
Installed for group 'TWS-EndToEnd'.


Locale LANG set to "en_US"


Schedule (Exp) 02/27/02 (#107) on F101. Batchman LIVES. Limit: 20, Fence: 0,
Audit Level: 0
switchmgr DM100;F101
AWS20710041I Service 2005 started on F101
AWS22020120 Switchmgr command executed from cpu F101 on cpu F101.

This indicates that the switch has been initiated.


It is also possible to initiate the switch from a JSC instance pointing to the F101
backup domain manager. Since we do not have a JSC instance pointing to the
backup domain manager we use the conman switchmgr command locally on the
F101 backup domain manager.
For your information, we show how to initiate the switch from the JSC in the
following:
1. Double click Status of all domains in the Default Plan Lists in the domain
manager JSC instance (TWSC-F100-Eastham) (see Figure 4-24).

Figure 4-24 Status of all Domains list

2. Then right click the DM100 domain (see Figure 4-24) to get a pop-up window (as shown in Figure 4-25 on page 239).


Figure 4-25 Right click the DM100 domain to get pop-up window

3. Click Switch Manager... in the pop-up window shown in Figure 4-25. The JSC shows a new pop-up window, where we can search for the agent we will switch to (see Figure 4-26).

Figure 4-26 The Switch Manager - Domain search pop-up window

4. Click the search button ... (the square box with three dots to the right of the
F100 domain), as shown in Figure 4-26, and JSC shows the Find Workstation
Instance pop-up window (Figure 4-27 on page 240).


Figure 4-27 JSC Find Workstation Instance pop-up window

5. Click the Start button (Figure 4-27). JSC shows a new pop-up window that contains all the fault tolerant workstations in the network (see Figure 4-28 on page 241).
6. If we specify a filter in the Find field (shown in Figure 4-27), this filter will be used to narrow the list of workstations that are shown (Figure 4-28 on page 241).


Figure 4-28 The result from Find Workstation Instance

7. Finally, mark the workstation to switch to (F101 in our example) and click the OK button in the Find Workstation Instance pop-up window (Figure 4-28).
8. Click the OK button in the Switch Manager - Domain pop-up window to initiate the switch. Notice that the selected workstation (F101) appears in the pop-up window (see Figure 4-29).

Figure 4-29 Switch Manager - Domain pop-up window with selected FTA


The switch to F101 is now initiated and TWS will perform the switch.

Step 2. Verify that the switch is done


We check the status of the workstations using the JSC instance pointing to the old primary domain manager (F100) (see Figure 4-30).

Figure 4-30 Status for the workstations after the switch to F101

From Figure 4-30 it can be verified that F101 is now MANAGER (see CPU Type
column) for the DM100 domain (see the Domain column). The F100 is changed
to an FTA (see the CPU Type column).
The OPCMASTER workstation has the status unlinked (see the Link Status
column in Figure 4-30). This status is correct, since we are using the JSC
instance pointing to the F100 workstation. The OPCMASTER has a linked status
on F101, as expected.
Switching to the backup domain manager takes some time, so be patient. The
reason for this is that the switch manager command stops the backup domain
manager and restarts it as the domain manager. All domain member fault
tolerant workstations are informed about the switch, and the old domain manager
is converted to a fault tolerant agent in the domain. The fault tolerant
workstations use the switch information to update their Symphony file with the
name of the new domain manager. Then they stop and restart to link to the new
domain manager.
On rare occasions the link status is not shown correctly in the JSC after a switch to the backup domain manager. If this happens, try to link the workstation manually (by right clicking the workstation and clicking Link in the pop-up window).
Note: To reactivate F100 as the domain manager, simply do a switch manager
to F100 or run Symphony redistribute. The F100 will also be reinstated as the
domain manager when you run the extend or replan programs.
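For example, switching back can be initiated with the same conman syntax used earlier in this scenario, this time naming F100 as the target workstation (a sketch; issue it on the current domain manager or on the workstation you are switching to):

conman switchmgr "DM100;F100"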


Long-term switch to the backup manager


The identification of domain managers is placed in the Symphony file. If a switch
domain manager command is issued, the old domain manager name will be
replaced with the new (backup) domain manager name in the Symphony file.
If the switch to the backup domain manager is going to be effective across TWS
for z/OS plan extension or replan, we have to update the DOMREC definition.
This is also the case if we redistribute the Symphony file from TWS for z/OS.
The plan program reads the DOMREC definitions and creates a Symphony file
with domain managers and fault tolerant agents accordingly. If the DOMREC
definitions are not updated to reflect the switch to the backup domain manager,
the old domain manager will automatically resume its domain management responsibilities.
The steps in the long-term switch scenario are:
1. Issue the switch command on the F101 backup domain manager.
2. Verify that the switch is done.
3. Update the DOMREC definitions used by the TWSCTP server and the TWS
for z/OS plan programs.
4. Run the replan plan program in TWS for z/OS.
5. Verify that the switched F101 is still the domain manager.

Step 1. Issue switch command on F101 backup domain manager


The switch command is done as described in Step 1. Issue switch command on
F101 backup domain manager on page 237.

Step 2. Verify that the switch is done


We check the status of the workstation using the JSC pointing to the old primary
domain manager (F100) (see Figure 4-31).

Figure 4-31 Status of the workstations after the switch to F101


From Figure 4-31 on page 243 it can be verified that F101 is now MANAGER
(see CPU Type column) for the DM100 domain (see the Domain column). F100
is changed to an FTA (see the CPU Type column).
The OPCMASTER workstation has the status unlinked (see the Link Status
column in Figure 4-31 on page 243). This status is correct, since we are using
the JSC instance pointing to the F100 workstation. The OPCMASTER has a
linked status on F101, as expected.

Step 3. Update the DOMREC definitions for server and plan program
We update the DOMREC definitions (described in Define the topology
initialization statements on page 179) so F101 will be the new primary domain
manager. See Example 4-24.
Example 4-24 DOMREC definitions
/**********************************************************************/
/* DOMREC: Defines the domains in the distributed Tivoli Workload     */
/*         Scheduler network                                          */
/**********************************************************************/
/*--------------------------------------------------------------------*/
/* Specify one DOMREC for each domain in the distributed network,     */
/* with the exception of the master domain (whose name is MASTERDM    */
/* and consists of the TWS for z/OS engine).                          */
/*--------------------------------------------------------------------*/
DOMREC   DOMAIN(DM100)         /* Domain name for 1st domain          */
         DOMMNGR(F101)         /* Chatham FTA - domain manager        */
         DOMPARENT(MASTERDM)   /* Domain parent is MASTERDM           */
DOMREC   DOMAIN(DM200)         /* Domain name for 2nd domain          */
         DOMMNGR(F200)         /* Yarmouth FTA - domain manager       */
         DOMPARENT(DM100)      /* Domain parent is DM100              */

The DOMREC DOMMNGR(F101) keyword defines the name of the primary domain manager. This is the only change needed in the DOMREC definition.
We created an extra member in the EQQPARM dataset and called it TPSWITCH. This member has the updated DOMREC definitions to be used
when we have a long-term switch. In the EQQPARM dataset we have three
members: TPSWITCH (F101 is domain manager), TPNORM (F100 is domain
manager), and TPDOMAIN (the member used by TWSCTP and the plan
programs).
Before the plan programs are executed we replace the TPDOMAIN member with
the TPSWITCH member. When F100 is going to be the domain manager again
we simply replace the TPDOMAIN member with the TPNORM member.


Tip: If you let your system automation (for example, System Automation/390) handle the switch to the backup domain manager, you can automate the entire process:
System automation replaces the EQQPARM members.
System automation initiates the switch manager command remotely on the fault tolerant workstation.
System automation resets the definitions when the original domain manager is ready to be activated.

Step 4. Run replan plan program in TWS for z/OS


We submit a replan plan program (job) using option 3.1 from legacy ISPF in the
TWS for z/OS engine and verify the output.
In EQQMLOG we see the messages shown in Example 4-25.
Example 4-25 EQQMLOG
EQQZ014I MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPDOMAIN IS: 0000
EQQ3005I CPU F101 IS SET AS DOMAIN MANAGER OF FIRST LEVEL
EQQ3030I DOMAIN MANAGER F101 MUST HAVE SERVER ATTRIBUTE SET TO BLANK
EQQ3011I CPU F200 SET AS DOMAIN MANAGER
EQQZ013I NOW PROCESSING PARAMETER LIBRARY MEMBER TPUSER
EQQZ014I MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPUSER IS: 0000

The F101 fault tolerant workstation is the primary domain manager.


The EQQ3030I message is due to the SERVER(1) specification in the CPUREC
definition for the F101 workstation. The SERVER(1) specification is used when
F101 is running as a fault tolerant workstation managed by the F100 domain
manager.

Step 5. Verify that the switched F101 is still domain manager


Finally we verify that F101 is the domain manager after the replan program has
finished and the Symphony file is distributed (see Figure 4-32).

Figure 4-32 Status for workstations after the TWS for z/OS replan program


From Figure 4-32 on page 245 it can be verified that F101 is still MANAGER (see
CPU Type column) for the DM100 domain (see the Domain column). The CPU
type for F100 is FTA.
The OPCMASTER workstation has the status unlinked (see the Link Status
column in Figure 4-32 on page 245). This status is correct, since we are using
the JSC instance pointing to the F100 workstation. The OPCMASTER has a
linked status on F101, as expected.
Note: To reactivate F100 as a domain manager, simply do a switch manager
to F100 or Symphony redistribute. The F100 will also be reinstated as domain
manager when you run the extend or replan programs.

Remember to change the DOMREC definitions before the plan programs are executed or before the Symphony file is redistributed.

4.6.6 HACMP configuration guidelines for TWS workstations


Use of High Availability Cluster Multi-Processing (HACMP) is another way to
make the TWS workstations fail-safe.
HACMP for AIX is a control application that can link several RS/6000 servers or
SP nodes into highly available clusters. Clustering servers or nodes enables
parallel access to their data, which can help provide the redundancy and fault
resilience required for business-critical applications.
Using HACMP, it is possible to move the workload from one AIX server to another (backup) server without losing data, due to the redundancy and fault resilience in HACMP.

A short description of HACMP


With HACMP, it is possible to have TWS (or another application) automatically
switch over to the standby node when a failure occurs on the primary node. This
is called a failover. Usually, TWS is installed on a disk array shared by the two nodes in the HACMP cluster. When one of the nodes fails, TWS will be started on the other node. Because both nodes have access to the shared disk array, all the TWS data is available to both nodes.


Figure 4-33 HACMP failover of a TWS master to a node also running an FTA

Guidelines for TWS installation in a HACMP environment


On UNIX systems TWS should be installed in a file system with a mount point above the TWS user's home directory. For example, if the maestro user's home directory is /tivoli/TWS/TWS-A/tws, then the file system for this TWS instance should be mounted one level higher, at /tivoli/TWS/TWS-A. This is important because the file system must also include the /tivoli/TWS/TWS-A/unison directory containing the security file (and, on the master, some database files).
Everything below this mount point then moves to the standby node together with the shared file system. There is also a components file that is typically installed as /usr/unison/components. Since the components file is not in a file system that is easy to switch from one node to the other, we recommend that the components file be copied to the standby node.
In addition, there is a library file used by TWS's tracing facility (AutoTrace) that must be installed in /usr/lib on both the primary and standby systems. The file name of the library depends on the operating system. See Table 4-2 for these file names.
Table 4-2 The AutoTrace library file must exist in /usr/lib on both systems
Operating system     AutoTrace library file
AIX                  /usr/lib/libatrc.a
Solaris              /usr/lib/libatrc.so
HP/UX                /usr/lib/libatrc.s


For inter-workstation communications, TWS obtains the node and TCP port of the target system (the domain manager or the master domain manager) from the workstation definition (in an end-to-end environment, the CPUNODE and CPUTCPIP definitions in the CPUREC server initialization statement). Those
settings must continue to work after the failover to the backup node. For listening
on the local socket for incoming connection requests, the port for the netman
process is defined in the <twshome>/localopts file.
If the HACMP standby node does not normally have a TWS instance running on
it, no further configuration is necessary. If the HACMP standby system has its
own active instance of TWS, it is necessary to ensure that the TWS home
directory, user name, user ID, and netman port number are unique for each TWS
engine. When the standby node takes over the TWS engine from the primary
node, the standby node will be running two separate TWS instances. You must
also ensure that the /usr/unison/components file lists each instance and that they
are uniquely named. Configuration like this is not unusual if the HACMP standby
machine is running the TWS workload while it is in standby mode. In
Example 4-26 you can see how to define two TWS workstation engines.
Example 4-26 Definitions for multiple TWS instances
Definition for the adopting TWS workstation engine:
  Userid (account): tws-a
  Home directory: /tivoli/TWS/A
  Netman port address: 31111 (defined in the localopts file)
Definition for the standby TWS workstation engine:
  Userid (account): tws-b
  Home directory: /tivoli/TWS/B
  Netman port address: 31112 (defined in the localopts file)

To get an idea of how this will work, look once more at Figure 4-33 on page 247.
For more detailed information about having multiple TWS instances (engines)
running on a single computer, refer to Section 3.5.1, Installing multiple instances
of TWS on one machine on page 142.

Configuring TWS in an HACMP environment


Although there are various ways to configure HACMP environments, the
following steps are the only ones needed to configure TWS in an HACMP
environment:
1. Ensure that the workstation definitions (in the CPUREC) are hostname-based
when creating them in TWS. When configuring HACMP, you need to state the
server ID and its IP address, as well as the fail-over IP address.
For example, we can use the hostname eastham for our F100 TWS
workstation.


2. Ensure that the DNS server or hosts files are updated accordingly.
The DNS server or hosts files should be updated to resolve the workstation
name accordingly. In other words, after a failover occurs, the host name
should resolve to the standby node, not the primary (failed) node.
3. Ensure that the correct directories are mirrored or that a duplicate installation of TWS is ready for failover, or both. There are two directories where the TWS binaries and files reside: the <twshome> and <twshome>/../unison directories.
Note: It is possible to configure HACMP so that the standby node takes over the IP address and even the MAC address of the failing node. This is not usually necessary. As long as the TWS workstation definitions contain host names, and the DNS is updated to resolve to the standby node's IP address, the TWS workstations will be able to reach the standby node using the host name.

HACMP can be configured to stop TWS on the failing system before switching
over to the standby node. It is a very good idea to implement this, because there
can be problems starting TWS back up if it is not shut down correctly.
Example 4-27 shows how to implement this.
Example 4-27 Sample HACMP start and stop functions for TWS
function customer_defined_start_cmds
{
print "$(date '+%b %e %X') - Node \"$(hostname)\": Starting TWS"
su twsuser -c "<twshome>/bin/conman 'start&link @;noask'"
print "$(date '+%b %e %X') - Node \"$(hostname)\": Starting TWS Completed"
}
function customer_defined_halt_cmds
{
print "$(date '+%b %e %X') - Node \"$(hostname)\": Halting TWS"
su twsuser -c "<twshome>/bin/conman 'unlink @;noask&shut;wait'"
print "$(date '+%b %e %X') - Node \"$(hostname)\": Halting TWS Completed"
}

customer_defined_halt_cmds is executed on the system you are switching from (if possible). customer_defined_start_cmds is executed on the system you are switching to.

Remember that twsuser should be replaced with the correct user ID (tws-a, in
our example; see Example 4-26 on page 248).


You should consider starting a timer when doing the shutdown of TWS (conman shut) and then issuing kill commands for the TWS processes after this timer has expired. If you use the wait parameter as part of the conman shut command, for example conman 'shut;wait', the command will not complete (or return) before the TWS shutdown is complete. If there are problems that interrupt the shutdown process, the HACMP shutdown process will be halted. Add a timer in HACMP that expires, for example, after two minutes; if the TWS processes have not ended by then, issue a kill command for the remaining TWS processes.
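The following is a minimal sketch of such a halt function with a timeout, building on Example 4-27. The two-minute value, the twsuser name, the <twshome> placeholder, and the way leftover processes are found (looking for netman under the TWS user) are assumptions that you should adapt and test in your own cluster:

function customer_defined_halt_cmds
{
   print "$(date '+%b %e %X') - Node \"$(hostname)\": Halting TWS"
   # Ask TWS to unlink and shut down, without ;wait, so the timeout below
   # stays in control of how long we are prepared to wait
   su twsuser -c "<twshome>/bin/conman 'unlink @;noask&shut'"
   timeout=120                                  # give TWS two minutes to stop cleanly
   while [ $timeout -gt 0 ] && ps -u twsuser | grep -q netman
   do
      sleep 5
      timeout=$((timeout - 5))
   done
   # If TWS processes are still running after the timeout, kill them
   if ps -u twsuser | grep -q netman; then
      print "TWS shutdown timed out - killing remaining TWS processes"
      ps -u twsuser -o pid= | xargs kill -9 2>/dev/null
   fi
   print "$(date '+%b %e %X') - Node \"$(hostname)\": Halting TWS Completed"
}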
You are now ready to configure the HACMP solution in your environment.

Other considerations for TWS in a HACMP environment


The TWS links between the fault tolerant workstation and its linked workstations will break when HACMP moves TWS to the standby node. These links will automatically recover once the failover is complete.
Users will have to reconnect to the switched fault tolerant workstation to use local conman commands on the new system. If the JSC instance pointing to the workstation is configured to use the hostname (instead of the IP address), the JSC instance will still work after the switch; using JSC, it is not necessary to log off and log in again to continue working with the workstation instance.
TWS jobs that are running at failover (when the mirrored HACMP system fails) will probably go to ABEND or stay in the started status when TWS is started by HACMP on the adoptive host. After the switch, some manual work remains: you have to check all ABENDed and started jobs on the switched workstation and decide whether each job should be completed or restarted.
You should consider adding the command conman 'limit cpu=xxxxx;0;noask' (where xxxxx is the workstation name) to the start script for the switched workstation. This makes sure that no jobs start on the switched workstation before all switched applications and database systems are ready on the TWS workstation and your manual verification is completed.
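A sketch of how this could look in the HACMP start function from Example 4-27 follows. The workstation name F101 and the placement of the limit command right after the start are illustrative assumptions; raise the limit again from conman or the JSC once your checks are done:

function customer_defined_start_cmds
{
   print "$(date '+%b %e %X') - Node \"$(hostname)\": Starting TWS"
   su twsuser -c "<twshome>/bin/conman 'start&link @;noask'"
   # Hold the job limit at 0 so that no new jobs are launched until the
   # manual verification after the failover has been completed
   su twsuser -c "<twshome>/bin/conman 'limit cpu=F101;0;noask'"
   print "$(date '+%b %e %X') - Node \"$(hostname)\": Starting TWS Completed"
}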

4.6.7 Configuration guidelines for High Availability environments


In this section we describe guidelines on how to configure and install TWS fault tolerant agents in Windows NT/2000 cluster and HP Service Guard environments.


TWS configuration in Windows NT/2000 environments


The key to running TWS on an NT cluster is proper installation. Both sides of the
cluster must be the same. This means TWS must be installed as a domain user
account. You do not want to install it as a local user account because that
prefixes the account with the machine name in the registry, thereby making each
side different. TWS must also be installed onto the drive that fails over.
To produce identical TWS registry settings on both sides of the cluster, with only
one set of binaries, perform the following:
1. Install TWS with the NT cluster active on one side.
2. Remove the binaries you just installed from the drive while leaving the registry
entries intact.
3. Fail the NT cluster over so the drive is on the other side and install TWS
exactly the same way as it was installed on the first side.
4. Redefine the tokensrv and netman services as part of the cluster group.
You should finish with the registry settings the same on both sides of the cluster,
but only one set of binaries.

TWS configuration in HP Service Guard environments


Basically the configuration guidelines for TWS in a HP Service Guard
environment are the same as the guidelines described for HACMP (see HACMP
configuration guidelines for TWS workstations on page 246). TWS integrates
pretty much the same way with both HP Service Guard and HACMP. This means
that the approach is the same when configuring TWS in the HP Service Guard
environment as for HACMP. If the TWS workstation on the standby system is
going to be active together with the switched workstation, you must install two
instances of TWS (different user IDs and different installation directories).
Remember to use resolvable IP addresses and unique port numbers (specified
by the nm port option in the localopts file) for the two TWS instances.

4.7 Backup and maintenance guidelines for end-to-end FTAs
In this section we discuss some important backup and maintenance guidelines
for TWS fault tolerant agents (workstations) in an end-to-end scheduling
environment. The guidelines also apply to TWS fault tolerant agents in a TWS
network, as well as to a TWS master domain manager.


4.7.1 Backup of the TWS agents


To make sure that you can recover from disk or system failures on the system where the TWS engine is installed, you should make a backup of the installed engine on a daily or weekly basis.
The backup can be done in several ways. You probably already have backup policies and routines implemented for the system where the TWS engine is installed. These backups should be extended to include the files in the <TWShome> and <TWShome/..> directories.
We suggest that you have a backup of all the TWS files in the <TWShome> and <TWShome/..> directories. If the TWS engine is running as a fault tolerant workstation in an end-to-end network, it should be sufficient to make the backup on a weekly basis.
When deciding how often a backup should be generated, you should consider questions like:
Are you using parameters on the TWS agent?
If you are using parameters locally on the TWS agent and do not have a central repository for the parameters, you should consider making daily backups.
Are you using specific security definitions on the TWS agent?
If you are using specific security file definitions locally on the TWS agent and do not have a central repository for the security file definitions, you should consider making daily backups.
Another approach is to make a backup of the TWS agent files at least before making any changes to the files. The changes can, for example, be updates to configuration parameters or a patch update of the TWS agent.
If the TWS engine is running as a TWS master domain manager, you should make at least daily backups. For a TWS master domain manager it is also good practice to create text copies of the database files, using the composer create command for all database files. The database files can then be recreated from the text copies using the composer add and composer replace commands.
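A hedged sketch of such a backup job on the master is shown below; the target directory and file names are arbitrary assumptions, and the object selection syntax should be checked against the composer reference for your release:

#!/bin/ksh
# Create text copies of the main TWS databases on the master domain manager
cd /tivoli/TWS/backup || exit 1
composer "create calendars.txt from calendars"
composer "create parms.txt from parms"
composer "create prompts.txt from prompts"
composer "create cpus.txt from cpu=@"
composer "create jobs.txt from jobs=@#@"
composer "create scheds.txt from sched=@#@"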


4.7.2 Stdlist files on TWS fault tolerant agents


TWS fault tolerant agents save message files and job log output on the system
where they run. Message files and job log output are saved in a directory named
<twshome>/stdlist. In the stdlist (standard list) directory there will be
subdirectories with the name ccyy.mm.dd (where cc is century, yy is year, mm is
month, and dd is date).
This subdirectory is created on a daily basis by the TWS netman process when a
new Symphony file (Sinfonia) is received on the fault tolerant agent. The
Symphony file is generated either by Jnextday on the TWS master or by the TWS
for z/OS controller plan programs.
The ccyy.mm.dd subdirectory contains a message file from netman, a message
file from the other TWS processes (mailman, batchman, jobman, and writer), and
a file per job with the job log for jobs executed on this particular date as seen in
Example 4-28.
Example 4-28 Files in a stdlist, ccyy.mm.dd directory
NETMAN        Message file with messages from the NETMAN process
O19502.0908   File with job log for job with process no. 19502 run at 09.08
O19538.1052   File with job log for job with process no. 19538 run at 10.52
O38380.1201   File with job log for job with process no. 38380 run at 12.01
TWS-E         File with messages from MAILMAN, BATCHMAN, JOBMAN and WRITER

Each job that is run under TWS's control creates a log file in the TWS stdlist directory. These log files are created by the TWS job manager process (jobman) and will remain there until deleted by the system administrator.
The easiest way to manage the growth of these directories is to decide how long the log files are needed and schedule a job under TWS for z/OS's (or TWS's) control that removes any file older than the given number of days. The TWS rmstdlist command can perform the deletion of stdlist files. The rmstdlist command removes or displays files in the stdlist directory based on the age of the files.
The rmstdlist command:
rmstdlist [-v |-u]
rmstdlist [-p] [age]

Where the arguments mean:
-v    Displays the command version and exits.
-u    Displays the command usage information and exits.

-p    Displays the names of the qualifying standard list file directories. No
      directories or files are removed. If you do not specify -p, the qualifying
      standard list files are removed.
age   The minimum age, in days, for standard list file directories to be
      displayed or removed. The default is 10 days.

We suggest that you run the rmstdlist command on a daily basis on all your
fault tolerant agents. The rmstdlist command can be defined in a job in a job
stream and scheduled by TWS for z/OS (or TWS). You may need to save a
backup copy of the stdlist files, for example, for internal revision or due to
company policies. If this is the case, a backup job can be scheduled to run just
before the rmstdlist job.
The job (or more precisely the script) with the rmstdlist command can be coded
in different ways. If you are using TWS parameters to specify the age of your
rmstdlist files, it will be easy to change this age later on if required.
Example 4-29 shows an example of a shell script where we use the rmstdlist command in combination with TWS parameters.
Example 4-29 Shell script using the TWS rmstdlist command in combination with TWS
parameters
#!/bin/sh
#
# A TWS for UNIX job to clean up TWS stdlist files
a=`parms stdlage`; export a

echo Start cleanup in TWS stdlist directory
rmstdlist -p ${a}
rmstdlist ${a}
echo End cleanup in TWS stdlist directory

Notice from the example that we are calling the rmstdlist command twice, first with the -p flag and second without the flag. The first call of rmstdlist gives us a list of all the stdlist directories that will be deleted by the second call.
The age of the stdlist directories is specified using variable a. Variable a has
been assigned the value from the stdlage parameter. The stdlage parameter is
defined as five in the parameters database, meaning that stdlist files older than
five days will be removed. The stdlage parameter was created on the fault
tolerant agent using the FPARMUPD script described in Section 4.4.6, TWS for
z/OS JCL variables in connection with TWS parameters on page 211.


The script shown in Example 4-29 on page 254 can be defined in a TWS for z/OS FTW job, using the JOBREC definition shown in Example 4-30.
Example 4-30 JOBREC definition for the cleanup script
/* Definition for F100J012 job to be executed on F100 machine          */
/* The cleanup.sh script calls the TWS rmstdlist command to clean up   */
/* the stdlist directory                                               */
JOBREC JOBSCR('/tivoli/TWS/scripts/cleanup.sh') JOBUSR(tws-e)

The stdlist output for the job is shown in Figure 4-34. The job was run from a
TWS for z/OS engine. Notice that due to the rmstdlist command with the -p flag,
we get a list of the stdlist directories (message AWS222610502) before they are
deleted.

Figure 4-34 Output from the cleanup.sh when run from TWS for z/OS

4.7.3 Auditing log files on TWS fault tolerant agents


The auditing log files can be used to track changes to the TWS database and
plan (the Symphony file).


The auditing options are enabled by two entries in the globalopts file in the TWS
or TWS for z/OS server:
plan audit level = 0|1
database audit level = 0|1

If either of these options are set to the value of 1, the auditing is enabled on the
fault tolerant agent. The auditing logs are created in the following directories:
<TWShome>/audit/plan
<TWShome>/audit/database

If the auditing function is enabled in TWS, files will be added to the audit
directories every day. Modifications to the TWS database will be added to the
database directory:
<TWShome>/audit/database/date (where date is in ccyymmdd format)

Modifications to the TWS plan (the Symphony) will be added to the plan directory:
<TWShome>/audit/plan/date (where date is in ccyymmdd format)

We suggest that you regularly clean out the audit database and plan directories, for example, on a daily basis. The cleanup of these directories can be defined in a job in a job stream and scheduled by TWS for z/OS (or TWS). You may need to save a backup copy of the audit files, for example, for internal revision or due to company policies. If this is the case, a backup job can be scheduled to run just before the cleanup job.
The job (or more precisely the script) doing the clean up can be coded in different
ways. If you are using TWS parameters to specify the age of your audit files, it
will be easy to change this age later on if required.
Example 4-31 on page 257 shows an example of a shell script where we use the
UNIX find command in combination with TWS parameters.


Example 4-31 Shell script to clean up files in the audit directory based on age
#!/bin/ksh
#
# A TWS for UNIX job to clean up the TWS audit directory
a=`parms audage`; export a

echo Start cleanup in TWS audit directory
find <TWShome>/audit/database -mtime +${a} -print
find <TWShome>/audit/plan -mtime +${a} -print
find <TWShome>/audit/database -mtime +${a} -exec rm -f {} \;
find <TWShome>/audit/plan -mtime +${a} -exec rm -f {} \;
echo End cleanup in TWS audit directory

Notice from Example 4-31 that we first issue the find commands with the -print option to list the files that will be deleted. The deletion is then done by the find commands with the -exec option.
The age of the audit files is specified using variable a. Variable a has been
assigned the value from the audage parameter. The audage parameter is
defined as 25 in the parameters database, meaning that files older than 25 days
will be removed from the audit directory. The audage parameter was created on
the fault tolerant agent using the FPARMUPD script described in Section 4.4.6,
TWS for z/OS JCL variables in connection with TWS parameters on page 211.

4.7.4 Files in the TWS schedlog directory


Every day Jnextday executes the stageman command on the TWS master
domain manager. Stageman saves archives of the old Symphony file into the
schedlog directory. We suggest creating a daily or weekly job that removes old
Symphony files in the schedlog directory.
Note: This only applies to a TWS master domain manager.

The job (or more precisely the script) doing the cleanup can be coded in different
ways. If you are using TWS parameters to specify the age of your schedlog files,
it will be easy to change this age later on if required.
Example 4-32 shows a sample of a shell script where we use the UNIX find
command in combination with TWS parameters.
Example 4-32 Shell script to clean out files in the schedlog directory


#!/bin/ksh
#
# A TWS for UNIX job to clean up the TWS schedlog directory
a=`parms schdage`; export a

echo Start cleanup in TWS schedlog directory
find <TWShome>/schedlog -mtime +${a} -print
find <TWShome>/schedlog -mtime +${a} -exec rm -f {} \;
echo End cleanup in TWS schedlog directory

Notice from Example 4-32 on page 257 that we first issue the find command with the -print option to list the files that will be deleted. The deletion is then done by the find command with the -exec option.
The age of the schedlog files is specified using variable a. Variable a has been
assigned the value from the schdage parameter. The schdage parameter is
defined as 30 in the parameters database, meaning that schedlog files older than
30 days will be removed.

4.7.5 Monitoring file systems on TWS agents


It is easier to deal with file system problems before they happen. If your file
system fills up, TWS will no longer function and your job processing will stop. To
avoid problems, monitor the file systems containing your TWS home directory
and /tmp. If you have, for example, a 2 GB file system, you might want a warning
at 80 percent, but if you have a smaller file system you may need the warning at a lower percentage. We cannot give you an exact percentage at which to be
warned. This depends on many variables that change from installation to
installation (or company to company).
Monitoring or testing for the percentage of the file system can be done by, for
example, Tivoli Distributed Monitoring and Tivoli Enterprise Console (TEC).
In Example 4-33 we have shown an example of a shell script that will test for the
percentage of the TWS file system filled and report back if it is over 80 percent.
Example 4-33 Shell script that monitors space in the /dev/lv01 TWS file system
/usr/bin/df -P /dev/lv01 | grep TWS >> tmp1$$
/usr/bin/awk '{print $5}' tmp1$$ > tmp2$$
/usr/bin/sed 's/%$//g' tmp2$$ > tmp3$$
x=`cat tmp3$$`
i=`expr $x \> 80`
echo "This file system is less than 80% full." >> tmp4$$


if [ "$i" -eq 1 ]; then
  echo "This file system is over 80% full. You need to
remove schedule logs and audit logs from the
subdirectories in the file system." > tmp4$$
fi
cat tmp4$$
rm tmp1$$ tmp2$$ tmp3$$ tmp4$$

4.7.6 Central repositories for important TWS files


TWS has several files that are important for usage of TWS and for the daily TWS
production workload. This is the case if you are running a TWS master domain
manager or a TWS for z/OS end-to-end server. Managing these files across
several TWS workstations can be a cumbersome and very time-consuming task.
Using central repositories for these files can save time and make your
management more effective.

The script files


The scripts (or the JCL) are very important objects when doing job scheduling on
the TWS fault tolerant agents. It is the scripts that actually perform the work or
the job on the agent system, for example, update the payroll database or the
customer inventory database.
The job definition for distributed jobs in TWS or TWS for z/OS contains a pointer
(the path or directory) to the script. The script by itself is placed locally on the
fault tolerant agent. Since the fault tolerant agents have a local copy of the plan
(Symphony) and the script to run, they can continue running jobs on the system
even if the connection to the TWS master or the TWS for z/OS controller is
broken. This way we have the fault tolerance on the workstations.
Managing scripts on several TWS fault tolerant agents and making sure that you
always have the correct versions on every fault tolerant agent can be a
time-consuming task. Furthermore, you need to make sure that the scripts are
protected so that they are not updated by the wrong person. Poorly protected scripts can cause problems in your production environment if someone has
changed something without notifying the responsible planner or change
manager.
We suggest placing all scripts used for production workload in one common
script repository. The repository can be designed in different ways. One way
could be to have a subdirectory for each fault tolerant workstation (with the same
name as the name on the TWS workstation).


All changes to scripts are done in this production repository. On a daily basis, for
example, just before the plan is extended, the master scripts in the central
repository are distributed to the fault tolerant agents. The daily distribution can be
handled by a TWS scheduled job. This job can be defined as a predecessor to the plan extend job.
This approach can be made even more advanced, for example, by using a
software distribution application to handle the distribution of the scripts. This way
the software distribution application can help keep track of different versions of
the same script. If you encounter a problem with a changed script in a production shift, you can simply ask the software distribution application to redistribute a previous version of the same script and then rerun the job.
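A hedged sketch of such a distribution job follows. The repository path, the workstation-to-hostname mapping, the tws-e user, and the use of scp are all assumptions; substitute whatever transfer mechanism and naming your site uses:

#!/bin/ksh
# Distribute the master copies of the production scripts to the FTAs.
# The central repository has one subdirectory per fault tolerant workstation.
REPO=/tivoli/TWS/script-repository
for CPU in F100 F101 F200
do
   # Hypothetical mapping from workstation name to hostname
   case $CPU in
      F100) HOST=eastham  ;;
      F101) HOST=chatham  ;;
      F200) HOST=yarmouth ;;
   esac
   scp ${REPO}/${CPU}/* tws-e@${HOST}:/tivoli/TWS/scripts/
done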

Security files
The TWS security file is used to protect access to TWS database and plan
objects. On every TWS engine (domain manager, fault tolerant agent, etc.) you
can issue conman commands for the plan and composer commands for the
database. TWS security files are used to ensure that the right people have the
right access to objects in TWS.
Security files can be created or modified on every local TWS workstation and
they can be different from TWS workstation to TWS workstation.
We suggest having a common security strategy for all TWS workstations in your
TWS network (and end-to-end network). This way the security file can be placed
centrally. Changes are only done in the central security file. If the security file has
been changed it is simply distributed to all TWS workstations in your TWS
network.
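As a hedged sketch, the distribution step could use the standard dumpsec and makesec utilities; the text file name and the transfer mechanism are assumptions:

# On the central workstation, dump the master Security file to text (run as the TWS user)
dumpsec > /tmp/Security.txt

# Copy /tmp/Security.txt to each TWS workstation with your preferred transfer
# mechanism, then compile it into the local Security file on each workstation:
makesec /tmp/Security.txt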

Parameters file (database)


Another important file is the TWS parameters file or database. Although it is
possible to have different parameter databases on different TWS workstations,
we suggest having one common parameter database.
The parameter database can then be managed and updated centrally. On a daily
basis the updated parameter database can be distributed to all the TWS
workstations. The process can be as follows:
1. Update the parameter database daily.
This can be done by a daily job that uses the TWS parms command to add or
update parameters in the parameter database.
Note that TWS for z/OS JCL variable values can be distributed to the
parameters database using the approach we have shown in TWS for z/OS
JCL variables in connection with TWS parameters on page 211.


2. Create a text copy of the updated parameter database using the TWS
composer create command:
composer create parm.txt from parms

3. Distribute the text copy of the parameter database to all your TWS
workstations.
4. Restore the received text copy of the parameter database on the local TWS
workstation using the TWS composer replace command:
composer replace parm.txt

These steps can be handled by one job in TWS. This job could, for example, be
scheduled just before the plan is extended.
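A minimal sketch of such a job, combining the steps above, is shown below. The working directory, the host list, the tws-e user, and the use of scp and ssh (with composer assumed to be in the TWS user's PATH on each workstation) are assumptions to adapt to your environment:

#!/bin/ksh
# Export the central parameter database, push it to each workstation, and reload it there
cd /tivoli/TWS/distrib || exit 1
composer "create parm.txt from parms"
for HOST in eastham chatham yarmouth
do
   scp parm.txt tws-e@${HOST}:/tmp/parm.txt
   ssh tws-e@${HOST} "composer 'replace /tmp/parm.txt'"
done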
Note that this applies both to TWS networks managed by a TWS master domain manager and to networks managed by a TWS for z/OS server.


Chapter 5. Using The Job Scheduling Console
The Job Scheduling Console (JSC) is the standard interface to the Tivoli
Workload Scheduler 8.1 suite. One can manage work running under TWS and
TWS for z/OS from the same graphical user interface (GUI). This chapter
provides the details of the JSC, with emphasis on the new functions in JSC and
functions available with JSC that are not available through the other TWS
interfaces (conman, composer, and the TWS for z/OS ISPF panels).
In this chapter we discuss the following:
Using the Job Scheduling Console
Starting the Job Scheduling Console
Job Scheduling Console fundamentals
Enhancements in JSC 1.2
    Specific to Tivoli Workload Scheduler for z/OS
    Specific to Tivoli Workload Scheduler
    General enhancements
Common lists in Job Scheduling Console
JSC troubleshooting


5.1 Using the Job Scheduling Console


The Job Scheduling Console is a standard interface to Tivoli Workload Scheduler
(TWS) and TWS for z/OS. One can manage work running under TWS and TWS
for z/OS from the same interface.
The JSC is the new GUI for TWS. It replaces the TWS legacy GUI programs
(gcomposer and gconman). Some of the new functions in TWS, for example,
time zone support and new run-cycle option, are not available in the legacy TWS
GUI. See Section 1.5.3, Enhancements to the Job Scheduling Console on
page 8 for a comprehensive description of all the TWS-specific enhancements to
the JSC 1.2.
For TWS for z/OS the JSC gives you some new possibilities that are not
available in the legacy ISPF interface. Version 1.2 of the JSC contains several
new possibilities that we are going to show in the following sections. See
Section 1.5.3, Enhancements to the Job Scheduling Console on page 8 for a
comprehensive description of all the TWS for z/OS-specific enhancements to the
JSC 1.2.
An important new feature in the JSC is the common view, where you can create
a view, for example, a list with all jobs in error for all your running TWS and TWS
for z/OS systems.
It is possible to access TWS and TWS for z/OS agents from the same JSC
window. However, the fields and windows will appear different depending on the
type of engine the JSC is connecting to. While most features are shared between
TWS and TWS for z/OS, some features are available in only one product or the
other, but not both. JSC has engine-specific extensions that allow it to reveal the
features available in each engine and hide features that are unavailable. IBM's plan for the future of TWS involves bringing TWS and TWS for z/OS closer and closer together until they are almost the same. Many of the features of TWS will be added to TWS for z/OS, and vice versa. As new features are added to the underlying engines, the Job Scheduling Console will also be enhanced to provide access to the engines' new capabilities.

5.2 Starting the Job Scheduling Console


Start the JSC according to the method you chose at installation. Figure 5-1 on
page 265 shows the JSC being selected from a Windows 2000 Start menu.


Figure 5-1 Starting the Job Scheduling Console (Windows desktop)

As the JSC starts, you are asked to enter the user name, password, and name of
the host machine on which the Tivoli Management Framework (TMF) is running,
as shown in Figure 5-2 on page 266.


Note: If you encounter problems while logging in, consult Section 3.7.3,
Creating TMF Administrators for Tivoli Workload Scheduler on page 148.

Figure 5-2 Job Scheduling Console password prompt (Windows desktop)

As the JSC starts, it briefly displays a window with the JSC level, as shown in
Figure 5-3 on page 267.
Note: Do not close the MS-DOS window (shown in Figure 5-2). Closing the
MS-DOS window terminates the JSC session; reduce the risk of doing this by
minimizing the window.

We suggest that you create a shortcut to the JSC start program and place this
shortcut on the task bar of your Windows workstation. This way you can easily
start the JSC simply by clicking the icon on the taskbar.


Tip: You can change the properties for the shortcut created in your task bar,
by simply right clicking the icon and then selecting Properties (Figure 5-4 on
page 268). If you select Minimized in the Run field, the MS-DOS window will
not show up on your desktop. This way, you will only have an entry in the
Windows task bar and not the MS-DOS window.

Figure 5-3 Job Scheduling Console release level notice (Windows desktop)


Figure 5-4 Properties window for the JSC shortcut

The first time the JSC is started (new user on new machine), the following
window in Figure 5-5 is shown.

Figure 5-5 Initial information to user first time the JSC is started


When you click the OK button (Figure 5-5 on page 268) you will see the window
in Figure 5-6.

Figure 5-6 Pre-customized preference file can be specified

The Open Location window shown in Figure 5-6, lets you copy a pre-customized
user profile, which could contain a list view, appropriate to the user. You may
consider it worthwhile to load a pre-customized standard profile file onto users
machines when you install the JSC. See The JSC preference file on page 270
for a detailed description.
If you click Cancel (Figure 5-6) the JSC will start, as in Figure 5-7.

Figure 5-7 The JSC starting window


This Welcome pop-up window in the JSC (see Figure 5-7 on page 269) gives you the option to read the online tutorial. Unless you turn the future display of this window off by selecting the Don't show this window again box, it will be shown every time you start the JSC. If you have ticked Don't show this window again, you can still get the Welcome pop-up window back by opening the Help pull-down menu and then clicking the Welcome... entry.
You are now ready to start working with the JSC.

The JSC preference file


Our recommendation is to configure your JSC list views with your data center's standard preferences, with the intention of using these settings as the default initialization file. You should also consider creating different preference files for different types of JSC users, for example, one preference file for operators and another for planners. Once you have set them up, save them (simply done by closing the JSC) and install the file on users' machines when you install the JSC on these machines.
The file name is GlobalPreferences.ser, and it can be found in the JSC install directory, JSConsole/dat/.tmeconsole/USER@NODE.LANGUAGE, in Windows or in <JSConsolehome>/.tmeconsole/USER@NODE.LANGUAGE in UNIX, where:
USER is the name of the Tivoli Management Framework Administrator associated with the connector accessed by the user. It is followed by an @ sign. The USER is the operating system logon on the TMR server or managed node you are connecting to.
NODE is the name of the system running the connector, followed by a dot (.) on Windows machines or an underscore (_) on UNIX machines. NODE can be the hostname or IP address of the machine running the connector.
LANGUAGE is the regional setting of the operating system where the connector is installed. The LANGUAGE ID changes dynamically if the user logs onto the same connector and finds that the regional settings have changed. For example, suppose that to start the JSC on a Windows machine, you log in for the first time to machine eastham with the Tivoli Administrator tws-e. A user directory named tws-e@eastham.en_US (where en_US stands for US-English regional settings) is created under the path described above on your workstation.


Every time you log onto a different host machine, a new user directory is added.
And every time you close the JSC, a GlobalPreferences.ser that matches your
connection is created or updated in the user directory.
Note: A different GlobalPreferences.ser file exists in
JSConsole\dat\.tmeconsole\TivoliConsole on Windows or in
<JSConsolehome>/.tmeconsole/TivoliConsole on UNIX. This file contains
preferences that affect JSC presentation and should not be propagated to
JSC users.

The JSC client uses Windows regional settings to display dates and times.
Change the regional settings from the Windows control panel according to your
country. After rebooting Windows, dates and times will be shown in the selected
format.

5.3 Job Scheduling Console fundamentals


Before you start working with the JSC in detail, it is important to understand
some of the fundamentals in the JSC.

Figure 5-8 The main Job Scheduling Console window


The left side of the window in Figure 5-8 on page 271 is a list of all the connector
instances that are installed in the TMR you are logged into. We have defined five
connector instances:
TWSC: Connector instance pointing to a TWS for z/OS controller.
TWSC-F100-Eastham: Connector instance pointing to a TWS for z/OS fault
tolerant agent or workstation (symbolized with a small screen or PC figure).
This fault tolerant agent is known as the F100 workstation in TWS for z/OS
and is running on node eastham (the hostname for the machine).
TWSC-F200-Yarmouth: Connector instance pointing to a TWS for z/OS fault
tolerant agent or workstation (symbolized with a small screen or PC figure).
This fault tolerant agent is known as the F200 workstation in TWS for z/OS
and is running on node yarmouth (the hostname for the machine).
Yarmouth-A: Connector instance pointing to a TWS master domain manager
running on yarmouth, TWS controller.
Yarmouth-B: TWS fault tolerant agent running on yarmouth.
Common Default Plan Lists: Predefined JSC group with common plan lists.
The Common Default Plan Lists are described in detail in Common Default
Plan Lists in JSC on page 322.
Note: Use naming conventions when creating the connector instances; this
makes it easier to relate the instance name to its purpose. From our example
in Figure 5-8 on page 271, we know the exact type of engine (TWS or TWS for
z/OS) the instance points to and the function of the engine (master or agent).

Access to objects in databases or instances in plans is performed through lists in the JSC. To access a job stream in the database, you first have to open or create a list containing the job stream, and then you can start working with the job stream.
Note: If you are used to the TWS for z/OS legacy ISPF panels, you can
compare the JSC lists with filter panels in TWS for z/OS ISPF panels. For
example, choosing option 5.3 from the main TWS for z/OS ISPF panel gives
you a filter panel, where you can narrow the list.

After installing the JSC on your machine, you will have access to some predefined lists called Default Database Lists and Default Plan Lists. The entries in the default lists vary, depending on which engine (TWS or TWS for z/OS) the list is related to.
In Figure 5-9 on page 273, you will see the JSC Default Database Lists and
Default Plan Lists for a TWS for z/OS instance (TWSC controller in our example).


Figure 5-9 JSC Default Database and Plan Lists for TWS for z/OS instance

In Figure 5-10 on page 274 you will see the JSC Default Database Lists and
Default Plan Lists for a TWS instance (Yarmouth-A in our example).


Note: The Default Database Lists and Default Plan Lists will be exactly the same for a JSC instance pointing to a fault tolerant agent. Since there are no databases on a fault tolerant agent, you should consider removing the Default Database Lists from these instances. How to tailor user preferences for JSC users is described in The JSC preference file on page 270.

Figure 5-10 JSC Default Database and Default Plan Lists for a TWS instance

Using the default JSC lists (or views) for database objects and plan instances could cause long response times in the JSC. The default JSC list simply gives you a list with all database objects in the database or all plan instances in the plan. If you are using a default database list for job streams and the job stream database contains, for example, 5,000 job streams, loading the data in the JSC and preparing this data to be shown in the JSC will take a long time. Using dedicated lists created with appropriate filters, for example, to show only the job streams starting with PROD, will improve JSC performance considerably.
You will probably also need some customized lists in the JSC, for example, to show all jobs in error, all jobs planned to run on a particular workstation, or all jobs waiting for a resource.


We recommend that you create various JSC lists corresponding to your frequently-used legacy TWS for z/OS ISPF panels or TWS legacy GUI views. Once you have done this, you may save the Global Preferences file and propagate it to new users (see The JSC preference file on page 270).
Creating JSC lists is simple. Right click the JSC instance where you want to create a new list, and then pick Create Database List or Create Plan List in the pop-up window, as in Figure 5-11.

Figure 5-11 How to create new lists in the Job Scheduling Console

It is also possible to create something called groups (Create Group... in the pop-up window). The JSC allows you to organize your lists into groups. You can organize your groups of lists by any criteria that will help you to quickly display the objects that you want to work with. For example, you can create a group that enables you to work with all the objects associated with your payroll application.
Creating lists and working with lists and groups is explained in great detail in
Tivoli Job Scheduling Console Users Guide, SH19-4552. You will also find some
useful information in End-to-End Scheduling with OPC and TWS Mainframe and
Distributed Environment, SG24-6013.
Always remember that all these lists can be made available to all JSC users, simply by saving the Global Preferences file and propagating it to your JSC users, as explained in The JSC preference file on page 270.


5.4 TWS for z/OS-specific JSC enhancements


As described in Section 1.5.3, Enhancements to the Job Scheduling Console
on page 8, Job Scheduling Console Version 1.2 contains several TWS for
z/OS-specific enhancements. Job Scheduling Console Version 1.2 is delivered
with TWS for z/OS 8.1.
Most of the TWS for z/OS-specific JSC enhancements have been made in the plan-handling area of TWS for z/OS. This means that from the JSC you can perform the same actions or tasks as you can from the TWS for z/OS legacy ISPF panels.
We will address some of these TWS for z/OS-specific enhancements and show
how they work.
In the last section of this chapter, we will then summarize what cannot be done with JSC 1.2 compared to the possibilities that you have in the other TWS for z/OS interfaces.

5.4.1 Submit job streams to TWS for z/OS current plan


Do the following steps to submit job streams to the TWS for z/OS current plan.
1. To submit TWS for z/OS job streams from the JSC, right click the TWS for
z/OS engine (TWSC in our example) and then click Submit Job Stream... in
the pop-up window (see Figure 5-12).

Figure 5-12 Submit TWS for z/OS job stream to current plan in TWS for z/OS


2. After clicking Submit Job Stream in Figure 5-12 on page 276, you will get a
new pop-up window, shown in Figure 5-13.

Figure 5-13 Submitting a job stream to current plan

3. In the pop-up window (Figure 5-13), you can type the name of the job stream, the start date and time (same as the TWS for z/OS input arrival time), and the deadline date and time.
Our task is to submit a job stream called TWSCDISTPARJUPD. The job stream should be put on hold (no jobs in the job stream must start running before we release them). We will use the predefined start date and time as well as the deadline date and time for the job stream (taken from the run-cycle specification for the job stream).
In the window shown in Figure 5-13, we can type the name of the job stream, or we can use the Find button (the grey box with three dots) to let the JSC search for the job stream.
4. We click the Find button and get the pop-up window shown in Figure 5-14 on page 278.
5. Figure 5-14 on page 278 shows the result of a search after we have filled the Job Stream Name field with TWSCD* and clicked the Start button.


Figure 5-14 Search result for job streams starting with TWSCD*

6. We highlight our job stream TWSCDISTPARJUPD (Figure 5-14) and click the OK button. The result is shown in Figure 5-15.

Figure 5-15 JSC Submit Job Stream window filled with job stream information


Notice that JSC has filled the Start and Deadline fields for us. The information
was taken from the run-cycle definition for the job stream in the job stream
database.
7. Remember to change the Start Date and Time (Figure 5-15 on page 278) if the job stream is already in the current plan with this time.
Tip: It is a good practice to predefine your ad hoc job streams with a dummy run-cycle that does not schedule the job stream, but contains a start date and time and a deadline date and time according to your installation standard. This way it is very simple to add the ad hoc job stream to the current plan. You do not have to specify any start or deadline information. TWS for z/OS will read this information directly from the job stream database, and then conform to your standards. Dependency resolution will be handled correctly.

8. Since the job stream should be submitted in hold, we click the Submit & Hold button (Figure 5-15 on page 278) and then the OK button. The result can be seen in Figure 5-16.

Figure 5-16 Jobs in TWSCDISTPARJUPD for TWS for z/OS current plan

Note from Figure 5-16 that all jobs in the job stream are in Held status. They will
not start before they are released with a release command.
Note: It is not possible to submit a job stream with the hold option in the legacy TWS for z/OS ISPF panels.

9. To release the jobs (when it is OK to run them), you simply right click the job and click Release in the pop-up window. If all jobs are going to be released at once, you can select all jobs in the list (by selecting the first job with the mouse, pressing and holding the left button, and dragging the mouse through all entries in the list). Then right click, and click Release (see Figure 5-17 on page 280).


Figure 5-17 Release all held jobs in JSC with just one release command

Note: If you click the Submit & Edit button (Figure 5-15 on page 278), the JSC will open a Job Stream Instance Editor window (see Figure 5-18). In this window you can quickly edit the job stream and the jobs in it. For example, you can remove some of the jobs in the job stream, set a job in the job stream to complete status, or change the time specifications for a job in the job stream.

Figure 5-18 JSC Job Stream Instance Editor - Adding job stream to current plan


Submit job streams directly from JSC database list view


It is also possible to submit a job stream directly from a JSC database list view,
as shown in Figure 5-19.

Figure 5-19 Submit job stream from JSC database job stream list view

1. Right click the job stream that you want to submit. JSC will then show a
pop-up window.
2. Then click the Submit... entry, and JSC will show a new window, as in
Figure 5-20 on page 282.


Figure 5-20 The JSC Submit Job Stream...window

Note: When submitting the job stream directly from the TWS for z/OS job
stream database, JSC has filled all required input fields in the Submit Job
Stream... window (Figure 5-20). This is because JSC read this information
from the database.

Remember that the Start and Deadline fields will be filled only if the job stream is defined with a run-cycle in the TWS for z/OS database.

5.4.2 JSC text editor to display and modify JCLs in current plan
From the JSC, it is possible to display and modify the JCL for a z/OS job in the TWS for z/OS current plan. The JSC editor provides import and export functions so that users can store a JCL as a template and then reuse it for other job JCLs. It also includes functions to copy, cut, and paste JCL text. The JCL editor displays information on the current JCL, such as the current row and column, the job name, the workstation name, and who last updated it.
Let us now see how it works:
1. From a job instance view, right click the job where you want to edit JCL. Then
you will get a pop-up window where you can select Edit JCL... (see
Figure 5-21 on page 283).


Figure 5-21 Edit JCL for a z/OS job in the Job Scheduling Console

Our task is now to copy the JCL from the Waiting job with Job ID 10
(operation number 10) in the TWSCDISTPARJUPD job stream to the Held job
with Job ID 10 in the TWSCDISTPARJUPD job stream (see Figure 5-21).
2. First we right click the waiting job with Job ID 10 in the TWSCDISTPARJUPD
job stream and then we click Edit JCL..., as shown in Figure 5-21. The result
is an Edit JCL window, as shown in Figure 5-22 on page 284.


Figure 5-22 The Job Scheduling Console Edit JCL window

In the Edit JCL window, you can see that the JCL is read directly from the TWS for z/OS job library (the status is not from JS). You can also see that the cursor was placed on row 15 and column 67 (see the upper right corner).
It is now possible to start editing the JCL and, if necessary, correct any JCL errors. Our task is to copy this JCL to another job definition.
To perform this task we have two possibilities:
a. We can click the Actions bar at the top of the window (see Figure 5-22). This will show a pull-down menu where we can choose Select All to select all JCL lines. Then we click the Edit bar. This will show a pull-down menu where we can choose to delete, copy, or paste. To copy, we select the Copy option.


Note: The Delete, Copy, and Paste options are also available from the
three icons to the right in the Edit JCL window (see Figure 5-22 on
page 284).

When the JCL is copied, we can open a new Edit JCL window for the job that the JCL should be copied to, and then paste the JCL there.
b. We can click the File bar at the top of the window (see Figure 5-22 on page 284). This will show a pull-down menu where we can choose Export or Import. The Export function can be used to save a copy of the JCL in a file stored locally on your machine. Import can be used to read a locally stored copy of the JCL from a file on your machine.
Note: The Export and Import options are also available from the two icons
to the left in the Edit JCL window (see Figure 5-22 on page 284).

3. We will use the Export and Import options to accomplish our task. We select
File and then Export from the pull-down menu (see Figure 5-23).

Figure 5-23 The Job Scheduling Console File Import/Export pull-down menu

4. JSC shows a new Save window (Figure 5-24 on page 286). In this window we specify the file name, TWSCIEBG_copy_jcl, for the file to be saved locally on the machine running the JSC. Then we click Save to save the file.


Figure 5-24 The Job Scheduling Console Save window

Now we have a local copy of the TWS for z/OS JCL in a PC file.
5. The next step is to go back to the JSC window to see the list of jobs. We do
this by clicking Cancel in the Edit JCL window shown in Figure 5-22 on
page 284.
Then we return to the window shown in Figure 5-21 on page 283.
6. Since we have to import the JCL into the Held job with Job ID 10 in the
TWSCDISTPARJUPD job stream (see Figure 5-21 on page 283), we simply
right click this job, and select Edit JCL (Figure 5-25 on page 287).


Figure 5-25 Select Edit JCL for the job where the JCL should be imported

7. In the new Edit JCL window, we select all the JCL lines (using the Actions and Select All pull-down entries) and delete them (using the Edit and Delete pull-down entries). The result is a window with no JCL, as shown in Figure 5-26.

Figure 5-26 JSC Edit JCL window after deleting records of all JCLs


8. Then we select File and Import from the pull-down menu. This shows a new Open window, where we can select the local file with the JCL we will import (see Figure 5-27).
9. We click the file we want to import, and the file name is placed in the File name field in the Open window.
10. Finally, we click the Open button. The result can be seen in Figure 5-27.

Figure 5-27 JSC Open window used when importing a file

The Edit JCL window with the imported JCL is shown in Figure 5-28 on
page 289.


Figure 5-28 The Edit JCL window with the imported JCL

5.4.3 JSC read-only text editors for job logs and operator
instructions
The JSC read-only text editors can be used to view:
Job logs produced by job instance runs
The Operator Instructions (OI) associated with a job instance.

View job logs produced by a job instance


The View Job Log menu is also often referred to as the Browse Job Log. To view
a job log, simply right click a job in a job instance list and select Browse Job
Log... in the pop-up window (see Figure 5-29 on page 290).


Figure 5-29 The Job Scheduling Console Browse Job Log... entry

If the TWS for z/OS controller has a copy of the job log in its JCL repository
dataset, the job log will be shown right away in the JSC. If the TWS for z/OS
controller does not have a copy of the job log, it will request one. The copy is requested in one of two ways, depending on the type of job:
If the job log is for a z/OS job, the controller asks the TWS for z/OS data store
to send a copy of the job log. The JSC user is informed, via a pop-up window
(message GJS1091E), that the controller has asked for a copy of the job log
(see Figure 5-30 on page 291).
If the job log is for a fault tolerant workstation job, the TWS for z/OS controller
asks the associated fault tolerant agent to send a copy of the job log. The
JSC user is informed, via a pop-up window (message GJS1091E), that the
controller has asked for a copy of the job log (see Figure 5-30 on page 291).

The JSC pop-up window with message GJS1091E is shown in Figure 5-30 on
page 291.


Figure 5-30 JSC pop-up window when controller does not have copy of job log

If you get the message GJS1091E, then simply wait a few seconds, and try to
browse the job log again (as shown in Figure 5-29 on page 290).
The JSC Browse Job Log window, shown as a result of the actions performed in
Figure 5-29 on page 290, is shown in Figure 5-31 on page 292.
Note: In the JSC Browse Job Log window you can:
Mark some of the text and copy the marked text to, for example, an e-mail. The copy can be done simply by right clicking after marking the text and then selecting Copy.
Save the job log to a local file on your workstation. The job log can be saved by clicking File and then selecting Export in the pull-down menu (as shown in Figure 5-24 on page 286).


Figure 5-31 The Job Scheduling Console Browse Job Log window

View TWS for z/OS Operator Instructions in the JSC


If you have defined Operator Instructions (OI) for a job, it is possible to browse
these OIs in the JSC when the job is in the TWS for z/OS current plan.
To view OIs for a job in the TWS for z/OS current plan, right click a job in a job
instance list and select Browse Operator Instruction... in the pop-up window
(Figure 5-32 on page 293).
Note: It is not possible to select the Browse Operator Instruction... entry in the pop-up window shown in Figure 5-32 on page 293 if there is no OI defined for the job. In this case the JSC shows the Browse Operator Instruction entry in light grey; only the entries shown in black can be selected in the pop-up window.


Figure 5-32 The Job Scheduling Console Browse Operator Instruction entry

The Browse Operator Instruction window is shown in Figure 5-33.

Figure 5-33 The JSC Browse Operator Instruction window

Notice that the OIs shown in Figure 5-33 can be saved in a file locally on your workstation or copied to, for example, an e-mail.


5.4.4 New restart possibilities in JSC


In the JSC Version 1.2 there are several new restart possibilities available:
Restart a job after it has run. You can:

Restart a job instance with the option to choose which step must be first, which must be last, and which steps must be included or excluded (requires the TWS for z/OS data store). Furthermore, you can work with expanded JCL, that is, JCL generated from the job log.
Rerun a job instance, which executes all the steps of the job instance again.
Clean up the datasets used by the selected job instance.
Display the list of datasets cleaned by a previous cleanup action.
Rerun an entire job stream instance.

This function opens a job stream instance editor with a set of reduced
functionalities where you can select the job instance that will be the starting
point of the rerun. When the starting point is selected, an impact list is
displayed that shows all possible job instances that will be impacted from this
action. For every job instance within the current job stream instance, you can
perform a clean-up action and display its results.

Restart a job after it has run


The following steps instruct you on how to restart a job after it has run.
1. To use the TWS for z/OS restart facilities in JSC, you right click a job in a job
instance list and select Restart and Cleanup in the pop-up window, as in
Figure 5-34 on page 295.


Note: To use the Restart and Cleanup function, the TWS for z/OS data store
function must be activated because TWS for z/OS uses job log information
when the restart JCL is built.

Figure 5-34 The JSC Restart and Cleanup entry

2. After clicking the Restart and Cleanup selection, JSC shows the Restart and
Cleanup window, as in Figure 5-35.

Figure 5-35 The Job Scheduling Console Restart and Cleanup window


3. In this window there are several types of actions. We will show the Step
Restart action, so we select the Step Restart button and click OK. The result
is shown in Figure 5-36.

Figure 5-36 The Job Scheduling Console Step Restart window

In the Step Restart panel you can work with the different steps in the job. JSC shows information such as the program name, completion code, and step status (extracted from the job log if the job has already executed). Furthermore, TWS for z/OS supplies step restart information for each job step (Best Restart Step, Not Restartable).
4. If you double click an entry in the Action column (Figure 5-36), you will be able to specify whether:
The step should be the restart step.
The step should be included (executed) when the job is rerun.
The step should be excluded (not executed) when the job is rerun.
The step is the end step (steps after this step will not be executed).
Furthermore, you can specify what your next step should be, using the Next Step box at the bottom of the window (Figure 5-36). After performing this, you will get the following next step actions:
Dataset List: If this entry is selected, JSC will open a window with a list of
datasets that will be deleted when the job is rerun (these datasets are
within the step restart range). You can, for example, remove datasets from
this list. It is possible to go to Edit JCL from this Dataset List window.
Edit JCL: If this entry is selected, JSC will open the JCL edit window,
where you can edit the JCL. This is the same window as shown in
Figure 5-28 on page 289.
Tip: Clicking OK in this Edit JCL window will start the job.

Execute: If this entry is selected, JSC will start the job.


Rerun an entire job stream instance


To rerun an entire job stream instance in the JSC:
1. Right click a job stream in a job stream instance list and select Rerun from
the pop-up window (see Figure 5-37).

Figure 5-37 Rerun selection for job stream in Job Scheduling Console

2. JSC will then show the Job Stream Instance Editor window (see Figure 5-38
on page 298).
Note: JSC uses the same Job Stream Instance Editor when editing job
streams in the database and job stream instances in the plan. So you have the
same look and feel, but there is one difference: Using the Job Stream Instance
Editor for plan instances, the job status is depicted in the upper left corner of
the icon representing the job (see Figure 5-38).


Figure 5-38 The JSC Job Stream Instance Editor

3. Our task is to rerun the job stream from the TWSCCPET-20 job instance. So we right click this job in the Job Stream Instance Editor. JSC then shows a new pop-up window (see Figure 5-39 on page 299), where we can choose between Start From, Start Cleanup, or Display Cleanup Result.


Figure 5-39 The JSC pop-up window with rerun actions for a job in a job stream

4. We select Start From since we are asked to do a rerun from the TWSCCPET-20 job instance. After clicking Start From, JSC shows the Rerun Impact List window, as in Figure 5-40 on page 300.
The Rerun Impact List window shows the jobs that will be impacted by the rerun. The Rerun Impact List window contains either all the predecessors of our rerun job or all the successors of our rerun job. The rule is as follows:
If the job instance we are starting from (rerunning) is not complete, the Rerun Impact List window contains all impacted predecessors (both internal and external). These jobs are impacted because TWS for z/OS will set them to complete status before rerunning our job.
If the job we are starting from (rerunning) is complete, the Rerun Impact List window contains all impacted successors (both internal and external). These jobs are impacted because TWS for z/OS will change the successor job status to waiting before rerunning our job.
Our Start From job is waiting (as can be seen from the little hourglass in the upper left corner of the job icon in Figure 5-39). The job instances shown in the Rerun Impact List window are all the predecessors of the job we are rerunning (see Figure 5-40 on page 300).
When TWS for z/OS performs the rerun, all these predecessors will be set to complete status.


Figure 5-40 The Job Scheduling Console Rerun Impact List

5.5 TWS-specific JSC enhancements


In this section we introduce the new features in Job Scheduling Console 1.2,
including:
Editing a job stream instance
Resubmitting a job stream instance
New run cycle options

5.5.1 Editing a job stream instance


Job Scheduling Console 1.2 allows you to edit a job stream instance in the plan
just as you would edit a job stream definition in the database.
To edit a job stream instance, first display the job stream instance in a job stream
query list, as in Figure 5-41 on page 301.


Figure 5-41 A job stream instance list

Once you have found the job stream instance you wish to edit, double click it.
The Job Stream Instance Editor will appear, as in Figure 5-42.

Figure 5-42 The Job Stream Instance Editor

As you can see, many of the buttons are disabled. The job stream instance editor
does not give you all the capabilities that you will have when editing a job stream
definition. The following operations are possible in the Job Stream Instance
Editor:
Change the job stream properties.
Change the predecessor dependencies (links) between the jobs in the job
stream and predecessor dependencies on external jobs or job streams.
Change the properties of any of the jobs within the job stream.


It is also possible to submit new jobs into an existing job stream instance. To do
this, simply select the job stream instance, right click the selected job stream
instance, and choose Submit -> Job... from the pop-up menu, as in Figure 5-43.

Figure 5-43 Submitting a job into an existing job stream instance

You will then have to type in or find the job you wish to submit, as shown in
Figure 5-44.

Figure 5-44 Specifying job that is to be submitted into job stream instance

5.5.2 Re-Submitting a job stream instance


With JSC 1.2 it is possible to resubmit a job stream that is already in the plan. To
do this, simply select the job stream instance, right click the selected job stream,
and choose Re-Submit from the pop-up menu, as indicated in Figure 5-45 on
page 303.


Figure 5-45 Resubmitting a job stream instance

Because no two job stream instances on the same workstation may have the
same name, it is necessary to specify an alias that will be used as the name of
the resubmitted job stream, as in Figure 5-46.

Figure 5-46 Specifying an alias for a resubmitted job stream instance

The job stream will be resubmitted into the plan. Reload the job stream instance
list, and you should see the resubmitted job stream appear in the list, as shown
in Figure 5-47 on page 304.


Figure 5-47 The job stream has been resubmitted, with a new name

5.5.3 New run cycle options


Job Scheduling Console 1.2 displays run cycles a bit differently than previous
versions. The changes in the interface allow JSC to take advantage of new run
cycle options in Tivoli Workload Scheduler 8.1.
When you first open a job stream, you will see a Job Stream Editor window
similar to the one in Figure 5-48 on page 305.


Figure 5-48 The Job Stream Editor window in graph mode

In the above example, the Job Stream Editor is in Graph mode. You can tell this
because the Graph mode button is selected (the left-most button in the group of
three buttons on the right side of the tool bar).
Graph mode is the mode in which jobs and dependencies are added to the job
stream. In this example, there is only one job in the job stream. The second
button in the group of three mode buttons is the Timeline mode button. Timeline
mode is useful for seeing when the jobs in a job stream will run. The last button
(the one that looks like a calendar) is the Run Cycle mode button. If you click this
button, the Job Stream Editor switches to run cycle mode, and should look
something like Figure 5-49 on page 306.


Figure 5-49 The Job Stream Editor window in run cycle mode

Notice that the Run Cycle button is now selected and the Job Stream Editor
window looks completely different from before. There are also three buttons in
the toolbar that appear only when the Job Stream Editor is in Run Cycle mode.
With these buttons, you can add simple, weekly, or calendar run cycles.
Run cycles can now be inclusive or exclusive. If a run cycle is inclusive, the job stream will run on the days specified by that run cycle. If a run cycle is exclusive, the job stream will not run on the days specified by the run cycle, even if some of these days are also specified in an inclusive run cycle. This way, multiple run cycles, some inclusive and others exclusive, can be combined. The job stream will run only on days that are specified in an inclusive run cycle and not specified in an exclusive run cycle.
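The combination rule amounts to simple set logic. The following Python sketch is illustrative only (the dates are hypothetical, not taken from any product database); it shows how the effective run days could be computed from the days selected by the inclusive and exclusive run cycles:

from datetime import date

# Illustrative only: days selected by each run cycle of a job stream
inclusive_days = [
    {date(2002, 4, 8), date(2002, 4, 10), date(2002, 4, 12)},   # e.g. days from an inclusive calendar run cycle
]
exclusive_days = [
    {date(2002, 4, 12)},                                        # e.g. days from an exclusive simple run cycle
]

# The job stream runs only on days selected by at least one inclusive
# run cycle and not selected by any exclusive run cycle.
included = set().union(*inclusive_days)
excluded = set().union(*exclusive_days)
effective_run_days = sorted(included - excluded)
print(effective_run_days)   # [datetime.date(2002, 4, 8), datetime.date(2002, 4, 10)]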

Simple run cycles


Simple run cycles contain a list of specific days. You would use these when you
need to select specific days, but there is no calendar that contains these specific
days. You might choose to define a calendar for sets of days that will be used in
multiple job streams, but use a simple run cycle in a job stream only if that single
job stream will use that set of days.


In Figure 5-50 you can see the General page of the Simple Run Cycle window.
Here you will find the option to make the run cycle inclusive or exclusive. You can
also see the option to select the freedays rule. The General page looks the same
for all types of run cycles.

Figure 5-50 Simple Run Cycle window with freedays rule menu displayed

The freedays rule


The freedays rule specifies what to do on freedays. There are four options:
Select the nearest workday before the freeday
Select the nearest workday after the freeday
Do not select
No freeday rule specified

The rule determines what day, if any, will be selected by the run cycle when the
current day is a freeday. By default, freedays include Saturday, Sunday, and the
days in the freedays calendar. The default freedays calendar is the holidays
calendar. The freedays calendar, and whether Saturday or Sunday is considered
a freeday, can be specified on a per-job stream basis in the General page of the
Job Stream Properties window under the Freedays Calendar section. See
Specifying freedays for a job stream on page 312 for instructions on how to do
this.
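To make the effect of the rule concrete, here is a small illustrative Python sketch (our own simplification, not product code) that shifts or drops a day selected by a run cycle according to the chosen freedays rule:

from datetime import date, timedelta

def apply_freedays_rule(day, freedays, rule):
    # Illustrative only: return the day a run cycle actually selects when
    # 'day' falls on a freeday, according to the chosen freedays rule.
    if day not in freedays:
        return day                      # not a freeday, keep the day as it is
    if rule == "nearest workday before":
        while day in freedays:
            day -= timedelta(days=1)
        return day
    if rule == "nearest workday after":
        while day in freedays:
            day += timedelta(days=1)
        return day
    if rule == "do not select":
        return None                     # the run cycle selects no day at all
    return day                          # no freeday rule specified, keep the freeday

# Hypothetical example: the weekend of 13-14 April 2002 consists of freedays.
freedays = {date(2002, 4, 13), date(2002, 4, 14)}
print(apply_freedays_rule(date(2002, 4, 13), freedays, "nearest workday before"))   # 2002-04-12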


Figure 5-51 shows the Rule page of the Simple Run Cycle window. This is where
you can select the days that will be a part of the simple run cycle.

Figure 5-51 Rule page of Simple Run Cycle window; 10th of April is selected

The Rule page of the Simple Run Cycle window looks very much like the window
used when editing a calendar. You can select or de-select days by clicking them.
You can also switch from Monthly to Yearly mode if you want to view the whole
year instead of just one month. To do this, click the Yearly tab at the top of the
window.

Weekly run cycles


Weekly run cycles are used to specify days of the week. Weekly run cycles can
also be used to specify all weekdays, all freedays, all workdays, or everyday.
The General page for a weekly run cycle looks just like the one for a simple run
cycle. See Figure 5-50 on page 307.
Figure 5-52 on page 309 shows the Rule page of the Weekly Run Cycle window.
This is where you select the days of the week you want to be a part of the run
cycle.


Figure 5-52 Rule page of Weekly Run Cycle window

In Figure 5-52, Tuesday and Thursday are selected. Note that it is also possible
to select weekdays, freedays, work days, or everyday.

Calendar run cycles


If you have several job streams that all run on the same days, you might want to
create a separate calendar containing these specific days, and then reference
that calendar in each job stream using a calendar run cycle. Calendar run cycles
can also be exclusive, meaning that the job stream will not run on days in the
specified calendar.
The General page for a calendar run cycle looks just like the one for a simple run
cycle. See Figure 5-50 on page 307.
The Rule page of the Calendar Run Cycle window is where you specify the
predefined calendar upon which the run cycle will be based. This calendar must
already exist in the database. You can find it by clicking the ellipsis button (...).
See Figure 5-53 on page 310.


Figure 5-53 Rule page of Calendar Run Cycle window; LOTTA selected

In Figure 5-53, the calendar LOTTA has been selected. Note that it is also
possible to specify an offset from the days defined in the calendar. The offset can
be positive or negative, and can be specified in days, workdays, or weekdays. In
this example, no offset (an offset of 0) is specified.
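For the simple case of an offset expressed in plain days, the effect is just a date shift of every day in the referenced calendar. The sketch below is illustrative only and uses hypothetical calendar days; an offset in workdays or weekdays would additionally have to skip the non-qualifying days:

from datetime import date, timedelta

# Illustrative only: a calendar run cycle with an offset expressed in days
calendar_days = {date(2002, 4, 10), date(2002, 4, 24)}    # hypothetical days in the referenced calendar
offset_days = 2                                           # a positive offset of 2 days
selected_days = sorted(d + timedelta(days=offset_days) for d in calendar_days)
print(selected_days)   # [datetime.date(2002, 4, 12), datetime.date(2002, 4, 26)]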

Combining multiple run cycles


As noted above, it is possible to specify multiple run cycles, some inclusive and
others exclusive. The job stream will run on a day only if that day is specifically
included in a run cycle and not specifically excluded. The following figures show
our sample job stream with all three types of run cycle added: An exclusive
simple run cycle, an exclusive weekly run cycle, and an inclusive calendar run
cycle. The job stream will run only on days that are in the calendar specified and
are not in either of the exclusive run cycles specified.
In run cycle mode, when you select a run cycle you have added, the calendar on
the right will display the days included (blue) or excluded (red).
The red bars in Figure 5-54 on page 311 indicate the days that have been
specifically excluded by the exclusive run cycle selected from the list on the left.


Figure 5-54 An exclusive weekly run cycle

The blue bar in Figure 5-55 on page 312 indicates the single day that has been
specifically included by the inclusive calendar run cycle selected from the list on
the left.


Figure 5-55 An inclusive calendar run cycle; the 10th of April is included

The white bars on weekend days in the above figures indicate that these days
are freedays. Freedays are defined on a per-job stream basis. By default,
freedays are Saturdays, Sundays, and the days defined in the holidays calendar.
The freeday settings can be changed for any job stream.

Specifying freedays for a job stream


As mentioned, by default freedays include Saturdays, Sundays, and the days in the holidays calendar. It is possible to specify different freedays for a job stream. You can specify a calendar other than holidays as the freedays calendar. You can also specify whether Saturdays or Sundays are considered freedays.
These options can be specified in the job stream properties. The settings made
here apply to all run cycles added to the job stream. For information about how
run cycles are affected on freedays, see The freedays rule on page 307.
In Figure 5-56 on page 313, you can see a job stream called PAYROLL that is set
to use a freedays calendar called UK-HOLS.


Figure 5-56 Freedays calendar for job stream set to UK-HOLS calendar

You can also see the option to specify whether Saturday or Sunday is considered
a freeday.
These freedays settings apply only to the job stream where the change is made.

5.6 General JSC enhancements


In this section, we introduce the new features in Job Scheduling Console 1.2,
including:
The filter row
Sorting list results
Non-modal windows
Copying jobs from one job stream to another


5.6.1 The filter row


The old Maestro gconman program had the ability to save filters, so that for a
given user only certain scheduling objects would be displayed. The usefulness of
this feature was limited though, because each user could save only one set of
filter criteria for a particular object type. Job Scheduling Console introduced a
new way of listing and working with scheduling objects: Query lists. These lists
are essentially saved queries. Refreshing a list issues the query again.
JSC 1.2 introduces a new way of quickly filtering the results of a query: The filter
row. The filter row allows you to specify a filter that will be applied to the
displayed results of a list. The most common use of the filter row is to narrow the
results of a query that returned a large number of results. For example, if you
load the All Job Streams list, there may be a very large number of job streams
displayed. By enabling a filter for the job stream name, you can quickly find the
job streams in which you are interested.

Enabling the filter row


To enable the filter row you can perform the following:
1. First load a query list.
2. Next, click the right-pointing black triangle to reveal the pop-up menu.
3. This menu contains functions related to filtering and sorting lists. Select Show
Filter Row from the pop-up menu.

Figure 5-57 Enabling the filter row


The filter row will then appear just below the title row. A filter can be set for any
column displayed (Figure 5-58).

Figure 5-58 A job streams list with the filter row displayed but no filters set

4. To set a filter, click the Filter button below the column heading. If no filter is set, this button will be labeled <no filter>. In our example, to set a filter for the name of a job stream, we click the <no filter> button just below the Name column heading. The Edit Filter window appears, as in Figure 5-59.

Figure 5-59 Editing a filter

In this case we want to see only the job streams that contain ACC in their names.
You can also set a filter for objects whose names start with the filter text.
With the filter enabled, only job streams whose names contain ACC are displayed
(Figure 5-60 on page 316).


Figure 5-60 A job streams list displayed with a filter for ACC set

Note that it was not necessary to change the properties of the query list; we
simply filtered the results from the list.
5. If we now click the gray bar beneath the ACC filter button, we can quickly
disable that particular filter (Figure 5-61 on page 317).


Figure 5-61 A job streams list with a filter for the name ACC set, but disabled

With the filter disabled, the results displayed are the same as when no filter at all was set. The advantage of having a filter set but disabled is that you can quickly turn filters on and off to see only the results in which you are interested.
Smart use of filters can save you a lot of time. You do not have to reload a list just
to find a specific object or group of objects. Just set a filter or two and find the
pertinent objects quickly.
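The filter row is essentially a match applied to the results that are already loaded, which is why no new query is issued. The following Python sketch uses hypothetical job stream names purely for illustration:

# Illustrative only: results already returned by a query list (hypothetical names)
loaded_job_streams = ["ACCPAY01", "ACCINV02", "PAYROLL", "BACKUP", "MISCACC"]

# A "contains" filter on the Name column, like the ACC filter in Figure 5-59
name_filter = "ACC"
displayed = [js for js in loaded_job_streams if name_filter in js]
print(displayed)              # ['ACCPAY01', 'ACCINV02', 'MISCACC']

# Disabling the filter shows the loaded results again, without issuing a new query
print(loaded_job_streams)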
Tip: A good strategy for getting the most out of JSC is to define query lists that
narrow down the search to a large group (say, your department or
geographical location), and then use filters to find the specific object or small
set of objects that you seek.

5.6.2 Sorting list results


With Job Scheduling Console 1.2, it is possible to sort the results displayed when
a list is loaded. The sort can be in ascending or descending order. To sort
displayed results, simply move the mouse pointer over any of the column
headings. For example, if you want to sort the list by name, move the mouse
pointer over the Name column heading and click one of the sort buttons that
appear. Two sort buttons will appear: One for sorting in ascending order, and


another for sorting in descending order. To sort in ascending order, click the
button that looks like an upward pointing triangle; to sort in descending order,
click the button that looks like a downward pointing triangle. In Figure 5-62, you
can see what the sort buttons look like.

Figure 5-62 A list of jobs

After clicking the right-hand button, the list of jobs is sorted in descending
(reverse alphabetical) order according to the job name. (See Figure 5-63.)

Figure 5-63 A list of job definitions sorted in descending order by name

If you wish to undo any sort that has been applied to a list, simply choose Clear
Sort from the pop-up menu on the right side of the window, as in Figure 5-64 on
page 319.


Figure 5-64 The Clear Sort function

5.6.3 Non-modal windows


Most of the windows in Job Scheduling Console 1.2 are non-modal. That is, you
can put them in the background and continue working in other open JSC
windows. For example, you can open a Job Stream Properties window for one
job stream, and then put that window in the background, going back to the list of
job streams or to another window. You can also open two job streams in separate
windows and copy jobs from one job stream to the other.

5.6.4 Copying jobs from one job stream to another


Job Scheduling Console 1.2 also allows you to copy jobs from one job stream to
another. Jobs are often used in multiple job streams. Copying jobs from an
existing job stream to a new job stream is much quicker than adding the jobs
from scratch. The dependencies set in the source job stream are copied with the
jobs, so it is not necessary to set them again. This feature allows you to speed up
the process of creating new job streams.
The first step is to open the source and target job streams in separate windows.
Once we have done that, we can select the job in the source job stream and copy
it. The simplest way to copy the job is by right clicking the job and choosing Copy
from the pop-up menu, as in Figure 5-65 on page 320.


Figure 5-65 Copying a job from a job stream

The next step is to paste the job into the target job stream. Right-click in the Job
Stream Editor window of the target job stream and choose Paste from the
pop-up menu (Figure 5-66).

Figure 5-66 Pasting a job into a job stream

At this point, the Job Properties window for that job will appear (Figure 5-67 on page 321). Here is where the properties of the job in the new job stream are specified. The properties set in the source job stream will be copied into the target job stream; the Job Properties window gives you an opportunity to change the properties if they need to be different in the target job stream from what they were in the source job stream.


Figure 5-67 Window appears when you paste a job into a job stream

Note: When copying a TWS job from one job stream to another, it may be
necessary to specify the name and workstation of the job in the Job Properties
window. You can specify these by clicking on the ellipsis button (...). This
limitation will probably be removed in a future version of JSC.

Now the job should have been copied to the target job stream, as shown in
Figure 5-68.

Figure 5-68 The job ACCJOB01 has been copied to the new job stream


5.7 Common Default Plan Lists in JSC


If your JSC scheduling view includes multiple scheduling engines, you can list
job stream instances and job instances for more than one engine at a time. This
also applies if you have a mixed environment of Tivoli Workload Scheduler for
z/OS controllers and Tivoli Workload Scheduler masters.
We have two TWS for z/OS production controllers and one TWS production
master. We would like our operators to monitor all three scheduling
environments, and take appropriate action if any job fails in any of the
schedulers. Using legacy interfaces to the three schedulers, operators have to
shift between three different systems or interfaces repeatedly to check if any job
has ended in error.
Using the JSC common plan lists, we can create one list in the JSC, and the operator can monitor all three schedulers at the same time in one place. The list can be refined further to refresh automatically, for example, every two minutes. Jobs in error will then automatically appear in the refreshed list in our JSC window.
After installing the JSC on your workstation, you will have two common plan lists
predefined in the JSC Common Default Plan Lists group entry (see Figure 5-69
on page 323).
If you want, you can create additional common plan lists that use particular
filters, such as a subset of all the engines. However, common plan lists are
limited to job and job stream instances. Common plan lists provide fewer details
about the job and job stream instances than engine-specific plan lists. If you are
connected to a single engine, or if you want to see more information about the
instances of a specific engine, you should use the plan lists for that engine.


Important: The name of the predefined JSC Common Default Plan Lists is
hard coded and cannot be changed.

Figure 5-69 Common Default Plan Lists group and default lists in the group

5.7.1 Example showing how to use common list of job instances


In this section we will show how to:
Create a common list of job instances.
Work with jobs in a common list of job instances.

Our task is to create a common list of job instances for our TWS for z/OS controller instance (TWSC) and for our TWS master (Yarmouth-A) instance. The common list should show all jobs in error in the two engines and should be automatically refreshed every two minutes.

Create a common list of job instances


First we create a new common list of job instances.
1. This is done by right clicking the Common Default Plan Lists group entry,
shown in Figure 5-69.
JSC will then show a pop-up window, where we can select Create Plan List
as in Figure 5-70 on page 324.


2. Selecting Create Plan List gives a new pop-up window with two entries: Job
Stream Instance... and Job Instance....

Figure 5-70 Creating a new entry in the Common Default Plan List group

3. Since our task is to create a job instance list in the Common Default Plan List
group, we select Job Instance....
JSC shows a new window, the Properties - Job Instance Common List (see
Figure 5-71 on page 325).


Note: If you are going to create a job stream instance list, you can simply click
Job Stream Instance..., shown in Figure 5-70 on page 324. The process is
the same for creating Job Stream Instance Lists as for creating Job Instance
Lists.

Figure 5-71 JSC Properties window used to create a common job instance list

4. In the Properties - Job Instance Common List, we specify the name for the
list, the job status and the names of the engines we want to be in the list. We
name the list CommonErrorList, simply by typing this name in the Name field
(see Figure 5-72 on page 326).
5. Then we select the job status we want to track by clicking the grey box to the
right of the Status field.
6. This gives us a new pop-up window, where we check the Error status (see
Figure 5-72 on page 326).


Figure 5-72 The Status pop-up window where we check the error code

7. Next we have to define which engines we want to include in the common list.
This is done by clicking the grey box to the right of the Engine field name. In
the new pop-up window (shown in Figure 5-73 on page 327) we un-check the
engines that should not be part of our common list (TWSC-F100-Eastham,
TWSC-F200-Yarmouth, and Yarmouth-B).


Figure 5-73 Unchecking the engines that should not be part of the common list

8. Finally, since we want the list to be automatically refreshed every two minutes, we check the Periodic Refresh box at the top of the Properties - Job Instance Common List window (Figure 5-74 on page 328) and type the refresh rate 120 (in seconds) in the Period field.
The periodic refresh rate will take effect every time the CommonErrorList
window is opened in the JSC.
Note: Be careful not to specify a periodic refresh rate that is too low, since it
can degrade the JSC performance as well as your work with other lists in the
JSC.


Figure 5-74 The periodic refresh is activated and set to 120 seconds (2 minutes)

9. We are almost finished now. To save our new Job Instance Common List
view, we have two possibilities:
a. Click the OK button (Figure 5-74).
If you click the OK button, the new Job Instance Common List view will be
saved and the window will be closed.
b. Click the Apply button (Figure 5-74).
If you click the Apply button, the new Job Instance Common List view will
be saved, but the window will not be closed. The JSC will try the new view
to see if there are any jobs satisfying the search criteria (error job status
and job run on engine TWSC or Yarmouth-A).
Note: Using the Apply button is an efficient way to try the filter criteria right
away and to see the result of your filter. Since the Properties window is not
closed, it is very easy to change filter criteria and retry the changed filter.

10. In Figure 5-75 on page 329 you can see the result after clicking the Apply button. The Properties window is still there, and JSC has created a list of jobs satisfying our filter criteria (jobs in error in the TWSC or Yarmouth-A engines).


When you are satisfied with the filter and the result of the filter, you simply
click the OK button in the Properties window.

Figure 5-75 JSC Properties window with the filter specification and the result

11. Maybe you would like to have an extra window with only the jobs in error. This can be done by clicking the Detach button at the top of the JSC window (see Figure 5-76).

Figure 5-76 The Detach button

Using the Detach button, JSC will detach the CommonErrorList into its own window. This window will then stay open while you are working with other lists in the JSC. By checking the window once in a while, you can see if there are any new jobs in error. Remember that we have defined the detached window to be automatically refreshed every two minutes.


Note: You can have up to seven detached windows running at the same time,
though you should be aware of performance degradation if all these windows
are using periodic refresh options.

Figure 5-77 A detached JSC window

From the detached window in Figure 5-77, it can be seen that we have five jobs in error in the TWSC controller (engine) and six jobs in error in the Yarmouth-A master (engine). The total number of jobs in error is 11 (shown at the bottom of the window). The window will be periodically refreshed every 120 seconds (also shown at the bottom of the window).
12. If we want to check details for a particular job in the list in Figure 5-77, we simply right click the job, or double click the job to see all jobs in the job stream for the job in error. In Figure 5-78 on page 331, we right click a z/OS job in error in the TWSC controller.


Figure 5-78 Pop-up window shown when clicking a TWS for z/OS job in error

Note: The selectable options or entries in the pop-up window shown in Figure 5-78 will vary depending on which workstation and which engine (TWS for z/OS or TWS) the job has been executed on. Furthermore, it will not be possible to select some of the entries, because they are not allowed for the job status or are not available for the job.

In Figure 5-79 on page 332 you will see a pop-up window for a TWS job that ended in error. Note the differences between the pop-up windows shown in Figure 5-78 and Figure 5-79 on page 332. As explained before, this difference reflects that the two jobs are being handled by two different engines: TWS for z/OS and TWS.


Figure 5-79 Pop-up windows shown when right clicking a TWS job in error

5.7.2 When to use the TWS for z/OS legacy interfaces


Some administration functions not present in the JSC can be performed through the legacy TWS for z/OS interfaces, for example, the ISPF panels or PIF user interfaces. In the following list we have summarized the administration functions currently not present in the JSC:
TWS for z/OS databases
Access the calendar, period, operator instruction, event-triggered tracking, and JCL variable databases. Edit JCL from the job stream database.
TWS for z/OS long-term plan
All long-term plan related tasks.
TWS for z/OS batch jobs
Invoke TWS for z/OS batch jobs so that you can, for example, extend the long-term plan and current plan, generate different reports, and redistribute the Symphony file (in an end-to-end environment).
TWS for z/OS service functions
Access TWS for z/OS service functions so that, for example, you can stop or start job submission, automatic recovery, and event-triggered tracking.


Chapter 6.

Troubleshooting in a TWS
end-to-end environment
In this chapter we help you become familiar with the identification and isolation of the most common problems encountered in the Tivoli Workload Scheduler end-to-end solution. In order to have a common troubleshooting chapter for the entire Tivoli Workload Scheduler for z/OS product, we also mention troubleshooting methods based on Tivoli Workload Scheduler for z/OS only.


6.1 Troubleshooting for Tivoli Workload Scheduler for z/OS
We will show you how to identify the distinguishing features of a problem that will help you obtain a solution. The answer might then be found in the manuals, but often it is not possible to get a solution or circumvention without involving the Tivoli support structure. However, this information can be helpful for the first analysis or for providing the right documentation when you are facing a problem. A good guideline for this chapter is the Tivoli Workload Scheduler for z/OS V8R1 Diagnosis Guide and Reference, LY19-6410.
To identify an error, you must first gather information related to the problem, such
as abend codes and dumps. You can then determine whether the problem is in
Tivoli Workload Scheduler for z/OS. If the problem is in Tivoli Workload
Scheduler for z/OS, this chapter helps you classify and describe the problem.
The external symptoms of several problems are described to help you identify
which problem type to investigate. Each problem type requires a different
procedure when you describe the problem. Use these procedures to build a
string of keywords and to obtain documentation relevant to the problem. This
combination of a keyword string and associated documentation helps you
describe the problem accurately to the Tivoli service personnel.

6.1.1 Using keywords to describe a problem


A keyword is a word or abbreviation that describes a single aspect of a program
failure to the Tivoli Support Center. You use keywords to describe all aspects of a
problem, from the Tivoli Workload Scheduler for z/OS component ID to the area
of failure. You then use the problem analysis procedures to build a keyword
string. For example, if your program failure is due to the abnormal termination of
a task, the keyword is ABEND. Other keywords are also formed to describe
particular aspects of abnormal termination, such as the name of the module
where the abend occurred. These keywords are then combined to form a
keyword string.
Let us look at the following example:
5697WSZ01 ABEND0C4 EQQYVARG

In this example, 5697WSZ01 is the Tivoli Workload Scheduler for z/OS component ID, ABEND is the problem type, and 0C4 is the abend code. EQQYVARG is the module containing the abend.

334

End-to-End Scheduling with Tivoli Workload Scheduler 8.1

6.1.2 Searching the software-support database


To determine if the problem has been noted before, you can use the keyword
string that you create to search the software-support database. If a problem
similar to yours is described in the database, a solution is probably available. To
widen or narrow the database search, you can vary the keyword string you
develop. If you have access to the Tivoli support database, you can use the
keyword string to search for solutions of problems similar to yours. Link to the
Tivoli support database at the following URL:
http://www.tivoli.com/support/

6.1.3 Problem-type keywords


The problem-type keywords are used to identify the failure that occurred.
Table 6-1 lists the keywords and the problem types they identify.
Table 6-1 Keywords

Keyword      Meaning
ABEND        Abnormal end
ABENDU       Abnormal end with user abend code
DOC          Documentation
LOOP         Loop
WAIT         Wait
MSG          Message
PERFM        Performance
INCORROUT    Incorrect output

ABEND
Choose the ABEND keyword when the Tivoli Workload Scheduler for z/OS
program comes to an abnormal end with a system abend code. You should also
use ABEND when any program that services Tivoli OPC (for example, VTAM)
terminates it abnormally, and one of the following symptoms appears:
An abend message at an operator console. The abend message contains the
abend code and is found in the system console log.
A dump is created in a dump dataset.


ABENDU
Choose the ABENDU keyword when the Tivoli Workload Scheduler for z/OS
program comes to an abnormal end with a user abend code and the explanation
of the abend code states that it is a program error. Also, choose this keyword
when a user abend (which is not supposed to signify a program error) occurs
when it should not occur, according to the explanation. If a message was issued,
use the MSG keyword to document it.

DOC
Choose the DOC keyword when one or more of the following symptoms appears:
There is incomplete or inaccurate information in a Tivoli OPC publication.
The published description of Tivoli OPC does not agree with its actual
operation.

INCORROUT
Choose the INCORROUT keyword when one or more of these symptoms
appears:
You received unexpected output, and the problem does not appear to be a
loop.
The output appears to be incorrect or incomplete.
The output is formatted incorrectly.
The output comes from damaged files or from files that are not set up or
updated correctly.

LOOP
Choose the LOOP keyword when one or more of the following symptoms exists:
Part of the program (other than a message) is repeating itself.
A Tivoli Workload Scheduler for z/OS command has not completed after an
expected period of time, and the processor usage is at higher-than-normal
levels.
The processor is used at higher-than-normal levels, a workstation operator
experiences terminal lockout, or there is a high channel activity to a Tivoli
Workload Scheduler for z/OS database.


MSG
Choose the MSG keyword to specify a message failure. Use this keyword when
a Tivoli Workload Scheduler for z/OS problem causes an error message. The
message might appear at the system console or in the message log, or both. The
messages issued by Tivoli Workload Scheduler for z/OS appear in the following
formats:
EQQFnnnC
EQQFFnnC
EQQnnnnC

The message is followed by the message text. The variable components
represent:
F or FF: This is the Tivoli Workload Scheduler for z/OS component that
issued the message.
nn, nnn, or nnnn: This is the message number.
C: Severity code of I (information), W (warning), or E (error).

The following are message number examples:
EQQN008E
EQQW110W
EQQF008I

If the message that is associated with your problem does not have the EQQ
prefix, your problem is probably not associated with Tivoli Workload Scheduler
for z/OS, and you should not use the MSG keyword.

PERFM
Choose the PERFM keyword when one or more of the following symptoms
appears:
Tivoli Workload Scheduler for z/OS event processing or commands, including
commands entered from a terminal in session, take an excessive amount of
time to complete.
Tivoli Workload Scheduler for z/OS performance characteristics do not meet
explicitly stated expectations. Describe the actual and expected
performances and the explicit source of the performance expectation.


WAIT
Choose the WAIT keyword when one or more of the following symptoms
appears:
The Tivoli Workload Scheduler for z/OS program, or any program that
services this program, has suspended activity while waiting for a condition to
be satisfied without issuing a message to indicate why it is waiting.
The console operator cannot enter commands or otherwise communicate
with the subsystem and Tivoli Workload Scheduler for z/OS does not appear
to be in a loop.

6.1.4 Problem analysis procedures


This section details the procedures that you use to further describe a problem.
When you have chosen a problem-type keyword, you collect problem documentation
and create a keyword string to describe the problem. To do this, gather the
information for the specific problem type using the appropriate procedure:
System or user abnormal termination procedure (ABEND or ABENDU)
Documentation procedure (DOC)
Incorrect output procedure (INCORROUT)
Loop procedure (LOOP)
Message procedure (MSG)
Performance procedure (PERFM)
Wait procedure (WAIT)

6.1.5 Abnormal termination (ABEND or ABENDU) procedure


A malfunction in the system can cause an abnormal termination (ABEND).
Abend categories are:
User ABEND
System ABEND

User abends originate in the application program. Abend codes are documented
in Appendix A, Abend Codes of the Tivoli Workload Scheduler for z/OS V8R1
Diagnosis Guide and Reference, LY19-6410, and Tivoli Workload Scheduler for
z/OS Messages and Codes, SH19-4548.
A common user abend is 3999, as shown in Example 6-1 on page 339.


Example 6-1 User abend 3999


Explanation: An internal validity checking has discovered an error condition
(internal Tivoli OPC error). A message that contains the reason for the abend,
as well as other debugging information, is written to the Tivoli OPC diagnostic
file, EQQDUMP.
Problem determination: None.
System programmer response: Call your IBM representative.

You may find the text "Data router task abended while processing the
following queue element" in the system log, together with a symptom dump (Example 6-2).
Example 6-2 Symptom dump output
IEA995I SYMPTOM DUMP OUTPUT
USER COMPLETION CODE=3999
TIME=15.46.40 SEQ=00456 CPU=0000 ASID=0031
PSW AT TIME OF ERROR 078D1000
800618CE ILC 2 INTC 0D
ACTIVE LOAD MODULE
ADDRESS=00054DF8 OFFSET=0000CAD6
NAME=EQQBEX
DATA AT PSW 000618C8 - 00181610 0A0D1812 0A0D47F0
GPR 0-3 80000000 80000F9F 00000F9F 000844E8
GPR 4-7 C5D8D8C2 C4C54040 C5E7C9E3 40404040
GPR 8-11 00000000 00000001 00000F9F 00061728
GPR 12-15 00000000 001DA4C0 800579D2 00000000
END OF SYMPTOM DUMP

In addition, Tivoli Workload Scheduler for z/OS writes diagnostic information in
its EQQDUMP dataset:
EQQ0000T MODULE: EQQDXQPR, REASON: INVDEST

System abends can occur, for example, when a program instruction refers to a
storage area that does not exist anymore.

6.1.6 The diagnostic file (EQQDUMP)


When Tivoli Workload Scheduler for z/OS internal validity checking discovers
error conditions within the network communication function, debugging
information is written to the diagnostic file (defined by ddname EQQDUMP). For
serious error conditions, Tivoli Workload Scheduler for z/OS abends with user
code 3999 as well. The diagnostic information consists of the message EQQ0000T,
which gives the name of the module in error and the reason for the error in two
8-byte character strings. Tivoli Workload Scheduler for z/OS also writes a
formatted version of the trace table to the diagnostic file. In most situations, Tivoli
Workload Scheduler for z/OS will also snap the data that it considers to be in
error.


6.1.7 Trace information


Tivoli Workload Scheduler for z/OS maintains an internal trace to make it
possible to see the order in which its modules were invoked prior to an
abend. The trace is wrap-around, with an end mark after the last trace entry
added. Each entry consists of two 8-byte character fields: the Module Name
field and the Reason field. The end mark consists of a string of 16 asterisks
(X'5C'). For most abnormal terminations, a trace table is written to the
diagnostic file (EQQDUMP). These trace entries are intended to be used by
Tivoli support when diagnosing Tivoli Workload Scheduler for z/OS
problems. A trace entry with the reason PROLOG is added upon entry to a
module; similarly, an entry with EPILOG is added at the exit from the module.
When trace entries are added for other reasons, the reason is provided in the
Reason field.
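As a purely hypothetical illustration of this layout (the entries below are invented, reusing the module and reason names from the EQQDUMP message shown earlier; real entries are written by Tivoli Workload Scheduler for z/OS itself), the formatted trace table might look similar to the following, with the end mark after the most recent entry:

EQQDXQPR PROLOG
EQQDXQPR INVDEST
EQQDXQPR EPILOG
****************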

6.1.8 System dump dataset


An abnormal end (abend) of major tasks may affect the entire Tivoli Workload
Scheduler for z/OS PLEX (or sysplex) and can jeopardize the whole
production workload. The Recovery Termination Manager (RTM) of the operating
system produces valuable information for diagnostic purposes. Therefore, it is
extremely important to make sure that this information is kept in a dataset, called
the system dump dataset, for further analysis.
The sample JCL procedure for a Tivoli Workload Scheduler for z/OS address
space includes a SYSMDUMP DD statement, and a dump dataset is allocated by
the EQQPCS02 JCL created by EQQJOBS. SYSMDUMP is the dump format
preferred by the service organization. Ensure that the dump options for
SYSMDUMP include RGN, LSQA, TRT, CSA, and GRSQ on systems where a
Tivoli Workload Scheduler for z/OS address space will execute. To display the
current SYSMDUMP options, issue the z/OS command DISPLAY DUMP,OPTIONS.
You can use the CHNGDUMP command to alter the SYSMDUMP options; note that
this changes the parameters only until the next IPL. Do not forget to insert a
SYSMDUMP DD statement into the JCL of your PIF programs. It is very
important to use the right disposition for the dump dataset, because you have to
be sure that the dump written to the dataset will not be overwritten by main-task
or recursive abends. Therefore, we recommend that you use DISP=MOD. The
disadvantage of this disposition is that the dump dataset can fill up when
multiple dumps are written, so make sure that you save the dumps and clear
the dataset afterwards.
The following is a SYSMDUMP example:
//SYSMDUMP DD DISP=MOD,DSN=OPC.V2R3M0.DMP


Note that routing SYSMDUMP to SYSOUT (//SYSMDUMP DD SYSOUT=*) destroys the
internal format of the dump and renders it useless. If you experience an abend
and find no dumps in your dump datasets, have a look at your dump analysis and
elimination (DAE) setup. DAE can be used to suppress the creation of certain
kinds of dumps. See also the z/OS V1R3.0 MVS Initialization and Tuning Guide, SA22-7591.

6.1.9 LOOP procedure


If your problem type is LOOP, you should take the following steps:
Use the TWS for z/OS message log or system console log to help you identify
what happened just before the program loop occurred.
Obtain a dump using the z/OS DUMP command. The internal system trace is
very helpful for the Tivoli support representative when analyzing a loop. The
default trace table is 64 KB for each processor, which might not be enough
for a widespread loop. We recommend increasing the trace table
to 120 KB before obtaining the dump with the following console command:
/TRACE ST,120K

To become familiar with obtaining a console dump, see "Preparing a console
dump" on page 344.
Document instruction addresses from within the loop, if possible.
Provide a description of the situation leading up to the problem.

6.1.10 Message (MSG) procedure


If your Tivoli Workload Scheduler for z/OS problem type is MSG, you should take
the following steps:
Look up the message in Tivoli Workload Scheduler for z/OS V8R1 Messages
and Codes, SH19-4548, for an explanation. This manual includes information
on what action Tivoli Workload Scheduler for z/OS takes and what action the
operator should take in response to a message. If you plan to report the
problem, gather the documentation before you take action.
Copy the message identifier and the message text. The Tivoli Support Center
representative needs the exact message text.
Supplement the MSG keyword with the message identifier. You use the
supplemented keyword in your keyword string when searching the software
support database.


With OS/390 V2R5, Tivoli Workload Scheduler for z/OS introduced a new
task, EQQTTOP, that handles the communication with TCP/IP, which now
runs on UNIX System Services (USS) in full-function mode. EQQTTOP is
written in C in order to use the new C socket interface. New
messages are implemented, some of them pointing to other z/OS manuals.
Example:
EQQTT20E THE RECEIVE SOCKET CALL FAILED WITH ERROR CODE 1036

Explanation: An error was encountered when the TCP/IP communication task
attempted to issue a receive socket call to TCP/IP. The ERRNO value is the
error code returned by the failing socket call.
System action: Depending on the failing call, either the TCP/IP
communication task is terminated or the specific socket connection is closed.
Whenever possible, the task is automatically restarted. If the socket
connection was closed, it is reestablished.
System programmer response: Check the error code in the z/OS CS IP and
SNA manual and make any possible corrective action. If the error reoccurs,
save the message log (EQQMLOG) and contact your Tivoli representative.

To find the cause, look in the System Error Codes for socket calls chapter of the
z/OS V1R2 Communications Server: IP and SNA Codes, SC31-8791.
Table 6-2 Socket error codes

Error number   Message name       Error description
1036           EIBMNOACTIVETCP    TCP/IP is not active

New modify commands in Tivoli Workload Scheduler for z/OS are a handy way
to get important information very quickly. When you want to find out which Tivoli
Workload Scheduler for z/OS task is active or inactive (other than by looking into
MLOG for related messages), enter the command in SDSF shown in
Example 6-3.
Example 6-3 Modify command
/F procname,status,subtask
/*where procname is the subsystem name of engine or agent */

This will show you the tasks shown in Example 6-4.

Example 6-4 Task display
F TWSC,STATUS,SUBTASK
EQQZ207I NORMAL MODE MGR    IS ACTIVE
EQQZ207I JOB SUBMIT TASK    IS ACTIVE
EQQZ207I DATA ROUTER TASK   IS ACTIVE
EQQZ207I TCP/IP TASK        IS ACTIVE
EQQZ207I EVENT MANAGER      IS ACTIVE
EQQZ207I GENERAL SERVICE    IS INACTIVE
EQQZ207I JT LOG ARCHIVER    IS ACTIVE
EQQZ207I EXTERNAL ROUTER    IS ACTIVE
EQQZ207I WS ANALYZER        IS ACTIVE
The above screen shows that the general service task has an inactive status. To
find more details, have a look into MLOG. The modify commands are described
in the Tivoli Workload Scheduler for z/OS V8R1 Quick Reference, GH19-4541.

6.1.11 Performance (PERFM) procedure


If your problem concerns performance, you should:
Document the actual performance, the expected performance, and the
source of information for the expected performance. If a document is the
source, note the order number and page number of the document.
Document the information about your operating environment, such as the
number of active initiators, the number of TSO users, and the number of Tivoli
Workload Scheduler for z/OS users connected. Any user modifications to the
program exits, REXX programs, and command lists can affect performance.
You should consider whether the user-installed code, REXX programs, or
CLISTs are contributing to the problem.
Document any modifications to your system. Performance problems can be
related to various system limitations. Your market division representative
might be able to identify possible causes of a performance problem.

6.1.12 WAIT procedure


If your problem type is WAIT, you should take the following steps:
Research the activity before system activity was suspended, identifying which
operation is in the wait state.
Specify any messages that were sent to the message log or to the system
console.
Obtain a dump using the z/OS DUMP command. Check if the dump options
include RGN and GRSQ.
A wait state in the system is similar to a hang: processing is
suspended. Usually it is recognized by the system no longer being able to submit
jobs. A probable cause is that one task holds a resource while other tasks
are waiting until the owning task releases it. Such resource
contention happens frequently and is not serious if it is
resolved quickly. If you experience a long wait or hang, you can
display a possible resource contention by entering the command in SDSF shown
in Example 6-5.
Example 6-5 Display resource contention
COMMAND INPUT ===> /D grs,c

ISG343I 23.23.19 GRS STATUS 043
S=SYSTEMS SYSZDRK OPCATURN2
SYSNAME    JOBNAME   ASID   TCBADDR    EXC/SHR    STATUS
MCEVS4     OPCA      003F   007DE070   EXCLUSIVE  OWN

As you can see, two tasks are trying to get exclusive access to (a lock on) one
resource. Exclusive means that no other task can get the lock at the same time;
an exclusive lock is usually an update access. The second task has to wait until
the first one, which is currently the owner, releases it. Message ISG343I returns
two fields, called the major and the minor name. In our example, SYSZDRK is the major
name, and OPCATURN2 is the minor name. SYSZDRK represents the active
current plan, while the first four characters of the minor name represent your Tivoli
Workload Scheduler for z/OS subsystem name. With this information, you can
search for known problems in the software database. If you find no hint, your
Tivoli support representative may ask you for a console dump.

6.1.13 Preparing a console dump


The console dump contains a snapshot of virtual storage areas, just as a system
dump does. The major difference is that a system dump is created by the operating
system when an abnormal end happens, whereas a console dump has to be created by
you via z/OS commands from the system console. The dump options are very
important because they determine which parts of storage are dumped. For
waits or hangs, the GRSQ option must be turned on.
Example 6-6 shows the display of the current dump options.
Example 6-6 Display dump option
COMMAND INPUT ===>/d d,o
RESPONSE=MCEVS4
IEE857I 18.22.00 DUMP OPTION 371
SDUMP- ADD OPTIONS (ALLPSA,SQA,LSQA,RGN,LPA,TRT,CSA,SWA,SUMDUMP,
ALLNUC,Q=YES,GRSQ),BUFFERS=00000000K,
MAXSPACE=00001200M,MSGTIME=99999 MINUTES


SDUMP indicates the options for SYSMDUMP, which is the preferred type of
dump. The options shown are sufficient for almost every dump in Tivoli Workload
Scheduler for z/OS. For a detailed explanation, refer to the z/OS system
commands documentation for the SDUMP options. If one of these options is missing,
you can change it with the change dump command (CD). For GRSQ, as an example:
CD SET,SDUMP=(GRSQ)

You need to be sure that the dump datasets, which have been provided by the
z/OS installation, are available for use (Example 6-7).
Example 6-7 Display dump datasets
COMMAND INPUT ===>/d d,t
RESPONSE=MCEVS4
IEE853I 18.43.40 SYS1.DUMP TITLES 385
SYS1.DUMP DATA SETS AVAILABLE=003 AND FULL=000
CAPTURED DUMPS=0000, SPACE USED=00000000M, SPACE FREE=00001200M

The previous example shows that all three dump datasets can be used for console
dumps. If not, you can clear one of them; make sure that nobody needs its contents
anymore. To clear a particular dump dataset, you can issue the following command:
/DD CLEAR,DSN=00

In this case, SYS1.DUMP00 is cleared.

6.1.14 Dump the failing system


Now, you are ready to obtain the console dump for further analysis.
Run the dump command as in Example 6-8.
Example 6-8 Dump command
COMMAND INPUT ===> dump comm=(demo)
19:13:27.27 SFRA4  00000290  DUMP COMM=(DEMO)
19:13:27.30 SFRA4  00000090 *17 IEE094D SPECIFY OPERAND FOR DUMP COMMAND

Enter the outstanding reply number, 17, as in Example 6-9.


Example 6-9 Dump the address space
COMMAND INPUT ===> /17,tsoname=(opca)
SFRA4  00000290  R 17,TSONAME=(OPCA)
SFRA4  00000090  IEE600I REPLY TO 17 IS;TSONAME=(OPCA)
       00000090  IEA794I SVC DUMP HAS CAPTURED: 482
       00000090  DUMPID=049 REQUESTED BY JOB (*MASTER*)
       00000090  DUMP TITLE=DEMO


       00000290  IEF196I IGD100I 40AA ALLOCATED TO DDNAME SYS00273
       00000290  IEF196I IEF285I   SYS1.DUMP01
       00000290  IEF196I IEF285I   VOL SER NOS= O260C1.
       00000090  IEA611I COMPLETE DUMP ON SYS1.DUMP01

Do not be confused by the TSONAME parameter; it specifies the name of the
address space to be dumped. Alternatively, you can use ASID(hex) as well.
Dump processing finished successfully, as indicated by message IEA611I.
Verify the existence of this message before you provide the dump to your
local Tivoli support.

6.1.15 Information needed for all problems


Even when you are unable to identify a problem type, you should gather the
following information for any problem you have. Begin your initial problem
analysis by examining the contents of the message log dataset with the following
steps:
1. Obtain a copy of the Tivoli Workload Scheduler for z/OS message log. This is
a sequential dataset defined by the EQQMLOG ddname.
2. Record the Tivoli Workload Scheduler for z/OS component ID: 5697-WSZ01.
The component ID should be the first keyword in the string preceding the
problem type and other modifier keywords.
3. Record the maintenance level for all operating environments, particularly
those for z/OS, JES, ISPF, and RACF.
4. Document any additional program temporary fixes (PTFs) or APARs that have
been applied to your level of Tivoli Workload Scheduler for z/OS.
5. If the problem is within the network communication function, obtain copies of
the EQQDUMP file.
6. Obtain copies of the Tivoli Workload Scheduler for z/OS diagnostic files
defined to the user address space and to the subsystem address space by
SYSMDUMP.
7. Obtain a copy of the system log.
8. Reconstruct the sequence of events leading to the problem. Include any
commands entered just before the problem occurred. Write down the exact
events that led to the problem:


What was the first indication of the problem?


What were you trying to do?
What should have happened?
What did happen?
Can you recreate the problem?


9. Specify any unique information about the problem or about your system:
Indicate any other applications that were running when the problem
occurred.
Describe how Tivoli Workload Scheduler for z/OS was started.
Describe all user modifications to active Tivoli Workload Scheduler for
z/OS programs.
If more information is needed, a Tivoli Support Center representative will guide
you concerning any additional diagnostic traces that you can run.

6.1.16 Performing problem determination for tracking events


Successful tracking of jobs in Tivoli Workload Scheduler for z/OS relies on the
creation of different events that are written in the agent address space and
processed by the engine. The engine waits for the arrival of these events and
updates the current plan accordingly.
The different tracking events are listed in Table 6-3.
Table 6-3 Tracking events

Event number   Event name              Meaning
1              Reader event            A job has entered the system.
2              Start event             A job has started to execute.
3S             Step end event          A step has finished execution.
3J             Job end event           A job has finished execution.
3P             Job termination event   A job has been added to the output queue.
4              Print event             An output group has been printed.
5              Purge event             A job output has been purged from the JES spool.


The events are prefixed with either A (for JES2) or B (for JES3). At least the set of
type 1, 2, 3J, and 3P events is needed to correctly track the several stages of a
job's life. The creation of step-end events (3S) depends on the value you specify
in the STEPEVENTS keyword of the EWTROPTS statement. The default is to
create a step-end event only for abending steps in a job or started task. The
creation of print events depends on the value you specify in the PRINTEVENTS
keyword of the EWTROPTS statement. By default, print events are created.
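For reference, both keywords are set on the EWTROPTS initialization statement of the event writer. The following statement is only a sketch with example values (not a recommendation); check the EWTROPTS settings that are active in your own parameter library:

EWTROPTS STEPEVENTS(NZERO)
         PRINTEVENTS(YES)

With these example settings, step-end events would be created for steps that end with a non-zero completion code, and print events would be created (which is also the default).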
If you find that the current plan status of a job does not reflect its status in JES,
you may have missing events. A good starting point is to run the Tivoli Workload
Scheduler for z/OS AUDIT package for the affected occurrence to easily see
which events were processed by the engine and which are missing, or you can
browse your event datasets for the job name and job number to verify which
events were not written.
Problem determination depends on which event is missing and whether the
events are created on a JES2 or JES3 system. In Table 6-4, the first column
refers to the event type that is missing, and the second column tells you what
action to perform. The first entry in the table applies when all event types are
missing (when the event dataset does not contain any tracking events).
Table 6-4 Problem determination of tracking events

Type ALL
1. In the EQQMLOG dataset, verify that the event writer has started successfully.
2. Verify that the definition of the EQQEVDS ddname in the Tivoli Workload
   Scheduler for z/OS started-task procedure is correct, that is, that the events
   are written to the correct dataset.
3. Verify that the required exits have been installed.
4. Verify that the IEFSSNnn member of SYS1.PARMLIB has been updated correctly
   and that an IPL of the MVS system has been performed since the update.

Type A1
If both A3P and A5 events are also missing:
1. Verify that the Tivoli OPC version of the JES2 exit 7 routine has been
   correctly installed. Use the $T EXIT(7) JES command.
2. Verify that the JES2 initialization dataset contains a LOAD statement and an
   EXIT7 statement for the Tivoli OPC version of JES2 exit 7 (OPCAXIT7).
3. Verify that the exit has been added to a load-module library reachable by
   JES2 and that JES2 has been restarted since this was done.
If either A3P or A5 events are present in the event dataset, call a Tivoli
service representative for programming assistance.

Type B1
1. Verify that the Tivoli Workload Scheduler for z/OS version of the JES3 exit
   IATUX29 routine has been correctly installed.
2. Verify that the exit has been added to a load-module library that JES3 can access.
3. Verify that JES3 has been restarted.

Type A2/B2
1. Verify that the job for which no type 2 event was created has started to
   execute. A type 2 event will not be created for a job that is flushed from
   the system because of JCL errors.
2. Verify that the IEFUJI exit has been correctly installed:
   a. Verify that the System Management Facility (SMF) parameter member SMFPRMnn
      in the SYS1.PARMLIB dataset specifies that the IEFUJI exit should be called.
   b. Verify that the IEFUJI exit has not been disabled by an operator command.
   c. Verify that the correct version of IEFUJI is active. If SYS1.PARMLIB defines
      LPALIB as a concatenation of several libraries, z/OS uses the first IEFUJI
      module found.
   d. Verify that the library containing this module was updated by the Tivoli OPC
      version of IEFUJI and that z/OS has been IPLed since the change was made.

Type A3S/B3S
If type 3J events are also missing:
1. Verify that the IEFACTRT exit has been correctly installed.
2. Verify that the SMF parameter member SMFPRMnn in the SYS1.PARMLIB dataset
   specifies that the IEFACTRT exit should be called.
3. Verify that the IEFACTRT exit has not been disabled by an operator command.
4. Verify that the correct version of IEFACTRT is active. If SYS1.PARMLIB defines
   LPALIB as a concatenation of several libraries, z/OS uses the first IEFACTRT
   module found.
5. Verify that this library was updated by the Tivoli Workload Scheduler for z/OS
   version of IEFACTRT and that z/OS has been IPLed since the change was made.
If type 3J events are not missing, verify in the EQQMLOG dataset that the event
writer has been requested to generate step-end events. Step-end events are only
created if the EWTROPTS statement specifies STEPEVENTS(ALL) or STEPEVENTS(NZERO)
or if the job step abended.

Type A3J/B3J
If type 3S events are also missing, follow the procedures described for type 3S events.
If type 3S events are not missing, call a Tivoli service representative for
programming assistance.

Type A3P
If A1 events are also missing, follow the procedures described for A1 events.
If A1 events are not missing, call a Tivoli service representative for
programming assistance.

Type B3P
1. Verify that the Tivoli Workload Scheduler for z/OS version of the JES3 exit
   IATUX19 routine has been correctly installed.
2. Verify that the exit has been added to a load-module library that JES3 can access.
3. Verify that JES3 has been restarted.

Type A4/B4
1. If you have specified PRINTEVENTS(NO) on the EWTROPTS initialization statement,
   no type 4 events are created.
2. Verify that JES has printed the job for which no type 4 events were created.
   Type 4 events will not be created for a job that creates only held SYSOUT datasets.
3. Verify that the IEFU83 exit has been correctly installed:
   a. Verify that the SMF parameter member SMFPRMnn in the SYS1.PARMLIB dataset
      specifies that the IEFU83 exit should be called.
   b. Verify that the IEFU83 exit has not been disabled by an operator command.
   c. Verify that the correct version of IEFU83 is active. If SYS1.PARMLIB defines
      LPALIB as a concatenation of several libraries, z/OS uses the first IEFU83
      module found.
   d. Verify that the library containing this module was updated by the Tivoli
      Workload Scheduler for z/OS version of IEFU83 and that MVS has been IPLed
      since the change was made.
   e. For JES2 users (A4 event), ensure that you have not specified TYPE6=NO on the
      JOBCLASS and STCCLASS statements of the JES2 initialization parameters.

Type A5
1. Verify that JES2 has purged the job for which no A5 event was created.
2. Ensure that you have not specified TYPE26=NO on the JOBCLASS and STCCLASS
   statements of the JES2 initialization parameters.
3. If A1 events are also missing, follow the procedures described for A1 events.
4. If A1 events are not missing, call a Tivoli service representative for
   programming assistance.

Type B5
1. Verify that JES3 has purged the job for which no B5 event was created.
2. If B4 events are also missing, follow the procedures described for B4 events.
3. If B4 events are not missing, call a Tivoli service representative for
   programming assistance.

6.2 Troubleshooting TWS for z/OS end-to-end solution


This section describes troubleshooting possibilities for the end-to-end components. It
helps you determine in which parts of the product to look for useful
diagnostic information and how to solve common error scenarios.

6.2.1 End-to-end working directory


The working directory is important for troubleshooting because it contains, for
example, the messages produced by several end-to-end processes and the
tracking events from the distributed environment. To look at the files in the
directory, you need a UNIX System Services UID and the proper
authorization. There are several ways to navigate through the directory.
Using the ishell

This kind of dialog uses ISPF services and is useful if you are not familiar
with the native shell environment.
The ishell can be run from the TSO command processor by entering ish on
the command line.

Using native shell

The native shell is similar to a UNIX shell, except that not all commands are
supported.
To use the native shell, enter omvs from the TSO command processor
command line.
We recommend using the ishell to better illustrate the directory layout. Run ish to
list the contents of your working directory. The directory is the same as the one
you defined in the wrkdir parameter of the topology member (in our installation,
/tws/twsctpwrk).
Example 6-10 Listing the work directory
Directory List
Select one or more files with / or action codes.
EUID=0
/tws/
Type     Filename
_ Dir    .
_ Dir    ..
l Dir    twsctpwrk

The list of files shown in Example 6-11 appears.


Example 6-11 Working directory layout
Type     Filename
_ Dir    .
_ Dir    ..
_ File   Intercom.msg
_ File   localopts
_ File   Mailbox.msg
_ Dir    mozart
_ File   NetConf
_ File   NetReq.msg
_ Dir    pobox
_ File   Sinfold
_ File   Sinfonia
_ Dir    stdlist
_ File   Symold
_ File   Symphony
_ File   Translator.chk
_ File   Translator.wjl

We will explain each file in more detail in Table 6-5 on page 356.


Table 6-5 Files and directory structure of UNIX System Services

File name           Explanation
Intercom.msg        Inter-process communication messages between the batchman and mailman processes
localopts           Customization-related parameters applying to this workstation
Mailbox.msg         Messages from other distributed workstations
mozart directory    Contains database objects
NetConf             Contains network tuning options for distributed workstations
NetReq.msg          Message file read by the netman process
pobox directory     Message queue files for inter-workstation communication
Sinfold             Old production plan file (Symphony)
Sinfonia            Copy of the Symphony file
stdlist directory   Contains the batchman, mailman, writer, and netman logs and the translator traces
Symold              Copy of the Sinfold file
Symphony            Symphony file created by Tivoli Workload Scheduler for z/OS
Translator.chk      Translator file
Translator.wjl      Translator file

Recommendation: Please do not modify or manipulate these files without
contacting your Tivoli support.

6.2.2 The standard list directory


The standard list (stdlist) directory contains the important files and directories
in which to find error messages related to end-to-end processing. It is subdivided
into several directories whose names correspond to the date the messages were
issued. The netman process generates a new directory at midnight and switches to it.


Example 6-12 Stdlist directory


Dir      ..
_ File   stderr
_ File   stdout
_ Dir    2002.02.21
_ Dir    2002.02.22
_ Dir    2002.02.23
l Dir    2002.02.25

If you list the directory for a specific date, you will see three files, as listed in
Example 6-13.
Example 6-13 Stdlist files
Type     Filename
_ Dir    .
_ Dir    ..
_ File   NETMAN
_ File   STC
_ File   TRANSLATOR

The NETMAN file holds all messages related to the netman process, while
batchman, writer, and mailman write to the STC file. The name of this file can vary
with the type of installation: the STC file plays the same role as the file that is
named after the TWS user ID in a Tivoli Workload Scheduler stdlist directory. The
TRANSLATOR file is used by the translator process.
Example 6-14 Batchman messages
BATCHMAN:01:20/Received Bl:
BATCHMAN:01:20/OPCMASTER#B73EA5E0519ECF25.J_010_F202DWTESTSTREAM #J1086
BATCHMAN:01:20/Jobman streamed
BATCHMAN:01:20/OPCMASTER#B73EA5E0519ECF25.J_010_F202DWTESTSTREAM (#J1086)
BATCHMAN:01:20/AWS22010075I Changing schedule B73EA5E0519ECF25 status to
BATCHMAN:01:20/EXEC
BATCHMAN:01:20/Received Us: F202
BATCHMAN:01:20/+ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
BATCHMAN:01:20/+ AWS22010001E Unable to stream job
BATCHMAN:01:20/+ OPCMASTER#B73EA5E0519ECF25.J_010_F202DWTESTSTREAM in file
BATCHMAN:01:20/+ DIR: Error launching Invalid argument:
BATCHMAN:01:20/+ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


6.2.3 The standard list messages


The messages you find within the files are all prefixed with AWS and an 8-digit
number, ending either with I for informational messages or with E for error
messages. Error messages are prefixed with a plus sign (+), too. The process
that issues the message is shown in the first column.
With Tivoli Workload Scheduler 8.1, new documentation, Tivoli Workload
Scheduler 8.1 Error Messages, SH19-4557, has been provided that gives you a
detailed explanation of the messages and the corresponding operator responses,
as shown in Example 6-15.
Example 6-15 Message explanation
22010069E Schedule (schedule name) is stuck, operator intervention required
Explanation: A schedule is in the STUCK state. Schedules will go stuck for
different reasons:
When jobs in the schedule cannot launch because they depend on a job that has
abended. This is the most common reason.
When operator intervention is required for a reply to a prompt on one of the
jobs in the schedule.
If a job on which another depends cannot launch because its priority is zero.
Schedule Name is the name of the schedule that is stuck.
Operator Response: Use internal policies to determine how to handle the abended
job. User can either cancel the job to satisfy the other job dependencies or
rerun it again successfully. For prompts, replying to the active prompt will
cause a change in status. For priorities, altering the value to any number
greater than 0 will cause a change in state as well.

You can use the message number as a search argument in the knowledge
database, which can be accessed from the following site:
http://www.tivoli.com/support

You might need to investigate the Tivoli Workload Scheduler for z/OS logs, the
batch job output, and the end-to-end server log, too, to get a complete picture of
the error you are facing:
The controller message log and the batch job output contain information about
the Symphony file creation and switch.
The server log includes the starter and translator log information.


6.2.4 Diagnose and fix problems with unlinked workstations


If the workstation is not linked as it should be, the cause of the problem could be
that the writer process has not been initiated correctly or that the run number of the
Symphony file on the fault tolerant workstation is not the same as the run number
on the TWS master. If you select the unlinked workstation and right-click it, you
get a pop-up menu, as shown in Figure 6-1. Then click Link to try to link
the workstation.

Figure 6-1 Link the workstation

You can check the Symphony run number and Symphony status in the legacy
ISPF using option 6.6 (Figure 6-2 on page 360).


Command ===>
Current plan created    : 02/02/28 11.02
Planning period end     : 02/03/01 13.30
Backup information:
  Last CP backup        : 02/02/28 11.45
  First logged event
  after backup          : 02/02/28 11.46
Daily planning status:
  Under production      : No
  NCP ready             : No
Symphony status:
  Symphony run number   : 118
  Under production      : No
  New Symphony ready    : No
In use ddname of:
  Current plan          : EQQCP2DS
Time stamp:               0102059F 16460266

Figure 6-2 Displaying the Symphony run number

If the workstation is Not Available/Offline, the reason could be that the
mailman, batchman, and jobman processes are not started on the fault tolerant
workstation. You can right-click the workstation to get the pop-up menu, and then
click Set Status.... This gives you a new panel where you can try to activate
the workstation by clicking the Active radio button (Figure 6-3 on page 361). This
action tries to start the mailman, batchman, and jobman processes on the fault
tolerant workstation by issuing a conman start command on the agent (see the
example after Figure 6-3).


Figure 6-3 Setting status to active
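If you can log on to the fault tolerant workstation itself, you can also check and start the local processes from the TWS command line there. The following commands are a sketch only (run them as the TWS user; the output depends on your installation): conman "sc @!@" shows the link state of all workstations in the Symphony file, and conman start starts batchman on the local workstation.

conman "sc @!@"
conman start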

If you still encounter link problems, take a closer look at the following
definitions:
Check whether the TPLGYPRM parameter in the SERVOPTS statement of the
end-to-end server and in the BATCHOPT statement points to the same topology
member.
Verify the host name and port number definitions in the topology member
(see the sketch after this list).

Ensure that the port numbers are equal within the entire end-to-end
environment.
Check your DOMREC and CPUREC definitions, especially the CPUNODE
and CPUTCPIP keywords.
If you modified the member, make sure that you run either a replan, a daily
plan extend, or a Symphony renew job.
Investigate in the end-to-end server log whether the Symphony file
has been successfully created and switched.
Use the netstat command from the TSO command processor to check whether the
connection between the end-to-end server and the distributed domain
manager is established.
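For reference, the host name and port number checks above refer to definitions like the following in the topology member. This is only a sketch with made-up names and values (the domain, workstation, host name, and port number are hypothetical); use the DOMREC and CPUREC keywords as documented for your own installation:

DOMREC  DOMAIN(DOMAINA)
        DOMMNGR(FDMA)
        DOMPARENT(MASTERDM)
CPUREC  CPUNAME(FDMA)
        CPUNODE(fdma.yourcompany.com)
        CPUTCPIP(31182)
        CPUDOMAIN(DOMAINA)

In such a sketch, the CPUTCPIP value must match the netman port number used throughout the end-to-end network.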


6.2.5 Symphony renew option


In normal situations, the Symphony file is automatically generated during
daily plan processing. Examples of error situations that prevent the Symphony
file from being built include the following:
There is a non-valid job definition in the script library.
The workstation definitions are incorrect.
An incorrect Windows user name or password is specified.

Sometimes, during regular operation, you might also need to renew
the Symphony file, such as when:
You make changes to the script library or to the definitions of the TOPOLOGY
statement.
You add or change information in the current plan, such as workstation
definitions.

If a problem occurs during the building of the Symphony file, the Symphony file
will not be built. To create it, you must perform a Symphony renew after
correcting the errors. Look at the following logs to check whether the Symphony
file has been created successfully:
The return code of the batch job may be the first place to look. The messages
produced by the job should include the messages shown in Example 6-16.
Example 6-16 Symphony creation message in batch output
EQQ3101I 0000048 JOBS ADDED TO THE SYMPHONY FILE FROM THE CURRENT Plan
EQQ3087I THE SYMPHONY FILE HAS BEEN SUCCESSFULLY CREATED

Note: In a certain situation, when the script library contained syntax errors indicated
by the message EQQZ086E, the Symphony renew job ended with return code 0.
We have already addressed this issue.
The end-to-end server log must contain the messages indicating that the input
translator finished waiting for batchman (Example 6-17).
Example 6-17 Creation messages in end-to-end server log
EQQPT30I Starting switching Symphony
EQQPT22I Input Translator thread stopped until new Symphony will be available
EQQPT31I Symphony successfully switched
EQQPT20I Input Translator waiting for Batchman is started
EQQPT21I Input Translator finished waiting for Batchman
EQQPT23I Input Translator thread is running

The log of the Tivoli Workload Scheduler for z/OS engine should contain the
messages shown in Example 6-18 on page 363.


Example 6-18 Creation message in the z/OS engine log


EQQN111I A NEW SYMPHONY FILE HAS BEEN CREATED
EQQW090I THE NEW SYMPHONY FILE HAS BEEN SUCCESSFULLY SWITCHED

If the Symphony file is not created successfully, you need to investigate the logs
for any error messages. Look in the Tivoli Workload Scheduler for z/OS V8R1
Messages and Codes, SH19-4548, for an explanation and the system programmer
response, or contact your Tivoli customer support.
Note: Recovering the current plan from an error situation may also imply
recovering the Symphony file. If the Symphony file is not up-to-date with the
current plan, submit the Symphony renew or the daily plan batch job.

See also the Disaster recovery planning chapter in Tivoli Workload Scheduler for
z/OS V8R1 Customization and Tuning, SH19-4544.

6.2.6 UNIX System Services diagnostics


UNIX System Services provides powerful display commands. They can help you
verify whether the required processes are still running, and they return valuable
information about the HFS datasets.
Example 1: To display z/OS UNIX System Services process information for all
address spaces owned by our end-to-end server user TWSRES1, enter the following
in SDSF:
/DISPLAY OMVS,U=TWSRES1

The output is as shown in Example 6-19.


Example 6-19 End-to-end process
BPXO040I 13.14.12 DISPLAY OMVS 009
OMVS     000F ACTIVE     OMVS=(3A)
USER     JOBNAME  ASID        PID       PPID STATE   START     CT_SECS
TWSRES1  TWSCJSC  0050      65614          1 1FI---  10.16.33     1.93
  LATCHWAITPID=         0 CMD=EQQPHTOP
TWSRES1  TWSCTP   004F      65615          1 MW----  10.16.33   102.46
  LATCHWAITPID=         0 CMD=EQQPHTOP
TWSRES1  TWSCTP   004F      65616      65615 1S----  10.16.33   102.46
  LATCHWAITPID=         0 CMD=/usr/lpp/TWS/TWS810/bin/starter /usr/lpp
TWSRES1  TWSCTP   004F      65617      65616 HS----  10.16.33   102.46
  LATCHWAITPID=         0 CMD=/usr/lpp/TWS/TWS810/bin/translator SYSZD
TWSRES1  TWSCTP   004F      65618      65616 1F----  10.16.34   102.46
  LATCHWAITPID=         0 CMD=/usr/lpp/TWS/TWS810/bin/netman -port 312
TWSRES1  TWSC     004E   50397284          1 1RI---  13.54.26    28.82
  LATCHWAITPID=         0 CMD=EQQTTTOP
TWSRES1  TWSCTP   004F   33620089      65618 1F----  12.34.18   102.46
  LATCHWAITPID=         0 CMD=EQQTTTOP
TWSRES1  TWSCTP   004F   33620089      65618 1F----  12.34.18   102.46
  LATCHWAITPID=         0 CMD=/usr/lpp/TWS/TWS810/bin/writer -- 2001
TWSRES1  TWSCTP   004F   50397311      65618 1F----  12.34.11   102.46
  LATCHWAITPID=         0 CMD=/usr/lpp/TWS/TWS810/bin/mailman -parm 32
TWSRES1  TWSRES1  001F   33620135          1 MRI---  12.00.09     1.83
  LATCHWAITPID=         0 CMD=EXEC
TWSRES1  TWSRES1  001F   67174569   33620135 1CI---  12.01.40     1.83
  LATCHWAITPID=         0 CMD=sh -L
TWSRES1  TWSCTP   004F   33620140   50397311 1F----  12.34.12   102.46
  LATCHWAITPID=         0 CMD=/usr/lpp/TWS/TWS810/bin/batchman -parm

The display output shows that job name TWSCTP (our end-to-end server started
task), running in address space ID X'4F', started several end-to-end processes,
such as translator, netman, writer, mailman, and batchman, at a specific time. Every
process has a process ID (PID) assigned. Looking at the parent process ID (PPID),
you can see that mailman and writer have been spawned by the netman process.
Example 2: To display detailed file system information about currently mounted
file systems, enter:
/DISPLAY OMVS,FILE

The output is as shown in Example 6-20.


Example 6-20 File system status
BPXO045I 13.39.04 DISPLAY OMVS 234
OMVS     000F ACTIVE     OMVS=(3A)
TYPENAME   DEVICE ----------STATUS----------- MODE
AUTOMNT        27 ACTIVE                      RDWR
  NAME=*AMD/u
  PATH=/u
  OWNER=SC65    AUTOMOVE=Y CLIENT=N
TFS           238 ACTIVE                      RDWR
  NAME=/SC63/TMP
  PATH=/SC63/tmp
  MOUNT PARM=-s 500
  OWNER=SC63    AUTOMOVE=N CLIENT=N
TFS           223 ACTIVE                      RDWR
  NAME=/SC65/TMP
  PATH=/SC65/tmp
  MOUNT PARM=-s 500
  OWNER=SC65    AUTOMOVE=N CLIENT=Y
HFS            65 ACTIVE                      RDWR
  NAME=WTSCPLX2.SC65.SYSTEM.HFS
  PATH=/SC65
  OWNER=SC65    AUTOMOVE=N CLIENT=Y
HFS            50 ACTIVE                      RDWR
  NAME=OMVS.TWS810.TWSCTP.HFS
  PATH=/tws/twsctpwrk
  OWNER=SC65    AUTOMOVE=Y CLIENT=Y

The display command returns the name of the HFS dataset, mountpoint, owner,
and mode once it has been mounted.
Example 3: To display information about current system-wide parmlib limits,
enter:
/ DISPLAY OMVS,L

The output is as shown in Example 6-21.


Example 6-21 Displaying limits
BPXO051I 14.52.40 DISPLAY OMVS 073
OMVS     000F ACTIVE     OMVS=(3A)
SYSTEM WIDE LIMITS:         LIMMSG=NONE
                    CURRENT  HIGHWATER     SYSTEM
                      USAGE      USAGE      LIMIT
MAXPROCSYS               47         61        300
MAXUIDS                   1          2         50
MAXPTYS                   0          1        256
MAXMMAPAREA               0          0       4096
MAXSHAREPAGES             0          0   32768000
IPCMSGNIDS               10         10      20000
IPCSEMNIDS                0          0      20000
IPCSHMNIDS                0          0      20000
IPCSHMSPAGES              0          0    2621440
IPCMSGQBYTES            ---        108     262144
IPCMSGQMNUM             ---          9      10000
IPCSHMMPAGES            ---          0      25600
SHRLIBRGNSIZE             0          0   67108864
SHRLIBMAXPAGES            0          0       4096

Displaying the limits is useful to determine whether you have hit any system-specific
limitations.
For more display examples, you can also look in z/OS V1R3.0 MVS System
Commands, SA22-7627.
Example 4: Verify the current utilization of the working directory.


This is not possible via the DISPLAY OMVS command. Instead, you can use the
df -k shell command (see the example after the File System Attributes panel below)
or, in the ishell, type u in the command line in front of the working directory.
Example 6-22 File system utilization
File System Attributes

File system name:
  OMVS.TWS810.TWSCTP.HFS
Mount point:
  /tws/twsctpwrk

Status . . . . . . . : Available
File system type . . : HFS
Mount mode . . . . . : R/W
Device number  . . . : 50
Type number  . . . . : 1
DD name  . . . . . . :
Block size . . . . . : 4096
Total blocks . . . . : 256008
Available blocks . . : 251383
Blocks in use  . . . : 4595
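Alternatively, from the UNIX System Services shell (omvs), the standard df command reports the same utilization. The mount point below is the one from our installation; substitute your own working directory:

df -k /tws/twsctpwrk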

6.2.7 TCP/IP server


If an internal trace is necessary to pin down problems in the TCP/IP server, you
can insert diagnose flags in the server parameter file in order to trace certain
types of data. Perform this task only if a Tivoli support engineer asks you to,
because it can generate a huge trace output. The trace information is put
into the EQQMLOG of the server. To activate the trace, define the diagnose
statements in the following way:
DIAGNOSE SERVERFLA(.............)
DIAGNOSE TPLGYFLAGS(...........)

SERVERFLA produces general trace output, and TPLGYFLAGS produces output
related to end-to-end processing.
Ask your IBM support center for the right settings related to your problem.

6.2.8 Tivoli Workload Scheduler for z/OS connector


The wopcconn utility provides a connector trace that can be useful for debugging
by the Tivoli support engineers. The trace can either be activated at the
command line with wopcconn or in interactive mode.


To control the current settings, issue the following command:


wopcconn -view -e engine_name | -o object_id

In order to set a trace level, issue the following command:


wopcconn -set -e engine_name | -o object_id

[-t trace_level] [-l trace_length]

Or, use interactive mode, wopcconn, as follows:


1. Log on to the managed node where the connector is installed and type in
wopcconn.
Example 6-23 Connector display
******** OPC Connector manage program ********
Main menu
1. Create new OPC Connector
2. Manage an existing OPC Connector
0. Exit

2. Select option 2.
Example 6-24 Instance selection
******** OPC Connector manage program ********
Select instance menu
1. OPC
0. Exit

3. Choose the instance where you need to activate the trace level.
Example 6-25 Changing connector attributes
******** OPC Connector manage program ********
Manage menu
Name          : OPC
Object id     : 1929225022.1.1771#OPC::Engine#
Managed node  : itso7
Status        : Active
OPC version   : 2.3.0

1. Stop    the OPC Connector
2. Start   the OPC Connector
3. Restart the OPC Connector
4. View/Change attributes
5. Remove instance
0. Exit

4. Select option 4 to change the connector attributes.


5. Change the trace level with option 6 to your value.
Example 6-26 Setting the new trace level
******** OPC Connector manage program ********
View/Change attributes menu
Name          : OPC
Object id     : 1929225022.1.1771#OPC::Engine#
Managed node  : itso7
Status        : Active
OPC version   : 2.3.0

2. Name                    : OPC
3. IP Address or Hostname  : 9.39.62.19
4. IP portnumber           : 3111
5. Trace Length            : 524288
6. Trace Level             : 1

0. Undo changes
1. Commit changes
6. Commit your changes and restart the connector to activate it.


The default length of the trace is 512 KB for each instance. When the length is
exceeded, the trace wraps around. If an unexpected error occurs, copy the trace
as soon as possible. You will find the trace in the file
$DBDIR/OPC/engine_name.log.
The trace levels listed in Table 6-6 are available for a Tivoli Workload Scheduler
for z/OS connector instance.
Table 6-6 Trace levels

Level   Trace data
        Errors
        Called methods; Connections; IDs
        Filters; PIF requests; Numbers of elements returned in queries; Flow of the connection details
        Main functions in/out values; Service functions in/out
        Main functions in/out values; Frequently called functions in/out

6.2.9 Job Scheduling Console


In case of errors when using the Job Scheduling Console, check the message
identifier to understand the source of the error:

GJS0xxx   JSC error
GJSQxxx   Tivoli Workload Scheduler for z/OS specific error
GJSWxxx   Tivoli Workload Scheduler specific error

Read the error details for explanations and suggested actions. The console and
error logs can be found in the \Jsconsole\dat\.tmeconsole directory. Consult the
trace file; remember that error tracing is active by default. Also, check the file
bin\java\error.log for untraced errors.

JSC error examples


Here we show you common JSC error situations and the possible causes.
Error description: Error while logging into the TMR host.


Figure 6-4 JSC log on error message

Possible cause: Check login/password, TCP/IP connection to the Tivoli host, and
Tivoli authorizations.
Error description: Connector error message

Figure 6-5 Connector link failure

Possible cause: The connector is probably not correctly installed or instances
have not been created. Check the connector installation. Check also if the JSC
installation is corrupted or incomplete.
Error description: One of the connector instances is not enabled because of
errors during loading.


Figure 6-6 Disabled instance

Possible cause: Check if the version of that engine is supported by the JSC. For
a complete list of the compatible versions, see Chapter 3, Planning, installation,
and configuration of the TWS 8.1 on page 91.
Error description: There is a problem with Tivoli Framework (for example oserv
down, marshal error).

Figure 6-7 Framework failure

Possible cause: Check Framework status. Compare also the JSC and connector
trace files.


Error description: There is a problem with TCP/IP communication between the
connector and the TWS for z/OS host.

Figure 6-8 Allocation error message

Possible cause: Check network integrity and TWS for z/OS connector
parameters.

6.2.10 Trace for the Job Scheduling Console


The trace utility provides a strong mechanism for finding and diagnosing JSC
problems. A log file is produced that records all JSC activities. Tracing can
work at different detail levels to filter the data of interest, and the trace
output can be customized.
Open the console file using:
...\bin\java\console.bat on Windows
.../bin/java/SUNconsole.sh on Solaris
.../bin/java/AIXconsole.sh on AIX

Find the section where the user can customize variable values. Locate the two
variables, TRACELEVEL and TRACEDATA. They should be set to 0 by default.
Example 6-27 Console.bat file
REM ---------- Section to be customized --------
REM change the following lines to adjust trace settings
set TRACELEVEL=0
set TRACEDATA=0
REM ------ End of section to be customized -------

Change the value of the variable TRACELEVEL to activate the control flow trace
at different levels. Change the value of the variable TRACEDATA to activate the
data flow trace at different levels. Acceptable values range from 0 to 3.
TRACELEVEL also allows the value -1, which completely disables the trace, as
shown in Table 6-7.
Table 6-7 Tracelevel values

Trace level value   Trace types
-1                  No trace at all (to be used only in particular cases)
 0                  Only errors and warnings are recorded
 1                  Errors, warnings, and info/debug lines are recorded
 2                  Errors, warnings, and methods entry/exit are recorded
 3                  Errors, warnings, info/debug lines, and method entry/exit are recorded

Table 6-8 lists trace data values and their corresponding trace types.
Table 6-8 Tracedata values

Tracedata value   Trace type
0                 No data is traced
1                 Data structures from/to the connector are traced
2                 The internal values of the JSC beans are recorded
3                 Both data structures and bean internal values are recorded

Note: Tracing can adversely affect the performance of the JSC. Use higher values
of TRACELEVEL and TRACEDATA only when necessary.

Trace files can become huge. Use advanced customization to optimize disk
space allocation. Move or delete log files related to previous executions.
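For example, to raise both the control flow trace and the data flow trace one level above the default of 0, the customized section of the console file shown in Example 6-27 would be changed as follows (a sketch; choose the levels from Table 6-7 and Table 6-8 that match the detail you need):

REM ---------- Section to be customized --------
REM change the following lines to adjust trace settings
set TRACELEVEL=1
set TRACEDATA=1
REM ------ End of section to be customized -------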


6.3 TWS troubleshooting checklist


Below are some common TWS problems and how to solve them.

6.3.1 FTAs not linking to the master


If netman is not running on the FTA:

If netman has not been started, start it from the command line with the
StartUp command. Note that this will start only netman, not any other
TWS processes.
If netman was started as root and not as the TWS user, bring TWS down
normally via the conman command line on the master or FTA, and then start
it up again as the TWS user:
unlink <FTA name>
stop <FTA name>; wait
shut <FTA name>; wait
StartUp

If netman could not create a standard list directory:

If the file system is full, open some space in the file system.

If a file with the same name as the directory already exists, delete the
file with the same name as the directory. The directory name would be
in a yyyy.mm.dd format.

If the directory or netman standard list is owned by root and not TWS, change the ownership of the standard list directory from the UNIX command line with the command chown <TWS user> yyyy.mm.dd. Note that this must be done as the root user.

If the host file or DNS has changed, check whether:

The host file on the FTA or master has been changed.
The DNS entry for the FTA or master has been changed.
The hostname on the FTA or master has been changed.
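A quick way to verify name resolution on the master (and, pointing back at the master, on the FTA) is sketched below; <FTA hostname> is a placeholder, and the host file, DNS, and workstation definitions must all agree:
grep -i <FTA hostname> /etc/hosts     # look for a stale host file entry
nslookup <FTA hostname>               # check what DNS currently returns
ping <FTA hostname>                   # confirm that the resolved address answers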
If the communication processes are hung:

Mailman process down or hung on FTA:

TWS was not brought down properly.


Try to always bring TWS down properly via the conman command line
on the master or FTA using the following commands:
unlink <FTA name>
stop <FTA name>; wait
shut <FTA name>; wait


If the mailman read corrupted data, try to bring TWS down normally. If
this is not successful, kill the mailman process with the following steps.
UNIX:
Run ps -ef | grep maestro to find the process ID.
Run kill -9 <process id> to kill the mailman process.
Windows (commands in TWShome\unsupported directory):
Run listproc to find the process ID.
Run killproc <process id> to kill the mailman process.

If batchman is hung:
Try to bring TWS down normally. If not successful, kill the mailman
process as explained in the previous bullet.

If the writer process for an FTA is down or hung on the master, the possible causes are:

The FTA was not properly unlinked from the master.
The writer read corrupted data.
Multiple writers are running for the same FTA.
Use ps -ef | grep maestro to check that the writer processes are
running. If there is more than one process for the same FTA, perform the
following steps:

Shut down TWS normally.

Check the processes for multiple writers again.

If there are multiple writers, kill them.

If the netman process is hung:

If multiple netman processes are running, try shutting down netman properly first. If this is not successful, kill netman using the following commands:
UNIX:
Use ps -ef | grep maestro to find the running processes.
Issue kill -9 <process id> to kill the netman process.
Windows (commands in the unsupported directory):
Use listproc to find the process ID.
Run killproc <process id> to kill the netman process.
Hung port/socket; FIN_WAIT2 on netman port.

Use netstat -a | grep <netman port> on both UNIX and NT systems to check whether netman is listening.

Look for FIN_WAIT2 for the TWS port.

If FIN_WAIT2 does not time out (approximately 10 minutes), reboot.
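As a sketch, assuming netman listens on the default port 31111 (check the nm port entry in the localopts file on that workstation if you have changed it):
# A socket in LISTEN state means netman is accepting connections;
# sockets stuck in FIN_WAIT2 on this port indicate a hung link.
netstat -a | grep 31111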


Network problems to look for outside of TWS include:

The router is down in a WAN environment.


The switch or network hub is down on an FTA segment.
There has been a power outage.
There are physical defects in the network card/wiring.

6.3.2 Batchman not up or will not stay up (batchman down)


If the message file has reached 1 MB:

Check the size of the message files (files whose names end with .msg) in the TWS home directory and the pobox subdirectory. The minimum size of these files is 48 bytes.
Use the evtsize command to expand a message file temporarily, and then try to start TWS:
evtsize <filename> <new size in bytes>

For example:
evtsize Mailbox.msg 2000000

If necessary, remove the message file (only after expanding it with evtsize and attempting to start TWS has failed).
Message files contain important messages being sent between TWS processes and between TWS agents. Remove a message file only as a last resort; all data in the message file will be lost.
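A sketch of the check and the temporary expansion, run as the TWS user from the TWS home directory (the file name and new size are examples only):
ls -l *.msg pobox/*.msg        # look for any message file approaching the 1 MB limit
evtsize Mailbox.msg 2000000    # expand it, then try to start TWS again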
Jobman not owned by root.

If jobman (in the bin subdirectory of the TWS home directory) is not owned by root, correct this problem by logging in as root and running the following command: chown root jobman.
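A sketch of the check and the fix, run as root (the installation path is our example environment's TWS home directory):
ls -l /tivoli/TWS/D/tws-d/bin/jobman      # the owner should be root
chown root /tivoli/TWS/D/tws-d/bin/jobman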
Read bad record in Symphony file.

This can happen for the following reasons:


There is a byte order problem between UNIX and Intel platforms (requires
patches).
Initialization process interrupted or failed.
Cannot create Jobtable.
Corrupt data in Jobtable.


Message file corruption.

This can happen for the following reasons:


Bad data
File system full
Power outage
CPU hardware crash

6.3.3 Jobs not running


Jobs not running on NT

If the required NT authorizations for the TWS user are not in place, grant the TWS user the following rights:

Act as part of the operating system.

Increase quotas.

Log on as a batch job.

Log on as a service.

Log on locally.

Replace a process level token.

Valid NT or Domain user for FTA not in the TWS user database
Add the TWS user for FTA in the TWS user database. Do not fill in the
CPU Name field if the TWS user is a domain account.
Password for NT user has been changed.
Do one of the following:

Change the password on NT to match the one in the TWS user database.
Change the password in the TWS user database to a new password.

Note that changes to the TWS user database will not take effect until Jnextday. If the user definition existed previously, you can use the altpass command to change the password for the production day.
Jobs not running on NT or UNIX

Batchman down.
See Batchman not up or will not stay up (batchman down) on page 376.


Limit set to 0.
To change the limit to 10 via the conman command line:
For a single FTA:
lc <FTA name>;10

For all FTAs:
lc @;10;noask

Fence set above the limit.

To change fence to 10 via the conman command line:
For all FTAs:
f @;10;noask

If dependencies are not met, it could be for the following reasons:

Start time not reached yet, or UNTIL time has passed.

OPENS file not present yet.

Job FOLLOW not complete.

6.3.4 Jnextday is hung or still in EXEC state


Stageman cannot get exclusive access to Symphony.
Batchman and/or mailman was not stopped before running Jnextday from the
command line.
Jnextday was not able to stop all FTAs.

A network segment is down and the master cannot reach all FTAs.
One or more of the FTAs has crashed.
Netman is not running on all FTAs.

Jnextday was not able to start or initialize all FTAs.

The master or an FTA was manually started before Jnextday completed stageman. Reissue a link from the master to the FTA (see the sketch after this list).
The master was not able to start batchman after stageman completed. See the section Batchman not up or will not stay up (batchman down) on page 376.
The master was not able to link to the FTA. See FTAs not linking to the master on page 374.
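To reissue the link from the master, a minimal conman sketch (the workstation name is a placeholder):
conman "link <FTA name>;noask"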


6.3.5 Jnextday in ABEND state


Jnextday not completing compiler processes.

This may be due to bad or missing data in schedule or job. You can perform
the following actions:
Check for missing calendars.
Check for missing resources.
Check for missing parameters.
Jnextday not completing the stageman process.

This may be due to bad or missing data in the CARRYFORWARD schedule. You can perform the following actions:
Run show jobs or show schedules to find the bad schedule (a conman sketch follows this list).
Add the missing data and rerun Jnextday.
Cancel the schedule and rerun Jnextday.

Jnextday not completing the logman process.

The reason may be one of the following:
Negative runtime error (requires a patch).
The master was manually started before logman completed.
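A sketch of the conman commands for locating the bad schedule (ss and sj are the short forms of showschedules and showjobs):
conman "ss @#@"     # show all schedules on all workstations
conman "sj @#@"     # show all jobs on all workstations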

6.3.6 FTA still not linked after Jnextday


Symphony file corruption

Corruption during transfer of the Sinfonia file
Byte order problem between UNIX and Intel
Apply patches that correct the byte order problem. Recent versions of TWS are unaffected by this problem.

Symphony file, but no new run number, date, or time stamp

You can perform the following actions:
Try to link the FTA. See Section 6.3.1, FTAs not linking to the master on page 374.
Remove the Symphony and message files (on the FTA only) and link from the master again.

Run number, Symphony file, but no date or time stamp

You can perform the following actions:
Try to link the FTA. See Section 6.3.1, FTAs not linking to the master on page 374.
Remove the Symphony and message files (on the FTA only) and link from the master again.

6.4 A brief introduction to the TWS 8.1 tracing facility


Tivoli Workload Scheduler 8.1 introduces a new tracing facility called AutoTrace.
Before the incorporation of AutoTrace into TWS, support personnel used a
comparatively primitive tracing method: If a problem appeared that could not be
explained without a trace, the program developers would compile a debug
version of the affected program and the customer would put this debug program
in place until the problem happened again.
With the new AutoTrace tracing facility, potentially valuable trace data are
collected all the time. These data can be exported to a binary file and sent to IBM
support in the event of a program failure.
Normally, it should not be necessary to capture a trace. If IBM support requests a
trace, you can capture a trace using the command shown in Example 6-28.
Example 6-28 Capturing a trace using the atctl snap command
atctl snap 1 snapfile.at

The file snapfile.at is the captured trace file. This file can be named anything you
like, but it is customary to end the filename with .at to identify it as an AutoTrace
snap file. The file is not readable without special tools and library files.


Chapter 7.

Tivoli NetView integration


This chapter explains the integration of Tivoli Workload Scheduler on UNIX with Tivoli NetView on the AIX platform. We first give a brief overview of Tivoli NetView, and then describe how you can integrate Tivoli Workload Scheduler with Tivoli NetView and use the integration to monitor your scheduling environment. TWS runs on UNIX and NT; however, the integration is provided only on the UNIX platform.
We will cover the following in this chapter:
Tivoli NetView basics
What the Tivoli Workload Scheduler and NetView integration provides
How the integration works
Integration architecture and our environment
Installing and customizing the integration software
Operation


7.1 Tivoli NetView


Tivoli NetView (or NetView) is one of Tivoli's availability tools; it emphasizes monitoring network resources. In order to better explain the functions of NetView, we will first cover the basic network management terminology.

7.1.1 Network management ABCs


Here is basic terminology that is commonly used in network management jargon:

Manager    A manager is a software application that monitors and controls a network. A manager collects, processes, stores, and displays network data. Tivoli NetView is a manager.

Agent      An agent is a software application that is responsible for reporting on and maintaining information related to one or more devices on the network. An agent gives network information to a manager.

Protocol   A protocol is a set of rules that determine how individual network devices communicate with each other.

SNMP       Simple Network Management Protocol (SNMP) enables managers to ask agents to retrieve and change information about network devices. NetView uses SNMP to manage TCP/IP networks. Because SNMP has low network overhead, it is an inexpensive way to gather network statistics. It is also ideal for real-time monitoring.

MIB        A management information base (MIB) is a collection of many pieces of information, called MIB objects, that are located on the network device. The MIB objects can be accessed and sometimes changed by the agent at the manager's request. This is how the Tivoli NetView program manages network devices.

7.1.2 NetView as an SNMP manager


NetView uses SNMP to gather information about the resources that it manages
and serves as an SNMP manager. The following are the basic functions of
NetView:
Discovering the devices in the network.
Collecting and storing data about network conditions.


Issuing and responding to notifications about network conditions.
Issuing commands that cause actions at network nodes.

In addition to these basic functions, NetView has extended its management flexibility to provide both network management and systems management as it becomes more tightly integrated with Tivoli's other management applications, such as Tivoli Workload Scheduler.
NetView keeps the dynamically updated topology and customization information
in its database. It uses maps to maintain customizable views of the database.
Each network object in the map is seen as a symbol. In other words, a symbol is
a visual representation of a network object. NetView has two user interfaces: the NetView GUI on the server and the Java-based Web browser interface. The topology maps are the same in both.
For more information about NetView, a good place to start is NetView for UNIX User's Guide for Beginners V7.1, SC31-8891.

7.2 What the integration provides


The integration between NetView and Tivoli Workload Scheduler provides
systems management of Tivoli Workload Scheduler as an application from the
NetView console. It is treated like any other application with information being
collected by SNMP agents on the Tivoli Workload Scheduler workstations and
forwarded to NetView. NetView uses this information to update Tivoli Workload
Scheduler-specific submaps.
The integration (which we will call Tivoli Workload Scheduler/NetView or the
integration software, hereafter) provides SNMP managers and agents for Tivoli
Workload Scheduler as well as the Tivoli Workload Scheduler MIB. Tivoli
Workload Scheduler uses the NetView databases to store workstation and
process information, which is used to generate maps. It can send and receive
SNMP events and can signal events using SNMP traps. The Tivoli Workload
Scheduler MIB can be viewed from the NetView console.


The integration includes the following:

A number of Tivoli Workload Scheduler-specific submaps have been defined to illustrate where Tivoli Workload Scheduler is running on the network and its status. This shows the Tivoli Workload Scheduler network topology. There are two sets of submaps:
The submaps showing the status of the Tivoli Workload Scheduler managed jobs or job streams on a Tivoli Workload Scheduler network. The maps reflect job or job stream status changes.
The submaps showing the status of the Tivoli Workload Scheduler processes on a given workstation.

The provided Tivoli Workload Scheduler management events include:
Notification of stopped or failed Tivoli Workload Scheduler processes
Notification of abended or failed jobs
Notification of a broken link between the Tivoli Workload Scheduler workstations

Some Tivoli Workload Scheduler tasks can be launched from the NetView console. The menu actions include:
Start Tivoli Workload Scheduler processes.
Stop Tivoli Workload Scheduler processes.
Run the console manager (conman) on any workstation in the Tivoli Workload Scheduler network.
These components of the integration are explained in the following sections.


7.3 How the integration works


The integration is accomplished by the Tivoli Workload Scheduler/NetView
software, which is delivered and installed as part of Tivoli Workload Scheduler on
UNIX.
Important: Tivoli Workload Scheduler/NetView is only provided for NetView
running on AIX. NetView/NT is not supported for integration. Also, at least one
Tivoli Workload Scheduler workstation running on UNIX is required for the
integration.

Tivoli Workload Scheduler/NetView consists of manager and agent software. The manager runs on the NetView server nodes, and the agent runs on the managed nodes. The manager (mdemon) polls its agents (magent) periodically to obtain information about scheduler processing. If the information returned during a poll is different from that of the preceding poll, the color of a corresponding symbol is changed to indicate a state change, for example, from green (normal) to red (critical) or yellow (marginal).
The agents also generate SNMP traps to inform the manager of asynchronous
events, such as job abends, stuck jobs, and restarted scheduler processes.
Although polling and traps are functionally independent, the information that
accompanies a trap can be correlated with symbol state changes. If, for example,
a scheduled job abends, the symbol for the workstation changes color, and a job
abend trap is logged in the NetView event log. By scanning the log, you can
quickly isolate the problem and take appropriate action.
The muser process runs commands issued by a NetView user, and updates the
users map. An muser is started for each NetView user whose map has the Tivoli
Workload Scheduler/NetView application activated.


7.4 Integration architecture and our environment


The environment we used for testing the integration between NetView and Tivoli
Workload Scheduler is shown in Figure 7-1.
Figure 7-1 Our distributed TWS environment. The figure shows four machines on a token-ring network: tividc11 (AIX 4.3) with NetView for AIX 6.2, the Tivoli Workload Scheduler/NetView manager software (mdemon), a Tivoli managed node, and a fault tolerant agent; tokyo (NT SP 5), a fault tolerant agent; eastham (AIX 4.3), the backup domain manager with the Tivoli Workload Scheduler/NetView agent (magent) and a Tivoli managed node; and yarmouth (AIX 4.3), the TWS master with the Tivoli Workload Scheduler/NetView agent (magent) and the TMR server.

Note that Tivoli Workload Scheduler/NetView manager software (mdemon)


should be installed on the NetView for AIX machine, which is also called the
management node in Tivoli Workload Scheduler/NetView terminology.


Note: In NetView 4.1 and later, the management node functions can be
distributed across a server and one or more clients. In that case you have to
install Tivoli Workload Scheduler and magent on the client NetView machines
as well. We will not use NetView clients in our scenario, but if you need more
information on how to install and configure magent on NetView clients, please
refer to Chapter 9, Integration with Other Products of the Tivoli Workload
Scheduler 8.1 Reference Guide, SH19-4556.

In the same way, the Tivoli Workload Scheduler/NetView agent (magent) software should be installed on at least one Tivoli Workload Scheduler UNIX workstation. This is called a managed node (not to be confused with the Tivoli managed node) in Tivoli Workload Scheduler/NetView terminology. The Tivoli Workload Scheduler/NetView agent should be installed on either the master or the backup domain manager, that is, a fault tolerant agent with fullstatus on and resolvedep on in its definition.
Important: You cannot install magent on TWS workstations other than UNIX, but other workstations can be managed by NetView in the same way as UNIX workstations, provided that at least one magent is installed in the Tivoli Workload Scheduler network.

A group of nodes that are configured as a Tivoli Workload Scheduler network, and whose job scheduling status is managed from a NetView management node, is called a Managed Workload Scheduler Network. There must be at least one managed node in a Managed Workload Scheduler Network.
In our environment we opted to install one management node (tividc11) and two
managed nodes (yarmouth, which is also the master, and eastham, the backup
domain manager).


Installation tips:
It is a good policy to configure both TWS master and TWS backup domain
managers as managed nodes. If you are using only one managed node,
which is not the Tivoli Workload Scheduler master, you have to set
OPTIONS=MASTER in the BmEvents configuration file on the managed
node. For more explanation of the BmEvents configuration file refer to
Chapter 9, Integration with Other Products, of the Tivoli Workload
Scheduler 8.1 Reference Guide, SH19-4556.
Although we have included them, management nodes (server and client) need not necessarily be members of Managed Workload Scheduler Networks; making them members, however, simplifies your configuration.
The best location for a management node is a Tivoli Workload Scheduler
FTA workstation. Choosing a Tivoli Workload Scheduler master for this
purpose is generally not recommended, especially for busy Tivoli Workload
Scheduler networks, due to the additional overhead of the NetView
application (although it does have one benefit of minimizing Tivoli
Workload Scheduler/NetView manager-agent polling traffic).

7.5 Installing and customizing the integration software


The integration software is installed as part of Tivoli Workload Scheduler
installation on UNIX. We will show you how to customize the software to enable
the integration. Installation and customization steps are also covered in detail in
Chapter 9, Integration with Other Products, of the Tivoli Workload Scheduler
8.1 Reference Guide, SH19-4556, so please refer to this manual if you need
more information.
Important: Before performing the following steps for Tivoli Workload Scheduler/NetView, make certain that Tivoli Workload Scheduler for UNIX is properly installed in your environment (master, backup domain manager, and fault tolerant agents).

All installations use the /usr/local/<TWShome>/OV/customize script. This script can be viewed if you want to see what it actually does. This script can also be run with a -noinst option so the changes can be reviewed.
If there are problems with the installation, it can be backed out using the
/usr/local/<TWShome>/OV/decustomize script.


The customize script should be run on both management nodes (the NetView server and clients, if any) and managed nodes (Tivoli Workload Scheduler/UNIX workstations).

7.5.1 Installing mdemon on the NetView Server


The following steps are used to install mdemon on the NetView server, tividc11:
1. Run conman shutdown to stop all Tivoli Workload Scheduler processes.
2. Log in as root.
3. Run the following:
/bin/sh /tivoli/TWS/D/tws-d/OV/customize -uname tws-d

Where /tivoli/TWS/D/tws-d/ is our Tivoli Workload Scheduler home directory and tws-d is our maestro user ID.
Important notes:

1. If you are installing on a NetView client instead of a server, you use the
-client keyword such as:
/bin/sh /tivoli/TWS/D/tws-d/OV/customize -client hostname

Where hostname is the NetView server. By default, the customize script assumes that you are installing on a server.
2. Do not forget to use the -uname parameter if your maestro user ID is different from the default user ID, which is maestro.

3. The output of the script is shown in Example 7-1.


Example 7-1 Output of the script
AWS22280505 modifying files for tws-d in /tivoli/TWS/D/tws-d
AWS22280506 BmEvents.conf.orig already exists
AWS22280506 StartUp.OV.orig already exists
AWS22280506 Maestro.s.c.orig already exists
AWS22280506 decustomize.orig already exists
AWS22280506 H.conf.D.orig already exists
AWS22280506 H.conf.T.orig already exists
AWS22280506 H.timeout.T.orig already exists
AWS22280506 H.traps.T.orig already exists
AWS22280506 Mae.aix.app.orig already exists
AWS22280506 Mae.hp.app.orig already exists
AWS22280506 Mae.mgmt.lrf.orig already exists
AWS22280506 Mae.actions.orig already exists
AWS22280506 mae.traps.hp.orig already exists


AWS22280506 maestroEvents.orig already exists


AWS22280507 Copying the appropriate application registration file.
AWS22280508 Copying sample filters
AWS22280509 Copying the field registration
AWS22280510 Compiling the field registration
AWS22280511 Making the help directories
AWS22280512 Copying the dialog help texts
AWS22280513 Copying the task help texts
AWS22280514 Copying the function help texts
AWS22280515 Copying the local registration
ovaddobj - Static Registration Utility
Successful Completion
AWS22280517 Adding OVW path to TWS .profile
AWS22280519 tws-d .profile has replaced the old file.
The previous version is in /tivoli/TWS/D/tws-d/.profile.old.
AWS22280520 Adding traps to trapd.conf
Trap uTtrapReset has been added.
Trap uTtrapProcessReset has been added.
Trap uTtrapProcessGone has been added.
Trap uTtrapProcessAbend has been added.
Trap uTtrapXagentConnLost has been added.
Trap uTtrapJobAbend has been added.
Trap uTtrapJobFailed has been added.
Trap uTtrapJobLaunch has been added.
Trap uTtrapJobDone has been added.
Trap uTtrapJobUntil has been added.
Trap uTtrapJobCancel has been added.
Trap uTtrapJobCant has been added.
Trap uTtrapSchedAbend has been added.
Trap uTtrapSchedStuck has been added.
Trap uTtrapSchedStart has been added.
Trap uTtrapSchedDone has been added.
Trap uTtrapSchedUntil has been added.
Trap uTtrapGlobalPrompt has been added.
Trap uTtrapSchedPrompt has been added.
Trap uTtrapJobPrompt has been added.
Trap uTtrapJobRecovPrompt has been added.
Trap uTtrapLinkDropped has been added.
Trap uTtrapLinkBroken has been added.
Trap uTtrapDmMgrSwitch has been added.
AWS22280524 IMPORTANT
AWS22280525 Be sure to add an appropriate entry to each TWS cpu's rhost
This could be '"<manager> <user>"' if a user other than tws will be
running the management station. Or '"<manager>"' if the TWS user
AWS22280528 will be managing from the management station.
AWS22280529 see documentation on remsh and .rhosts for further information.
AWS22280530 will install the agent reporting to tividc11
AWS22280531 changing ownership and permissions on agent
AWS22280532 Processing the agent files


AWS22280534 /etc/snmpd.peers has been replaced the old file is /etc/snmpd.peers.old
AWS22280536 /etc/snmpd.conf has been replaced the old file is
/etc/snmpd.conf.old
AWS22280537 Recompile the defs for this machine
AWS22280539 /etc/mib.defs has been replaced the old file is /etc/mib.defs.old
AWS22280541 Refreshing snmpd
0513-095 The request for subsystem refresh was completed successfully.
AWS22280542 Setting up the agent files
AWS22280543 Copying the configuration files.
AWS22280545 /tivoli/TWS/D/tws-d/StartUp has been replaced
AWS22280523 changing ownership and permissions on programs
AWS22280546 You should now restart NetView/Openview and TWS

4. Execute the startup script to restart the TWS netman process using:
/tivoli/TWS/D/tws-d/StartUp

5. mdemon is started with NetView as part of the ovstart sequence, so restart the NetView daemons as follows:
/usr/OV/bin/ovstop
/usr/OV/bin/ovstart

Or you can use the smitty panels: Smitty -> Commands Applications and
Services -> Tivoli NetView -> Control -> Stop all running daemons, and then
Restart all stopped daemons.
Tip: The run options of mdemon are included in the /usr/OV/lrf/Mae.mgmt.lrf
file.

7.5.2 Installing magent on managed nodes


The following steps are used to install magent on eastham.
1. Run conman shutdown to stop all Tivoli Workload Scheduler processes.
2. Log in as root.
3. Run the following:
/bin/sh /tivoli/TWS/D/tws-d/OV/customize -uname tws-d -manager tividc11

Where /tivoli/TWS/D/tws-d/ is our Tivoli Workload Scheduler home directory, tws-d is our TWS user ID, and tividc11 is the NetView server.


4. Execute the startup script to restart the TWS netman process and magent
using:
/tivoli/TWS/D/tws-d/StartUp

You can repeat the same steps on yarmouth.

7.5.3 Customizing the integration software


After running the scripts on the management and managed nodes, the following
additional steps are necessary:
Configure user access.
Define the submaps in NetView.
Load the Tivoli Workload Scheduler MIB.
Restart Tivoli Workload Scheduler on all systems.

Configuring user access


There are two elements to the configuration:
Configure the remote access for the user.
Define the user to Tivoli Workload Scheduler.

When you define the user that will manage Tivoli Workload Scheduler from NetView, the user must have access to run commands from the NetView server. This is done by adding the name of the NetView server to the user's $HOME/.rhosts file on each managed node. In our case the user was root, so the /.rhosts file was edited to add the entry:
tividc11.itsc.austin.ibm.com root

The second step is to add this user to the Tivoli Workload Scheduler security file. Since we are using the root user, we do not need to perform this step. (The root user and the user that performed the Tivoli Workload Scheduler installation, also known as the maestro user, are included in the security file by default.) But if you are using any other user, you have to modify the security file to add the required user using the makesec command, as sketched below. Please refer to the Tivoli Workload Scheduler 8.1 Planning and Installation Guide, SH19-4555, for more information on how to add a user to the security file.
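A minimal sketch of that procedure, run as the TWS user on the managed node (the temporary file name and the operator user name nvop are examples):
dumpsec > /tmp/sec.txt     # dump the current security definitions to an editable file
vi /tmp/sec.txt            # add the NetView operator (for example, nvop) to the appropriate user stanza
makesec /tmp/sec.txt       # compile and activate the updated security file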

Defining Tivoli Workload Scheduler maps in NetView


Do the following steps to define the Tivoli Workload Scheduler map in NetView:
On the NetView server:
1. Run netview to launch the NetView console.


2. Select File -> New Map.


3. Give a name to the new map, such as TWS in the Name field.
4. Select Maestro-Unison Software Inc. (c) in the Configurable Applications
field, as in Figure 7-2.

Figure 7-2 Selecting Maestro - Unison Software Inc. (c)

5. Click Configure For This Map... to specify options for the TWS map.
6. When the Configuration dialog appears, the Enable Maestro for this map
option must be set to True. All other options, the commands run under the
Tivoli Workload Scheduler menu items, are left as default (Figure 7-3 on
page 394).


Figure 7-3 Configuring the TWS map

7. To complete the addition of the new map, click OK. The new map, TWS, is
shown in Figure 7-4 on page 395.


Figure 7-4 NetView Root map with TWS icon added

Note: The TWS map (the icon labeled Tivoli Systems,Inc. (c)) will first be seen in blue. This means that NetView has not yet received any information from the Tivoli Workload Scheduler application. After the polling cycle it will turn green.

8. Next you need to add the Tivoli Workload Scheduler management information
base (MIB). MIB is a formal description of a set of network objects that can be
managed using the Simple Network Management Protocol, which is the main
protocol that NetView uses to get information from its agents.
Although this is not a mandatory step, it is highly recommended; if you add the MIB, you will be able to use NetView's MIB browser application.
Select Options -> Load/Unload MIBs -> SNMP from the NetView GUI.
9. Since the Workload Scheduler MIB is not loaded by default (Figure 7-5 on
page 396), press the Load button to load it.


Figure 7-5 Loaded MIBs by default

10.When the Load MIB From File dialog opens, find and select the Maestro MIB
(/usr/OV/snmp_mibs/Maestro.mib) from the drop-down list and press OK
(Figure 7-6).

Figure 7-6 Loading Maestro MIB file

11.Click Close to close the Load/Unload MIBs dialog box.


Restart Tivoli Workload Scheduler on all systems


Once the changes have been made, Tivoli Workload Scheduler has to be
restarted. To do this, log on to the Tivoli Workload Scheduler master, yarmouth in
our case, and issue the command conman start @. The @ symbol is the Tivoli
Workload Scheduler wildcard for all nodes.

7.6 Tivoli Workload Scheduler/NetView operation


Now that we have installed and customized Tivoli Workload Scheduler/NetView, we will cover what you can do with this integration in terms of managing your scheduling environment.

7.6.1 Discovery of the Tivoli Workload Scheduler Network


The first phase of managing the Tivoli Workload Scheduler network is
discovering what nodes are running Tivoli Workload Scheduler and adding
submaps and icons to the NetView display. With a single Tivoli Workload
Scheduler network and the NetView server node as a Tivoli Workload Scheduler
node, as we have, this is done automatically. With multiple Tivoli Workload
Scheduler networks, or where the NetView server is not in the Managed
Workload Scheduler Network, there are additional setup steps outlined in
Chapter 9, Integration with Other Products, of the Tivoli Workload Scheduler
8.1 Reference Guide, SH19-4556.
The discovery adds submaps and icons for:
The Tivoli Workload Scheduler network. These are contained under a new
root map icon.
The Tivoli Workload Scheduler processes running on each node. These are
contained under the node icons along with the interfaces and other node
components.

7.6.2 Tivoli Workload Scheduler maps


Now let us go through some of the TWS maps created in the NetView Root map.
1. Launch the NetView console, if it is not opened already. The Root map is
shown in Figure 7-7 on page 398. After the installation of integration software
this map has an icon labelled Tivoli Systems,Inc. (c). This application symbol
represents all discovered Tivoli Workload Scheduler networks. The color of
the icon represents the aggregate status of all the Tivoli Workload Scheduler
networks.


Figure 7-7 NetView Root map

2. Choose the TWS symbol by double clicking. There is only one Tivoli
Workload Scheduler network in this application submap, labelled
YARMOUTH-D:Maestro (Figure 7-8 on page 399). If there were multiple Tivoli
Workload Scheduler networks there would be multiple icons, each labelled
with the specific Tivoli Workload Scheduler network name. The icon color
represents the status of all workstations and links that comprise the Tivoli
Workload Scheduler network.


Figure 7-8 TWS main map

3. Opening the Tivoli Workload Scheduler network icon shows all workstations
and links in the network, as in Figure 7-9 on page 400.


Figure 7-9 TWS map showing all workstations

Figure 7-9 is equivalent to the diagram of our Tivoli Workload Scheduler network
(Figure 7-1 on page 386). There are three Tivoli Workload Scheduler nodes, with
the center one yarmouth being the master node. If there were more nodes, they
would be arranged in a star pattern around the master node.
Each node symbol represents the job scheduling on that workstation. The color
represents the status of the job scheduling. If a trap is received indicating a
change in status of a job scheduling component, the icon color will be changed.
The links represent the Tivoli Workload Scheduler workstation links with the color
representing the status of the workstation link.
In addition to monitoring Tivoli Workload Scheduler status, you can run several
tasks from this window.

7.6.3 Menu actions


To use Workload Scheduler/NetView menu actions, select Workload Scheduler
from the Tools menu, as seen in Figure 7-10 on page 401.


Figure 7-10 Launching Maestro menu from Tools

These actions are also available from the object context menu by clicking a
symbol with mouse button three (or right mouse button, if you are using a
two-button mouse).
The menu actions are:

View
Open a child submap for a Workload Scheduler/NetView symbol. Choosing View
after selecting a workstation symbol on the Workload Scheduler network submap
opens the monitored processes submap. Choosing View after selecting a
workstation symbol on the IP node submap returns to the Workload Scheduler
network submap.

Master conman
Run the conman program on the Workload Scheduler master. Running the
program on the master permits you to execute all conman commands (except
shutdown) for any workstation in the Workload Scheduler network.


Acknowledge
Acknowledge the status of selected Workload Scheduler/NetView symbols. When acknowledged, the status of a symbol returns to normal. It is not necessary to acknowledge critical or marginal status for a monitored process symbol; it will return to normal when the monitored process itself is running again. Critical or marginal status of a Workload Scheduler workstation symbol should be acknowledged either before or after you have taken some action to remedy the problem; it will not return to normal otherwise.

Conman
Run the conman program on the selected Workload Scheduler workstations.
Running the program on a workstation other than the master permits you to
execute all conman commands on that workstation only.

Start
Issue a conman start command for the selected workstations. By default, the
command for this action is:
remsh %H %P/bin/conman 'start %c'

Down (stop)
Issue a conman stop command for the selected workstations. By default, the
command for this action is:
remsh %H %P/bin/conman 'stop %c'

Start up
Execute the Workload Scheduler StartUp script on the selected workstations. By
default, the command for this action is:
remsh %h %P/StartUp

Rediscover
Locate new agents and new Workload Scheduler objects, and update all
Workload Scheduler/NetView submaps.
Note: You need to run the re-discover function each time you change the
Workload Scheduler workstation configuration.

As an example, we will stop the batchman process on tividc11 from the NetView console:
1. Select tividc11 and then right click the icon. You will see the Maestro menu
among the side menu options.


2. Select Maestro and then select Down (stop) from the menu, as shown in
Figure 7-11. This will stop the batchman process on tividc11.

Figure 7-11 Stopping conman

3. We can verify that the batchman process has been stopped by checking the
status on tividc11 as in Example 7-2.
Example 7-2 Check status on tividc11
tividc11:/tivoli/TWS/D/tws-d>conman status
TWS for UNIX (AIX)/CONMAN 8.1 (1.36.1.3)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
Corp.
Installed for group 'TWS-Distributed'.
Locale LANG set to "En_US"
Schedule (Exp) 02/08/02 (#10) on TIVIDC11-D. Batchman down. Limit: 10,
Fence:
0, Audit Level: 1
TWS for UNIX (AIX)/CONMAN 8.1 (1.36.1.3)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001


US Government User Restricted Rights


Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
Corp.
Schedule (Exp) 02/08/02 (#10) on TIVIDC11-D. Batchman down. Limit: 10,
Fence:
0, Audit Level: 1

There are no additional submaps to TWS maps. If you want to see the Tivoli
Workload Scheduler process information, you need to browse the node symbol
under the main IP submap, which we are going to explain next.

7.6.4 Discovery of Maestro process information


To find the Maestro process information, drill down to the submap representing
the node. In NetView you can use the Locate function to find a node on the IP
map. To discover the Tivoli Workload Scheduler processes in tividc11 perform
the following steps:
1. In the NetView console select Locate -> Objects -> By Label as shown in
Figure 7-12.

Figure 7-12 Locate function

2. Type tividc11 in the Symbol Label field and then press Apply for NetView to
locate the node for you.


3. Select the node and press Open, as in Figure 7-13.

Figure 7-13 Locating tividc11 node

This will open up the map that contains the tividc11 node, as shown in
Figure 7-14 on page 406.


Figure 7-14 tividc11 on the map

4. Select tividc11 to open up the processes on this node, as shown in Figure 7-15 on page 407.


Figure 7-15 tividc11 map. The callouts in the figure identify the physical interfaces of the node; the aggregate status of all monitored TWS processes on the node (clicking this icon shows the status of monitored TWS processes such as mailman and batchman); and the TWS workstation icon (you can issue selected TWS commands from this icon's pull-down menu, and, for example, if a job abends you can see a status change in this icon).

This submap shows four icons:

The Tivoli Workload Scheduler workstation (the YARMOUTH-D-TIVIDC11-D:Maestro icon). The label is in the format TWS master-FTA name:Maestro.
The Tivoli Workload Scheduler software (the Maestro icon). It represents all Tivoli Workload Scheduler processes running on the node. The color indicates the aggregate status of all the monitored processes on a Tivoli Workload Scheduler workstation.
Two different physical interfaces for this node.

Note that the Maestro icon is yellow, which means that there is a problem. Opening the Tivoli Workload Scheduler software icon shows all Tivoli Workload Scheduler processes on the node, as in Figure 7-16 on page 408.


Figure 7-16 Tivoli Workload Scheduler processes

We see that the BATCHMAN, JOBMAN, and MAILMAN icons are yellow, which means that they are stopped. Remember that we previously issued a stop command from the NetView console.
Tip: Yellow means a marginal problem or warning. Red means a severe
problem. For example, if we were to kill the mailman process manually using
the kill command (simulating an abend), the mailman process icon would have
turned red.

The processes can be restarted by:


Running the conman start command on the node (or from the Maestro
master).
Using the pull-down menus that were added with the NetView integration
software installation.
Many of these items are grayed-out when selecting this menu for the process
icon. This is because many options are not relevant to a Maestro process.
One of the grayed-out items is the start command, which we need to use.
5. To access the start command we need to go up a level. The easiest way to do
this is to click the grey area on the map (not on icons) with the third mouse
button and select Show Parent. (See Figure 7-17 on page 409.)


Figure 7-17 Show Parent dialog

This will open up the parent map for you.


6. Select the YARMOUTH-D-TIVIDC11-D:Maestro icon with the third mouse
button and select Start from the menu, as shown in Figure 7-18 on page 410.


Figure 7-18 Issuing Start command

7. After the polling cycle (which is 60 seconds by default) all Tivoli Workload
Scheduler processes should be started on tividc11. When you click the
Maestro icon again you should see all processes started and all colors
should turn green (Figure 7-19 on page 411).


Tip: The polling rate is defined in the file /usr/OV/lrf/Mae.mgmt.lrf on the management node. To change the rate, perform the following steps (a sketch of the commands follows this list):

1. Edit the file to add the -timeout option to the mdemon command line.
2. Delete the old registration by running the ovdelobj command.
3. Register the manager by running the ovaddobj command and supplying the name of the lrf file.
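A sketch of those steps on the management node; the 120-second value is an example, and the exact -timeout argument format should be checked against the mdemon documentation for your level of the integration software:
vi /usr/OV/lrf/Mae.mgmt.lrf                      # add -timeout 120 to the mdemon command line
/usr/OV/bin/ovdelobj /usr/OV/lrf/Mae.mgmt.lrf    # delete the old registration
/usr/OV/bin/ovaddobj /usr/OV/lrf/Mae.mgmt.lrf    # register the manager with the updated lrf
/usr/OV/bin/ovstop; /usr/OV/bin/ovstart          # restart the NetView daemons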

Figure 7-19 All processes up

7.6.5 Monitoring job schedules from NetView


In addition to giving the status information of Tivoli Workload Scheduler
processes, Tivoli Workload Scheduler/NetView can also inform you about the
abended or failed jobs and broken links between the Tivoli Workload Scheduler
workstations.
In order to test this function, we created a scenario where two jobs (ABENDJOB
and ABENDJOB2) were abended on tividc11. These jobs can be seen from the
Job Scheduling Console in abend states (Figure 7-20 on page 412).


Figure 7-20 Job abended on tividc11

When we checked the NetView console we saw that the tividc11 icon in the TWS
map turned red (Figure 7-21 on page 413) and this also has turned the Tivoli
Systems Inc icon (seen at the left side) yellow.


Figure 7-21 Status changed on tividc11

Note that when the schedules complete successfully after initial failures, the
NetView icons are not restored to a normal green state. The display looks the
same as Figure 7-21. This is because the Tivoli Workload Scheduler/NetView
does not change the symbol state with the incoming success traps by default.
You have two options to turn it back to the green state:
You can configure Tivoli Workload Scheduler/NetView to send events that are not sent by default (mostly successful-completion events or warning messages; a list is given in Table 7-2 on page 418). In this case, be aware of the extra network traffic that you introduce by doing so. A configuration sketch follows this list.
The other method is to use NetView's Acknowledge function. To do this:
a. Select tividc11 and click it with the third mouse button.
b. Select Options and then Acknowledge.
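As a sketch of the first option, you could append the job done (104) and schedule done (154) events to the EVENTS lists in both configuration files on the managed node, then restart TWS and the agent. The lists below are the documented defaults from the sample files plus those two events; adjust them to your own needs:
# In <TWShome>/OV/BmEvents.conf (events reported by batchman):
EVENTS=51 101 102 104 105 111 151 152 154 155 201 202 203 204 251 252 301
# In <TWShome>/OV/MAgent.conf (events converted to SNMP traps):
EVENTS=1 52 53 54 101 102 104 105 111 151 152 154 155 201 202 203 204 252 301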

7.6.6 Tivoli Workload Scheduler/NetView configuration files


On each managed node, the selection of events and how they are reported is
controlled by setting variables in two configuration files:
The BmEvents configuration file controls the reporting of job scheduling
events by the mailman and batchman production processes. These events


are passed on to the agent, which may convert them to SNMP traps,
depending on the settings in its configuration file. The BmEvents.conf file is located in the /<TWShome>/OV directory.
Example 7-3 shows the BmEvents.conf file on yarmouth, our TWS master.
Example 7-3 BmEvents.conf file
# cat BmEvents.conf
# @(#) $Header: /usr/local/SRC_CLEAR/maestro/JSS/maestro/NetView/RCS/BmEvents.conf,v 1.6 1996/12/16 18:19:50 ee viola_thunder $
# This file contains the configuration information for the BmEvents module.
#
# This module will determine how and what batchman notifies other processes
# of events that have occurred.
#
# The lines in this file can contain the following:
# OPTIONS=MASTER|OFF
#   MASTER This tells batchman to act as the master of the network and
#          information on all cpus are returned by this module.
#   OFF This tells batchman to not report any events.
#   default on the master cpu is to report all job scheduling events
#   for all cpus on the Maestro network (MASTER); default on other cpus
#   is to report events for this cpu only.
#
# LOGGING=ALL|KEY
#   ALL This tells batchman all the valid event numbers are reported.
#   KEY This tells batchman the key-flag filter is enabled
#   default is ALL for all the cpus
#
# EVENTS = <n> ...
#   <n> is a valid event number (see Maestro.mib traps for the valid event
#   numbers and the contents of each event.
#   default is 51 101 102 105 111 151 152 155 201 202 203 204 251 252 301
#
# These can be followed with upto 5 different notification paths in the
# following format:
# PIPE=<filename> This is used for communicating with a Unison Fifo file.
#   The format of this is a 4 byte message len followed by the message.
# FILE=<filename> This is for appending to the end of a regular file.
#   This file will be truncated on each new processing day.
# MSG=<filename%-.msg> This is used for communicating with a Unison Message
#   file.
# To communcate with Unison snmp agent, the following is required:
# PIPE=/tivoli/TWS/D/tws-d/MAGENT.P
PIPE=/tivoli/TWS/D/tws-d/MAGENT.P
# To communcate with OperationsCenter using the demonstration log file
# encapsulation the following is required:
# FILE=/tivoli/TWS/D/tws-d/event.log

The MAgent configuration file controls reporting by the Workload Scheduler/NetView agent, magent. Events selected in this file are turned into SNMP traps, which are passed to NetView by the Workload Scheduler/NetView manager, mdemon, on the management node. The traps can also be processed by other network management systems. The MAgent.conf file is located in the /<TWShome>/OV directory.

Example 7-4 shows the MAgent.conf file on yarmouth, our TWS master.
Example 7-4 MAgent.conf file
# cat MAgent.conf
# @(#) $Header: /usr/local/SRC_CLEAR/maestro/JSS/maestro/NetView/RCS/MAgent.conf,v 1.6 1996/12/16 18:20:50 ee viola_thunder $
# This file contains the configuration information for the snmp agent.
# OPTIONS=MASTER|OFF
#   MASTER This tells the agent to act as the master of the network and
#          information on all cpus are returned by this module.
#   OFF This tells the agent not report any events, i.e., no traps are sent.
#   default is MASTER on the master cpu, and OFF on other cpus
#
# EVENTS = <n> ...
#   <n> is a valid event number (see Maestro.mib traps for the valid event
#   numbers and the contents of each event.
#   default is 1 52 53 54 101 102 105 111 151 152 155 201 202 203 204 252 301
#
# These can be followed with changes to the critical processes in the
# following format:
# +<name> [<pidfilename>] If this is a maestro process then the pidfilename
#   is optional.
# +SENDMAIL /etc/sendmail.pid
# +SYSLOG /etc/syslogd.pid
# -<name> To eliminate a maestro process from the critical processes.
#   -@:WRITER to eliminate all writers
#   -SERVER@ to eliminate all servers
#   -MAGENT to eliminate the snmp agent.

7.6.7 Tivoli Workload Scheduler/NetView events


The Tivoli Workload Scheduler/NetView events are listed in Table 7-1 on
page 417 (traps enabled by default) and in Table 7-2 on page 418 (traps not
enabled by default). Trap numbers (1-53) are related to the status of critical processes that are monitored by the Workload Scheduler/NetView agents,
including the agents themselves (event 1). The remaining events (101-252)
indicate the status of the job scheduling activity.
All of the listed events can result in SNMP traps generated by the Workload
Scheduler/NetView agents.


Tip: Whether or not traps are generated is controlled by two different settings:

EVENTS parameter in the BmEvents configuration file:
It lists the events to be reported. If the EVENTS parameter is omitted, the events reported by default are: 51 101 102 105 151 152 155 201 202 203 204 251 252.

EVENTS parameter in the MAgent configuration file:
It lists the events to be sent as SNMP traps. With the exception of events 1, 52, and 53, traps will not be generated unless the corresponding events are turned on in the BmEvents configuration file.

For example, the following settings will enable event numbers 1, 52, 53, 54, and 101 to be sent to NetView:
EVENTS parameter in the MAgent configuration file:
EVENTS=1,52,53,54,101
EVENTS parameter in the BmEvents configuration file:
EVENTS=54,101

The Additional Actions column in Table 7-1 lists the actions available to the
operator for each event. The actions can be initiated by selecting Additional
Actions from the Options menu, then selecting an action from the Additional
Actions window.
Note: The operator must have the appropriate Tivoli Workload Scheduler
security access to perform the chosen action.

Table 7-1 gives the Tivoli Workload Scheduler traps that are enabled by default.
Table 7-1 TWS traps enabled by default
Trap #  Name                  Description                                                         Additional actions
1       uTtrapReset           The magent process was restarted.                                   N/A
52      uTtrapProcessGone     A monitored process is no longer present.                           N/A
53      uTtrapProcessAbend    A monitored process abended.                                        N/A
54      uTtrapXagentConnLost  The connection between a host and xagent has been lost.             N/A
101     uTtrapJobAbend        A scheduled job abended.                                            Show job, rerun job, cancel job
102     uTtrapJobFailed       A scheduled job could not be launched.                              Show job, rerun job, cancel job
105     uTtrapJobUntil        A scheduled job's UNTIL time has passed; it will not be launched.   Show job, rerun job, cancel job
151     uTtrapSchedAbend      A schedule ABENDed.                                                 Show schedule, cancel schedule
152     uTtrapSchedStuck      A schedule is in the STUCK state.                                   Show schedule, cancel schedule
155     uTtrapSchedUntil      A schedule's UNTIL time has passed; it will not be launched.        Show schedule, cancel schedule
201     uTtrapGlobalPrompt    A global prompt has been issued.                                    Reply
202     uTtrapSchedPrompt     A schedule prompt has been issued.                                  Reply
203     uTtrapJobPrompt       A job prompt has been issued.                                       Reply
204     uTtrapJobRerunPrompt  A job rerun prompt has been issued.                                 Reply
252     uTtrapLinkBroken      The link to a workstation has closed due to an error.               Link

Table 7-2 TWS traps not enabled by default

Trap #  Name                Description                                             Additional actions
51      uTtrapProcessReset  A monitored process was restarted.                      N/A
103     uTtrapJobLaunch     A scheduled job was launched successfully.              Show job, rerun job, cancel job
104     uTtrapJobDone       A scheduled job finished in a state other than ABEND.   Show schedule, cancel schedule
153     uTtrapSchedStart    A schedule has started execution.                       Show schedule, cancel schedule
154     uTtrapSchedDone     A schedule has finished in a state other than ABEND.    Show schedule, cancel schedule
251     uTtrapLinkDropped   The link to a workstation has closed.                   Link

Next we will cover how to configure Tivoli Workload Scheduler to send events to
any SNMP manager.

7.6.8 Configuring TWS to send traps to any SNMP manager


So far in this chapter we have mainly covered how to integrate Tivoli Workload Scheduler with NetView, but Tivoli Workload Scheduler can send traps to any SNMP manager, not only NetView. The added value of NetView is that TWS workstations and processes are displayed graphically in the NetView console and that you can issue commands on Tivoli Workload Scheduler workstations from the NetView console. If you are using an SNMP manager application other than NetView, you can still configure Tivoli Workload Scheduler to send SNMP traps to that management application.
To configure Tivoli Workload Scheduler to send SNMP traps, perform the
following steps on the master workstation:
1. Edit the BmEvents.conf file in the /<TWShome>/OV directory to include the
desired SNMP traps.
2. Determine whether to use a named pipe to send the information to the
application or whether the application will parse a text file.
a. For the pipes option, uncomment the Pipe option and name the file
accordingly if a specific name is required. This will provide the output in a
Unison FIFO file whose format is a four-byte message length followed by
the message. For example:
PIPE=/<TWShome>/MAGENT.P

b. For the file option, uncomment the File option and name the file
accordingly. The output will be an ascii file that will have the trap number in


the first field with varying numbers of fields following the trap number,
depending upon the trap. For example:
FILE=/<TWShome>/event.log

Tip: The difference between using the File option and the PIPE option is that the File option creates a flat ASCII file, which you can then parse to get the SNMP traps to your application. The Pipe option, on the other hand, creates a FIFO file with the traps in it, and the management application has to read the file as a pipe. We found that the File option is easier to implement.

3. After making the configuration changes above, run the customize script
contained in the /<TWShome>/OV directory to enable Maestro to trap the
events. This will add a line to start magent on the master pointing to the peer
(which should be the hostname of the workstation on which the SNMP
management node is running or the hostname of the SNMP manager).
/bin/sh customize -manager hostname-of-host-receiving-traps

For example, in order to send Tivoli Workload Scheduler events to a TEC server named tecitso, you can run the customize script as follows:
/bin/sh customize -manager tecitso

Note: In addition to running this script, you need to install and configure the
TEC SNMP adapter on the Tivoli Workload Scheduler master to send SNMP
events directly to TEC. Refer to TEC manuals for more information on
customizing the TEC SNMP adapter.

The other way to send Tivoli Workload Scheduler events to TEC is to use the TWS/TEC adapter that is shipped with the Tivoli Workload Scheduler Plus module.
Also, if you are using NetView and TEC together, you can send TWS events
from NetView to TEC, as well. This configuration does not require the TEC
SNMP adapter to be installed.
4. To enable the magent daemon, stop Maestro with the following commands:
conman stop;wait and then conman shut;wait

5. Run Startup to restart netman and magent.


6. Issue a link command as follows:
conman link @;noask
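Putting steps 4 through 6 together, the restart sequence on the master might
look like the following shell session, run as the TWS user from the TWS home
directory. This is only a sketch, not literal output from our environment; the
quotes are there so that the shell does not interpret the semicolons.

conman "stop;wait"      # stop the production processes
conman "shut;wait"      # stop netman
./Startup               # restart netman, which now also starts magent
conman "link @;noask"   # relink all workstations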


Chapter 8.

The future of TWS


In this chapter, we look beyond Tivoli Workload Scheduler 8.1 and summarize the
enhancements that are planned for future versions. TWS 8.1 is an important
milestone in the integration of the OPC- and Maestro-based scheduling engines.
This chapter contains the following:
The future of end-to-end scheduling
Planned enhancements to Tivoli Workload Scheduler for z/OS
Planned enhancements to Tivoli Workload Scheduler
Planned enhancements to the Job Scheduling Console


8.1 The future of end-to-end scheduling


Tivoli's end-to-end scheduling strategy is to move TWS for z/OS and TWS
products closer to each other until the products are combined in one Tivoli
Scheduler product.
TWS developers plan to eliminate the functional differences between the two
products by implementing the missing functions in each product. Eventually,
TWS and TWS for z/OS will evolve into a unified end-to-end scheduling solution.
In future releases, all functions available in the TWS for z/OS ISPF panels and
in the TWS composer and conman command line programs will be available in the
Job Scheduling Console.
Important: All features that we mention in this chapter are short-term (less
than one year) planned enhancements to TWS. But product plans may
change, so this information is given on an "as is" basis.

8.2 Planned enhancements to Tivoli Workload Scheduler for z/OS
This section describes some of the features planned for future versions of Tivoli
Workload Scheduler for z/OS.
Script Library management

The TWS for z/OS approach in managing distributed workload is a
centralized approach where all the job scripts and executables to run on the
distributed side are stored in the TWS for z/OS OS/390 libraries and they are
sent to the tracker agent at the time they are executed. TWS, instead, is
based on a completely distributed approach, where all the scripts to be
executed must physically reside on the machines where they will be
executed. In the future, TWS will provide a way to manage the TWS script
library centrally (from the OS/390 library, or a central script library on the TWS
master) and to distribute those scripts to the FTAs for execution.
Job recovery

The TWS for z/OS recovery statements are coded directly inside the job script
in the TWS for z/OS job library. When the TWS for z/OS engine submits a
tracker agent job, it strips the recovery statements from the job script and
stores them internally, to be used to recover the job if it fails. The current
implementation of the common agent technology does not allow defining and
performing any automatic recovery action for a job running on an FTA. This
functionality is planned to be provided in the next versions.


JCL Variables management

TWS will support coding of user variables and TWS for z/OS built-in variables
directly inside the scripts that run on FTAs (as it is the case with tracker
agents today).
Removal of one domain manager limitation

Today, the Common Agent Technology architecture imposes a limit of having
only one domain manager directly attached under a TWS for z/OS engine. All
other distributed domain managers must be attached under that single proxy
domain manager. This limitation is planned to be removed in the future
versions.
Migration utility

TWS will be shipped with a migration utility, which will provide an assisted (or
possibly automated) migration path from a TWS for z/OS tracker agent
configuration to a TWS for z/OS-FTA configuration.
Improved memory management

TWS tasks are planned to be moved above the current 16 MB memory line.

8.3 Planned enhancements to Tivoli Workload Scheduler
This section describes some of the features planned for future versions of Tivoli
Workload Scheduler.

8.3.1 Security enhancements


The following sections discuss security enhancements.

A new security mechanism


A new global option will be provided in TWS allowing the central TWS
administrator to decide between two different security models, central or local
security.
Local security

The local security approach is much the same as in previous versions. This
option is maintained for added flexibility and backward compatibility.


Centralized security

With the centralized security method, security definitions can be added,
deleted, or modified in the TWS security file only on the master node. The
makesec utility can be executed only on a master node. The administrator
must then distribute the compiled security file to all the nodes where direct
user access is needed. If the security file is not present on a non-master node
no access is allowed from that system.

Secure Sockets Layer (SSL) support


Future versions of TWS are planned to support Secure Sockets Layer for
authentication and encryption between TWS nodes. SSL is a commonly used
protocol for managing the security of message transmission on the Internet. The
primary goal of the SSL protocol is to provide privacy and reliability between two
communicating applications or components.

Firewall support
TWS will fully support working across firewalls. Currently, all TWS TCP/IP
communications between two nodes use one well-defined and user-configurable
listening port, but the ports used to open connections are allocated
dynamically. This behavior will be changed in future release(s) to allow users
to configure the port used for opening connections.
Also, with the current implementation, several TWS administration commands
require a direct TCP/IP connection between the node where the command is
issued and the destination node where the command is executed. This will be
removed, allowing the routing of all administration commands through the TWS
master -> domain managers -> agents hierarchy.

8.3.2 Serviceability enhancements


TWS will support additional tracing functions such as:
Complete the TKG AutoTrace instrumentation for the missing TWS platforms:
Windows NT/2000 and Linux.
Complete the TKG AutoTrace instrumentation for all TWS components.

8.4 Planned enhancements to Job Scheduling Console


The Job Scheduling Console is a very important step in this transformation
because it provides the same graphical user interface to both TWS and TWS for
z/OS. This section describes some of the features planned for future versions of
the Job Scheduling Console.


Firewall support

The JSC will handle the new workstation definition options related to
command routing.
Common agent technology enhancements

The JSC will handle centrally managed jobs in the database and plan. It will
show rerun jobs and recovery jobs that are added by recovery actions, as well
as recovery prompts that are added as a result of a Recovery prompt action.


Appendix A.

Centrally stored OPC controllers to FTAs
In this appendix we provide a sample script that will help you to export jobs
that are stored centrally on OPC controllers to the FTAs. The credit goes to
Dean Harrison from Business Systems, Consignia, Inc., who provided the script
to the redbook team. As usual, the script is given on an "as is" basis. The
ASCII and EBCDIC versions of this utility can be downloaded from the Redbooks
Web site. Please refer to Appendix D, Additional material on page 461 for
information on how to download the code.


Introduction
This document describes how to use the conversion utility to help export jobs
stored centrally on OPC controllers to the new fault tolerant agents (also
called workstation tolerant agents) that become available in Version 8.1 of
Tivoli Workload Scheduler for z/OS.

Software prerequisites
The following are the software prerequisites of the utility:
Tivoli OPC, any version
ISPF
FTP

JCL and fault tolerant agents


Up to Version 2.3 of OPC it has been possible to schedule on distributed
platforms (for example, UNIX or Windows). This was done by using the OPC
tracker agents; the scripts being run on the remote agents were stored centrally
in the controller's JCL library (EQQJBLIB). However effective OPC tracker
agents were, they suffered from the architectural limitation that, should the
controller or connection to the controller become unavailable, then even if the
remote tracker agent knew what to do next, it could not run any more jobs until
the connection was restored, since it could not access the JCL.
From TWS Version 8.1 the tracker agent technology becomes fault tolerant, such
that it is possible for work to continue without connection to the controller. This is
achieved by utilizing the fault tolerant agent technology pioneered within the
Maestro product, but presents a conversion issue in that the JCL for jobs on the
distributed tracker agents must now be stored on the remote box, not centrally on
the controller.
The TWSXPORT utility will help get the JCL to its intended destinations by
creating FTP scripts, and optionally executing them, to copy the JCL from the
OPC EQQJBLIB to the required directories on the remote boxes. The utility
searches the EQQJBLIB concatenation to find the location of each of the
members you wish to transfer and creates the necessary FTP commands to
perform the transfer.


FTP considerations
The default method TWSXPORT uses is to push the JCL, using FTP, from the
EQQJBLIB libraries to the remote destinations, in a single FTP step. However,
for this to work, all of the remote boxes must have an FTP server/service in effect.
Though this is commonplace with UNIX, it is not the default configuration for
Windows, so pushing may not be possible for all of your remote agents.
It is possible, however, to pull from a remote box that has no FTP server,
using just an FTP client, which usually comes with the TCP/IP stack.
Switching from push to pull will make the following differences to the FTP script:
From the mainframe, the very first open and user statements are not actually
coded with the command keywords, just the values. On subsequent
connections to other boxes open and user must be specified. When run from
a distributed machine it uses an open statement, but does not need a user
statement, only the value, on every connection, including the first.
From the mainframe, files are put. When run from distributed you must get
them.
From the mainframe, DEST, USER, and PASSWORD refer to the remote box.
When run from a distributed box these parameters should point back to the
mainframe.
The directory commands cd and lcd are reversed when running from a
remote box.
From the mainframe, the script can contain transfers to many boxes, one after
the other. From distributed you will typically only perform transfers between
the mainframe and one box at a time.
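To make these rules concrete, a push-mode script generated on the mainframe
might look something like the following fragment. The addresses, user IDs,
passwords, library names, and member names are taken from the examples later
in this appendix and are purely illustrative; note the missing open and user
keywords on the first connection and the reversed roles of cd and lcd compared
to Example A-2.

10.20.30.101
tws
xxxxxxxx
cd /opt/tws/user/bin
lcd 'CSSA.OPC.DEVT.DYNAM.JOBLIB'
put CJSAOLST cjsaolst.sc
close
open 10.20.30.121
user tws
xxxxxxxx
cd /opt/tws/user/bin
lcd 'CSSA.OPC.DEVT.BATCH.JOBLIB'
put SJSATEST sjsatest.sc
close
quit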


Notes: For each tracker that is to be converted to a workstation tolerant agent,
you need to determine the destination hostname or IP address (which can be
found in the OPC ROUTOPTS statement). You need to determine the location
(directory) on the remote box into which the jobs should be transferred. You
need to obtain a valid user ID and password; the user ID must have the
necessary permissions to be able to create jobs within the remote directory.
You need to determine whether it has an FTP server active, and finally you
need to determine the naming convention for the destination box, e.g., should
the names be in lower case, should there be a prefix or suffix added to the
member name.

TWSXPORT can set the case and add a prefix or suffix to the member name.
If more involved changes are needed, you can automate this yourself by
writing your own REXX code to be included in the TWSXPORT REXX at the point
where the comment USER_RENAME appears in the code. Variable MEM_NAME contains
the member name as retrieved from OPC; MEM_DEST should be set to contain the
name to be used on the remote agent.
Some of the behavior of FTP is based upon site installation options. The way
TWSXPORT deals with the open, user, and password commands is based on some
of these assumptions, which are documented above. If these assumptions do not
work for your site, then the code may need to be amended to fit your
installation options.

EQQJBLIB concatenation considerations


The JCL is stored within the libraries attached to the OPC controller. This is
usually the EQQJBLIB concatenation, but it can be contained in alternate DD
statements if you use techniques such as the job-submission/job-library-read
exits (EQQUX001/2).
TWSXPORT needs to determine the dataset name from which each member is
retrieved by OPC; this is necessary to be able to use FTP, which cannot
deal with concatenations.
For straightforward EQQJBLIB concatenations it will do this automatically. If
alternate libraries are used via exits, then it may complicate matters a little.
If the members in the alternate libraries are unique to that library, then they
can be declared to TWSXPORT as if they are part of the concatenation. If not,
then you can either deal with jobs from each alternate library in a separate run of
TWSXPORT or amend the TWSXPORT code to build in your exit logic to set the
dataset name yourself. The dataset name is stored as the second word in stem
variable WSTA_DEF.CURR_WSID.NEW_ITEM, which is set shortly after the
USER_RENAME comment.

Member names
TWSXPORT needs to know which members to export to which remote boxes.
This is done by providing it with a list of member names, or member name masks
(using % and * wildcards as with ISPF/PDF) and a list of OPC workstation names
to which each member should be sent. Each member listed explicitly can be sent
to a list of workstations unique to that entry if necessary. If a mask is used, then
every member identified by the mask will be sent to the same destination(s)
listed against the mask.
You may wish to obtain the list of member names by using such utilities as the
Batch Command Interface Tool, or using other OPC program interface tools or
techniques.

Installing TWSXPORT
TWSXPORT consists of three pieces of REXX code and one JCL procedure.
The REXX code should be copied into a library, and this library name should be
declared in the TWSREXX symbolic parameter in the TWSXPORT JCL procedure.
The procedure should be stored in your procedure library concatenation, or in a
library referenced by JCLLIB, whichever mechanism fits in with your site
procedures. It consists of two steps, the first is a batch ISPF step, the second is
an FTP step. You may wish to amend either of these steps to fit in with your site
standards. The system ISPF libraries are named in symbolic parameters at the
top of the procedure. If your installation does not use these names then you will
need to amend them to match your site installation.

FTP script file


The first step of TWSXPORT will create an FTP script (on the FTPOUT DD
statement). You must create a catalogued dataset for this purpose. This dataset
should be a sequential dataset that has a fixed blocked (FB) record format and a
record length of 80. The size should be enough to cope with the number of
records that will be generated, which can be approximated as follows:
5 lines per remote box + 1 line per library per remote box + 1 line per member
being transmitted per remote box + 1 line for the quit statement
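For example, under this approximation, a run that pushes to two remote boxes
from three libraries, transferring 40 members to each box, would generate
roughly (5 x 2) + (3 x 2) + (40 x 2) + 1 = 97 records, so a modest primary
allocation is normally ample.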


Security implications
To transmit data between the mainframe and remote boxes you are going to
need to provide user IDs and passwords. These will be stored in two places
throughout this process:
The workstation definition file (WSTADEF)
The FTP script (FTPOUT)

You must ensure that these two datasets are protected by security rules for the
duration of their existence to prevent the passwords being inadvertently
disclosed.

Planning
Once you understand all of the implications highlighted in this section you should
produce a plan to determine the number of TWSXPORT jobs you are going to
need by determining which boxes you can push to, and which you must pull from,
and considering how to deal with implications of the exits, if you have used them
to retrieve JCL from alternate DD statements within the OPC controller.

Using the TWSXPORT utility


Having planned the scope of each run of the TWSXPORT job, you then need to
write each job. For each job you need to set the method used (either push or
pull), include the relevant JCL libraries from OPC, define the remote workstations
that are being used, and define the list of members to be transferred along with
their destinations.
In the example below, the WSTADEF file is shown as in-stream data for the
purpose of demonstration; it is not recommended that this be done in practice.
To prevent disclosure of passwords, it is recommended that the WSTADEF file be
a well-protected dataset.
Example: A-1 Example TWSXPORT job
//TWSXPORT EXEC TWSXPORT,
//         METHOD=PUSH,
//         FTPOUT='CASS.TWS.FTP.SCRIPT'
//EXP.JCLLIB01 DD DSN=CSSA.OPC.DEVT.DYNAM.JOBLIB,DISP=SHR
//EXP.JCLLIB02 DD DSN=CSSA.OPC.DEVT.DB2.JOBLIB,DISP=SHR
//EXP.JCLLIB03 DD DSN=CSSA.OPC.DEVT.BATCH.JOBLIB,DISP=SHR
//EXP.JCLLIB04 DD DSN=CSSA.OPC.DEVT.FETCH.JOBLIB,DISP=SHR
//EXP.WSTADEF DD *
* Set Site Defaults
WSTASTART DIR(/opt/tws/user/bin) SUFFIX(.sc)
* Define workstations
WSTASTART WSID(I101) DEST(10.20.30.101) USER(tws) PASSWORD(xxxxxxxx)
WSTASTART WSID(H121) DEST(10.20.30.121) USER(tws) PASSWORD(xxxxxxxx)
//EXP.MEMBERS DD *
C%%101* I101
R%%121* H121
SJS*

This example will create and run an FTP script to transmit members matching the
mask C%%101* to IP address 10.20.30.101, members matching the mask
R%%121* to IP address 10.20.30.121, and members beginning with SJS, to
both destinations.
In all cases the jobs will be stored in the /opt/tws/user/bin directory and have .sc
appended to the end of the file names.

Selecting the METHOD


If the boxes you are transmitting to all have FTP services active, then you can
use the push method (which is the default). This will generate the necessary FTP
parameters to transfer all the selected members and automatically run the FTP
sessions to perform the transfers.
If you need to select the pull method then you may want to create a separate
TWSXPORT job for each destination box, so that it creates a separate FTP script
for each box. Alternatively, you could create a script containing the FTP
instructions for all of the boxes in one single run and then break up the output file
into separate files.
METHOD is specified by a symbolic parameter in the executing JCL.
Assuming no other errors occur, the EXP step will end with a return code of 2 if
pull is selected, ensuring that the FTP step will not run. You can then transmit the
script to the remote box to execute there and pull the files down.
You may, however, wish to consider using your OPC tracker agents to then
execute the FTP script, using OPC to do the hard work of getting the FTP script
down to the remote box for you.
For example, on UNIX you could generate a job that executes FTP and uses the
heredoc technique to bring in the FTP script, simply by adding one line to the top
and bottom of the FTP output.


Example: A-2 Example of UNIX FTP job using the heredoc technique
ftp<<EOF
open 10.20.30.5
xfer
xferpswd
lcd /opt/tws/user/bin
cd 'CSSA.OPC.DEVT.DYNAM.JOBLIB'
get CJSAOLST cjsaolst.sc
cd 'CSSA.OPC.DEVT.BATCH.JOBLIB'
get SJSATEST sjsatest.sc
get SJSAUNAM sjsaunam.sc
close
EOF

Under Windows the technique to convert this into a job is a little more difficult, but
it can be achieved by using Windows commands to echo the FTP script into a file
(on the remote box), which is then executed remotely. Essentially the conversion
of the FTP script into an OPC job is achieved by the following changes:
1. Add a line to the beginning of the script to delete the file, in case it already
exists, that the FTP script is about to be built in on the remote box.
2. Prefix each line of the script with cmd.exe /c echo.
3. Suffix each line of the script with a redirect to the remote file, for example:
c:\temp\ftpscript.txt

4. Add a line to the end of the script to execute FTP using the remotely created
script file.
Example: A-3 Example of using the FTP script on a Windows tracker agent
cmd.exe /c del c:\temp\ftpscript.txt
cmd.exe /c echo open 10.20.30.5.>> c:\temp\ftpscript.txt
cmd.exe /c echo xfer.>> c:\temp\ftpscript.txt
cmd.exe /c echo xferpswd.>> c:\temp\ftpscript.txt
cmd.exe /c echo lcd /tws/user/bin.>> c:\temp\ftpscript.txt
cmd.exe /c echo cd 'CSSA.OPC.DEVT.DYNAM.JOBLIB'.>> c:\temp\ftpscript.txt
cmd.exe /c echo get CJSAOLST cjsaolst.sc.>> c:\temp\ftpscript.txt
cmd.exe /c echo cd 'CSSA.OPC.DEVT.BATCH.JOBLIB'.>> c:\temp\ftpscript.txt
cmd.exe /c echo get SJSATEST sjsatest.sc.>> c:\temp\ftpscript.txt
cmd.exe /c echo get SJSAUNAM sjsaunam.sc.>> c:\temp\ftpscript.txt
cmd.exe /c echo close.>> c:\temp\ftpscript.txt
cmd.exe /c ftp -s:c:\temp\ftpscript.txt


There may well be other techniques for other platforms, but since most of these
techniques are likely to be heavily influenced by site standards, it is not
really possible to write a generic utility to create these jobs that would
perfectly fit all customer permutations. However, the two examples above could
easily be produced by a simple piece of REXX code or an edit macro. This makes
the conversion process for workstations that have to pull a simple schedule of
three steps:
1. Run TWSXPORT to generate the FTP script.
2. Run a process to convert the FTP script into a job appropriate for the platform.
3. Run the job generated by step 2.

Defining the JCL libraries


If all of your JCL lives in the EQQJBLIB concatenation, then simply define each
file in the EQQJBLIB concatenation as a separate override DD statement with
the name of EXP.JCLLIBnn (where nn is a number starting at 01 and
incrementing by 1 for each successive library in the concatenation).
TWSXPORT will create an FTP statement to retrieve each selected member from
the lowest numbered JCLLIBnn DD statement it finds the member in.
If alternate libraries are used then consider where to include these in the
sequence of JCLLIBnn DD statements.
Notes: It is important that the first JCLLIBnn DD statement be coded as
JCLLIB01 and that all other JCLLIBnn DD statements be consecutively
numbered, with no gaps.

Do not code more than one library on a JCLLIBnn DD statement. TWSXPORT
uses ISPF library management facilities to process the directories of the
libraries. It is designed to process the libraries one DD statement at a time. It
does it this way to avoid the restriction of a maximum of four libraries per
concatenation, since it is expected that some sites may have EQQJBLIB
concatenations with more than four libraries.

Defining the workstations


An override DD statement of EXP.WSTADEF should be defined to point to a
dataset containing the WSTASTART statements, which define the destinations to
send the JCL to.


The WSTADEF file can contain all of the remote boxes your controller knows
about, or just the ones that will be used in this particular TWSXPORT job.
However, be aware that if you include all of the workstations in the file, then
any member referred to in the members file that does not have a list of
destination workstations will be sent to every workstation within the WSTADEF
file.
The WSTASTART statements, essentially, tie workstation names within OPC to
the information necessary to perform the transmission of the JCL members to the
remote box on which the workstation exists.
If you create a WSTASTART statement without using the WSID parameter, then
this statement can be used to set the defaults for all subsequent WSTASTART
statements. If you have standardized configuration across many of your boxes
then this should greatly simplify your WSTADEF file. If an entry in the members
file refers to a workstation that is not declared within the WSTADEF file, the
process will fail in the EXP step with a return code of 8.
Comments can be made within the WSTADEF file by using either * or /*; anything
following this on a line will be considered a comment. There is no need to code */
to close a comment, since a comment does not span more than one line.
No continuation characters are needed. A WSTASTART statement can span
many lines.
Keywords can be abbreviated to their shortest unique form if necessary.

WSTASTART statement
Use the WSTASTART statement to signal the start of a new workstation tolerant
agent definition. The syntax of the command is shown in Figure A-1 on page 437.


Figure A-1 Syntax of the WSTASTART control statement
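Since the syntax diagram itself is not reproduced here, the following
plain-text sketch shows the general form of the statement as it can be
reconstructed from the parameter descriptions below; bracketed items are
optional, and this rendering is an approximation rather than the official
syntax diagram.

WSTASTART [WSID(workstation)] [DEST(address)] [USER(userid)]
          [PASSWORD(password)] [DIR(directory)] [PREFIX(prefix)]
          [SUFFIX(suffix)] [CASE(UPPER|LOWER)]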

Parameters
The following sections describe the parameters.

WSID(workstation)
Name of the workstation being defined. If omitted, the WSTASTART statement
will be used to define defaults for all following workstations (until another
WSTASTART statement without WSID alters defaults again).

DEST(address)
Specifies the host name or IP address of the workstation being defined. If
communication to this workstation must use a specific port then supply the
address in the format IP-ADDRESS:PORT-NUMBER.

USER(userid)
Specifies the user ID on the remote box to be used for the transfer.

PASSWORD(password)
Specifies the password on the remote box to be used for the transfer. Care
should be taken to store these parameters in a well-protected dataset to ensure
against inadvertent disclosure of these passwords. The passwords will be output
into the FTPSCRIPT file, which should equally be protected.


DIR(directory)
Specifies the directory into which the transfer will be made. If omitted, the
transfer will take place to the default directory that the user will be presented with
at log in.

PREFIX(prefix)
Specifies a prefix to add to the beginning of every member name as it is
transferred to the remote server.

SUFFIX(suffix)
Specifies a suffix to add to the end of every member name as it is transferred to
the remote server. If you wish the suffix to be added as a file extension then you
should include a period in the suffix. For example SUFFIX(.txt) will transfer a
member called ABCDEF to become abcdef.txt on the remote server.

CASE(UPPER|LOWER)
Determines whether the filename on the remote server will be stored in upper or
lower case. The default is lower.
Note: If you are using the pull method then the USER and PASSWORD
keywords will be for a mainframe user ID that has access to read the JCL
libraries. The DEST keyword will be the hostname or IP address by which the
mainframe is known from the remote box.

Selecting members to transfer


The members to be transferred to the remote boxes are identified by records in
the members file, specified in the executing JCL by an override DD statement of
EXP.MEMBERS.
Each line of this file begins with a member name or member name mask. A
mask can use the same wildcards as supported by ISPF/PDF; that is, % matches
a single character and * matches multiple characters.
Following the member name on the line can be a list of OPC workstation names
to which the member(s) must be sent. If no workstations are coded against a
member or mask, then any members selected by that statement will be sent to
every workstation declared in the WSTADEF file.
Each member will only be selected once, regardless of how many statements it
matches in the members file. This can be used to make some members be
handled slightly differently from the rest. Consider the members file in
Example A-4 on page 439.


Example: A-4 MEMBERS file


ABCMAST X001
ABC* X002
ABCSLAVE X003

In this example, all members beginning with ABC will be sent to workstation X002
except the member called ABCMAST, which will be sent to workstation X001.
Member ABCSLAVE will still be sent to workstation X002, instead of X003 as
you might expect, since it first matches the second line of the members file and
will therefore not be processed by the third line, which in this example is
completely redundant.

TWSXPORT program outline


In this section we will give you some more details about the TWSXPORT
program, which should be especially useful for the support personnel.

Error codes
When you run the EXP step of TWSXPORT, it may not end with return code zero
for one of the following reasons:

Return code 2
You have selected METHOD=PULL. RC=2 means that no errors have occurred,
but the FTP step could not run, since the FTP script needs to be run remotely.

Return code 4
A record in the MEMBERS DD statement has not found a match in any of the
JCLLIBnn libraries.

Return code 8
A serious error has occurred, which could be one of the following:
Unable to read or write to a file.
Unable to initialize a library.
A syntax error in the WSTADEF statements.
A workstation referenced in MEMBERS that does not exist within WSTADEF.


Variables
Table A-1 shows the variables used in the program.
Table A-1 Variables

Variable name     Usage
CURR_CASE         Current entity case
CURR_DEST         Current entity destination
CURR_DIR          Current entity directory
CURR_PASSWORD     Current entity password
CURR_PREFIX       Current entity prefix
CURR_SUFFIX       Current entity suffix
CURR_USER         Current entity user ID
CURR_WSID         Current entity workstation ID
DEST_LIST         List of destinations for the selected member
EQQJBLIB          ISPF DATAID for the current library being processed
FTPOUT.           Contents of FTP script
HOST_CD           Change directory command for host machine
LAST_LIB          Library name of last processed member
LASTRC            Return code save from last command
LC                Lower case alphabet
LIB_COUNT         Number of libraries being processed
LIB_DD            List of JCLLIB DD statements
LIB_DSN           List of JCLLIB dataset names
LIB_LOOP          Loop counter to step through libraries
MEM_COUNT.        Counts the number of matches for each mask
MEM_DEST          Destination member name, containing prefix, suffix, and case corrected, if required
MEM_FOUND         Identifies members already found
MEM_LOOP          Loop counter to step through member masks
MEM_MASK          Member pattern to search library with
MEM_NAME          Member name retrieved from library
MEM_SPEC          Translated member name ready for parsing
MEMBERS           List of member masks input from user file
NEW_ITEM          New item counter for FTP commands
REMOTE_CD         Change directory command for remote machine
SCRIPT_COUNT      Number of entries in FTP script
SCRIPT_LINE       New line of FTP script
UC                Upper case alphabet
WSTA_DEF          Workstation definitions
WSTA_KEYWORD      Workstation definition token keyword
WSTA_LIST         List of defined workstations
WSTA_LOOP         Loop counter to step through workstation definitions
WSTA_SPEC         Translated workstation definition line ready for parsing
WSTA_TOKEN        Individual workstation definition token (keyword and value)
WSTA_VALUE        Workstation definition token value
WSTADEF           List of WSTA definitions input from user file
XMIT_CMD          FTP command to transmit a member
OPEN_CMD          FTP command to open a connection
USER_CMD          FTP command to specify a user ID

REXX members
Table A-2 on page 442 shows the REXX members used in the program.


Table A-2 REXX members

Member name    Usage
BATISPF        Start a batch ISPF environment
EXITRC         Pass the REXX exit code back to the ISPF environment exit
TWSXPORT       TWS export utility

Flow of the program


The program consists of three main sections:
Process the workstation definitions file to produce a reference model to base
the transfers upon later.
Process the list of members against the allocated libraries to produce a list of
what files to move from where to where.
Generate the FTP script.

Processing the workstation definitions


This section is essentially a big parsing loop. It builds up a definition within
CURR_ prefixed variables. Each time a new WSTASTART statement is
encountered the current values are saved away and the defaults are reset.
Values are saved within the WSTA_DEF stem variable. The second qualifier of
the stem is the workstation being defined. There is a special case workstation of
0, which is where the defaults are stored.
To be tolerant of different user coding habits, the ABBREV function is used to
allow keywords to be abbreviated to their shortest unique form, and the
TRANSLATE function is used to cope with users delimiting keywords with
commas instead of spaces.
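As a rough illustration of this technique (this is not the actual TWSXPORT
code; the variable names are only borrowed from Table A-1, and statement is
assumed to hold one line read from the WSTADEF file), a REXX fragment
combining TRANSLATE and ABBREV might look like this:

/* Treat commas like spaces so either delimiter works               */
wsta_spec = STRIP(TRANSLATE(statement, ' ', ','))
DO WHILE wsta_spec <> ''
  PARSE VAR wsta_spec wsta_token wsta_spec        /* next token      */
  wsta_spec = STRIP(wsta_spec)
  PARSE VAR wsta_token wsta_keyword '(' wsta_value ')'
  SELECT                         /* keywords assumed upper case, and */
                                 /* abbreviated to shortest unique form */
    WHEN ABBREV('DEST',     wsta_keyword, 2) THEN curr_dest     = wsta_value
    WHEN ABBREV('DIR',      wsta_keyword, 2) THEN curr_dir      = wsta_value
    WHEN ABBREV('PASSWORD', wsta_keyword, 2) THEN curr_password = wsta_value
    OTHERWISE SAY 'Unknown keyword:' wsta_keyword
  END
END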

Processing the member list


This section starts by using the LISTDSI function to detect what DD statements
are in the JCL, beginning with JCLLIB, and obtains the dataset names.
Once this is known, a loop is started that processes each library in turn, using
ISPF library management functions to initialize and open each library. It then
uses the LMMLIST function to perform a member list for each entry in the
members file. If the members entry does not have a list of workstations to send it
to, it will use the complete list of workstations generated from reading the
WSTADEF file.


For each member found it looks up the dataset name. Then for each workstation
in the destination list it generates the member name that it will be transmitted to
on the remote box and stores all of this information in the WSTA_DEF stem for
later use by the next section.
It then checks how many members were looked up for each entry on the
members file. If any entries did not find any references in the library then it exits
with return code 4.
Again to be tolerant of different user coding habits, the Translate function is used
to cope with users delimiting members and workstations with commas instead of
spaces.

Generating the FTP script


The FTP command options are set depending on which method is selected. If
pull is selected then an exit code of 2 is stored at this point.
There is then a loop through the list of workstations to generate the FTP
commands for each remote box. If no members were selected for a particular
box then no FTP script is created.
At the start of each box it will generate lines for the open, user, and password
elements of initializing a connection to a box. It is worth noting that the behavior
between mainframe and distributed FTP clients is slightly different in this respect,
and that mainframe FTP is slightly anomalous in that if multiple connections are
being performed in a single step then the first open statement is implied (and will
indeed not recognize the word open), but for subsequent connections it is
required. It will also switch to the relevant directory if DIR was included in the
WSTASTART for that workstation.
Then for each library being processed there will also be a relevant mainframe
directory switch before a list of get or put statements are generated for each
member being transferred.
At the end of each remote box it will generate a close statement, and once all
boxes have been processed it will generate a quit statement, before writing the
file and then exiting with the relevant exit code.

TWSXPORT program code


Example A-5 shows the TWSXPORT program code.
Example: A-5 TWSXPORT program code
//TWSXPORT PROC TWSREXX='CASS.TWS.REXX',
//         SYSPROC='ISP.SISPCLIB',
//         ISPMLIB='ISP.SISPMENU',
//         ISPPLIB='ISP.SISPPENU',
//         ISPSLIB='ISP.SISPSENU',
//         ISPTLIB='ISP.SISPTENU',
//         METHOD=PUSH,
//         TMPDIR=',20',
//         TMPORG=PO,
//         @=
//* +------------------------------------------------------------------+
//* | MODULE: TWSXPORT : CASSDH2 - 09FEB02                              |
//* | PURPOSE : EXPORT JCL TO WORKSTATION TOLERANT AGENTS               |
//* |                                                                   |
//* | HISTORY ---------------------------------------------------------|
//* |                                                                   |
//* +------------------------------------------------------------------+
//* +------------------------------------------------------------------+
//* | SYSPROC - NAME OF LIBRARY CONTAINING CLIST/REXX-EXEC CODE         |
//* | ISPMLIB - NAME OF LIBRARY CONTAINING ISPF MESSAGES                |
//* | ISPPLIB - NAME OF LIBRARY CONTAINING ISPF PANELS                  |
//* | ISPSLIB - NAME OF LIBRARY CONTAINING ISPF SKELETONS               |
//* | ISPTLIB - NAME OF LIBRARY CONTAINING ISPF TABLES                  |
//* | METHOD  - WHICH FTP METHOD TO USE (PUT/GET)                       |
//* | TMPDIR  - DIRECTORY BLOCKS FOR ISPF TEMP DATASET                  |
//* | TMPORG  - ORGANISATION FOR ISPF TEMP DATASET                      |
//* | @       - DUMMY SYMBOL TO ENSURE SUBSTITUTION WITHIN QUOTES       |
//* +------------------------------------------------------------------+
//* +------------------------------------------------------------------+
//* | CREATE FTP SCRIPT TO EXPORT FROM OPC                              |
//* +------------------------------------------------------------------+
//EXP      EXEC PGM=IKJEFT01,
//         DYNAMNBR=70,
//         PARM=&@'BATISPF TWSXPORT &METHOD'
//SYSPROC  DD DSN=&TWSREXX,DISP=SHR
//         DD DSN=&SYSPROC,DISP=SHR
//SYSPRINT DD DUMMY
//SYSOUT   DD SYSOUT=*
//ISPMLIB  DD DSN=&ISPMLIB,DISP=SHR
//ISPPLIB  DD DSN=&ISPPLIB,DISP=SHR
//ISPSLIB  DD DSN=&ISPSLIB,DISP=SHR
//ISPTLIB  DD DSN=&ISPTLIB,DISP=SHR
//ISPPROF  DD SPACE=(80,(500,1000,20)),AVGREC=U,
//         DSORG=PO,RECFM=FB,LRECL=80
//ISPCTL1  DD SPACE=(80,(15000,12000&TMPDIR)),AVGREC=U,
//         DSORG=&TMPORG,RECFM=FB,LRECL=80
//ISPCTL2  DD SPACE=(80,(15000,12000&TMPDIR)),AVGREC=U,
//         DSORG=&TMPORG,RECFM=FB,LRECL=80
//ISPLST1  DD SPACE=(121,(500,1000,20)),AVGREC=U,
//         DSORG=PO,RECFM=FB,LRECL=121
//ISPLST2  DD SPACE=(121,(500,1000,20)),AVGREC=U,
//         DSORG=PO,RECFM=FB,LRECL=121
//ISPLOG   DD SYSOUT=*,
//         DSORG=PO,RECFM=FB,LRECL=121
//FTPOUT   DD DSN=&FTPOUT,DISP=OLD
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD DUMMY
//*
//* +------------------------------------------------------------------+
//* | RUN THE FTP PROCESS IF IN PUSH MODE                               |
//* +------------------------------------------------------------------+
//         IF (EXP.RC = 0) THEN
//FTP      EXEC PGM=FTP,REGION=8192K
//OUTPUT   DD SYSOUT=*
//INPUT    DD DSN=&FTPOUT,DISP=SHR
//         ENDIF

Sample job
Example A-6 shows a sample job.
Example: A-6 Sample Job
//*A JOB CARD ACCORDING TO YOUR INSTALLATION STANDARDS IS REQUIRED
//*
//         JCLLIB ORDER=CASS.TWS.JOBLIB
//*
//* +------------------------------------------------------------------+
//* | MODULE : MEMLIST : CASSDH2 - 09FEB02                              |
//* | PURPOSE : SAMPLE EXPORT JOB                                       |
//* |                                                                   |
//* | HISTORY ---------------------------------------------------------|
//* |                                                                   |
//* +------------------------------------------------------------------+
//TWSXPORT EXEC TWSXPORT,
//         METHOD=PUSH,
//         FTPOUT='CASS.TWS.FTP.SCRIPT'
//EXP.JCLLIB01 DD DSN=CSSA.OPC.DEVT.DYNAM.JOBLIB,DISP=SHR
//EXP.JCLLIB02 DD DSN=CSSA.OPC.DEVT.DB2.JOBLIB,DISP=SHR
//EXP.JCLLIB03 DD DSN=CSSA.OPC.DEVT.BATCH.JOBLIB,DISP=SHR
//EXP.JCLLIB04 DD DSN=CSSA.OPC.DEVT.FETCH.JOBLIB,DISP=SHR
//EXP.WSTADEF DD *
* Set Site Defaults
WSTASTART DIR(/opt/tws/user/bin) SUFFIX(.sc)
* Define workstations
WSTASTART WSID(I101) DEST(10.20.30.101) USER(tws) PASSWORD(xxxxxxxx)
WSTASTART WSID(H121) DEST(10.20.30.121) USER(tws) PASSWORD(xxxxxxxx)
//EXP.MEMBERS DD *
C%%101 I101
R%%121 H121
SJS*


Appendix B.

Connector reference
In this appendix we describe the commands related to the Tivoli Workload
Scheduler and Tivoli Workload Scheduler for z/OS connectors. We also describe
some Tivoli Management Framework commands related to the connectors.


Setting the Tivoli environment


To use the commands described in this appendix, you must first set the Tivoli
environment. To do this, log in as root or administrator, then enter one of the
commands shown in Table B-1.
Table B-1 Setting the Tivoli environment
Shell            Command to set the Tivoli environment
sh or ksh        . /etc/Tivoli/setup_env.sh
csh              source /etc/Tivoli/setup_env.csh
DOS (Windows)    %SYSTEMROOT%\system32\drivers\etc\Tivoli\setup_env.cmd
                 bash
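For example, on a UNIX managed node running ksh, the sequence might simply be
the following; this is just an illustration using the path from Table B-1, with
the connector command described later in this appendix.

. /etc/Tivoli/setup_env.sh
wopcconn        # start the TWS for z/OS connector utility in interactive mode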

Authorization roles required


To manage connector instances, you must be logged in as a Tivoli administrator
with one or more of the roles listed in Table B-2.
Table B-2 Authorization roles required for working with connector instances
An administrator with this role...    Can perform these actions
user                                  Use the instance, view instance settings
admin, senior, or super               Use the instance, view instance settings,
                                      create and remove instances, change
                                      instance settings, start and stop instances

Note: To control access to the scheduler, the TCP/IP server associates each
Tivoli administrator with a Resource Access Control Facility (RACF) user. For
this reason, a Tivoli administrator should be defined for every RACF user. For
additional information, refer to Tivoli Workload Scheduler V8R1 for z/OS
Customization and Tuning, SH19-4544.

Working with TWS for z/OS connector instances


This section describes how to use the wopcconn command to create and manage
Tivoli Workload Scheduler for z/OS connector instances.
Much of the following information is excerpted from the Tivoli Job Scheduling
Console User's Guide, SH19-4552.


The wopcconn command


Use the wopcconn command to create, remove, and manage Tivoli Workload
Scheduler for z/OS connector instances. This program is downloaded when you
install the connector. Table B-3 describes how to use wopcconn in the command
line to manage connector instances.
Note: Before you can run wopcconn, you must set the Tivoli environment. See
Setting the Tivoli environment on page 448.

Table B-3 How to manage TWS for z/OS connector instances


If you want to...                    Use this syntax
Create an instance                   wopcconn -create [-h node] -e instance_name
                                     -a address -p port
Stop an instance                     wopcconn -stop -e instance_name | -o object_id
Start an instance                    wopcconn -start -e instance_name | -o object_id
Restart an instance                  wopcconn -restart -e instance_name | -o object_id
Remove an instance                   wopcconn -remove -e instance_name | -o object_id
View the settings of an instance     wopcconn -view -e instance_name | -o object_id
Change the settings of an instance   wopcconn -set -e instance_name | -o object_id
                                     [-n new_name] [-a address] [-p port]
                                     [-t trace_level] [-l trace_length]

Where:
Node is the name or the object ID (OID) of the managed node on which you
are creating the instance. The TMR server name is the default.
instance_name is the name of the instance.
object_id is the object ID of the instance.
new_name is the new name for the instance.
Address is the IP address or hostname of the z/OS system where the Tivoli
Workload Scheduler for z/OS subsystem to which you want to connect is
installed.
Port is the port number of the OPC TCP/IP server to which the connector
must connect.


trace_level is the trace detail level, from 0 to 5. trace_length is the maximum
length of the trace file.
You can also use wopcconn in interactive mode. To do this, just enter the
command, without arguments, in the command line.

Example
We used a Tivoli Workload Scheduler for z/OS with the hostname twscjsc.
On this machine, a TCP/IP server connects to port 5000. Yarmouth is the name
of the TMR managed node where we installed the OPC connector. We called this
new connector instance twsc.
With the following command, our instance has been created:
wopcconn -create -h yarmouth -e twsc -a twscjsc -p 5000

You can also run the wopcconn command in interactive mode. To do this, perform
the following steps:
1. At the command line, enter wopcconn with no arguments.
2. Select choice number 1 in the first menu.
Example: B-1 Running wopcconn in interactive mode
Name                     : TWSC
Object id                : 1234799117.5.38#OPC::Engine#
Managed node             : yarmouth
Status                   : Active

   OPC version           : 2.3.0
2. Name                  : TWSC
3. IP Address or Hostname: TWSCJSC
4. IP portnumber         : 5000
5. Data Compression      : Yes
6. Trace Length          : 524288
7. Trace Level           : 0

0. Exit

Working with TWS connector instances


This section describes how to use the wtwsconn.sh command to create and
manage Tivoli Workload Scheduler connector instances.


Much of the following information is excerpted from the Tivoli Job Scheduling
Console User's Guide, SH19-4552. Note that there is an error on page 29 of that
guide: The command used to create TWS connector instances is called
wtwsconn.sh, not wtwsconn.

The wtwsconn.sh command


Use the wtwsconn.sh utility to create, remove, and manage connector instances.
This program is downloaded when you install the connector.
Note: Before you can run wtwsconn.sh, you must set the Tivoli environment.
See Setting the Tivoli environment on page 448.
Table 8-1 How to manage TWS connector instances

If you want to...                              Use this syntax
Create an instance                             wtwsconn.sh -create [-h node] -n instance_name -t twsdir
Stop an instance                               wtwsconn.sh -stop -n instance | -t twsdir
Remove an instance                             wtwsconn.sh -remove -n instance_name
View the settings of an instance               wtwsconn.sh -view -n instance_name
Change the TWS home directory of an instance   wtwsconn.sh -set -n instance_name -t twsdir

Where:
Node specifies the node where the instance is created. If not specified, it
defaults to the node where the script is run from.
Instance is the name of the new instance. This name identifies the engine
node in the Job Scheduling tree of the Job Scheduling Console. The name
must be unique within the Tivoli Managed Region.
twsdir specifies the home directory of the Tivoli Workload Scheduler engine
associated with the connector instance.

Example
We used a Tivoli Workload Scheduler for z/OS with the hostname twscjsc. On
this machine a TCP/IP server connects to port 5000. Yarmouth is the name of the
TMR managed node where we installed the TWS connector. We called this new
connector instance Yarmouth-A.


With the following command, our instance has been created:


wtwsconn.sh -create -h yarmouth -n Yarmouth-A -t /tivoli/TWS/

Useful Tivoli Framework commands


These commands can be used to check your Framework environment. Refer to
the Tivoli Framework 3.7.1 Reference Manual, SC31-8434, for more details.
wlookup -ar ProductInfo lists the products installed on the Tivoli server.
wlookup -ar PatchInfo lists the patches installed on the Tivoli server.
wlookup -ar MaestroEngine lists the instances of this class type (same for the
other classes).

For example:
barb 1318267480.2.19#Maestro::Engine#

The number before the first period (.) is the region number and the second
number is the managed node ID (1 is the Tivoli server). In a multi-Tivoli
environment, you can determine where a particular instance is installed by
looking at this number because all Tivoli regions have a unique ID.
wuninst -list lists all the products that can be uninstalled.
wuninst {ProductName} -list lists the managed nodes where a product is
installed.
wmaeutil Maestro -Version lists the versions of the installed engine, database,
and plan.
wmaeutil Maestro -dbinfo lists information about the database and the plan.
wmaeutil Maestro -gethome lists the installation directory of the connector.


Appendix C.

Merging TWS for z/OS and TWS databases
This appendix gives general guidelines on the decision to merge TWS for z/OS
and TWS databases. It complements the considerations that are discussed in
Section 4.5, Conversion from TWS network to TWS for z/OS managed network
on page 216.


Alternatives to consider
Customers having both TWS and TWS for z/OS installed at their site often ask
whether they should merge the two scheduling databases, and there is no
straight answer. The customer must look at the original decision of why they
installed both TWS for z/OS and TWS and ask themselves whether these facts
are still relevant in today's environment.
It is true that TWS for z/OS can manage both the mainframe and distributed
environment from one single engine and in fact the same can be said for TWS.
OPC, up until Version 2.3, used tracker agents for both the MVS and distributed
environments, whereas TWS uses fault tolerant agents for the distributed
environment and extended agents to manage the MVS systems.
So you have three choices, all of which have pros and cons:
Continue as today with both TWS for z/OS and TWS engines and use the
common Graphical User Interface (GUI), to maintain and monitor both
engines.
Migrate all the TWS schedules into the TWS for z/OS database and use TWS
for z/OS to schedule both the distributed and mainframe environments.
Migrate all the TWS for z/OS schedules into TWS and use TWS to schedule
both the distributed and mainframe environments.

Here are our general suggestions:


Option 1 is the way forward for users running systems up to and including
8.1.0, and since there is no increase in cost to run the same environment,
only the technical statements below should determine your decision.
Option 2 can be considered, but should be implemented if the technical
statements below will not cause any major impact to service.
Option 3 needs very careful consideration and, in most cases, should not be
pursued, since TWS for z/OS users would lose a lot of MVS-specific
functions.

The following paragraphs indicate the benefits and considerations of each
solution. They should help you decide how to proceed and, should you
eventually decide to migrate one database into the other, assist you in
assessing the amount of effort involved in the planning, implementation, and
testing of any such solution.

Option 1: Keep both TWS for z/OS and TWS engines


The following section discusses keeping both TWS and TWS for z/OS engines.


Benefits
There is no loss of engine-specific function when keeping both scheduling
engines. Users who have implemented TWS for z/OS to run their production
workload on the mainframe will almost certainly have utilized many of the
functions available to them specifically for this environment. Many functions
available to TWS for z/OS users are just not available when using the extended
agents with TWS to manage the mainframe.

TWS for z/OS-specific functions


Now we will discuss TWS for z/OS-specific functions.
Planning and forecasting functions enable the user to ascertain the
workload over both the short term and the long term. Using these powerful
facilities allows the user to plan for year-end schedules and other critical
periods and to adjust the schedules online with just a few keystrokes.
The Event Triggered Tracking (ETT) facility allows for unplanned work to be
dynamically added into the current plan and to be processed according to
predefined rules.
The Job Completion Checker function allows the output from MVS jobs to be
read by the scheduler to determine whether a job has worked or failed, even
though the return code set by the operating system was deemed to be OK. For
example, a return code zero with a message of "control totals do not match"
in the user output may well indicate a failure.
It is not just jobs that can be scheduled with TWS for z/OS. Started tasks,
Write To Operator (WTO) messages (which can be used to communicate with
automation tools), and the tracking of print operations can also be scheduled.
Many operations can make up a complex scheduling environment and these
complex operations can be spread across many areas. Each area within the
TWS for z/OS scheduling system is known as a workstation and each
workstation has its own individual plan. Therefore, for example, it is possible
to see the workload at any given CPU, the amount of JCL preparation that is
required and the manual operations that need to be fulfilled to satisfy the
schedule.
Automated reroute is a facility that allows the user to predefine what
should happen to any given operation in the event that the CPU it was
scheduled to run on is unavailable.
Catalogue management is the facility that allows the automatic clean-up of
MVS catalogues, both tape and disk files, in order for a job to be rerun without
any human intervention.
JCL tailoring is a function that can be used to automatically manipulate the
control language that is used to run the job depending on the results of tests


that it has been set up to run. For example, it could modify the execution JCL
based on the day of the week.
Workload Manager Integration ensures that a job under the control of TWS
for z/OS that is deemed to be running late will have its internal priority within
the MVS operating system adjusted to ensure it gets maximum resources
from the operating system, thus improving its throughput.
Hiperbatch is an MVS facility that allows large I/O bound files to reside in
storage until they are no longer required. When a job in TWS for z/OS is using
a file that has the Hiperbatch flag set, then this file is read and placed into
storage. When the job using this file ends, TWS for z/OS checks to see if any
other scheduled job requires this file. If it finds another job then it leaves it in
storage thus removing the I/O for second or subsequent operations, but if no
other job requiring this file is found in the schedule then the file is removed
from Hiperbatch.
Feedback/Smoothing is the facility that allows the actual duration of jobs to be
reported back to the database when it seems as though the planned duration
is either too long or too short. This ensures accurate future plans.

Users who have implemented TWS to run their production workload within the
distributed world will have almost certainly utilized many of the TWS-specific
functions. Some of these functions are just not available as standard options
within the TWS for z/OS product.

TWS-specific functions
The following are TWS-specific functions.
Global prompts is a method of holding back specific parts of the schedule
until a manual prompt has been received. In TWS for z/OS this is achieved by
using a manual workstation and creating a Global Prompt job stream that
contains the manual task. Any other job streams waiting for this manual task
to be completed would have a predecessor of the manual task defined in the
Global Prompt job stream.
Recovery = Continue is an option in TWS that allows the user to continue with
successor jobs even in the case of a failure. To do this in TWS for z/OS
requires the use of automatic recovery statements placed with the job stream
and a dummy step added to the end of the job.
Resources at job stream level are used to ensure that a resource is only set to
available or not when all jobs in a job stream have completed, regardless of
the sequence they run in, and that the resource is held for the duration of the
job stream. In TWS for z/OS this is achieved by adding a dummy step to the
start and end of the job stream and attaching the resource to these new
steps.


Run every n minutes is used to make a one-line entry in the scheduling
database to run a job at specified intervals, for example, every 60
minutes. In TWS for z/OS this would require 24 line entries: the first one
specifying 00:00, the second 01:00, the third 02:00, and so on up to and
including 24:00.
Use the new GUI, which is free, to maintain and monitor both environments
from one single graphical view. In the GUI shipped with TWS 8.1.0, one single
graphical view can see all engines and the same operators can use one
common interface to monitor and manage both environments.
Keeping both environments the same would incur no migration costs since
this would be business as usual.

Considerations
The following are things to consider.
Dependencies between TWS for z/OS and TWS have to be handled by the
user, but this can be achieved outside of the current scheduling dialogues by
using such techniques as data-set triggering or special resource flagging,
both of which are available to TWS for z/OS and TWS users, to communicate
between the two environments.
Keeping both TWS for z/OS and TWS engines does mean that there are two
pieces of scheduling software to maintain.

Option 2: Move TWS schedules into TWS for z/OS


We will now discuss moving TWS schedules into TWS for z/OS.

Benefits
We will now discuss the benefits of moving TWS schedules into TWS for z/OS.
Dependencies between mainframe and distributed jobs are handled within
the single scheduling engine and there is no need to do anything special
outside of the scheduling dialogues, such as dataset triggering or resource
flag setting.
In TWS for z/OS 8.1 the same fault tolerant agents are used as in TWS,
therefore reducing maintenance effort and technical support education
requirements.
There is only one scheduling engine to maintain, thus reducing maintenance
and technical support overheads especially when installing upgrades, etc.
A common GUI can be used.

The carryforward issue is resolved. When TWS creates the next day's
schedule with Jnextday, uncompleted work can be dropped from the schedule
or carried over into the next schedule, depending on configuration options set
by the user. However, because TWS can handle only one version of the same
job in its plan file, it renames the original to a name of the format CFnnnnnn,
where nnnnnn is a TWS-generated number. While this does not in itself cause
a problem in running the correct task, it does make it somewhat difficult for
operators to monitor its progress once it has been renamed. Because TWS for
z/OS can handle multiple jobs with the same name in the same schedule, this
issue is resolved.
Both tracker agents and fault tolerant agents can coexist on the same
distributed box, thus simplifying migration.

Considerations
Consider the following things.
Some standard functions that may be utilized within the TWS engine will need
to be converted to TWS for z/OS equivalents. See TWS-specific functions
on page 456.
Some TWS functions cannot be converted, CPU classes for example. This
function is TWS-specific and makes it very easy to add new jobs to the
schedule when a new distributed box is added to the scheduling
environment.

For example, if we have five UNIX boxes and on each of them we run the same
backup script every day, then instead of scheduling the script on each UNIX
box separately we can create a class called UNIXSERVER and associate the
backup script with this class. The class is then associated with each of the
UNIX servers. The backup script, defined only once, still runs on all the UNIX
boxes, and when a new UNIX server arrives we simply add the new server to
the UNIXSERVER class to run the same backup script there.
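As a sketch, and assuming invented workstation and job names (the exact
composer syntax is in the Tivoli Workload Scheduler 8.1 Reference Guide), the
class and a job stream defined on it might look like this:

   cpuclass UNIXSERVER
    members UNIX01 UNIX02 UNIX03 UNIX04 UNIX05
   end

   schedule UNIXSERVER#BACKUP
    on everyday
    :
    UNIXSERVER#NIGHTBKUP
   end

Because the job stream is defined on the class, it is replicated on every member
workstation; adding a sixth server only means adding its name to the members
line.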
Although this would probably not be a major effort, the work involved in
moving the TWS schedules to TWS for z/OS should not be underestimated.
TWS schedules can be unloaded and, by using REXX or equivalent
programming languages, be converted into TWS for z/OS batch loader
statements ready for loading into the existing or a new TWS for z/OS
controller database using the TWS for z/OS utility provided as a standard part
of the product. Care should be taken here though since, as already stated,
some TWS-specific functions will not automatically convert.
Many fields in TWS, such as workstation name, job name, and job stream
name, are a lot longer than the equivalent fields within TWS for z/OS. For
example, the workstation name in TWS can be 40 characters in length whilst
in TWS for z/OS it is only four. These of course are not problems that cannot
be resolved, but they do have to be considered and catered for in any
migration. See Chapter 4, End-to-end implementation scenarios and
examples on page 169 for some techniques that you can use for easy
migration.
File dependencies defined in TWS (distributed) will not be recognized when,
after conversion to TWS for z/OS, they are created on a distributed box by
the TWS for z/OS master. These file dependencies, however, may not be
required; if all the jobs are controlled by TWS for z/OS, it may be enough to
have a simple job dependency. Where there is a true need for a file
dependency, a looping script scheduled on the distributed engine at the
appropriate time may satisfy the requirement, as sketched below. (See EQQFM
in the TWS for z/OS SAMPLIB.)
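A minimal sketch of such a looping script follows, assuming an example file
name and timeout; the job simply waits until the file exists (or gives up after a
while), so that an ordinary job dependency on this job behaves like a file
dependency:

   #!/bin/sh
   # Wait for a file to appear, then exit 0 so that successor jobs can start.
   FILE=/data/incoming/trigger.dat   # example path only
   TIMEOUT=3600                      # give up after one hour
   WAITED=0
   while [ ! -f "$FILE" ]
   do
     sleep 60
     WAITED=`expr $WAITED + 60`
     if [ "$WAITED" -ge "$TIMEOUT" ]
     then
       echo "File $FILE did not arrive within $TIMEOUT seconds" >&2
       exit 1
     fi
   done
   exit 0

If the file does not arrive in time, the job ends in error and the normal TWS
recovery options apply.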

Option 3: Move TWS for z/OS schedules into TWS


Now we will discuss moving TWS for z/OS schedules into TWS.

Benefits
Below are some of the benefits of moving TWS for z/OS schedules into TWS.
Dependencies between mainframe and distributed jobs are handled within
the single scheduling engine and there is no need to do anything special
outside of the scheduling dialogues such as dataset triggering or resource
flag setting.
There is only one scheduling engine to maintain, thus reducing maintenance
and technical support overheads, especially when installing upgrades.
A common GUI can be used.

Considerations
Below are some things to consider about moving TWS for z/OS schedules into
TWS.
Many functions that are currently found within the TWS for z/OS engine will
not be available to TWS when running jobs in the mainframe environment.
See TWS for z/OS-specific functions on page 455.
There is a limit to the number of jobs that TWS can schedule from one master
database, and this limit needs to be understood before moving all of the TWS
for z/OS operations into the TWS database.

Depending on the number of jobs within the TWS for z/OS database, this
could quite easily become a major effort. TWS for z/OS schedules can be
unloaded and converted by REXX or an equivalent programming language
into TWS control statements ready for loading into the existing or a new TWS
master database using the TWS utility provided as a standard part of the
product. Care should be taken here though since, as already stated, many
TWS for z/OS-specific functions cannot be migrated to TWS and would be
lost to the user.
This migration is also against the direction of the TWS and TWS for z/OS
products. Any major enhancements to the TWS extended agent for OS/390
feature should not be expected.


Appendix D. Additional material
This redbook refers to additional material that can be downloaded from the
Internet as described below.

Locating the Web material


The Web material associated with this redbook is available in softcopy on the
Internet from the IBM Redbooks Web server. Point your Web browser to:
ftp://www.redbooks.ibm.com/redbooks/SG246022

Alternatively, you can go to the IBM Redbooks Web site at:


ibm.com/redbooks

Select Additional materials and open the directory that corresponds with
the redbook form number, SG246022.


Using the Web material


The additional Web material that accompanies this redbook includes the
following file:

SG246022.zip: Zipped conversion utility to help export centrally stored jobs
from the OPC controller to the FTAs

System requirements for downloading the Web material


The following system configuration is recommended:
Hard disk space: 1 MB minimum
Operating system: Windows/UNIX

How to use the Web material


Create a subdirectory (folder) on your workstation, and unzip the contents of the
Web material zip file into this folder.


Abbreviations and acronyms


ACF       Advanced Communications Function
API       Application Programming Interface
ARM       Automatic Restart Manager
CORBA     Common Object Request Broker Architecture
CP        Control point
DMTF      Desktop Management Task Force
EM        Event Manager
FTA       Fault tolerant agent
FTW       Fault tolerant workstation
GID       Group Identification Definition
GS        General Service
GUI       Graphical user interface
HFS       Hierarchical File System
IBM       International Business Machines Corporation
ISPF      Interactive System Productivity Facility
ITSO      International Technical Support Organization
JCL       Job control language
JES       Job Entry Subsystem
JSC       Job Scheduling Console
JSS       Job Scheduling Services
MIB       Management Information Base
MN        Managed nodes
NNM       Normal Mode Manager
OMG       Object Management Group
OPC       Operations, Planning and Control
PID       Process ID
PIF       Program interface
PSP       Preventive service planning
PTF       Program temporary fix
RACF      Resource Access Control Facility
RFC       Remote Function Call
RODM      Resource Object Data Manager
RTM       Recovery and Terminating Manager
SCP       Symphony Current Plan
SMF       System Management Facility
SMP       System Modification Program
SMP/E     System Modification Program/Extended
SNMP      Simple Network Management Protocol
STLIST    Standard list
TME       Tivoli Management Environment
TMF       Tivoli Management Framework
TMR       Tivoli Management Region
TSO       Time Sharing Option
TWS       Tivoli Workload Scheduler
USS       UNIX System Services
VTAM      Virtual Telecommunications Access Method
WA        Workstation Analyzer
WLM       Workload Monitor
X-agent   Extended agent
XCF       Cross-system coupling facility

Related publications
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this redbook.

IBM Redbooks
For information on ordering these publications, see How to get IBM Redbooks
on page 466.
End-to-End Scheduling with OPC and TWS Mainframe and Distributed
Environment, SG24-6013
TCP/IP in a Sysplex, SG24-5235

Other resources
These publications are also relevant as further information sources:
NetView for UNIX User's Guide for Beginners V7.1, SC31-8891
Tivoli Framework 3.7.1 Installation Guide, GC32-0395
Tivoli Framework 3.7.1 Reference Manual, SC31-8434
Tivoli Framework 3.7.1 User's Guide, GC31-8433
Tivoli Job Scheduling Console User's Guide, SH19-4552
Tivoli Workload Scheduler 8.1 Error Messages, SH19-4557
Tivoli Workload Scheduler 8.1 Planning and Installation Guide, SH19-4555
Tivoli Workload Scheduler 8.1 Reference Guide, SH19-4556
Tivoli Workload Scheduler for z/OS V8R1 Controlling and Monitoring the
Workload, SH19-4547
Tivoli Workload Scheduler for z/OS V8R1 Customization and Tuning,
SH19-4544
Tivoli Workload Scheduler for z/OS V8R1 Diagnosis Guide and Reference,
LY19-6410
Tivoli Workload Scheduler for z/OS V8R1 Installation Guide, SH19-4543
Tivoli Workload Scheduler for z/OS V8R1 Messages and Codes, SH19-4548


Tivoli Workload Scheduler for z/OS V8R1 Quick Reference, GH19-4541


Tracker agents for AIX, UNIX, VMS and OS/390 OMVS Installation and
Operation, SH19-4484
Tracker for OS/400 Installation and Operation, SH19-4485
Tracker agents for OS/2 and Windows NT platforms Installation and
Operation, SH19-4483
Workload Monitor/2 User's Guide, SH19-4482
z/OS V1R2 Communications Server: IP and SNA Codes, SC31-8791
z/OS V1R2 Communications Server: IP Configuration Guide, SC31-8775
z/OS V1R3.0 MVS Initialization and Tuning Guide, SA22-7591
z/OS V1R3.0 MVS System Commands, SA22-7627

Referenced Web sites


These Web sites are also relevant as further information sources:
Job Scheduling Console and related components: 1.2-JSC-0001
ftp://ftp.tivoli.com/support/patches/patches_1.2/1.2-JSC-0001

Tivoli support site


http://www.tivoli.com/support/

TWS updates
ftp://ftp.tivoli.com/support/patches/

Tivoli Workload Scheduler: 8.1-TWS-0001


ftp://ftp.tivoli.com/support/patches/patches_8.1/8.1-TWS-0001

How to get IBM Redbooks


You can order hardcopy Redbooks, as well as view, download, or search for
Redbooks at the following Web site:
ibm.com/redbooks

You can also download additional materials (code samples or diskette/CD-ROM
images) from that site.


IBM Redbooks collections


Redbooks are also available on CD-ROMs. Click the CD-ROMs button on the
Redbooks Web site for information about all the CD-ROMs offered, as well as
updates and formats.


Index
Numerics
24/7 availability 2

A
ABEND 335
ABENDU 335
Acknowledge 402
Acrobat reader 95
Agent 382
agents 5
AIX 146
Alternatives to consider 454
altpass 377
AS/400 job 2
Automated reroute 455
Automatic recovery statements 207

B
best restart step 59

C
calendar 27
CARRYFORWARD schedule 379
Catalogue management 455
central repository 259
chown 374
cleanup 6
CLIST 61
CODEPAGE 114
common interface 5
communications between workstations 18
Connector 17
controller 3
corrupted data 375
CP extension 122
CPU 26
CPU type 119
CPUFULLSTAT 234
CPUNODE 248
CPUREC 115
CPUSERVER 184
CPUTCPIP 248

customize script 143


Customizing
DVIPA 232
Job Scheduler Console 147
Netview integration software 392
Tivoli environment 144, 448
Tivoli Workload Scheduler for z/OS 102
TWS for High Availability 250
TWS for z/OS backup engines 230
TWS for z/OS engine 186
TWS for z/OS topology definitions 177
TWS Security file 161
work directory 109
cutover 211

D
daily production plan 33
dedicated library 207
default user name 142
dependency object 4
dependency resolution 11
Discovering network devices 382
DNS server 249
domain 34
domain manager 5
domain topology 115
DOMREC 115
dumpsec 165
Dynamic Virtual IP Addressing
See DVIPA

E
e-commerce 2
Endian 38
End-to-end scheduling
activating end-to-end feature 186
activating FTWs 188
configuration 176
considerations for scripts 207
conversion process 216
creating the script library member 207
creating Windows user and password definitions
184

customization of TWS for z/OS 186


defining FTWs 187
education 225
future 422
general experiences 200
guidelines for coversion 226
implementation scenarios 169
Implementing from scratch 174
major implementation steps 175
migrating backwards 210
Migrating from tracker agents to FTWs 201
migration actions 203
migration checklist 203
migration planning 202
our environment 172
password consideration 209
performing cutover 211
planning 92
previous release of OPC 175
rationale behind conversion 171
relation between of JCL variables and TWS parameters 211
restarting TWS for z/OS 186
run number 189
software distribution 207
Standards 178
TCP/IP considerations 96
topology 176
topology initialization 179
transferring scripts 206
troubleshooting 354
verify the conversion 226
verify the initialization statements 185
Verify the JCL for plan programs 185
engine 322
enhancement APARs 131
EQQAUDIT 7
EQQDUMP 339
EQQPDF 95
EQQTWSOU dataset 107
error jobs 34
event processing 337
Event Triggered Tracking 455
extended agent 119
extended agents 5

F
failover 246

fault tolerant agent


fault tolerant workstation 12
local copy 12
naming conventions 138
Feedback/Smoothing 456
file dependency 34
Free day rule 7
freeday 7
freedays rule 307
FTA
See fault tolerant agent
FTP 211
FTW
See fault tolerant workstation

G
gcomposer 153
gconman 153
generation data group 59
globalopts 36
Graph mode 305
grep 375

H
HACMP 246
HACMP for AIX 246
hardcopy 95
HFS
See Hierarchical File System
Hierarchical File System 95
HFS directory 104
in a sysplex 95
OS/390 V2R9 95
sharing 95
High Availability
Configuring TWS 248
failover 246
fail-over IP address 248
HACMP 246
HACMP standby node 248
HP Service Guard 251
shared disk array 246
Windows NT cluster 251
High Availability Cluster Multi-Processing
HACMP
Hiperbatch 456
HOLIDAYS calendar 312
home directory 143

HP-UX 146
HP-UX PA-RISC 146
Hyperbolic view 9

I
IBM AIX 146
IBM support center 366
idle time 4
INCORROUT 335
Init calendar 110
initial load 207
Installing
connectors 146
Job Scheduling Services 145
multiple instances of TWS 142
TCP/IP Server 131
Tivoli Management Framework 3.7B 145
Tivoli Workload Scheduler 141
Tivoli Workload Scheduler for z/OS 100
instance 451
Intel 379
ISPF panels 4

J
Java GUI interface 4
JCL 10
JCL tailoring 455
JCL variables 211
Jnextday script 33
job 2, 26
Job Completion Checker 455
job duration 6
Job FOLLOW 378
job instances 34
Job Scheduling Console 2
availability 153
Calendar run cycles 309
Common Default Plan Lists 322
Common view 9
compatibility 154
connector trace 366
Copying jobs 319
Creating connector instances 147
Editing a job stream instance 300
enhancements 8
error examples 369
filter row 314
freedays rule 307

future 424
General enhancements 313
Graph mode 305
Graphical enhancements 9
hardware and software prerequisites 157
installation on AIX 158
installation on Sun Solaris 158
installation on Windows 158
installing 156
Installing Job Scheduling Services 145
JSC 1.2 319
migration considerations 154
multiple run cycles 310
Non-modal windows 9, 319
Re-Submitting a job stream instance 302
Severity code 337
Simple run cycles 306
Sorting list results 317
Specifying free days 312
trace table 339
traces 372
troubleshooting 369
TWS for z/OS connector troubleshooting 366
TWS for z/OS-specific enhancements 10
TWS-specific enhancements 10
Usability enhancements 8
WAIT keyword 338
Weekly run cycles 308
Job Scheduling Services
See JSS
job stream 26
job stream instances 34
job_instance_output 89
JSC 4
See Job Scheduling Console
JSS 12

K
kill 375
killproc 375

L
legacy GUI 153
Legacy system 2
Linux Red Hat 7.1 146
listproc 375
local machine 206
logging 5

LOGLINES 114
LOOP 335

M
Maestro 11
maestro_database 88
maestro_engine 88
maestro_plan 88
maestro_x_server 89
mailman 375
maintenance strategy 167
Maintenence release 141
makesec 166
manage TCP/IP 382
Managed Workload Scheduler Network 387
management hub 18
Management Information Base
See MIB
Manager 382
master 5, 11
Master conman 401
master domain manager 5, 7
menu actions 401
MIB 382
mixed environment 322
mount point 247

N
nested INCLUDEs 59
Netman 8
netstat 361, 375
NetView GUI 383
NetView management node 34
network 5
network conditions 382
Network management 382
Network management ABC 382
Network manager 34
network traffic 19
NT cluster 251

O
OPC 35
opc_connector 88
opc_connector2 88
OPCMASTER 141
OPENS file 378

Operations Planning and Control


See OPC
Oracle 2, 5
OS/390 3, 5
OS/400 11
out of sync 147
Overview
Tivoli Workload Scheduler for Z/OS 3

P
parameter 27
parent directory 142
parent domain manager 12
Performance improvements
complex relations 8
Daily plan creation 8
Daily plan distribution 8
Event files 8
I/O optimization 8
massive scheduling plans 8
mm cache mailbox 137
mm cache size 137
sync level 137
wr enable compression 137
polling traffic 388
PORTNUMBER 114
predecessor 3
preventive service planning
See PSP
process id 375
production day 29
Program Directory 92
Program Interface 4
program interface 61
program temporary fix
PTF
prompt 27
prompt dependency 34
Protocol 382
ps 375
PSP 93

R
RACF user 148
real-time monitoring 382
Redbooks Web site 466
Contact us xxiii
Refresh CP group field 105

Removable Media Manager 6


reporting 5
reqistry 251
RESERVE macro 108
resource 27
resources 34
REXX 441
RMM
See Removable Media Manager
roll over 4
router 376
run cycle 4

S
sample security file 164
SAP/3 2
scalable agent 7
schedlog directory 257
Scheduling 2
scheduling engine 12
Script repository 100
SD37 abend code 108
SEQQMISC dataset 95
server 12
server started task 110
shared disk array 246
Simple Network Management protocol
See SNMP
Sinfonia 379
Sinfonia file 32
SNMP 382
SNMP Manager 382
SNMP traps 416
start Up 402
Started Tasks 455
StartUp 374
stdlist files 104
submit jobs 3
Submitting userid 100
subordinate agents 11
subordinate domain manager 30
Sun Solaris 146
switch manager 246
Symnew file 32
Symphony file 122
creating 31
distribution 31
Monitoring instances 34

Symnew file 32
updates 31
sysplex 5, 104
System Automation/390 245
System Display and Search Facility 59
system documentation 95

T
TBSM
See Tivoli Business Systems Manager
TCP/IP considerations 96
Apar PQ55837 99
Dynamic Virtual IP Addressing 99
stack affinity 98
Usage of the host file 98
terminology 11
time stamp 379
Tivoli Business Systems Manager 6, 34
Tivoli managed node 387
Tivoli Managed Region
See TMR
Tivoli Management Framework 148
Tivoli Management Framework 3.7.1 145
Tivoli Management Framework 3.7B 145
Tivoli NetView 382
Tivoli Software Distribution 207
Tivoli Workload cheduler/Netview
TWS maps 397
Tivoli Workload Scheduler 2, 4
architecture 5
Auditing 35
auditing log files 255
Backup and maintenance guidelines for FTAs
251
backup domain manager 138
Benefits of integrating with TWS for z/OS 5
Calendar 27
central repositories 259
Centralized security 424
configuring for HACMP 248
creating TMF Administrators 148
database 28
database files 5
defining objects 28
definition 11
dependency resolution 11
Distributed 11
end-to-end scheduling 2

engine 12
enhancements 7
Extended Agents 5
fault tolerant workstation 12
Firewall support 424
four tier network 20
Free Day Rule 7
future 423
HA in Windows NT/2000 environment 251
HACMP 250
home directory 36
HP Service Guard 251
Installation improvements 8
installing 141
installing an agent 141
installing and configuring Tivoli Framework 144
installing Job Scheduling Services 145
installing multiple instances 142
introduction 4
Job 26
Job Scheduling Console 2
Job stream 26
maintenance 167
master domain manager 30
MASTERDM 11
Monitoring file systems 258
multi-domain configuration 19
Multiple holiday calendars 7
naming conventions 138
network 5
network fail safe 37
new security mechanism 423
NT cluster 251
Options 36
overview 4
Parameter 27
Parameters files 260
Performance improvements 8
plan 4, 29
production day 29
Prompt 27
Reporting 35
schedlog directory 257
scheduling engine 12
scripts files 259
Security files 260
security model 160
Serviceability enhancements 424
Setting Global options 36

Setting Local options 36


Setting security 36
single domain configuration 18
Software ordering 93
SSL support 424
Symphony file 30
TBSM integration 7
terminology 11
trends 422
troubleshooting 374
TWS user 143
unison directory 142
User 27
Using time zones 37
Workstation 26
Workstation class 26
Tivoli Workload Scheduler 7.0 154
Tivoli Workload Scheduler 8.1 suite 2
Tivoli Workload Scheduler for z/OS 3
backup engines 230
Benefits of integrating with TWS 5
Controller 3
creating TMF Administrators 148
database 3
end-to-end dataset allocation 107
end-to-end FEATURE 103
end-to-end scheduling 7
enhancements 6
EQQJOBS installation aid 102
EQQPDF 95
fail-over scenarios 230
future 422
HA configuration guidelines 230
HFS Installation Directory 103
HFS Work Directory 103
hot standby engines 230
Improved memory management 423
installation tasks 100
installing 100
introduction 3
JCL Variables management 423
Job durations in seconds 6
Job recover 422
long term plan 5
Migration Utility 423
Minor enhancements 7
overview 3
PSP Upgrade and subset id 93
Refresh CP group 105

Removal of one domain manager limitation 423


restart and cleanup 6
RMM integration 6
Script Library management 422
switch manager 246
System documentation 95
TBSM integration 6
tracker 3
troubleshooting 334
User for OPC address space 104
VIPA 230
Workload monitor 94
Workload Monitor/2 94
Tivoli Workload Scheduler network 17
Tivoli Workload Scheduler/NetView 34
Tivoli Workload Scheduler/Netview
any SNMP Manager 419
architecture 386
benefits 383
Configuring user access 392
Defining TWS maps 392
events 416
Installing and customizing 388
Installing magent 391
Installing mdemon 389
internals 385
Loading Maestro MIB 396
Maestro process information 404
Managed Node 387
Managed Workload Scheduler Network 387
Menu actions 400
Monitoring job schedules 411
operation 397
state change 385
traps enabled by deault 417
traps not enabled by default 418
UNIX requirement 385
TMF 12
See Tivoli Management Framework
topology 122
TOPOLOGY statement 113, 362
TPLGYMEM 114
TPLGYPRM 113
tracker 3
Tracker agent migration 94
tracker agents 94
transformation 424
TRCDAYS 114
trends and directions 422

Troubleshooting 333
ABEND 335
ABENDU 336
Abnormal termination 338
AutoTrace 380
batchman down 376
Byte order problem 379
compiler processes 379
connector 366
console dump 345
diagnostic file 339
end-to-end 354
end-to-end working directory 354
evtsize 376
FTA not linked 379
FTAs not linking to the master 374
INCORROUT 336
Information needed 346
internal trace 366
Jnextday in ABEND 379
Jnextday is hung 378
Job Scheduling Console 369
Jobs not running 377
JSC error examples 369
LOOP procedure 341
missing calendars 379
missing resources 379
MSG 335
MSG keyword 337
multiple netman processes 375
negative runtime error 379
PERFM keyword 337
preparing a console dump 344
Problem analysis 338
Problem-type keywords 335
software-support database 335
standard list directory 356
standard list messages 358
Starter log information 358
Symphony renew 362
System dump dataset 340
TCP/IP server 366
Tivoli Workload Scheduler for z/OS 334
Trace information 340
TRACEDATA 373
TRACELEVEL 373
tracing facility 380
tracking events 347
Translator log information 358

TWS port 375


TWS troubleshooting checklist 374
Unix System Services 363
unlinked workstations 359
Using keywords 334
Wait 335
writer process down 375
TSO command 4
TWS 5
Composer 422
Conductor 422
See Tivoli Workload Scheduler
trends 422
TWS 8.1 Suite 11
TWS engine 12
TWS for z/OS
Tivoli Workload Scheduler for z/OS
TWS parameters 211
TWSXPORT utility 428

WRKDIR 115
WSTASTART statement 436
wtwsconn.sh command 147, 451

Z
z/OS job 2

U
Unison 4
UNIX 11, 247
UNIX System Services 95
user 27
USRMEM 115
USRREC 115
USS
See UNIX System Services

V
virtual ip address 99

W
Web browser 2
Windows 11
Windows NT 2
WLM/2
See Workload Manager/2
wlookup 452
work directory 104
workday 7
Workload 2, 94
Workload Monitor/2 94
workstation 26, 34
workstation class 26
Write to Operator 455

Back cover

End-to-End Scheduling
with Tivoli Workload
Scheduler 8.1
Plan and implement
your end-to-end
scheduling
environment with
TWS 8.1
Model your
environment using
realistic scheduling
scenarios
Learn the best
practices and
troubleshooting

Tivoli Workload Scheduler (TWS) 8.1 is a very important
milestone in further integrating the OPC- and Maestro-based
scheduling engines. This redbook considers how best to
provide end-to-end scheduling using Tivoli Workload
Scheduler 8.1, covering both the distributed (previously
known as Maestro) and mainframe (previously known as OPC)
components.

INTERNATIONAL
TECHNICAL
SUPPORT
ORGANIZATION

Some of the topics covered in this book are:

BUILDING TECHNICAL
INFORMATION BASED ON
PRACTICAL EXPERIENCE

-TWS 8.1 end-to-end architecture


-Installation and customization of TWS 8.1
-End-to-end scenarios
-Troubleshooting
-High availability of TWS environment
-Migration from earlier versions to TWS 8.1 end-to-end
-Working smart with the Job Scheduling Console
-NetView integration
-Scheduling best practices
-A glimpse into the future
We hope that this book will be a major reference for all TWS
and TWS for z/OS specialists while implementing an
end-to-end scheduling architecture.

IBM Redbooks are developed by


the IBM International Technical
Support Organization. Experts
from IBM, Customers and
Partners from around the world
create timely technical
information based on realistic
scenarios. Specific
recommendations are provided
to help you implement IT
solutions more effectively in
your environment.

For more information:


ibm.com/redbooks
SG24-6022-00

ISBN 0738425079
