End-to-End Scheduling
with Tivoli Workload
Scheduler 8.1
Plan and implement your end-to-end
scheduling environment with TWS 8.1
Model your environment using
realistic scheduling scenarios
Learn the best practices
and troubleshooting
Vasfi Gucer
Stefan Franke
Finn Bastrup Knudsen
Michael A Lowry
ibm.com/redbooks
SG24-6022-00
Take Note! Before using this information and the product it supports, be sure to read the
general information in Notices on page xix.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
The team that wrote this redbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Notice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Comments welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Job scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Introduction to Tivoli Workload Scheduler for z/OS. . . . . . . . . . . . . . . . . . . 3
1.2.1 Overview of Tivoli Workload Scheduler for z/OS . . . . . . . . . . . . . . . . 3
1.2.2 Tivoli Workload Scheduler for z/OS architecture . . . . . . . . . . . . . . . . 3
1.3 Introduction to Tivoli Workload Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.1 Overview of Tivoli Workload Scheduler . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.2 Tivoli Workload Scheduler architecture. . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Benefits of integrating TWS for z/OS and TWS . . . . . . . . . . . . . . . . . . . . . 5
1.5 Summary of enhancements in Tivoli Workload Scheduler 8.1 . . . . . . . . . . 6
1.5.1 Enhancements to Tivoli Workload Scheduler for z/OS . . . . . . . . . . . . 6
1.5.2 Enhancements to Tivoli Workload Scheduler . . . . . . . . . . . . . . . . . . . 7
1.5.3 Enhancements to the Job Scheduling Console . . . . . . . . . . . . . . . . . 8
1.6 The terminology used in this book. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.7 Material covered in this book. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Chapter 2. End-to-end TWS architecture and components. . . . . . . . . . . . 15
2.1 Tivoli Workload Scheduler architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.1 The Tivoli Workload Scheduler network . . . . . . . . . . . . . . . . . . . . . . 18
2.1.2 Tivoli Workload Scheduler workstation types . . . . . . . . . . . . . . . . . . 21
2.1.3 Tivoli Workload Scheduler topology . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.1.4 Tivoli Workload Scheduler components . . . . . . . . . . . . . . . . . . . . . . 24
2.1.5 Tivoli Workload Scheduler database objects . . . . . . . . . . . . . . . . . . 25
2.1.6 Tivoli Workload Scheduler plan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.1.7 Other Tivoli Workload Scheduler features . . . . . . . . . . . . . . . . . . . . 34
2.1.8 Making the Tivoli Workload Scheduler network fail-safe. . . . . . . . . . 37
2.2 Tivoli Workload Scheduler for z/OS architecture. . . . . . . . . . . . . . . . . . . . 39
Return code 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
REXX members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
Flow of the program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
TWSXPORT program code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
Sample job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrates programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to IBM for the purposes of
developing, using, marketing, or distributing application programs conforming to IBM's application
programming interfaces.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
IBM
PAL
Perform
PowerPC
RACF
Redbooks
Redbooks(logo)
RS/6000
S/390
Sequent
SP
Tivoli
Tivoli Enterprise
Tivoli Enterprise Console
Tivoli Management
Environment
TME
VTAM
z/OS
The following terms are trademarks of International Business Machines Corporation and Lotus Development
Corporation in the United States, other countries, or both:
Lotus
Notes
Word Pro
Preface
The beginning of the new century sees the data center with a mix of work,
hardware, and operating systems previously undreamt of. Today's challenge is to
manage disparate systems with minimal effort and maximum reliability. People
experienced in scheduling traditional host-based batch work must now manage
distributed systems, and those working in the distributed environment must take
responsibility for work running on the corporate OS/390 system.
This redbook considers how best to provide end-to-end scheduling using Tivoli
Workload Scheduler 8.1, both the distributed (previously known as Maestro) and
mainframe (previously known as OPC) components.
In this book, we provide the information needed to install the necessary TWS 8.1
software components and configure them to communicate with each other.
In addition to technical information, we will consider various scenarios that may
be encountered in the enterprise and suggest practical solutions. We will
describe how to manage work and dependencies across both environments
using a single point of control.
Notice
This publication is intended to help Tivoli specialists implement an end-to-end
scheduling environment with TWS 8.1. The information in this publication is not
intended as the specification of any programming interfaces that are provided by
Tivoli Workload Scheduler 8.1. See the PUBLICATIONS section of the IBM
Programming Announcement for Tivoli Workload Scheduler 8.1 for more
information about what publications are considered to be product documentation.
Comments welcome
Your comments are important to us!
We want our Redbooks to be as helpful as possible. Send us your comments
about this or other Redbooks in one of the following ways:
Use the online Contact us review redbook form found at:
ibm.com/redbooks
Chapter 1.
Introduction
In this chapter, we introduce the Tivoli Workload Scheduler 8.1 suite and
summarize the new functions that are available in this version. TWS 8.1 is an
important milestone in the integration of OPC- and Maestro-based scheduling
engines.
This chapter contains the following:
An overview of job scheduling
An overview of Tivoli Workload Scheduler
An overview of Tivoli Workload Scheduler for z/OS
New functions in Tivoli Workload Scheduler 8.1
A description of the material covered in this book
An introduction to the terminology used in this book
The main method of accessing the controller is via ISPF panels, but several
other methods are available including Program Interfaces, TSO commands, and
the Job Scheduling Console.
The Job Scheduling Console (JSC), a Java-based graphical user interface, was
introduced in Tivoli OPC Version 2.3. The current version of the JSC has been
updated with several new TWS for z/OS-specific functions. The JSC provides a
common interface to both TWS for z/OS and TWS.
For more information on TWS for z/OS architecture, see Chapter 2, End-to-end
TWS architecture and components on page 15.
Minor enhancements
EQQAUDIT is now installed during the installation of Tivoli Workload Scheduler
for z/OS. One can access EQQAUDIT from the main menu of Tivoli Workload
Scheduler for z/OS ISPF panels. The batch command interface tool (BCIT) is
now installed as part of the regular installation of Tivoli Workload Scheduler for
z/OS.
Freeday rule
The freeday rule brings to Tivoli Workload Scheduler the run cycle concept that is
already used in Tivoli Workload Scheduler for z/OS. It consists of a number of
options (or rules) that determine when a job stream should actually run if its
scheduled date falls on a freeday.
Performance improvements
The new performance enhancements will be particularly appreciated in Tivoli
Workload Scheduler networks with many workstations, massive scheduling
plans, and complex relations between scheduling objects. The improvements are
in the following areas:
Daily plan creation: Jnextday runs faster and, consequently, the master
domain manager can start its production tasks sooner.
Daily plan distribution: the Tivoli Workload Scheduler administrator can now
enable the compression of the Symphony file so that the daily plan can be
distributed to other nodes earlier.
I/O optimization: Tivoli Workload Scheduler performs fewer accesses to the
files and optimizes the use of system resources. The improvements affect:
Event files: The response time to the events is improved so the message
flow is faster.
Daily plan: The access to the Symphony file is quicker for both read and
write. The daily plan can therefore be updated in a shorter time than it was
previously.
Installation improvements
On Windows NT, the installation of netman is no longer a separate process. It is
now part of the installation steps of the product.
Linux support
Version 8.1 of Tivoli Workload Scheduler adds support for the following Linux
platforms:
Linux for INTEL as master domain manager or fault-tolerant agent
Linux for S/390 as fault-tolerant agent
Usability enhancements
The following usability enhancements are featured:
Improved tables for displaying list views. Users can now sort and filter table
contents by right-clicking the table. They can also automatically resize a
column by double-clicking its header.
Message windows now directly display the messages. Users no longer have
to click the Details button to read the message. For error messages, the error
code is also displayed on the window title.
The addition of jobs to a job stream can also be done from within the Timeline
view, as well as from the Graph view, of the Job Stream Editor.
New jobs are now automatically positioned within the Graph view of the Job
Stream Editor. Users are no longer required to click their mouse on the
background of the view to open the job's properties window.
A new editor for job stream instances is featured. This editor is similar to the
Job Stream Editor for the database and enables users to see and work with
all the job instances contained in a specified job stream. From it users can
modify the properties of a job stream instance and its job instances, and the
dependencies between the jobs. The job stream instance editor does not
include the Timeline and Run cycle views. The job instance icons also display
the current status of the job.
Graphical enhancements
The following graphical enhancements are featured:
Input fields have changed to conform to Tivoli Presentation Services norms.
Mandatory fields have a yellow background, and fields containing input with
syntax errors display a white cross on a red background.
A new Hyperbolic view graphically displays all the job dependencies of every
single job in the current plan.
Non-modal windows
The properties windows of scheduling objects are now non-modal. This means
that you can have two or more properties windows open at the same time. This
can be particularly useful if you need to define a new object that is in turn
required for another object's definition.
Common view
The Common view provides users with the ability to list job and job stream
instances in a single view, regardless of their scheduling engine, thus
furthering integration for workload scheduling on the mainframe and the
distributed platforms. The Common view is displayed as an additional selection
at the bottom of the tree view of the scheduling engines.
available in the old gconman program through the Set Symphony function, and is
available in the command line conman program via the setsym command. It is
useful for quickly seeing how things were (e.g., what ran and when it ran) on a
specific day in the past.
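As an illustration, the following conman session sketches how an archived plan can
be browsed from the command line (a sketch only, assuming a standard installation
with conman on the PATH; the exact output depends on your own plan history):

   conman listsym        # list the archived Symphony (plan) files in the schedlog directory
   conman "setsym 2"     # point conman at the second most recent archived plan
   conman "ss @#@"       # show the job streams as they were in that plan
   conman "setsym"       # with no argument, return to the current plan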
TWS
Master
Domain manager
Fault tolerant agent An agent that keeps its own local copy of the plan file, and
can continue operation even if the connection to the
parent domain manager is lost. Also called an FTA. In
TWS for z/OS, FTAs are referred to as fault tolerant
workstations.
Scheduling engine
TWS engine
TWS for z/OS engine The part of TWS for z/OS that does actual scheduling
work, as distinguished from the other components related
primarily to the user interface (e.g., the TWS for z/OS
connector). Essentially the controller plus the server.
TWS for z/OS controller The part of the TWS for z/OS engine that is based on the
old OPC program.
TWS for z/OS server The part of TWS for z/OS that is based on the UNIX TWS
code. Runs in UNIX System Services (USS) on the
mainframe.
JSC
Connector
JSS
TMF
the pieces fit in relation to one another and work together. This chapter is
divided into four main parts:
Tivoli Workload Scheduler for z/OS 8.1 architecture
Tivoli Workload Scheduler 8.1 architecture
End-to-end scheduling architecture (combining both TWS and TWS for
z/OS into one big picture)
Description of how Job Scheduling Console fits into this picture
Chapter 3, Planning, installation, and configuration of the TWS 8.1 on
page 91 includes guidelines for intelligent use of existing computer resources
in your TWS network, as well as step-by-step instructions for installing each
of the required components. This chapter is divided into sections devoted to
each component of the complete installation:
Planning an end-to-end scheduling installation for TWS for z/OS
Installing Tivoli Workload Scheduler for z/OS 8.1
Planning end-to-end scheduling for TWS
Installing Tivoli Workload Scheduler 8.1 in an end-to-end environment
Installing the Tivoli Management Framework as a stand-alone TMR server
just for TWS
Installing Job Scheduling Console 1.2
Chapter 4, End-to-end implementation scenarios and examples on
page 169 shows implementation examples for different TWS for z/OS and
TWS environments. The following implementation scenarios are covered:
Upgrading from previous versions
Maintenance
Fail-over scenarios
Chapter 5, Using The Job Scheduling Console on page 263. This chapter
gives best practices for using JSC in an end-to-end scheduling environment.
Chapter 6, Troubleshooting in a TWS end-to-end environment on page 333
covers best practices for troubleshooting and has the following sections:
Troubleshooting Tivoli Workload Scheduler for z/OS 8.1 (including
end-to-end troubleshooting)
Troubleshooting Tivoli Workload Scheduler 8.1
Chapter 7, Tivoli NetView integration on page 381 covers the steps
necessary to successfully integrate Tivoli NetView with TWS in order to
process job statuses.
Chapter 2.
End-to-end TWS
architecture and
components
This chapter describes the end-to-end scheduling architecture using Tivoli
Workload Scheduler and Tivoli Workload Scheduler for z/OS.
In this chapter, the following topics are discussed:
Tivoli Workload Scheduler architecture
Tivoli Workload Scheduler for z/OS architecture
End-to-end scheduling architecture
Job Scheduling Console and related components
If you are unfamiliar with Tivoli Workload Scheduler terminology and architecture
you can start with the section on Tivoli Workload Scheduler architecture to get a
better understanding of how Tivoli Workload Scheduler works. If you are already
familiar with Tivoli Workload Scheduler but would like to learn more about Tivoli
Workload Scheduler for z/OS, you can start with the section on Tivoli Workload
Scheduler for z/OS architecture. If you are already familiar with both TWS and
TWS for z/OS, skip ahead to the third section where we describe how both
programs work together when configured as an end-to-end network.
The Job Scheduling Console, its components, and its architecture, are described
in the last topic. In this topic we describe the different components used to
establish a Job Scheduling Console environment.
In the next sections we will provide an overview of the Tivoli Workload Scheduler
network and workstations, the topology used to describe the architecture in Tivoli
Workload Scheduler, and the two basic aspects to job scheduling in Tivoli
Workload Scheduler: The databases, and the plan and the terminology used in
Tivoli Workload Scheduler.
Figure 2-1 Tivoli Workload Scheduler network with only one domain
Using multiple domains reduces the amount of network traffic by reducing the
communications between the master domain manager and the other computers
in the network. In Figure 2-2 we have a Tivoli Workload Scheduler network with
three domains.
Figure 2-2 Tivoli Workload Scheduler network with three domains
Once the network is started, scheduling messages like job starts and
completions are passed from the agents to their domain managers, through the
parent domain managers to the master domain manager. The master domain
manager then broadcasts the messages throughout the hierarchical tree to
update the Symphony files of domain managers and fault tolerant agents running
in full status mode.
It is important to remember that Tivoli Workload Scheduler does not limit the
number of domains or levels (the hierarchy) of your Tivoli Workload Scheduler
network. The number of domains or levels in your Tivoli Workload Scheduler
network should be based on the topology of the physical network where you
want to implement the Tivoli Workload Scheduler network. See Section 3.4.1,
Network planning and considerations on page 134 for more information on how
to design a Tivoli Workload Scheduler network.
Figure 2-3 on page 21 shows an example of a multi-domain Tivoli Workload
Scheduler four-tier network, that is, a network with four levels:
1. Master domain manager, MASTERDM
2. DomainA and DomainB
3. DomainC, DomainD, DomainE, FTA1, FTA2, and FTA3
4. FTA4, FTA5, FTA6, FTA7, FTA8, and FTA9
Figure 2-3 A four-tier Tivoli Workload Scheduler network with multiple domains
Backup master
A fault tolerant agent or domain manager capable of assuming the
responsibilities of the master domain manager for automatic workload
recovery. The copy of the plan on the backup master is updated with the
same reporting and logging as the master domain manager plan.
Domain manager
The management hub in a domain. All communications to and from the
agents in a domain are routed through the domain manager. The domain
manager can resolve dependencies between jobs in its subordinate agents.
The copy of the plan on the domain manager is updated with reporting and
logging from the subordinate agents.
Backup domain manager
A fault tolerant agent capable of assuming the responsibilities of its domain
manager. The copy of the plan on the backup domain manager is updated
with the same reporting and logging information as the domain manager plan.
Fault tolerant agent (FTA)
A workstation capable of resolving local dependencies and launching its jobs
in the absence of a domain manager. It has a local copy of the plan generated
in the master domain manager. In TWS for z/OS, a fault tolerant agent is also
called a fault tolerant workstation.
Standard agent
A workstation that launches jobs only under the direction of its domain
manager.
Extended agent
A logical workstation definition that enables you to launch and control jobs on
other systems and applications, such as PeopleSoft, Oracle Applications, SAP,
and MVS (JES2 and JES3).
Network agent
A logical workstation definition for creating dependencies between jobs and
job streams in separate Tivoli Workload Scheduler networks.
Job Scheduling Console client
Any workstation running the graphical user interface from which schedulers
and operators can manage Tivoli Workload Scheduler plan and database
objects. Actually this is not a workstation in the Tivoli Workload Scheduler
network; the Job Scheduling Console client is where you work with the Tivoli
Workload Scheduler database and plan.
Figure 2-4 on page 23 shows a Tivoli Workload Scheduler network with some of
the different workstation types.
Figure 2-4 A Tivoli Workload Scheduler network with different workstation types
In Section 3.4.1, Network planning and considerations on page 134, you can
find more information on how to configure a Tivoli Workload Scheduler network
based on your particular distributed network and environment.
Figure: the Tivoli Workload Scheduler processes (netman, writer, mailman, batchman, and jobman) and their message files (NetReq.msg, Mailbox.msg, Intercom.msg, and Courier.msg)
Scheduling objects are elements used to define your Tivoli Workload Scheduler
network and production. Scheduling objects include jobs, job streams,
dependencies, calendars, workstations, prompts, resources, parameters, and
users.
Scheduling objects can be created, modified, or deleted by using the Job
Scheduling Console or the command line interface, the composer program.
Workstation
Workstation class
Job
Job stream
Prompt
Resource
Parameter
User
Dependencies
A condition that must be met in order to launch a job or job stream. Note that
dependencies can be created on a job stream level or a job level.
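To make these objects more concrete, the following is a minimal sketch of
composer definitions for two jobs and a job stream with a follows dependency
(the workstation FTA1, script paths, and the tws user are illustrative placeholders,
not names taken from this book's scenarios):

   $JOBS
   FTA1#EXTRACT
    SCRIPTNAME "/opt/tws/scripts/extract.sh"
    STREAMLOGON tws
    DESCRIPTION "Extract the daily data"
    RECOVERY STOP
   FTA1#LOAD
    SCRIPTNAME "/opt/tws/scripts/load.sh"
    STREAMLOGON tws
    DESCRIPTION "Load the extracted data"
    RECOVERY STOP

   SCHEDULE FTA1#DAILYRUN
   ON EVERYDAY
   :
   FTA1#EXTRACT
   FTA1#LOAD FOLLOWS EXTRACT
   END

The job stream (SCHEDULE) selects a run cycle with the ON keyword, and FOLLOWS
expresses the job-level dependency between the two jobs.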
A copy of the Symphony file is sent to all subordinate domain managers and to
all the fault tolerant agents in the same domain.
The subordinate domain managers distribute their copy to all the fault tolerant
agents in their domain and to all the domain managers that are subordinate to
them, and so on down the line. This enables fault tolerant agents throughout the
network to continue processing even if the network connection to their domain
manager is down. From the Job Scheduling Console or the command line
interface, the operator can view and make changes in the day's production by
making changes in the Symphony file.
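From the command line, for example, the day's production can be inspected and
adjusted with conman (a sketch only; the workstation, job stream, and job names
are placeholders):

   conman "ss FTA1#@"                    # status of all job stream instances on workstation FTA1
   conman "sj FTA1#DAILYRUN.@"           # status of the jobs in the DAILYRUN job stream instance
   conman "rerun FTA1#DAILYRUN.EXTRACT"  # rerun a job in today's plan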
The distribution of the Symphony file from master domain manager, to domain
managers and their subordinate agents is shown in Figure 2-7 on page 31.
Figure 2-7 The distribution of the plan (Symphony file) in a TWS network
Tivoli Workload Scheduler processes monitor the Symphony file and make calls
to the operating system to launch jobs as required. The operating system runs
the job, and in return informs Tivoli Workload Scheduler whether the job has
completed successfully or not. This information is entered into the Symphony file
to indicate the status of the job. This way the Symphony file is continuously being
updated to reflect the work that needs to be done, the work in progress, and the
work that has been completed.
It is important to remember that the Symphony file contains copies of the objects
read from the Tivoli Workload Scheduler databases. Because of this, any updates,
changes, or modifications made to objects in the Symphony file are not
reflected back in the Tivoli Workload Scheduler databases.
A final job stream is placed in the plan every day; it runs a job
named Jnextday prior to the start of a new day. The major steps are depicted in
Figure 2-8 on page 33. The job performs the following tasks:
The scheduler program runs (Step 1 in Figure 2-8 on page 33). The
scheduler reads the system date and selects job streams for the new day's
plan (read from the mastsked database). Only the job streams whose run
cycles include that date are selected. The job streams are placed in the
production schedule file (prodsked).
The compiler program runs (Step 2 in Figure 2-8 on page 33). The compiler
creates an interim plan file (Symnew). The compiler does this by importing
into the Symnew file all the scheduling objects (jobs, prompts, resources, and
NT users) referenced by the job streams in the production schedule file.
Prints preproduction reports.
Stops Tivoli Workload Scheduler (stops and unlinks from the subordinate
agents).
The stageman program runs (Step 3 in Figure 2-8 on page 33). This program
carries forward into the new plan incomplete job streams from the old plan.
Only incomplete job streams whose Carry Forward option is set are carried
forward. Stageman then archives the old plan file in the schedlog directory
and installs the new plan (Symphony). A duplicate copy of the plan called
Sinfonia is created for distribution to the agents.
Starts Tivoli Workload Scheduler for the new day.
Prints post-production reports for the previous day.
Logs job statistics for the previous day.
These steps are run on the master domain manager.
When Tivoli Workload Scheduler starts up again on the master it automatically
links to the subordinate agents in the network. The agents receive the updated
Sinfonia file from the master, save the Sinfonia file as their new local Symphony
file, and resume processing.
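Conceptually, the work driven by Jnextday can be summarized as follows (the
program names come from the steps above; invocation details and arguments are
omitted, so this is an outline rather than the shipped script):

   schedulr       # 1. select the job streams whose run cycles include the new date -> prodsked
   compiler       # 2. expand prodsked into the interim plan file, Symnew
   conman stop    #    stop TWS and unlink the subordinate agents
   stageman       # 3. carry forward incomplete job streams, archive the old plan to schedlog,
                  #    and install the new Symphony (and its Sinfonia copy for the agents)
   conman start   #    start TWS for the new day; reports and job statistics follow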
Figure 2-8 The Jnextday programs (schedulr, compiler, and stageman) build the new Symphony file and its Sinfonia copy from the TWS databases
Furthermore you can rerun completed or in-error jobs as well as job streams in
the plan.
Reporting
As part of the preproduction and post-production processes, reports are
generated that show summary or detail information about the previous or next
production day. These reports can also be generated ad hoc. The available
reports are:
Job details listing
Prompt listing
Calendar listing
Parameter listing
Resource listing
Job history listing
Job histogram
Planned production schedule
Planned production summary
Planned production detail
Actual production summary
Actual production detail
Cross reference report
In addition, during production, a standard list file (STDLIST) is created for each
job instance launched by Tivoli Workload Scheduler. Standard list files contain
header and trailer banners, echoed commands, and errors and warnings. These
files can be used to troubleshoot problems in job execution.
Auditing
An auditing option helps track changes to the database and the plan.
For the database, all user modifications, except for the delta of the modifications,
are logged. If an object is opened and saved, the action will be logged even if no
modification has been done.
For the plan, all user modifications to the plan are logged. Actions are logged
whether they are successful or not.
Audit files are logged to a flat text file on individual machines in the Tivoli
Workload Scheduler network. This minimizes the risk of audit failure due to
network issues and allows a straightforward approach to writing the log. The log
formats are the same for both plan and database in a general sense. The logs
consist of a header portion, which is the same for all records; an action ID; and a
section of data, which varies according to the action type. All data is kept in clear
text and formatted to be readable and editable from a text editor such as Vi or
Notepad.
Setting security
Security is accomplished with the use of a security file that contains one or more
user definitions. Each user definition identifies a set of users, the objects they are
permitted to access, and the types of actions they can perform.
A template file is installed with the product. The template must be edited to
create the user definitions, and compiled and installed with a utility program to
create a new operational security file. After it is installed, further modifications
can be made by creating an editable copy with another utility.
Each workstation in a Tivoli Workload Scheduler network has its own security
file. An individual file can be maintained on each workstation, or a single security
file can be created on the master domain manager and copied to each domain
manager, fault tolerant agent, and standard agent.
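The following is a minimal sketch of a user definition in a security file, together
with the utilities mentioned above (the user name, logons, and access lists are
illustrative examples only):

   USER OPERATORS
    CPU=@+LOGON=tws,root
   BEGIN
    JOB       CPU=@  ACCESS=DISPLAY,RERUN,RELEASE
    SCHEDULE  CPU=@  ACCESS=DISPLAY,RELEASE
    RESOURCE  CPU=@  ACCESS=DISPLAY
    PROMPT           ACCESS=DISPLAY,REPLY
   END

   dumpsec > mysec.txt    # create an editable copy of the operational security file
   makesec mysec.txt      # compile the edited definitions and install the new security file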
Ensure that the domain managers (including the master domain manager)
have full status and resolve dependencies turned on. This is important if you
need to resort to long-term recovery, where the backup master generates a
Symphony file (runs Jnextday). If those records are not enabled, the former
master domain manager shows up as a regular fault tolerant agent after the
first occurrence of Jnextday. During normal operations, the Jnextday job
automatically turns on the full status and resolve dependency flags for the
master domain manager, if they are not already turned on. When the new
master runs Jnextday, it does not recognize the former master domain
manager as a backup master unless those flags are enabled. The former
master does not have an accurate Symphony file when the time comes to
switch back. Treat all domain manager workstation definitions as if they
were backup domain manager definitions for the new domain managers. This
ensures true fault tolerance.
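The switch itself is typically made with conman's switchmgr command; for example
(the domain and workstation names here are placeholders):

   conman "switchmgr MASTERDM;BACKUPMDM"   # make BACKUPMDM the acting manager of the MASTERDM domain

The change normally remains in effect until the next new-day processing, which is
why the full status and resolve dependencies flags described above matter when the
backup master generates the next Symphony file.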
For the standby master domain manager, it may be necessary to transfer files
between the master domain manager and its standby. For this reason, the
computers must have compatible operating systems. Do not combine UNIX
with Windows NT computers, and in UNIX, do not combine big-endian with
little-endian computers.
Terminology note: Endian refers to which bytes are most significant in
multi-byte data types. In big-endian architectures, the left-most bytes
(those with a lower address) are most significant. In little-endian
architectures, the right-most bytes are most significant. Mainframes and
most RISC-based systems, including AIX, are big-endian. Intel-based
systems generally use little-endian format. PowerPC uses both.
Besides these major parts, the Tivoli Workload Scheduler for z/OS product also
contains the Tivoli Workload Scheduler for z/OS connector and the Job
Scheduling Console (JSC).
Tivoli Workload Scheduler for z/OS connector
Maps the Job Scheduling Console commands to the Tivoli Workload
Scheduler for z/OS engine. The Tivoli Workload Scheduler for z/OS
connector requires the Tivoli Management Framework, configured as a
Tivoli server or a Tivoli managed node.
Job Scheduling Console
A Java-based graphical user interface (GUI) for the Tivoli Workload
Scheduler suite.
The Job Scheduling Console runs on any machine from which you want to
manage Tivoli Workload Scheduler for z/OS engine plan and database
objects. It provides, through the Tivoli Workload Scheduler for z/OS
connector, functionality similar to the Tivoli Workload Scheduler for z/OS
legacy ISPF interface. You can use the Job Scheduling Console from any
machine as long as it has a TCP/IP link with the machine running the Tivoli
Workload Scheduler for z/OS connector.
The same Job Scheduling Console can be used for Tivoli Workload
Scheduler and Tivoli Workload Scheduler for z/OS.
In the next topics we will provide an overview of Tivoli Workload Scheduler for
z/OS configuration, the architecture, and the terminology used in Tivoli Workload
Scheduler for z/OS.
Figure 2-9 TWS for z/OS - Two sysplex environments and stand-alone systems
Remote systems
The agent on a remote z/OS system passes status information about the
production work in progress to the engine on the controlling system. All
communication between Tivoli Workload Scheduler for z/OS subsystems on the
controlling and remote systems is done via ACF/VTAM.
Tivoli Workload Scheduler for z/OS lets you link remote systems using
ACF/VTAM networks. Remote systems are frequently used locally on premises
to reduce the complexity of the data processing installation.
Figure 2-10 APPC server with remote panels and PIF access to TWS for z/OS
Figure: Job Scheduling Console access to the active TWS for z/OS engine through the TCP/IP server
The Tivoli Workload Scheduler for z/OS databases contain information about the
work that is to be run, when it should be run, and the resources that are needed
and available. This information is used to calculate a forward forecast called the
long-term plan.
Scheduling objects are elements used to define your Tivoli Workload Scheduler
for z/OS workload. Scheduling objects include job streams (jobs and
dependencies as part of job streams), workstations, calendars, periods, operator
instructions, resources, and JCL variables.
All these scheduling objects can be created, modified, or deleted by using the
legacy Tivoli Workload Scheduler for z/OS ISPF panels. Job streams,
workstations, and resources can be managed from the Job Scheduling Console
as well.
Job streams
Dependencies
Calendars
Resources
An example is a job that uses a dataset as input, but must not start
until the dataset is successfully created and loaded with valid
data. You can use resource serialization support to send
availability information about a data processing resource to
Tivoli Workload. To accomplish this Tivoli Workload Scheduler
for z/OS uses resources (also called special resources).
Resources are typically defined to represent physical or logical
objects used by jobs. A resource can be used to serialize
access to a dataset or to limit the number of file transfers on a
particular network link. The resource does not have to
represent a physical object in your configuration, although it
often does.
Tivoli Workload Scheduler for z/OS keeps a record of the state
of each resource and its current allocation status. You can
choose to hold resources in case a job allocating the resources
ends abnormally. You can also use the Tivoli Workload
Scheduler for z/OS interface with the Resource Object Data
Manager (RODM) to schedule jobs according to real resource
availability. You can subscribe to RODM updates in both local
and remote domains.
Tivoli Workload Scheduler for z/OS lets you subscribe to
dataset activity on z/OS systems. The dataset triggering
function of Tivoli Workload Scheduler for z/OS automatically
updates special resource availability when a dataset is closed.
You can use this notification to coordinate planned activities or
to add unplanned work to the schedule.
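For example, a job step can report that a special resource has become available
by running the SRSTAT TSO command in batch (a sketch; the dataset name and
subsystem name are illustrative, and the TWS for z/OS load library must be
available to the step):

   //SRSTAT   EXEC PGM=IKJEFT01
   //SYSTSPRT DD SYSOUT=*
   //SYSTSIN  DD *
     SRSTAT 'PROD.DAILY.INPUT' SUBSYS(MSTR) AVAIL(YES)
   /*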
Periods
JCL variables
Figure 2-13 Creation and extension of the long-term plan (typically about 90 days, extended one workday at a time) from the TWS for z/OS databases
This way the long-term plan always reflects changes made to job streams, run
cycles, and calendars, since these definitions are re-read by the program that
extends the long term plan. The long term plan extension program reads job
streams (run cycles), calendars, and periods and creates the high level long term
plan based on these objects.
Current plan
The current plan, or simply the plan, is the heart of Tivoli Workload Scheduler for
z/OS processing: it drives the production workload automatically and
provides a way to check its status. The current plan is produced by
batch jobs that extract from the long-term plan the occurrences that fall within a
specified period of time, adding the job details. In effect, the current plan
selects a window from the long-term plan and makes the jobs ready to be
run: they are actually started according to the applicable restrictions (for
example, dependencies, resource availability, or time dependencies).
Job streams and related objects are copied from the Tivoli Workload Scheduler
for z/OS databases to the current plan occurrences. Since the objects are copied
to the current plan dataset, any changes made to these objects in the plan will
not be reflected in the Tivoli Workload Scheduler for z/OS databases.
The current plan is a rolling plan that can cover several days. The extension of
the current plan is performed by a Tivoli Workload Scheduler for z/OS program.
This program is normally run on workdays as part of the daily Tivoli Workload
Scheduler for z/OS housekeeping job stream scheduled on workdays (see
Figure 2-14 on page 53).
Figure 2-14 Extension of the current plan: completed job streams are removed and detail for the next day is added from the long-term plan and the databases
Extending the current plan by one workday means that the plan can cover more
than one calendar day. If, for example, Saturday and Sunday are considered free
days (in the calendar used by the run cycle for the housekeeping job stream),
then when the current plan extension program is run on Friday afternoon, the
plan will extend to Monday afternoon. The current plan is a rolling plan that can cover
several days. A common method is to cover 12 days with regular extensions
each shift.
Production workload processing activities are listed by minute in the plan. You
can print the current plan as a report, or view, alter, and extend it online by
using the legacy ISPF dialogs.
Note: Changes made to a job stream's run cycle, for example, changing a job
stream from running on Mondays to running on Tuesdays, are not immediately
reflected in the long-term plan or the current plan. To have such changes
reflected, you must first run a modify all or an extend of the long-term plan and
then extend or replan the current plan. For this reason, it is good practice to run
the extend of the long-term plan by one working day (shown in Figure 2-13 on
page 52) before the extend of the current plan as part of normal Tivoli Workload
Scheduler for z/OS housekeeping.
Job tailoring
Tivoli Workload Scheduler for z/OS provides automatic job-tailoring functions,
which enable jobs to be automatically edited. This can reduce your dependency
on time-consuming and error-prone manual editing of jobs. Tivoli Workload
Scheduler for z/OS job tailoring provides:
Automatic variable substitution
Dynamic inclusion and exclusion of inline job statements
Dynamic inclusion of job statements from other libraries or from an exit
For jobs to be submitted on a z/OS system, these job statements will be z/OS
JCL.
Variables can be substituted in specific columns, and you can define verification
criteria to ensure that invalid strings are not substituted. Special directives
supporting the variety of date formats used by job stream programs let you
dynamically define the required format and change them multiple times for the
same job. Arithmetic expressions can be defined to let you calculate values such
as the current date plus four work days.
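As a sketch of what tailored JCL can look like, the fragment below uses the
supplied date variable OYMD1 and a temporary variable set with a work-day
arithmetic expression (the program name and the variable names other than OYMD1
are made up for this example):

   //*%OPC SCAN
   //*%OPC SETVAR TDUE=(OYMD1+4WD)
   //STEP1    EXEC PGM=DAILYRPT
   //SYSIN    DD *
     RUNDATE=&OYMD1.
     DUEDATE=&TDUE.
   /*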
Status inquiries
With the legacy ISPF dialogs or with the Job Scheduling Console, you can make
queries online and receive timely information on the status of the production
workload.
Time information that is displayed by the dialogs can be in the local time of the
dialog user. Using the dialogs, you can request detailed or summary information
on individual job streams, jobs, and workstations, as well as summary
information concerning workload production as a whole. You can also display
dependencies graphically as a network at both job stream and job level.
Status inquiries:
Provide you with overall status information that you can use when considering
a change in workstation capacity or when arranging an extra shift or overtime
work.
Help you supervise the work flow through the installation; for instance, by
displaying the status of work at each workstation.
Help you decide whether intervention is required to speed the processing of
specific job streams. You can find out which job streams are the most critical.
You can also check the status of any job stream, as well as the plans and
actual times for each job.
Enable you to check information before making modifications to the plan. For
example, you can check the status of a job stream and its dependencies
before deleting it or changing its input arrival time or deadline. See Modifying
the current plan on page 56 for more information.
Provide you with information on the status of processing at a particular
workstation. Perhaps work that should have arrived at the workstation has not
arrived. Status inquiries can help you locate the work and find out what has
happened to it.
Figure 2-15 Tivoli Workload Scheduler for z/OS automatic recovery and restart
through the panels. Tivoli Workload Scheduler for z/OS will reset the catalog to
the status that it was before the job ran for both generation dataset groups
(GDGs) and for DD allocated datasets contained in JCL. In addition, restart and
cleanup supports the use of Removable Media Manager in your environment.
Restart at both the step- and job-level is also provided in the Tivoli Workload
Scheduler for z/OS legacy ISPF panels and in the JSC. It manages resolution of
generation data group (GDG) names, JCL containing nested INCLUDE or
PROC statements, and IF-THEN-ELSE statements. Tivoli Workload Scheduler for z/OS
also automatically identifies problems that can prevent a successful restart and
suggests the best restart step.
You can browse the job log or request a step-level restart for any z/OS job or
started task even when there are no catalog modifications. The job-log browse
functions are also available for the workload on other operating platforms, which
is especially useful for environments that do not have a facility like the System
Display and Search Facility (SDSF).
These facilities are available to you without the need to make changes to your
current JCL. Tivoli Workload Scheduler for z/OS gives you an enterprise-wide
dataset cleanup capability on remote agent systems.
Security
Today, data processing operations increasingly require a high level of data
security, particularly as the scope of data processing operations expands and
more people within the enterprise become involved. Tivoli Workload Scheduler
for z/OS provides complete security and data integrity within the range of its
functions. It provides a shared central service to different user departments even
when the users are in different companies and countries. Tivoli Workload
Scheduler for z/OS provides a high level of security to protect scheduler data and
resources from unauthorized access. With Tivoli Workload Scheduler for z/OS,
you can easily organize, isolate, and protect user data to safeguard the integrity
of your end-user applications (see Figure 2-16 on page 62). Tivoli Workload
Scheduler for z/OS can plan and control the work of many user groups, and
maintain complete control of access to data and services.
Audit trail
With the audit trail, you can define how you want Tivoli Workload Scheduler for
z/OS to log accesses (both reads and updates) to scheduler resources. Because
it provides a history of changes to the databases, the audit trail can be extremely
useful for staff that works with debugging and problem determination.
A sample program is provided for reading audit-trail records. The program reads
the logs for a period that you specify and produces a report detailing changes
that have been made to scheduler resources.
Operator instructions
JCL
To support distributed, multi-user handling, Tivoli Workload Scheduler for z/OS
lets you control the level of security you want to implement, right down to the
level of individual records. You can define generic or specific RACF resource
names to extend the level of security checking.
If you have RACF Version 2 Release 1 installed, you can use the Tivoli Workload
Scheduler for z/OS reserved resource class (IBMOPC) to manage your Tivoli
Workload Scheduler for z/OS security environment. This means that you do not
have to define your own resource class by modifying RACF and restarting your
system.
The Tivoli Workload Scheduler domain manager acts as the broker for the
distributed network by resolving all dependencies for the subordinate managers
and agents. It sends its updates (in the form of events) to Tivoli Workload
Scheduler for z/OS so that it can update the plan accordingly. Tivoli Workload
Scheduler for z/OS handles its own jobs and notifies the domain manager of all
the status changes of the Tivoli Workload Scheduler for z/OS jobs that involve
the Tivoli Workload Scheduler plan. In this configuration, the domain manager
and all the distributed agents recognize Tivoli Workload Scheduler for z/OS as
the master domain manager and notify it of all the changes occurring in their own
plans. At the same time, the agents are not permitted to interfere with the Tivoli
Workload Scheduler for z/OS jobs, since they are viewed as running on the
master, which is the only node in charge of them.
With this version of Tivoli Workload Scheduler for z/OS, the fault tolerant agents
replace the Tivoli OPC tracker agents and make scheduling possible on the
distributed platform with more reliable, fault tolerant, and scalable agents.
In Figure 2-17 on page 65 you can see a Tivoli Workload Scheduler network
managed by a Tivoli Workload Scheduler for z/OS engine. This is accomplished
by connecting a Tivoli Workload Scheduler domain manager directly to the Tivoli
Workload Scheduler for z/OS engine. Actually if you compare Figure 2-2 on
page 19 with Figure 2-17 on page 65, you will see that the Tivoli Workload
Scheduler network that is connected to Tivoli Workload Scheduler for z/OS is a
Tivoli Workload Scheduler network managed by a Tivoli Workload Scheduler
master domain manager. When connecting this Tivoli Workload Scheduler
network to the Tivoli Workload Scheduler for z/OS engine, the former Tivoli
Workload Scheduler master domain manager is changed to domain manager for
DomainZ (Z was chosen because this domain manager is intermediary between
the Tivoli Workload Scheduler distributed network and the Tivoli Workload
Scheduler for z/OS engine). The new master domain manager is the Tivoli
Workload Scheduler for z/OS engine.
Figure 2-17 A Tivoli Workload Scheduler network managed by a Tivoli Workload Scheduler for z/OS engine through the DomainZ domain manager (DMZ)
Tivoli Workload Scheduler for z/OS also allows you to access job streams
(schedules in Tivoli Workload Scheduler) and add them to the current plan in
Tivoli Workload Scheduler for z/OS. In addition, you can build dependencies
among Tivoli Workload Scheduler for z/OS job streams and Tivoli Workload
Scheduler jobs. From Tivoli Workload Scheduler for z/OS, you can monitor and
control the distributed agents.
In the Tivoli Workload Scheduler for z/OS current plan, you can specify jobs to
run on workstations in the Tivoli Workload Scheduler network. The TWS for z/OS
engine passes the job information to the Symphony file in the TWS for z/OS
server, which in turn passes the Symphony file to the TWS domain manager
(DMZ) to distribute and process. In turn, TWS reports the status of running and
completed jobs back to the current plan for monitoring in the Tivoli Workload
Scheduler for z/OS engine.
In Tivoli Workload Scheduler for z/OS 8.1.0, it is only possible to connect one
focal point domain manager directly to the Tivoli Workload Scheduler for z/OS
server (this domain manager is also called the primary domain manager).
It is possible to designate a backup domain manager for the focal point Tivoli
Workload Scheduler domain manager.
Figure 2-18 The TWS for z/OS end-to-end server in USS: netman, writer, mailman, batchman, the input and output translators, and their message files and queues
mailman
batchman
Input translator
Receiver subtask
Output translator
EQQTWSOU
Symphony
HFS file containing the active copy of the plan used by the
distributed Tivoli Workload Scheduler agents. This file is
not shown in Figure 2-18 on page 67.
Sinfonia
NetReq.msg
Mailbox.msg
intercom.msg
tomaster.msg
EQQSCLIB
EQQSCPDS
Job streams and jobs to run on Tivoli Workload Scheduler distributed agents
are defined like other job streams and jobs in Tivoli Workload Scheduler for
z/OS. To run a job on a Tivoli Workload Scheduler distributed agent, the job is
simply defined on a fault tolerant workstation. Dependencies between Tivoli
Workload Scheduler distributed jobs are created exactly the same way as
other job dependencies in the Tivoli Workload Scheduler for z/OS engine.
This is also the case when creating dependencies between Tivoli Workload
Scheduler distributed jobs and Tivoli Workload Scheduler for z/OS mainframe
jobs.
Some of the Tivoli Workload Scheduler for z/OS mainframe specific options
will not be available for Tivoli Workload Scheduler distributed jobs.
Tivoli Workload Scheduler for z/OS resources
Only global resources are supported and can be used for Tivoli Workload
Scheduler distributed jobs. This means that the resource dependency is
resolved by the TWS for z/OS engine (controller) and not locally on the
distributed agent.
For a job running on a distributed agent, the usage of resources causes the
loss of fault tolerance. Only the engine determines the availability of a
resource and consequently lets the distributed agent start the job. Thus if a
job running on a distributed agent uses a resource, the following occurs:
When the resource is available, the engine sets the state of the job to
started and the extended status to waiting for submission.
It is not possible to use Tivoli Workload Scheduler for z/OS JCL variables or
automatic recovery statements in the task definition for distributed agent jobs,
because the task definition is placed in a separate library and does not
contain the actual script (JCL), only its location (the path).
The JOBREC definitions are read by the Tivoli Workload Scheduler for z/OS
plan programs when producing the new current plan and placed as part of the
job definition in the Symphony file.
If a TWS distributed job stream is added to the plan in TWS for z/OS, the
JOBREC definition will be read by TWS for z/OS, copied to the Symphony file
on the TWS for z/OS server, and sent (as events) by the server to the TWS
agent Symphony files via the directly connected TWS domain manager.
It is important to remember that the EQQSCLIB members only have a pointer
(the path) to the job that is going to be executed. The actual job (the JCL) is
placed locally on the distributed agent or workstation, in the directory defined
by the JOBREC JOBSCR definition.
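For illustration (the member name, path, and user are assumptions): an EQQSCLIB member UXJOB1 might contain only the pointer

JOBREC JOBSCR('/home/tws/scripts/uxjob1.sh') JOBUSR(tws)

while the script uxjob1.sh itself must exist at that path on the distributed agent.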
If the end-to-end feature is activated in Tivoli Workload Scheduler for z/OS,
the current plan program will read the topology definitions described in the
TOPOLOGY, DOMREC, CPUREC, and USRREC initialization statements (see
Section 2.3.3, Tivoli Workload Scheduler for z/OS end-to-end configuration on
page 70) and the script library (EQQSCLIB) as part of the planning process.
Information from the initialization statements and the script library will be used to
create a Symphony file for the Tivoli Workload Scheduler distributed agents (see
Figure 2-20).
The Tivoli Workload Scheduler for z/OS current plan program is normally run on
workdays in the engine as described in Section 2.2.3, Tivoli Workload Scheduler
for z/OS plans on page 50.
Figure 2-20 Creation of Symphony file in TWS for z/OS plan programs (the plan extension and replan programs read the job streams, workstations, and resources databases, remove completed job streams, add detail for the next day, and use the topology definitions and the script library to build the new Symphony file)
Note that creating the plan, extracting plan objects related to the distributed
agents, and building the related Symphony file do not involve Jnextday or any of
the Jnextday processes described in Section 2.1.6, Tivoli Workload Scheduler
plan on page 29. The process is handled by the Tivoli Workload Scheduler for
z/OS planning programs.
The Symphony file is created and switched as follows:
1. The Tivoli Workload Scheduler for z/OS Normal Mode Manager (NMM) sends
an event to the output translator to stop the Tivoli Workload Scheduler network.
In the meantime, the plan program has started producing the Tivoli Workload
Scheduler for z/OS plan and the Symphony file with workstation information.
2. The output translator stops the Tivoli Workload Scheduler for z/OS server in
USS and stops processing incoming events. An End Sync event is added to
the inbound queue. The output translator starts to stop all the Tivoli Workload
Scheduler agents.
3. The Event Manager (EM) processes all the events on the inbound queue
until the End Sync event is found, and then notifies the NMM that the Tivoli
Workload Scheduler network has been stopped.
4. When the plan program has produced the new plan, the NMM waits until the
EM has finished processing events. The NMM then applies the job tracking
events received while the new plan was being produced. It then takes a
backup of the new current plan to the Tivoli Workload Scheduler for z/OS
current plan datasets (CP1, CP2) and the Symphony Current Plan (SCP)
dataset. The NMM sends a CP Ready Sync event to the output translator to
separate events from the old plan and events from the new plan.
5. Tivoli Workload Scheduler for z/OS mainframe scheduling is resumed.
6. The plan program produces the Symphony file, starting from the SCP.
7. When the Symphony file has been created, the plan program ends, and the
NMM notifies the output translator that the new Symphony file is ready.
8. The output translator copies the new Symphony (Symnew) file into the
Symphony and Sinfonia files, and a Symphony OK (or NOT OK) Sync event is
sent to the Tivoli Workload Scheduler for z/OS engine, which logs a message
in the engine message log indicating that the Symphony file has been
switched (or not).
9. The Tivoli Workload Scheduler for z/OS server master is started in USS, and
the input translator starts to process new events. As in Tivoli Workload
Scheduler distributed, mailman and batchman process the events left in the
local event files and start distributing the new Symphony file to the whole
Tivoli Workload Scheduler network.
When the Symphony file is created by the Tivoli Workload Scheduler for z/OS
plan programs, it (or more precisely the Sinfonia file) will be distributed to the
Tivoli Workload Scheduler for z/OS subordinate domain manager, which in turn
distributes the Symphony (Sinfonia) file to its subordinate domain managers and
fault tolerant agents (see Figure 2-21 on page 76).
Figure 2-21 Symphony file distribution from TWS for z/OS server to TWS agents (the z/OS master domain manager in MASTERDM sends the plan to domain manager DMZ in DomainZ on AIX, which distributes it to domain managers DMA and DMB in DomainA and DomainB and on to fault tolerant agents FTA1-FTA4 on AIX, OS/400, Windows 2000, and Solaris)
After the Symphony file is created and distributed to the Tivoli Workload
Scheduler distributed agents, the Symphony file is updated by events:
When job status changes
When jobs or job streams are modified
When jobs or job streams for the Tivoli Workload Scheduler distributed
agents are added to the plan in the Tivoli Workload Scheduler for z/OS
engine.
If you look at the Symphony file locally on a Tivoli Workload Scheduler distributed
agent, from the Job Scheduling Console, or using the Tivoli Workload Scheduler
command line interface to the plan (conman), you will see that:
The Tivoli Workload Scheduler workstation has the same name as the related
fault tolerant workstation defined in Tivoli Workload Scheduler for z/OS.
OPCMASTER is the hard-coded name for the master domain manager
workstation, that is, for the Tivoli Workload Scheduler for z/OS engine.
The name of the job stream (or schedule) is the hexadecimal representation
of the occurrence (job stream instance) token (internal unique and invariant
identifier for occurrences). The job streams are always defined on the
OPCMASTER workstation (having no dependencies, this does not reduce
fault tolerance).
Using this hexadecimal representation for the job stream instances makes it
possible to have several instances for the same job stream, since they have
unique job stream names. Therefore it is possible to have a plan in the Tivoli
Workload Scheduler for z/OS engine and a distributed Symphony file that
spans more than 24 hours.
The job name has the form: <T>_<opnum>_<applname> or
<T>_<opnum>_<reuse>_<applname> where:
<T> is J for normal jobs or P for jobs that are representing pending
predecessors.
<opnum> is the operation number for the job in the job stream.
<reuse> is incremented when the same operation is recreated; if 0 it is
omitted.
<applname> is the occurrence (job stream) name.
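For illustration (the names are assumed): operation number 010 of an occurrence of job stream PAYDAILY would appear as J_010_PAYDAILY; if that operation is deleted and recreated once, it appears as J_010_1_PAYDAILY; a pending predecessor would use the prefix P instead of J.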
In normal situations the Symphony file is automatically generated as part of the
Tivoli Workload Scheduler for z/OS plan process. Since the topology definitions
are read and built into the Symphony file by the Tivoli Workload Scheduler
for z/OS plan programs, situations can occur in regular operation where you
need to renew (or rebuild) the Symphony file from the Tivoli Workload Scheduler
for z/OS plan:
When you make changes to the script library or to the definitions of the
TOPOLOGY statement
When you add or change information in the plan, such as workstation
definitions
To have the Symphony file rebuilt or renewed, you can use the Symphony
Renew option of the Daily Planning menu (option 3.5 in the legacy TWS for z/OS
ISPF panels).
This renew function can also be used to recover from error situations like:
There is a non-valid job definition in the script library.
The workstation definitions are incorrect.
An incorrect Windows NT user name or password is specified.
Figure 2-22 Fail-safe configuration with standby engine and TWS backup DM (the z/OS sysplex hosts the active engine and server plus two standby engines; DomainZ contains the domain manager DMZ and a backup domain manager (FTA) on AIX; DomainA and DomainB contain their domain managers and fault tolerant agents on AIX, HPUX, OS/400, Windows 2000, and Solaris)
If the primary domain manager fails, it will be possible to switch to the backup
domain manager. Since the backup domain manager has an updated Symphony
file and knows the subordinate domain managers and fault tolerant agents, the
backup domain manager will be able to take over the responsibilities of the
domain manager. This switch can be performed without any outages in the
workload management.
If the switch to the backup domain manager is going to be active across the Tivoli
Workload Scheduler for z/OS plan extension, you must change the topology
definitions in the Tivoli Workload Scheduler for z/OS DOMREC initialization
statements. The backup domain manager fault tolerant workstation is going to be
the focal point domain manager for the Tivoli Workload Scheduler distributed
network.
Example 2-1 shows how to change the name of the fault tolerant workstation in
the DOMREC initialization statement, if the switch to the backup domain
manager should be effective across Tivoli Workload Scheduler for z/OS plan
extension (See Section 4.6.5, Switch to TWS backup domain manager on
page 236 for more information).
Example 2-1 DOMREC initialization statement
DOMREC DOMAIN(DOMAINZ) DOMMNGR(FDMZ) DOMPARENT(MASTERDM)
should be changed to:
DOMREC DOMAIN(DOMAINZ) DOMMNGR(FDMB) DOMPARENT(MASTERDM)
where FDMB is the name of the fault tolerant workstation where the backup
domain manager is running.
If the Tivoli Workload Scheduler for z/OS engine or server fails, it will be possible
to let one of the standby engines in the same sysplex take over. This takeover
can be accomplished without any outages in the workload management.
The Tivoli Workload Scheduler for z/OS server must follow the Tivoli Workload
Scheduler for z/OS engine. That is, if the Tivoli Workload Scheduler for z/OS
engine is moved to another system in the sysplex, the Tivoli Workload Scheduler
for z/OS server must be moved to the same system in the sysplex.
Extended plans also mean that the current plan can span more than 24 hours.
Powerful run-cycle and calendar functions.
Considerations
Implementing Tivoli Workload Scheduler for z/OS end-to-end scheduling also
implies some limitations:
Usage of JCL variables is not possible in end-to-end scripts (we show a
workaround in Section 4.4.6, TWS for z/OS JCL variables in connection
with TWS parameters on page 211).
It is not possible to define Tivoli Workload Scheduler for z/OS auto-recovery
for jobs on Tivoli Workload Scheduler for z/OS fault tolerant workstations.
Windows users' passwords are defined directly (without any encryption) in the
Tivoli Workload Scheduler for z/OS server initialization parameters.
Some Tivoli Workload Scheduler functions are not available on Tivoli
Workload Scheduler distributed agents. For example:
The Job Scheduling Services and the connectors must be installed on top of the
Tivoli Management Framework (TMF). Together, the Tivoli Management
Framework, the Job Scheduling Services, and the connector provide the
interface between JSC and the scheduling engine.
2.4.3 Connectors
Connectors are the components that allow the Job Scheduling Services to talk
with different types of scheduling engines. When working with a particular type of
scheduling engine, the Job Scheduling Console communicates with the
scheduling engine via the Job Scheduling Services and the connector. A different
connector is required for each type of scheduling engine. A connector can only
be installed on a computer where the Tivoli Management Framework and Job
Scheduling Services have already been installed.
There are two types of connector for connecting to the two types of scheduling
engines in the Tivoli Workload Scheduler 8.1 suite:
Tivoli Workload Scheduler for z/OS connector
Tivoli Workload Scheduler connector
The JSS communicates with the engine via the connector of the appropriate
type. When working with a TWS for z/OS engine, the JSC communicates via the
TWS for z/OS connector. When working with a TWS engine, the JSC
communicates via the TWS connector.
The two types of connectors function somewhat differently: The TWS for z/OS
Connector communicates over TCP/IP with the TWS for z/OS engine running on
a mainframe (MVS or z/OS) computer. The TWS connector performs direct
reads and writes of the TWS plan and database files on the same computer
where the TWS connector runs.
A connector instance must be created before the connector can be used. Each
type of connector can have multiple instances. A separate instance is required
for each engine that will be controlled by JSC.
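As a sketch (the instance name, path, and exact utility options are assumptions to be verified against your installation): on the machine hosting a TWS engine, a TWS connector instance is typically created with the wtwsconn.sh utility, for example:

wtwsconn.sh -create -n ENG-A -t /opt/tws/engineA/tws

where -n names the new connector instance and -t points to the home directory of the TWS engine it will control.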
We will now discuss each type of connector in more detail.
Connector instances
We will now give a couple of examples of how connector instances might be
installed in the real world.
Figure 2-23 One TWS for z/OS connector and one TWS connector instance (Job Scheduling Console clients on a desktop computer, a laptop computer, and a UNIX workstation connect over TCP/IP to the connector instances; the TWS for z/OS connector instance communicates over TCP/IP with the Engine 1 server of the master domain manager in the z/OS sysplex, while the TWS connector instance performs local read/write of the plan of the TWS domain manager instance on the AIX server)
The names of the connector instances are arbitrary. We chose the names we did
because the names convey some information about the scheduling engine with
which the connector instances are associated.
Tip: TWS connector instances must be created on the server where the TWS
engine is installed. This is because the TWS connector needs local access to
the TWS engine (specifically, to the plan and database files). This limitation
obviously does not apply to TWS for z/OS connector instances, because the
TWS for z/OS connector communicates with the remote TWS for z/OS
engine over TCP/IP.
[Figure: four connector instances on a TMR server or managed node running the Tivoli Management Framework with JSS installed. ENG-1 and ENG-2 (TWS for z/OS connector instances) communicate over TCP/IP with the Engine 1 and Engine 2 servers in two z/OS sysplexes, while ENG-A and ENG-B (TWS connector instances) perform local read/write of the plan and databases of TWS Engine A (a master domain manager with databases and plan) and TWS Engine B (a domain manager with plan only) installed on the same AIX server; Job Scheduling Console clients connect over TCP/IP via oserv.]
In this example, the ENG-A connector instance reads from and writes to the
databases and the plan of the Engine A TWS engine. The ENG-B connector
instance reads from and writes to the plan of the Engine B TWS engine; there are
no databases associated with Engine B because it is a domain manager, not a
master.
Each TWS engine is installed as a distinct instance on the same AIX server. For
details about installing multiple TWS instances on the same computer, see
Section 3.5.1, Installing multiple instances of TWS on one machine on
page 142.
The two Tivoli Workload Scheduler for z/OS engines could instead be running on
two different nodes in the same sysplex. In many environments, there will be only
one Tivoli Workload Scheduler for z/OS engine. In these environments, only one
TWS for z/OS connector instance is needed.
Note: One connector instance must be created for each TWS for z/OS engine
or TWS engine that is to be accessed from the Job Scheduling Console.
It is also possible to have more than one TWS for z/OS connector instance
associated with the same TWS for z/OS engine, but this is not particularly useful.
You might wonder why one would need a separate TWS master domain
manager (Engine A) that is not part of the end-to-end environment (not under the
control of the TWS for z/OS master domain manager). You might also wonder
why one would need a TWS for z/OS engine (Engine 1) that is independent of
the end-to-end environment. In some situations, it might be best to have multiple
TWS networks, each with its own master. The divisions between TWS networks
might be based on organizational or functional divisions. For example, the
Accounting Department might want to use a TWS for z/OS engine as its master,
while the Engineering Department wants to use a TWS master. Or the
organization might choose to test the end-to-end scheduling environment
thoroughly before putting it into production. During the testing phase, it would be
necessary to have both networks up and running at the same time. You could
imagine that in the above example, Engine 2 and Engine B are the new
end-to-end scheduling networks, while Engine 1 and Engine A are parts of
independent TWS for z/OS and TWS networks.
There are many good reasons to have multiple separate scheduling networks.
Tivoli Workload Scheduler gives you a great deal of flexibility in this regard.
opc_connector
The main connector program, which contains the implementation of the main
connector methods (basically, all the methods that are required to connect to
the TWS for z/OS engine and retrieve data from TWS for z/OS). It is
implemented as a threaded daemon: it is automatically started by the Tivoli
Framework at the first request it must handle, and it stays active until no
request has arrived for a long time. Once started, it starts new threads for all
the JSC requests that require data from a specific TWS for z/OS engine.
opc_connector2
maestro_plan
The maestro_plan program reads from and writes to the TWS plan. It also
handles switching to a different plan. The program is started when a user
accesses the plan. It terminates after 30 minutes of inactivity.
maestro_database
The maestro_database program reads from and writes to the TWS database
files. It is started when a JSC user accesses a database object or creates a
new object definition. It terminates after 30 minutes of inactivity.
job_instance_output
Chapter 3.
In this chapter, we assume that no scheduling products exist in our example
environment, so we provide general guidelines for planning, installing, and
customizing from scratch. You might already have an existing TWS for z/OS,
TWS for z/OS tracker agents, or TWS in your environment. We cover these
different types of environments with detailed scenarios in Chapter 4, End-to-end
implementation scenarios and examples on page 169.
Tivoli Workload Scheduler for z/OS administrators must be familiar with the
domain architecture and the meaning of fault tolerant in order to understand
that the script is no longer located in the job repository database. This is
essential when it comes to reflecting the topology in Tivoli Workload Scheduler
for z/OS.
On the other hand, people who are in charge of Tivoli Workload Scheduler
need to know the Tivoli Workload Scheduler for z/OS architecture to
understand the new planning mechanism and Symphony file creation.
In conclusion, we recommend that all involved people (mainframe and
distributed scheduling) become familiar with both scheduling environments,
which are described throughout the book.
Important: If you plan to migrate the Tivoli Workload Scheduler for z/OS
agent to 8.1, but stay on a pre-8.1 release of the engine, you need to install
the tolerance APAR PQ52935.
Table 3-1 gives the PSP information for Tivoli Workload Scheduler for z/OS.

Table 3-1 PSP upgrade and subset ID

Upgrade     Subset    Description
TWSZOS810   HWSZ100   Agent
            JWSZ102   Engine
            JWSZ1A4
            JWSZ101   TCP/IP communication
            JWSZ103   End-to-end enabler
            JWSZ1C0   Agent enabler
The components of the Tivoli Workload Scheduler 8.1 suite are:
z/OS scheduler (OPC), which includes the agent (z/OS tracker)
Tracker agent enabler
Extended agent for MVS and OS/390
End-to-end enabler
TWS distributed (Maestro)
Job Scheduling Console
The end-to-end enabler (FMID JWSZ103) is used to populate the base binary
directory in an HFS during System Modification Program/Extended (SMP/E)
installation. These files can be shared by different z/OS engines.
Important: If you want to use the end-to-end scheduling solution, you need to
order the Tivoli Workload Scheduler 8.1 suite, because it contains all the
necessary end-to-end components.
If you install Tivoli Workload Scheduler for z/OS into the same SMP/E zone
where a prior Tivoli Workload Scheduler for z/OS release is already installed,
it will remove any tracker agent workstation code, and any further downloads
or maintenance activities will then be impossible. So either choose a different
SMP/E zone or take a backup of your SEQQEENU dataset.
If you plan to migrate from tracker agents to fault tolerant agents, you must be
aware of the following restriction in the current release, Tivoli Workload Scheduler
8.1: you cannot use automatic recovery and variable substitution when running
scripts on an FTA. This restriction is planned to be removed in the next release.
In Section 4.4.6, TWS for z/OS JCL variables in connection with TWS
parameters on page 211, we show you a way to use TWS for z/OS JCL
variables in connection with TWS parameters, to circumvent the variable
substitution restriction.
3.1.6 EQQPDF
Member EQQPDF of dataset SEQQMISC contains the latest Tivoli Workload
Scheduler for z/OS documentation corrections in PDF format. You need to
download the member (in binary) and save it with the extension .pdf. Then you
can use Adobe Acrobat Reader to view it. EQQPDF will be updated regularly via program
temporary fixes, so make sure to always have the latest documentation
available.
[Figure 3-1: our environment; a z/OS sysplex with the active engine on wtsc63.itso.ibm.com and standby engines on wtsc64.itso.ibm.com and wtsc65.itso.ibm.com; the first-level domain manager on eastham.itso.ibm.com (AIX); subordinate domain managers on AIX, Solaris, and HPUX; and fault tolerant agents on AIX, Solaris, OS/400, Windows NT, Windows 2000, and HPUX]
Whenever the end-to-end server starts it looks into the topology member to find
its hostname and port number. If you have not defined any hostname, the system
takes one from the TCP/IP profile. The port and hostname will also be inserted into
the Symphony file and distributed to the domain manager when either a current
plan batch job or a Symphony renew is initiated. The domain manager in turn
uses this information to link back to the server.
If the z/OS engine fails on wtsc63.itso.ibm.com (see Figure 3-1), the standby
engine on either wtsc64 or wtsc65 can take over all the engine functions. The
engine that takes over depends on how the standby engines are configured.
Since the domain manager on eastham knows wtsc63 as its master domain
manager, the link would fail no matter which system (wtsc64 or wtsc65) the
engine takes over on. One solution to this problem is to send a new
Symphony file (redistribute the Symphony file) from the engine that has taken
over to the primary domain manager. Redistributing the Symphony file on the
new engine will recreate the Symphony file and add the new z/OS hostname
to it. The domain manager can use this information to reconnect to the engine
(or, more precisely, the server) on the new z/OS system.
This change takes effect dynamically (the next time the domain manager tries to
reconnect to the server). The change must be carried out by editing a local file on
the domain manager.
Tivoli Workload Scheduler APAR PQ55837 (not yet closed at the time of writing)
implements the ability to use this variable by adding a new DD statement to the
end-to-end server started task procedure. This statement points to a
fixed-record-length 80 sequential dataset or PDS member that contains the
environment variables to be initialized before the end-to-end processes are
started. For example:

//STDENV   DD DISP=SHR,DSN=MY.FILE.PARM(STDENV)
This member can be used to set the stack affinity using the following
environment variable.
_BPXK_SETIBMOPT_TRANSPORT=xxxxx
Where xxxxx indicates the TCP/IP stack the server should bind to.
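For illustration (the stack name TCPIPA is an assumption), the STDENV member would then contain the single line:

_BPXK_SETIBMOPT_TRANSPORT=TCPIPA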
One disadvantage of stack affinity occurs when the stack or sysplex member
must be shut down. If this happens, you must intervene manually.
For more information see the z/OS V1R2 Communications Server: IP
Configuration Guide, SC31-8775.
You can find more details in Section 1.3.2 of the z/OS V1R2 Communications
Server: IP Configuration Guide, SC31-8775.
In addition, the redbook TCP/IP in a Sysplex, SG24-5235, provides useful
installation information.
The installation tasks for the z/OS engine, the z/OS agent, and the end-to-end
server are:
Execute the EQQJOBS installation aid. See Section 3.2.1, Executing EQQJOBS
installation aid on page 102.
Define subsystems in SYS1.PARMLIB. See Section 3.2.2, Defining Tivoli
Workload Scheduler for z/OS subsystems on page 106.
Allocate end-to-end datasets (EQQPCS06). See Section 3.2.3, Allocate
end-to-end datasets on page 107.
Create and customize the work directory (EQQPCS05). See Section 3.2.4,
Create and customize the work directory on page 109.
Create JCL procedures for Tivoli Workload Scheduler for z/OS. See
Section 3.2.5, Create the started task procedures for TWS for z/OS on
page 109.
Define end-to-end initialization statements. See Section 3.2.6, Define
end-to-end initialization statements on page 110.
Verify your installation. See Section 3.2.9, Verify the installation on page 130.
[EQQJOBS dialog, end-to-end options: END-TO-END FEATURE ===> Y (Y = Yes, N = No); installation (binary) directory ===> /usr/lpp/TWS/TWS810; work directory ===> /tws/twsctpwrk; user for OPC address space ===> TWSRES1; Refresh CP group ===> TWS810]
Starting from OS/390 2.9, you can use the shared HFS and see the same
directories from different systems. Some directories (usually /var, /dev, /etc,
and /tmp) are system specific. This means that those paths are logical links
pointing to different directories. When you specify the work directory, make
sure that it is not on a system-specific filesystem. Or, should this be the case,
make sure that the same directories on the filesystem of the other systems are
pointing to the same directory. For example, you can use /u/TWS; that is not
system-specific. Or you can use /var/TWS on system SYS1 and create a
symbolic link /SYS2/var/TWS to /SYS1/var/TWS so that /var/TWS will point to
the same directory on both SYS1 and SYS2.
If you are using OS/390 Version 2.6, Version 2.7, or Version 2.8, you need to
create an HFS dataset. Then you should mount it in Read/Write mode on
/var/TWS, and use it for the work directories. When you start the server on
another system, unmount the filesystem from the first system and mount it on
the new one. The filesystem can be mounted in R/W mode only on one
system at a time.
We recommend that you create a separate HFS cluster for the work directory,
mounted in Read/Write mode. This is because the work directory is application
specific and contains application-related data, and a separate cluster also makes
your backups easier. The size of the cluster depends on the size of the
Symphony file and how long you want to keep the stdlist files. We recommend
allocating 2 GB of space.
Refresh CP group
This information is used to create the procedure to build the HFS directory
with the right ownership. In order to create the new Symphony file, the
user ID used to run the daily planning batch job must belong to the group
that you specify in this field. Also make sure that the user ID associated
with the server address space (the one specified in the User for OPC
address space field) belongs to this group or has this group as a
supplementary group.
As you can see in Figure 4-3 on page 133, we defined RACF user ID
TWSRES1 to the End to end server started task. User TWSRES1 belongs
to RACF group TWS810. Therefore all users of the RACF group TWS810
and its supplementary group get access to create the Symphony file.
Tip: The Refresh CP group field can be used to give access to the HFS file as
well as to protect the HFS directory from unauthorized access.
3. Press Enter to generate the installation job control language (JCL). Figure 3-4
on page 106 shows a sample of the JCL created with the end-to-end
modifications.
Table 3-4 End-to-end example JCL

Member     Description
EQQCON     Sample started task procedure for the engine (controller) and agent (tracker) in the same address space
EQQCONO    Sample started task procedure for the engine (controller) only
EQQCONP    Sample initialization parameters for the engine and agent in the same address space
EQQCONOP   Sample initialization parameters for the engine only
EQQPCS05   Creates and customizes the work directory
EQQPCS06   Allocates the end-to-end datasets
EQQSER     Sample started task procedure for the server
EQQSERV
5. Specify Y for the end-to-end FEATURE, if you want to inter-operate with Tivoli
Workload Scheduler fault tolerant workstations.
6. Press Enter and the new skeleton member will be created (see Table 3-5).
Table 3-5 End-to-end skeletons
Member
Description
EQQSYRES
Note: An SD37 abend code is produced when Tivoli Workload Scheduler for
z/OS formats a newly allocated dataset. Ignore this error.
EQQSCPDS
EQQSCPDS is the current plan backup copy dataset to create the Symphony
file. During the creation of the current plan, the SCP dataset is used as a CP
backup copy for the production of the Symphony file. The primary and alternate
CP datasets (CP1 and CP2) are used in a flip-flop manner; that is, Tivoli
Workload Scheduler for z/OS copies the active CP to the inactive dataset and
then uses this newly copied dataset as the active CP. The active dataset is called
the CP logical file. Updates to the CX file are made in the data space. During the
current plan backup process, the data space is refreshed to DASD.
3.2.5 Create the started task procedures for TWS for z/OS
Perform this task for a z/OS agent, engine, and server. You must define a started
task procedure or batch job for each Tivoli Workload Scheduler for z/OS address
space. The EQQJOBS dialog generates several members in the output library
that you specified. These members contain started task JCL that is tailored with
the values you entered in the dialog. Tailor these members further, according to
the datasets you require. See also Figure 3-4 on page 105. The Tivoli Workload
Scheduler for z/OS server with TCP/IP support requires access to the C
language runtime library (either as STEPLIB or as LINKLIST). If you have
multiple TCP/IP stacks or a TCP/IP started task with a name different than
TCPIP, then you need to have a SYSTCPD DD card pointing to a TCP/IP dataset
containing the TCPIPJOBNAME parameter.
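For illustration (the dataset and member names are assumptions), such a DD card might look like this:

//SYSTCPD  DD DISP=SHR,DSN=SYS1.TCPPARMS(TCPDATA)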
You must have a server dedicated for end-to-end scheduling. You can use the
same server also to communicate with the Tivoli Job Scheduling Console. The
z/OS engine uses the server to communicate events to the distributed agents.
The server will start multiple tasks and processes using the UNIX System
Services.
Recommendations: The z/OS engine and server use TCP/IP services.
Therefore it is necessary to define a USS segment. There is no special
authorization necessary. It is only required to be defined in USS with any UID.
We recommend dividing the JSC and end-to-end functions between two
separate server started tasks. This has the advantage that you do not need to
stop the whole set of server processes if only the JSC server needs to be shut
down.
The server started task (STC) is important for handling JSC and end-to-end
communication. We recommend setting the server STC as non-swappable
and giving it the same dispatching priority as the z/OS engine.
The end-to-end server needs the INIT CALENDAR statement to start
successfully.
The initialization statements and keywords related to the end-to-end feature are
TPLGYSRV (in OPCOPTS), TPLGYPRM (in SERVOPTS and BATCHOPT), and
the TOPOLOGY, DOMREC, CPUREC, and USRREC statements; each is
described in the following sections.
You can find more information in Tivoli Workload Scheduler for z/OS V8R1
Customization and Tuning, SH19-4544.
In order to be able to start end-to-end integration, you must set the parameters
for the engine (controller) as follows: Customize the PARMLIB member (DD
name is EQQPARM) for the Tivoli Workload Scheduler for z/OS engine
(controller) started task procedure: Customize the TPLGYSRV parameter in the
OPCOPTS statement by providing the name of the server started task used as
the end-to-end server. This will activate the end-to-end sub-tasks in the Tivoli
Workload Scheduler for z/OS engine.
For the Tivoli Workload Scheduler for z/OS server and the Tivoli Workload
Scheduler for z/OS plan programs to recognize the Tivoli Workload Scheduler
network you must customize the PARMLIB members (DD name is EQQPARM)
for the Tivoli Workload server and plan programs as follows:
1. Create a member containing a description of the Tivoli Workload Scheduler
network topology with domains and fault tolerant workstations. The topology
is described using the DOMREC and CPUREC initialization statements.
2. If you have any fault tolerant workstations on Windows machines and you
want to run jobs on these workstations with the authority of another user, you
must create a member containing the user IDs and passwords for the
Windows users. The Windows users are described using USRREC
initialization statements.
3. Create a member containing end-to-end integration information, using the
TOPOLOGY initialization statement. The TOPOLOGY initialization statement
is used to define parameters related to the Tivoli Workload Scheduler topology,
such as the port number for netman, the installation path in USS, and the path
to the server work directory in USS.
The TPLGYMEM parameter specifies the name of the member with the
DOMREC and CPUREC initialization statements (item 1 above). The
USRMEM parameter specifies the name of the member with the USRREC
initialization statements (item 2 above).
4. Customize the TPLGYPRM parameter in the SERVOPTS and BATCHOPT
statements.
The TPLGYPRM parameter specifies the name of the member with the
TOPOLOGY definition (item 3 above).
Figure 3-5 on page 112 illustrates the relationship between the new initialization
statements and members.
Figure 3-5 The new initialization statements and members:

OPCOPTS   TPLGYSRV(...)                  (engine)
SERVOPTS  TPLGYPRM(TPLGPARM)             (server)
BATCHOPT  TPLGYPRM(TPLGPARM)             (plan programs)

EQQPARM(TPLGPARM):
  TOPOLOGY BINDIR(...)
           WRKDIR(...)
           PORTNUMBER(31111)
           TPLGYMEM(TPLGINFO)
           USRMEM(USRINFO)

EQQPARM(TPLGINFO):
  DOMREC ...
  DOMREC ...
  CPUREC ...
  CPUREC ...

EQQPARM(USRINFO):
  USRREC USRCPU(...)
         USRNAM(...)
         USRPSW(...)
TPLGYSRV(server_name)
Specify this keyword if you want to activate the end-to-end feature in the z/OS
engine. If you specify this keyword the Tivoli Workload Scheduler Enabler task is
started. The specified server name is that of the server that handles the events to
and from the distributed agents. Please note that only one server can handle
events to and from the distributed agents. This keyword is defined in OPCOPTS.
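For illustration, with an end-to-end server started task named TWSE2E (the name is an assumption), the engine's OPCOPTS would include:

OPCOPTS TPLGYSRV(TWSE2E)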
Tip: If you want to let the z/OS engine start and stop the end-to-end server,
use the server keyword in the OPCOPTS parmlib member.
TPLGYPRM(member name/TPLGPARM)
Specify this keyword if you want to activate the end-to-end feature in the server.
The specified member name is a member of the PARMLIB in which the
end-to-end options are defined by the TOPOLOGY statement. TPLGYPRM is
defined in the SERVOPTS statement of the server started task and in the
BATCHOPT statement.
TOPOLOGY
This statement includes all the parameters related to the end-to-end feature.
TOPOLOGY is defined in the member of the EQQPARM library as specified by
the TPLGYPRM parameter in the BATCHOPT and SERVOPTS statements.
Figure 3-6 shows you the syntax of the topology member.
Topology parameters
The following sections describe the topology parameters.
BINDIR(directory name)
Specifies the base hierarchical file system (HFS) directory where the binaries,
catalogs and the other files are installed on the HFS and shared among
subsystems.
LOGLINES(number of lines/100)
Specifies the maximum number of lines that the job log retriever returns for a
single job log. The default value is 100. In all cases, the job log retriever does not
return more than half the number of records existing in the input queue.
PORTNUMBER(port/31111)
Defines the TCP/IP port number used to communicate with the distributed
agents. The default value is 31111. The accepted values are from 0 to 65535.
Note that the port number must be unique within a Tivoli Workload Scheduler
network. If you change the value, you also need to restart the Tivoli Workload
Scheduler for z/OS server and refresh the Symphony file.
TRCDAYS(days/14)
Specifies the number of days the trace files on HFS are kept before being
deleted. The default value is 14. Specify 0 if you do not want the trace files to be
deleted.
USRMEM(member name/USRINFO)
Specifies the PARMLIB member where the user definitions are. This keyword is
optional; it applies only for Windows FTAs. The default value is USRINFO.
WRKDIR(directory name)
Specifies the location of the HFS files of a subsystem. Each Tivoli Workload
Scheduler for z/OS subsystem using the end-to-end feature must have its own
WRKDIR.
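Putting these parameters together, a minimal TOPOLOGY member might look like the following sketch (the BINDIR and WRKDIR values are the ones used in our examples; TRCDAYS and LOGLINES are shown at their defaults):

TOPOLOGY BINDIR('/usr/lpp/TWS/TWS810')
         WRKDIR('/tws/twsctpwrk')
         PORTNUMBER(31111)
         TRCDAYS(14)
         LOGLINES(100)
         TPLGYMEM(TPLGINFO)
         USRMEM(USRINFO)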
Figure 3-7 The topology definitions for server and plan programs (workstations FDMZ, FDMA, and FDMB are the domain managers DMZ in DomainZ, DMA in DomainA, and DMB in DomainB; fault tolerant agents FTA1-FTA4 run on AIX, OS/400, Windows 2000, and Solaris)
DOMREC
This statement starts a domain definition. You must specify one DOMREC for
each domain in the Tivoli Workload Scheduler network, with the exception of the
master domain. The domain name used for the master domain is MASTERDM.
The master domain is made up of the z/OS engine, which acts as master domain
manager. The CPU name used for the master domain manager (that is the Tivoli
Workload Scheduler for z/OS engine) in the Tivoli Workload Scheduler network is
OPCMASTER. You must specify one domain, child of MASTERDM, where the
domain manager is a real fault tolerant workstation. If you do not define this
domain, Tivoli Workload Scheduler for z/OS will try to find a domain definition
that can function as a child of master domain. DOMREC is defined in the
member of the EQQPARM library, as specified by the TPLGYMEM keyword in
the TOPOLOGY statement. Figure 3-8 describes the DOMREC syntax.
DOMAIN(domain name)
The name of the domain, consisting of up to 16 characters starting with a letter. It
can contain dashes and underscores.
DOMMNGR(domain manager)
The name of the fault tolerant workstation that acts as the domain manager for
the domain.
DOMPARENT(parent domain)
The name of the parent domain. You can specify MASTERDM for one domain
only.
CPUREC
This statement begins a Tivoli Workload Scheduler workstation (CPU) definition.
You must specify one CPUREC for each workstation in the Tivoli Workload
Scheduler network, with the exception of the z/OS engine that acts as master
domain manager. The user must provide a definition for each workstation of
Tivoli Workload Scheduler for z/OS that is defined in the database as a Tivoli
Workload Scheduler fault tolerant agent. CPUREC is defined in the member of
the EQQPARM library, as specified by the TPLGYMEM keyword in the
TOPOLOGY statement. Figure 3-9 on page 118 explains the CPUREC syntax.
CPUNAME(cpu name)
The name of the Tivoli Workload Scheduler workstation consisting of up to four
alphanumerical characters, starting with a letter.
CPUOS(operating system)
The host CPU operating system related to the Tivoli Workload Scheduler
workstation. The valid entries are AIX, HPUX, POSIX, UNIX, WNT, and OTHER.
CPUNODE(node name)
The node name or the IP address of the CPU. Fully-qualified domain names up
to 52 characters are accepted.
CPUDOMAIN(domain name)
The name of the Tivoli Workload Scheduler domain of the CPU.
CPUHOST(cpu name)
The name of the host CPU of the agent. It is required for extended agents. The
host is the Tivoli Workload Scheduler CPU with which the extended agent
communicates and where its access method resides.
Note: The host cannot be another extended agent.
CPUACCESS(access method)
The name of the access method. It is valid for extended agents and it must be
the name of a file that resides in the <twshome>/methods directory on the host
CPU of the agent.
CPUAUTOLNK(OFF/ON)
Autolink is most effective during the initial start-up sequence of each plan. Then a
new Symphony file is created and all the workstations are stopped and restarted.
For a fault tolerant agent or standard agent, specify On so that, when the domain
manager starts, it sends the new production control file (Symphony) to start the
agent and open communication with it. For the domain manager, specify On so
that when the agents start they open communication with the domain manager.
Specify Off to initialize an agent when you submit a link command manually from
the Modify Current Plan dialogs.
Note: If the x-agent workstation is manually set to Link, Unlink, Active, or
Offline, the command is sent to its host CPU.
CPUFULLSTAT(ON/OFF)
This applies only to fault tolerant agents. If you specify Off for a domain manager,
the value is forced to On. Specify On for the link from the domain manager to
operate in Full Status mode. In this mode, the agent is kept updated about the
status of jobs and job streams running on other workstations in the network.
Specify Off for the agent to receive status information about only the jobs and
schedules on other workstations that affect its own jobs and schedules. This can
improve the performance by reducing network traffic. To keep the production
control file for an agent at the same level of detail as its domain manager, set
CPUFULLSTAT and CPURESDEP (see below) to On. Always set these modes to
On for backup domain managers.
CPURESDEP(ON/OFF)
This applies only to fault tolerant agents. If you specify Off for a domain manager,
the value is forced to On. Specify On to have the agents production control
process operate in Resolve All Dependencies mode. In this mode, the agent
tracks dependencies for all its jobs and schedules, including those running on
other CPUs.
Note: CPUFULLSTAT must also be On so that the agent is informed about the
activity on other workstations.
Specify Off if you want the agent to track dependencies only for its own jobs and
schedules. This reduces CPU usage by limiting processing overhead. To keep
the production control file for an agent at the same level of detail as its domain
manager, set CPUFULLSTAT (see above) and CPURESDEP to On. Always set
these modes to On for backup domain managers.
CPUSERVER(server ID)
This applies only to fault tolerant and standard agents. Omit this option for
domain managers. This keyword can be a letter or a number (A-Z or 0-9) and
identifies a server (mailman) process on the domain manager that sends
messages to the agent. The IDs are unique to each domain manager, so you can
use the same IDs for agents in different domains without conflict. If more than 36
server IDs are required in a domain, consider dividing it into two or more
domains. If a server ID is not specified, messages to a fault tolerant or standard
agent are handled by a single mailman process on the domain manager.
Entering a server ID causes the domain manager to create an additional
mailman process. The same server ID can be used for multiple agents. The use
of servers reduces the time required to initialize agents, and generally improves
the timeliness of messages. As a guide, additional servers should be defined to
prevent a single mailman from handling more than eight agents.
CPULIMIT(value/99)
Specifies the number of jobs that can run at the same time in a CPU. The default
value is 99 and the accepted values are from 0 to 99.
CPUTZ(timezone/UTC)
Specifies the local time zone of the fault tolerant workstation. It must match the
timezone in which the agent will run. For a complete list of valid time zones,
please refer to Appendix A of the Tivoli Workload Scheduler 8.1 Reference
Guide, SH19-4556.
If the timezone does not match that of the agent, the message AWS22010128E is
displayed in the batchman stdlist (log file) of the distributed agent. The default is
UTC (universal coordinated time).
USRREC
This statement defines the passwords for the Tivoli Workload Scheduler users
installed on Windows. USRREC is defined in the member of the EQQPARM
library as specified by the USRMEM keyword in the TOPOLOGY statement.
The USRREC syntax is seen in Figure 3-10 on page 122.
USRCPU(cpuname)
The Tivoli Workload Scheduler workstation on which the user can launch jobs. It
consists of four alphanumerical characters, starting with a letter.
USRNAM(logon ID)
The user name. It can include a domain name and can consist of up to 47
characters. Note that Windows NT user names are case-sensitive. The user
must be able to log on to the computer on which Tivoli Workload Scheduler
launches jobs, and must also be authorized to log on as batch. If the user name
is not unique in Windows NT, it is considered to be either a local user, a domain
user, or a trusted domain user, in that order.
USRPWD(password)
The user password. It can consist of up to 31 characters and must be enclosed in
single quotation marks. Do not specify this keyword if the user does not need a
password. You can change the password every time you create a Symphony file
(for example when creating a CP extension).
Attention: The password is not encrypted. You must take care that the parmlib
dataset is RACF-protected to avoid misuse.
[Figure: the network used in our examples; MASTERDM is the z/OS sysplex with the active and standby engines; DomainZ has domain manager DMZ on eastham.ibm.com (AIX); DomainA has domain manager DMA on kopenhagen.ibm.com (AIX), FTA1 on finn.ibm.com (AIX), and FTA2 on lowry.ibm.com (OS/400); DomainB has domain manager DMB on mainz.ibm.com (HPUX), FTA3 on michaela.ibm.com (Windows 2000), and FTA4 on ami.ibm.com (Solaris)]
First we need to describe the domain topology with the DOMREC statement.
Example 3-2 Domain definitions
DOMREC  DOMAIN(DOMAINZ)  DOMMNGR(FDMZ)  DOMPARENT(MASTERDM)
DOMREC  DOMAIN(DOMAINA)  DOMMNGR(FDMA)  DOMPARENT(DOMAINZ)
DOMREC  DOMAIN(DOMAINB)  DOMMNGR(FDMB)  DOMPARENT(DOMAINZ)
The master domain (MASTERDM) is always the z/OS engine; therefore, you
must specify it in the DOMPARENT parameter of the first-level domain. The
DOMMNGR keyword gives the name of the fault tolerant workstation that acts
as the domain manager.
Now we must define a CPUREC statement for every workstation in the network.
Example 3-3 Workstation (CPUREC) definitions
CPUREC  CPUNAME(FDMZ)              /* Domain manager for DMZ      */
        CPUOS(AIX)                 /* AIX operating system        */
        CPUNODE(EASTHAM.IBM.COM)   /* IP address of system        */
        CPUTCPIP(31111)            /* TCP port number of NETMAN   */
        CPUDOMAIN(DOMAINZ)         /* The TWS domain name for CPU */
        CPUTYPE(FTA)               /* This is a FTA CPU type      */
        CPUAUTOLNK(ON)             /* Autolink is on              */
        CPUFULLSTAT(ON)            /* Full status on for DM       */
        CPURESDEP(ON)              /* Resolve dep. on for DM      */
        CPULIMIT(20)               /* Number of jobs in parallel  */
        CPUTZ(CST)                 /* Time zone for this CPU      */
/*-------------------------------------------------------------------*/
CPUREC  CPUNAME(FDMA)
        CPUOS(AIX)
        CPUNODE(KOPENHAGEN.IBM.COM)
        CPUTCPIP(31111)
        CPUDOMAIN(DOMAINA)
        CPUTYPE(FTA)
        CPUAUTOLNK(ON)
        CPUFULLSTAT(ON)
        CPURESDEP(ON)
        CPULIMIT(20)
        CPUTZ(CST)
/*-------------------------------------------------------------------*/
CPUREC  CPUNAME(FDMB)
        CPUOS(HPUX)                /* HP UX FTA                   */
        CPUNODE(MAINZ.IBM.COM)
        CPUTCPIP(31111)
        CPUDOMAIN(DOMAINB)
        CPUTYPE(FTA)
        CPUAUTOLNK(ON)
        CPUFULLSTAT(ON)
        CPURESDEP(ON)
        CPULIMIT(20)
        CPUTZ(CST)
/*-------------------------------------------------------------------*/
CPUREC  CPUNAME(FTA1)
        CPUOS(AIX)
        CPUNODE(FINN.IBM.COM)
        CPUTCPIP(31111)
        CPUDOMAIN(DOMAINA)
        CPUTYPE(FTA)
        CPUAUTOLNK(ON)
        CPUFULLSTAT(OFF)           /* No full status for this FTA */
        CPURESDEP(OFF)             /* Don't resolve dependencies  */
        CPULIMIT(20)
        CPUTZ(CST)
/*-------------------------------------------------------------------*/
CPUREC  CPUNAME(FTA2)
        CPUOS(OTHER)               /* This is an OS/400 FTA       */
        CPUNODE(LOWRY.IBM.COM)
        CPUTCPIP(31111)
        CPUDOMAIN(DOMAINA)
        CPUTYPE(FTA)
        CPUAUTOLNK(ON)
        CPUFULLSTAT(OFF)
        CPURESDEP(OFF)
        CPULIMIT(20)
        CPUTZ(CST)
/*-------------------------------------------------------------------*/
CPUREC  CPUNAME(FTA3)
        CPUOS(WNT)                 /* WIN2000 FTA                 */
        CPUNODE(MICHAELA.IBM.COM)
        CPUTCPIP(31111)
        CPUDOMAIN(DOMAINB)
        CPUTYPE(FTA)
        CPUAUTOLNK(ON)
        CPUFULLSTAT(OFF)
        CPURESDEP(OFF)
        CPULIMIT(20)
        CPUTZ(CST)
/*-------------------------------------------------------------------*/
CPUREC  CPUNAME(FTA4)
        CPUOS(UNIX)                /* Solaris FTA                 */
        CPUNODE(AMI.IBM.COM)
        CPUTCPIP(31111)
        CPUDOMAIN(DOMAINB)
        CPUTYPE(FTA)
        CPUAUTOLNK(ON)
        CPUFULLSTAT(OFF)
        CPURESDEP(OFF)
        CPULIMIT(20)
        CPUTZ(CST)
FTA1 and FTA4 do not need to have CPUFULLSTAT and CPURESDEP set to
On, because dependency resolution within the domain is the task of the domain
manager. This improves performance by reducing network traffic.
Note: CPUOS(WNT) applies for all Windows platforms.
Because FTA3 runs on a Windows 2000 machine, you must provide the user ID
and password via a USRREC statement in the member specified by USRMEM.
Example 3-4 USRREC definition
USRREC  USRCPU(FTA3)
        USRNAM(michael)
        USRPSW('texas26')
After these customization steps, you can start the z/OS engine. Check the
engine and server logs for any syntax errors or messages. All z/OS messages
are prefixed with EQQ. See also the Tivoli Workload Scheduler for z/OS V8R1
Messages and Codes, SH19-4548.
If the z/OS engine uses the end-to-end feature to schedule on distributed
environments using distributed agents, check that the messages shown in
Example 3-5 appear in the z/OS engine EQQMLOG.
Example 3-5 Server messages
EQQZ005I
EQQZ085I
EQQZ085I
EQQG001I
EQQG001I
EQQG001I
When the end-to-end server is started, check that the messages shown in
Example 3-6 appear in the z/OS engine EQQMLOG.
Example 3-6 Server messages continued
EQQPH33I THE END-TO-END PROCESSES HAVE BEEN STARTED
EQQPT01I Program "/usr/lpp/TWS/TWS810/bin/translator" has been started, pid is
100663307
EQQPT01I Program "/usr/lpp/TWS/TWS810/bin/netman" has been started, pid is
83886093
If a Symphony file has been created and is active, the messages shown in
Example 3-7 will appear.
Example 3-7 Symphony creation messages
EQQPT20I Input Translator waiting for Batchman is started
EQQPT21I Input Translator finished waiting for Batchman
Otherwise, if the Symphony file is not present or a new one must be
produced, this message will follow:
EQQPT22I Input Translator thread stopped until new Symphony will be available
The end-to-end event datasets need to be formatted the first time the z/OS
engine is started with the end-to-end feature in use or after the end-to-end event
datasets (EQQTWSIN and EQQTWSOU) have been reallocated. The messages
shown in Example 3-8 will appear in the z/OS engine EQQMLOG before the
messages stating that the sender and receiver subtasks have started.
Example 3-8 Formatting messages
EQQW030I A DISK dataset ...
EQQW030I A DISK dataset ...
EQQW038I A DISK dataset ...
EQQW038I A DISK dataset ...
2. Enter the name of the workstation and a suitable description. Mark the Fault
Tolerant check box (Figure 3-13) and save the workstation definition in the
TWS for z/OS database.
3. To activate the workstation definition in the TWS for z/OS current plan, you
should extend or replan the plan. When doing this, TWS for z/OS will create a
Symphony file and distribute it to the domain manager. If the Symphony file is
successfully created and distributed, all your defined FTWs should be linked
and active.
To submit work to the FTWs in order to check their status, you need to complete
the job definitions, which is described next.
Job definitions
A new dataset called Scriptlib has been introduced in this version of TWS for
z/OS. The scriptlib dataset holds all definitions related to fault tolerant
workstations. See also the end-to-end script library (EQQSCLIB) in
Section 3.2.3, Allocate end-to-end datasets on page 107. A new statement
JOBREC within the scriptlib defines the fault tolerant workstation job properties.
You must specify JOBREC for each member of the SCRIPTLIB. For each job this
statement specifies the script or the command to execute and the user that must
run the script or command.
The syntax of the JOBREC statement is shown in Figure 3-14.
Parameters
The following are descriptions of the parameters.
JOBSCR (script name)
Specifies the name of the shell script or executable file to be run for the job. If
the script includes more than one word, it must be enclosed within quotation
marks.
JOBCMD (command name)
Specifies the name of the shell command to run the job. If the command
includes more than one word, it must be enclosed within quotation marks.
JOBUSR (user name)
Specifies the name of the user submitting the specified script or command.
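Putting the parameters together, a scriptlib member might look like the following sketch (the path and user are assumptions for illustration):

JOBREC  JOBSCR('/opt/tws/scripts/nightly.sh')   /* script to execute */
        JOBUSR(tws)                             /* user running it   */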
Figure 3-15 shows the relationship between the job definition and the member
name in the script library. In this way you can define your jobs for all the FTAs.
No matter which option you choose, after the job finishes all of your FTW
workstations should be linked and active. If not, the following might be the cause:
The Symphony file has not been created successfully. See the server log for
the appropriate messages.
The DOMREC or CPUREC definitions are wrong.
TCP/IP-related failures.
3.3 Installing the TWS for z/OS TCP/IP server for JSC usage
The TCP/IP server can be used for JSC connections and by the end-to-end
server. We recommend using a separate, dedicated server for each of these
purposes. The TCP/IP server is needed by the connector to access the engine
subsystem. The access uses the Tivoli Workload Scheduler for z/OS program
interface (PIF).
You can find an example for the started task procedure in installation member
EQQSER generated by the EQQJOBS installation aid. For prior releases, you
must install the enhancement APARs: PQ21320 or PQ21321. After the
installation, you can get full functionality of the Job Scheduling Console as with
the Tivoli Workload Scheduler for z/OS 8.1 release.
First, you have to modify the JCL of EQQSER in the following way:
Make sure that the C runtime library is concatenated in the server JCL
(CEE.SCEERUN).
If you have more than one TCP/IP stack or the name of the procedure that
was used to start the TCPIP address space is different from TCPIP, introduce
the SYSTCPD DD card pointing to a dataset containing the TCPIPJOBNAME
parameter (see DD SYSTCPD in the TCP/IP manuals).
Customize the server parameters file (see the EQQPARM DD statement in
the server JCL). The installation member EQQSERP already contains a
template. For example, you can provide the parameters shown in Example 3-9
in the server parameters file.
Example 3-9 TCP/IP server parameters
SERVOPTS SUBSYS(TWSA)
         USERMAP(MAP1)
         PROTOCOL(TCPIP)       /* Communication protocol    */
         PORTNUMBER(3111)      /* Port the server connects  */
         CODEPAGE(IBM-037)     /* Name of the host codepage */
INIT     CALENDAR(DEFAULT)     /* Name of the TWS calendar  */
This member contains all the associations for a TME user with a RACF user
ID. You should set the parameter USERMAP in the SERVOPTS Server
Initialization Parameter to define the member name.
USERMAP(USERS)
The member, USERS, of the initialization parameter dataset could contain the
entries shown in Example 3-10, with the same logic as the TMEADMIN class.
Example 3-10 Usermap member entries
USER 'DORIS@ITSO-REGION' RACFUSER(DORIS6) RACFGROUP(TIVOLI)
USER 'VERA@ITSO-REGION' RACFUSER(VERA21) RACFGROUP(TIVOLI)
For example, the TMF user VERA@ITSO-REGION is mapped to the RACF user
ID VERA21 in the group TIVOLI. The authorization level for using the plan and
the database of Tivoli Workload Scheduler for z/OS is defined in the RACF
definitions for that user ID. A TMF user is connected to a certain authorization
level within the Tivoli Framework to perform actions within the management
region. In our case this is used to restrict the usage of the connector.
Tips:
Every RACF user who has update authority to this usermap table may get
access to the z/OS engine OPC subsystem. To maintain a high level of
security, the usermap table should be protected.
To manage the various levels of access, we recommend assigning one
The size of your network will help you decide whether to use a single domain
or the multiple domain architecture. If you have a small number of computers,
or a small number of applications to control with Tivoli Workload Scheduler,
there may not be a need for multiple domains.
How many geographic locations will be covered in your Tivoli Workload
Scheduler network? How reliable and efficient is the communication between
locations?
If they are discrete and separate from other applications, you may choose to
put them in a separate Tivoli Workload Scheduler domain.
Would you like your Tivoli Workload Scheduler domains to mirror your
Windows NT domains? This is not required, but may be useful.
[Figure: a multiple-domain TWS network; MASTERDM contains an AIX master domain manager and an AIX backup domain manager; DomainA and DomainB contain their domain managers DMA (AIX) and DMB (HPUX) and fault tolerant agents FTA1-FTA4 on AIX, OS/400, Windows 2000, and Solaris]
Event files: The response time to the events is improved so the message
flow is faster.
Daily plan: The access to the Symphony file is quicker for both read and
write. The daily plan can therefore be updated in a shorter time than it was
previously.
If you suffer from poor performance and have already isolated the bottleneck on
the Tivoli Workload Scheduler side, you may want to take a closer look at the
new localopts parameters.
Table 3-7 Performance-related localopts parameters

Syntax                           Default value
mm cache mailbox=yes/no          no
mm cache size=bytes              32
sync level=low/medium/high       high
wr enable compression=yes/no     no
mm cache mailbox
Use this option to enable mailman to use a reading cache for incoming
messages. In that case not all messages are cached, but only those not
considered essential for network consistency. The default is No.
mm cache size
Specify this option if you also use mm cache mailbox. The default is 32 bytes.
Use the default for small and medium networks. Use larger values for large
networks. Avoid using a large value on small networks. The maximum value is
512 (higher values are ignored).
sync level
Specify the speed of write accesses on disk. This option affects all mailbox
agents and is applicable to UNIX workstations only. Values can be:
Low: Lets the operating system handle it.
Medium: Makes updates after a transaction has completed.
High: Makes updates every time data is entered.
wr enable compression
Use this option on fault tolerant agents. Specify if the FTA can receive the
Symphony file in compressed form from the master domain manager. The
default is no.
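For example, a localopts fragment that turns these options on for a large
network could look like the following sketch (the values are only illustrative;
the defaults are usually fine for small and medium networks):

# Performance-related localopts entries (illustrative values)
mm cache mailbox      = yes
mm cache size         = 512
sync level            = medium
wr enable compression = yes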
[Figure: a tiered end-to-end network. MASTERDM is a z/OS sysplex with an
active engine, two standby engines, and the end-to-end server; the DomainZ
domain manager on AIX is tier 1; DomainA and DomainB, with AIX and HP-UX
domain managers, are tier 2; the fault tolerant agents and the backup domain
manager (AIX, OS/400, Windows 2000, Solaris) are tier 3.]
To take advantage of the four-digit naming convention, it can look like this:
First digit
Used to identify the hierarchical level (tier) on which the workstation resides.
The domain manager of the top domain can be assigned level 1, while the FTA
in this case belongs to tier 3.
Third and fourth digits
[Figure: the same tiered network with four-character workstation names. The
DomainZ domain manager is F100 and its backup domain manager is F101
(tier 1); the DomainA and DomainB domain managers are F201 and F202
(tier 2); the fault tolerant agents are F301, F302, F303, and F304 on AIX,
OS/400, Windows 2000, and Solaris (tier 3).]
For better differentiation you can also use the workstation description field that
allows you to use up to 32 characters. See also Example 3-11 on page 141.
Tip: The hostname, in conjunction with the workstation name, provides you
with an easy way to illustrate your configured environment.
[Example 3-11: a workstation list shown in the legacy ISPF interface. Each
workstation description identifies the host and the domain it resides in; all
entries were last updated by user TWSRES1 on 02/02/07.]
The latest available releases at the time of writing and the locations where they
can be found are listed below.
Tivoli Workload Scheduler: 8.1-TWS-0001:
ftp://ftp.tivoli.com/support/patches/patches_8.1/8.1-TWS-0001
First of all, the user name and ID must be unique. There are many different ways
of naming these users. Choose user names that make sense to you. It may
simplify things to create a group called TWS and make all TWS users members
of this group. This would allow you to add group access to files to grant access to
all TWS users. When installing TWS on UNIX, the TWS user is specified by the
-uname option of the UNIX customize script. It is very important to specify the
TWS user. If the TWS user is not specified, the customize script will choose the
default user name maestro. Obviously, if you plan to install multiple TWS engines
on the same computer, they cannot all be installed as the user maestro.
Second, the home directory and the parent directory of the home directory must
be unique. When TWS is first installed, most of the files are installed into the
home directory of the TWS user. However, another directory called unison is
created in the parent directory of the TWS user's home directory. The unison
directory is a relic of the days when Unison Software's Maestro program (the
direct ancestor of TWS) was one of several programs that all shared some
common data between them. The unison directory was where the common data
were stored. Important information is still stored in this directory, including the
workstation database (cpudata), the NT user database (userdata), and the
security file. In order to keep two different TWS engines completely separate, it is
necessary to add an extra level to the directory hierarchy to ensure that each
TWS engine has its own unison directory.
Figure 3-19 on page 143 should give you an idea of how two TWS engines might
be installed on the same computer. You can see that each TWS engine has its
own separate TWS directory and unison directory.
[Figure 3-19: directory structure for two TWS engines installed on the same
computer. Engine A is installed under /tivoli/TWS/TWS-A and Engine B under
/tivoli/TWS/TWS-B; each has its own tws home directory (with bin, network,
mozart containing mastsked and jobs, and the Security file) and its own unison
directory (with cpudata, userdata, and the compiled Security file).]
Example 3-12 shows the /etc/passwd entries corresponding to the two TWS
users.
Example 3-12 Excerpt from /etc/passwd showing two different TWS users
tws-a:!:31111:9207:TWS Engine A User:/tivoli/TWS/TWS-A/tws:/usr/bin/ksh
tws-b:!:31112:9207:TWS Engine B User:/tivoli/TWS/TWS-B/tws:/usr/bin/ksh
Note that each TWS user has a unique name, ID, and home directory.
Next, the component group for each Tivoli Workload Scheduler engine must be
unique. The component group is a name used by TWS programs to keep each
engine separate. The name of the component group is up to you. It can be
specified using the -group option of the UNIX customize script. It is important to
specify a different component group name for each instance of the TWS engine
installed on a computer. Component groups are stored in the file
/usr/unison/components. This file contains two lines for each component group.
Example 3-13 on page 144 shows the components file corresponding to the two
TWS engines.
Example 3-13 The components file for the two TWS engines
netman   1.8.1.1   /tivoli/TWS/TWS-A/tws   TWS-Engine-A
maestro  8.1       /tivoli/TWS/TWS-A/tws   TWS-Engine-A
netman   1.8.1.1   /tivoli/TWS/TWS-B/tws   TWS-Engine-B
maestro  8.1       /tivoli/TWS/TWS-B/tws   TWS-Engine-B
The component groups are called TWS-Engine-A and TWS-Engine-B. For each
component group, the versions and paths of netman and maestro (the TWS
engine) are listed. In this context, maestro refers simply to the TWS home
directory.
Finally, because netman listens for incoming TCP link requests from other TWS
agents, it is important that netman have a unique listening port for each TWS
engine on a computer. This port is specified by the nm port option in the TWS
localopts file. It is necessary to shut down netman and start it again for changes
made to the netman options to take effect.
In our test environment, we chose a netman port number and user ID that was
the same for each TWS engine. This makes it easier to remember and simpler
when troubleshooting. Table 3-8 shows the names and numbers we used in our
testing.
Table 3-8 If possible, choose user IDs and port numbers that are the same

Component group    User name    User ID    Netman port
TWS-Engine-A       tws-a        31111      31111
TWS-Engine-B       tws-b        31112      31112
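As a sketch of how this could be put into practice, the two engines could be
installed and configured as follows. The customize invocation may need
additional options, and the path to the script depends on where the installation
files are unpacked; only the -uname and -group options discussed above are
shown:

# Install the two engines under different users and component groups
# (other customize options omitted; see the Planning and Installation Guide)
./customize -uname tws-a -group TWS-Engine-A
./customize -uname tws-b -group TWS-Engine-B

# Afterwards, give each engine's netman a unique listening port by editing
# the nm port entry in each engine's localopts file, then restart netman:
#   /tivoli/TWS/TWS-A/tws/localopts :  nm port = 31111
#   /tivoli/TWS/TWS-B/tws/localopts :  nm port = 31112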
In the last few parts of this section, we discuss in more detail the steps specific to
end-to-end scheduling: Creating connector instances and TMF administrators.
The Tivoli Management Framework is quite easy to install. If you already have
the Framework installed in your organization, it is not necessary to install the
TWS-specific components (the JSS and connectors) on a node in your existing
Tivoli Managed Region. You may prefer to install a stand-alone TMR server
solely for the purpose of providing the connection between the Tivoli Workload
Scheduler suite and its interface, the Job Scheduling Console. If your existing
TMR is busy with other operations, such as monitoring or software distribution,
you might want to consider installing a separate stand-alone TMR server for
TWS. If you decide to install JSS and the connectors on an existing TMR server
or managed node, you can skip past the first few parts of this section.
3.7.1 Installing the connectors (for TWS and TWS for z/OS connector)
Follow the installation instructions in Chapters 3 and 4 of the Tivoli Job
Scheduling Console User's Guide, SH19-4552.
When installing the TWS connector, we recommend that you do not click the
Create Instance check box. Create the instances after the connector has been
installed.
The following are the hardware and software prerequisites for the Tivoli
Workload Scheduler for z/OS connector:
Software
The result of this will be that when we use JSC to connect to Yarmouth, a new
connector instance called TWSC appears in the Job Scheduling list on the left
side of the window. We can access the TWS for z/OS scheduling engine by
clicking that new entry in the list.
It is also possible to run wopcconn in interactive mode. To do this, just run
wopcconn with no arguments.
Refer to Appendix B, Connector reference on page 447 for a detailed
description of the wopcconn command.
The result of this will be that when we use JSC to connect to yarmouth, a new
connector instance called TWS-A appears in the Job Scheduling list on the left
side of the window. We can access the TWS-A scheduling engine by clicking that
new entry in the list.
Refer to Appendix B, Connector reference on page 447 for a detailed
description of the wtwsconn.sh command.
Important: When creating users or setting their passwords, disable any option
that requires the user to set his password the first time he logs in. If the
operating system requires a user to change his password the first time he logs
in, he will have to do this before he will be able to login via the Job Scheduling
Console.
It also represents a UID at the operating system level. See Figure 3-21 on
page 150.
4. Type in the login name and press Enter. Then select Set & Close
(Figure 3-22).
5. Enter the name of the group. This field is used to determine the GID under
which many operations are performed. Select Set & Close.
The TMR roles you assign to the administrator will depend on the actions the
user will need to perform.
Table 3-9 Authorization roles required for connector actions
An Administrator with this role...
User
6. Click the Set TMR Roles icon and add the role or roles desired (Figure 3-23).
7. Now select Set & Close to finish your input. This brings you back to the
Administrator's desktop (Figure 3-24 on page 153).
[Figure: the end-to-end network as seen from the Job Scheduling Console. The
JSC connects to the z/OS MASTERDM (master domain manager) and, through
DomainZ with its AIX domain manager DMZ, to DomainA and DomainB with
domain managers DMA and DMB, the backup domain manager, and fault
tolerant agents FTA1 through FTA4 on AIX, HP-UX, OS/400, Windows 2000,
and Solaris.]
To run the latest version of the Job Scheduling Console (Version 1.2), the
software requirements have been changed so that you need Tivoli Framework
Version 3.7.1 for AIX, HP-UX, and Microsoft Windows. If you are using Linux,
Version 3.7B is required.
Table 3-10 on page 155 shows the only supported combinations of Workload
Scheduler Connectors and engines.
Table 3-10 Supported combinations of connector and engine versions

Connector version    Engine version¹    Supported
1.2                  7.0                7.0
1.2                  8.1                8.1
1.2                  7.0                8.1
1.2                  8.1                7.0
1.1                  7.0                7.0
1.1                  8.1                8.1
1.1                  8.1                7.0
1.1                  7.0                8.1
Notes:
The engine can be a fault tolerant agent or master.
Previous versions of the connector and engine must be the same.
There are limitations when using the Job Scheduling Console Feature
Level 1.1 if the database objects have been modified using a Job
Scheduling Console 1.2 with the new functions.
Software
The following is required software.
Tivoli Management Framework: Version 3.7.1 for Microsoft Windows, AIX,
HP-UX, and Sun Solaris. Version 3.7B for Linux.
Tivoli Workload Scheduler 8.1.
Tivoli Job Scheduling Services 1.2.
TCP/IP network communications.
A Workload Scheduler user account is required for proper installation. You
can create the account beforehand, or have the setup program create it for
you.
Hardware
The following is required hardware.
CD-ROM drive for installation.
Approximately 100 MB of free disk space for domain managers and fault
tolerant agents. Approximately 40 MB for standard agents. In addition, the
Workload Scheduler produces log files and temporary files, which are placed
on the local hard drive. The amount of space required depends on the
number of jobs managed by Workload Scheduler and the amount of time you
choose to retain log files.
128 MB RAM and 128 MB swap space for domain managers and fault
tolerant agents. Standard agents require less.
IBM AIX: 4.3.3, 4.3.3s, 5.1
Sun Solaris: 2.7, 2.8
HP-UX PA-RISC: 11.0, 11i
Linux
Hardware:
CD-ROM drive for installation.
70 MB disk space for full installation, or 34 MB for customized (English base)
installation plus approximately 4 MB for each additional language.
Hardware:
CD-ROM drive for installation.
70 MB disk space for full installation, or 34 MB for customized (English base)
installation plus approximately 4 MB for each additional language.
For Workload Scheduler for z/OS, you should migrate the Job Scheduling
Services and the connector to the latest level. The Workload Scheduler for z/OS
connector can support any Operations Planning and Control V2 release level as
well as Workload Scheduler for z/OS Version 8.1.
From the Start menu, select the Run option to display the Run
dialog.
In the Open field, enter F:\Install.
On AIX:
On Sun Solaris:
To complete the installation perform the following to start the JSC, depending on
your platform.
On Windows
Depending on the shortcut location that you specified during installation, click
the JS Console icon or select the corresponding item in the Start menu.
On Windows 95 and Windows 98
You can also start the JSC from the command line. Just type runcon from the
\bin\java subdirectory of the installation path.
On AIX
Type ./AIXconsole.sh.
On Sun Solaris
Type ./SUNconsole.sh.
A Tivoli Job Scheduling Console start-up window is displayed, as shown in
Figure 3-27 on page 160.
When a user attempts to display a list of defined jobs, submit a new job stream,
add a new resource, or any other operation related to the Tivoli Workload
Scheduler plan or databases, TWS performs a check to verify that the user is
authorized to perform that action.
[Figure: groups of TWS users shown as nested boxes —
TWS and root users: full access to all areas.
Operations group: can manage the whole workload but cannot create job
streams; has no root access.
Applications manager: can document jobs and schedules for the entire group
and manage some production.
Application user: can document own jobs and schedules.
General user: display access only.]
TWS users have different roles within the organization. The TWS security model
you implement should reflect these roles. You can think of the different groups of
users as nested boxes, as in the figure above. The largest box represents the
highest access, granted to only the TWS user and the root user. The smaller
boxes represent more restricted roles, with correspondingly restricted access.
Each group represented by a box in the figure would have a corresponding
stanza in the security file. TWS programs and commands read the security file to
determine whether the user has the access required to perform an action.
If you have one security file for a network of agents, you may wish to make a
distinction between the root user on a fault tolerant agent and the root user on
the master domain manager. This is possible. For example, you can restrict local
users to performing operations affecting only the local workstation, while
permitting the master root user to perform operations that affect any workstation
in the network.
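For example, a sketch of such a stanza could look like the following. It has to
appear before the broader definitions because the first matching user definition
in the security file is the one used. The ftaroot name is arbitrary, and the
$thiscpu variable and the exact access keywords should be checked against
the security file syntax in the Tivoli Workload Scheduler Reference Guide:

#Root users logged on to any workstation except the master: may only
#display and rerun work on their own workstation (illustrative sketch)
user ftaroot cpu=@ ~ cpu=$master + logon=root
begin
job       cpu=$thiscpu   access=display,rerun
schedule  cpu=$thiscpu   access=display,release
end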
A template file named TWShome/config/Security is provided with the software.
During installation, a copy of the template is installed as TWShome/Security and
a compiled copy is installed as TWShome/../unison/Security.
[Figure: how the security file is used. The user johns, logged on to the master
domain manager Sol, issues the command conman release mars#weekly.cleanup
against the fault tolerant agent Mars; the security file is searched to 1) find the
user, 2) find the object, and 3) find the access right.]
For NT user definitions (userobj), the users have full access to objects on all
workstations in the network.
Example 3-14 Sample security file
###########################################################
#Sample Security File
###########################################################
#(1)APPLIES TO MAESTRO OR ROOT USERS LOGGED IN ON THE
#MASTER DOMAIN MANAGER OR FRAMEWORK.
user mastersm cpu=$master,$framework +logon=maestro,root,Root_london-region
begin
#OBJECT    ATTRIBUTES             ACCESS CAPABILITIES
#-----------------------------------------------------------
job                               access=@
schedule                          access=@
resource                          access=@
prompt                            access=@
file                              access=@
calendar                          access=@
cpu                               access=@
parameter  name=@ ~ name=r@       access=@
userobj    cpu=@ + logon=@        access=@
end
Simply exit the programs. The next time they are run, the new security definitions
will be recognized. Tivoli Workload Scheduler connectors must be stopped using
the wmaeutil command before changes to the security file will take effect for
users of JSC. The connectors will automatically restart as needed.
The user must have modify access to the security file.
Note: On Windows NT, the connector processes must be stopped (using the
wmaeutil command) before the makesec command will work correctly.
Synopsis
makesec -v | -u
makesec [-verify] in-file
Description
The makesec command compiles the specified file and installs it as the
operational security file (../unison/Security). If the -verify argument is
specified, the file is checked for correct syntax, but it is not compiled and
installed.
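For example, after editing the file you might run the following (the path is the
one used in the examples earlier in this chapter):

# Check the syntax only
makesec -verify /tivoli/TWS/TWS-A/tws/Security
# Compile and install the file as ../unison/Security
makesec /tivoli/TWS/TWS-A/tws/Security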
Note: Add the Tivoli Administrator to the TWS security file after you have
installed the Tivoli Management Framework and Tivoli Workload Scheduler
connector.
3.11 Maintenance
The Tivoli maintenance strategy for Tivoli Workload Scheduler introduces a new
way to maintain the product more effectively and easily. On a quarterly basis,
Tivoli provides updates with recent patches, offered as a fix pack that is
similar to a maintenance release. This fix pack can be obtained either from the
common support Web page ftp.tivoli.com/support/patches or shipped on a
CD. Ask your local Tivoli support for more details.
Chapter 4.
End-to-end implementation
scenarios and examples
In this chapter we describe several different implementation scenarios and
examples for Tivoli Workload Scheduler for z/OS end-to-end scheduling.
First we describe the rationale behind the conversion to TWS for z/OS
end-to-end scheduling.
Next we describe and show four different implementation, migration, conversion,
and fail-over scenarios:
1. Implementing end-to-end scheduling with TWS distributed fault tolerant
agents in a TWS for z/OS environment with no distributed job scheduling.
You can use this scenario as an example of implementing the end-to-end
scheduling, if you have not used OPC tracker agents or Tivoli Workload
Scheduler before.
2. Migrating from OPC tracker agents to TWS for z/OS end-to-end fault tolerant
workstations.
3. Conversion from a TWS-managed network to a TWS for z/OS managed
network.
4. Tivoli Workload Scheduler for z/OS end-to-end fail-over scenarios:
Switch to TWS for z/OS backup engine.
Extended plans means that the current plan can span more than 24 hours.
The powerful run-cycle and calendar functions in Tivoli Workload Scheduler
for z/OS can be used for distributed Tivoli Workload Scheduler jobs.
Besides these benefits, using the Tivoli Workload Scheduler for z/OS end-to-end
also makes it possible to:
Reuse or reinforce the procedures and processes that are established for the
Tivoli Workload Scheduler for z/OS mainframe environment.
Installation and usage of the Tivoli Workload Scheduler MVS and OS/390
extended agent is described in the redbook End-to-End Scheduling with OPC
and TWS Mainframe and Distributed Environment, SG24-6013.
[Figure: MASTERDM is a z/OS 1.3 sysplex with the active engine and
end-to-end server on SC64 and standby engines on SC63 and SC65. Domain
DM100 has domain manager F100 on eastham (AIX 4.3.3), backup domain
manager F101 on chatham (AIX 4.3.3), and fault tolerant agent F102 on paris
(Windows 2000). Domain DM200 has domain manager F200 on yarmouth
(AIX 4.3.3) and fault tolerant agents F201 on tokyo (Windows 2000), F202 on
delhi (Windows 2000), F203 on istanbul (Windows 2000), and F204 on central
(Windows NT).]
Figure 4-1 Our configuration for TWS for z/OS end-to-end scheduling
3. Customize the TWS for z/OS server and plan program topology definitions for
the distributed agents.
4. Customize the TWS for z/OS engine for end-to-end scheduling.
5. Restart the TWS for z/OS engine and verify that:
The engine starts without any errors.
The engine starts the TWS for z/OS server task (if configured to do so).
The TWS for z/OS server starts without any errors.
6. Install the TWS distributed workstations (fault tolerant agents).
Refer to the Tivoli Workload Scheduler 8.1 Planning and Installation Guide,
SH19-4555, and Section 3.5 Installing TWS in an end-to-end environment
on page 141.
7. Remember to specify OPCMASTER as the name of the master domain
manager.
8. Start the netman process on the installed agents using the conman StartUp
command.
9. Define fault tolerant workstations in the workstation database on the TWS for
z/OS engine.
The TWS for z/OS started tasks used in our configuration are TWST, TWSC
(the engine), TWSCTP (the end-to-end server), and TWSCJSC (the JSC server).
The distributed TWS fault tolerant workstations are configured with the home
directory:
/tivoli/TWS/E/tws-e (on UNIX systems)
C:\TWS\E\tws-e\ (on Windows systems)
And with the user ID, tws-e on UNIX and Windows systems.
We have two instances in the Job Scheduling Console:
1. TWSC pointing to the TWS for z/OS engine
This instance can be used to work with the TWS for z/OS database and plan
from the JSC.
2. TWSC-F100-Eastham pointing to the primary domain manager (F100)
This instance can be used to work with the Symphony file (the distributed
plan) on the F100 primary domain manager.
Note: Having a JSC instance pointing to the primary domain manager is a
good approach. We use this instance several times to check the link status
for workstations, and compare the job status in the TWS for z/OS plan with
the job status in the Symphony file on the primary domain manager.
We did not install a JSC instance pointing to the backup domain manager
(F101). But you may want to install a JSC pointing to the backup domain
manager, since it can be activated as the primary domain manager.
4.3.2 Customize the TWS for z/OS plan program topology definitions
When the TWS for z/OS engine and agents are installed and up and running in
the sysplex, the next step is to create the end-to-end server started task
procedure and customize the topology definitions.
The started task procedure for the TWS for z/OS end-to-end server task,
TWSCTP, is shown in Example 4-1.
Example 4-1 Started task procedure
//TWSCTP   EXEC PGM=EQQSERVR,REGION=6M,TIME=1440
//*********************************************************************
//* STARTED TASK PROCEDURE FOR THE TWS FOR Z/OS SERVER TASK
//*********************************************************************
//SYSTCPD  DD DISP=SHR,DSN=TCPIPMVS.&SYSNAME..TCPPARMS(TCPDATA)
//EQQMLIB  DD DISP=SHR,DSN=EQQ.SEQQMSG0
//EQQMLOG  DD SYSOUT=*
//EQQPARM  DD DISP=SHR,DSN=EQQUSER.TWS810.PARM(TWSCTP)
//EQQTWSIN DD DISP=SHR,DSN=EQQUSER.TWS810.TWSIN
//EQQTWSOU DD DISP=SHR,DSN=EQQUSER.TWS810.TWSOU
//SYSMDUMP DD DISP=MOD,DSN=EQQUSER.TWS810.SYSDUMPS
//EQQDUMP  DD DISP=SHR,DSN=EQQUSER.TWS810.EQQDUMPS
//*
We use a dedicated work HFS for the TWSCTP server. The work HFS for the
TWS for z/OS end-to-end server task is mounted as read/write on all three
systems in the sysplex (SC63, SC64, and SC65). The work HFS is allocated with
the attributes shown in Example 4-2.
Example 4-2 HFS work allocation
dataset name: OMVS.TWS810.TWSCTP.HFS
Block size . . . . . . : 4096
Total blocks . . . . . : 256008
Mount point: /tws/twsctpwrk
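One way to make this mount permanent is a MOUNT statement in the BPXPRMxx
parmlib member, as in the following sketch. Making the file system available
read/write on all three systems additionally requires that the HFS is shared in
the sysplex, which we do not show here:

MOUNT FILESYSTEM('OMVS.TWS810.TWSCTP.HFS')  /* our work HFS          */
      MOUNTPOINT('/tws/twsctpwrk')          /* mount point from above */
      TYPE(HFS)
      MODE(RDWR)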
The HFS is initialized with the EQQPCS05 job described in Section 3.2.4,
Create and customize the work directory on page 109.
The HFS file can be created with a job that contains an IEFBR14 step, as in
Example 4-3.
Example 4-3 HFS file creation
//USERHFS EXEC PGM=IEFBR14
//D1      DD DISP=(,CATLG),DSNTYPE=HFS,
//           SPACE=(CYL,(prispace,secspace,1)),
//           DSN=OMVS.TWS810.TWSCTP.HFS
The subordinate domain managers will have the names DM200, DM300, and
so on. The name does not imply the level or tier of the domain.
TWS for z/OS fault tolerant workstation names:
The second digit does not indicate the domain manager's tier level in the
network. Since we are using numbers and characters, we can define 36
domains (26 letters plus 10 digits) and 1296 (36 x 36) fault tolerant
workstations in every domain. The total number of fault tolerant workstations
is 46656 (36 x 1296).
TWS for z/OS fault tolerant workstation description.
//EQQPARM  DD DISP=SHR,DSN=EQQUSER.TWS810.PARM(TWSCTP)
/*********************************************************************/
SERVOPTS ARM(YES)                    /* Use ARM to restart if fail   */
         CODEPAGE(IBM-037)           /* Use US codepage              */
         PROTOCOL(TCPIP)             /* This is a TCP/IP server      */
         SUBSYS(TWSC)                /* TWSC is our engine           */
         TPLGYPRM(TOPOLOGY)          /* Member with topology defs.   */
         PORTNUMBER(6000)            /* The portno. is not used if   */
                                     /* it is an E2E only server.    */
                                     /* But if it isn't spec. then   */
                                     /* it will default to 425 !!    */
/*-------------------------------------------------------------------*/
/* CALENDAR parameter is mandatory for the server when using TCP/IP  */
/* (also if the server is dedicated to end-to-end!)                  */
/*-------------------------------------------------------------------*/
INIT     CALENDAR(DEFAULT)           /* Must be spec. for IP server  */
Note: TRCDAYS defines how many days the files in the stdlist directory are
kept. At midnight, netman creates a new directory, ccyy.mm.dd, in the
stdlist directory. TRCDAYS specifies how old these directories can become
before they are deleted.
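For reference, the TOPOLOGY member itself (named by TPLGYPRM(TOPOLOGY)
in the SERVOPTS statement above) might look like the following sketch. The
TPDOMAIN and TPUSER member names and the work directory are the ones
used in this chapter; the BINDIR path, host name, port number, and TRCDAYS
value are only illustrative assumptions:

TOPOLOGY TPLGYMEM(TPDOMAIN)          /* Member with DOMREC/CPUREC    */
         USRMEM(TPUSER)              /* Member with USRREC           */
         BINDIR('/usr/lpp/TWS')      /* HFS path of binaries (assum.)*/
         WRKDIR('/tws/twsctpwrk')    /* Work directory (our HFS)     */
         HOSTNAME('sc64.itso.ibm.com') /* Server host name (assum.)  */
         PORTNUMBER(31182)           /* Server port number (assum.)  */
         TRCDAYS(14)                 /* Days to keep stdlist files   */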
3. Create the domain and fault tolerant agent definitions. This is done in the
member named in the TWSCTP SERVOPTS TPLGYMEM(TPDOMAIN)
initialization statement. The domain and fault tolerant agent definitions are
used to define the topology that we have outlined in Figure 4-1 on page 174.
We use the definitions shown in Example 4-6 on page 181.
CPUREC   CPUNAME(F102)                         /* Fault Tolerant Agent in DM100 */
         CPUOS(WNT)                            /* Windows operating system      */
         CPUNODE(PARIS.ITSC.AUSTIN.IBM.COM)    /* IP address of CPU             */
         CPUTCPIP(31281)                       /* TCP port number of NETMAN     */
         CPUDOMAIN(DM100)                      /* The TWS domain name for CPU   */
         CPUTYPE(FTA)                          /* This is a FTA CPU type        */
         CPUAUTOLNK(ON)                        /* Autolink is on for this CPU   */
         CPUFULLSTAT(OFF)                      /* Full status off for this CPU  */
         CPURESDEP(OFF)                        /* Resolve dep. off for this CPU */
         CPUSERVER(1)                          /* Start extra server Mailman p. */
         CPULIMIT(10)                          /* Number of jobs in parallel    */
         CPUTZ(CST)                            /* Time zone for this CPU        */
CPUREC   CPUNAME(F200)                         /* Domain manager for DM200      */
         CPUOS(AIX)                            /* AIX operating system          */
         CPUNODE(YARMOUTH.ITSC.AUSTIN.IBM.COM) /* IP address of CPU             */
         CPUTCPIP(31281)                       /* TCP port number of NETMAN     */
         CPUDOMAIN(DM200)                      /* The TWS domain name for CPU   */
         CPUTYPE(FTA)                          /* This is a FTA CPU type        */
         CPUAUTOLNK(ON)                        /* Autolink is on for this CPU   */
         CPUFULLSTAT(ON)                       /* Full status on for DM         */
         CPURESDEP(ON)                         /* Resolve dep. on for DM        */
         CPULIMIT(99)                          /* Jobs in parallel 99 is default*/
/*       CPUSERVER( )                             Not allowed for domain mng.   */
         CPUTZ(CST)                            /* Time zone for this CPU        */
CPUREC   CPUNAME(F201)                         /* Fault Tolerant Agent in DM200 */
         CPUOS(WNT)                            /* Windows operating system      */
         CPUNODE(TOKYO.ITSC.AUSTIN.IBM.COM)    /* IP address of CPU             */
         CPUTCPIP(31281)                       /* TCP port number of NETMAN     */
         CPUDOMAIN(DM200)                      /* The TWS domain name for CPU   */
         CPUTYPE(FTA)                          /* This is a FTA CPU type        */
         CPUAUTOLNK(ON)                        /* Autolink is on for this CPU   */
         CPUFULLSTAT(OFF)                      /* Full status off for this CPU  */
         CPURESDEP(OFF)                        /* Resolve dep. off for this CPU */
         CPULIMIT(20)                          /* Jobs in parallel              */
         CPUSERVER(2)                          /* Start extra server Mailman p. */
         CPUTZ(CST)                            /* Time zone for this CPU        */
CPUREC   CPUNAME(F202)                         /* Fault Tolerant Agent in DM200 */
         CPUOS(WNT)                            /* Windows operating system      */
         CPUNODE(DELHI.ITSC.AUSTIN.IBM.COM)    /* IP address of CPU             */
         CPUTCPIP(31281)                       /* TCP port number of NETMAN     */
         CPUDOMAIN(DM200)                      /* The TWS domain name for CPU   */
         CPUTYPE(FTA)                          /* This is a FTA CPU type        */
         CPUAUTOLNK(ON)                        /* Autolink is on for this CPU   */
         CPUFULLSTAT(OFF)                      /* Full status off for this CPU  */
         CPURESDEP(OFF)                        /* Resolve dep. off for this CPU */
         CPULIMIT(20)                          /* Jobs in parallel              */
         CPUSERVER(2)                          /* Start extra server Mailman p. */
         CPUTZ(CST)                            /* Time zone for this CPU        */
CPUREC   CPUNAME(F203)                         /* Fault Tolerant Agent in DM200 */
         CPUOS(WNT)                            /* Windows operating system      */
         CPUNODE(ISTANBUL.ITSC.AUSTIN.IBM.COM) /* IP address of CPU             */
         CPUTCPIP(31281)                       /* TCP port number of NETMAN     */
         CPUDOMAIN(DM200)                      /* The TWS domain name for CPU   */
         CPUTYPE(FTA)                          /* This is a FTA CPU type        */
         CPUAUTOLNK(ON)                        /* Autolink is on for this CPU   */
         CPUFULLSTAT(OFF)                      /* Full status off for this CPU  */
         CPURESDEP(OFF)                        /* Resolve dep. off for this CPU */
         CPULIMIT(20)                          /* Jobs in parallel              */
         CPUSERVER(2)                          /* Start extra server Mailman p. */
         CPUTZ(CST)                            /* Time zone for this CPU        */
CPUREC   CPUNAME(F204)                         /* Fault Tolerant Agent in DM200 */
         CPUOS(WNT)                            /* Windows operating system      */
         CPUNODE(CENTRAL.ITSC.AUSTIN.IBM.COM)  /* IP address of CPU             */
         CPUTCPIP(31281)                       /* TCP port number of NETMAN     */
         CPUDOMAIN(DM200)                      /* The TWS domain name for CPU   */
         CPUTYPE(FTA)                          /* This is a FTA CPU type        */
         CPUAUTOLNK(ON)                        /* Autolink is on for this CPU   */
         CPUFULLSTAT(OFF)                      /* Full status off for this CPU  */
         CPURESDEP(OFF)                        /* Resolve dep. off for this CPU */
         CPULIMIT(20)                          /* Jobs in parallel              */
         CPUSERVER(2)                          /* Start extra server Mailman p. */
         CPUTZ(CST)                            /* Time zone for this CPU        */
In our example the primary domain manager (F100) will have one mailman
process for its own communication with the MASTERDM and an extra
mailman process for communication with the F101 and F102 fault tolerant
agents (specified by CPUSERVER(1) in the F101 and F102 CPUREC
definitions).
4. Create user and password definitions for Windows fault tolerant workstations.
This is done in the member named in the TWSCTP SERVOPTS
USRMEM(TPUSER) initialization statement. See Example 4-7.
Example 4-7 USRREC
/*********************************************************************/
/* USRREC: Windows users password definitions                        */
/*********************************************************************/
/*-------------------------------------------------------------------*/
/* You must specify at least one USRREC for each Windows workstation */
/* in the distributed TWS network.                                   */
/*-------------------------------------------------------------------*/
USRREC   USRCPU(F102)                /* The F102 Windows workstation */
         USRNAM(tws-e)               /* The user name                */
         USRPSW('chuy5')             /* Password for user name       */
USRREC   USRCPU(F201)                /* The F201 Windows workstation */
         USRNAM(tws-e)               /* The user name                */
         USRPSW('chuy5')             /* Password for user name       */
USRREC   USRCPU(F202)                /* The F202 Windows workstation */
         USRNAM(tws-d)               /* The user name                */
         USRPSW('chuy5')             /* Password for user name       */
USRREC   USRCPU(F203)                /* The F203 Windows workstation */
         USRNAM(tws-e)               /* The user name                */
         USRPSW('chuy5')             /* Password for user name       */
USRREC   USRCPU(F204)                /* The F204 Windows workstation */
         USRNAM(tws-e)               /* The user name                */
         USRPSW('chuy5')             /* Password for user name       */
Note: It is not necessary to define these user IDs in the z/OS security
product, for example, RACF. These user IDs are not validated on the
mainframe.
DD DISP=SHR,DSN=EQQUSER.TWS810.PARM(BATCHOPT)
Note: It is important that you remember to update the JCL for the plan
programs used in job streams dedicated to do the daily TWS for z/OS
housekeeping.
TWSCTP is the name of our end-to-end server that handles the events to and
from the distributed Tivoli Workload Scheduler agents.
2. To let the TWS for z/OS engine start and stop the end-to-end server task we
specify:
SERVERS(TWSCTP,TWSCJSC)
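Taken together, the relevant entries in the engine's OPCOPTS initialization
statement might look like the following sketch (the TPLGYSRV parameter is
shown here as an assumption based on the usual end-to-end setup; all other
OPCOPTS parameters are omitted):

OPCOPTS  TPLGYSRV(TWSCTP)            /* Server handling end-to-end   */
         SERVERS(TWSCTP,TWSCJSC)     /* Servers started by the engine*/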
When the engine starts the end-to-end subtasks, messages like the following
appear in the engine EQQMLOG:
OPC SUBTASK E2E ENABLER  IS BEING STARTED
OPC SUBTASK E2E SENDER   IS BEING STARTED
OPC SUBTASK E2E RECEIVER IS BEING STARTED
E2E SENDER HAS STARTED
E2E RECEIVER HAS STARTED
E2E ENABLER HAS STARTED
3. The end-to-end server task is started by the engine. The engine issues a start
command:
S TWSCTP
4. The TWS for z/OS end-to-end server task (TWSCTP) is started with no
errors. You should see messages as shown in Example 4-9.
Example 4-9 TWSCTP is started
EQQPH00I
EQQPH09I
EQQPH33I
EQQZ024I
EQQPT01I
Figure 4-2 List with all our fault tolerant workstations defined in TWS for z/OS
Tip: As a standard, we put the hostname for the fault tolerant
workstation as the first characters in the Description field for the workstation.
This way we can easily relate the four character workstation name to the
physical machine it points to.
The description field in Figure 4-2 looks a little strange in the Job Scheduling
Console because it does not use fixed-width fonts. In the legacy ISPF panels it
will look more orderly.
The fault tolerant workstations are checked using the Job Scheduling Console
(see Figure 4-3). The Fault Tolerant column indicates that a workstation is fault
tolerant. The Linked column indicates whether the workstation is linked. The
Status column indicates whether the mailman process is up and running on the
fault tolerant workstation.
Figure 4-3 FTWs in the Tivoli Workload Scheduler for z/OS plan
The F204 workstation is Not Available, since we have not installed a TWS fault
tolerant workstation on this machine. We have prepared for a future installation
of the F204 workstation, by creating the CPUREC definitions (see Define the
topology initialization statements on page 179).
Tip: If the workstation does not link as it should, the cause can be that the
writer process has not initiated correctly or the run number for the Symphony
file on the fault tolerant workstation is not the same as the run number on the
master. If you mark the unlinked workstations and right click you will get a
pop-up menu, as shown in Figure 4-4. Then click Link to try to link the
workstation.
Figure 4-4 Right-clicking one of the workstations shows a pop-up menu
You can check the Symphony run number and the Symphony status in the
legacy ISPF using option 6.6.
Tip: If the workstation is Not Available/Offline the cause can be that the
mailman, batchman, and jobman processes are not started on the fault
tolerant workstation. You can right click the workstation to get the pop-up
menu, shown in Figure 4-4 on page 189, and then click Set Status.... This will
give you a new panel (see Figure 4-5 on page 190), where you can try to
activate the workstation by clicking the Active radio button. This action will try
to start the mailman, batchman and jobman processes on the fault tolerant
workstation by issuing a conman start command on the agent.
4.3.7 Create jobs, user definitions, and job streams for verification
tests
To verify that we can run jobs on the fault tolerant workstations we did a
verification test. For the verification test we need job streams and jobs to run on
the fault tolerant workstation.
For every fault tolerant workstation defined in the TWS for z/OS engine, we did
the following:
1. Create a job stream with jobs dedicated to run on the fault tolerant
workstation.
2. Define the task (job or script) member in the SCRPTLIB library for all defined
jobs.
3. Define the user and password for the task on the Windows fault tolerant
workstation.
For every job stream we defined four fault tolerant workstation tasks (called FTW
Task in the JSC). We start the job stream with a dummy start job (called General
job in JSC) and end it with a dummy end job.
Tip: Since it is not possible to add dependencies at the job stream level in TWS
for z/OS (as it is in TWS), dummy start and dummy end general jobs are a
workaround for this TWS for z/OS limitation. When using dummy start and
dummy end general jobs, you can always uniquely identify the start point and
the end point of the jobs in the job stream.
The jobs and dependencies are defined, as shown in Figure 4-7 on page 192,
where the F100DWTESTSTREAM is used as an illustration.
Figure 4-7 Task (job) definition for the test job stream used for verification
For every Windows job in the test job streams we add a member to the
SCRPTLIB library.
Example 4-12 shows a F102J001 job script definition (used in the
F102DWTESTSTREAM).
Example 4-12 F102J001 job script definition
EDIT       EQQUSER.TWS810.SCRPTLIB(F102J001) - 01.33       Columns 00001 00072
Command ===>                                                 Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 /* Definiton for F102J001 job to be executed on F102 machine         */
000002 /*                                                                   */
000003 JOBREC JOBSCR('C:\TWS\E\scripts\japjob1.cmd') JOBUSR(tws-e)
****** **************************** Bottom of Data ****************************
Tip: We have placed all the job scripts in one common directory,
/tivoli/TWS/scripts, on UNIX systems and in C:\TWS\E\scripts on Windows
systems. This makes management and backup much easier.
The SCRPTLIB members can be reused in several job streams and on different
fault tolerant workstations of the same type (UNIX, Windows, etc.). If you, for
example, have a job (script) that is scheduled on all your UNIX systems, you can
create one SCRPTLIB member for this job and define this job in several job
streams on the associated fault tolerant workstations, though this requires that
the script is placed in the same directory on all your systems. This is another
good reason to have all the job scripts placed in the same directories across your
systems.
Tip: If we are going to define a new job, F203J011 on F203, that runs under
the user ID NTPROD, then we add a matching USRREC definition to the
USRREC member:
USRREC   USRCPU(F203)
         USRNAM(NTPROD)
         USRPSW('prodpw')
Step 1. Verify the TWS for z/OS plan programs and job streams
Our daily TWS for z/OS housekeeping job stream contains jobs to handle the
long-term plan and jobs to handle the current plan in TWS for z/OS (see
Figure 4-8 on page 195).
We check the output from these jobs and verify that the topology information
pointed to by the EQQPARM initialization statement is read without any errors.
In all the plan jobs shown in Figure 4-8 we find EQQMLOG messages indicating
that the topology information is OK.
Example 4-13 Topology information
EQQZ013I NOW PROCESSING PARAMETER LIBRARY MEMBER TOPOLOGY
.........
EQQZ016I RETURN CODE FOR THIS STATEMENT IS: 0000
EQQZ014I MAXIMUM RETURN CODE FOR PARAMETER MEMBER TOPOLOGY IS: 0000
EQQZ013I NOW PROCESSING PARAMETER LIBRARY MEMBER TPDOMAIN
........
........
RETURN CODE FOR THIS STATEMENT IS: 0000
MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPDOMAIN IS: 0000
CPU F100 IS SET AS DOMAIN MANAGER OF FIRST LEVEL
CPU F200 SET AS DOMAIN MANAGER
NOW PROCESSING PARAMETER LIBRARY MEMBER TPUSER
MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPUSER IS: 0000
In the EQQMLOG for the current plan extend job, TWSCCPE, we also see the
message:
EQQ3087I THE SYMPHONY FILE HAS BEEN SUCCESSFULLY CREATED
which indicates that the Symphony file was created without errors.
Finally we verify that the end-to-end server, TWSCTP, activates the new
Symphony file. We see the messages shown in Example 4-14 on page 206 in the
TWSCTP EQQMLOG.
Example 4-14 TWSCTP EQQMLOG
EQQPT30I Starting switching Symphony
EQQPT22I Input Translator thread stopped until new Symphony will be available
EQQPT31I Symphony successfully switched
Note: It can take some minutes from when the end-to-end server starts switching
the Symphony file (message EQQPT30I) until the input translator thread is
running again (message EQQPT23I). In our test environment we saw times in
the range of one to two minutes.
Step 3. Verify that FTW jobs are executed without any errors
This verification is straightforward. We check the status of the test job streams
and jobs dedicated to run on the fault tolerant workstations. In Figure 4-9 we
have listed jobs in two of the test job streams: F100DWTESTSTREAM and
F101DWTESTSTREAM. All jobs are completed successfully. This is also the case
for the test job streams dedicated to run on the F102, F200, F201, F202, and
F203 fault tolerant workstations.
This shows that there is an error in the definition for one or more jobs in the job
stream and the job stream is not added to the current plan. If you look in the
EQQMLOG for the TWS for z/OS engine you will find messages like:
EQQM992E WRONG JOB DEFINITION FOR THE FOLLOWING OCCURRENCE:
EQQZ068E JOBRC IS AN UNKNOWN COMMAND AND WILL NOT BE PROCESSED
EQQZ068I FURTHER STATEMENT PROCESSING IS STOPPED
We have a typo error: JOBRC should be JOBREC. The solution to this problem
is simply to correct the typo error and try to add the job stream again. The job
stream must be added to the TWS for z/OS plan again, because the job stream
was not added the first time (due to the typo error).
Note: You will get similar error messages in the EQQMLOG for the plan
programs, if the job stream is added during plan extension. The error
messages issued by the plan program are:
EQQZ068E JOBRC IS AN UNKNOWN COMMAND AND WILL NOT BE PROCESSED
EQQZ068I FURTHER STATEMENT PROCESSING IS STOPPED
EQQ3077W BAD MEMBER F100J011 CONTENTS IN EQQSCLIB
Please note that the plan extension program will end with return code 0.
Another common error is a misspelled name for the script or the user (in the
JOBREC, JOBSCR, or JOBUSR definition) in the FTW job.
If we have a JOBREC definition with a typo error like this:
/* Definiton for F100J010 job to be executed on F100 machine         */
/*                                                                   */
JOBREC JOBSCR('/tivoli/TWS/scripts/jabjob1') JOBUSR(tws-e)
Here the typo error is in the name of the script. It should be japjob1 instead of
jabjob1. This typo will result in an error with the error code FAIL, when the job is
run. The error will not be caught by the plan programs or when you add the job
stream to the plan in TWS for z/OS.
It is possible to correct this error easily using the following steps:
1. First correct the typo in the member in the SCRPTLIB.
2. Then add the same job stream again to the plan in TWS for z/OS.
Tip: Before adding the job stream again you can modify the input arrival time
on the existing job stream to input arrival time minus one minute. This way you
can re-add the job stream again and give it the same input arrival time as the
original job stream. Using this approach, TWS for z/OS will automatically solve
all dependencies (if any) correctly.
This way of handling typo errors in the JOBREC definitions is actually the same
as if you performed a rerun on a TWS master. The job stream must be
re-added to the TWS for z/OS plan to have TWS for z/OS send the new JOBREC
definition to the fault tolerant workstation agent. Remember when doing extend
or replan of the TWS for z/OS plan, that the JOBREC definition is built into the
Symphony file. By re-adding the job stream we ask TWS for z/OS to send the
re-added job stream, including the new JOBREC definition, to the agent.
If you have defined the wrong password for a Windows user ID in the USRREC
topology definition or the password has been changed on the Windows machine,
the FTW job will end with an error and the error code FAIL.
To solve this problem you have two choices:
Change the wrong USRREC definition and redistribute the Symphony file
(using option 3.5 from legacy ISPF).
This approach can be disruptive if you are running a huge batch load on
FTWs and are in the middle of a batch peak.
Another possibility is to log into the primary domain manager (the domain
manager directly connected to the TWS for z/OS server) and then alter the
password. This can be done either using conman or using a JSC instance
pointing to the primary domain manager. When you have changed the
password you simply rerun the job in error.
The USRREC definition should still be corrected so it will take effect the next
time the Symphony file is created.
Step 4. Test possibility to browse job log for all FTW jobs
The final verification is to test that it is possible to browse the job log for all FTW
jobs in the test job streams.
Note: The browse job log function used for FTW jobs does not use Tivoli
Workload Scheduler for z/OS Data Store to retrieve job logs for FTW jobs.
The test is done from a list with the FTW jobs in the JSC. To browse the job log
right click the FTW job and select Browse Job Log... in the pop-up menu (see
Figure 4-10 on page 199).
Figure 4-10 Browse Job Log from TWS for z/OS JSC
If the job log has not been retrieved before, we will receive a pop-up window with
an information message (see Figure 4-11). The message says that the job log is
not on the TWS for z/OS engine, but that the engine has requested a copy of the
job log from the remote FTW.
Figure 4-11 Pop-up window when browsing a job log that is not on the engine
We try to browse the job log again after some seconds, and the JSC shows a
new pop-up window with the job log, as shown in Figure 4-12.
The arrows on the right side of the pop-up window can be used to scroll up and
down in the job log. The arrows on the bottom of the pop-up window can be used
to scroll left and right in the pop-up window.
This is normal behavior for the TWS network and is due to the fact that events
can pass several TWS domain managers to reach their final destination.
Furthermore, TWS processes have some timers that define the intervals
when the TWS processes check job statuses.
Windows 2000.
Linux.
Linux z/OS.
Other third party access methods (e.g., Tandem).
Open extended agent interface, which allows you to write extended agents
for non-supported platforms.
User ID and password definitions of NT workstations are easier to implement
than the impersonation support.
You benefit from the same SAP r3batch interface as the Tivoli Workload
Scheduler for z/OS tracker agents.
TBSM support lets you integrate the entire end-to-end environment.
You do not have to touch your planning-related definitions, like run cycles,
periods, and calendars.
You must be aware of the limitations with this version, when moving to the
fault tolerant agents. Refer to Considerations on page 81 for a detailed
discussion of these.
Tip: We recommend that you do not start migrating the most critical workload
of your environment first. The migration process needs some handling and
experience; therefore a good starting point could be to first migrate a test
tracker agent with test scripts. If this is successful, you can continue with less
critical production job streams and finally the most important ones.
[Figure: the existing Tivoli Workload Scheduler for z/OS tracker agent
environment. The z/OS sysplex hosts the active controller and two standby
controllers; tracker agents run on AIX, Solaris, OS/400, Windows NT,
Windows 2000, and HP-UX.]
Figure 4-14 on page 206 shows the existing environment adapted to the domain
topology. Now it consists of three domains with one backup domain manager.
[Figure 4-14: the existing environment adapted to the domain topology.
MASTERDM is the z/OS sysplex with the active engine, standby engines, and
the server; DomainZ has an AIX domain manager (DMZ); DomainA and
DomainB have AIX and HP-UX domain managers (DMA and DMB), an AIX
backup domain manager, and fault tolerant agents FTA1 through FTA4 on AIX,
OS/400, Windows 2000, and Solaris.]
During the migration process both environments coexist. This means that on
every machine a tracker agent and a fault tolerant workstation are installed.
Please note that the transferring of scripts needs the following considerations:
To make it consistent and simplify maintenance activities, use standard script
directories for all fault tolerant agents.
Variable substitution is not supported in the current release. As a
circumvention of this limitation please refer to Section 4.4.6, TWS for z/OS
JCL variables in connection with TWS parameters on page 211.
Automatic recovery statements should be commented out, since they are not
supported in the current release. The reason to comment them out rather
than removing them is that, in case of a migration fallback, you will be able to reuse
them.
Be aware that the z/OS engine and the distributed operating systems use
different code pages. This means that after transmission, characters might be
misinterpreted. To resolve this issue, adjust the code page translation of the
transfer program you use.
Tip: If you still want to use a single repository script library, you may transfer
all scripts to one script library on a domain manager or on a dedicated
machine within the TWS network. This machine distributes the script to the
local machine as an initial load, and acts as a focal point for all of the scripts
within the network. The script transfer can be accomplished for example with a
software distribution product such as Tivoli Software Distribution. Script
changes are performed only within this dedicated library. After the
modifications are done, they could be distributed to the local fault tolerant
agents.
Next, syntax and validation checking of the job definitions takes place. In case of
a syntax problem, a common message issued by Tivoli Workload Scheduler for
z/OS is: EQQM07E Job definition referenced by this occurrence is wrong.
If the joblib member contains only a command or the invocation of a local script
(where the script already resides on the local machine), then you can directly
write this in the script library member, following the scriptlib syntax.
Example
The joblibrary contains a member with the syntax shown in Example 4-15.
Example 4-15 Sample script in job library
#!/bin/ksh
#Sample script
/tivoli/TWS/scripts/cleanup.sh
The script resides on the local machine; the invocation path can be taken over
into the script library member. The submitting user ID is either the user ID of the
tracker agent, or is given by submit exit EQQUX001. Example 4-16 shows the
same script in the script library member.
Example 4-16 Sample script in script library syntax
/* This scriptlibrary job calls the claunup script */
JOBREC JOBSCR('/tivoli/TWS/scripts/cleanup.sh') JOBUSR(tws-e)
The invocation path is included into the jobscr keyword. The submitting user ID is
now part of the definition.
Note: The job submit exit EQQUX001 is not called for fault tolerant
workstations. You can still use the exit unchanged.
For more information about the job definition refer to Job definitions on
page 129.
If you use the impersonation support for NT tracker agent workstations, it does
not interfere with the USRREC definitions. The impersonation support assigns a
user ID based on the user ID from exit EQQUX001. Since the exit is not called for
fault tolerant workstations, impersonation support is not used anymore.
Parallel testing
Parallel testing offers you the ability to take over production job streams, with the
exception that they are running only test scripts. This extends your testing
capabilities in that you are able to monitor how your tracker agent workload
behaves in the end-to-end environment. The best way to accomplish this is to
run this task on your Tivoli Workload Scheduler for z/OS test system, with
identical end-to-end solution definitions. Follow these steps to run parallel
testing:
1. Create a new scriptlib member that contains the entry shown in Example 4-17.
Example 4-17 Japjob1 test script
/* Sample Test job                                                   */
JOBREC JOBSCR('/tivoli/TWS/scripts/japjob1') JOBUSR(tws-e)
4.4.6 TWS for z/OS JCL variables in connection with TWS parameters
In this section we present a way to make a relation between TWS for z/OS JCL
variables and TWS parameters. This way you can transfer values for TWS for
z/OS JCL variables to TWS parameters. The TWS parameters can be
referenced in jobs (scripts) executed locally on the TWS workstation by using the
TWS parms command.
This is a workaround for the current limitation in TWS for z/OS, where you cannot
use TWS for z/OS JCL variables in jobs on fault tolerant workstations. With this
workaround, we show you a solution where you do not lose the fault tolerance.
The process used when distributing TWS for JCL variables to parameters on
fault tolerant workstations can be seen in Figure 4-15 on page 212.
[Figure: an IEBGENER job updates the FPARMUPD member in the SCRPTLIB
with TWS for z/OS JCL variable values; the FTW jobs defined on each fault
tolerant workstation (for example F100 on AIX) then run and store the values in
the local TWS parameters database.]
Figure 4-15 Process of distributing TWS for JCL variables to TWS parameters
The TWS for z/OS JCL variables are substituted by TWS for z/OS when the
job is submitted. The definitions on the SYSUT1 DD-card will be placed in the
member FPARMUPD in the dataset EQQUSER.TWS810.SCRPTLIB, pointed
by the SYSUT2 DD-card (our SCRPTLIB library).
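A rough sketch of such an IEBGENER step could look like this. The step name,
the ZZ delimiter, and the &VAR1./&VAR2. symbols are only illustrative stand-ins
for the TWS for z/OS JCL variables to be passed, and JCL variable substitution
must be active for the job:

//UPDPARM  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT2   DD DISP=SHR,DSN=EQQUSER.TWS810.SCRPTLIB(FPARMUPD)
//SYSUT1   DD DATA,DLM=ZZ
/* FPARMUPD job to be executed on all UNIX machines                  */
JOBREC JOBSCR('/tivoli/TWS/scripts/setparms.sh parm1=&VAR1.
       parm2=&VAR2.')
JOBUSR(tws-e)
ZZ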
2. The updated FPARMUPD member is defined on fault tolerant workstations in
a job stream (FTW jobs column in Figure 4-15 on page 212).
After the IEBGENER job is executed, the FPARMUPD member will look like
Example 4-19 (the job was executed on March the 4th, 2002).
Example 4-19 Definition of the FPARMUPD job after the IEBGENER job has run
/* Definiton for FPARMUPD job to be executed on all UNIX machines    */
/*                                                                   */
JOBREC JOBSCR('/tivoli/TWS/scripts/setparms.sh parm1=020305
       parm2=020318 parm3=Stefan parm4=1 parm5=Michael parm6=Finn
       parm7=020304')
JOBUSR(tws-e)
3. When the fault tolerant workstation jobs are executed on the local
workstation, they will call a script named setparms.sh. This script will be
called with seven parameters in our example (parm1=, parm2=, ..., parm7=).
The setparms.sh script looks like the information shown in Example 4-20.
Example 4-20 setparms.sh script
#!/bin/ksh
TWS_HOME=/tivoli/TWS/E/tws-e
for p in $*
do
  PARM_NAME=$( echo ${p} | cut -d= -f1 )
  PARM_VALUE=$( echo ${p} | cut -d= -f2 )
  PARM_CMD="${TWS_HOME}/bin/parms -c $PARM_NAME ${PARM_VALUE}"
  echo "Running command: ${PARM_CMD}"
  ${PARM_CMD}
done
Note that the setparms.sh script will loop through all the parameters supplied
in the JOBSCR definition. This way it is possible to specify more than seven
parameters. The length of the script defined in JOBSCR can be up to 4096
characters.
The parameters are added to the TWS parameter database by the parms
command:
PARM_CMD="${TWS_HOME}/bin/parms -c $PARM_NAME ${PARM_VALUE}"
The first time the parms command is run on a workstation, it will create the
parameters database if it does not already exist.
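Once the values are stored, a job script on the fault tolerant workstation can
read them back with the same parms command. A minimal sketch, using the
home directory and parameter name from the examples above:

#!/bin/ksh
# Read back one of the distributed values from the local parameters database
TWS_HOME=/tivoli/TWS/E/tws-e
RUNDATE=$(${TWS_HOME}/bin/parms parm1)
echo "Running with parm1=${RUNDATE}"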
We define the IEBGENER in its own job stream. The job stream has the same
run-cycle as the daily housekeeping flow. The IEBGENER job is made
predecessor to the current plan extend job in the daily housekeeping job stream.
This way the FPARMUPD member will be updated right before the TWS for z/OS
plan is extended and a new Symphony file is created. The parameters will then
contain JCL variables for today's production.
The fault tolerant workstation jobs (FPARMUPD-10, FPARMUPD-15 and
FPARMUPD-20) are defined in the same job stream, FXXXDWPARMUPDATE
(see Figure 4-16).
The FXXXDWPARMUPDATE job stream has the same run-cycle as the daily
housekeeping flow. The DUMMYSTR-5 job is made successor to the current
plan extend job in the daily housekeeping job stream. This way the parameters
will be updated on the local workstation right after the daily plan job is completed.
The FXXXDWPARMUPDATE job stream must run right after the current plan
extend job. The Symphony file is updated and distributed to the distributed
network by the current plan job. The FPARMUPD job that was updated by our
IEBGENER job will be read into the Symphony file by the current plan job. Since
the Symphony file is distributed to the workstations, the local parameter
database will be updated with our JCL variable values.
The FXXXDWPARMUPDATE job stream (and its jobs) must be executed before
any other job streams (or jobs) that use the parameters. These job streams
should have the FXXXDWPARMUPDATE jobs as predecessors.
In Figure 4-17 you can see the parameters after the FPARMUPD job has been
executed on the F100 workstation (remember that the IEBGENER job was
executed on March the 4th, 2002).
To view the parameter database, we use the JSC instance pointing to our
primary domain manager (TWSC-F100-Eastham in Figure 4-17 on page 215). In
this instance we have created a parameter database list, and called it
Parameters. This way we can check the parameters created or updated locally
on the primary domain manager.
Note: We did experience some minor problems with the parms command after
updating TWS with patch 8.1-TWS-0001. After the patch it was not possible to
issue the parms command. But we did find a workaround. The workaround is
to build the jobs database on the workstation.
[Figure: the Tivoli Workload Scheduler managed network before the conversion.
An AIX master domain manager (MASTERDM) controls DomainA and DomainB,
each with an AIX domain manager (DMA and DMB) and fault tolerant agents
FTA1 through FTA4 on AIX, OS/400, HP-UX, Windows 2000, and Solaris.]
In Figure 4-19 on page 218, we have a TWS for z/OS managed network. The
database management and daily planning are carried out by the TWS for z/OS
engine.
[Figure 4-19: the Tivoli Workload Scheduler for z/OS managed network; the
z/OS sysplex hosts the active engine and two standby engines.]
The conversion process is to change the TWS master domain manager to the
primary domain manager and then connect it to the TWS for z/OS engine (new
master domain manager). The result of the conversion is a new end-to-end
network managed by the TWS for z/OS engine (see Figure 4-20 on page 219).
[Figure 4-20: the converted end-to-end network. The z/OS sysplex (MASTERDM)
hosts the master domain manager and two standby master domain managers;
the former TWS master becomes the AIX domain manager DMZ of DomainZ;
DomainA and DomainB keep their AIX and HP-UX domain managers (DMA and
DMB) and fault tolerant agents FTA1 through FTA4 on AIX, OS/400,
Windows 2000, and Solaris.]
We have outlined the rationale for conversion to TWS for z/OS end-to-end
scheduling. See The rationale behind the conversion to end-to-end scheduling
on page 171. Some important aspects of the conversion you should consider
are:
How is your TWS and TWS for z/OS organization today?
Do you have dependencies between jobs in TWS and in TWS for z/OS?
Or do most of the jobs in one scheduler (TWS or TWS for z/OS) run
independently of jobs in the other scheduler (TWS or TWS for z/OS)?
Have you already managed to solve dependencies between jobs in TWS
and in TWS for z/OS in an efficient way?
The current usage of TWS-specific functions that are not available in TWS for
z/OS.
Extended planning capabilities, long term plan, current plan that spans
more than 24 hours?
Better handling of carry-forward job streams?
Powerful run-cycle and calendar functions?
Which platforms or systems are going to be managed by the TWS for z/OS
end-to-end scheduling?
What kind of integration do you have between TWS and, for example, SAP
R/3, PeopleSoft, or Oracle Applications?
Partial conversion of some jobs from the TWS-managed network to the TWS
for z/OS managed network?
Partial conversion: For example, 15 percent of your TWS-managed jobs or
workload is directly related to the TWS for z/OS jobs or workload. This means
the TWS jobs are either predecessors to TWS for z/OS jobs or successors to
TWS for z/OS jobs, and the current handling of these TWS and TWS for z/OS
inter-dependencies is not effective or stable with the solution you have today.
Names for workstations can be up to 16 characters in TWS (if you are using
expanded databases in TWS). In TWS for z/OS, workstation names can be
up to four characters. This means you have to establish a new naming
standard for the fault tolerant workstations in TWS for z/OS.
Naming standards for job names
In TWS you can specify job names with lengths of up to 40 characters (if you
are using expanded databases in TWS). In TWS for z/OS, job names can be
up to eight characters. This means that you have to establish a new naming
standard for jobs on fault tolerant workstations in TWS for z/OS.
Adoption of the existing TWS for z/OS object naming standards
You probably already have naming standards for job streams, workstations,
job names, resources, and calendars in TWS for z/OS. When converting
TWS database objects to the TWS for z/OS databases, you need to adapt
them to the existing TWS for z/OS naming standards.
Access to the objects in the TWS for z/OS database and plan
Access to TWS for z/OS database and plan objects is protected by your
security product (for example, RACF). Depending on the naming standards
for the imported TWS objects, you may need to modify the definitions in your
security product.
Note: If you decide to reuse the TWS distributed workstation instances in your
TWS for z/OS managed network, this is also possible. You may decide to
move the distributed workstations one by one (depending on how you have
grouped your job streams and how you are doing the conversion). When a
workstation is going to be moved to TWS for z/OS, you simply change the port
number in the localopts file on the TWS workstation. The workstation will then
be active in TWS for z/OS at the next plan extension, replan, or redistribution
of the Symphony file (remember to create the associated DOMREC and
CPUREC definitions in the TWS for z/OS initialization statements).
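The port number referred to in the note is the netman listening port, which is set
by the nm port entry in the localopts file. For example (the value 31182 is only an
illustration; it must match the CPUTCPIP() value in the corresponding CPUREC
definition):
nm port =31182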
Note: If the same job script is going to be executed on several systems (it is
defined on several workstations in TWS), you only need to create one member
in the SCRPTLIB dataset. This job (member) can be defined on several fault
tolerant workstations in several job streams in TWS for z/OS. It requires that
the script is placed in a common directory (path) across all systems.
For these object definitions, you have to design alternative ways of handling
them in TWS for z/OS.
How TWS for z/OS end-to-end scheduling works (engine, server, domain
managers, etc.)
How the TWS network topology has been adopted in TWS for z/OS
The text files are created from the TWS databases with the composer create
command:
create calendars.txt    from CALENDARS
create workstations.txt from CPU=@
create jobdef.txt       from JOBS=@#@
create jobstream.txt    from SCHED=@#@
create parameter.txt    from PARMS
create resources.txt    from RESOURCES
create prompts.txt      from PROMPTS
create users.txt        from USERS=@#@
These text files are a good starting point when trying to estimate the effort and
time for conversion from TWS to TWS for z/OS.
Use the workstations.txt file when creating the topology definitions (DOMREC
and CPUREC) in TWS for z/OS.
In the jobdef.txt file you have the workstation name for the script (used in the job
stream definition), the script name (which goes to the JOBREC JOBSCR()
definition), the stream logon (which goes to the JOBREC JOBUSR() definition),
the description (which can be added as comments in the SCRPTLIB member),
and the recovery definition. The recovery definition needs special consideration
because it cannot be converted to TWS for z/OS auto-recovery; here you need to
make some workarounds. Usage of TWS CPU class definitions also needs
special consideration: the job definitions using CPU classes probably have to be
copied to separate workstation (CPU) specific job definitions in TWS for z/OS.
The task can be automated by coding a program (script or REXX) that reads the
jobdef.txt file and converts each job definition to a member in the SCRPTLIB. If
you have many TWS job definitions, a program that can help automate this task
can save a considerable amount of time.
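As an illustration only, the following is a minimal sketch of such a conversion
program in ksh and awk. It assumes a jobdef.txt layout of workstation#jobname
lines followed by SCRIPTNAME and STREAMLOGON lines (verify this against your
own composer output), and it ignores descriptions, recovery options, and CPU
classes, which still need manual handling:
#!/bin/ksh
# Sketch: generate one JOBREC statement per job definition in jobdef.txt.
# The generated statements can then be copied into SCRPTLIB members.
awk '
  $1 ~ /^[A-Z0-9]+#/ { split($1, a, "#"); cpu = a[1]; job = a[2] }
  /SCRIPTNAME/  { gsub(/"/, "")
                  for (i = 1; i <= NF; i++) if ($i == "SCRIPTNAME") scr = $(i + 1) }
  /STREAMLOGON/ { for (i = 1; i <= NF; i++) if ($i == "STREAMLOGON") usr = $(i + 1)
                  # One job definition is complete - write the JOBREC
                  printf "/* %s on workstation %s */\n", job, cpu
                  printf "JOBREC JOBSCR(%c%s%c) JOBUSR(%s)\n\n", 39, scr, 39, usr }
' jobdef.txt > jobrec.out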
The users.txt file (if you have Windows NT/2000 jobs) is converted to
USRREC initialization statements in TWS for z/OS.
Be aware that the passwords for the user IDs are encrypted in the users.txt file,
so you cannot automate the conversion right away. You need to get the
password as it is defined on the Windows workstations, and type it into the
USRREC USRPSW() definition.
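For reference, a USRREC definition has this general form (the workstation
name, user ID, and password shown here are illustrative values only; the
password is entered in clear text):
USRREC    USRCPU(F301)          /* Windows fault tolerant workstation */
          USRNAM(tws)           /* Windows user ID                    */
          USRPSW('secret')      /* Password as defined on Windows     */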
The jobstream.txt file is used to generate corresponding job streams in TWS
for z/OS. The calendars.txt file is used in connection with the jobstream.txt file
when generating run-cycles for the job streams in TWS for z/OS. It could be
necessary to create additional calendars in TWS for z/OS.
Dependencies on job stream level (use dummy start and dummy end
jobs in TWS for z/OS for job stream dependencies).
Note that dependencies also include dependencies on prompts, files,
and resources.
TWS job and job stream priority (0 to 101) must be amended to TWS
for z/OS priority (1 to 9). Furthermore, priority in TWS for z/OS is
always on job stream level (it is not possible to specify priority on job
level).
Description texts longer than 24 characters are not allowed for job streams
or jobs in TWS for z/OS. If you have TWS job streams or jobs with more
than 24 characters of description text, you should consider adding this text
as TWS for z/OS operator instructions.
If you have a large number of TWS job streams, manual handling of job
streams can be too time-consuming. The task can be automated to a certain
extent by coding a program (script or REXX).
A good starting point is to code a program that identifies all areas where you
need special consideration or action. Use the output from this program to
estimate the effort of doing the conversion. Furthermore, the output can be
used to identify and group used TWS functions where special workarounds
must be performed when converting to TWS for z/OS.
The program can be further refined to handle the actual conversion,
performing the following steps:
Read all the text files.
Analyze the job stream and job definitions.
Create corresponding TWS for z/OS job streams with amended run-cycles
and jobs.
Generate a file with TWS for z/OS batch loader statements for job streams
and jobs (batch loader statements are TWS for z/OS job stream definitions
in a format that can be loaded directly into the TWS for z/OS databases).
The batch loader file can be sent to the z/OS system and used as input to the
TWS for z/OS batch loader program. The TWS for z/OS batch loader will read
the file (dataset) and create the job streams and jobs defined in the batch
loader statements.
The resources.txt file is used to define the corresponding resources in TWS
for z/OS.
If you do not run a sysplex, but have more than one z/OS system with shared
DASD, you should make sure that the Tivoli Workload Scheduler for
z/OS engine can be moved from one system to another without any
problems.
Configure your z/OS systems to use VIPA.
VIPA is used to make sure that the TWS for z/OS end-to-end server always
gets the same IP address, no matter which z/OS system it is run on. VIPA
assigns a system-independent IP address to the TWS for z/OS server task.
If you cannot use VIPA, you should consider other ways of assigning a
system-independent IP address to the Tivoli Workload Scheduler for z/OS
server task. This can, for example, be a hostname file, DNS, or stack affinity.
Configure a backup domain manager for the primary domain manager.
Refer to the TWS for z/OS end-to-end configuration, shown in Figure 4-1 on
page 174, for the fail-over scenarios.
When the environment is configured to be fail-safe, the next step is to test that
the environment actually is fail-safe. We did the following fail-over tests:
Switch to the TWS for z/OS backup engine.
Switch to the TWS backup domain manager.
OPCHOST(PLEX)
In the initialization statements for the TWS for z/OS engine (pointed to by the
member of the EQQPARM library as specified by the parm parameter on the JCL
EXEC statement), OPCHOST(PLEX) means that the engine has to start as the
controlling system. If there already is an active engine in the XCF group, the
engine instead continues its startup as a standby engine.
Note: OPCOPTS OPCHOST(YES) must be specified if you start the engine
with an empty checkpoint dataset. This is, for example, the first time you start
a newly installed engine or after you have migrated from a previous release of
TWS for z/OS.
OPCHOST(PLEX) is valid only when the XCF group and member have been
specified. Also, this selection requires that TWS for z/OS is running on
z/OS/ESA Version 4 Release 1 or later. Since we are running z/OS 1.3 we can
use the OPCHOST(PLEX) definition. We specify the following in the XCF
group and member definitions for the engine:
XCFOPTS   GROUP(TWS810)
          MEMBER(TWSC&SYSNAME.)
/*        TAKEOVER(SYSFAIL,HOSTFAIL)        Do takeover manually !! */
Tip: We use the z/OS sysplex-wide SYSNAME variable when specifying the
member name for the engine in the sysplex. Using z/OS variables this way we
can have common TWS for z/OS parameter member definitions for all our
engines (and agents as well).
Since we have not specified the TAKEOVER parameter, we are doing the
switch to one of the backup engines manually. The switch is done by issuing
the following modify command on the z/OS system where you want the
backup engine to take over:
F TWSC,TAKEOVER
Where TWSC is the name of our TWS for z/OS backup engine started task
(same name on all systems in the sysplex).
The takeover can, for example, be managed by SA/390. This way SA/390 can
integrate the switch to a backup engine with other automation tasks in the
engine or on the system.
We did not define a TWS for z/OS APPC server task for the Tivoli Workload
Scheduler for z/OS panels and PIF programs, as described in Remote panels
and program interface applications on page 43, but it is strongly recommended
to use a TWS for z/OS APPC server task in sysplex environments where the
engine can be moved to different systems in the sysplex. If you do not use the
TWS for z/OS APPC server task you must log off and then log on to the system
where the engine is active. This can be avoided by using the TWS for z/OS
APPC server task.
4.6.2 Configure DVIPA for the TWS for z/OS end-to-end server
To make sure that the engine can be moved from SC64 to either SC63 or SC65,
Dynamic VIPA is used to define the IP address for the server task. This DVIPA IP
address is defined in the profile dataset pointed to by PROFILE DD-card in the
TCPIP started task.
The VIPA definition used to define logical sysplex-wide IP addresses for the TWS
for z/OS end-to-end server, engine, and JSC server is shown in Example 4-22.
Example 4-22 The VIPA definition
VIPADYNAMIC
 viparange define 255.255.255.248 9.12.6.104
ENDVIPADYNAMIC
PORT
 424   TCP TWSC    BIND 9.12.6.105
 5000  TCP TWSCJSC BIND 9.12.6.106
 31281 TCP TWSCTP  BIND 9.12.6.107
In Example 4-22 on page 232, the first column under PORT is the port number,
the third column is the name of the started task, and the fifth column is the logical
sysplex-wide IP address.
Port 424 is used for the TWS for z/OS tracker agent IP address, port 5000 is used
for the TWS for z/OS JSC server task, and port 31281 is used for the TWS for
z/OS end-to-end server task.
With these VIPA definitions we have made a relation between port number,
started task name, and the logical IP address that can be used sysplex-wide.
The TWSCTP hostname and the 31281 port number used for the TWS for z/OS
end-to-end server are defined in the TOPOLOGY HOSTNAME(TWSCTP)
initialization statement used by the TWSCTP server and the TWS for z/OS plan
programs.
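A minimal excerpt of this part of the TOPOLOGY statement could look as follows
(only the HOSTNAME and port values are taken from our configuration; any
other TOPOLOGY keywords in your installation remain unchanged):
TOPOLOGY  HOSTNAME(TWSCTP)      /* Sysplex-wide DVIPA hostname       */
          PORTNUMBER(31281)     /* Port reserved for the TWSCTP task */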
When the TWS for z/OS engine creates the Symphony file, the TWSCTP
hostname and 31281 port number will be part of the Symphony file. The primary
domain manager (F100) and the backup domain manager (F101) will use this
hostname when they establish outbound IP connections to the TWS for z/OS
server. The backup domain manager only establishes outbound IP connections
to the TWS for z/OS server if it is going to take over the responsibilities for the
primary domain manager.
Tip: Dynamic VIPA redirects the inbound connection from mailman on the
primary domain manager (F100) to IP address 9.12.6.107. But the outbound
connection from the TWS for z/OS server mailman will use the IP address for
the z/OS system. The local IP address for outbound connections is
determined by the routing table on the z/OS system.
The F101 fault tolerant agent can be configured to be the backup domain
manager simply by specifying:
CPUFULLSTAT(ON)
CPURESDEP(ON)
For the CPUREC definition for the F101 workstations, see Define the topology
initialization statements on page 179.
With CPUFULLSTAT (full status information) and CPURESDEP (resolve
dependency information) set to On, the Symphony file on F101 is updated with
the same reporting and logging information as the Symphony file on F100. The
backup domain manager will then be able to take over the responsibilities of the
primary domain manager.
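As a reminder, the relevant part of the F101 CPUREC would then contain entries
like the following excerpt (only the keywords shown here matter for the backup
domain manager role; the complete definition is in the topology section
referenced above):
CPUREC    CPUNAME(F101)         /* Backup domain manager workstation */
          CPUDOMAIN(DM100)      /* Member of the DM100 domain        */
          CPUTYPE(FTA)          /* Fault tolerant agent              */
          CPUFULLSTAT(ON)       /* Receive full status information   */
          CPURESDEP(ON)         /* Resolve dependency information    */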
When canceling the TWSCTP server task we get several BPXP018I messages:
BPXP018I THREAD 1058D09000000000, IN PROCESS 16908339, ENDED
WITHOUT BEING UNDUBBED WITH COMPLETION CODE 00222000,
AND REASON CODE 00000000.
We initiate the takeover in the TWSC backup engine on the SC63 system by
issuing the following modify command:
F TWSC,TAKEOVER
The TWSC engine on SC63 does the takeover and issues the message:
EQQZ048I AN OPC MODIFY COMMAND HAS BEEN PROCESSED.
EQQZ129I TAKEOVER IN PROGRESS
MODIFY TWSC,TAKEOVER
As part of the takeover, the TWSC engine on SC63 issues a start command for
the TWSCTP end-to-end server task on SC63.
It can take some time before the job status is updated in the z/OS plan after the
switch to a backup engine. By default, the mailman process on the primary
domain manager waits 600 seconds before it attempts to link to an unlinked
workstation (specified by the mm retrylink option in the localopts file).
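If you want the relink to be attempted sooner after a switch, this interval can be
lowered in the localopts file on the domain manager, for example (120 seconds is
only an illustrative value):
mm retrylink =120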
By a long-term switch we mean that the switch to the backup manager will be
effective across the current plan extension or replan.
Notice from Figure 4-23 that F100 is MANAGER (see the CPU Type column) for
the DM100 domain. F101 is FTA (see the CPU Type column) in the DM100
domain.
To simulate that the F100 primary domain manager is down or unavailable due to
a system failure, we issue the switch manager command on the F101 backup
domain manager. The switch manager command is initiated from the conman
command line on F101:
conman switchmgr "DM100;F101"
Where DM100 is the domain and F101 is the fault tolerant workstation we are
going to switch to.
The F101 fault tolerant workstation responds with the message shown in
Example 4-23.
Example 4-23 Message from F101
TWS for UNIX (AIX)/CONMAN 8.1 (1.36.1.3)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
Corp.
Installed for group 'TWS-EndToEnd'.
2. Then right click the DM100 domain (see Figure 4-24) to get a pop-up window
(as shown in Figure 4-25 on page 239).
Figure 4-25 Right click the DM100 domain to get the pop-up window
3. Click Switch Manager... in the pop-up window shown in Figure 4-25. The
JSC shows a new pop-up window, where we can search for the agent we will
switch to (see Figure 4-26).
4. Click the search button ... (the square box with three dots to the right of the
F100 domain), as shown in Figure 4-26, and JSC shows the Find Workstation
Instance pop-up window (Figure 4-27 on page 240).
5. Click the Start button (Figure 4-27). JSC shows a new pop-up window that
contains all the fault tolerant workstations in the network (see Figure 4-28 on
page 241).
6. If we specify a filter in the Find field (shown in Figure 4-27), this filter will be
used to narrow the list of workstations that are shown (Figure 4-28 on
page 241).
7. Finally, mark the workstation to switch to (F101 in our example) and click the
OK button in the Find Workstation Instance pop-up window (Figure 4-28).
8. Click the OK button in the Switch Manager - Domain pop-up window to
initiate the switch. Notice that the selected workstation (F101) appears in the
pop-up window (see Figure 4-29).
Figure 4-29 Switch Manager - Domain pop-up window with selected FTA
The switch to F101 is now initiated and TWS will perform the switch.
Figure 4-30 Status for the workstations after the switch to F101
From Figure 4-30 it can be verified that F101 is now MANAGER (see CPU Type
column) for the DM100 domain (see the Domain column). The F100 is changed
to an FTA (see the CPU Type column).
The OPCMASTER workstation has the status unlinked (see the Link Status
column in Figure 4-30). This status is correct, since we are using the JSC
instance pointing to the F100 workstation. The OPCMASTER has a linked status
on F101, as expected.
Switching to the backup domain manager takes some time, so be patient. The
reason for this is that the switch manager command stops the backup domain
manager and restarts it as the domain manager. All domain member fault
tolerant workstations are informed about the switch, and the old domain manager
is converted to a fault tolerant agent in the domain. The fault tolerant
workstations use the switch information to update their Symphony file with the
name of the new domain manager. Then they stop and restart to link to the new
domain manager.
On rare occasions the link status is not shown correctly in the JSC after a switch
to the backup domain manager. If this happens, try to link the workstation
manually (by right clicking the workstation and clicking Link in the pop-up
window).
Note: To reactivate F100 as the domain manager, simply do a switch manager
to F100 or run Symphony redistribute. The F100 will also be reinstated as the
domain manager when you run the extend or replan programs.
From Figure 4-31 on page 243 it can be verified that F101 is now MANAGER
(see CPU Type column) for the DM100 domain (see the Domain column). F100
is changed to an FTA (see the CPU Type column).
The OPCMASTER workstation has the status unlinked (see the Link Status
column in Figure 4-31 on page 243). This status is correct, since we are using
the JSC instance pointing to the F100 workstation. The OPCMASTER has a
linked status on F101, as expected.
Step 3. Update the DOMREC definitions for server and plan program
We update the DOMREC definitions (described in Define the topology
initialization statements on page 179) so F101 will be the new primary domain
manager. See Example 4-24.
Example 4-24 DOMREC definitions
/**********************************************************************/
/* DOMREC: Defines the domains in the distributed Tivoli Workload     */
/*         Scheduler network                                           */
/**********************************************************************/
/*--------------------------------------------------------------------*/
/* Specify one DOMREC for each domain in the distributed network,     */
/* with the exception of the master domain (whose name is MASTERDM    */
/* and consists of the TWS for z/OS engine).                           */
/*--------------------------------------------------------------------*/
DOMREC    DOMAIN(DM100)         /* Domain name for 1st domain     */
          DOMMNGR(F101)         /* Chatham FTA - domain manager   */
          DOMPARENT(MASTERDM)   /* Domain parent is MASTERDM      */
DOMREC    DOMAIN(DM200)         /* Domain name for 2nd domain     */
          DOMMNGR(F200)         /* Yarmouth FTA - domain manager  */
          DOMPARENT(DM100)      /* Domain parent is DM100         */
Tip: If you let your system automation (for example, System Automation/390)
handle the switch to the backup domain manager, you can automate the entire
process:
System automation replaces the EQQPARM members.
System automation initiates the switch manager command remotely on the
fault tolerant workstation.
System automation resets the definitions when the original domain
manager is ready to be activated.
Figure 4-32 Status for workstations after the TWS for z/OS replan program
From Figure 4-32 on page 245 it can be verified that F101 is still MANAGER (see
CPU Type column) for the DM100 domain (see the Domain column). The CPU
type for F100 is FTA.
The OPCMASTER workstation has the status unlinked (see the Link Status
column in Figure 4-32 on page 245). This status is correct, since we are using
the JSC instance pointing to the F100 workstation. The OPCMASTER has a
linked status on F101, as expected.
Note: To reactivate F100 as a domain manager, simply do a switch manager
to F100 or a Symphony redistribute. F100 will also be reinstated as domain
manager when you run the extend or replan programs.
Remember to change the DOMREC definitions before the plan programs are
executed or the Symphony file is redistributed.
[Figure: Before failover, the primary node runs the TWS master (/tivoli/TWS/TWS-A/) and the standby node runs an FTA (/tivoli/TWS/TWS-B/); after failover, the standby node runs both instances from the shared disk array]
Figure 4-33 HACMP failover of a TWS master to a node also running an FTA
AIX       /usr/lib/libatrc.a
Solaris   /usr/lib/libatrc.so
HP/UX     /usr/lib/libatrc.sl
For inter-workstation communications, TWS obtains the node and port of the
target system (the domain manager or the master domain manager) from the
workstation definition (in an end-to-end environment, the CPUNODE and
CPUTCPIP definitions in the CPUREC server initialization statement). Those
settings must continue to work after the failover to the backup node. For listening
on the local socket for incoming connection requests, the port for the netman
process is defined in the <twshome>/localopts file.
If the HACMP standby node does not normally have a TWS instance running on
it, no further configuration is necessary. If the HACMP standby system has its
own active instance of TWS, it is necessary to ensure that the TWS home
directory, user name, user ID, and netman port number are unique for each TWS
engine. When the standby node takes over the TWS engine from the primary
node, the standby node will be running two separate TWS instances. You must
also ensure that the /usr/unison/components file lists each instance and that they
are uniquely named. Configuration like this is not unusual if the HACMP standby
machine is running the TWS workload while it is in standby mode. In
Example 4-26 you can see how to define two TWS workstation engines.
Example 4-26 Definitions for multiple TWS instances
Definition for the adopting TWS workstation engine:
  Userid (account):     tws-a
  Home directory:       /tivoli/TWS/A
  Netman port address:  31111 (defined in the localopts file)
Definition for the standby TWS workstation engine:
  Userid (account):     tws-b
  Home directory:       /tivoli/TWS/B
  Netman port address:  31112 (defined in the localopts file)
To get an idea of how this will work, look once more at Figure 4-33 on page 247.
For more detailed information about having multiple TWS instances (engines)
running on a single computer, refer to Section 3.5.1, Installing multiple instances
of TWS on one machine on page 142.
2. Ensure that the DNS server or hosts files are updated accordingly.
The DNS server or hosts files should be updated to resolve the workstation
name accordingly. In other words, after a failover occurs, the host name
should resolve to the standby node, not the primary (failed) node.
3. Ensure that the correct directories are mirrored or that a duplicate installation
of TWS is ready for failover, or both. There are two directories where the TWS
binaries and files reside: the <twshome> and <twshome>/../unison
directories.
Note: It is possible to configure HACMP so that the standby node takes over
the IP address and even the MAC address of the failing node. This is not usually
necessary. As long as the TWS workstation definitions contain host names,
and the DNS is updated to resolve to the standby node's IP address, the TWS
workstations will be able to reach the standby node using the host name.
HACMP can be configured to stop TWS on the failing system before switching
over to the standby node. It is a very good idea to implement this, because there
can be problems starting TWS back up if it is not shut down correctly.
Example 4-27 shows how to implement this.
Example 4-27 Sample HACMP start and stop functions for TWS
function customer_defined_start_cmds
{
print "$(date '+%b %e %X') - Node \"$(hostname)\": Starting TWS"
su twsuser -c "<twshome>/bin/conman 'start&link @;noask'"
print "$(date '+%b %e %X') - Node \"$(hostname)\": Starting TWS Completed"
}
function customer_defined_halt_cmds
{
print "$(date '+%b %e %X') - Node \"$(hostname)\": Halting TWS"
su twsuser -c "<twshome>/bin/conman 'unlink @;noask&shut;wait'"
print "$(date '+%b %e %X') - Node \"$(hostname)\": Halting TWS Completed"
}
Remember that twsuser should be replaced with the correct user ID (tws-a, in
our example; see Example 4-26 on page 248).
You should consider starting a timer when doing the shutdown of TWS
(conman shut) and then issuing kill commands for the TWS processes after this
timer has expired. If you have the wait parameter as part of the conman shut
command, for example, conman 'shut;wait', the command will not complete (or
return) before the TWS shutdown is complete. If there are problems that interrupt
the shutdown process, the HACMP shutdown process will be halted. Add a timer
in HACMP that expires, for example, after two minutes; if the TWS processes
have not completed by then, issue a kill command for the remaining TWS
processes.
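A minimal sketch of such a halt function with a two-minute limit is shown below.
The twsuser and <twshome> placeholders are the same as in Example 4-27, and
the timeout value and the kill logic are assumptions that must be adapted to your
environment:
function customer_defined_halt_cmds
{
  print "$(date '+%b %e %X') - Node \"$(hostname)\": Halting TWS"
  # Run the shutdown in the background so that a timeout can be enforced
  su twsuser -c "<twshome>/bin/conman 'unlink @;noask&shut;wait'" &
  shutpid=$!
  waited=0
  # Give the shutdown up to 120 seconds to complete
  while kill -0 $shutpid 2>/dev/null && [ $waited -lt 120 ]; do
    sleep 5
    waited=$((waited + 5))
  done
  # If conman has not returned, kill any remaining TWS processes
  if kill -0 $shutpid 2>/dev/null; then
    print "$(date '+%b %e %X') - Shutdown timed out, killing remaining TWS processes"
    ps -u twsuser -o pid= | xargs kill -9 2>/dev/null
  fi
  print "$(date '+%b %e %X') - Node \"$(hostname)\": Halting TWS Completed"
}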
You are now ready to configure the HACMP solution in your environment.
If you are using parameters locally on the TWS agent and do not have a
central repository for the parameters, you should consider making daily
backups.
Are you using specific security definitions on the TWS agent?
If you are using specific security file definitions locally on the TWS agent and
do not have a central repository for the security file definitions, you should
consider making daily backups.
Another approach is to make a backup of the TWS agent files, at least before
making any changes to the files. The changes can, for example, be updates to
configuration parameters or a patch update of the TWS agent.
If the TWS engine is running as a TWS master domain manager, you should
make at least daily backups. For a TWS master domain manager it is also good
practice to create text copies of the database files, using the composer create
command for all database files. The database files can then be recreated from
the text copies using the composer add and composer replace commands.
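A minimal sketch of such a backup job, writing dated text copies of all the
databases, could look like the following (the backup directory is only an example,
and the script must run as the TWS user on the master domain manager):
#!/bin/ksh
# Create text copies of the TWS master databases for backup purposes
d=`date +%Y%m%d`
bkdir=/tivoli/TWS/backup/$d
mkdir -p $bkdir
composer create $bkdir/calendars.txt    from CALENDARS
composer create $bkdir/workstations.txt from "CPU=@"
composer create $bkdir/jobdef.txt       from "JOBS=@#@"
composer create $bkdir/jobstream.txt    from "SCHED=@#@"
composer create $bkdir/parameter.txt    from PARMS
composer create $bkdir/resources.txt    from RESOURCES
composer create $bkdir/prompts.txt      from PROMPTS
composer create $bkdir/users.txt        from "USERS=@#@"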
Each job that is run under TWS's control creates a log file in the TWS stdlist
directory. These log files are created by the TWS job manager process (jobman)
and will remain there until deleted by the system administrator.
The easiest way to control the growth of these directories is to decide how long
the log files are needed and schedule a job under TWS for z/OS's (or TWS's)
control that removes any file older than the given number of days. The TWS
rmstdlist command can perform the deletion of stdlist files. The rmstdlist
command removes or displays files in the stdlist directory based on the age of
the files.
The rmstdlist command syntax is:
rmstdlist [-v | -u]
rmstdlist [-p] [age]
-u   Displays command usage information and exits.
-p   Displays the names of the qualifying stdlist directories without removing them.
age  The minimum age, in days, of the stdlist directories to be displayed or removed.
We suggest that you run the rmstdlist command on a daily basis on all your
fault tolerant agents. The rmstdlist command can be defined in a job in a job
stream and scheduled by TWS for z/OS (or TWS). You may need to save a
backup copy of the stdlist files, for example, for internal auditing or due to
company policies. If this is the case, a backup job can be scheduled to run just
before the rmstdlist job.
The job (or more precisely the script) with the rmstdlist command can be coded
in different ways. If you are using TWS parameters to specify the age of your
stdlist files, it will be easy to change this age later on if required.
Example 4-29 shows an example of a shell script where we use the rmstdlist
command in combination with TWS parameters.
Example 4-29 Shell script using the TWS rmstdlist command in combination with TWS
parameters
#!/bin/sh
#
# A TWS for UNIX job to clean up TWS stdlist files
a=`parms stdlage`; export a
# First list the stdlist directories older than $a days, then remove them
# (<twshome> is a placeholder for the TWS home directory)
<twshome>/bin/rmstdlist -p $a
<twshome>/bin/rmstdlist $a
Notice from the example that we are calling the rmstdlist command twice, first
with the -p flag and then without the flag. The first call of rmstdlist lists all the
stdlist directories that will be deleted by the second call.
The age of the stdlist directories is specified using variable a. Variable a has
been assigned the value from the stdlage parameter. The stdlage parameter is
defined as five in the parameters database, meaning that stdlist files older than
five days will be removed. The stdlage parameter was created on the fault
tolerant agent using the FPARMUPD script described in Section 4.4.6, TWS for
z/OS JCL variables in connection with TWS parameters on page 211.
The script shown in Example 4-29 on page 254 can be defined in a TWS for
z/OS FTW job, using the JOBREC definition shown in Example 4-30.
Example 4-30 JOBREC definition for the cleanup script
/* Definition for F100J012 job to be executed on F100 machine           */
/* The cleanup.sh script calls the TWS rmstdlist command to clean up in */
/* the stdlist directory                                                 */
JOBREC JOBSCR('/tivoli/TWS/scripts/cleanup.sh') JOBUSR(tws-e)
The stdlist output for the job is shown in Figure 4-34. The job was run from a
TWS for z/OS engine. Notice that due to the rmstdlist command with the -p flag,
we get a list of the stdlist directories (message AWS222610502) before they are
deleted.
Figure 4-34 Output from the cleanup.sh job when run from TWS for z/OS
The auditing options are enabled by two entries in the globalopts file in the TWS
or TWS for z/OS server:
plan audit level = 0|1
database audit level = 0|1
If either of these options is set to the value of 1, auditing is enabled on the
fault tolerant agent. The auditing logs are created in the following directories:
<TWShome>/audit/plan
<TWShome>/audit/database
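For example, to switch on both types of auditing, the two globalopts entries would
simply read:
plan audit level = 1
database audit level = 1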
If the auditing function is enabled in TWS, files will be added to the audit
directories every day. Modifications to the TWS database will be added to the
database directory:
<TWShome>/audit/database/date (where date is in ccyymmdd format)
Modification to the TWS plan (the Symphony) will be added to the plan directory:
<TWShome>/audit/plan/date (where date is in ccyymmdd format)
We suggest that you regularly clean out the audit database and plan directories,
for example, on a daily basis. The cleanup of the directories can be defined in a
job in a job stream and scheduled by TWS for z/OS (or TWS). You may need to
save a backup copy of the audit files, for example, for internal auditing or due to
company policies. If this is the case, a backup job can be scheduled to run just
before the cleanup job.
The job (or more precisely the script) doing the cleanup can be coded in different
ways. If you are using TWS parameters to specify the age of your audit files, it
will be easy to change this age later on if required.
Example 4-31 on page 257 shows an example of a shell script where we use the
UNIX find command in combination with TWS parameters.
Example 4-31 Shell script to clean up files in the audit directory based on age
#!/bin/ksh
#
# A TWS for UNIX job to clean up the TWS audit directories
a=`parms audage`; export a
# List the audit files older than $a days, then delete them
# (<twshome> is a placeholder for the TWS home directory)
find <twshome>/audit/database -type f -mtime +$a -print
find <twshome>/audit/database -type f -mtime +$a -exec rm -f {} \;
find <twshome>/audit/plan -type f -mtime +$a -print
find <twshome>/audit/plan -type f -mtime +$a -exec rm -f {} \;
Notice from Example 4-31 that we first issue the find commands with the -print
option to list the files that will be deleted. The deletion is then done by the find
commands with the -exec option.
The age of the audit files is specified using variable a. Variable a has been
assigned the value from the audage parameter. The audage parameter is
defined as 25 in the parameters database, meaning that files older than 25 days
will be removed from the audit directory. The audage parameter was created on
the fault tolerant agent using the FPARMUPD script described in Section 4.4.6,
TWS for z/OS JCL variables in connection with TWS parameters on page 211.
The job (or more precisely the script) doing the cleanup can be coded in different
ways. If you are using TWS parameters to specify the age of your schedlog files,
it will be easy to change this age later on if required.
Example 4-32 shows a sample of a shell script where we use the UNIX find
command in combination with TWS parameters.
Example 4-32 Shell script to clean out files in the schedlog directory
#!/bin/ksh
#
# A TWS for UNIX job to clean up the TWS schedlog directory
a=`parms schdage`; export a
# List the schedlog files older than $a days, then delete them
# (<twshome> is a placeholder for the TWS home directory)
find <twshome>/schedlog -type f -mtime +$a -print
find <twshome>/schedlog -type f -mtime +$a -exec rm -f {} \;
Notice from Example 4-32 on page 257 that we first issue the find command
with the -print option to list the files that will be deleted. The deletion is then
done by the find command with the -exec option.
The age of the schedlog files is specified using variable a. Variable a has been
assigned the value from the schdage parameter. The schdage parameter is
defined as 30 in the parameters database, meaning that schedlog files older than
30 days will be removed.
All changes to scripts are done in this production repository. On a daily basis, for
example, just before the plan is extended, the master scripts in the central
repository are distributed to the fault tolerant agents. The daily distribution can be
handled by a TWS scheduled job. This job can be defined as predecessor to the
plan extend job.
This approach can be made even more advanced, for example, by using a
software distribution application to handle the distribution of the scripts. This way
the software distribution application can help keep track of different versions of
the same script. If you encounter a problem with a changed script in a production
shift, you can simply ask the software distribution application to redistribute a
previous version of the same script and then rerun the job.
Security files
The TWS security file is used to protect access to TWS database and plan
objects. On every TWS engine (domain manager, fault tolerant agent, etc.) you
can issue conman commands for the plan and composer commands for the
database. TWS security files are used to ensure that the right people have the
right access to objects in TWS.
Security files can be created or modified on every local TWS workstation and
they can be different from TWS workstation to TWS workstation.
We suggest having a common security strategy for all TWS workstations in your
TWS network (and end-to-end network). This way the security file can be placed
centrally. Changes are only done in the central security file. If the security file has
been changed it is simply distributed to all TWS workstations in your TWS
network.
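One way to implement this, sketched below, is to maintain the central security
definition as a text file created with the dumpsec command, distribute that text file
with your preferred file-transfer mechanism, and compile it locally on each
workstation with the makesec command (the file name and path are examples
only):
# On the workstation holding the central security definition:
dumpsec > /tivoli/TWS/Security.txt
# ... distribute Security.txt to every TWS workstation ...
# On each receiving workstation, compile the distributed text file:
makesec /tivoli/TWS/Security.txt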
2. Create a text copy of the updated parameter database using the TWS
composer create command:
composer create parm.txt from parms
3. Distribute the text copy of the parameter database to all your TWS
workstations.
4. Restore the received text copy of the parameter database on the local TWS
workstation using the TWS composer replace command:
composer replace parm.txt
These steps can be handled by one job in TWS. This job could, for example, be
scheduled just before the plan is extended.
Note that this applies for both TWS networks managed by a TWS master domain
manager and by a TWS for z/OS server.
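A sketch of a job that combines steps 2 through 4 is shown below. It assumes a
UNIX master, rcp and rsh as the transfer mechanism, the tws user ID, an example
path for composer on the agents, and two example target workstations (eastham
and yarmouth); substitute your own copy method, user IDs, paths, and
workstation list:
#!/bin/ksh
# Create a text copy of the parameter database and push it to the FTAs
composer create /tmp/parm.txt from parms
for host in eastham yarmouth; do
  rcp /tmp/parm.txt tws@$host:/tmp/parm.txt
  rsh -l tws $host "/tivoli/TWS/bin/composer replace /tmp/parm.txt"
done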
Chapter 5.
As the JSC starts, you are asked to enter the user name, password, and name of
the host machine on which the Tivoli Management Framework (TMF) is running,
as shown in Figure 5-2 on page 266.
Note: If you encounter problems while logging in, consult Section 3.7.3,
Creating TMF Administrators for Tivoli Workload Scheduler on page 148.
As the JSC starts, it briefly displays a window with the JSC level, as shown in
Figure 5-3 on page 267.
Note: Do not close the MS-DOS window (shown in Figure 5-2). Closing the
MS-DOS window terminates the JSC session; reduce the risk of doing this by
minimizing the window.
We suggest that you create a shortcut to the JSC start program and place this
shortcut on the task bar of your Windows workstation. This way you can easily
start the JSC simply by clicking the icon on the taskbar.
Tip: You can change the properties for the shortcut created in your task bar,
by simply right clicking the icon and then selecting Properties (Figure 5-4 on
page 268). If you select Minimized in the Run field, the MS-DOS window will
not show up on your desktop. This way, you will only have an entry in the
Windows task bar and not the MS-DOS window.
Figure 5-3 Job Scheduling Console release level notice (Windows desktop)
The first time the JSC is started (new user on a new machine), the window shown
in Figure 5-5 is displayed.
Figure 5-5 Initial information to user first time the JSC is started
When you click the OK button (Figure 5-5 on page 268), you will see the window
in Figure 5-6.
The Open Location window shown in Figure 5-6 lets you copy a pre-customized
user profile, which could contain a list view appropriate to the user. You may
consider it worthwhile to load a pre-customized standard profile file onto users'
machines when you install the JSC. See The JSC preference file on page 270
for a detailed description.
If you click Cancel (Figure 5-6), the JSC will start, as in Figure 5-7.
This Welcome pop-up window in the JSC (see Figure 5-7 on page 269) gives you
the option to read the online tutorial. Unless you turn the future display of this
window off by selecting the Don't show this window again box, it will be shown
every time you start the JSC. If you have ticked Don't show this window
again, you can still get the Welcome pop-up window by opening the Help
pull-down menu and then clicking the Welcome... entry. The Welcome pop-up
window will then be shown again.
You are now ready to start working with the JSC.
The USER is the operating system logon on the TMR server or managed
node you are connecting to.
NODE is the name of the system running the connector, followed by a dot (.)
sign on Windows machines or an underscore sign (_) on UNIX machines.
Every time you log onto a different host machine, a new user directory is added.
And every time you close the JSC, a GlobalPreferences.ser that matches your
connection is created or updated in the user directory.
Note: A different GlobalPreferences.ser file exists in
JSConsole\dat\.tmeconsole\TivoliConsole on Windows or in
<JSConsolehome>/.tmeconsole/TivoliConsole on UNIX. This file contains
preferences that affect JSC presentation and should not be propagated to
JSC users.
The JSC client uses Windows regional settings to display dates and times.
Change the regional settings from the Windows control panel according to your
country. After rebooting Windows, dates and times will be shown in the selected
format.
The left side of the window in Figure 5-8 on page 271 is a list of all the connector
instances that are installed in the TMR you are logged into. We have defined five
connector instances:
TWSC: Connector instance pointing to a TWS for z/OS controller.
TWSC-F100-Eastham: Connector instance pointing to a TWS for z/OS fault
tolerant agent or workstation (symbolized with a small screen or PC figure).
This fault tolerant agent is known as the F100 workstation in TWS for z/OS
and is running on node eastham (the hostname for the machine).
TWSC-F200-Yarmouth: Connector instance pointing to a TWS for z/OS fault
tolerant agent or workstation (symbolized with a small screen or PC figure).
This fault tolerant agent is known as the F200 workstation in TWS for z/OS
and is running on node yarmouth (the hostname for the machine).
Yarmouth-A: Connector instance pointing to a TWS master domain manager
(the TWS controller) running on yarmouth.
Yarmouth-B: TWS fault tolerant agent running on yarmouth.
Common Default Plan Lists: Predefined JSC group with common plan lists.
The Common Default Plan Lists are described in detail in Common Default
Plan Lists in JSC on page 322.
Note: Use naming conventions when creating the connector instances; this
makes it easier to relate the instance name to its purpose. From our example
in Figure 5-8 on page 271, we know the exact type of engine (TWS or TWS for
z/OS) the instance points to and the function of the engine (master or agent).
After installing the JSC on your machine, you will have access to some
predefined lists called Default Database Lists and Default Plan Lists. The entries
in the default lists vary, depending on which engine (TWS or TWS for z/OS) the
list is related to.
In Figure 5-9 on page 273, you will see the JSC Default Database Lists and
Default Plan Lists for a TWS for z/OS instance (TWSC controller in our example).
Figure 5-9 JSC Default Database and Plan Lists for TWS for z/OS instance
In Figure 5-10 on page 274 you will see the JSC Default Database Lists and
Default Plan Lists for a TWS instance (Yarmouth-A in our example).
Note: The Default Database Lists and Default Plan Lists will be exactly the
same for a JSC instance pointing to a fault tolerant agent. Since there are not
any databases on a fault tolerant agent, you should consider removing the
Default Database Lists from these instances. How to tailor user preferences
for JSC users is described in The JSC preference file on page 270.
Figure 5-10 JSC Default Database and Default Plan Lists for a TWS instance
Using the default JSC lists (or views) for database objects and plan instances
could cause long response times in the JSC. The default JSC list simply gives
you a list with all database objects in the database or plan instances in the plan.
If you are using a default database list for job streams and the job stream
database contains, for example, 5,000 job streams, loading the data in JSC and
preparing this data to be shown in the JSC will take a long time. Using dedicated
lists, created with appropriate filters, for example, to show only job streams
starting with PROD, will improve the JSC performance considerably.
You will probably also need some customized lists in JSC, for example, to show
all jobs in error, all jobs planned to run on a dedicated workstation, or all jobs
waiting for a resource.
Figure 5-11 How to create new lists in the Job Scheduling Console
Figure 5-12 Submit TWS for z/OS job stream to current plan in TWS for z/OS
2. After clicking Submit Job Stream in Figure 5-12 on page 276, you will get a
new pop-up window, shown in Figure 5-13.
3. In the pop-up window (Figure 5-13), you can type the name of the job stream,
the start date and time (same as TWS for z/OS input arrival time), and the
deadline date and time.
Our task is to submit a job stream called TWSCDISTPARJUPD. The job
stream should be put on hold (no jobs in the job stream must start running
before we release them). We will use the predefined start date and time as
well as the deadline date and time for the job stream (taken from the run-cycle
specification for the job stream).
In the window as shown in Figure 5-13, we can type the name of the job
stream or we can use the Find button (the grey box with three dots) to let the
JSC search for the job stream.
4. We click the Find button and get the pop-up window shown in Figure 5-14 on
page 278.
5. Figure 5-14 on page 278 shows the result of a search after we have filled the
Job Stream Name field with TWSCD* and clicked the Start button.
Figure 5-14 Search result for job streams starting with TWSCD*
6. We highlight our job stream TWSCDISTPARJUPD (Figure 5-14) and click the OK
button. The result is shown in Figure 5-15.
Figure 5-15 JSC Submit Job Stream window filled with job stream information
Notice that JSC has filled the Start and Deadline fields for us. The information
was taken from the run-cycle definition for the job stream in the job stream
database.
7. Remember to change the Start Date and Time (Figure 5-14 on page 278) if
the job stream is already in the current plan with this time.
Tip: It is a good practice to predefine your ad hoc job streams with a dummy
run-cycle that does not schedule the job stream, but contains start date and
time and deadline date and time according to your installation standard. This
way it is very simple to add the ad hoc job stream to the current plan. You do
not have to specify any start or deadline information. TWS for z/OS will read
this information directly from the job stream database, and then conform to
your standards. Dependency resolution will be handled correctly.
8. Since the job stream should be submitted in hold, we click the Submit & Hold
button (Figure 5-14 on page 278) and then the OK button. The result can be
seen in Figure 5-16.
Figure 5-16 Jobs in TWSCDISTPARJUPD in the TWS for z/OS current plan
Note from Figure 5-16 that all jobs in the job stream are in Held status. They will
not start before they are released with a release command.
Note: It is not possible to submit a job stream with the hold option in the legacy
TWS for z/OS ISPF panels.
9. To release the jobs (when it is OK to run the job), you simply right click the job
and click the release entry in the pop-up window. If all jobs are going to be
released at once, you can select all jobs in the list (by selecting the first job
with the mouse, pressing and holding the left button, and dragging the mouse
through all entries in the list). Then right click, and click Release (see
Figure 5-17 on page 280).
Figure 5-17 Release all held jobs in JSC with just one release command
Note: If you click the Submit & Edit button (Figure 5-15 on page 278), the
JSC will open a Job Stream Instance Editor window (see Figure 5-18). In this
window you will be able to edit the job stream and jobs in it quickly. This can,
for example, be to remove some of the jobs in the job stream, set a job in the
job stream to complete status, or change time specifications for a job in the job
stream.
Figure 5-18 JSC Job Stream Instance Editor - Adding job stream to current plan
Figure 5-19 Submit job stream from JSC database job stream list view
1. Right click the job stream that you want to submit. JSC will then show a
pop-up window.
2. Then click the Submit... entry, and JSC will show a new window, as in
Figure 5-20 on page 282.
Note: When submitting the job stream directly from the TWS for z/OS job
stream database, JSC has filled all required input fields in the Submit Job
Stream... window (Figure 5-20). This is because JSC read this information
from the database.
Remember that only the Start and Deadline fields will be filled if the job stream
is defined with a run-cycle in the TWS for z/OS database.
5.4.2 JSC text editor to display and modify JCLs in current plan
From the JSC, it is possible to display and modify JCLs for a z/OS job in the TWS
for z/OS current plan. The JSC editor provides import and export functions so
that users can store a JCL as a template and then reuse it for other job JCLs. It
also includes functions to copy, cut, and paste JCL text. The JCL editor displays
information about the current JCL, such as the current row and column, the job
name, the workstation name, and who last updated it.
Let us now see how it works:
1. From a job instance view, right click the job where you want to edit JCL. Then
you will get a pop-up window where you can select Edit JCL... (see
Figure 5-21 on page 283).
Figure 5-21 Edit JCL for a z/OS job in the Job Scheduling Console
Our task is now to copy the JCL from the Waiting job with Job ID 10
(operation number 10) in the TWSCDISTPARJUPD job stream to the Held job
with Job ID 10 in the TWSCDISTPARJUPD job stream (see Figure 5-21).
2. First we right click the waiting job with Job ID 10 in the TWSCDISTPARJUPD
job stream and then we click Edit JCL..., as shown in Figure 5-21. The result
is an Edit JCL window, as shown in Figure 5-22 on page 284.
In the Edit JCL window, you can see that the JCL is read directly from the
TWS for z/OS job library (the status is not from JS). You can also see that the
cursor was placed on row 15 and column 67 (see the upper right corner).
It is now possible to start editing the JCL and correct any JCL errors if there
are any. Our task is to copy this JCL to another job definition.
To perform this task we have two possibilities:
a. We can click the Actions bar at the top of the window (see Figure 5-22).
This will show a pull-down menu where we can choose Select All to select
all JCL lines. Then we click the Edit bar. This will show a pull-down menu
where we can choose to delete, copy, or paste. To copy, we select the
Copy option.
Note: The Delete, Copy, and Paste options are also available from the
three icons to the right in the Edit JCL window (see Figure 5-22 on
page 284).
When the JCL is copied we can open a new Edit JCL window for the job
where the JCL should be copied to and then paste the JCL there.
b. We can click the File bar at the top of the window (see Figure 5-22
on page 284). This will show a pull-down menu where we can choose
Export or Import. The Export function can be used to save a copy of the
JCL in a file stored locally on your machine. Import can be used to read a
locally stored copy of the JCL from a file on your machine.
Note: The Export and Import options are also available from the two icons
to the left in the Edit JCL window (see Figure 5-22 on page 284).
3. We will use the Export and Import options to accomplish our task. We select
File and then Export from the pull-down menu (see Figure 5-23).
Figure 5-23 The Job Scheduling Console File Import/Export pull-down menu
4. JSC shows a new Save window (Figure 5-24 on page 286). In this window we
specify the file name, TWSCIEBG_copy_jcl for the file to be saved locally on the
machine running the JSC. Then we click Save to save the file.
Now we have a local copy of the TWS for z/OS JCL in a PC file.
5. The next step is to go back to the JSC window to see the list of jobs. We do
this by clicking Cancel in the Edit JCL window shown in Figure 5-22 on
page 284.
Then we return to the window shown in Figure 5-21 on page 283.
6. Since we have to import the JCL into the Held job with Job ID 10 in the
TWSCDISTPARJUPD job stream (see Figure 5-21 on page 283), we simply
right click this job, and select Edit JCL (Figure 5-25 on page 287).
Figure 5-25 Select Edit JCL for the job where the JCL should be imported
7. In the new Edit JCL window, we select all JCLs (using the Actions and
Select All pull-down entries) and delete the selected JCLs (using the Edit
and Delete pull-down entries). The result is a window with no JCL, as shown
in Figure 5-26.
Figure 5-26 JSC Edit JCL window after deleting records of all JCLs
8. Then we select File and Import from the pull-down menu. This shows a new
Open window, where we can select the local file with the JCL we will import
(see Figure 5-27).
9. We click the file we want to import and the file name is placed in the File name
field in the Open window.
10. Finally, we click the Open button. The result can be seen in Figure 5-27.
The Edit JCL window with the imported JCL is shown in Figure 5-28 on
page 289.
Figure 5-28 The Edit JCL window with the imported JCL
5.4.3 JSC read-only text editors for job logs and operator
instructions
The JSC read-only text editors can be used to view:
Job logs produced by job instance runs
The Operator Instructions (OI) associated with a job instance.
Figure 5-29 The Job Scheduling Console Browse Job Log... entry
If the TWS for z/OS controller has a copy of the job log in its JCL repository
dataset, the job log will be shown right away in the JSC. If the TWS for z/OS
controller does not have a copy of the job log, it will request a copy of the job log.
This copy is requested in two ways:
If the job log is for a z/OS job, the controller asks the TWS for z/OS data store
to send a copy of the job log. The JSC user is informed, via a pop-up window
(message GJS1091E), that the controller has asked for a copy of the job log
(see Figure 5-30 on page 291).
If the job log is for a fault tolerant workstation job, the TWS for z/OS controller
asks the associated fault tolerant agent to send a copy of the job log. The
JSC user is informed, via a pop-up window (message GJS1091E), that the
controller has asked for a copy of the job log (see Figure 5-30 on page 291).
The JSC pop-up window with message GJS1091E is shown in Figure 5-30 on
page 291.
Figure 5-30 JSC pop-up window when the controller does not have a copy of the job log
If you get the message GJS1091E, simply wait a few seconds and try to
browse the job log again (as shown in Figure 5-29 on page 290).
The JSC Browse Job Log window, shown as a result of the actions performed in
Figure 5-29 on page 290, is shown in Figure 5-31 on page 292.
Note: In the JSC Browse Job Log window you can:
Mark some of the text and copy the marked text to, for example, an e-mail.
The copy can be done simply by right clicking after marking the text and
then selecting Copy.
Save the job log to a local file on your workstation. The job log can be saved
by clicking File and then selecting Export in the pull-down menu (as
shown in Figure 5-24 on page 286).
Figure 5-31 The Job Scheduling Console Browse Job Log window
Figure 5-32 The Job Scheduling Console Browse Operator Instruction entry
Notice that the OIs shown in Figure 5-33 can be saved in a file locally on your
workstation or copied to, for example, an e-mail.
Restart a job instance with the option to choose which step must be first,
which must be last, and which must be included or not (requires TWS for
z/OS data store). Furthermore, you can work with expanded JCLs. That is,
JCLs generated from the job log.
Rerun a job instance that will entirely execute all the steps of a job
instance.
Clean the list of datasets used by the selected job instance.
Display the list of datasets cleaned by a previous clean up action.
Rerun an entire job stream instance.
This function opens a job stream instance editor with a set of reduced
functionalities, where you can select the job instance that will be the starting
point of the rerun. When the starting point is selected, an impact list is
displayed that shows all the job instances that will be impacted by this
action. For every job instance within the current job stream instance, you can
perform a clean-up action and display its results.
Note: To use the Restart and Cleanup function, the TWS for z/OS data store
function must be activated because TWS for z/OS uses job log information
when the restart JCL is built.
2. After clicking the Restart and Cleanup selection, JSC shows the Restart and
Cleanup window, as in Figure 5-35.
Figure 5-35 The Job Scheduling Console Restart and Cleanup window
3. In this window there are several types of actions. We will show the Step
Restart action, so we select the Step Restart button and click OK. The result
is shown in Figure 5-36.
In the Step Restart panel you can work with the different steps in the job. JSC
shows information like program name, completion code, and step status
(extracted from the job log if the job is executed). Furthermore, TWS for z/OS
has supplied step restart information for each job step (Best Restart Step, Not
Restartable).
4. If you double click an entry in the Action column (Figure 5-36), you will be able
to specify if:
The step should be the restart step.
The step should be included (executed) when the job is rerun.
The step should be excluded (not executed) when the job is rerun.
The step is the end step (steps after this step will not be executed).
Furthermore, you can specify what your next step should be using the Next
Step box at the bottom of the window (Figure 5-36). After performing this you
will get the following next-step actions:
Dataset List: If this entry is selected, JSC will open a window with a list of
datasets that will be deleted when the job is rerun (these datasets are
within the step restart range). You can, for example, remove datasets from
this list. It is possible to go to Edit JCL from this Dataset List window.
Edit JCL: If this entry is selected, JSC will open the JCL edit window,
where you can edit the JCL. This is the same window as shown in
Figure 5-28 on page 289.
Tip: Clicking OK in this Edit JCL window will start the job.
296
Figure 5-37 Rerun selection for job stream in Job Scheduling Console
2. JSC will then show the Job Stream Instance Editor window (see Figure 5-38
on page 298).
Note: JSC uses the same Job Stream Instance Editor when editing job
streams in the database and job stream instances in the plan. So you have the
same look and feel, but there is one difference: Using the Job Stream Instance
Editor for plan instances, the job status is depicted in the upper left corner of
the icon representing the job (see Figure 5-38).
297
3. Our task is to rerun the job stream from the TWSCCPET-20 job instance. So
we right click this job, in the Job Stream Instance Editor. JSC then shows a
new pop-up window (see Figure 5-39 on page 299), where we can choose
between Start From, Start Cleanup, or Display Cleanup Result.
298
Figure 5-39 The JSC pop-up window with rerun actions for a job in a job stream
These jobs are impacted because TWS for z/OS will change the successor
job status to waiting before rerunning our job. Our Start From job is waiting
(as can be seen from the little hourglass in the upper left corner of the job
icon in Figure 5-39). The job instances shown in the Rerun Impact List
window are all the predecessors to the job we are rerunning (see Figure 5-40
on page 300).
When TWS for z/OS performs the rerun, all these predecessors will be set to
complete status.
Once you have found the job stream instance you wish to edit, double click it.
The Job Stream Instance Editor will appear, as in Figure 5-42.
As you can see, many of the buttons are disabled. The job stream instance editor
does not give you all the capabilities that you will have when editing a job stream
definition. The following operations are possible in the Job Stream Instance
Editor:
Change the job stream properties.
Change the predecessor dependencies (links) between the jobs in the job
stream and predecessor dependencies on external jobs or job streams.
Change the properties of any of the jobs within the job stream.
301
It is also possible to submit new jobs into an existing job stream instance. To do
this, simply select the job stream instance, right click the selected job stream
instance, and choose Submit -> Job... from the pop-up menu, as in Figure 5-43.
You will then have to type in or find the job you wish to submit, as shown in
Figure 5-44.
Figure 5-44 Specifying job that is to be submitted into job stream instance
302
Because no two job stream instances on the same workstation may have the
same name, it is necessary to specify an alias that will be used as the name of
the resubmitted job stream, as in Figure 5-46.
The job stream will be resubmitted into the plan. Reload the job stream instance
list, and you should see the resubmitted job stream appear in the list, as shown
in Figure 5-47 on page 304.
303
Figure 5-47 The job stream has been resubmitted, with a new name
304
In the above example, the Job Stream Editor is in Graph mode. You can tell this
because the Graph mode button is selected (the left-most button in the group of
three buttons on the right side of the tool bar).
Graph mode is the mode in which jobs and dependencies are added to the job
stream. In this example, there is only one job in the job stream. The second
button in the group of three mode buttons is the Timeline mode button. Timeline
mode is useful for seeing when the jobs in a job stream will run. The last button
(the one that looks like a calendar) is the Run Cycle mode button. If you click this
button, the Job Stream Editor switches to run cycle mode, and should look
something like Figure 5-49 on page 306.
305
Figure 5-49 The Job Stream Editor window in run cycle mode
Notice that the Run Cycle button is now selected and the Job Stream Editor
window looks completely different from before. There are also three buttons in
the toolbar that appear only when the Job Stream Editor is in Run Cycle mode.
With these buttons, you can add simple, weekly, or calendar run cycles.
Run cycles can now be inclusive or exclusive. If a run cycle is inclusive, the job
stream will run on the days specified by that run cycle. If a run cycle is exclusive,
the job stream will not run on the days specified by the run cycle, even if some of
these days are also specified in an inclusive run cycle. This way, multiple run
cycles, some inclusive and others exclusive, can be combined. The job stream
will run only on days that are specified in an inclusive run cycle and not specified
in an exclusive run cycle. For example, an inclusive weekly run cycle that selects
every Friday, combined with an exclusive calendar run cycle based on the
holidays calendar, makes the job stream run on every Friday that is not a holiday.
306
In Figure 5-50 you can see the General page of the Simple Run Cycle window.
Here you will find the option to make the run cycle inclusive or exclusive. You can
also see the option to select the freedays rule. The General page looks the same
for all types of run cycles.
Figure 5-50 Simple Run Cycle window with freedays rule menu displayed
The rule determines what day, if any, will be selected by the run cycle when the
current day is a freeday. By default, freedays include Saturday, Sunday, and the
days in the freedays calendar. The default freedays calendar is the holidays
calendar. The freedays calendar, and whether Saturday or Sunday is considered
a freeday, can be specified on a per-job stream basis in the General page of the
Job Stream Properties window under the Freedays Calendar section. See
Specifying freedays for a job stream on page 312 for instructions on how to do
this.
307
Figure 5-51 shows the Rule page of the Simple Run Cycle window. This is where
you can select the days that will be a part of the simple run cycle.
Figure 5-51 Rule page of Simple Run Cycle window; 10th of April is selected
The Rule page of the Simple Run Cycle window looks very much like the window
used when editing a calendar. You can select or de-select days by clicking them.
You can also switch from Monthly to Yearly mode if you want to view the whole
year instead of just one month. To do this, click the Yearly tab at the top of the
window.
308
In Figure 5-52, Tuesday and Thursday are selected. Note that it is also possible
to select weekdays, freedays, work days, or everyday.
309
Figure 5-53 Rule page of Calendar Run Cycle window; LOTTA selected
In Figure 5-53, the calendar LOTTA has been selected. Note that it is also
possible to specify an offset from the days defined in the calendar. The offset can
be positive or negative, and can be specified in days, workdays, or weekdays. In
this example, no offset (an offset of 0) is specified.
310
The blue bar in Figure 5-55 on page 312 indicates the single day that has been
specifically included by the inclusive calendar run cycle selected from the list on
the left.
311
Figure 5-55 An inclusive calendar run cycle; the 10th of April is included
The white bars on weekend days in the above figures indicate that these days
are freedays. Freedays are defined on a per-job stream basis. By default,
freedays are Saturdays, Sundays, and the days defined in the holidays calendar.
The freeday settings can be changed for any job stream.
312
Figure 5-56 Freedays calendar for job stream set to UK-HOLS calendar
You can also see the option to specify whether Saturday or Sunday is considered
a freeday.
These freedays settings apply only to the job stream where the change is made.
The filter row will then appear just below the title row. A filter can be set for any
column displayed (Figure 5-58).
Figure 5-58 A job streams list with the filter row displayed but no filters set
4. To set a filter, click the Filter button below the column heading. If no filter is
set, this button will be labeled <no filter>. In our example, to set a filter for the
name of a job stream, we click the <no filter> button just below the Name
column heading. The Edit Filter window appears, as in Figure 5-59.
In this case we want to see only the job streams that contain ACC in their names.
You can also set a filter for objects whose names start with the filter text.
With the filter enabled, only job streams whose names contain ACC are displayed
(Figure 5-60 on page 316).
315
Figure 5-60 A job streams list displayed with a filter for ACC set
Note that it was not necessary to change the properties of the query list; we
simply filtered the results from the list.
5. If we now click the gray bar beneath the ACC filter button, we can quickly
disable that particular filter (Figure 5-61 on page 317).
316
Figure 5-61 A job streams list with a filter for the name ACC set, but disabled
With the filter disabled, the results displayed are the same as when no filter at all
was set. The advantage of having a filter set but disabled is that one can quickly
turn filters on and off to see the results in which you are interested.
Smart use of filters can save you a lot of time. You do not have to reload a list just
to find a specific object or group of objects. Just set a filter or two and find the
pertinent objects quickly.
Tip: A good strategy for getting the most out of JSC is to define query lists that
narrow down the search to a large group (say, your department or
geographical location), and then use filters to find the specific object or small
set of objects that you seek.
317
There are two sort buttons: one for sorting in ascending order and another for
sorting in descending order. To sort in ascending order, click the button that
looks like an upward pointing triangle; to sort in descending order, click the
button that looks like a downward pointing triangle. In Figure 5-62, you can see
what the sort buttons look like.
After clicking the right-hand button, the list of jobs is sorted in descending
(reverse alphabetical) order according to the job name. (See Figure 5-63.)
If you wish to undo any sort that has been applied to a list, simply choose Clear
Sort from the pop-up menu on the right side of the window, as in Figure 5-64 on
page 319.
The next step is to paste the job into the target job stream. Right-click in the Job
Stream Editor window of the target job stream and choose Paste from the
pop-up menu (Figure 5-66).
At this point, the Job Properties window for that job will appear (Figure 5-68 on
page 321). Here is where the properties of the job in the new job stream are
specified. The properties set in the source job stream will be copied into the
target job stream; the Job Properties window gives you an opportunity to change
the properties if they need to be different in the target job stream from what they
were in the source job stream.
320
Figure 5-67 Window appears when you paste a job into a job stream
Note: When copying a TWS job from one job stream to another, it may be
necessary to specify the name and workstation of the job in the Job Properties
window. You can specify these by clicking on the ellipsis button (...). This
limitation will probably be removed in a future version of JSC.
Now the job should have been copied to the target job stream, as shown in
Figure 5-68.
Figure 5-68 The job ACCJOB01 has been copied to the new job stream
Important: The name of the predefined JSC Common Default Plan Lists is
hard coded and cannot be changed.
Figure 5-69 Common Default Plan Lists group and default lists in the group
Our task is to create a common list of job instances for our TWS for z/OS
controller instance (TWSC) and for our TWS master (Yarmouth-A) instance. The
common list should show all jobs in error in the two engines and should be
refreshed automatically every two minutes.
323
2. Selecting Create Plan List gives a new pop-up window with two entries: Job
Stream Instance... and Job Instance....
Figure 5-70 Creating a new entry in the Common Default Plan List group
3. Since our task is to create a job instance list in the Common Default Plan List
group, we select Job Instance....
JSC shows a new window, the Properties - Job Instance Common List (see
Figure 5-71 on page 325).
324
Note: If you are going to create a job stream instance list, you can simply click
Job Stream Instance..., shown in Figure 5-70 on page 324. The process is
the same for creating Job Stream Instance Lists as for creating Job Instance
Lists.
Figure 5-71 JSC Properties window used to create a common job instance list
4. In the Properties - Job Instance Common List, we specify the name for the
list, the job status and the names of the engines we want to be in the list. We
name the list CommonErrorList, simply by typing this name in the Name field
(see Figure 5-72 on page 326).
5. Then we select the job status we want to track by clicking the grey box to the
right of the Status field.
6. This gives us a new pop-up window, where we check the Error status (see
Figure 5-72 on page 326).
325
Figure 5-72 The Status pop-up window where we check the error code
7. Next we have to define which engines we want to include in the common list.
This is done by clicking the grey box to the right of the Engine field name. In
the new pop-up window (shown in Figure 5-73 on page 327) we un-check the
engines that should not be part of our common list (TWSC-F100-Eastham,
TWSC-F200-Yarmouth, and Yarmouth-B).
326
Figure 5-73 Unchecking the engines that should not be part of the common list
327
Figure 5-74 The periodic refresh is activated and set to 120 seconds (2 minutes)
9. We are almost finished now. To save our new Job Instance Common List
view, we have two possibilities:
a. Click the OK button (Figure 5-74).
If you click the OK button, the new Job Instance Common List view will be
saved and the window will be closed.
b. Click the Apply button (Figure 5-74).
If you click the Apply button, the new Job Instance Common List view will
be saved, but the window will not be closed. The JSC will try the new view
to see if there are any jobs satisfying the search criteria (error job status
and job run on engine TWSC or Yarmouth-A).
Note: Using the Apply button is an efficient way to try the filter criteria right
away and to see the result of your filter. Since the Properties window is not
closed, it is very easy to change filter criteria and retry the changed filter.
10.In Figure 5-75 on page 329 you can see the result after clicking the Apply
button. The Properties window is still there, and JSC has created a list of the
jobs that satisfy the filter criteria.
Figure 5-75 JSC Properties window with the filter specification and the result
11.You may want to have an extra window showing only the jobs in error.
This can be done by clicking the Detach button at the top of the JSC window
(see Figure 5-76).
Using the Detach button, JSC will detach the CommonErrorList into its own
window. This window will then stay open while you are working with other lists
in the JSC. By checking the window once in a while, you can see whether
there are any new jobs in error. Remember that we have defined the detached
window to be refreshed automatically every two minutes.
329
Note: You can have up to seven detached windows running at the same time,
though you should be aware of performance degradation if all these windows
are using periodic refresh options.
From the detached window in Figure 5-77, it can be seen that we have five
jobs in error in the TWSC controller (engine) and six jobs in error in the
Yarmouth-A master (engine). The total number of jobs in error is 11 (shown at
the bottom of the window). The window will be refreshed periodically every
120 seconds (also shown at the bottom of the window).
12.If we want to check details for a particular job in the list in Figure 5-77, we
simply right click the job or double click the job to see all jobs in the job stream
for the job in error. In Figure 5-78 on page 331, we right click a z/OS job in
error in the TWSC controller.
330
Figure 5-78 Pop-up window shown when clicking a TWS for z/OS job in error
In Figure 5-79 on page 332 you will see a pop-up window for a TWS job ended in
error. Note the differences between the pop-up window shown in Figure 5-78 and
Figure 5-79 on page 332. As explained before, this difference reflects that the
two jobs are being handled by two different engines; TWS for z/OS and TWS.
331
Figure 5-79 Pop-up windows shown when right clicking a TWS job in error
Invoke TWS for z/OS batch jobs so that you can, for example, extend the
long-term plan and current plan, generate different reports, and redistribute
the Symphony file (in an end-to-end environment).
TWS for z/OS service functions
Access TWS for z/OS service functions so that, for example, you can stop or
start job submission, automatic recovery, and event-triggered tracking.
332
Chapter 6.
Troubleshooting in a TWS
end-to-end environment
In this chapter we want you to become familiar with identifying and isolating
the most common problems encountered in the Tivoli Workload Scheduler
end-to-end solution. In order to have a common troubleshooting chapter for the
entire Tivoli Workload Scheduler for z/OS product, we also mention
troubleshooting methods that are based on Tivoli Workload Scheduler for z/OS only.
Keyword      Meaning
ABEND        Abnormal end
ABENDU       Abnormal end with a user abend code
DOC          Documentation
LOOP         Loop
WAIT         Wait
MSG          Message
PERFM        Performance
INCORROUT    Incorrect output
ABEND
Choose the ABEND keyword when the Tivoli Workload Scheduler for z/OS
program comes to an abnormal end with a system abend code. You should also
use ABEND when any program that services Tivoli OPC (for example, VTAM)
terminates it, and one of the following symptoms appears:
An abend message at an operator console. The abend message contains the
abend code and is found in the system console log.
A dump is created in a dump dataset.
ABENDU
Choose the ABENDU keyword when the Tivoli Workload Scheduler for z/OS
program comes to an abnormal end with a user abend code and the explanation
of the abend code states that it is a program error. Also, choose this keyword
when a user abend (which is not supposed to signify a program error) occurs
when it should not occur, according to the explanation. If a message was issued,
use the MSG keyword to document it.
DOC
Choose the DOC keyword when one or more of the following symptoms appears:
There is incomplete or inaccurate information in a Tivoli OPC publication.
The published description of Tivoli OPC does not agree with its actual
operation.
INCORROUT
Choose the INCORROUT keyword when one or more of these symptoms
appears:
You received unexpected output, and the problem does not appear to be a
loop.
The output appears to be incorrect or incomplete.
The output is formatted incorrectly.
The output comes from damaged files or from files that are not set up or
updated correctly.
LOOP
Choose the LOOP keyword when one or more of the following symptoms exists:
Part of the program (other than a message) is repeating itself.
A Tivoli Workload Scheduler for z/OS command has not completed after an
expected period of time, and the processor usage is at higher-than-normal
levels.
The processor is used at higher-than-normal levels, a workstation operator
experiences terminal lockout, or there is a high channel activity to a Tivoli
Workload Scheduler for z/OS database.
336
MSG
Choose the MSG keyword to specify a message failure. Use this keyword when
a Tivoli Workload Scheduler for z/OS problem causes an error message. The
message might appear at the system console or in the message log, or both. The
messages issued by Tivoli Workload Scheduler for z/OS appear in the following
formats:
EQQFnnnC
EQQFFnnC
EQQnnnnC
If the message that is associated with your problem does not have the EQQ
prefix, your problem is probably not associated with Tivoli Workload Scheduler
for z/OS, and you should not use the MSG keyword.
PERFM
Choose the PERFM keyword when one or more of the following symptoms
appears:
Tivoli Workload Scheduler for z/OS event processing or commands, including
commands entered from a terminal in session, take an excessive amount of
time to complete.
Tivoli Workload Scheduler for z/OS performance characteristics do not meet
explicitly stated expectations. Describe the actual and expected
performances and the explicit source of the performance expectation.
337
WAIT
Choose the WAIT keyword when one or more of the following symptoms
appears:
The Tivoli Workload Scheduler for z/OS program, or any program that
services this program, has suspended activity while waiting for a condition to
be satisfied without issuing a message to indicate why it is waiting.
The console operator cannot enter commands or otherwise communicate
with the subsystem and Tivoli Workload Scheduler for z/OS does not appear
to be in a loop.
User abends originate in the application program. Abend codes are documented
in Appendix A, Abend Codes of the Tivoli Workload Scheduler for z/OS V8R1
Diagnosis Guide and Reference, LY19-6410, and Tivoli Workload Scheduler for
z/OS Messages and Codes, SH19-4548.
A common user abend is 3999, as shown in Example 6-1 on page 339.
338
You may find the message Data Router task abended while processing the
following queue element in the system log (Example 6-2).
Example 6-2 Symptom dump output
IEA995I SYMPTOM DUMP OUTPUT
USER COMPLETION CODE=3999
TIME=15.46.40 SEQ=00456 CPU=0000 ASID=0031
PSW AT TIME OF ERROR 078D1000
800618CE ILC 2 INTC 0D
ACTIVE LOAD MODULE
ADDRESS=00054DF8 OFFSET=0000CAD6
NAME=EQQBEX
DATA AT PSW 000618C8 - 00181610 0A0D1812 0A0D47F0
GPR 0-3 80000000 80000F9F 00000F9F 000844E8
GPR 4-7 C5D8D8C2 C4C54040 C5E7C9E3 40404040
GPR 8-11 00000000 00000001 00000F9F 00061728
GPR 12-15 00000000 001DA4C0 800579D2 00000000
END OF SYMPTOM DUMP
System abends can occur, for example, when a program instruction refers to a
storage area that does not exist anymore.
Please note that specifying SYSOUT=* for the dump DD destroys the internal
format of the dump and renders it useless. When you experience an abend and
find no dumps in your dump datasets, have a look at your dump analysis and
elimination (DAE) setup. DAE can be used to suppress the creation of certain
kinds of dumps. See also the z/OS V1R3.0 MVS Initialization and Tuning Guide,
SA22-7591.
341
With OS/390 V2R5, Tivoli Workload Scheduler for z/OS introduced a new
task, EQQTTOP, that handles the communication with the TCP/IP server
that now runs on UNIX System Services (USS) in full-function mode.
EQQTTOP is written in C in order to use the new C socket interface. New
messages have been implemented, some of them pointing to other z/OS manuals.
Example:
EQQTT20E THE RECEIVE SOCKET CALL FAILED WITH ERROR CODE 1036
To find the cause, look in the System Error Codes for socket calls chapter of the
z/OS V1R2 Communications Server: IP and SNA Codes, SC31-8791.
Table 6-2 Socket error codes
Error number    Message name
1036            EIBMNOACTIVETCP
New modify commands in Tivoli Workload Scheduler for z/OS are a handy way
to get important information very quickly. When you want to find out which Tivoli
Workload Scheduler for z/OS task is active or inactive (other than by looking into
MLOG for related messages), enter the command in SDSF shown in
Example 6-3.
Example 6-3 Modify command
/F procname,status,subtask
/*where procname is the subsystem name of engine or agent */
342
EQQZ207I EVENT MANAGER    IS ACTIVE
EQQZ207I GENERAL SERVICE  IS INACTIVE
EQQZ207I JT LOG ARCHIVER  IS ACTIVE
EQQZ207I EXTERNAL ROUTER  IS ACTIVE
EQQZ207I WS ANALYZER      IS ACTIVE
The above screen shows that the general service task has an inactive status. To
find more details, have a look into MLOG. The modify commands are described
in the Tivoli Workload Scheduler for z/OS V8R1 Quick Reference, GH19-4541.
343
JOBNAME  ASID  TCBADDR   EXC/SHR    STATUS
OPCA     003F  007DE070  EXCLUSIVE  OWN
As you can see, there are two tasks trying to get exclusive access (a lock) to one
resource. Exclusive means that no other task can get the lock at the same time;
an exclusive lock is usually an update access. The second task has to wait until
the first, which is currently the owner, releases it. Message ISG343I returns two
fields, called the major and minor names. In our example, SYSZDRK is the major
name, and OPCATURN2 is the minor name. SYSZDRK represents the active
current plan, while the first four characters of the minor name represent your
Tivoli Workload Scheduler for z/OS subsystem name. With this information, you
can search for known problems in the software database. If you find no hint, your
Tivoli support representative may ask you for a console dump.
344
SDUMP indicates the options for the SYSMDUMP, which is the preferred type of
dump. The options shown are sufficient for almost every dump in Tivoli Workload
Scheduler for z/OS. For a detailed explanation, refer to the z/OS MVS system
commands documentation for the SDUMP options. If you are missing one of
these options, you can change it with the change dump (CD) command. For
GRSQ, as an example:
CD SET,SDUMP=(GRSQ)
You need to be sure that the dump datasets, which have been provided by the
z/OS installation, are free to be taken.
Example 6-7 Display dump datasets
COMMAND INPUT ===>/d d,t
RESPONSE=MCEVS4
IEE853I 18.43.40 SYS1.DUMP TITLES 385
SYS1.DUMP DATA SETS AVAILABLE=003 AND FULL=000
CAPTURED DUMPS=0000, SPACE USED=00000000M, SPACE FREE=00001200M
The previous example shows that all three dump datasets can be used for
console dumps. If not, you can clear a certain one; make sure that nobody needs
it anymore. To clear a certain dump dataset, you can issue the following command:
/dd clear,dsn=00
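The console messages that follow were produced by requesting a console dump
of the engine address space. A minimal sketch of such a request (only the DUMP
command itself is shown; the title DEMO is simply the one used in this example,
and you then reply to the outstanding WTOR with the address space to be
dumped, as the output below shows):
DUMP COMM=(DEMO)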
R 17,TSONAME=(OPCA)
IEE600I REPLY TO 17 IS;TSONAME=(OPCA)
IEA794I SVC DUMP HAS CAPTURED: 482
DUMPID=049 REQUESTED BY JOB (*MASTER*)
DUMP TITLE=DEMO
IEF196I ...
IEA611I ...
9. Specify any unique information about the problem or about your system:
Indicate any other applications that were running when the problem
occurred.
Describe how Tivoli Workload Scheduler for z/OS was started.
Describe all user modifications to active Tivoli Workload Scheduler for
z/OS programs.
If more information is needed, a Tivoli Support Center representative will guide
you concerning any additional diagnostic traces that you can run.
Event name   Meaning
1            Reader event
2            Start event
3S           Step-end event
3J           Job-end event
3P           Job-termination event
4            Print event
5            Purge event
The events are prefixed with either A (for JES2) or B (for JES3). At least the set of
type 1, 2, 3J, and 3P events is needed to correctly track the several stages of a
jobs life. The creation of step-end events (3S) depends on the value you specify
in the STEPEVENTS keyword of the EWTROPTS statement. The default is to
create a step-end event only for abending steps in a job or started task. The
creation of print events depends on the value you specify in the PRINTEVENTS
keyword of the EWTROPTS statement. By default, print events are created.
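As an illustration only, an event-writer options statement that creates step-end
events for every step and suppresses print events might look like the following
sketch (check the Customization and Tuning manual for the exact keyword
values supported by your release):
EWTROPTS STEPEVENTS(ALL)
         PRINTEVENTS(NO)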
If you find that the current plan status of a job does not reflect the status in JES,
you may have missing events. A good starting point is to run the Tivoli Workload
Scheduler for z/OS AUDIT package for the affected occurrence to easily see
which events were processed by the engine and which are missing, or you can
browse your event datasets for the job name and job number to determine which
events were not written.
Problem determination depends on which event is missing and whether the
events are created on a JES2 or JES3 system. In Table 6-4, the first column
refers to the event type that is missing, and the second column tells you what
action to perform. The first entry in the table applies when all event types are
missing (when the event dataset does not contain any tracking events).
Table 6-4 Problem determination of tracking events
(The table lists the problem determination action to perform for each missing
event type: ALL, A1, B1, A2/B2, A3S/B3S, A3J/B3J, A3P, B3P, A4/B4, A5, and B5.)
This kind of dialog uses ISPF services and is useful if you are not familiar
with the native shell environment.
The ishell can be run from the TSO command processor by entering ish on
the command line.
The native shell is similar to the UNIX shell, except that not all commands are
supported.
To use the native shell, enter omvs from the TSO command processor
command line.
We recommend using the ishell to better illustrate the directory layout. Run ish to
list the contents of your working directory. The directory is the same as the one
you defined in the wrkdir parameter of the topology member (in our installation,
/tws/twsctpwrk).
Example 6-10 Listing the work directory
Directory List
Select one or more files with / or action codes.
EUID=0
/tws/
Type   Filename
_ Dir  .
_ Dir  ..
l Dir  twsctpwrk
We will explain each file in more detail in Table 6-5 on page 356.
Table 6-5 lists and explains the files and directories in the work directory:
Intercom.msg, localopts, Mailbox.msg, the Mozart directory, NetConf,
NetReq.msg, the Pobox directory, Sinfold, Sinfonia, the Stdlist directory,
Symold, Symphony, and the translator files Translator.chk and Translator.wjl.
If you list the directory for a specific date, you will see three files, as listed in
Example 6-13.
Example 6-13 Stdlist files
Type    Filename
_ Dir   .
_ Dir   ..
_ File  NETMAN
_ File  STC
_ File  TRANSLATOR
The NETMAN file holds all messages related to the netman process, while
batchman, writer, and mailman write to the STC file. This name can vary
depending on the type of installation; the STC file corresponds to the file that
is named after the TWS user ID in a Tivoli Workload Scheduler stdlist directory.
The TRANSLATOR file is used by the translator process.
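For example, to look at the most recent batchman and mailman messages from
the USS shell, a minimal sketch (substitute the date directory of the production
day you are interested in for 2002.02.28):
tail -100 /tws/twsctpwrk/stdlist/2002.02.28/STC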
Example 6-14 Batchman messages
BATCHMAN:01:20/Received Bl:
BATCHMAN:01:20/OPCMASTER#B73EA5E0519ECF25.J_010_F202DWTESTSTREAM #J1086
BATCHMAN:01:20/Jobman streamed
BATCHMAN:01:20/OPCMASTER#B73EA5E0519ECF25.J_010_F202DWTESTSTREAM (#J1086)
BATCHMAN:01:20/AWS22010075I Changing schedule B73EA5E0519ECF25 status to
BATCHMAN:01:20/EXEC
BATCHMAN:01:20/Received Us: F202
BATCHMAN:01:20/+ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
BATCHMAN:01:20/+ AWS22010001E Unable to stream job
BATCHMAN:01:20/+ OPCMASTER#B73EA5E0519ECF25.J_010_F202DWTESTSTREAM in file
BATCHMAN:01:20/+ DIR: Error launching Invalid argument:
BATCHMAN:01:20/+ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
357
You can use the message number as a search argument in the knowledge
database, which can be accessed from the following site:
http://www.tivoli.com/support
You might also need to investigate the Tivoli Workload Scheduler for z/OS
batch job output and the end-to-end server log to get a complete picture of the
error you are facing.
The controller message logs and batch output contain information about
Symphony creation and its switch.
The server log includes Starter and Translator log information.
358
You can check the Symphony run number and Symphony status in the legacy
ISPF using option 6.6 (Figure 6-2 on page 360).
359
Figure 6-2 shows, among other information, when the current plan was created
(02/02/28 11.02), the planning period end (02/03/01 13.30), the last CP backup
(02/02/28 11.45), the first event logged after the backup (02/02/28 11.46), and
the current plan dataset in use (EQQCP2DS). The Symphony status section
shows the Symphony run number (118), whether the Symphony file is under
production (No), and whether a new Symphony file is ready (No). The time
stamp shown is 0102059F 16460266.
If you still encounter link problems, take a closer look at the following
definitions:
Check that the TPLGYPRM parameter in the SERVOPTS statement of the
end-to-end server and in BATCHOPT points to the same topology member.
Verify the hostname and port number definitions in the topology member.
Ensure that the port numbers are equal within the entire end-to-end
environment.
Check your DOMREC and CPUREC definitions, especially the CPUNODE
and CPUTCPIP keywords.
If you modified the member, make sure that you either run a replan, a daily
plan extend, or a Symphony renew job.
Check in the end-to-end server logs whether the Symphony file has been
successfully created and switched.
Use the netstat command from the TSO command processor to check if the
connection between the end-to-end server and the distributed domain
manager is established.
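A minimal sketch of such a check from the TSO command processor (look for
an established connection on the port number defined in your topology member;
filtering options for the command vary by release):
NETSTAT CONN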
361
However, sometimes during regular operation you might need to renew the
Symphony file, such as when:
You make changes to the script library or to the definitions of the TOPOLOGY
statement.
You add or change information in the current plan, such as workstation
definitions.
If a problem occurs during the building of the Symphony file, the Symphony file
will not be built. To create the Symphony file, you must perform a Symphony
renew after correcting the errors. Look at the following logs to check whether
the Symphony file has been created successfully:
The return code of the batch job may be the first place to look. The messages
produced by the job should include the messages shown in Example 6-16.
Example 6-16 Symphony creation message in batch output
EQQ3101I 0000048 JOBS ADDED TO THE SYMPHONY FILE FROM THE CURRENT Plan
EQQ3087I THE SYMPHONY FILE HAS BEEN SUCCESSFULLY CREATED
Note: In a certain situation, when the scriptlib contains syntax errors indicated
by the message EQQZ086E, the Symphony renew job ended with return code 0.
We have already addressed this issue.
The end-to-end server log should contain the messages indicating that the
input translator has finished waiting for batchman.
Example 6-17 Creation messages in end-to-end server log
EQQPT30I
EQQPT22I
EQQPT31I
EQQPT20I
EQQPT21I
EQQPT23I
The message log of the Tivoli Workload Scheduler for z/OS engine should
contain the messages shown in Example 6-18 on page 363.
362
If the Symphony file is not created successfully, you need to investigate the logs
for any error messages. Look in Tivoli Workload Scheduler for z/OS 8.1
Messages and Codes, SH19-4548, for an explanation and the system
programmer response, or contact Tivoli customer support.
Note: Recovering the current plan from an error situation may also imply
recovering the Symphony file. If the Symphony file is not up-to-date with the
current plan, submit the Symphony renew or the daily plan batch job.
See also the Disaster recovery planning chapter in Tivoli Workload Scheduler for
z/OS V8R1 Customization and Tuning, SH19-4544.
363
(The output of the display command, abbreviated here, shows for user TWSRES1
the UNIX System Services processes in the end-to-end server address space
TWSCTP, ASID X'004F', among them CMD=EQQTTTOP,
CMD=/usr/lpp/TWS/TWS810/bin/writer, CMD=/usr/lpp/TWS/TWS810/bin/mailman,
and CMD=/usr/lpp/TWS/TWS810/bin/batchman, each with its process ID (PID),
parent process ID (PPID), state, start time, and CPU time.)
The display output shows that job name TWSCTP (our end-to-end server started
task), running in address space ID X'4F', started several end-to-end processes
such as translator, netman, writer, mailman, and batchman at specific times.
Every process has a process ID (PID) assigned. Looking at the parent process
ID (PPID), you can see that mailman and writer have been spawned by the
netman process.
Example 2: To display detailed file system information on currently mounted files,
enter:
/DISPLAY OMVS,FILE
364
(The output of the display command, abbreviated here, shows among others a
file system mounted read/write at PATH=/SC65 with OWNER=SC65, AUTOMOVE=N
and CLIENT=Y, and the HFS dataset NAME=OMVS.TWS810.TWSCTP.HFS mounted
RDWR at PATH=/tws/twsctpwrk with OWNER=SC65, AUTOMOVE=Y and CLIENT=Y.)
The display command returns the name of the HFS dataset, mountpoint, owner,
and mode once it has been mounted.
Example 3: To display information about current system-wide parmlib limits,
enter:
/DISPLAY OMVS,L
365
This is not possible via the DISPLAY OMVS command. Instead, you can use the
df -k shell command. In the ishell, type the action code u on the command line
in front of the working directory.
Example 6-22 File system utilization
File System Attributes
File system name:   OMVS.TWS810.TWSCTP.HFS
Mount point:        /tws/twsctpwrk
Status . . . . . . : Available
File system type . : HFS
Mount mode . . . . : R/W
Device number  . . : 50
Type number  . . . : 1
DD name  . . . . . :
Block size . . . . : 4096
Total blocks . . . : 256008
Available blocks . : 251383
Blocks in use  . . : 4595
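The same numbers can also be obtained from the USS shell; a minimal sketch,
using the work directory of our installation as the mount point:
df -k /tws/twsctpwrk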
2. Select option 2.
Example 6-24 Instance selection
******** OPC Connector manage program ********
Select instance menu
1. OPC
0. Exit
3. Choose the instance where you need to activate the trace level.
Example 6-25 Changing connector attributes
******** OPC Connector manage program ********
Manage menu
Name         : OPC
Object id    : 1929225022.1.1771#OPC::Engine#
Managed node : itso7
Status       : Active
OPC version  : 2.3.0
1. Stop    the OPC Connector
2. Start   the OPC Connector
3. Restart the OPC Connector
4. View/Change attributes
5. Remove instance
0. Exit
Selecting 4. View/Change attributes displays the current attributes of the
instance, including the trace settings, which can then be changed and committed:
Name         : OPC
Object id    : 1929225022.1.1771#OPC::Engine#
Managed node : itso7
Status       : Active
OPC version  : 2.3.0
2. Name      : OPC
   ...       : 524288
   ...       : 1
0. Undo changes
1. Commit changes
The connector trace levels determine which data is traced: errors, called
methods, connections, IDs, filters, PIF requests, and the numbers of elements
returned in queries.
JSC errors are reported with message identifiers of the form GJSQxxx and
GJSWxxx. Read the error details for explanations and suggested actions. The
console and error logs can be found in the \Jsconsole\dat\.tmeconsole
directory. Consult the trace file; remember that error tracing is active by default.
Also, check the file bin\java\error.log for untraced errors.
Possible cause: Check if the version of that engine is supported by the JSC. For
a complete list of the compatible versions, see Chapter 3, Planning, installation,
and configuration of the TWS 8.1 on page 91.
Error description: There is a problem with Tivoli Framework (for example oserv
down, marshal error).
Possible cause: Check Framework status. Compare also the JSC and connector
trace files.
371
Possible cause: Check network integrity and TWS for z/OS connector
parameters.
Find the section where the user can customize variable values. Locate the two
variables, TRACELEVEL and TRACEDATA. They should be set to 0 by default.
Example 6-27 Console.bat file
REM ---------- Section to be customized --------
REM change the following lines to adjust trace settings
set TRACELEVEL=0
set TRACEDATA=0
REM ------ End of section to be customized -------
Change the value of the variable TRACELEVEL to activate the control flow trace
at different levels. Change the value of the variable TRACEDATA to activate the
data flow trace at different levels. Acceptable values range from 0 to 3.
TRACELEVEL also allows the value -1, which completely disables the trace, as
shown in Table 6-7.
Table 6-7 lists the TRACELEVEL values and the trace types they enable; the
value -1 disables the trace completely. Table 6-8 lists the TRACEDATA values
and their corresponding trace types; with the value 0, no data is traced.
Note: Tracing can adversely affect the performance of JSC. Use higher values
of TRACELEVEL and TRACEDATA only when necessary.
Trace files can become huge; use advanced customization to optimize disk
space allocation, and move or delete log files related to previous executions.
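For example, to raise the control-flow trace to its most detailed level while
keeping the data-flow trace low, the customized section of console.bat could be
changed as in this sketch (see Tables 6-7 and 6-8 for what each value traces):
REM ---------- Section to be customized --------
set TRACELEVEL=3
set TRACEDATA=1
REM ------ End of section to be customized -------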
373
If netman has not been started, start it from the command line with the
StartUp command. Note that this will start only netman, not any other
TWS processes.
If netman started as root and not as a TWS user, bring TWS down
normally, and then start up as a TWS user through the conman command
line on the master or FTA:
unlink <FTA name>
stop <FTA name>; wait
shut <FTA name>; wait
StartUp
If the file system is full, open some space in the file system.
If a file with the same name as the directory already exists, delete that
file. The directory name would be in a yyyy.mm.dd format.
If the directory or netman standard list is owned by root and not by the
TWS user, change the ownership of the standard list directory from the
UNIX command line with the chown TWS yyyy.mm.dd command (where TWS
is the TWS user name). Note that this must be done as the root user.
374
If the mailman read corrupted data, try to bring TWS down normally. If
this is not successful, kill the mailman process with the following steps.
UNIX:
Run ps -ef | grep maestro to find the process ID.
Run kill -9 <process id> to kill the mailman process.
Windows (commands in TWShome\unsupported directory):
Run listproc to find the process ID.
Run killproc <process id> to kill the mailman process.
If batchman is hung:
Try to bring TWS down normally. If not successful, kill the mailman
process as explained in the previous bullet.
If the writer process for an FTA is down or hung on the master:
Use netstat -a | grep <netman port> on both UNIX and NT systems
to check if netman is listening.
375
Check the size of the message files (files whose names end with .msg) in the
TWS home directory and the pobox subdirectory. 48 bytes is the minimum size
of these files.
Use the evtsize command to expand a file temporarily, and then try to start
TWS:
evtsize <filename> <new size in bytes>
For example:
evtsize Mailbox.msg 2000000
If necessary, remove the message file (only after an attempt with evtsize
and a restart has failed).
Message files contain important messages being sent between TWS
processes and between TWS agents. Remove a message file only as a
last resort; all data in the message file will be lost.
Jobman not owned by root.
If jobman (in the bin subdirectory of the TWS home directory) is not owned by
root, correct this problem by logging in as root and running the following
command: chown root jobman
Read bad record in Symphony file.
376
If NT authorizations for TWS users are not in place, you can try the
following:
Increase quotas.
Log on as a service.
Log on locally.
Valid NT or domain user for the FTA is not in the TWS user database.
Add the TWS user for the FTA to the TWS user database. Do not fill in the
CPU Name field if the TWS user is a domain account.
Password for the NT user has been changed.
Do one of the following:
Note that changes to the TWS user database will not take effect until
Jnextday.
If the user definition existed previously, you can use the altpass command
to change the password for the production day.
Jobs not running on NT or UNIX
Batchman down.
See Batchman not up or will not stay up (batchman down) on page 376.
377
Limit set to 0.
To change the limit to 10 via the conman command line:
For a single FTA:
lc <FTA name>;10
378
This may be due to bad or missing data in schedule or job. You can perform
the following actions:
Check for missing calendars.
Check for missing resources.
Check for missing parameters.
Jnextday not completing stageman process.
379
The file snapfile.at is the captured trace file. This file can be named anything you
like, but it is customary to end the filename with .at to identify it as an AutoTrace
snap file. The file is not readable without special tools and library files.
380
Chapter 7.
(The figures in the chapter introduction show the NetView integration
components: the agent, the SNMP protocol, and the MIB. Figure 7-1 shows our
Tivoli Workload Scheduler network, which includes the fault tolerant agent
tokyo, an NT SP 5 workstation on the Token Ring.)
Note: In NetView 4.1 and later, the management node functions can be
distributed across a server and one or more clients. In that case you have to
install Tivoli Workload Scheduler and magent on the client NetView machines
as well. We will not use NetView clients in our scenario, but if you need more
information on how to install and configure magent on NetView clients, please
refer to Chapter 9, Integration with Other Products of the Tivoli Workload
Scheduler 8.1 Reference Guide, SH19-4556.
387
Installation tips:
It is a good policy to configure both the TWS master and the TWS backup
domain managers as managed nodes. If you are using only one managed
node, which is not the Tivoli Workload Scheduler master, you have to set
OPTIONS=MASTER in the BmEvents configuration file on the managed
node. For more explanation of the BmEvents configuration file, refer to
Chapter 9, Integration with Other Products, of the Tivoli Workload
Scheduler 8.1 Reference Guide, SH19-4556.
Although we have included them, management nodes (server and clients)
need not necessarily be members of managed Workload Scheduler networks,
but making them so makes your configuration easier.
The best location for a management node is a Tivoli Workload Scheduler
FTA workstation. Choosing a Tivoli Workload Scheduler master for this
purpose is generally not recommended, especially for busy Tivoli Workload
Scheduler networks, due to the additional overhead of the NetView
application (although it does have the benefit of minimizing Tivoli
Workload Scheduler/NetView manager-agent polling traffic).
388
The customize script should be run on both management nodes (the NetView
server and clients, if any) and managed nodes (Tivoli Workload Scheduler/UNIX
workstations).
1. If you are installing on a NetView client instead of a server, use the
-client keyword, such as:
/bin/sh /tivoli/TWS/D/tws-d/OV/customize -client hostname
4. Execute the startup script to restart the TWS netman process using:
/tivoli/TWS/D/tws-d/StartUp
5. mdemon is started with NetView as part of the ovstart sequence, so restart
the NetView daemons as follows:
/usr/OV/bin/ovstop
/usr/OV/bin/ovstart
Or you can use the smitty panels: smitty -> Communications Applications and
Services -> Tivoli NetView -> Control -> Stop all running daemons, and then
Restart all stopped daemons.
Tip: The run options of mdemon are included in the /usr/OV/lrf/Mae.mgmt.lrf
file.
391
4. Execute the startup script to restart the TWS netman process and magent
using:
/tivoli/TWS/D/tws-d/StartUp
When you define the user that will manage Tivoli Workload Scheduler from
NetView, the user must have access to run commands from the NetView server.
This is done by adding the name of the NetView server to the user's
$HOME/.rhosts file on each managed node. In our case the user was root, so the
/.rhosts file was edited to add the entry:
tividc11.itsc.austion.ibm.com root
The second step is to add this user to the Tivoli Workload Scheduler security file.
Since we are using the root user, we do not need to perform this step. (The root
user and the user under which you performed the Tivoli Workload Scheduler
installation, also known as the maestro user, are included in the security file by
default.) But if you are using any other user, you have to modify the security file
to add the required user, using the makesec command. Please refer to the Tivoli
Workload Scheduler 8.1 Planning and Installation Guide, SH19-4555, for more
information on how to add a user to the security file.
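A minimal sketch of the sequence involved (the file name Security.txt is just an
illustrative choice; the stanza you add must follow the syntax described in the
Planning and Installation Guide):
dumpsec > Security.txt     (dump the current security definitions to an editable file)
vi Security.txt            (add the NetView user to the appropriate USER stanza)
makesec Security.txt       (compile the file and replace the active security file)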
392
5. Click Configure For This Map... to specify options for the TWS map.
6. When the Configuration dialog appears, the Enable Maestro for this map
option must be set to True. All other options, the commands run under the
Tivoli Workload Scheduler menu items, are left as default (Figure 7-3 on
page 394).
393
7. To complete the addition of the new map, click OK. The new map, TWS, is
shown in Figure 7-4 on page 395.
394
Note: The TWS map (the icon labeled Tivoli Systems, Inc. (c)) will first be
shown in blue. This means that NetView has not yet received any information
from the Tivoli Workload Scheduler application. After the polling cycle it will
turn green.
8. Next you need to add the Tivoli Workload Scheduler management information
base (MIB). MIB is a formal description of a set of network objects that can be
managed using the Simple Network Management Protocol, which is the main
protocol that NetView uses to get information from its agents.
Although this is not a mandatory step, it is highly recommended, since if you
add the MIB, you will be able to use NetViews MIB browser application.
Select Options -> Load/Unload MIBs -> SNMP from the NetView GUI.
9. Since the Workload Scheduler MIB is not loaded by default (Figure 7-5 on
page 396), press the Load button to load it.
395
10.When the Load MIB From File dialog opens, find and select the Maestro MIB
(/usr/OV/snmp_mibs/Maestro.mib) from the drop-down list and press OK
(Figure 7-6).
2. Choose the TWS symbol by double clicking. There is only one Tivoli
Workload Scheduler network in this application submap, labelled
YARMOUTH-D:Maestro (Figure 7-8 on page 399). If there were multiple Tivoli
Workload Scheduler networks there would be multiple icons, each labelled
with the specific Tivoli Workload Scheduler network name. The icon color
represents the status of all workstations and links that comprise the Tivoli
Workload Scheduler network.
398
3. Opening the Tivoli Workload Scheduler network icon shows all workstations
and links in the network, as in Figure 7-9 on page 400.
399
Figure 7-9 is equivalent to the diagram of our Tivoli Workload Scheduler network
(Figure 7-1 on page 386). There are three Tivoli Workload Scheduler nodes, with
the center one yarmouth being the master node. If there were more nodes, they
would be arranged in a star pattern around the master node.
Each node symbol represents the job scheduling on that workstation. The color
represents the status of the job scheduling. If a trap is received indicating a
change in status of a job scheduling component, the icon color will be changed.
The links represent the Tivoli Workload Scheduler workstation links with the color
representing the status of the workstation link.
In addition to monitoring Tivoli Workload Scheduler status, you can run several
tasks from this window.
400
These actions are also available from the object context menu by clicking a
symbol with mouse button three (or right mouse button, if you are using a
two-button mouse).
The menu actions are:
View
Open a child submap for a Workload Scheduler/NetView symbol. Choosing View
after selecting a workstation symbol on the Workload Scheduler network submap
opens the monitored processes submap. Choosing View after selecting a
workstation symbol on the IP node submap returns to the Workload Scheduler
network submap.
Master conman
Run the conman program on the Workload Scheduler master. Running the
program on the master permits you to execute all conman commands (except
shutdown) for any workstation in the Workload Scheduler network.
401
Acknowledge
Acknowledge the status of selected Workload Scheduler/NetView symbols.
When acknowledged, the status of a symbol returns to normal. It is not
necessary to acknowledge critical or marginal status for a monitored process
symbol; it will return to normal when the monitored process itself is running
again. Critical or marginal status of a Workload Scheduler workstation symbol
should be acknowledged either before or after you have taken some action to
remedy the problem; it will not return to normal otherwise.
Conman
Run the conman program on the selected Workload Scheduler workstations.
Running the program on a workstation other than the master permits you to
execute all conman commands on that workstation only.
Start
Issue a conman start command for the selected workstations. By default, the
command for this action is:
remsh %H %P/bin/conman 'start %c'
Down (stop)
Issue a conman stop command for the selected workstations. By default, the
command for this action is:
remsh %H %P/bin/conman 'stop %c'
Start up
Execute the Workload Scheduler StartUp script on the selected workstations. By
default, the command for this action is:
remsh %h %P/StartUp
Rediscover
Locate new agents and new Workload Scheduler objects, and update all
Workload Scheduler/NetView submaps.
Note: You need to run the re-discover function each time you change the
Workload Scheduler workstation configuration.
402
2. Select Maestro and then select Down (stop) from the menu, as shown in
Figure 7-11. This will stop the batchman process on tividc11.
3. We can verify that the batchman process has been stopped by checking the
status on tividc11 as in Example 7-2.
Example 7-2 Check status on tividc11
tividc11:/tivoli/TWS/D/tws-d>conman status
TWS for UNIX (AIX)/CONMAN 8.1 (1.36.1.3)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
Corp.
Installed for group 'TWS-Distributed'.
Locale LANG set to "En_US"
Schedule (Exp) 02/08/02 (#10) on TIVIDC11-D.  Batchman down.  Limit: 10, Fence: 0, Audit Level: 1
TWS for UNIX (AIX)/CONMAN 8.1 (1.36.1.3)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
403
There are no additional submaps to TWS maps. If you want to see the Tivoli
Workload Scheduler process information, you need to browse the node symbol
under the main IP submap, which we are going to explain next.
2. Type tividc11 in the Symbol Label field and then press Apply for NetView to
locate the node for you.
404
This will open up the map that contains the tividc11 node, as shown in
Figure 7-14 on page 406.
The node submap also shows the physical interfaces of the node.
We see that BATCHMAN, JOBMAN, and MAILMAN icons are yellow, which
means that they are stopped. Remember that we have previously issued a
stop conman command from the NetView console.
Tip: Yellow means a marginal problem or warning. Red means a severe
problem. For example, if we were to kill the mailman process manually using
the kill command (simulating an abend), the mailman process icon would have
turned red.
7. After the polling cycle (which is 60 seconds by default) all Tivoli Workload
Scheduler processes should be started on tividc11. When you click the
Maestro icon again you should see all processes started and all colors
should turn green (Figure 7-19 on page 411).
410
1. Edit the file to add the -timeout option to the mdemon command line.
2. Delete the old registration by running the ovdelobj command.
3. Register the manager by running the ovaddobj command and supplying
the name of the lrf file.
411
When we checked the NetView console we saw that the tividc11 icon in the TWS
map turned red (Figure 7-21 on page 413) and this also has turned the Tivoli
Systems Inc icon (seen at the left side) yellow.
412
Note that when the schedules complete successfully after initial failures, the
NetView icons are not restored to a normal green state. The display looks the
same as Figure 7-21. This is because the Tivoli Workload Scheduler/NetView
does not change the symbol state with the incoming success traps by default.
You have two options to turn it back to the green state:
You can configure Tivoli Workload Scheduler/NetView to send events that
are not sent by default (mostly successful completion events or warning
messages; a list is given in Table 7-2 on page 418). But in this case, you have
to be careful about the extra network traffic you are introducing by doing so.
The other method is to use NetView's Acknowledge function. To do this:
413
are passed on to the agent, which may convert them to SNMP traps,
depending on the settings in its configuration file. The BMEvents.conf file is
located in the /<TWShome>/OV directory.
Example 7-3 shows the BMEvents.conf file on yarmouth, our TWS master.
Example 7-3 BmEvents.conf file
# cat BmEvents.conf
# @(#) $Header:
/usr/local/SRC_CLEAR/maestro/JSS/maestro/NetView/RCS/BmEvents.co
nf,v 1.6 1996/12/16 18:19:50 ee viola_thunder $
# This file contains the configuration information for the BmEvents module.
#
# This module will determine how and what batchman notifies other processes
# of events that have occurred.
#
# The lines in this file can contain the following:
# OPTIONS=MASTER|OFF
#    MASTER  This tells batchman to act as the master of the network and
#            information on all cpus are returned by this module.
#    OFF     This tells batchman to not report any events.
#    default on the master cpu is to report all job scheduling events
#    for all cpus on the Maestro network (MASTER); default on other cpus
#    is to report events for this cpu only.
#
# LOGGING=ALL|KEY
#    ALL     This tells batchman all the valid event numbers are reported.
#    KEY     This tells batchman the key-flag filter is enabled.
#    default is ALL for all the cpus
#
# EVENTS = <n> ...
#    <n> is a valid event number (see Maestro.mib traps for the valid event
#    numbers and the contents of each event).
#    default is 51 101 102 105 111 151 152 155 201 202 203 204 251 252 301
Example 7-4 shows the MAgent.conf file on yarmouth, our TWS master.
Example 7-4 MAgent.conf file
# cat MAgent.conf
# @(#) $Header:
/usr/local/SRC_CLEAR/maestro/JSS/maestro/NetView/RCS/MAgent.conf
,v 1.6 1996/12/16 18:20:50 ee viola_thunder $
# This file contains the configuration information for the snmp agent.
# OPTIONS=MASTER|OFF
#    MASTER  This tells the agent to act as the master of the network and
#            information on all cpus are returned by this module.
#    OFF     This tells the agent not to report any events, i.e., no traps
#            are sent.
#    default is MASTER on the master cpu, and OFF on other cpus
#
+SYSLOG
/etc/syslogd.pid
Tip: Whether or not traps are generated is controlled by two different
settings:
EVENTS parameter in the BmEvents configuration file:
It lists the events to be sent as SNMP traps. With the exception of events 1,
52, and 53, traps will not be generated unless the corresponding events
are turned on in the BmEvents configuration file.
For example, the following setting will enable event numbers 1, 52, 53, 54,
and 101 to be sent to NetView:
EVENT=1,52,53,54,101
EVENTS parameter in the MAgent configuration file:
The Additional Actions column in Table 7-1 lists the actions available to the
operator for each event. The actions can be initiated by selecting Additional
Actions from the Options menu, then selecting an action from the Additional
Actions window.
Note: The operator must have the appropriate Tivoli Workload Scheduler
security access to perform the chosen action.
Table 7-1 gives the Tivoli Workload Scheduler traps that are enabled by default.
Table 7-1 TWS traps enabled by default
Trap #  Name                  Description                         Additional actions
1       uTtrapReset                                                N/A
52      uTtrapProcessGone     A monitored process is no longer    N/A
                              present.
53      uTrapProcessAbend     A monitored process abended.        N/A
54      uTrapXagentConnLost                                        N/A
101     uTtrapJobAbend        A scheduled job abended.
102     uTtrapJobFailed
105     uTtrapJobUntil
151     uTtrapSchedAbend      A schedule ABENDed.                 Show schedule, cancel schedule
152     uTtrapSchedStuck      A schedule is in the STUCK state.   Show schedule, cancel schedule
155     uTtrapSchedUntil                                           Show schedule, cancel schedule
201     uTtrapGlobalPrompt                                         Reply
202     uTtrapSchedPrompt                                          Reply
203     uTtrapJobPrompt                                            Reply
204     uTtrapJobRerunPrompt                                       Reply
252     uTtrapLinkBroken                                           Link
Table 7-2 TWS traps not enabled by default

Trap #  Name                  Description                         Additional actions
51      uTtrapProcessReset                                         N/A
103     uTtrapJobLaunch
104     uTtrapJobDone                                              Show schedule, cancel schedule
153     uTtrapSchedStart                                           Show schedule, cancel schedule
154     uTtrapSchedDone                                            Show schedule, cancel schedule
251     uTtrapLinkDropped                                          Link
Next we will cover how to configure Tivoli Workload Scheduler to send events to
any SNMP manager.
b. For the file option, uncomment the File option and name the file
accordingly. The output will be an ASCII file that has the trap number in
the first field, with a varying number of fields following the trap number,
depending upon the trap. For example:
FILE=/<TWShome>/event.log
Tip: The difference between using the File option and the Pipe option is
that the File option creates a flat ASCII file, which you can then parse to get
the SNMP traps to your application. The Pipe option, on the other hand,
creates a FIFO file with the traps in it, and the management node has to read
the file as a pipe. We found that the File option is easier to implement.
3. After making the configuration changes above, run the customize script
contained in the /<TWShome>/OV directory to enable Maestro to trap the
events. This will add a line to start magent on the master pointing to the peer
(which should be the hostname of the workstation on which the SNMP
management node is running or the hostname of the SNMP manager).
/bin/sh customize -manager hostname-of-host-receiving-traps
Note: In addition to running this script, you need to install and configure the
TEC SNMP adapter on the Tivoli Workload Scheduler master to send SNMP
events directly to TEC. Refer to TEC manuals for more information on
customizing the TEC SNMP adapter.
The other way to send Tivoli Workload Scheduler events to TEC is to use
the TWS/TEC adapter that is shipped with the Tivoli Workload Scheduler Plus
module.
Also, if you are using NetView and TEC together, you can send TWS events
from NetView to TEC, as well. This configuration does not require the TEC
SNMP adapter to be installed.
4. To enable the magent daemon, stop Maestro with the following commands:
conman stop;wait and then conman shut;wait
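A minimal sketch of the sequence, run as the TWS user from the TWShome
directory on the master (bringing the engine back up afterwards, for example
with the StartUp script, is our addition):

conman "stop;wait"     # stop batchman and the production processes
conman "shut;wait"     # shut down netman
./StartUp              # restart the engine; magent can then be started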
Chapter 8.
The TWS for z/OS recovery statements are coded directly inside the job script
in the TWS for z/OS job library. When the TWS for z/OS engine submits a
tracker agent job, it strips the recovery statements from the job script and
stores them internally; if the job fails, they are used to run the recovery of the
job. The current implementation of the common agent technology does not
allow defining and performing any automatic recovery action on a job running
on an FTA. This functionality is planned to be provided in future versions.
TWS will support coding of user variables and TWS for z/OS built-in variables
directly inside the scripts that run on FTAs (as is the case with tracker
agents today).
Removal of one domain manager limitation
TWS will be shipped with a migration utility, which will provide an assisted (or
possibly automated) migration path from a TWS for z/OS tracker agent
configuration to a TWS for z/OS FTA configuration.
Improved memory management
TWS tasks are planned to be moved above the current 16 MB memory line.
The local security approach is much the same as in previous versions. This
option is maintained for added flexibility and backward compatibility.
Centralized security
Firewall support
TWS will fully support working across firewalls. Currently, all TWS TCP/IP
communications between two nodes use one well-defined and user-configurable
listening port, but the ports used to open connections are allocated
dynamically. This behavior will be changed in future releases to allow users
to configure the port used for opening connections.
Also, with the current implementation, several TWS administration commands
require a direct TCP/IP connection between the node where the command is
issued and the destination node where the command is executed. This
restriction will be removed, allowing the routing of all administration commands
through the TWS master -> domain managers -> agents hierarchy.
Firewall support
The JSC will handle new workstation definition options related to the
command routing.
Common agent technology enhancements
JSC will handle the centrally managed job in the database and plan. It will
show rerun jobs and recovery jobs that are added by recovery actions, and
recovery prompts will be added as a result of a Recovery prompt action.
Appendix A.
Introduction
This document describes how to use the conversion utility to help export jobs
from centrally stored OPC controllers to the new fault tolerant agents (also called
workstation tolerant agents) that become available in Version 8.1 of Tivoli
Workload Scheduler for z/OS.
Software prerequisites
The following are the software prerequisites of the utility:
Tivoli OPC, any version
ISPF
FTP
FTP considerations
The default method TWSXPORT uses is to push the JCL, using FTP, from the
EQQJBLIB libraries to the remote destinations, in a single FTP step. However,
for this to work, all of the remote boxes must have an FTP server/service in effect.
Though this is commonplace with UNIX, it is not the default configuration for
Windows, so pushing may not be possible for all of your remote agents.
It is possible, however, to pull from a remote box that has no FTP server but
only an FTP client, which usually comes with the TCP/IP stack.
Switching from push to pull will make the following differences to the FTP script
(a push-mode sketch follows the list for comparison):
From the mainframe, the very first open and user statements are not actually
coded with the command keywords, just the values. On subsequent
connections to other boxes open and user must be specified. When run from
a distributed machine it uses an open statement, but does not need a user
statement, only the value, on every connection, including the first.
From the mainframe, files are put. When run from distributed you must get
them.
From the mainframe, DEST, USER, and PASSWORD refer to the remote box.
When run from a distributed box these parameters should point back to the
mainframe.
The directory commands cd and lcd are reversed when running from a
remote box.
From the mainframe, the script can contain transfers to many boxes, one after
the other. From distributed you will typically only perform transfers between
the mainframe and one box at a time.
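To illustrate these differences next to Example A-2 later in this appendix, a
push-mode script run from the mainframe would follow roughly this shape (a
sketch only; the addresses, user ID, password placeholder, directories, and
members are simply those used in the other examples in this appendix):

10.20.30.101
tws
xxxxxxxx
cd /opt/tws/user/bin
lcd 'CSSA.OPC.DEVT.DYNAM.JOBLIB'
put CJSAOLST cjsaolst.sc
open 10.20.30.121
user tws
xxxxxxxx
cd /opt/tws/user/bin
lcd 'CSSA.OPC.DEVT.BATCH.JOBLIB'
put SJSATEST sjsatest.sc
close
quit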
TWSXPORT can set the case and add a prefix or suffix to the member name.
If more involved changes are needed, you can automate this yourself by
writing your own REXX code to be included in the TWSXPORT REXX at the
point where the comment USER_RENAME appears in the code. Variable
MEM_NAME contains the member name as retrieved from OPC; MEM_DEST
should be set to contain the name to be used on the remote agent.
Some of the behavior of FTP is based upon site installation options. The way
TWSXPORT deals with the open, user, and password commands is based on
some of these assumptions, which are documented above. If these
assumptions do not work for your site, the code may need to be amended to
fit with your installation options.
TWSXPORT or amend the TWSXPORT code to build in your exit logic to set the
dataset name yourself. The dataset name is stored as the second word in stem
variable WSTA_DEF.CURR_WSID.NEW_ITEM, which is set shortly after the
USER_RENAME comment.
Member names
TWSXPORT needs to know which members to export to which remote boxes.
This is done by providing it with a list of member names, or member name masks
(using % and * wildcards as with ISPF/PDF) and a list of OPC workstation names
to which each member should be sent. Each member listed explicitly can be sent
to a list of workstations unique to that entry if necessary. If a mask is used, then
every member identified by the mask will be sent to the same destination(s)
listed against the mask.
You may wish to obtain the list of member names by using such utilities as the
Batch Command Interface Tool, or using other OPC program interface tools or
techniques.
Installing TWSXPORT
TWSXPORT consists of three pieces of REXX code and one JCL procedure.
The REXX code should be copied into a library, and this library name should be
declared in the TWSREXX symbolic parameter in the TWSXPORT JCL procedure.
The procedure should be stored in your procedure library concatenation, or in a
library referenced by JCLLIB, whichever mechanism fits in with your site
procedures. It consists of two steps, the first is a batch ISPF step, the second is
an FTP step. You may wish to amend either of these steps to fit in with your site
standards. The system ISPF libraries are named in symbolic parameters at the
top of the procedure. If your installation does not use these names then you will
need to amend them to match your site installation.
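Rather than editing the procedure, the symbolics can also be overridden on the
EXEC statement of the calling job; a minimal sketch (the library names are
illustrative):

//TWSXPORT EXEC TWSXPORT,
//         TWSREXX='YOUR.TWSXPORT.REXX',
//         SYSPROC='YOUR.ISPF.SISPCLIB'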
Security implications
To transmit data between the mainframe and remote boxes you are going to
need to provide user IDs and passwords. These will be stored in two places
throughout this process:
The workstation definition file (WSTADEF)
The FTP script (FTPOUT)
You must ensure that these two datasets are protected by security rules for the
duration of their existence to prevent the passwords being inadvertently
disclosed.
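If RACF is the security product, protection of this kind might look like the
following sketch (the dataset name matches the sample job later in this
appendix; the group name SCHEDGRP is illustrative):

ADDSD  'CASS.TWS.FTP.SCRIPT' UACC(NONE)
PERMIT 'CASS.TWS.FTP.SCRIPT' ID(SCHEDGRP) ACCESS(READ)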
Planning
Once you understand all of the implications highlighted in this section, produce
a plan that determines the number of TWSXPORT jobs you are going to need:
determine which boxes you can push to and which you must pull from, and
consider how to deal with the implications of the exits, if you have used them
to retrieve JCL from alternate DD statements within the OPC controller.
//EXP.JCLLIB04 DD DSN=CSSA.OPC.DEVT.FETCH.JOBLIB,DISP=SHR
//EXP.WSTADEF DD *
* Set Site Defaults
WSTASTART DIR(/opt/tws/user/bin) SUFFIX(.sc)
* Define workstations
WSTASTART WSID(I101) DEST(10.20.30.101) USER(tws) PASSWORD(xxxxxxxx)
WSTASTART WSID(H121) DEST(10.20.30.121) USER(tws) PASSWORD(xxxxxxxx)
//EXP.MEMBERS DD *
C%%101* I101
R%%121* H121
SJS*
This example will create and run an FTP script to transmit members matching the
mask C%%101* to IP address 10.20.30.101, members matching the mask
R%%121* to IP address 10.20.30.121, and members beginning with SJS, to
both destinations.
In all cases the jobs will be stored in the /opt/tws/user/bin directory and have .sc
appended to the end of the file names.
Example: A-2 Example of UNIX FTP job using the heredoc technique
ftp<<EOF
open 10.20.30.5
xfer
xferpswd
lcd /opt/tws/user/bin
cd 'CSSA.OPC.DEVT.DYNAM.JOBLIB'
get CJSAOLST cjsaolst.sc
cd 'CSSA.OPC.DEVT.BATCH.JOBLIB'
get SJSATEST sjsatest.sc
get SJSAUNAM sjsaunam.sc
close
EOF
Under Windows the technique to convert this into a job is a little more difficult, but
it can be achieved by using Windows commands to echo the FTP script into a file
(on the remote box), which is then executed remotely. Essentially the conversion
of the FTP script into an OPC job is achieved by the following changes:
1. Add a line to the beginning of the script to delete the file on the remote box
in which the FTP script is about to be built, in case it already exists.
2. Prefix each line of the script with cmd.exe /c echo.
3. Suffix each line of the script with a redirect to the remote file, for example:
c:\temp\ftpscript.txt
4. Add a line to the end of the script to execute FTP using the remotely created
script file.
Example: A-3 Example of using the FTP script on a Windows tracker agent
cmd.exe /c del c:\temp\ftpscript.txt
cmd.exe /c echo open 10.20.30.5.>> c:\temp\ftpscript.txt
cmd.exe /c echo xfer.>> c:\temp\ftpscript.txt
cmd.exe /c echo xferpswd.>> c:\temp\ftpscript.txt
cmd.exe /c echo lcd /tws/user/bin.>> c:\temp\ftpscript.txt
cmd.exe /c echo cd 'CSSA.OPC.DEVT.DYNAM.JOBLIB'.>> c:\temp\ftpscript.txt
cmd.exe /c echo get CJSAOLST cjsaolst.sc.>> c:\temp\ftpscript.txt
cmd.exe /c echo cd 'CSSA.OPC.DEVT.BATCH.JOBLIB'.>> c:\temp\ftpscript.txt
cmd.exe /c echo get SJSATEST sjsatest.sc.>> c:\temp\ftpscript.txt
cmd.exe /c echo get SJSAUNAM sjsaunam.sc.>> c:\temp\ftpscript.txt
cmd.exe /c echo close.>> c:\temp\ftpscript.txt
cmd.exe /c ftp -s:c:\temp\ftpscript.txt
There may well be other techniques for other platforms, but since most of these
techniques are likely to be heavily influenced by site standards, it is not really
possible to write a generic utility to create these jobs that would perfectly fit all
customer permutations. However, the two examples above could easily be
produced by a simple piece of REXX code or an edit macro. The conversion
process for workstations that have to pull is then simply a schedule of
three steps:
1. Run TWSXPORT to generate the FTP script.
2. Run a process to convert the FTP script into a job appropriate for the platform.
3. Run the job generated by step 2.
The WSTADEF file can contain all of the remote boxes your controller knows
about, or just the ones that will be used in this particular TWSXPORT job.
However, be aware that if you include all of the workstations in the file, then any
member referred to within the members file that does not have a list of
destination workstations will be sent to every workstation within the WSTADEF
file.
The WSTASTART statements, essentially, tie workstation names within OPC to
the information necessary to perform the transmission of the JCL members to the
remote box on which the workstation exists.
If you create a WSTASTART statement without using the WSID parameter, then
this statement can be used to set the defaults for all subsequent WSTASTART
statements. If you have standardized configuration across many of your boxes
then this should greatly simplify your WSTADEF file. If an entry in the members
file refers to a workstation that is not declared within the WSTADEF file, the
process will fail in the EXP step with a return code of 8.
Comments can be made within the WSTADEF file by using either * or /*; anything
following this on a line will be considered a comment. There is no need to code */
to close a comment, since a comment does not span more than one line.
No continuation characters are needed. A WSTASTART statement can span
many lines.
Keywords can be abbreviated to their shortest unique form if necessary.
WSTASTART statement
Use the WSTASTART statement to signal the start of a new workstation tolerant
agent. The syntax of the command is shown in Figure A-1 on page 437.
Parameters
The following sections describe the parameters.
WSID(workstation)
Name of the workstation being defined. If omitted, the WSTASTART statement
will be used to define defaults for all following workstations (until another
WSTASTART statement without WSID alters defaults again).
DEST(address)
Specifies the host name or IP address of the workstation being defined. If
communication to this workstation must use a specific port then supply the
address in the format IP-ADDRESS:PORT-NUMBER.
USER(userid)
Specifies the user ID on the remote box to be used for the transfer.
PASSWORD(password)
Specifies the password on the remote box to be used for the transfer. Care
should be taken to store these parameters in a well-protected dataset to ensure
against inadvertent disclosure of these passwords. The passwords will be output
into the FTPSCRIPT file, which should equally be protected.
DIR(directory)
Specifies the directory into which the transfer will be made. If omitted, the
transfer will take place to the default directory that the user will be presented with
at log in.
PREFIX(prefix)
Specifies a prefix to add to the beginning of every member name as it is
transferred to the remote server.
SUFFIX(suffix)
Specifies a suffix to add to the end of every member name as it is transferred to
the remote server. If you wish the suffix to be added as a file extension then you
should include a period in the suffix. For example SUFFIX(.txt) will transfer a
member called ABCDEF to become abcdef.txt on the remote server.
CASE(UPPER|LOWER)
Determines whether the filename on the remote server will be stored in upper or
lower case. The default is lower.
Note: If you are using the pull method then the USER and PASSWORD
keywords will be for a mainframe user ID that has access to read the JCL
libraries. The DEST keyword will be the hostname or IP address by which the
mainframe is known from the remote box.
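For example, consider a members file coded like this (the names match the
description that follows):

ABCMAST   X001
ABC*      X002
ABCSLAVE  X003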
In this example all members beginning with ABC will be sent to workstation X002,
except a member called ABCMAST, which will be sent to workstation X001.
Member ABCSLAVE will still be sent to workstation X002, instead of X003 as
you might expect, since it first matches the second line of the members file and
will therefore not be processed by the third line, which in this example is
completely redundant.
Error codes
When you run the EXP step of TWSXPORT, it may not end with return code zero
for one of the following reasons:
Return code 2
You have selected METHOD=PULL. RC=2 means that no errors have occurred,
but the FTP step could not run, since the FTP script needs to be run remotely.
Return code 4
A record in the MEMBERS DD statement has not found a match in any of the
JCLLIBnn libraries.
Return code 8
A serious error has occurred, which could be one of the following:
Unable to read or write to a file.
Unable to initialize a library.
A syntax error in the WSTADEF statements.
A workstation referenced in MEMBERS that does not exist within WSTADEF.
Variables
Table A-1 shows the variables used in the program.
Table A-1 Variables

Variable name     Usage
CURR_CASE
CURR_DEST
CURR_DIR
CURR_PASSWORD
CURR_PREFIX
CURR_SUFFIX
CURR_USER
CURR_WSID
DEST_LIST
EQQJBLIB
FTPOUT.
HOST_CD
LAST_LIB
LASTRC
LC
LIB_COUNT
LIB_DD
LIB_DSN
LIB_LOOP
MEM_COUNT.
MEM_DEST
MEM_FOUND
MEM_LOOP
MEM_MASK
MEM_NAME
MEM_SPEC
MEMBERS
NEW_ITEM
REMOTE_CD
SCRIPT_COUNT
SCRIPT_LINE
UC
WSTA_DEF          Workstation definitions
WSTA_KEYWORD
WSTA_LIST
WSTA_LOOP
WSTA_SPEC
WSTA_TOKEN
WSTA_VALUE
WSTADEF
XMIT_CMD
OPEN_CMD
USER_CMD
REXX members
Table A-2 on page 442 shows the REXX members used in the program.
Table A-2 REXX members

Member      Usage
BATISPF
EXITRC
TWSXPORT
For each member found it looks up the dataset name. Then for each workstation
in the destination list it generates the member name that it will be transmitted to
on the remote box and stores all of this information in the WSTA_DEF stem for
later use by the next section.
It then checks how many members were looked up for each entry on the
members file. If any entries did not find any references in the library then it exits
with return code 4.
Again to be tolerant of different user coding habits, the Translate function is used
to cope with users delimiting members and workstations with commas instead of
spaces.
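In REXX terms that normalization amounts to a single Translate call before the
record is parsed; a minimal sketch (the variable names follow Table A-1):

record = Translate(record, ' ', ',')   /* replace comma delimiters with blanks        */
parse var record mem_spec dest_list    /* member name or mask, then its destinations  */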
//         SYSPROC='ISP.SISPCLIB',
//         ISPMLIB='ISP.SISPMENU',
//         ISPPLIB='ISP.SISPPENU',
//         ISPSLIB='ISP.SISPSENU',
//         ISPTLIB='ISP.SISPTENU',
//         METHOD=PUSH,
//         TMPDIR=',20',
//         TMPORG=PO,
//         @=
//* +------------------------------------------------------------------+
//* | MODULE: TWSXPORT : CASSDH2 - 09FEB02                              |
//* | PURPOSE : EXPORT JCL TO WORKSTATION TOLERANT AGENTS               |
//* |                                                                   |
//* | HISTORY ---------------------------------------------------------|
//* |                                                                   |
//* +------------------------------------------------------------------+
//* +------------------------------------------------------------------+
//* | SYSPROC - NAME OF LIBRARY CONTAINING CLIST/REXX-EXEC CODE         |
//* | ISPMLIB - NAME OF LIBRARY CONTAINING ISPF MESSAGES                |
//* | ISPPLIB - NAME OF LIBRARY CONTAINING ISPF PANELS                  |
//* | ISPSLIB - NAME OF LIBRARY CONTAINING ISPF SKELETONS               |
//* | ISPTLIB - NAME OF LIBRARY CONTAINING ISPF TABLES                  |
//* | METHOD  - WHICH FTP METHOD TO USE (PUT/GET)                       |
//* | TMPDIR  - DIRECTORY BLOCKS FOR ISPF TEMP DATASET                  |
//* | TMPORG  - ORGANISATION FOR ISPF TEMP DATASET                      |
//* | @       - DUMMY SYMBOL TO ENSURE SUBSTITUTION WITHIN QUOTES       |
//* +------------------------------------------------------------------+
//* +------------------------------------------------------------------+
//* | CREATE FTP SCRIPT TO EXPORT FROM OPC                              |
//* +------------------------------------------------------------------+
//EXP      EXEC PGM=IKJEFT01,
//         DYNAMNBR=70,
//         PARM=&@'BATISPF TWSXPORT &METHOD'
//SYSPROC  DD DSN=&TWSREXX,DISP=SHR
//         DD DSN=&SYSPROC,DISP=SHR
//SYSPRINT DD DUMMY
//SYSOUT   DD SYSOUT=*
//ISPMLIB  DD DSN=&ISPMLIB,DISP=SHR
//ISPPLIB  DD DSN=&ISPPLIB,DISP=SHR
//ISPSLIB  DD DSN=&ISPSLIB,DISP=SHR
//ISPTLIB  DD DSN=&ISPTLIB,DISP=SHR
//ISPPROF  DD SPACE=(80,(500,1000,20)),AVGREC=U,
//         DSORG=PO,RECFM=FB,LRECL=80
//ISPCTL1  DD SPACE=(80,(15000,12000&TMPDIR)),AVGREC=U,
//         DSORG=&TMPORG,RECFM=FB,LRECL=80
//ISPCTL2  DD SPACE=(80,(15000,12000&TMPDIR)),AVGREC=U,
//         DSORG=&TMPORG,RECFM=FB,LRECL=80
//ISPLST1  DD SPACE=(121,(500,1000,20)),AVGREC=U,
//         DSORG=PO,RECFM=FB,LRECL=121
//ISPLST2  DD SPACE=(121,(500,1000,20)),AVGREC=U,
//         DSORG=PO,RECFM=FB,LRECL=121
//ISPLOG   DD SYSOUT=*,
//         DSORG=PO,RECFM=FB,LRECL=121
//FTPOUT   DD DSN=&FTPOUT,DISP=OLD
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD DUMMY
//*
//* +------------------------------------------------------------------+
//* | RUN THE FTP PROCESS IF IN PUSH MODE                               |
//* +------------------------------------------------------------------+
//         IF (EXP.RC = 0) THEN
//FTP      EXEC PGM=FTP,REGION=8192K
//OUTPUT   DD SYSOUT=*
//INPUT    DD DSN=&FTPOUT,DISP=SHR
//         ENDIF
Sample job
Example A-6 shows a sample job.
Example: A-6 Sample Job
//*A JOB CARD ACCORDING TO YOUR INSTALLATION STANDARDS IS REQUIRED
//*
//         JCLLIB ORDER=CASS.TWS.JOBLIB
//*
//* +------------------------------------------------------------------+
//* | MODULE : MEMLIST : CASSDH2 - 09FEB02                              |
//* | PURPOSE : SAMPLE EXPORT JOB                                       |
//* |                                                                   |
//* | HISTORY ---------------------------------------------------------|
//* |                                                                   |
//* +------------------------------------------------------------------+
//TWSXPORT EXEC TWSXPORT,
//         METHOD=PUSH,
//         FTPOUT='CASS.TWS.FTP.SCRIPT'
//EXP.JCLLIB01 DD DSN=CSSA.OPC.DEVT.DYNAM.JOBLIB,DISP=SHR
//EXP.JCLLIB02 DD DSN=CSSA.OPC.DEVT.DB2.JOBLIB,DISP=SHR
//EXP.JCLLIB03 DD DSN=CSSA.OPC.DEVT.BATCH.JOBLIB,DISP=SHR
//EXP.JCLLIB04 DD DSN=CSSA.OPC.DEVT.FETCH.JOBLIB,DISP=SHR
//EXP.WSTADEF DD *
* Set Site Defaults
WSTASTART DIR(/opt/tws/user/bin) SUFFIX(.sc)
* Define workstations
WSTASTART WSID(I101) DEST(10.20.30.101) USER(tws) PASSWORD(xxxxxxxx)
WSTASTART WSID(H121) DEST(10.20.30.121) USER(tws) PASSWORD(xxxxxxxx)
//EXP.MEMBERS DD *
C%%101   I101
R%%121   H121
SJS*
Appendix B.
Connector reference
In this appendix we describe the commands related to the Tivoli Workload
Scheduler and Tivoli Workload Scheduler for z/OS connectors. We also describe
some Tivoli Management Framework commands related to the connectors.
Before running the connector commands, set up the Tivoli environment for your
shell:

sh or ksh        . /etc/Tivoli/setup_env.sh
csh              source /etc/Tivoli/setup_env.csh
bash             . /etc/Tivoli/setup_env.sh
DOS (Windows)    %SYSTEMROOT%\system32\drivers\etc\Tivoli\setup_env.cmd
Note: To control access to the scheduler, the TCP/IP server associates each
Tivoli administrator to a Resource Access Control Facility (RACF) user. For this
reason, a Tivoli administrator should be defined for every RACF user. For
additional information, refer to Tivoli Workload Scheduler V8R1 for z/OS
Customization and Tuning, SH19-4544.
Create an instance
Stop an instance
Start an instance
Restart an instance
Remove an instance
Where:
Node is the name or the object ID (OID) of the managed node on which you
are creating the instance. The TMR server name is the default.
instance_name is the name of the instance.
object_id is the object ID of the instance.
new_name is the new name for the instance.
Address is the IP address or hostname of the z/OS system where the Tivoli
Workload Scheduler for z/OS subsystem to which you want to connect is
installed.
Port is the port number of the OPC TCP/IP server to which the connector
must connect.
Example
We used a Tivoli Workload Scheduler for z/OS with the hostname twscjsc.
On this machine, a TCP/IP server connects to port 5000. Yarmouth is the name
of the TMR managed node where we installed the OPC connector. We called this
new connector instance twsc.
Our instance was created with the following command:
wopcconn -create -h yarmouth -e twsc -a twscjsc -p 5000
You can also run the wopcconn command in interactive mode. To do this, perform
the following steps:
1. At the command line, enter wopcconn with no arguments.
2. Select choice number 1 in the first menu.
Example: B-1 Running wopcconn in interactive mode
Name         : TWSC
Object id    : 1234799117.5.38#OPC::Engine#
Managed node : yarmouth
Status       : Active
OPC version  : 2.3.0
2. Name      : TWSC
             : 524288
             : 0
0. Exit
Much of the following information is excerpted from the Tivoli Job Scheduling
Console Users Guide, SH19-4552. Note that there is an error on page 29 of that
guide: The command used to create TWS connector instances is called
wtwsconn.sh, not wtwsconn.
Create an instance
Stop an instance
Remove an instance
Where:
Node specifies the node where the instance is created. If not specified, it
defaults to the node where the script is run from.
Instance is the name of the new instance. This name identifies the engine
node in the Job Scheduling tree of the Job Scheduling Console. The name
must be unique within the Tivoli Managed Region.
twsdir specifies the home directory of the Tivoli Workload Scheduler engine
associated with the connector instance.
Example
We used a Tivoli Workload Scheduler for z/OS with the hostname twscjsc. On
this machine a TCP/IP server connects to port 5000. Yarmouth is the name of the
TMR managed node where we installed the TWS connector. We called this new
connector instance Yarmouth-A.
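The creation command would therefore look something like the following sketch;
the -h, -n, and -t flags correspond to the node, instance, and twsdir parameters
described above, the TWS home directory is illustrative, and the exact syntax
should be checked in the Tivoli Job Scheduling Console Users Guide, SH19-4552:

wtwsconn.sh -create -h yarmouth -n Yarmouth-A -t /opt/tws/user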
For example:
barb 1318267480.2.19#Maestro::Engine#
The number before the first period (.) is the region number and the second
number is the managed node ID (1 is the Tivoli server). In a multi-Tivoli
environment, you can determine where a particular instance is installed by
looking at this number because all Tivoli regions have a unique ID.
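These object IDs can be listed with the Tivoli Management Framework wlookup
command; a sketch, assuming the connector registers its engines under the
MaestroEngine resource type:

wlookup -ar MaestroEngine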
wuninst -list lists all the products that can be un-installed.
wuninst {ProductName} -list lists the managed nodes where a product is
installed.
wmaeutil Maestro -Version lists the versions of the installed engine, database,
and plan.
wmaeutil Maestro -dbinfo lists information about the database and the plan.
wmaeutil Maestro -gethome lists the installation directory of the connector.
Appendix C.
Alternatives to consider
Customers having both TWS and TWS for z/OS installed at their site often ask
this question and there is no straight answer. The customer must look at the
original decision of why they installed both TWS for z/OS and TWS and ask
themselves whether these facts are still relevant in today's environment.
It is true that TWS for z/OS can manage both the mainframe and distributed
environment from one single engine and in fact the same can be said for TWS.
OPC, up until Version 2.3, used tracker agents for both the MVS and distributed
environments, whereas TWS uses fault tolerant agents for the distributed
environment and extended agents to manage the MVS systems.
So you have three choices, all of which have pros and cons:
Continue as today with both TWS for z/OS and TWS engines and use the
common Graphical User Interface (GUI), to maintain and monitor both
engines.
Migrate all the TWS schedules into the TWS for z/OS database and use TWS
for z/OS to schedule both the distributed and mainframe environments.
Migrate all the TWS for z/OS schedules into TWS and use TWS to schedule
both the distributed and mainframe environments.
Benefits
There is no loss of engine-specific function when keeping both scheduling
engines. Users who have implemented TWS for z/OS to run their production
workload on the mainframe will almost certainly have utilized many of the
functions available to them specifically for this environment. Many functions
available to TWS for z/OS users are just not available when using the extended
agents with TWS to manage the mainframe.
that it has been set up to run. For example, it could modify the execution JCL
based on the day of the week.
Workload Manager Integration ensures that a job under the control of TWS
for z/OS that is deemed to be running late will have its internal priority within
the MVS operating system adjusted to ensure it gets maximum resources
from the operating system, thus improving its throughput.
Hiperbatch is an MVS facility that allows large I/O bound files to reside in
storage until they are no longer required. When a job in TWS for z/OS is using
a file that has the Hiperbatch flag set, then this file is read and placed into
storage. When the job using this file ends, TWS for z/OS checks to see if any
other scheduled job requires this file. If it finds another job then it leaves it in
storage thus removing the I/O for second or subsequent operations, but if no
other job requiring this file is found in the schedule then the file is removed
from Hiperbatch.
Feedback/Smoothing is the facility that allows the actual duration of jobs to be
reported back to the database when it seems as though the planned duration
is either too long or too short. This ensures accurate future plans.
Users who have implemented TWS to run their production workload within the
distributed world will have almost certainly utilized many of the TWS-specific
functions. Some of these functions are just not available as standard options
within the TWS for z/OS product.
TWS-specific functions
The following are TWS-specific functions.
Global prompts is a method of holding back specific parts of the schedule
until a manual prompt has been received. In TWS for z/OS this is achieved by
using a manual workstation and creating a Global Prompt job stream that
contains the manual task. Any other job streams waiting for this manual task
to be completed would have a predecessor of the manual task defined in the
Global Prompt job stream.
Recovery = Continue is an option in TWS that allows the user to continue with
successor jobs even in the case of a failure. To do this in TWS for z/OS
requires the use of automatic recovery statements placed with the job stream
and a dummy step added to the end of the job.
Resources at job stream level are used to ensure that a resource is only set to
available or not when all jobs in a job stream have completed, regardless of
the sequence they run in, and that the resource is held for the duration of the
job stream. In TWS for z/OS this is achieved by adding a dummy step to the
start and end of the job stream and attaching the resource to these new
steps.
Run every n minutes is used to make a one-line entry into the scheduling
database to run a job at specified intervals, for example, every 60 minutes. In
TWS for z/OS this would require 24 line entries: the first specifying 00:00, the
second 01:00, the third 02:00, and so on up to and including 23:00.
Use the new GUI, which is free, to maintain and monitor both environments
from one single graphical view. In the GUI shipped with TWS 8.1.0, one single
graphical view can see all engines and the same operators can use one
common interface to monitor and manage both environments.
Keeping both environments the same would incur no migration costs since
this would be business as usual.
Considerations
The following are things to consider.
Dependencies between TWS for z/OS and TWS have to be handled by the
user, but this can be achieved outside of the current scheduling dialogues by
using such techniques as data-set triggering or special resource flagging,
both of which are available to TWS for z/OS and TWS users, to communicate
between the two environments.
Keeping both TWS for z/OS and TWS engines does mean that there are two
pieces of scheduling software to maintain.
Benefits
We will now discuss the benefits of moving TWS schedules into TWS for z/OS.
Dependencies between mainframe and distributed jobs are handled within
the single scheduling engine and there is no need to do anything special
outside of the scheduling dialogues, such as dataset triggering or resource
flag setting.
In TWS for z/OS (TWS 8.1.0) the same fault tolerant agents are used
as in TWS, therefore reducing maintenance effort and technical support
education requirements.
There is only one scheduling engine to maintain, thus reducing maintenance
and technical support overheads especially when installing upgrades, etc.
A common GUI can be used.
The carryforward issue is resolved. When TWS creates the next day's
schedule (Jnextday), uncompleted work can be dropped from the schedule
or carried over into the next schedule, depending on configuration options set
by the user. However, since TWS can only handle one version of the same
job in its plan file, it renames the original job to another name of the format
CFnnnnnn, where nnnnnn is a TWS-generated number. While this does not in
itself cause a problem in running the correct task, it does make it
somewhat difficult for operators to monitor the job's progress once it has been
renamed. Since TWS for z/OS can handle multiple jobs with the same name
in the same schedule, this issue is resolved.
Both tracker agents and fault tolerant agents can coexist on the same
distributed box, thus simplifying migration.
Considerations
Consider the following things.
Some standard functions that may be utilized within the TWS engine will need
to be converted to TWS for z/OS equivalents. See TWS-specific functions
on page 456.
Some TWS functions cannot be converted: CPU classes, for example. This
function is TWS-specific and can be used to make adding new jobs to the
schedule very easy when a new distributed box is added to the scheduling
environment.
For example, if we have five UNIX boxes and on each of these five boxes we
run the same backup script every day, instead of scheduling the script on each
UNIX box separately we can create a class called UNIXSERVER and
associate the backup script with this class. The class can then be associated
with each of the UNIX servers. The backup script, defined only once, still runs
on all the UNIX boxes and when a new UNIX server arrives, in order to run
the same backup script on this new server, we just associate the new server
to the UNIXSERVER class.
Although this would probably not be a major effort, the work involved in
moving the TWS schedules to TWS for z/OS should not be underestimated.
TWS schedules can be unloaded and, by using REXX or equivalent
programming languages, be converted into TWS for z/OS batch loader
statements ready for loading into the existing or a new TWS for z/OS
controller database using the TWS for z/OS utility provided as a standard part
of the product. Care should be taken here though since, as already stated,
some TWS-specific functions will not automatically convert.
Many fields in TWS, such as workstation name, job name, and job stream
name, are a lot longer than the equivalent fields within TWS for z/OS. For
example, the workstation name in TWS can be 40 characters in length whilst
in TWS for z/OS it is only four. These of course are not problems that cannot
be resolved, but they do have to be considered and catered for in any
migration. See Chapter 4, End-to-end implementation scenarios and
examples on page 169 for some techniques that you can use for easy
migration.
File dependencies defined in TWS (distributed), when converted to TWS for
z/OS, will not be recognized when they are created on a distributed box by
the TWS for z/OS master. These file dependencies, however, may not be
required; if all the jobs are controlled by TWS for z/OS then it may be enough
to just have a job dependency. In the cases where there is a true need for a
file dependency, then a looping script scheduled on the distributed engine at
the appropriate time may satisfy this requirement. (See EQQFM in the TWS
for z/OS SAMPLIB.)
Benefits
Below are some of the benefits of moving TWS for z/OS schedules into TWS.
Dependencies between mainframe and distributed jobs are handled within
the single scheduling engine and there is no need to do anything special
outside of the scheduling dialogues such as dataset triggering or resource
flag setting.
There is only one scheduling engine to maintain, thus reducing maintenance
and technical support overheads, especially when installing upgrades.
A common GUI can be used.
Considerations
Below are some things to consider about moving TWS for z/OS schedules into
TWS.
Many functions that are currently found within the TWS for z/OS engine will
not be available to TWS when running jobs in the mainframe environment.
See TWS for z/OS-specific functions on page 455.
There is a limit as to the number of jobs TWS can schedule from one master
database, and the current limit would need to be understood if moving all of
the TWS for z/OS operations into the TWS database.
Depending on the number of jobs within the TWS for z/OS database, this
could quite easily become a major effort. TWS for z/OS schedules can be
unloaded and converted by REXX or an equivalent programming language
into TWS control statements ready for loading into the existing or a new TWS
master database using the TWS utility provided as a standard part of the
product. Care should be taken here though since, as already stated, many
TWS for z/OS-specific functions cannot be migrated to TWS and would be
lost to the user.
This migration is also against the direction of the TWS and TWS for z/OS
products. Any major enhancements to the TWS extended agent for OS/390
feature should not be expected.
Appendix D.
Additional material
This redbook refers to additional material that can be downloaded from the
Internet as described below.
Select the Additional materials and open the directory that corresponds with
the redbook form number, SG246022.
File name:         SG246022.zip
Description:       Zipped conversion utility to help export jobs from centrally
                   stored OPC controllers to the FTAs
Hard disk space:   1 MB minimum
Operating system:  Windows/UNIX
Abbreviations and acronyms

ACF       Advanced Communications Function
API       Application Programming Interface
ARM
COBRA
CP        Control point
DMTF
EM        Event Manager
FTA
FTW
GID
GS        General Service
GUI
HFS
IBM       International Business Machines Corporation
ISPF      Interactive System Productivity Facility
ITSO      International Technical Support Organization
JCL
JES
JSC
JSS
MIB       Management Information Base
MN        Managed nodes
NNM
OMG
OPC
PID       Process id
PIF       Program interface
PSP
PTF
RACF
RFC
RODM
RTM
SCP
SMF
SMP
SMP/E     System Modification Program/Extended
SNMP
STLIST    Standard list
TME       Tivoli Management Environment
TMF       Tivoli Management Framework
TMR
TSO
TWS
USS
VTAM      Virtual Telecommunications Access Method
WA        Workstation Analyzer
WLM       Workload Monitor
X-agent   Extended Agent
XCF
Related publications
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see How to get IBM Redbooks
on page 466.
End-to-End Scheduling with OPC and TWS Mainframe and Distributed
Environment, SG24-6013
TCP/IP in a Sysplex, SG24-5235
Other resources
These publications are also relevant as further information sources:
NetView for Unix Users Guide for Beginners V7.1, SC31-8891
Tivoli Framework 3.7.1 Installation Guide, GC32-0395
Tivoli Framework 3.7.1 Reference Manual, SC31-8434
Tivoli Framework 3.7.1 Users Guide, GC31-8433
Tivoli Job Scheduling Console Users Guide, SH19-4552
Tivoli Workload Scheduler 8.1 Error Messages, SH19-4557
Tivoli Workload Scheduler 8.1 Planning and Installation Guide, SH19-4555
Tivoli Workload Scheduler 8.1 Reference Guide, SH19-4556
Tivoli Workload Scheduler for z/OS V8R1 Controlling and Monitoring the
Workload, SH19-4547
Tivoli Workload Scheduler for z/OS V8R1 Customization and Tuning,
SH19-4544
Tivoli Workload Scheduler for z/OS V8R1 Diagnosis Guide and Reference,
LY19-6410
Tivoli Workload Scheduler for z/OS V8R1 Installation Guide, SH19-4543
Tivoli Workload Scheduler for z/OS V8R1 Messages and Codes, SH19-4548
TWS updates
ftp://ftp.tivoli.com/support/patches/
Index
Numerics
24/7 availability 2
A
ABEND 335
ABENDU 335
Acknowledge 402
Acrobat reader 95
Agent 382
agents 5
AIX 146
Alternatives to consider 454
altpass 377
AS/400 job 2
Automated reroute 455
Automatic recovery statements 207
B
best restart step 59
C
calendar 27
CARRYFORWARD schedule 379
Catalogue management 455
central repository 259
chown 374
cleanup 6
CLIST 61
CODEPAGE 114
common interface 5
communications between workstations 18
Connector 17
controller 3
corrupted data 375
CP extension 122
CPU 26
CPU type 119
CPUFULLSTAT 234
CPUNODE 248
CPUREC 115
CPUSERVER 184
CPUTCPIP 248
D
daily production plan 33
dedicated library 207
default user name 142
dependency object 4
dependency resolution 11
Discovering network devices 382
DNS server 249
domain 34
domain manager 5
domain topology 115
DOMREC 115
dumpsec 165
Dynamic Virtual IP Addressing
See DVIPA
E
e-commerce 2
Endian 38
End-to-end scheduling
activating end-to-end feature 186
activating FTWs 188
configuration 176
considerations for scripts 207
conversion process 216
creating the script library member 207
creating Windows user and password definitions
184
F
failover 246
G
gcomposer 153
gconman 153
generation data group 59
globalopts 36
Graph mode 305
grep 375
H
HACMP 246
HACMP for AIX 246
hardcopy 95
HFS
See Hierarchical File System
Hierarchical File System 95
HFS directory 104
in a sysplex 95
OS/390 V2R9 95
sharing 95
High Availability
Configuring TWS 248
failover 246
fail-over IP address 248
HACMP 246
HACMP standby node 248
HP Service Guard 251
shared disk array 246
Windows NT cluster 251
High Availability Cluster Multi-Processing
HACMP
Hiperbatch 456
HOLIDAYS calendar 312
home directory 143
HP-UX 146
HP-UX PA-RISC 146
Hyperbolic view 9
I
IBM AIX 146
IBM support center 366
idle time 4
INCORROUT 335
Init calendar 110
initial load 207
Installing
connectors 146
Job Scheduling Services 145
multiple instances of TWS 142
TCP/IP Server 131
Tivoli Management Framework 3.7B 145
Tivoli Workload Scheduler 141
Tivoli Workload Scheduler for z/OS 100
instance 451
Intel 379
ISPF panels 4
J
Java GUI interface 4
JCL 10
JCL tailoring 455
JCL variables 211
Jnextday script 33
job 2, 26
Job Completion Checker 455
job duration 6
Job FOLLOW 378
job instances 34
Job Scheduling Console 2
availability 153
Calendar run cycles 309
Common Default Plan Lists 322
Common view 9
compatibility 154
connector trace 366
Copying jobs 319
Creating connector instances 147
Editing a job stream instance 300
enhancements 8
error examples 369
filter row 314
freedays rule 307
future 424
General enhancements 313
Graph mode 305
Graphical enhancements 9
hardware and software prerequisites 157
installation on AIX 158
installation on Sun Solaris 158
installation on Windows 158
installing 156
Installing Job Scheduling Services 145
JSC 1.2 319
migration considerations 154
multiple run cycles 310
Non-modal windows 9, 319
Re-Submitting a job stream instance 302
Severity code 337
Simple run cycles 306
Sorting list results 317
Specifying free days 312
trace table 339
traces 372
troubleshooting 369
TWS for z/OS connector troubleshooting 366
TWS for z/OS-specific enhancements 10
TWS-specific enhancements 10
Usability enhancements 8
WAIT keyword 338
Weekly run cycles 308
Job Scheduling Services
See JSS
job stream 26
job stream instances 34
job_instance_output 89
JSC 4
See Job Scheduling Console
JSS 12
K
kill 375
killproc 375
L
legacy GUI 153
Legacy system 2
Linux Red Hat 7.1 146
listproc 375
local machine 206
logging 5
LOGLINES 114
LOOP 335
M
Maestro 11
maestro_database 88
maestro_engine 88
maestro_plan 88
maestro_x_server 89
mailman 375
maintenance strategy 167
Maintenence release 141
makesec 166
manage TCP/IP 382
Managed Workload Scheduler Network 387
management hub 18
Management Information Base
See MIB
Manager 382
master 5, 11
Master conman 401
master domain manager 5, 7
menu actions 401
MIB 382
mixed environment 322
mount point 247
N
nested INCLUDEs 59
Netman 8
netstat 361, 375
NetView GUI 383
NetView management node 34
network 5
network conditions 382
Network management 382
Network management ABC 382
Network manager 34
network traffic 19
NT cluster 251
O
OPC 35
opc_connector 88
opc_connector2 88
OPCMASTER 141
OPENS file 378
P
parameter 27
parent directory 142
parent domain manager 12
Performance improvements
complex relations 8
Daily plan creation 8
Daily plan distribution 8
Event files 8
I/O optimization 8
massive scheduling plans 8
mm cache mailbox 137
mm cache size 137
sync level 137
wr enable compression 137
polling traffic 388
PORTNUMBER 114
predecessor 3
preventive service planning
See PSP
process id 375
production day 29
Program Directory 92
Program Interface 4
program interface 61
program temporary fix
PTF
prompt 27
prompt dependency 34
Protocol 382
ps 375
PSP 93
R
RACF user 148
real-time monitoring 382
Redbooks Web site 466
Contact us xxiii
Refresh CP group field 105
S
sample security file 164
SAP/3 2
scalable agent 7
schedlog directory 257
Scheduling 2
scheduling engine 12
Script repository 100
SD37 abend code 108
SEQQMISC dataset 95
server 12
server started task 110
shared disk array 246
Simple Network Management protocol
See SNMP
Sinfonia 379
Sinfonia file 32
SNMP 382
SNMP Manager 382
SNMP traps 416
start Up 402
Started Tasks 455
StartUp 374
stdlist files 104
submit jobs 3
Submitting userid 100
subordinate agents 11
subordinate domain manager 30
Sun Solaris 146
switch manager 246
Symnew file 32
Symphony file 122
creating 31
distribution 31
Monitoring instances 34
Symnew file 32
updates 31
sysplex 5, 104
System Automation/390 245
System Display and Search Facility 59
system documentation 95
T
TBSM
See Tivoli Business Systems Manager
TCP/IP considerations 96
Apar PQ55837 99
Dynamic Virtual IP Addressing 99
stack affinity 98
Usage of the host file 98
terminology 11
time stamp 379
Tivoli Business Systems Manager 6, 34
Tivoli managed node 387
Tivoli Managed Region
See TMR
Tivoli Management Framework 148
Tivoli Management Framework 3.7.1 145
Tivoli Management Framework 3.7B 145
Tivoli NetView 382
Tivoli Software Distribution 207
Tivoli Workload Scheduler/NetView
TWS maps 397
Tivoli Workload Scheduler 2, 4
architecture 5
Auditing 35
auditing log files 255
Backup and maintenance guidelines for FTAs
251
backup domain manager 138
Benefits of integrating with TWS for z/OS 5
Calendar 27
central repositories 259
Centralized security 424
configuring for HACMP 248
creating TMF Administrators 148
database 28
database files 5
defining objects 28
definition 11
dependency resolution 11
Distributed 11
end-to-end scheduling 2
engine 12
enhancements 7
Extended Agents 5
fault tolerant workstation 12
Firewall support 424
four tier network 20
Free Day Rule 7
future 423
HA in Windows NT/2000 environment 251
HACMP 250
home directory 36
HP Service Guard 251
Installation improvements 8
installing 141
installing an agent 141
installing and configuring Tivoli Framework 144
installing Job Scheduling Services 145
installing multiple instances 142
introduction 4
Job 26
Job Scheduling Console 2
Job stream 26
maintenance 167
master domain manager 30
MASTERDM 11
Monitoring file systems 258
multi-domain configuration 19
Multiple holiday calendars 7
naming conventions 138
network 5
network fail safe 37
new security mechanism 423
NT cluster 251
Options 36
overview 4
Parameter 27
Parameters files 260
Performance improvements 8
plan 4, 29
production day 29
Prompt 27
Reporting 35
schedlog directory 257
scheduling engine 12
scripts files 259
Security files 260
security model 160
Serviceability enhancements 424
Setting Global options 36
Troubleshooting 333
ABEND 335
ABENDU 336
Abnormal termination 338
AutoTrace 380
batchman down 376
Byte order problem 379
compiler processes 379
connector 366
console dump 345
diagnostic file 339
end-to-end 354
end-to-end working directory 354
evtsize 376
FTA not linked 379
FTAs not linking to the master 374
INCORROUT 336
Information needed 346
internal trace 366
Jnextday in ABEND 379
Jnextday is hung 378
Job Scheduling Console 369
Jobs not running 377
JSC error examples 369
LOOP procedure 341
missing calendars 379
missing resources 379
MSG 335
MSG keyword 337
multiple netman processes 375
negative runtime error 379
PERFM keyword 337
preparing a console dump 344
Problem analysis 338
Problem-type keywords 335
software-support database 335
standard list directory 356
standard list messages 358
Starter log information 358
Symphony renew 362
System dump dataset 340
TCP/IP server 366
Tivoli Workload Scheduler for z/OS 334
Trace information 340
TRACEDATA 373
TRACELEVEL 373
tracing facility 380
tracking events 347
Translator log information 358
WRKDIR 115
WSTASTART statement 436
wtwsconn.sh command 147, 451
Z
z/OS job 2
U
Unison 4
UNIX 11, 247
UNIX System Services 95
user 27
USRMEM 115
USRREC 115
USS
See UNIX System Services
V
virtual ip address 99
W
Web browser 2
Windows 11
Windows NT 2
WLM/2
See Workload Manager/2
wlookup 452
work directory 104
workday 7
Workload 2, 94
Workload Monitor/2 94
workstation 26, 34
workstation class 26
Write to Operator 455
Back cover
End-to-End Scheduling with Tivoli Workload Scheduler 8.1
Plan and implement your end-to-end scheduling environment with TWS 8.1
Model your environment using realistic scheduling scenarios
Learn the best practices and troubleshooting
INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE
ISBN 0738425079