Syncsort is the global leader in Big Iron to Big Data software. We organize data everywhere, to keep the
world working - the same data that powers machine learning, AI and predictive analytics. We use our
decades of experience so that more than 7,000 customers, including 84 of the Fortune 100, can quickly
extract value from their critical data anytime, anywhere. Our products provide a simple way to optimize,
assure, integrate, and advance data, helping to solve for the present and prepare for the future. Learn
more at syncsort.com.
Chapter 18: Configuring and Using the Ironstream API ................................ 18-1
Overview of the Ironstream API .................................................................... 18-2
Single-send versus Multi-send API ........................................................... 18-2
System Requirements ................................................................................ 18-2
Defining the IRONSTREAM_API Data Type ................................................ 18-3
Data Type Parameters............................................................................... 18-3
CLASS, TYPE, and SUBTYPE Parameters .............................................. 18-3
Ironstream API Configuration Example.................................................... 18-4
Using the Single-send API ............................................................................. 18-5
Single-send API Parameters...................................................................... 18-6
RACF Authorization for the Single-send API............................................ 18-7
Using the Single-send SSDFAPI Routine.................................................. 18-7
Performance and Maintenance Considerations....................................... 18-10
Linking SSDFAPI Into a Load Module.................................................... 18-10
Starting a Single-send API Instance ......................................... 18-10
Using the Single-send API in CICS ......................................................... 18-11
Using the Multi-send API ............................................................................ 18-12
Multi-send API Request Types ................................................................ 18-12
Multi-send API Parameters..................................................................... 18-13
RACF Authorization for the Multi-send API ........................................... 18-14
Using the Multi-send SSDFPAPI Routine............................................... 18-15
Performance and Maintenance Considerations....................................... 18-16
Linking SSDFPAPI Into a Load Module ................................................. 18-17
Starting a Multi-send API Instance ........................................................ 18-17
Troubleshooting the Ironstream API ........................................................... 18-17
Return Codes and Reason Codes Generated by the Ironstream API ...... 18-18
Handling Data Store Full Conditions ...................................................... 18-20
Ironstream API Coding Examples ............................................................... 18-21
Single-send API Examples ...................................................... 18-21
Multi-send API Coding Examples ........................................................... 18-36
Chapter 28: Diagnostics and Contacting Syncsort Product Support ........... 28-1
Before Calling Syncsort Product Support ...................................................... 28-1
Searching the Syncsort Knowledge Base................................................... 28-1
Contacting Syncsort Product Support............................................................ 28-2
North America ........................................................................................... 28-2
Europe, Middle East, and Africa ............................................................... 28-2
Other Regions ............................................................................................ 28-2
Index ............................................................................................................................. i
This section provides introductory and overview information about Ironstream and
how it works.
• “Introduction”
• “Understanding Ironstream”
Chapter 1 Introduction
The Ironstream Configuration and User’s Guide provides instructions for configuring and
running Ironstream® instances. It also describes how to configure the supported data source
parameters to optimize data collection and forwarding efficiency for your environment.
Topics:
• “What’s in this Guide” on page 1-2
• “Audience” on page 1-4
• “Related Resources” on page 1-4
• “Conventions” on page 1-5
Title Description
SECTION I: About Ironstream
• “Introduction” – This chapter.
• “Understanding Ironstream” – Provides a detailed overview of Ironstream and its components.
SECTION II: Configuring Ironstream Target Destinations
• “Setting Up Splunk for Ironstream” – Describes how to set up Splunk indexes and TCP ports for forwarding data to Splunk.
• “Setting Up Elastic for Ironstream” – Describes how to set up Elastic products to ingest data forwarded by Ironstream.
• “Setting Up Kafka for Ironstream” – Describes how to set up Ironstream’s internal Kafka producer to publish z/OS data to Kafka brokers.
SECTION III: Configuring and Running Ironstream
• “Configuring Ironstream Components” – Describes how to configure the Ironstream forwarder components for newly installed Ironstream instances using the Ironstream Configurator.
• “Manually Setting Ironstream Parameters” – Explains how to configure the Ironstream configuration file parameters to define data sources and destinations. It also includes descriptions of typical Ironstream parameters and example configuration files.
• “Controlling Ironstream Components” – Instructions for starting and stopping the Ironstream forwarders and their components.
• “Configuring Data Loss Prevention” – Describes how to configure Ironstream to minimize loss of forwarded Splunk data due to extended network or Splunk server outages.
SECTION IV: Setting Up Ironstream Data Sources
• “Syslog Message Filtering” – Describes how to configure syslog message filtering.
• “SMF Record Filtering” – Describes how to configure field-level filtering for SMF record types, either by manually creating control records in the Ironstream configuration file or by using the GUI-based “SMF Filter Configuration Builder” in the Ironstream Desktop.
• “SYSOUT Forwarding” – Describes how to configure the SYSOUT data forwarder to select and forward data sets.
• “Alerts and SyslogD Forwarding” – Describes the optional Network Monitoring components that enable alert monitoring and SyslogD forwarding.
• “DB2 Data Forwarding” – Describes how to configure DB2 table data for forwarding.
• “Sequential File Forwarding” – Describes how to capture data written to and stored in sequential data sets.
• “System State Forwarding” – Describes how to capture z/OS LPAR system performance metrics for forwarding.
• “Configuring and Using the Ironstream API” – Describes how to configure the Ironstream API to capture application information for analysis and visualization.
• “Setting Up Log4j” – Describes how to configure log4j configuration files to collect log4j records.
• “IMS Log Record Forwarding” – Describes how to configure Ironstream to gather IMS log records.
SECTION V: Setting Up Data Collection Extension Data Types
• “Configuring the DCE Parameters” – Describes how to configure the Data Collection Extension (DCE), an Ironstream component that provides extensions for collection of data from a variety of data types.
• “Setting Up USS File Collection” – Describes how to configure DCE to offload Unix System Services (USS) files to Ironstream.
• “Setting Up the RMF Data Forwarder” – Describes how to configure the RMF Data Forwarder to collect Resource Measurement Facility III (RMF III) system performance and utilization data.
SECTION VI: Troubleshooting Ironstream
• “Ironstream Commands” – Describes how to use Ironstream system commands.
• “Operational Considerations” – Describes some operational considerations when using Ironstream, such as message flood automation, network contention, and data store conditions.
• “Ironstream Messages” – Contains the system messages generated by core Ironstream and DCE.
• “Diagnostics and Contacting Syncsort Product Support” – Contains basic diagnostic information and contact information for Syncsort Product Support.
SECTION VII: Ironstream Audit Reporting
• “Using the Ironstream Data Usage Reporter” – Provides information about how to use Ironstream auditing reports.
SECTION VIII: Integration with Splunk Premium Applications
• “Splunk Enterprise Security and Ironstream” – Describes how the Ironstream Technology Add-on can be configured to work with the Splunk Enterprise Security application.
SECTION IX: Appendices
• “Forwarded Data Formats” – Contains the data formats that are forwarded to Splunk.
• “The SSDFCPR Utility” – Describes how to use the SSDFCPR utility.
Audience
This document is intended for system administrators who are configuring and running
Ironstream.
Related Resources
For more information about Ironstream functionality and enhancements, refer to the
Syncsort Knowledge Base and the additional Ironstream manuals described below.
Additional Documentation
The following documentation is available for download in the Ironstream section of the
Syncsort MySupport portal:
• Ironstream Features and Functions V2.1 – A detailed compendium of all Ironstream
V2.1 enhancements.
• Ironstream Program Directory V2.1 – For system programmers responsible for program
installation and maintenance. It contains information concerning the materials and
procedures associated with the installation of Ironstream V2.1.
• Ironstream SMF Record Field Reference – An HTML-based reference that describes all
of the SMF record fields that are forwarded to Splunk. This reference is available on the
“Ironstream V2.1 Documents” page on Syncsort’s MySupport page for Ironstream.
• Network Management Component (NMC) Manuals – Full details of all of the
configuration options outlined in “Alerts and SyslogD Forwarding” on page 14-1 are
available in the appropriate Configuration and Reference or Administration manual for
the ZEN component concerned and/or in the ZEN Help system.
Conventions
The following text conventions are used in this document:
Table 1-2: Text Conventions
Convention Meaning
boldface Boldface type indicates graphical user interface
elements associated with an action, or terms
defined in text or the glossary.
italic Italic type indicates book titles, emphasis, or
placeholder variables for which you supply
particular values.
monospace Monospace type indicates commands within a
paragraph, URLs, code in examples, text that
appears on the screen, or text that you enter.
Topics:
• “What Is Ironstream?” on page 2-1
• “Ironstream Components” on page 2-2
• “Supported Data Sources” on page 2-5
• “Ironstream Design Roadmap” on page 2-7
• “Ironstream Starter Edition” on page 2-8
• “Integration with Splunk Premium Applications” on page 2-9
What Is Ironstream?
Ironstream enables you to access real-time, mainframe operational insights through a
number of supported data destination platforms, such as Splunk®, Elastic, and Kafka.
Ironstream captures many different types of data from your z/OS mainframe and generates
JSON formatted data that is forwarded via TCP/IP connections directly into a target
destination repository.
Note: While most examples in this guide are shown using Splunk, there is no restriction on
which supported destination repository can be used, unless specifically noted in that
example.
Ironstream also provides an API for COBOL, C, PL/I, REXX, and Assembler applications to forward user-defined data to a target destination.
Ironstream Components
Ironstream Forwarders
Ironstream delivers data to configured destinations using what is referred to as a forwarder, and each forwarder is configured for a specific data source. For some data sources, the forwarder is also a data gatherer: it filters and formats data before forwarding it to destinations. To illustrate this, consider the SMF and syslog data sources:
• SMF data is filtered by record type, and in some cases by subtype. Many of the fields across all SMF records contain numeric data, often in binary format. Ironstream performs the significant formatting required to forward these fields in a way that makes them useful and usable in destinations.
• Syslog data is filtered by product/subsystem or message prefix, but the data is entirely EBCDIC, so Ironstream converts it to ASCII and delivers it to destinations.
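Ironstream performs this conversion internally; conceptually, translating an EBCDIC record to ASCII resembles the following Python sketch. The choice of code page 037 is an illustrative assumption — the code page in effect depends on your system.

```python
def ebcdic_to_ascii(record: bytes, codepage: str = "cp037") -> str:
    """Decode an EBCDIC record into text, then force it into ASCII,
    replacing any characters that have no ASCII equivalent.
    (Illustrative helper, not part of Ironstream itself.)"""
    text = record.decode(codepage)
    return text.encode("ascii", errors="replace").decode("ascii")

# A five-byte record encoded in EBCDIC code page 037 ("Hello").
raw = bytes([0xC8, 0x85, 0x93, 0x93, 0x96])
print(ebcdic_to_ascii(raw))  # Hello
```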
Other data sources, such as the USS file monitor, use a combination of the Data Collection Extension (DCE) and the Ironstream Desktop (IDT) for filtering and gathering, but still rely on a forwarder to deliver the data to destinations. Ultimately, an Ironstream forwarder delivers a specific data source to destinations, but different data sources require different processing to prepare the data being sent.
For more information about how Ironstream forwarders interact with other Ironstream
components, see “DCE, IDT, and XCF Configuration Considerations” on page 6-8.
SyslogD support can collect SyslogD messages from the z/OS Syslog Daemon, remote z/OS
Syslog Daemons, or any other network device. Messages can be filtered based on Origin,
Facility, and/or Priority.
Table 2-1 describes the Network Monitoring components, each of which can generate alerts for a variety of events that can occur in the z/OS system. For more information, see Chapter 14, “Alerts and SyslogD Forwarding”.
Table 2-1: Network Monitoring Components
Component Description
ZEN ZEN is the core of the Network Monitoring component set and
provides many common functions for all other ZEN
components: a browser-based online interface; software
routing; a centralized alerting function (including
user-defined alerts); enhanced system log; SyslogD support;
IP tools; CSM, ECSA and USS panels; common reporting;
REXX support with ZEN Function Pack; user-definable
menus and panel support; timer, message and alert-driven
automation.
ZEN IP MONITOR Real-time IP stack monitor providing full insight into all IP
network data via both 3270 VTAM and ZEN-based panels.
Many IP monitors are provided such as: IP stack (services,
interfaces, activity, connections, gateways), TN3270 response
time, OSPF, EE, X.25, MIBs, OMVS thresholds. Many alerts
available in several categories such as Availability,
Performance, Capacity and Message. Several utilities
available for network analysis and monitoring. Handles single
or multiple stacks on single or multiple LPARs and across as
many CPUs and Sysplex configurations as required.
ZEN EE MONITOR Sophisticated monitor for EE and APPN/HPR data providing:
insight into EE activity and performance; dynamic alerting
and incident reporting; online and offline historical data
recording; comprehensive diagnostic toolset; Netview/REXX
interface for automated network monitoring via ZEN; optional
collection and display of SNA session awareness data.
ZEN OSA MONITOR Provides a convenient single point from which you can
monitor all OSAs accessible to the LPAR on which ZEN is
running. Provides both event and threshold monitoring and
panels for OSA information, channels and ports, interfaces,
LPARs, VTAM and interface information. Particularly useful
is the Health Check function which provides a way of
dynamically checking whether there are any problems or
potential problems with your OSAs.
ZEN LINUX MONITOR Enables you to monitor all of your Linux systems using a set
of clear panels. Also enables you to set thresholds for key
Linux resources, such as system and user CPU, memory and
swap utilization, TCP connections and retransmissions, and
the number of active processes; these resources are monitored,
and an alert is raised should usage exceed the threshold set.
ZEN FTP CONTROL Monitors activity in all z/OS FTP servers and clients. History
file available for browsing online that records all FTP activity.
Provides many monitoring panels both as 3270 VTAM and
ZEN-based panels. Every FTP action, such as RACF logon
failures, invalid userid/password, unrecognized commands,
and complete or incomplete transfers both inbound and
outbound, causes an alert message to be issued.
Source Description
API – User applications can create user data and forward it to a destination. Ironstream supports two user API functions:
• Single-send, for use where a persistent environment is not available, such as in a CICS transaction.
• Multi-send, for use where the environment is consistent from call to call, such as in a batch job.
See “Configuring and Using the Ironstream API,” on page 18-1.
DB2 – Data inserted into DB2 tables. See “DB2 Data Forwarding,” on page 15-1.
FILELOAD – Any file containing records that have displayable EBCDIC data. See “Sequential File Forwarding,” on page 16-1.
IMS – Many IMS log records can be captured by Ironstream. For a list of all supported IMS record types, see “IMS Log Record Forwarding,” on page 20-1.
Log4j – Application user log data. See “Setting Up Log4j,” on page 19-1.
Network Alerts – Alerts generated by the ZEN suite of Network Monitoring components. See “Alerts and SyslogD Forwarding,” on page 14-1.
RMF – DCE data type that collects user-selected RMF III system performance and utilization data. See “Setting Up the RMF Data Forwarder,” on page 23-1.
SMF – Many SMF record types can be captured by Ironstream. For a list of all supported SMF record types, see “SMF Record Filtering,” on page 12-1.
Syslog – All messages can be captured, or selected messages from ACF2, CICS, DB2, IMS, RACF, Top Secret, USS, WebSphere Application Server, WebSphere MQ, and z/OS IEF messages. You may also specify three- and four-character message prefixes to capture. See “Syslog Message Filtering,” on page 11-1.
SyslogD – Any messages written to SyslogD and captured by the ZEN Network Monitoring core component. See “Alerts and SyslogD Forwarding,” on page 14-1.
SYSOUT – Spool data sets for both active and completed jobs, as well as input data sets and JES system data sets: JESJCLIN, JESMSGLG, JESJCL, and JESYSMSG. See “SYSOUT Forwarding,” on page 13-1.
SYSTEMSTATE – Generates z/OS system-level metrics data to forward to a destination. See “System State Forwarding,” on page 17-1.
USS – DCE data type that monitors USS directories and files at specified intervals for offloading to Ironstream according to user-defined filters. See “Setting Up USS File Collection,” on page 22-1.
For descriptions of all the fields in the supported SMF record types, refer to the
HTML-based Ironstream SMF Record Field Reference available on the “Ironstream V2.1
Documents” page on Syncsort’s MySupport page for Ironstream.
For syslog and FILELOAD data, refer to “Forwarded Data Formats” on page A-1, which provides a list of the fields and an example of their formatting in a chosen destination.
Ironstream forwards one type of data in each Ironstream instance. For example, collecting syslog, SMF, and log4j data requires three address spaces to be started.
Each instance of Ironstream requires a separate configuration file member that includes a
NAME parameter in the SYSTEM section. This parameter defines the unique ‘instance
name’ of that specific instance of Ironstream running in the LPAR.
• Data Collection Extension for USS monitoring and RMF III metric collection:
▪ Requires IBM Cross System Coupling Facility (XCF)
▪ Requires IDT for dynamic monitoring and configuration changes
▪ DCE USS requires file permissions in HFS
▪ DCE RMF requires a RACF user ID to access the RMF Distributed Data Server
(DDS)
• Syslog – No access restrictions to OPERPARM segment of a RACF user ID:
▪ If the z/OS system uses the Message Processing Facility (MPF), or any similar product, ensure
that all messages to be forwarded by SDFLOG are set to AUTO(YES) in the MPF configuration.
• SYSOUT – Requires READ access to any JESSPOOL data set to be forwarded
• ZEN NMC – Requires a VTAM application major node
This chapter describes how a Splunk administrator needs to set up one or more indexes for
forwarding records to Splunk, as well as a custom TCP port for the mainframe data.
Topics:
• “Overview of Setting Up Splunk” on page 3-1
• “Setting Up a Non-SSL Port” on page 3-2
• “Setting Up an SSL Port” on page 3-2
• “Setting Up a Splunk Index” on page 3-3
• “Next Steps” on page 3-3
Splunk Platform
Following is a sample procedure to set up a Splunk SSL port. In this example, cacert.pem is
used for the default Splunk SSL CA certificate, 9998 is used for the SSL port number, and
server.pem is used for the Splunk server’s certificate.
1. Edit $SPLUNK_HOME/etc/system/local/inputs.conf:
[tcp://9998]
connection_host = dns
sourcetype = json
[tcp-ssl:9998]
compressed = false
[SSL]
password = password
rootCA = $SPLUNK_HOME/etc/auth/cacert.pem
serverCert = $SPLUNK_HOME/etc/auth/server.pem
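Once the SSL port is defined, it can help to confirm that it accepts TLS connections before starting any forwarders. The short Python sketch below opens a TLS connection and reports the negotiated protocol version; the host name shown is a placeholder, not a value from this guide.

```python
import socket
import ssl

def check_ssl_port(host: str, port: int, cafile: str) -> str:
    """Open a TLS connection to the Splunk SSL input port and
    return the negotiated TLS protocol version.
    (Illustrative helper; host/cafile are placeholders.)"""
    ctx = ssl.create_default_context(cafile=cafile)
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# Placeholder values -- substitute your Splunk host and CA file:
# print(check_ssl_port("splunk.example.com", 9998, "cacert.pem"))
```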
Mainframe Platform
Follow these steps to configure mainframe security when using a Splunk SSL port.
1. Set up a keyring database in OMVS, using a RACF key data set or a key token.
2. Import the Splunk CA certificate (cacert.pem) sent from the Splunk platform to one or
all key databases.
3. Assign a label to this certificate in the key data set.
Next Steps
These chapters have information on configuring Ironstream for Splunk destinations:
• Chapter 6, “Configuring Ironstream Components” – Describes how to configure the
Ironstream forwarder components for newly installed Ironstream instances using the
Ironstream Configurator utility.
• Chapter 7, “Manually Setting Ironstream Parameters” – How to configure the
Ironstream configuration file parameters to define data sources and target destinations.
• Chapter 10, “Configuring Data Loss Prevention” – How to configure Ironstream to
minimize any loss of forwarded Splunk data due to extended network or Splunk
outages.
This chapter describes how an Elastic administrator needs to set up the Logstash,
Elasticsearch, and Kibana products to ingest data forwarded by Ironstream.
Topics:
• “Overview of Elastic Support in Ironstream” on page 4-1
• “Forwarding Data to Logstash” on page 4-2
• “Receiving Ironstream Data in Elasticsearch” on page 4-4
• “Displaying Ironstream Data in Kibana” on page 4-4
• “Field Mappings and Elastic Defaults” on page 4-4
• “Next Steps” on page 4-4
• DCE USS – Data can be delivered to Elastic, but due to current limitations the source
(file name) will not be supplied. For more information, see Chapter 22, “Setting Up USS
File Collection”.
Logstash Configuration
Logstash is an open source data collection engine with real-time pipelining capabilities.
Data arriving from Ironstream is processed in the usual way by Logstash.
Here is an example Logstash pipeline configuration file (for example, logstash.conf) that
shows the “input”, “filter”, and “output” sections, where different types of data are received
(input) over different ports, processed (filter), and sent (output) to different indexes:
input {
# Port 4300 receives SMF data
tcp {
port => 4300
type => "smf"
codec => "json"
}
# Port 4310 receives SYSLOG data
tcp {
port => 4310
type => "syslog"
codec => "json"
}
# Port 4315 receives SYSTEMSTATE data
tcp {
port => 4315
type => "systemstate"
codec => "json"
}
}
filter {
#
# Use grok, mutate, ruby etc. to manipulate data to requirements.
#
}
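The “output” section mentioned above is not reproduced in the sample; a minimal sketch that routes each event type to its own index might look like the following. The host and the index naming pattern here are illustrative assumptions, not values from this guide.

```text
output {
  # Route each event type to its own Elasticsearch index.
  # The host and index naming pattern below are illustrative.
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "ironstream-%{type}"
  }
}
```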
Notes
• In the “input” section of the above Logstash configuration, each “tcp” section has:
codec => "json"
which could equally be:
codec => "line"
or:
codec => "json_lines"
Even though Ironstream sends JSON formatted data, it arrives as a single line of
data for each event/record. Therefore, all three codecs can be used.
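To see why all three codecs behave equivalently here, consider how a newline-delimited JSON stream is parsed. The Python sketch below mimics a receiver; the event field and values are invented for illustration.

```python
import json

# Two events as they might arrive over TCP, one JSON document
# per line (the field name and values are invented).
stream = b'{"event":"IEF403I"}\n{"event":"IEF404I"}\n'

# "line"-style handling splits on newlines; "json"/"json_lines"-style
# handling additionally parses each line as a JSON document.
lines = stream.decode("ascii").splitlines()
events = [json.loads(line) for line in lines]
print(len(events), events[0]["event"])  # 2 IEF403I
```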
• To send data via HTTP (non-SSL) or secure TCP, refer to the Elastic documentation.
Standard Elastic configuration and processing is supported.
Refer to the Elastic documentation for details on how to update this setting.
2. Use Ironstream filtering to decrease the number of fields sent to Elastic, and keep under
the default limit. Refer to chapters in the Ironstream documentation that describe
message or record filtering.
Next Steps
These chapters have information on configuring Ironstream for Elastic destinations:
• Chapter 6, “Configuring Ironstream Components” – Describes how to configure the
Ironstream forwarder components for newly installed Ironstream instances using the
Ironstream Configurator utility.
• Chapter 7, “Manually Setting Ironstream Parameters” – Describes how to manually
configure the Ironstream configuration file parameters to define data sources and target
destinations.
This chapter describes how to configure Ironstream’s internal Kafka producer to publish
z/OS data to Kafka brokers via Ironstream.
Topics:
• “Overview of Apache Kafka Support in Ironstream” on page 5-2
• “Downloading and Installing Kafka to z/OS OMVS Systems” on page 5-3
• “Applying the Kafka Function to Ironstream” on page 5-4
• “Providing APF Authorization for Ironstream Programs” on page 5-5
• “Configuring Ironstream to Use the Kafka Producer” on page 5-6
• “Confirming Kafka Activity Status in Ironstream” on page 5-8
• “Next Steps” on page 5-10
Sample JCL:
//RECEIVE EXEC PGM=GIMSMP,REGION=0M,PARM='CSI=<your.hlq>.SMP.CSI'
//SMPPTFIN DD DISP=SHR,DSN=<your.hlq>.SDFPTF
//SMPCNTL DD *
SET BDY(GLOBAL).
RECEIVE SYSMODS LIST.
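The APPLY CHECK step referred to in the next sentence is not reproduced in this excerpt; based on the final APPLY statement shown below, it would typically resemble the following sketch. The job-step name and target-zone placeholder are assumptions — substitute your own values.

```text
//APPLYCHK EXEC PGM=GIMSMP,REGION=0M,PARM='CSI=<your.hlq>.SMP.CSI'
//SMPCNTL DD *
  SET BDY(<your-target-zone>).
  APPLY CHECK
  C(ALL) GROUP SELECT(LSDK210) BYPASS(HOLDSYSTEM).
```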
If the above APPLY CHECK was successful, resubmit with the following change:
APPLY
C(ALL) GROUP SELECT(LSDK210) BYPASS(HOLDSYSTEM).
extattr +a libSDFKafkaJNI.so
The Unix “find” command searches all the subdirectories for the *.so file names and pipes
the results into xargs, which converts them into the appropriate input format for extattr.
"TARGET": "target_name" - This required parameter defines Kafka as the data target for
this Ironstream instance. The valid values are ELASTIC, KAFKA, or SPLUNK.
If a target is not specified, the default value for TARGET is SPLUNK.
"TOPIC":"topic_name" - This required parameter defines the name of the Kafka topic for
this Ironstream instance.
"JAVA64BIT":"YES" | "NO" - This required parameter specifies whether to use the 64-bit
or 31-bit version of the JVM. This value is based on which directory the JAVAHOME
parameter points to.
• YES - 64-bit version of Java is used
• NO - 31-bit version of Java is used
"PORT": "port" - This required parameter specifies the port number on which the Kafka
broker is listening.
Note: Any number of IPADDRESS and PORT combinations can be specified. Together they
form the broker list for the Kafka cluster, which is used solely by the Kafka producer for
load balancing.
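Conceptually, the repeated address/port pairs behave like the bootstrap broker list a Kafka client uses. The Python sketch below shows how such pairs collapse into a broker list; the second address is invented for illustration.

```python
# IPADDRESS/PORT pairs as they might appear in the configuration
# (the second address is invented for illustration).
pairs = [("192.168.61.3", "9092"), ("192.168.61.4", "9092")]

# A Kafka producer treats these as its bootstrap broker list and
# balances load across the brokers in the cluster.
bootstrap = ",".join(f"{ip}:{port}" for ip, port in pairs)
print(bootstrap)  # 192.168.61.3:9092,192.168.61.4:9092
```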
[KEYS]
"KEY_WARN_DAYS":"30"
"KEY":"NNNNNNNNNNNNNNNN"
[SYSTEM]
"NAME":"SDF"
[DESTINATION]
"TARGET":"KAFKA"
"TOPIC":"syslog1"
"JAVAHOME":"/usr/lpp/java/J7.1"
"JAVA64BIT":"NO"
"SSDFHOME":"/usr/lpp/ironstream/kafka"
"KAFKHOME":"/usr/lpp/kafka/kafka_2.11-0.10.0.0/libs"
"IPADDRESS":"192.168.61.3"
"PORT":"9092"
[SOURCE]
"DATATYPE":"SYSLOG"
"FILTER":"SSDFFLOG"
--------------------------------------------------------
* KAFKA STATISTICS *
--------------------------------------------------------
Status Blocks sent Bytes transmitted
ACTIVE 69 1668159
--------------------------------------------------------
Note that the Status is ACTIVE. If the Kafka broker becomes unavailable for sending data,
the status changes to INACTIVE.
For more information about the Ironstream commands, refer to Chapter 25, “Ironstream
Commands”.
Next Steps
These chapters have information on configuring Ironstream for Kafka destinations:
• Chapter 6, “Configuring Ironstream Components” – Describes how to configure the
Ironstream forwarder components for newly installed Ironstream instances using the
Ironstream Configurator utility.
• Chapter 7, “Manually Setting Ironstream Parameters” – Describes how to manually
configure the Ironstream configuration file parameters to define data sources and target
destinations.
This section provides instructions for configuring and running Ironstream forwarders
and components.
• “Configuring Ironstream Components”
• “Manually Setting Ironstream Parameters”
• “Controlling Ironstream Components”
• “Dynamically Modifying a Running Ironstream Configuration”
• “Configuring Data Loss Prevention”
Chapter 6 Configuring Ironstream
Components
This chapter describes how to configure the Ironstream forwarder components for newly
installed Ironstream instances, either manually or using the Ironstream Configurator. The
ensuing chapters in this manual describe how to configure the data source parameters to
optimize data collection and forwarding efficiency for your environment.
Topics:
• “Overview” on page 6-2
• “Manually Configuring Ironstream” on page 6-4
• “About the Ironstream Configurator Utility” on page 6-7
• “Running the Ironstream Configurator Utility” on page 6-11
• “Post Configurator: Additional Actions” on page 6-23
Note: The configuration instructions in this chapter assume that the Ironstream V2.1
SMP/E installation steps have been successfully completed. For installation instructions, see
the Ironstream Program Directory for version 2.1, which is intended for system
programmers responsible for program installation and maintenance.
Overview
Ironstream delivers data to destinations using what is referred to as a forwarder and each
forwarder is configured for a specific data source. Data source parameters are defined in the
Ironstream configuration file, along with other configurable parameters such as license
keys, local system symbols, and various system-related values. Every data source requires a
forwarder with an appropriate configuration file. Some data sources have additional
components requiring additional configuration files.
The configuration file is fully described in “Manually Setting Ironstream Parameters,” on
page 7-1.
To validate all the above, there are two tests that we recommend you perform.
1. Configure the SYSTEMSTATE data source as described in Chapter 17, “System State
Forwarding” and start the forwarder. We recommend using the SYSTEM STATE data
source as it is the easiest to configure.
a. The following message is issued if the dataset your-hlq.SSDFAUTH is not APF
authorized.
SDF0615A SDF LOAD LIBRARY MUST BE APF AUTHORIZED. TERMINATING
b. The following message is issued if the product license key in the configuration file is
invalid:
SDF0151S SDF IS TERMINATING BECAUSE NO VALID KEY WAS FOUND
The Ironstream forwarder ends with Return Code 16 in both these error situations.
Configure the target destination’s port by following the instructions in Chapter 3,
“Setting Up Splunk for Ironstream” or Chapter 4, “Setting Up Elastic for Ironstream”,
and verify that the SYSTEMSTATE data is being successfully forwarded.
2. Configure the SMF data source as described in “Manually Defining SMF Filtering
Configurations” on page 12-7 and specify a low volume SMF record type that you are
collecting: "SELECT":"SMFnnn". Review the messages issued by the SMF forwarder task
during initialization.
a. Message SDF0706A indicates that Ironstream is unable to activate its SMF exits.
The forwarder task terminates.
b. Any message in the range SDF0707I to SDF0711I indicates an issue with the
definition of the IEFU83/4/5 exits in your SMFPRMxx member, meaning that not all
SMF records are available to Ironstream. The Ironstream forwarder remains active.
c. If your SMFPRMxx member correctly defines the IEFU83/4/5 exits, you will see
multiple instances of message SDF0705I for all three exits for SYSTEM, and for each
subsystem (SUBSYS=STC, SUBSYS=TSO, etc.).
If you encounter the messages described in either a. or b., issue the “D SMF,O” system
command, review the output, and correct any omissions.
[SYSTEM]
"NAME":"SLOG"
[DESTINATION]
"INDEX":"mfslogindex"
"TYPE":"TCP"
"IPADDRESS":"nnn.nn.nn.nnn"
"PORT":"nnnnn"
"SSL":"NO"
[SOURCE]
"DATATYPE":"SYSLOG"
"FILTER":"SSDFFLOG"
For more information about modifying the configuration file, see “Manually Setting
Ironstream Parameters,” on page 7-1.
Source Description
Syslog – Syslog message filtering is accomplished by building a message
filtering module using the SSDFFLOG macro. See “Syslog Message
Filtering,” on page 11-1.
SYSOUT – There is a specific sequence in which the SELECT and FILTER
parameters must be specified or the Ironstream initialization will
fail. See “SYSOUT Forwarding,” on page 13-1.
SMF – Ironstream provides an SMF filtering facility for selecting specific
fields within an SMF record or subtype, so you can control how
much data is forwarded to a destination. See “SMF Record
Filtering,” on page 12-1.
IMS – Many IMS log records can be captured by Ironstream. For a list of
all supported IMS record types, see “IMS Log Record Forwarding,”
on page 20-1.
DB2 – To enable the collection of DB2 data, you must use the SSDFTRIG
and SSDFPROC members of SDFSAMP to create the necessary
DB2 objects and modify the configuration file. See “DB2 Data
Forwarding,” on page 15-1.
FILELOAD – Multiple files can be loaded in a single invocation of Ironstream,
either by concatenation using the standard rules of concatenation,
or by using multiple DD statements and multiple FILELOAD
configuration commands. See “Sequential File Forwarding,” on
page 16-1.
IRONSTREAM_API – User applications can create and send user-defined EBCDIC data
and forward it to a destination as ASCII. See “Configuring and
Using the Ironstream API,” on page 18-1.
SYSTEMSTATE – Generates z/OS system-level metrics data to forward to a
destination. See “System State Forwarding,” on page 17-1.
Note: There are no required parameters for SYSTEMSTATE.
Next Steps
• Configure destination targets for Ironstream:
▪ Set Up Splunk for Ironstream – You need to set up dedicated TCP ports and one
or more indexes in Splunk so that Ironstream can successfully forward your chosen
data sources to it. For more information, see “Setting Up Splunk for Ironstream,” on
page 3-1.
▪ Set Up Elastic for Ironstream – You need to set up an open port on your Logstash
server so that Ironstream can successfully forward your chosen data sources to it. For
more information, see “Setting Up Elastic for Ironstream,” on page 4-1.
• Start the Ironstream Forwarder – After manually configuring your data source you
can start the Ironstream forwarder tasks, as described in “Controlling Ironstream
Components,” on page 8-1.
Ironstream Forwarder(s)
Each Ironstream forwarder is a separate started task and so must have a unique
four-character instance name in its configuration file, in the format "NAME":"xxxx". The
instance name must be unique within a SYSPLEX. The Ironstream configuration file also
includes the XCF group to which the forwarder belongs, and the XCF member name within
the group.
Figure 6-4 shows a configuration that defines an XCF forwarder for USS processing:
"SYSTEM"
"NAME":"USS1"
"DESTINATION"
"INDEX":"ussindex"
"TYPE":"TCP"
"SOURCE"
"DATATYPE":"XCF"
"XCFGROUP":"USSXCFG1"
"XCFMEMBER":"USSXCFM1"
Figure 6-5 shows an Ironstream configuration that defines an XCF forwarder for RMF
processing:
"SYSTEM"
"NAME":"RMF1"
"DESTINATION"
"INDEX":"rmfindex"
"TYPE":"TCP"
"SOURCE"
"DATATYPE":"XCF"
"XCFGROUP":"RMFXCFG2"
"XCFMEMBER":"RMFXCFM1"
DCE Tasks
Each DCE data type must have its own DCE task and associated Ironstream forwarder
task(s). For example, for the USS data type, the DCE configuration specifies only
parameters for USS data collection.
Note: DCE USS data can be delivered to Elastic destinations, but due to current
limitations the source (file name) will not be supplied. For more information, see “Setting Up
Elastic for Ironstream,” on page 4-1.
Figure 6-6 shows the significant DCE Global Settings parameters for USS.
Define Globals
InstanceName DCEUSS
Maxthreads 4
On start-up, an XCF connection is established between DCE and all of the currently active
members in the same XCF group. RMF is a single thread process and the first member in
the DCE Global Settings is used. If this primary forwarder is stopped or terminates,
forwarding is automatically switched to the second task. USS is a multi-threaded process
and the workload is balanced across all available XCF forwarders.
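The selection behavior described above can be sketched as a small model (the helper and member names here are hypothetical, not product code):

```python
def pick_forwarders(members, datatype):
    """Conceptual model of how DCE chooses XCF forwarder members.

    RMF is single-threaded: only the first active member is used, so
    stopping the primary effectively fails over to the next in the list.
    USS is multi-threaded: the workload is balanced across all active
    members.
    """
    active = [m for m in members if m["active"]]
    if datatype == "RMF":
        return active[:1]   # primary only; next member on failover
    return active           # USS: every available member shares the load

members = [{"name": "USSXCFM1", "active": False},
           {"name": "USSXCFM2", "active": True}]
```

With the first member stopped, an RMF request resolves to USSXCFM2 alone, while a USS request returns every active member.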
For more information about configuring DCE parameters for USS and RMF, see
“Configuring the DCE Parameters,” on page 21-1.
• The zFS root name is not required if you are configuring the base Ironstream
component only, in which case leave the default value. If you blank out the zFS root
name field, the only component that is configurable is base Ironstream.
Press Enter to continue to Panel 6-2.
Note that the Date column in this example is displayed in national language format.
//SDFSMF PROC
//*------------------------------------------*
//* IRONSTREAM SMF data forwarder task *
//*------------------------------------------*
//IEFPROC EXEC PGM=SSDFMAIN
//STEPLIB DD DISP=SHR,
// DSN=yourhlq.SSDFAUTH
//SYSPRINT DD SYSOUT=*
//SYSABEND DD SYSOUT=*
//SSDFCONF DD DISP=SHR,
// DSN=yourhlq_parmlib(CFGSMF)
//CEEOPTS DD DISP=SHR,
// DSN=yourhlq_parmlib(CEEOPTS)
// PEND
2. A CEEOPTS DD statement is included in each started task member. It is required for
the SMF and log4j data sources; for all other data sources it is optional and can be
commented out.
3. The SYSABEND DD statement is optional but it is advisable to have it for a period after
installation.
• Forwarder Parameters – Specify the STC Name of the Ironstream Forwarders for
RMF and USS, the Index Names, and the IP Addresses and Port numbers of the
Splunk or Logstash servers.
Note: The Index Name is valid for RMF data forwarding; however, for the USS data
type, this field is ignored: the actual USS index names are defined in the USS filters.
For more details, see Chapter 22, “Setting Up USS File Collection”.
• XCF Group Parameters – Each DCE data source uses XCF, so provide separate XCF
Group and Member names. For optional failover, you can manually add additional
XCF members to the generated DCERMF and DCEUSS configuration members. For
instructions on adding additional XCF members, see “Configuring the DCE Parameters,”
on page 21-1.
• DCE Parameters – Specify the STC Name for RMF and USS. For RMF, provide the
IP Address and Port of the RMF III Distributed Data server (DDS). This is typically
defined in a system PARMLIB member called GPMSRVnn. The help screen provides
additional information about RACF authorities that are needed for the RMF DDS.
Press Enter to proceed to the Ironstream Desktop configuration panel shown in Panel 6-6.
you don’t want them to be able to change RMF III or USS definitions. Using this scenario,
here are the required steps:
1. Designate an IDT administration user. For this example, let’s assume their RACF
userid is IDTADM.
2. Create the class FACILITY profile WDS.ZEN.MASTER, with UACC(NONE).
3. Permit IDTADM READ access to WDS.ZEN.MASTER.
4. Within IDT, user IDTADM uses the Admin menu to select Ironstream(ussn) > USS
Defaults. (For a sample screenshot, refer to Panel 22-4, “USS Defaults”.)
5. Right-click in the USS Defaults panel and select Set Security. This opens a pop-up
that allows you to set the access level for the USS Defaults panel.
For more detailed information about using each Network Monitoring Component with
Ironstream, see Chapter 14, “Alerts and SyslogD Forwarding”.
• “Controlling the Ironstream Desktop (IDT)” on page 8-3 explains the options for
managing IDT for NMC components and for DCE data types.
• “Controlling the Data Collection Extension (DCE)” on page 8-4 describes the options for
managing DCE.
• “Controlling the Network Monitoring Components (NMC)” on page 8-7 discusses the
options for the NMC components.
Next Steps
• Modify Basic Configuration File Parameters – After running the Configurator,
minor modifications to the Ironstream configuration file are required for some data
sources, such as log4j and the Ironstream API. Sample configuration members for some
data sources are provided in the your_hlq.SSDFSAMP library. For more information
about modifying the configuration file, see Chapter 7, “Manually Setting Ironstream
Parameters”.
• Configure target destinations for Ironstream:
▪ Set Up Splunk for Ironstream – You need to set up dedicated TCP ports and one
or more indexes in Splunk so that Ironstream can successfully forward your chosen
data sources to it. For information on how you do this, see Chapter 3, “Setting Up
Splunk for Ironstream”.
▪ Set Up Logstash for Ironstream – You need to set up an open port on your
Logstash server so that Ironstream can successfully forward your chosen data
sources to it. For more information, see Chapter 4, “Setting Up Elastic for
Ironstream”.
This chapter explains how to configure the Ironstream configuration file parameters to
define data sources and target destinations. It also includes descriptions of typical
Ironstream parameters and example configuration files.
Topics:
• “Overview of the Configuration File” on page 7-2
• “General Syntax Rules” on page 7-3
• “Configuration Parameters” on page 7-6
• “Typical Ironstream Parameters” on page 7-27
• “Configuration File Examples” on page 7-29
Keywords
• All keywords must be entered in upper case. Most values should also be entered in
upper case. However, there are certain exceptions, including:
▪ The Log4j SOURCE keywords NAMEDPIPE, JAVAHOME, SDFHOME, FILENAME,
TIMESTAMP, and PATTERN.
▪ Certain DESTINATION parameters, such as INDEX, IPADDRESS, and
CERTIFICATE.
▪ SMF record filtering selection INCLUDE and WHERE statements.
An example of a valid parameter string is:
"INCLUDE":"SMF30DTE,SMF30TME,
SMF30JNM"
An example of an invalid parameter string is:
"INCLUDE":"SMF30DTE, SMF30TME,
SMF30JNM"
The blanks between the commas and the value are invalid.
▪ In the second method, column 72 is used to indicate the continuation of data on the
following record. The continued data starts in column 1 and is logically concatenated
to the data in column 71 of the previous record. This method is typically used for a
long single parameter, such as a NAMEDPIPE name. This method overrides the first
method and is incompatible with it.
▪ An example of this method, with column numbers as the first two lines for clarity, is:
----+----1----+----2----+----3----+----4----+----5----+----6----+----7--
"SOURCE"
"DATATYPE":"LOG4J"
"NAMEDPIPE":"/user/directory_level_1/directory_level_2/directory_level_+
3/filename.type"
Note that the plus sign ‘+’ in column 72 could be any non-blank character.
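Both continuation rules above can be modeled in a few lines (illustrative sketches of the rules only, not Ironstream code):

```python
import re

def comma_continuation_valid(value: str) -> bool:
    """First method: a blank after a separating comma is invalid."""
    return re.search(r",\s", value) is None

def join_continued_records(records):
    """Second method: a non-blank character in column 72 means the next
    record's data, starting in column 1, continues after column 71."""
    joined, current = [], ""
    for rec in records:
        rec = rec.ljust(72)
        if rec[71] != " ":            # continuation flag in column 72
            current += rec[:71]       # keep columns 1-71 of this record
        else:
            joined.append(current + rec.rstrip())
            current = ""
    return joined
```

For example, `comma_continuation_valid("SMF30DTE, SMF30TME")` is False because of the blank after the comma, and a record ending with `+` in column 72 is joined with the following record's column-1 data.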
Configuration Parameters
The configuration file starts with the KEYS section and is followed by one each of the
SYMBOLS (optional), SYSTEM, DESTINATION, and SOURCE sections. The combination
of the parameters defined in these sections forms the characteristics of the run-time
environment of an Ironstream instance to handle one data source. Should you require
multiple data sources, simply define several configuration files, one for each data source.
Some of the parameters described in this chapter have further optional subparameters that
can be used to fine-tune the data that is extracted and forwarded. These are fully covered in
other chapters and appropriate references are made in the text that follows.
• “KEYS Section” on page 7-6
• “SYMBOLS Section” on page 7-6
• “SYSTEM Section” on page 7-7
• “DESTINATION Section” on page 7-11
• “SOURCE Section” on page 7-16
KEYS Section
The KEYS section is used to supply the Ironstream license keys and optionally to set an
expiration warning period. License keys are provided by Syncsort, Inc.
Note: If you are using the Ironstream Starter Edition to forward only syslog and/or SMF
record type 205 data, then you cannot have a KEYS section in your configuration; otherwise,
this section is mandatory.
You may need to generate an SSDFCPR report to provide information for the creation of
license keys. For more information, refer to “The SSDFCPR Utility” on page B-1.
"KEYS" - Mandatory parameter that identifies the start of the KEYS section.
SYMBOLS Section
The SYMBOLS section is used to define local system symbols that can be used for
substitution in any subsequent configuration parameters (symbols may not be used in this
section).
This section is optional, but if defined it must appear immediately after the KEYS section
and precede the SYSTEM section.
To define a local system symbol use the following parameter format:
SYSTEM Section
The SYSTEM section specifies the name of this Ironstream instance that is running and
other global parameters. With the exception of the "NAME", "USAGE_SMF_NUMBER", and
"IMPORT" parameters, the SYSTEM parameters can appear in any sequence.
"SYSTEM" - Mandatory parameter that identifies the start of the SYSTEM section.
"LOADLIB":"library" - (optional) library is the data set name of the load library where the
Ironstream modules reside. If this is not specified, the load modules must reside in
the LINKLST or in a library specified by a JOBLIB or STEPLIB DDNAME
concatenation.
"FILELOAD":"ddname" - ddname is the name of the DD statement for the sequential file to
be loaded in the step that executes PGM=SSDFMAIN. Multiple files can be loaded in
a single run. Use multiple FILELOAD parameters with different ddname values to
load multiple files. Refer to the section “Capturing Sequential Data” on page 16-1 for
an example of the JCL required to load sequential files.
"IMPORT":"ON" - Optional parameter that specifies that data is to be imported from a flat
file rather than an active data source and forwarded to a destination in the usual
way. Ironstream can import data from syslog, SMF, log4j or other USS data files.
Important!
• The RECOVERY parameter that performed this function in Ironstream V1.2 is
still available for backward compatibility. Existing JCL decks will continue to
work without change. However, Syncsort strongly recommends that you change
any RECOVERY parameters to IMPORT as soon as practical since V2.1 will be
the last release to support the RECOVERY parameter.
• IMPORT is incompatible with Data Loss Prevention (DLP). Configuring both
functions will generate message SDF0190A and cause the Ironstream instance to
terminate. For more information about DLP parameters, refer to Chapter 10,
“Configuring Data Loss Prevention”.
When the IMPORT parameter is used, the START DATE, START TIME, END DATE
and END TIME parameters must also be specified and be in the format appropriate
to the data source you intend to import.
For syslog and SMF data the formats are:
"START DATE":"mm/dd/yyyy"
"START TIME":"hh:mm:ss.th"
"END DATE":"mm/dd/yyyy"
"END TIME":"hh:mm:ss.th"
For log4j data the formats are:
"START DATE":"yyyy/mm/dd"
"START TIME":"HH:mm:ss,SSS"
"END DATE":"yyyy/mm/dd"
"END TIME":"HH:mm:ss,SSS"
Only records within the specified date and time boundaries are forwarded to a
destination.
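As a quick illustration of how the two boundary formats differ, here is how they map onto Python strptime patterns (the hundredths/milliseconds fields are approximated with %f; this is a format illustration only, not product code):

```python
from datetime import datetime

# syslog/SMF boundaries: "mm/dd/yyyy" and "hh:mm:ss.th"
smf_start = datetime.strptime("01/01/2014 00:00:00.00",
                              "%m/%d/%Y %H:%M:%S.%f")

# log4j boundaries: "yyyy/mm/dd" and "HH:mm:ss,SSS"
# (note the comma before the milliseconds)
log4j_start = datetime.strptime("2014/09/25 14:04:32,383",
                                "%Y/%m/%d %H:%M:%S,%f")
```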
DESTINATION Section
The DESTINATION section describes where the Ironstream data is to be sent and how it is
to be forwarded.
For Splunk destinations, multiple indexers and ports can be specified by repeating the
IPADDRESS and associated parameters for each indexer and/or port. Since each indexer
and port are individually specified, a mix of TCP and TCP/SSL destinations may be
specified. All indexers addressed must be able to store data in the index_name specified by
the INDEX parameter.
Figure 7-1 provides a summary of all the possible parameters in the DESTINATION section.
The following sections provide detailed explanations for each parameter.
DLP Parameters (refer to “Data Loss Prevention Parameters” on page 7-13):
"DATA_LOSS_PREVENTION":"NO|YES"
"DLP_STREAM_NAME":"value"
"AUX_SPACE_NAME":"SSDFAUX/auxiliary_space_name"
"ACK_TIMEOUT_LIMIT":"60/nnnn"
"ACK_RETRY_LIMIT":"1/nn"
"DLP_COLD_START":"NO|YES"
"DATASTORESIZE":"nnnn" - Optional parameter that specifies the size of the data store
in megabytes. Valid range is 100–2000. The default value is a calculated value based
on the MAXUSER value in the IEASYSnn member of PARMLIB as 400K *
MAXUSER / 1M.
Important! For more information about correctly configuring size of the data store,
see “Data Store Filling or Full Condition” on page 26-2.
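As a worked example of the default calculation (the clamping to the documented 100–2000 range is an assumption here; the manual states only the formula and the valid range):

```python
def default_datastore_size_mb(maxuser: int) -> int:
    """Default DATASTORESIZE: 400K * MAXUSER / 1M megabytes,
    held within the documented valid range of 100-2000."""
    K, M = 1024, 1024 * 1024
    size = (400 * K * maxuser) // M
    return max(100, min(2000, size))

default_datastore_size_mb(2560)   # 400K * 2560 / 1M = 1000 MB
```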
"RETRY_ON_START": "NO" | "YES" - Optional parameter that when set to YES instructs
Ironstream to make multiple attempts to connect to the destination repository when
an Ironstream instance is starting up. The NO default means that Ironstream will
terminate when no connection is made upon start-up.
"PORT": "port" - Mandatory parameter that specifies the port number on which the
targeted destination is listening. Any available and valid port number can be used.
"SSL": "YES" | "NO" - Specify YES if SSL is active or NO if SSL is not active.
"STASH_FILE":"stash" - stash is the name of the stash file for an OMVS keyring file, such
as /u/sdf/mykey.sth. If a password is specified for the OMVS keyring file, this stash
value will be ignored and password will be used. For a RACF-type environment or for
a key token, specify "" after the colon to signify a null value.
"CERTIFICATE":"label" - label is the certificate label of the client End Entity certificate
that must be supplied when the destination server requires a client certificate; for
example, “CLSPLUNK” for Splunk servers. To complete the SSL configuration, the
client certificate must be signed by a CA certificate installed on the destination
server.
For more information about Splunk client certificates, refer to the descriptions of
“requireClientCert” in the “inputs.conf” section of the Splunk Admin Manual.
SOURCE Section
The SOURCE section specifies the data source that is being collected and sent to the target
destination specified in the DESTINATION section. Only one SOURCE section can be
specified because each Ironstream instance is dedicated to forwarding one specific data
source.
When the TRANSLATE_TABLE parameter is specified it defines a different EBCDIC to
ASCII translate table that Ironstream uses when forwarding data to a destination. All other
parameters are dependent on the specification of the DATATYPE parameter.
"SOURCE" - Mandatory parameter that identifies the start of the SOURCE section.
You can create your own translate tables and add them to a library in the SDFxxx
STEPLIB concatenation using a member name of the format SSDFUxxx. The module
must be 256 bytes in size. Reminder: All libraries in the concatenation must be
APF-authorized.
For example, if your system’s code page is EBCDIC 273 or 1141, add
"TRANSLATE_TABLE":"SSDF1141" to your configuration’s SOURCE section.
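To illustrate what such a module contains, the following sketch builds a 256-byte translate table using Python's built-in German EBCDIC codec (cp273). The real SSDFUxxx member is a 256-byte load module, not Python; this only shows the shape of the data:

```python
# Byte i of the table is the output byte for input byte i, which is
# exactly the shape an SSDFUxxx member must have (256 bytes).
table = bytes(range(256)).decode("cp273").encode("latin-1", "replace")

assert len(table) == 256            # the module must be 256 bytes
assert table[0xC1:0xC4] == b"ABC"   # EBCDIC X'C1'-X'C3' -> 'A', 'B', 'C'
```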
If your system’s code page is EBCDIC 1141 or 1148, and you would like to display "€"
correctly in Splunk, set CHARSET=ISO8859-15 in the Splunk props.conf file. Refer
to the Splunk documentation for instructions on how to set CHARSET.
Unconverted EBCDIC Hex Values When Using “Remove ASCII Control Characters”
Format Module
The following EBCDIC hex values are the only values that are not converted into spaces.
Table 7-2: Unconverted EBCDIC Hex Values
Name Definition
DB2TRIGGER – Specify DB2TRIGGER to capture and forward messages
generated by a DB2 trigger procedure driving a DB2 Stored
Procedure. See “DB2TRIGGER Subparameters” on page 7-20.
SYSLOG – Specify SYSLOG to capture and forward messages sent to syslog.
See “SYSLOG Subparameters” on page 7-20.
SMF – Specify SMF to capture and forward any supported SMF records.
See “SMF Subparameters” on page 7-21.
SYSOUT – Specify SYSOUT to forward spool data sets from active or
completed jobs to a destination. See “SYSOUT Subparameters”
on page 7-25.
LOG4J – Specify LOG4J to capture and forward messages from log4j
processes sent to z/OS. See “LOG4J Subparameters” on page 7-24.
XCF – Specify XCF to capture USS files and file updates, as well as RMF
III DDS metrics. XCF is also specified if you intend to forward
alerts and/or SyslogD data from the ZEN Network Monitoring
components to a destination. See “XCF Subparameters” on
page 7-25.
FILELOAD – Specify FILELOAD to read and transmit sequential records from
DD statements defined by the FILELOAD parameter in the
SYSTEM section (“SYSTEM Section” on page 7-7). See “Batching
FILELOAD Data” on page 16-3.
RACF – Specify RACF to read and transmit an unloaded RACF database
from the DD statement defined by the RACFLOAD parameter in
the SYSTEM section (page 8). Ironstream processes the unloaded
RACF database, forwards each record to a destination formatted
as name pairs (JSON format), and then shuts down.
IRONSTREAM_API – Specify IRONSTREAM_API to create user-defined data to send to
one or more Ironstream instances to forward on to a destination.
See “IRONSTREAM_API Subparameters” on page 7-26.
SYSTEMSTATE – Specify SYSTEMSTATE to generate z/OS system-level metrics
data to forward to a destination. There are no required
parameters for SYSTEMSTATE. See “System State Forwarding”
on page 17-1 for a description of all the generated data elements.
IMSLOG – Specify IMSLOG to capture and forward any supported IMS log
records. See “IMS Subparameters” on page 7-27.
DB2TRIGGER Subparameters
An Ironstream instance must be active to receive the DB2 data from the Stored Procedure
program.
SYSLOG Subparameters
In addition to the parameters required when selecting syslog records for forwarding, you
must also filter syslog messages by application, or any defined 3 or 4-character message
prefix. Otherwise, you may be forwarding far more messages than you actually require. The
filter module name is specified using the FILTER subparameter. For more details about the
syslog filtering facility and examples, see Chapter 11, “Syslog Message Filtering”.
"SCOPE":"scope" - This optional parameter enables you to select the scope of syslog
message forwarding, either sysplex-wide, or local to the LPAR on which Ironstream is
running. Valid settings for scope are SYSPLEX, which is the default value when the
parameter is omitted, or LOCAL.
In SYSPLEX mode only one Ironstream instance per sysplex should be started which
will process syslog messages for all LPARs in the sysplex. When LOCAL is specified,
an Ironstream instance is required for each LPAR in the sysplex, and each instance
will only process syslog messages for the LPAR in which Ironstream is active. Note
that each instance requires a unique NAME value in the SYSTEM section of the
configuration member.
"ROUTCDE":"rout_cde" - Optional parameter that specifies the route code for syslog
gathering. Individual codes or ranges of codes can be specified. Codes can be repeated
and can overlap, but a range specification must be in low to high sequence, for
example: "1,2,3-96". The default is 1-96.
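Putting these subparameters together, a syslog SOURCE section might look like the following sketch (the SCOPE and ROUTCDE values are illustrative; SSDFFLOG is the sample filter module name used elsewhere in this chapter):

```
"SOURCE"
"DATATYPE":"SYSLOG"
"FILTER":"SSDFFLOG"
"SCOPE":"LOCAL"
"ROUTCDE":"1,2,3-96"
```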
SMF Subparameters
These parameters are required when you select SMF records for forwarding to a destination.
You can control how much information is forwarded to a destination, by selecting specific
fields within an SMF record and subtype. Field-level filtering is accomplished by using the
SUBTYPE and INCLUDE parameters for the SMF record specified in SELECT.
For a detailed description of how to filter SMF records, along with some sample
configurations, see Chapter 12, “SMF Record Filtering”.
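The job-name wildcards used in a WHERE clause such as "SMF30JBN EQ 'TST*'" behave like simple glob matches, which can be modeled as follows (a conceptual sketch, not the product's actual evaluator):

```python
from fnmatch import fnmatchcase

def where_smf30jbn_matches(jobname: str) -> bool:
    """Model of "SMF30JBN EQ 'TST*' OR SMF30JBN EQ 'PRD*'"."""
    return fnmatchcase(jobname, "TST*") or fnmatchcase(jobname, "PRD*")
```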
Note: A running Ironstream instance collecting SMF records can have its existing SOURCE
configuration parameters dynamically modified without having to stop and restart it. This
includes switching from a currently-configured SMF record type to another SMF type,
changing/adding the subtypes, and tuning the SMF filtering parameters, such as INCLUDE
and WHERE. Such dynamic reconfiguration alleviates the need to stop running tasks,
thereby preventing the loss of records by eliminating unnecessary downtime while gathering
SMF data. For more information, see Chapter 9, “Dynamically Modifying a Running
Ironstream Configuration”.
• SYSTEM – The default. Use the security product running on the z/OS system.
• RACF – Process SMF Type 80 RACF data.
• ACF2 – Process SMF Type 230 ACF2 data (same as RACF).
• TS – Process SMF Type 80 “Top Secret” data.
This command lists active modules in the JPA CDE chain by way of message
SDF0831I:
F <Ironstream jobname>,LIST=MODULES
For more information about the LIST=MODULES command, refer to Chapter 25,
“Ironstream Commands”.
"SELECT":"SMFnnn" - Mandatory parameter that identifies one or more SMF record types
to collect. The SMFnnn value is a decimal number, with or without leading zeros, of
one or more SMF record types to collect. If multiple record types are specified, no
SUBTYPE or INCLUDE options are permitted for those record types.
LOG4J Subparameters
To forward log4j data to a destination requires changes to both your log4j configuration file
and the definition of appropriate parameters in Ironstream. For full details of how to set up
this facility see also Chapter 19, “Setting Up Log4j”.
"NAMEDPIPE":"pipe" - Mandatory parameter that specifies the name of the path that the
Ironstream’s Log4j Appender, SDFAppender or SDF2Appender, is using to send log4j
records to Ironstream. It must be a valid absolute path name for USS, for example:
/tmp/.mfsplunk or /tmp/afile. Relative pathnames are not supported.
"JAVAHOME":"javahome" - Specifies the location of the IBM JAVA SDK for z/OS which
the Offline Log4j Reader Facility requires to read log data and parse information.
The SDK is usually installed in the OMVS directory /usr/lpp/java.
Enter the UNIX command ‘ls /usr/lpp/java’ to determine whether the SDK is
installed there. The installed releases of the SDK will be displayed. If J8.0 is
displayed, for example, then the SDK 8 version is installed under directory
/usr/lpp/java/J8.0. In this case, you would specify /usr/lpp/java/J8.0 for the
javahome keyword value.
"JAVA64BIT":"NO" |"YES" - Specifies the address mode of the IBM JAVA SDK for z/OS.
Specify YES for 64-bit JAVA, or NO for 31-bit JAVA. The default when this
parameter is omitted is NO.
"FILENAME":"your_log_file" - Specifies the name (or DD) of the log file(s). For a log4j log
file or USS log file under OMVS, it can be a valid absolute path name for USS, for
example, "/tmp/logfile1". If the log file is a z/OS data set or a member of a z/OS data
set, "//X.Y.Z" or "//X.Y.Z(M)" can be used. "//DD:ddname" can also be used for multiple
log files.
"PATTERN":"pattern" - Optional parameter that is required only when the first field of the
log record is not a date/time field. When PATTERN is specified, the TIMESTAMP
parameter must be specified even if date/time is in the default log4j format. For
instructions on constructing the PATTERN keywords, see “How to use PATTERN in
the Log4j Reader Facility” on page 19-4.
SYSOUT Subparameters
You can specify one or more SELECT and FILTER parameters following a specification of
SYSOUT for the DATATYPE to control the selection of files; further parameters are
available to manage the way in which the lines and blocks of data are formed and forwarded.
For full details of all available parameters and a description of this facility, see Chapter 13,
“SYSOUT Forwarding”.
The basic parameters required for SYSOUT forwarding are:
"SELECT_JOB_IF_job_keyword":"job_keyword_value"
"SELECT_DATA_SET_IN_JOB_WITH_data_set_keyword":"data_set_keyword_value"
"FILTER_JOB_ON_filter_keyword":"filter_keyword_value"
"PRINT_MODE":"print_mode"
"PRINT_WRAP":"print_wrap"
"PRINT_SEND":"print_send"
"SYSOUT_NEW_JOB_SCAN_WAIT_TIME":"nnnnn"
"SYSOUT_NEW_OUTPUT_SCAN_WAIT_TIME":"nnnnn"
"EXCLUDE_JOB_IF_JOBNAME":"<jobname1>,<jobname2>,…<jobname8>"
"EXCLUDE_JOB_WHEN_OUTCLASS":"<outclass1>,<outclass2>,…<outclass8>"
This set of parameters is only available when using a data type of SYSOUT. The first three
parameters together specify the criteria by which data sets are selected for forwarding to a
destination. The three PRINT_* parameters are concerned with the way the lines and blocks
of data are formed and forwarded to a destination. For details of these parameters see
Chapter 13, “SYSOUT Forwarding”.
XCF Subparameters
Cross System Coupling Facility (XCF) parameters are specified in the Data Collection
Extension (DCE) configuration to capture USS files and file updates, and when collecting
RMF III metrics. For more information on correctly configuring the XCF parameters for the
DCE data types, see “DCE, IDT, and XCF Configuration Considerations” on page 6-8.
The XCF parameters are also relevant when you plan to use one or more Network
Monitoring components (NMC) to forward alerts and/or SyslogD data via Ironstream to a
destination; this facility also requires some configuration in the Ironstream Desktop (IDT).
For more information, see “Alerts and SyslogD Forwarding” on page 14-1.
When you use the XCF data type, you must specify the XCFGROUP and XCFMEMBER
parameters that together specify the XCF group and member name that this instance of
Ironstream will join.
IRONSTREAM_API Subparameters
These parameters are required when you plan to use the Ironstream API to create
user-defined EBCDIC or ASCII data to send to a destination. When you use this data type you
must also specify the XCFGROUP, XCFMEMBER, and CLASS parameters. For more
information, see Chapter 18, “Configuring and Using the Ironstream API”.
These parameters are specified when using a data type of IRONSTREAM_API and together
specify the XCF group and member name that this instance of Ironstream will join. You can
also categorize the collected data by CLASS, and optionally by TYPE and SUBTYPE.
"CLASS":"nnn" - Mandatory parameter that defines at least one CLASS entry. The nnn
value can be any value in the range of 128–255. Values in the range of 1–127 are
reserved for use by Ironstream.
"TYPE":"ttt" - Optional parameter that defines one or more TYPE entries under a "CLASS"
entry. The ttt value can be any value in the range of 0–255.
"SUBTYPE":"sss" - Optional parameter that defines one or more SUBTYPE entries under a
"TYPE" entry. The sss value can be any value in the range of 0–255.
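A sketch of a SOURCE section for this data type follows (the group, member, class, type, and subtype values below are placeholders chosen from the documented ranges, not defaults):

```
"SOURCE"
"DATATYPE":"IRONSTREAM_API"
"XCFGROUP":"APIXCFG1"
"XCFMEMBER":"APIXCFM1"
"CLASS":"200"
"TYPE":"1"
"SUBTYPE":"3"
```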
IMS Subparameters
These parameters are required when you select IMS log records for forwarding to a
destination. For more information, see Chapter 20, “IMS Log Record Forwarding”.
"KEYS"
"KEY_WARN_DAYS":"value" (optional)
"KEY":"value"
"SYSTEM"
"NAME":"value"
"LOADLIB":"value" (optional)
"DESTINATION"
"INDEX":"value"
"TYPE":"TCP"
"SOURCE"
"DATATYPE":"value"
"SCOPE":"value" (optional,for syslog)
"FILTER":"value" (for syslog)
"SELECT":"value" (for SMF)
"NAMEDPIPE":"value" (for Log4j)
"TRANSLATE_TABLE":"value" (optional)
"KEYS"
"KEY_WARN_DAYS":"value" (optional)
"KEY":"value"
"SYSTEM"
"NAME":"value"
"LOADLIB":"value" (optional)
"DESTINATION"
"INDEX":"value"
"TYPE":"TCP"
"IPADDRESS":"value"
"PORT":"value"
"SSL":"YES"
"KEYRING":"value"
"PASSWORD":"value"
"STASH_FILE":"value"
"CERTIFICATE":"value"
"SOURCE"
"DATATYPE":"value"
"SCOPE":"value" (optional,for syslog)
"FILTER":"value" (for syslog)
"SELECT":"value" (for SMF)
"NAMEDPIPE":"value" (for Log4j)
"TRANSLATE_TABLE":"value" (optional)
"SYSTEM"
"NAME":"SDF"
"DESTINATION"
"INDEX":"zsmf"
"TYPE":"TCP"
"IPADDRESS":"192.168.1.1"
"PORT":"9999"
"SSL":"NO"
"SOURCE"
"DATATYPE":"SMF"
"SELECT":"SMF14,SMF15,SMF50,SMF80,SMF101,SMF110"
"SELECT":"SMF30"
"SUBTYPE":"4,5"
"INCLUDE":"SMF30JBN,SMF30STN,SMF30CPT,SMF30CPS,
SMF30_TIME_ON_ZIIP,SMF30_TIME_zIIP_ON_CP"
"WHERE":"SMF30JBN EQ 'TST*' OR SMF30JBN EQ 'PRD*'"
Note: For reliability and scalability purposes, the use of a single connection is not
recommended.
"SYSTEM"
"NAME":"SSD1"
"DESTINATION"
"INDEX":"zsyslog"
"TYPE":"TCP"
"IPADDRESS":"192.168.1.1"
"PORT":"9997"
"SSL":"YES"
"KEYRING":"/u/sdf/mykey.kdb"
"PASSWORD":"MyKey"
"STASH_FILE":""
"CERTIFICATE":"CASPLUNK1"
"IPADDRESS":"192.168.1.2"
"PORT":"9998"
"SSL":"YES"
"KEYRING":"/u/sdf/mykey.kdb"
"PASSWORD":"MyKey"
"STASH_FILE":""
"CERTIFICATE":"CASPLUNK2"
"SOURCE"
"DATATYPE":"SYSLOG"
"FILTER":"SSDFFLOG"
"SYSTEM"
"NAME":"SSD2"
"DESTINATION"
"INDEX":"zlindex"
"TYPE":"TCP"
"IPADDRESS":"192.168.2.114"
"PORT":"10007"
"SSL":"NO"
"SOURCE"
"DATATYPE":"LOG4J"
"NAMEDPIPE":"/tmp/.mfxsplunk"
"SYSTEM"
"NAME":"SDF2"
"IMPORT":"ON"
"START DATE":"01/01/2014"
"START TIME":"00:00:00.00"
"END DATE":"01/31/2014"
"END TIME":"23:59:59.99"
"DESTINATION"
"INDEX":"zsmf"
"TYPE":"TCP"
"IPADDRESS":"192.168.1.1"
"PORT":"9999"
"SSL":"NO"
"SOURCE"
"DATATYPE":"SMF"
"VERSION":"Z/OS(02.03.00)"
"SELECT":"SMF14,SMF15,SMF50,SMF80,SMF101,SMF110"
"SELECT":"SMF30"
"SUBTYPE":"4,5"
"INCLUDE":"SMF30JBN,SMF30STN,SMF30CPT,SMF30CPS,
SMF30_TIME_ON_ZIIP,SMF30_TIME_zIIP_ON_CP"
"WHERE":"SMF30JBN EQ 'TST*' OR SMF30JBN EQ 'PRD*'"
"SYSTEM"
"NAME":"SSD3"
"IMPORT":"ON"
"START DATE":"2014-09-25"
"START TIME":"14:04:32,383"
"END DATE":"2014-09-25"
"END TIME":"16:24:32,471"
"DESTINATION"
"INDEX":"log4jtst"
"TYPE":"TCP"
"IPADDRESS":"192.168.1.1"
"PORT":"9999"
"SSL":"NO"
"SOURCE"
"DATATYPE":"LOG4J"
"NAMEDPIPE":"/tmp/.mfxsplunk"
"JAVAHOME":"/usr/lpp/java/J7.0"
"JAVA64BIT":"NO"
"SDFHOME":"/u/wwczxl/ssdfrec"
"FILENAME":"file:/u/wwczxl/ssdfrec/logfiledefault.log"
"SYSTEM"
"NAME":"SSD4"
"IMPORT":"ON"
"START DATE":"2014-09-25"
"START TIME":"14:04:32,383"
"END DATE":"2014-09-25"
"END TIME":"16:24:32,471"
"DESTINATION"
"INDEX":"log4jtst"
"TYPE":"TCP"
"IPADDRESS":"192.168.1.1"
"PORT":"9999"
"SSL":"NO"
"SOURCE"
"DATATYPE":"LOG4J"
"NAMEDPIPE":"/tmp/.mfxsplunk"
"JAVAHOME":"/usr/lpp/java/J7.0"
"JAVA64BIT":"NO"
"SDFHOME":"/u/wwczxl/ssdfrec"
"FILENAME":"//'WWCZXL.LOGSIMP'"
"TIMESTAMP":"EEE, d MMM yyyy HH:mm:ss Z"
"SYSTEM"
"NAME":"SSD5"
"IMPORT":"ON"
"START DATE":"2014-09-25"
"START TIME":"14:04:32,383"
"END DATE":"2014-09-25"
"END TIME":"16:24:32,471"
"DESTINATION"
"INDEX":"log4jtst"
"TYPE":"TCP"
"IPADDRESS":"192.168.1.1"
"PORT":"9999"
"SSL":"NO"
"SOURCE"
"DATATYPE":"LOG4J"
"NAMEDPIPE":"/tmp/.mfxsplunk"
"JAVAHOME":"/usr/lpp/java/J7.0"
"JAVA64BIT":"NO"
"SDFHOME":"/u/wwczxl/ssdfrec"
"FILENAME":"//'WWCZXL.LOGSIMP'"
"TIMESTAMP":"EEE, d MMM yyyy HH:mm:ss Z"
"PATTERN":"LEVEL TIMESTAMP MESSAGE"
"SYSTEM"
"NAME":"SSD6"
"DESTINATION"
"INDEX":"zsyslog"
"TYPE":"TCP"
"IPADDRESS":"192.168.1.1"
"PORT":"9997"
"SSL":"YES"
"KEYRING":"/u/sdf/mykey.kdb"
"PASSWORD":"MyKey"
"STASH_FILE":""
"CERTIFICATE":"CASPLUNK1"
"IPADDRESS":"192.168.1.2"
"PORT":"9998"
"SSL":"YES"
"KEYRING":"/u/sdf/mykey.kdb"
"PASSWORD":"MyKey"
"STASH_FILE":""
"CERTIFICATE":"CASPLUNK2"
"SOURCE"
"DATATYPE":"SYSLOG"
"FILTER":"MSG001"
"TRANSLATE_TABLE":"SSDF1141"
"DESTINATION"
"INDEX":"value"
"TYPE":"TCP"
"IPADDRESS":"value"
"PORT":"value"
"KEEP_ALIVE":"100" > Applies only to this IP/PORT
"SSL":"NO"
"IPADDRESS":"value"
"PORT":"value"
"KEEP_ALIVE":"260" > Applies only to this IP/PORT
"SSL":"NO"
"IPADDRESS":"value"
"PORT":"value"
"KEEP_ALIVE":"0" > KEEP_ALIVE turned off for this IP/PORT
"SSL":"NO"
"IPADDRESS":"value"
"PORT":"value"
"SSL":"NO" > Use the System KEEP_ALIVE - not specified for this IP/PORT
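The precedence shown above (a per-connection KEEP_ALIVE overrides the system-level value, and "0" turns keep-alive off for that connection) can be sketched as follows. The 180-second system value and the resolve_keep_alive helper are illustrative assumptions, not Ironstream internals:

```python
def resolve_keep_alive(system_default, connection_value):
    """Return the effective keep-alive interval for one IP/PORT pair.

    connection_value is the "KEEP_ALIVE" string coded for this IP/PORT,
    or None when the parameter was omitted; "0" turns keep-alive off.
    Sketch only; this is not the actual Ironstream implementation.
    """
    if connection_value is None:
        return system_default                # fall back to the system value
    seconds = int(connection_value)
    return seconds if seconds > 0 else None  # None means keep-alive is off

# The four DESTINATION connections above, assuming a system value of 180:
print(resolve_keep_alive(180, "100"))  # 100: per-connection override
print(resolve_keep_alive(180, "0"))    # None: keep-alive turned off
print(resolve_keep_alive(180, None))   # 180: system KEEP_ALIVE used
```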
This chapter contains instructions for starting and stopping the basic Ironstream data type
forwarders, like SMF and Syslog, as well as for the IDT, DCE, and NMC components:
Topics:
• “Controlling Ironstream Forwarders” on page 8-1
• “Controlling the Ironstream Desktop (IDT)” on page 8-3
• “Controlling the Data Collection Extension (DCE)” on page 8-4
• “Controlling the Network Monitoring Components (NMC)” on page 8-7
The SDF0601I message in Figure 8-1 that precedes the remaining responses to the STATUS
command is not written to the requesting console.
Starting DCE
The started task JCL for DCE has the following PROC statement:
The MEMBER variable resolves to a parameter on the EXEC PGM=SSDFDCE statement
and identifies the name of the configuration file in the DCEPARM data set. The
ZEN parameter works the same way, and TRACE can be ignored for a routinely
started DCE.
The USS and RMF parameters can have one of the following start-type values:
• HOT
• WARM
• COLD
Each start type affects RMF and USS differently. The effects and implications
of each start-up parameter are described in Table 8-1. Read these instructions
in conjunction with “Starting RMF for the First Time” on page 8-5 and “Starting
USS for the First Time” on page 8-6.
Each DCE start type has a different impact, as described in the following table.
Table 8-1: DCE Start Types
Start Type   Data Type   Explanation
HOT USS USS processes the configuration that is saved in its entirety
to zFS each time changes are applied with IDT. Processing is
resumed from the status information that was saved when
the task previously ended.
HOT start is the default set in the DCE procedure by the
configurator.
RMF RMF restarts collection using the previously set filters. No
discovery of any new resources is done.
HOT start is the default set in the DCE procedure generated
by the configurator.
WARM USS USS processes the configuration from the configuration file,
ignoring previous changes made with IDT.
RMF RMF restarts collection using the previously set filters and
performs a discovery to identify any more resources that
have been added since the last COLD/WARM start.
COLD USS USS processes the configuration from the z/OS member and
disregards all previous USS file offload activity. This type is
normally for recovery since the base z/OS configuration is
used and every eligible USS file is re-offloaded.
RMF RMF deletes all previously set filters and metrics and
reverts to “factory settings” for filters. RMF performs
discovery for metrics and rebuilds the knowledge base.
Stopping DCE
Use the regular z/OS stop command to stop DCE:
P sdfdce
EE Monitor
There are two start-up options for EE Monitor:
• VERIFY option (default)
• COLD option
The start-up option is a symbolic parameter in the started task JCL and the default is
VERIFY. Start EE Monitor with a regular z/OS start command:
S sdfzem
Use the regular z/OS command to stop the EE Monitor:
P sdfzem
COLD should only be used if product maintenance requires it, or if the topology DIV needs to
be initialized. Refer to the full set of ZEN product documentation for further information.
FTP Control
The only start-up parameter is the VTAM ACB name, which is specified in the started task
JCL. Use the regular z/OS start and stop parameters.
IP Monitor
The IP monitor has at least two started tasks: one for the IP monitor and one for the
Dataspace Manager for each TCP/IP stack. In all cases, start-up parameters are coded in the
started task JCL.
Use the regular z/OS start and stop commands for the IP monitor and each Dataspace
Manager.
OSA Monitor
The default start-up option WARM is coded in the started task JCL. Use the regular z/OS
start and stop commands.
UNIX Server
The UNIX Server has no start-up options. Use the regular z/OS start and stop commands.
This chapter describes how Ironstream allows you to dynamically reconfigure a running
instance that is collecting SMF data.
Topics:
• “Overview”
• “Dynamic Reconfiguration Limitations”
• “How Ironstream Performs a Dynamic Change in the Current Configuration”
• “Dynamic Reconfiguration Commands”
• “Dynamic Reconfiguration Procedure”
• “Messages Issued by Dynamic Reconfiguration”
Overview
A running Ironstream instance collecting SMF records can have its existing SOURCE
configuration parameters dynamically modified without having to stop and restart the
instance. This includes switching from a currently-SELECTed SMF record type to another
record type, changing/adding/deleting any supported subtypes, and also modifying the SMF
filtering parameters via the INCLUDE and WHERE statements. Dynamic
reconfiguration eliminates the need to stop your running tasks, preventing the
loss of records from unnecessary downtime while gathering SMF data.
You can dynamically modify most of the "DATATYPE":"SMF" parameters described in “SMF
Subparameters” on page 7-21 while an instance of Ironstream is running. Then execute the
reconfiguration commands described in “Dynamic Reconfiguration Commands” on page 9-3.
It is important to note that if any other configuration parameter changes are made when
dynamically changing SMF type parameters – such as adding another TCP connection – the
changes are verified as correct by the reconfigure commands, but are not implemented until
that Ironstream instance is restarted. There are additional initial limitations in the current
version that are outlined in “Dynamic Reconfiguration Limitations” on page 9-2.
Command Notes:
• When a valid RECONFIGURE command is issued for either VALIDATE or EXECUTE,
the configuration is printed in the SYSPRINT file.
• When an invalid configuration change is encountered, the RECONFIGURE command
will not change the current configuration.
For information about all Ironstream commands, see Chapter 25, “Ironstream Commands”.
"SOURCE"
"DATATYPE":"SMF"
"SELECT":"SMF14,SMF15"
"SOURCE"
"DATATYPE":"SMF"
"SELECT":"SMF30"
"SUBTYPE":"4,5"
"INCLUDE":"SMF30JBN,SMF30STN,SMF30CPT,SMF30CPS,
SMF30_TIME_ON_ZIIP,SMF30_TIME_zIIP_ON_CP"
"WHERE":"SMF30JBN EQ 'TST*' OR SMF30JBN EQ 'PRD*'"
This chapter describes how to configure Ironstream to minimize the loss of forwarded
Splunk data due to extended network or Splunk outages.
Note: Ironstream DLP uses the Splunk Indexer Acknowledgment function and therefore
cannot be configured to work with Elastic destinations.
Topics:
• “Overview”
• “Ironstream System Requirements for Using DLP”
• “Configuring Ironstream DLP Parameters”
• “Configuring Splunk Parameters”
• “Best Practices When Using DLP”
• “Messages Issued by DLP”
Overview
Data Loss Prevention (DLP) is an optional Ironstream feature that minimizes data loss
through the use of IBM’s Coupling Facility’s System Logger functions, combined with the
use of Splunk’s Indexer Acknowledgement feature.
Data received by Ironstream is logged in the coupling facility and is only deleted from there
once a positive acknowledgement has been received from the Splunk indexer. The use of the
coupling facility allows Ironstream to continue to collect and retain data during extended
network or Splunk outages.
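The log-then-acknowledge flow described above can be sketched as follows, with a simple queue standing in for the coupling-facility log stream and a callback standing in for the Splunk indexer acknowledgment. This is a conceptual sketch only, not the System Logger implementation:

```python
from collections import deque

def forward_with_dlp(records, acknowledged_by_indexer):
    """Sketch of the DLP flow: every record is retained (a queue stands
    in for the coupling-facility log stream) and deleted only after the
    Splunk indexer acknowledges it. Conceptual sketch only; the real
    feature uses the z/OS System Logger, not an in-memory queue."""
    logged = deque(records)                  # persisted before forwarding
    while logged:
        if acknowledged_by_indexer(logged[0]):
            logged.popleft()                 # ack received: safe to delete
        else:
            break                            # outage: retain for later retry
    return list(logged)                      # records still held in the log

# During an outage nothing is acknowledged, so nothing is lost:
print(forward_with_dlp(["rec1", "rec2"], lambda r: False))  # ['rec1', 'rec2']
```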
Unless DLP is explicitly activated via Ironstream configuration file changes, there are no
modifications required to existing Ironstream configuration files.
Note: In this release of Ironstream, DLP cannot be configured to work with the Data
Collection Extension’s USS file monitoring data source.
• To connect to a log stream with an authorization level of READ, the caller must have
read access to RESOURCE(log_stream_name) in SAF class CLASS(LOGSTRM).
• To connect to a log stream with an authorization level of WRITE, the caller must have
alter access to RESOURCE(log_stream_name) in SAF class CLASS(LOGSTRM).
If SAF is not available or if CLASS(LOGSTRM) is not defined to SAF, no security checking is
performed. In that case, the caller is connected to the log stream with the requested or
default AUTH parameter value.
"DESTINATION"
"DATA_LOSS_PREVENTION":"YES"
"DLP_STREAM_NAME":"value"
"INDEX":"value"
"TYPE":"HTTP" (DLP requires either HTTP or HTTPS)
"IPADDRESS":"value"
"PORT":"value"
"TOKEN":"value" (DLP value provided by Splunk when defining
the HTTP or HTTPS port on the Splunk indexer)
"SSL":"NO" (Must be set to YES when TYPE is HTTPS)
Important! The SYSTEM-level IMPORT data parameter is not compatible with DLP.
Ironstream verifies whether both parameters are configured; if they are,
message SDF0190A is generated and the Ironstream instance is terminated.
For detailed descriptions of all DLP-related configuration file parameters, refer to “Data
Loss Prevention Parameters” on page 7-13.
Topics:
• “Overview of Filter Modules” on page 11-1
• “Syslog Message Filtering” on page 11-2
You can also select messages by specifying a 3- or 4-character message prefix
using the CHAR3=(3-byte prefix list) and CHAR4=(4-byte prefix list) parameters.
SELECT, CHAR3 and CHAR4 are all optional parameters. Parameter values are separated
by commas and any number can be specified.
The specified parameters are processed in the order in which they are specified and there is
no checking for duplicates or overlaps (such as those illustrated in the following example):
SSDFFLOG SELECT(ACF2,CICS,DB2),CHAR3=(ACF,CIA,DSN), *
CHAR4=($HAS,DSNL)
In this example, ACF2 builds table entries for messages prefixed ACF and CIA, while DB2
builds an entry for message prefix DSN. The selection table will therefore have two entries
for each message prefix because the values in the CHAR3 parameter duplicate both ACF
and CIA. This will not cause any problems in the message selection processing, but it will
consume slightly more CPU time unnecessarily.
The DSNL value will never be matched since the message prefix matches the DSN specified
in the CHAR3 list which is processed first.
You should therefore be aware of the way in which the message prefixes are processed when
you define the message prefixes that are to be selected for forwarding to a destination.
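The first-match behavior described above can be sketched as follows. The table contents mirror the SSDFFLOG example (only the prefixes the text names; any entries contributed by CICS are omitted), and first_match is an illustrative helper, not the actual filter code:

```python
def first_match(prefix_table, message_id):
    """Scan the selection table in the order it was built and return the
    first prefix that matches the start of the message ID, or None.
    Sketch of the first-match behavior only, not Ironstream code."""
    for prefix in prefix_table:
        if message_id.startswith(prefix):
            return prefix
    return None

# Entries in processing order: ACF2 contributes ACF and CIA, DB2 contributes
# DSN, then the CHAR3 list repeats all three, then the CHAR4 list follows.
table = ["ACF", "CIA", "DSN", "ACF", "CIA", "DSN", "$HAS", "DSNL"]
print(first_match(table, "DSNL001I"))  # DSN: the DSNL entry is never reached
```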
This chapter describes how to configure field-level filtering for SMF record types, either by
manually creating control records in the Ironstream configuration file or by using the
GUI-based “SMF Filter Configuration Builder” in the Ironstream Desktop GUI.
Topics:
• “Overview of SMF Record Filtering” on page 12-2
• “Gathering SMF Data” on page 12-3
• “Supported SMF Record Types” on page 12-4
• “Manually Defining SMF Filtering Configurations” on page 12-7
• “Limiting SMF Record Selection with WHERE Search Conditions” on page 12-7
• “Using the SMF Filter Configuration Builder in IDT” on page 12-13
• “Using the READ Command to Share SMF Filter Configurations” on page 12-22
• “Implementing a Custom CICS Monitor Dictionary in Ironstream” on page 12-23
• “Sample SMF Filter Configurations” on page 12-31
This example expands on the previous one by adding a test for any zIIP processing:
"INCLUDE":"SMF30JBN,SMF30STN,SMF30CPT,SMF30CPS,
SMF30_TIME_ON_ZIIP,SMF30_TIME_zIIP_ON_CP"
"WHERE":"SMF30JBN EQ 'TST*' OR SMF30JBN EQ 'PRD*' AND
SMF30_TIME_ON_ZIIP GT 0"
Parameter Description
NULL=TRUE/FALSE An optional directive, as described in “NULL Processing Command”
on page 12-11.
field1 The name of a field defined by the SMF DSECT for that record.
operator Comparison operators: EQ, NE, GT, GE, LT, LE, IN, and NOTIN, as
described in “WHERE Search Conditions” on page 12-9.
'string' A string of characters and numbers enclosed in single
quotes. The maximum length is 2037 bytes. Wildcards are supported.
See “‘string’ Operand” on page 12-9.
number An integer value that is a string of digits not enclosed in quotes. The
maximum size is 19 digits with no commas or decimal points. See
“Number Operand” on page 12-10.
hexadecimal A string of up to 32 hexadecimal character pairs in single quotes,
preceded by an X. See “Hexadecimal Operand” on page 12-10.
field2 Another included SMF field name in the same record.
Date A date enclosed in single quotes and preceded by a D, as in
D'mm/dd/yyyy'. See “Date Operand” on page 12-10.
Comparison Operators
The following comparison operators are case-sensitive and must be specified as shown:
• EQ – Equal
• NE – Not equal
• GT – Greater than
• GE – Greater than or equal
• LT – Less than
• LE – Less than or equal
• IN – Returns a TRUE condition when a string in the (list) operand matches the
fieldname value
• NOTIN – Returns a TRUE condition when no string in the (list) operand matches the
fieldname value
Comparison Operands
The following comparison operands can be used to further filter WHERE clauses.
‘string’ Operand
A string of characters and numbers enclosed in single quotes, as in
'A STRING OF LETTERS, BLANKS AND NUMBERS 0123456789'. The maximum length is
2037 bytes.
Wildcards can be used with character strings. A ‘string’ operand with wildcards can be used
only with the EQ and NE comparators.
• * – Asterisk matches zero or more characters
Hexadecimal Operand
A string of up to 32 hexadecimal character pairs, enclosed in single quotes and preceded by
an X, as in: X'0123456789ABCDEF'. The maximum size is 32 pairs of digits.
Number Operand
An integer value that is a string of digits not enclosed in quotes. The maximum
size is 19 digits with no commas or decimal points. Values less than 2³¹ are
evaluated as 4-byte binary values. Values greater than 2³¹ are evaluated as
8-byte binary values. Comparing integer values against SMF fields of unequal
lengths produces erroneous results; compare fields of unequal lengths using a
hexadecimal constant instead.
Date Operand
Defines a date enclosed in single quotes and preceded by a D, as in: D'mm/dd/yyyy', where
mm is the month in the range of 01–12; dd is the day in the range 01–31 (as appropriate for
the month); and yyyy is the year in the range of 1900–9999.
The specified date is converted into a form consistent with the date when the record was
moved into the SMF buffer, in the form 0cyydddF (where c is 0 for 19xx and 1 for 20xx; yy is
the current year (0–99); ddd is the current day (1–366); and F is the sign).
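Under the stated format, the conversion of a D'mm/dd/yyyy' operand to the 0cyydddF form can be sketched as follows (shown as a hex-digit string for readability; smf_date is an illustrative helper, not Ironstream code):

```python
from datetime import date

def smf_date(mm, dd, yyyy):
    """Convert a D'mm/dd/yyyy' operand to the 0cyydddF SMF date form:
    c is 0 for 19xx and 1 for 20xx, yy the two-digit year, ddd the day
    of the year, and F the sign. Returned as a hex-digit string for
    readability. Sketch only."""
    century = 0 if yyyy < 2000 else 1
    day_of_year = date(yyyy, mm, dd).timetuple().tm_yday
    return f"0{century}{yyyy % 100:02d}{day_of_year:03d}F"

print(smf_date(1, 31, 2014))  # 0114031F: 31 Jan 2014 is day 031 of the year
```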
Time Operand
Defines a time of day enclosed in single quotes and preceded by a T, as in: T'hh:mm:ss.th',
and converts it into a time of day in hundredths of a second, which is consistent with most
SMF time fields. A time value where hh is the hours in the range 00–23; mm is the minutes
in the range 00–59; ss is the seconds in the range of 00–59; and t is tenths of a second and h
is hundredths of a second in the range of 00–99.
The time can be specified truncated as T'hh', T'hh:mm', T'hh:mm:ss', or fully as
T'hh:mm:ss.th'. Missing digits are defaulted to zeros.
The Time operand can be used in two ways: as a “time of day” or as an “elapsed
time”. This is possible because the specified time is converted into hundredths
of a second since midnight.
These four examples result in the same records being selected; namely, all SMF type 30
records that show an elapsed zIIP time of greater than one second:
"WHERE":"SMF30_TIME_ON_ZIIP GT T'00:00:01.00'"
"WHERE":"SMF30_TIME_ON_ZIIP GT T'00:00:01'"
"WHERE":"SMF30_TIME_ON_ZIIP GT 100"
"WHERE":"SMF30_TIME_ON_ZIIP GT X'00000064'"
These three examples, by contrast, demonstrate using the “time of day” to
select jobs that start after 13:30. The first example uses the full time format,
while the second and third use truncated forms; all three result in the same
selection of jobs.
"WHERE":"SMF30SIT GT T'13:30:00.00'"
"WHERE":"SMF30SIT GT T'13:30:00'"
"WHERE":"SMF30SIT GT T'13:30'"
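The conversion described above, including the default-to-zero handling of truncated forms, can be sketched as follows; smf_time is an illustrative helper, not Ironstream code. It reproduces the equivalences in the examples above (T'00:00:01.00' is 100 hundredths):

```python
def smf_time(operand):
    """Convert a T'hh[:mm[:ss[.th]]]' operand to hundredths of a second
    since midnight, the unit used by most SMF time fields.
    Missing components default to zero. Sketch only."""
    body = operand[2:-1]                     # strip the T'...' wrapper
    hhmmss, _, th = body.partition(".")
    hh, mm, ss = (int(p) for p in (hhmmss.split(":") + ["0", "0"])[:3])
    return (hh * 3600 + mm * 60 + ss) * 100 + int(th or 0)

print(smf_time("T'00:00:01.00'"))  # 100
print(smf_time("T'13:30'"))        # 4860000
```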
• NULL=FALSE indicates a search condition with a missing operand field will always
evaluate to a FALSE condition. For example, when you don’t want the SMF data to be
forwarded to a destination unless all search conditions are satisfied.
The NULL=TRUE/FALSE parameter can be specified before any search condition and will be in
effect until another NULL parameter is encountered. Its precedence is strictly left to right.
In the following example, if SMF30JBN or SMF30_TIME_ON_ZIIP is not in the record being
processed, then the search condition for each will evaluate as TRUE. And if
SMF30_TIME_ON_CP or SMF30JBN is not in the record, those search conditions will evaluate as
FALSE.
"WHERE":"NULL=TRUE
(SMF30JBN EQ 'TST*' AND (SMF30_TIME_ON_ZIIP GT T'00:00:01.00'
OR NULL=FALSE
SMF30_TIME_ON_CP GT 0)) OR SMF30JBN EQ 'PRD*'"
When examined in order of precedence, only SMF30_TIME_ON_CP would result in a FALSE
condition.
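The per-condition behavior can be sketched as follows: a condition whose field is missing from the record simply takes the NULL=TRUE/FALSE value in effect at that point. The record contents and the eval_condition helper are illustrative assumptions, not Ironstream code:

```python
def eval_condition(record, field, predicate, null_default):
    """Evaluate one WHERE search condition against an SMF record dict.
    When the field is absent from the record, the condition evaluates
    to the NULL=TRUE/FALSE setting in effect at that point. Sketch only."""
    if field not in record:
        return null_default
    return predicate(record[field])

record = {"SMF30_TIME_ON_CP": 0}  # hypothetical record with no SMF30JBN field
# Under NULL=TRUE the missing SMF30JBN condition evaluates to TRUE:
print(eval_condition(record, "SMF30JBN", lambda v: v.startswith("TST"), True))
# Under NULL=FALSE the same missing-field condition evaluates to FALSE:
print(eval_condition(record, "SMF30JBN", lambda v: v.startswith("TST"), False))
```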
Figure 12-1. Sample IDT JCL for SMF Filter Configuration Panels
SMFDICT DD
The SMF Filter Configuration Builder gets the SMF record type layout from the Ironstream
data dictionary that is packaged with Ironstream V2.1. The dictionary must be a PDS or
PDSE data set with a fixed block format (RECFM=FB) and a record length of 255 bytes.
Sample JCL to create the dictionary is provided in the your_hlq.SSDFSAMP member,
IDTSMFAL. The data format in the dictionary is XML and the your_hlq.SSDFSAMP
member, IDTSMFAL, executes a batch RECEIVE to create the member DEFAULT.
You can use the New Configuration button on the SMF Filter Configuration panel to
create new configurations by either copying the default SMFDICT (read-only) dictionary or
by using a previously-customized filter configuration as a template. The New
Configuration operation adds a new configuration in the SMFDICT dictionary that can be
reloaded, customized, and re-saved whenever necessary. At the same time, it creates a
member in the SMFOUT data set with SELECT statements in it for other Ironstream
instances to use.
Whenever the Save operation is executed, both the dictionary and SMFOUT data set
member are updated. If you manually change a member in the SMFOUT data set, the IDT
does not parse the change, and instead uses the member in the dictionary for input.
SMFOUT DD
The SMF Filter Configuration panels create standard Ironstream control records in the
data set specified by SMFOUT DD, which you must create as a PDS or PDSE with fixed-
block, 80-byte records. When you use the SMF Filter Configuration Builder in IDT to select
the SMF records and fields that you want to forward, clicking Save prompts you for a name,
which is the member name that is created in the SMFOUT data set. You must manually add
this saved configuration syntax to the SMF forwarder configuration file.
Table 12-4 describes the controls available on the Configuration CONFNAME panel.
Table 12-4: Configuration CONFNAME Panel Controls
Panel 12-5 illustrates a customized Configuration BOBDYLAN panel with the filtering
completely switched off for SMF Type 04. For SMF Type 16, filtering is switched off for
subtypes 1–3, as well as for the ICESUBSY, ICEMVSES, ICEMVSXA, and ICEMVS37
fields.
STEP010
Sample JCL is provided as STEP010 in the Ironstream SAMPLIB member SSDFGDIC.
//SYSUT2 DD UNIT=SYSDA,
// SPACE=(TRK,(75,75))
//SYSUT3 DD UNIT=SYSDA,
// SPACE=(TRK,(75,75))
//SYSPRINT DD SYSOUT=*
//SYSLIN DD DSN=&&OBJ(SSDFCICD),
// UNIT=3390,
// SPACE=(TRK,(10,10,10)),
// DISP=(NEW,PASS,DELETE),
// DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
//STEP030 EXEC PGM=HEWLH096,
// PARM='MAP,LIST,LET,XREF,NCAL'
//SYSLIB DD DSN=<IRONSTREAM LOADLIB>,
// DISP=SHR
//SYSLMOD DD DSN=<IRONSTREAM LOADLIB>,
// DISP=SHR
//SYSUT1 DD UNIT=SYSDA,
// DCB=BLKSIZE=1024,
// SPACE=(1024,(200,20))
//SYSPRINT DD SYSOUT=*
//SYSLIN DD DSN=&&OBJ(SSDFCICD),
// DISP=SHR
// DD *
ENTRY SSDFCICD
NAME SSDFCICD(R)
//STEP040 EXEC PGM=ASMA90
//SYSIN DD DSN=&&STATMNT2,
// DISP=(OLD,DELETE,DELETE)
//SYSLIB DD DSN=&&SYSLIB,
// DISP=(NEW,DELETE,DELETE),
// SPACE=(CYL,(3,1,1),RLSE),
// UNIT=3390,
// DSORG=PS,
// RECFM=FB,
// LRECL=80,
// BLKSIZE=80,
// STORCLAS=SC2,
// DSNTYPE=BASIC
//SYSUT1 DD UNIT=SYSDA,
// SPACE=(TRK,(75,75))
//SYSUT2 DD UNIT=SYSDA,
// SPACE=(TRK,(75,75))
//SYSUT3 DD UNIT=SYSDA,
// SPACE=(TRK,(75,75))
//SYSPRINT DD SYSOUT=*
//SYSLIN DD DSN=&&OBJ(SSDFCICX),
// UNIT=3390,
// SPACE=(TRK,(10,10,10)),
// DISP=(NEW,PASS,DELETE),
// DCB=(RECFM=FB,LRECL=80,BLKSIZE=8000)
//STEP050 EXEC PGM=HEWLH096,
// PARM='MAP,LIST,LET,XREF,NCAL'
//SYSLIB DD DSN=<IRONSTREAM LOADLIB>,
// DISP=SHR
//SYSLMOD DD DSN=<IRONSTREAM LOADLIB>,
// DISP=SHR
//SYSUT1 DD UNIT=SYSDA,
// DCB=BLKSIZE=1024,
// SPACE=(1024,(200,20))
//SYSPRINT DD SYSOUT=*
//SYSLIN DD DSN=&&OBJ(SSDFCICX),
// DISP=SHR
// DD *
ENTRY SSDFCICX
NAME SSDFCICX(R)
Ironstream resolves this dilemma by creating unique field names when duplicate nicknames
are recognized. The SDF7015I message indicates that a duplicate nickname was found in
the dictionary record and a new nickname has been created to take its place. The new
nickname will be used by Ironstream to create a JSON pair to reference the data originally
represented by the duplicate nickname. Follow-on processing will be required to use the new
unique name and not the duplicate name.
hh:mm:ss SDF7015I Nickname <duplicate field name> renamed as <unique field name>
to allow unique JSON Pair name values
Messages SDF7020-SDF7023 report the number of times each input statement
(SYSIN record) caused a modification to the output created by SSDFGDIC. Review
these counts carefully to ensure each input command had its desired effect.
The following messages complete the listing by describing dictionary records read
(SDF7012I), dictionary records skipped as a duplicate record of a region (SDF7013I), and the
number of assembler statements created (SDF7014I).
hh:mm:ss SDF7012I nnnnn SMF RECORDS READ
hh:mm:ss SDF7013I nnnnn SMF RECORDS SKIPPED
hh:mm:ss SDF7014I nnnnn DICTIONARY CARDS CREATED
SDF7007I END TIME - hh:mm:ss.th DATE - mm/dd/yyyy
"SOURCE"
"DATATYPE":"SMF"
"SELECT":"SMF014,SMF015"
"SELECT":"SMF030"
"SUBTYPE":"4,5"
"INCLUDE":"SMF30WID,SMF30SIT,
SMF30STP,SMF30STD,
SMF30PNM,SMF30USR,
SMF30OSL,SMF30GRP,
SMF30SYN,SMF30RUD,
SMF30SYP,SMF30TID,
SMF30JBN,SMF30CPT,
SMF30PGM,SMF30CPS,
SMF30STM,SMF30ICU,
SMF30UIF,SMF30ISB,
SMF30JNM,SMF30ASR,
SMF30STN,SMF30ENC,
SMF30CLS,SMF30DET,
SMF30_TIME_ON_ZIIP,SMF30CEP,
SMF30_ENCLAVE_TIME_ON_ZIIP,SMF30_DEPENC_TIME_ON_ZIIP,
SMF30DDN,
SMF30BLK"
Note that this sample demonstrates:
• Multiple SMF record types can be selected at once, if there are no SUBTYPE or
INCLUDE commands for them.
• Commands do not have to start in column 1.
• Continuation of a record can be indicated by a trailing comma.
• Continuation records signalled by a comma do not have to start in a particular column.
With the addition of WHERE clause statements in V2.1, field-level filtering can be further
refined by setting field search conditions to select records only when all search conditions
are met. Refer to “Limiting SMF Record Selection with WHERE Search Conditions” on
page 12-7 for examples of WHERE clause usage.
"SOURCE"
"READ":"SAMPLIB(ALLSMF)"
where the READ command points to an ALLSMF member in SSDFSAMP that has control
statements to select all the supported record types.
This chapter describes how to configure the SYSOUT data forwarder to select and forward
data sets to a destination.
Topics:
• “Using the SYSOUT Forwarding Function” on page 13-1
• “SYSOUT Selection and Filtering” on page 13-4
• “Using the Advanced PRINT Data Block Parameters” on page 13-13
• “SYSOUT Forwarding Parameter Examples and Sample Output” on page 13-15
Note that the JES system data sets (JESMSGLG, JESJCL, JESYSMSG, JESJCLIN) and
in-stream SYSIN are forwarded as a block, ignoring the RECFM definition of JES or SYSIN.
• PRINT_WRAP controls the insertion or otherwise of a new line escape sequence at the
end of each line read from JES and concatenated into the print block.
• PRINT_SEND modifies the default behavior of Ironstream regarding the point at which
it forwards its data block to a destination.
• PRINT_MAXIMUM_LINE_COUNT suspends forwarding of a SYSOUT data set by
specifying a maximum forwarding count.
These facilities are provided to cater for unusual print line/page formats so that they are
easier to process in a destination, to override the normal default of waiting until a full page
of data is available before forwarding it, or to cope with data in JES that is folded in such a
way that wrapping occurs at a column rather than a word break (such as Java log data).
See the section “Using the Advanced PRINT Data Block Parameters” on page 13-13 for full
details of the way in which these parameters are used.
For example:
• A specification of "MYJOB*" selects all jobs whose first five characters are ‘MYJOB’.
• A specification of "MYJOB%%*" requires job names to be at least 7 bytes long,
beginning with "MYJOB" and having any two characters following "MYJOB".
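The two wildcard characters can be sketched with a simple translation to a regular expression; wildcard_match is an illustrative helper, and Ironstream's own matcher may differ:

```python
import re

def wildcard_match(pattern, name):
    """Match a selection pattern where '*' stands for any number of any
    character and '%' for exactly one character. Sketch only; the
    Ironstream matcher itself may differ."""
    regex = "".join(
        ".*" if ch == "*" else "." if ch == "%" else re.escape(ch)
        for ch in pattern
    )
    return re.fullmatch(regex, name) is not None

print(wildcard_match("MYJOB*", "MYJOBXYZ"))   # True: any suffix accepted
print(wildcard_match("MYJOB%%*", "MYJOBAB"))  # True: two characters follow
print(wildcard_match("MYJOB%%*", "MYJOBA"))   # False: only one character follows
```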
Jobs may also be selected using a variety of other criteria such as batch jobs, started tasks,
APPC jobs, held jobs, and so on. Further, the FILTER parameter enables jobs to be selected
according to a range of criteria such as class, destination ID, priority, service class and so on.
For a complete list see the section “Selection and Filter Keywords and Rules” on page 13-6.
Once job selection has been made, you can select data sets within the selected job(s) using
the data set selection parameters provided:
"SELECT_DATA_SET_IN_JOB_WITH_DDNAME":"ddname"
"SELECT_DATA_SET_IN_JOB_WITH_PROCSTEP":"procstep"
"SELECT_DATA_SET_IN_JOB_WITH_STEPNAME":"stepname"
Data sets are selected for forwarding when the given DD name, EXEC step name, or PROC
step name matches the name in the selected job.
As with job selection, for each case you can use the asterisk ‘*’ wildcard character to
represent ‘any number of any character’, and the percent symbol ‘%’ to represent ‘any one of
any character’. The default value for each of these parameters is ‘*’ so all data sets for the
specified job(s) are selected by default.
You can use multiple sets of SELECT_JOB and (optionally) SELECT_DATA_SET
parameters in combination with one another.
See the section “SYSOUT Forwarding Parameter Examples and Sample Output” on
page 13-15 for several examples of possible ways in which the SELECT parameters can be
used.
Filtering Criteria
Once basic job selection has been done, you can further filter the jobs to be forwarded
according to one or more filter criteria.
You can filter jobs based on a keyword using this parameter:
"FILTER_JOB_ON_filter_keyword":"filter_keyword_value"
The filter_keyword has many possible settings that provide great flexibility in the ways that
jobs can be selected. Examples of filter fields are: class, destination, owner, priority, service
class, subsystem and so on.
As with the SELECT parameters, the FILTER parameters must also conform to the
required sequence. See the following section for a complete list given in the required order.
There can be up to eight job names (or JOBNAME wild-card values) that can be excluded
from the set of jobs the JES matched to the SELECT_JOB_IF_JOBNAME values. This
statement must immediately follow the SELECT_JOB_IF_JOBNAME statement.
Up to eight OUTCLASS values can be used to exclude SYSOUT data sets from processing
that were selected by matching other selection criteria. This statement must follow the
FILTER_JOB_ON_OUTCLASS statement. However, OUTCLASS is not required for
EXCLUDE_JOB_WHEN_OUTCLASS to be recognized.
Note: When excluding specific job names from wild-carded job selections, the
EXCLUDE filtering statement must directly follow the JOBNAME select statement:
"EXCLUDE_JOB_IF_JOBNAME":"jobname"
Exclude the specified job name(s) from the wild-carded job selection. Up to
eight job names can be specified as value1,value2, ... ,value8.
Once the job selection process is complete, each spool data set is examined for a match on DD
name, PROC step name and/or EXEC step name. When Ironstream finds a match it forwards
the corresponding spool data set to a destination.
For each keyword you can use the asterisk ‘*’ and the percent symbol ‘%’ wildcard
characters. The default value for each of these parameters is ‘*’ so all data sets for the
specified job(s) are selected by default. A maximum of eight characters in each case may be
specified.
Value Description
INPUT Job is active in input processing.
WTCONV Job is queued for conversion.
CONVERT Job is actively converting.
VOLWT Job is queued for setup (not currently used).
SETUP Job is active in setup (not currently used).
SELECT Job is queued for execution.
ONMAIN Job is actively executing.
SPIN JES2 is processing spin data sets for the job.
WTBKDN Job is queued for output processing.
BRKDWN Job is active in output processing.
OUTPT Job is on the hard copy queue.
WTPURG Job is queued for purge.
PURGE Job is currently being purged.
RECEIVE Job is active on an NJE SYSOUT receiver.
WTXMIT Job is queued for execution on another NJE node.
XMIT Job is active on an NJE job transmitter.
EXEC Job has not completed execution (combines multiple requests).
POSTEX Job has completed execution (combines multiple requests).
Value Description
NOSUB No sub-chain exists.
FSSCI Job is active in conversion/interpretation in an FSS address
space.
PSCBAT Job is awaiting post-scan (batch).
PSCDSL Job is awaiting post-scan (demand select).
FETCH Job is awaiting volume fetch.
VOLWT Job is awaiting start setup.
SYSSEL Job is awaiting or active in MDS system selection.
ALLOC Job is awaiting resource allocation.
VOLUAV Job is awaiting unavailable volume(s).
VERIFY Job is awaiting volume mount(s).
SYSVER Job is awaiting or active in MDS system verification.
ERROR Job encountered an error during MDS processing.
SELECT Job is awaiting selection on main.
ONMAIN Job is scheduled on main.
BRKDWN Job is awaiting breakdown.
RESTART Job is awaiting MDS restart processing.
DONE Main and MDS processing complete for job.
OUTPT Job is awaiting output service.
OUTQUE Job is awaiting output service writer.
OSWAIT Job is awaiting RSVD services.
CMPLT Output service complete for job.
DEMSEL Job is awaiting selection on main (demand select job).
EFWAIT Ending function request waiting for I/O completion.
EFBAD Ending function request not processed.
MAXNDX Maximum request index value.
"PRINT_WRAP":"LEAVE" | "UNFOLD"
This parameter can be used to modify the way that Ironstream inserts new line
characters ‘\n’ at the end of each print line as it adds them to the current data
block in storage.
LEAVE – The default. Each line of print data read from JES is added with a new
line escape sequence appended.
UNFOLD – A specification of UNFOLD has the opposite effect, concatenating the
line directly with the preceding line without any new line being inserted. This
means that the lines appear in a destination as one long line.
Note: Splunk can be set to Format/Wrap/Yes to make the data easier to read.
One impact of choosing all the default options for log4j data is that it will not arrive at the
destination in near real-time, since Ironstream will be waiting for the New Page carriage
control or the 150-line limit to be reached in its internal data block. Should it be important
that the data arrives near real-time, you could choose instead to specify LINE for the
PRINT_MODE parameter leaving the other two advanced options parameters to default.
This would cause the data to be forwarded line-by-line, but bear in mind that it has the
disadvantage that wrapped lines cannot be searched across the wrap point in most
destinations, and in Splunk they are also seen last-line first.
An alternative would be to choose the ANY_ADVANCE specification for the PRINT_MODE
parameter leaving the other two advanced options parameters to default. Since this causes
Ironstream to complete each data block at any page advance in the JES output (with the
exception of single line spacing), Ironstream would send the log4j data to a destination at
each “pseudo page” boundary. Although not as rapid as the LINE setting, data would appear
in a destination faster than using the default value, but again this has the disadvantage that
wrapped lines cannot be searched across the wrap point in most destinations, and in Splunk
they are also seen last-line first.
You could mitigate the disadvantage in each of the above two possible configurations by
using the UNFOLD option for the PRINT_WRAP parameter, and EOF or JAVA for the
PRINT_SEND parameter. Even though using these options would mean that wrapped lines
would be merged to create long lines in most destinations, it is possible to aid viewing in
Splunk by using the Format/Wrap/Yes option.
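As an example, a SOURCE section combining the options discussed above might look like the following. The DATATYPE value and overall layout are illustrative only; the parameter names are those described in this chapter:

```
[SOURCE]
"DATATYPE":"SYSOUT"
"PRINT_MODE":"ANY_ADVANCE"
"PRINT_WRAP":"UNFOLD"
"PRINT_SEND":"EOF"
```

This combination forwards data at each pseudo page boundary while merging wrapped lines, so they can be searched across the original wrap point.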
The principles outlined in this section can be applied to other scenarios in which the print
data forwarded to a destination does not appear in the way you would expect or desire. By
examining the format of the data as it is created in JES by the application, you should be
able to choose a combination of advanced options that improves its appearance in Splunk and
thereby the ease with which it can be examined.
Forwarded Data
{"MFSOURCETYPE":"SYSOUT","SYSNAME":"GD01","SMFID":"GD01",
"DATETIME":"2018-08-02 06:28:33.15 -0400","JOBNAME":"SDFPRTJ","JOBID":"JOB08113",
"PROCSTEP":"","STEPNAME":"PRINT","DDNAME":"SYSUT2","SYSNAME_WHERE_ACTIVE":"",
"DSID":"1",{"Id":"ABC01","TranID":"A9902","Logtime":12,"Archive":"Yes","Status":"Complete"}}
{"MFSOURCETYPE":"SYSOUT","SYSNAME":"GD01","SMFID":"GD01",
"DATETIME":"2018-08-02 06:28:35.00 -0400","JOBNAME":"SDFPRTJ","JOBID":"JOB08114",
"PROCSTEP":"","STEPNAME":"PRINT","DDNAME":"SYSUT2","SYSNAME_WHERE_ACTIVE":"",
"DSID":"1",{"Id":"ABC01","TranID":"B3200","Logtime":34,"Archive":"Yes","Status":"Complete"}}
{"MFSOURCETYPE":"SYSOUT","SYSNAME":"GD01","SMFID":"GD01",
"DATETIME":"2018-08-02 06:28:39.22 -0400","JOBNAME":"SDFPRTJ","JOBID":"JOB08115",
"PROCSTEP":"","STEPNAME":"PRINT","DDNAME":"SYSUT2","SYSNAME_WHERE_ACTIVE":"",
"DSID":"1",{"Id":"AMC05","TranID":"A9902","Logtime":15,"Archive":"No","Status":"Complete"}}
This chapter describes the optional Network Monitoring components that enable alert
monitoring and SyslogD forwarding.
Topics:
• “Overview of Network Management Components” on page 14-2
• “Configuring ZEN for Ironstream” on page 14-2
• “ZEN Component Alert Generation” on page 14-3
• “The OSA MONITOR (ZOM)” on page 14-3
• “The LINUX MONITOR (ZLM)” on page 14-7
• “FTP CONTROL (ZFC)” on page 14-8
• “The EE MONITOR (ZEM)” on page 14-10
• “The IP MONITOR (ZIM)” on page 14-13
• “Routing SyslogD Messages to Ironstream” on page 14-19
• “More Information about Alerts and SyslogD Forwarding” on page 14-20
The support in Ironstream for receiving alerts and SyslogD data from ZEN is configured by
specifying a source type of ‘XCF’ in the SOURCE section of the configuration file. Also in the
SOURCE section the XCFGROUP and XCFMEMBER parameters must be specified to
define the XCF group and member name that Ironstream should join to receive the data
from ZEN. See “XCF Subparameters” on page 7-25 for the description of these parameters.
The first table lists MIB Performance Thresholds which when exceeded cause the ‘Out Of
Range’ alert to be issued; this issues message ZOM4002I as well as sending the alert to ZEN:
Table 14-1: MIB Performance Threshold Alerts
MIB Name Low High OOTB?
ibmOSAExpChannelPCIBusUtil1Min 0 90 Yes
ibmOSAExpChannelProcUtil1Min 0 90 Yes
ibmOSAExpV2PerfProcUtil1Min 0 90 Yes
zomOSAExcessiveQDepth 0 90 Yes
Below is the table of MIBs which cause the ‘Status Change’ alert to be issued if the OSA
MONITOR detects that the MIB has changed state. Message ZOM4004I is issued when this
is the case as well as the alert being sent to ZEN:
Table 14-2: MIB Status Change Alerts
MIB Name OOTB?
ifOperStatus Yes
ibmOsaExpEthLanTrafficState Yes
ibmOsaExpEthDisabledStatus Yes
ibmOsaExpTRLanTrafficState Yes
ibmOsaExpTRDisabledStatus Yes
ibmOsaExpTRRingStatus Yes
ibmOsaExpTRRingState Yes
ibmOsaExpTRRingOpenStatus Yes
ibmOsaExpATMLanTrafficState Yes
ibmOsaExpATMDisabledStatus Yes
ibmOsaExpATMClientCurrentState Yes
ibmOsaExp10GigEthLanTrafficState Yes
ibmOsaExp10GigEthDisabledStatus Yes
ibmOsaExp3LanTrafficState Yes
ibmOsaExp3DisabledStatus Yes
ibmOsaExp5SLanTrafficState Yes
ibmOsaExp5SDisabledStatus Yes
There are two ‘special’ alerts issued when the OSA MONITOR directly detects a problem
with an OSA: either the OSA is congested, or it is not responding at all:
Table 14-3: OSA MONITOR-Generated Alerts
The first condition causes the ‘Queue Congested’ alert to be issued and message ZOM4005I
is issued at the same time. For the second condition, the ‘OSA Not Responding’ alert is
issued and message ZOM4001I is issued at the same time.
Below is the table of MIBs which cause the ‘New Errors Encountered’ alert to be issued if the
OSA MONITOR detects that the MIB counter has increased in value since the last sample
was taken. Message ZOM4003I is issued when this is the case as well as the alert being sent
to ZEN:
Table 14-4: MIB Overall Counter Increase Alerts
For each of the OSA MONITOR alerts, the full details of the MIB field involved, counter
value, or range exceeded is provided.
Key Points:
• The Alert Monitor is activated by default in the OSA MONITOR so all enabled alerts
will be issued.
• The following alerts are issued if the default settings in the ‘Rules and Alerts’ panel
remain as they are at installation time:
▪ The four forms of the ‘MIB Performance Threshold’ alert issued when a pre-defined
MIB value falls outside the range 0 to 90 percent
▪ The ‘OSA Not Responding’ alert
▪ All of the ‘Status Change’ alerts
▪ The ‘Queue Congested’ alert
▪ None of the forms of the ‘New Errors Encountered’ alerts will be issued without
activating the appropriate alert definition from the ‘OSA Alert Rules’ panel accessed
using the ZEN drop-down menu option Admin/OSA Admin/Rules and Alerts.
Key Points:
• All but two of the FTP CONTROL alerts will be issued as soon as the component is
active and intercepting FTP activity.
• To activate the two alerts not activated at start-up requires RACF definitions to be
created that secure the FTP Server and Client. You can find a description of this process
in the ZEN FTP CONTROL Configuration and Reference manual, in “Chapter 5
Implementing FTP Security”, section “Lock Down the FTP Server and Client”.
• Note that currently no alerts related to client-FTP activity (that is, outbound FTP from
z/OS) are issued.
Key Points:
• Only selected alerts are issued ‘out-of-the-box’. These are indicated in the table above.
• Inactive alerts are enabled using Monitoring Profiles that are assigned to systems using
Profile Assignments. Both the Monitoring Profiles and the Profile Assignments panels
are accessed from the Admin/SNA Admin drop-down menu from the ZEN Desktop.
• Although default monitoring profiles are provided and the default profile for APPN
monitoring is active for all systems by default, it is highly likely that these will require
review before being used.
• Alerts for Session Awareness Monitoring require EE MONITOR configuration changes
(and possibly also some z/OS changes) as well as updates to the Monitoring Profiles and
Profile Assignments.
• The alert relating to EE HPR gap packets will only be able to be issued when the
PktTrace configuration parameter is set to ‘On’ or ‘Start’.
• Message alerts are only issued when the EE MONITOR is run as a PPO.
• The alert ‘No CP-CP connections to a Netid’ is disabled by default in its ZEN alert
definition. Therefore, irrespective of whether it is enabled in the EE MONITOR, it will
not be issued until this default is changed. This is the only EE MONITOR alert that
requires this action.
Key Points:
• Only a few of the IP MONITOR’s alerts are issued ‘out-of-the-box’ as indicated in the
rightmost column of the table above.
• Alerts that are not issued out-of-the-box are generated by separate monitoring tasks
each of which may require configuration using the appropriate Administration function
of the IP MONITOR’s VTAM interface in order to activate the required alerts.
Specifically:
▪ Services/Ports
▪ Interfaces
▪ Hosts
▪ Subnets
▪ Gateways
▪ Thresholds
▪ Alerts
▪ Monitored MIBs
▪ TN3270 Settings
▪ FTP Settings
▪ EE Settings
▪ OSPF Settings
▪ NTT Settings
▪ X.25 Settings
▪ System Monitors
▪ Telnet Services
Key Points:
• No SyslogD messages are eligible for routing to Ironstream by default.
• SyslogD requires configuration to route messages to ZEN.
• After providing suitable configuration parameters to both ZEN and Ironstream, further
configuration work is required to set up the filter definitions in ZEN to include all
required SyslogD messages.
• There may be some TCP/IP configuration to do (for example to remove any port
reservation on the port used by ZEN to receive SyslogD messages).
This chapter of the manual is provided so that you are aware of which alerts are issued
‘out-of-the-box’ for each of the ZEN components. At the same time it provides an overview of
what needs to be done in each ZEN component to activate the alerts that are optional and
also how to access the SyslogD filter panel to set up message inclusion filters.
Full details of all of the configuration options outlined in this chapter are available in the
appropriate Configuration and Reference or Administration manual for the ZEN component
concerned and/or in the ZEN Help system.
This chapter describes how to configure DB2 table data for forwarding to a destination.
Topics:
• “Overview of DB2 Tables” on page 15-1
• “Configuration for DB2 Table Data” on page 15-1
DB2 Definitions
Member SSDFTRIG in the SDFSAMP data set contains sample DDL to create a DB2 insert
trigger. Member SSDFPROC in the SDFSAMP data set contains sample DDL to create a
stored procedure.
Sample SSDFTRIG
The DB2 trigger builds a single VARCHAR parameter that is passed to the stored
procedure. The parameter concatenates an Ironstream token, followed by one or more
name/value pairs corresponding to columns in the table. The parameter must conform to the
following rules:
• A two-part name specifying the Ironstream parameter SSDFDB2TOKEN and a
16-character token name. The parameter value in the sample SSDFTRIG is:
'"SSDFDB2TOKEN":"nnnnnnnnnnnnnnnn"'
The entire parameter must be enclosed in single quotes and each of the two parts must
be enclosed in double quotes separated by a colon.
• A concatenated list of the name/value pairs for each column value to be passed to a
destination, enclosed in curly brackets { and }. Each name/value pair must be enclosed
in single quotes and each of the two parts must be enclosed in double quotes separated
by a colon.
• At least one name/value pair must include a TIMESTAMP as the value. If your data
does not include one, this can be accomplished as shown in the sample SSDFTRIG.
Use appropriate DB2 scalar functions to convert values in non-character columns. The
sample SSDFTRIG shows the CHAR function for a DATE column, and the combination of
CHAR and DIGITS functions for DECIMAL and INTEGER/SMALLINT column types.
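Putting the rules above together, the assembled VARCHAR parameter passed to the stored procedure would look something like the following. The column names ORDERNO and ORDERDATE and their values are hypothetical illustrations; the sample SSDFTRIG remains the authoritative DDL:

```
"SSDFDB2TOKEN":"nnnnnnnnnnnnnnnn"{"ORDERNO":"0000001234","ORDERDATE":"2018-08-02","TIMESTAMP":"2018-08-02-06.28.33.150000"}
```

Here DIGITS has zero-padded the INTEGER column to ten digits, and CHAR has converted the DATE and TIMESTAMP values to character form.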
Sample SSDFPROC
The DB2 stored procedure calls the Ironstream utility SSDFDB2F. SSDFDB2F takes the
DB2 data, locates an instance of Ironstream that is active for the DB2 data identified by the
token and then forwards the data to it. SSDFDB2F assigns "MFSOURCETYPE":"DB2TABLE" to
all records it forwards.
The stored procedure's input parameter must be declared large enough to contain the
parameter string that is passed by the trigger.
The DB2 stored procedure address space that is set up to run SSDFDB2F must be
authorized, that is, all libraries in the STEPLIB JCL must be APF authorized. You may
decide to have a WLM environment that is dedicated to Ironstream for this reason.
The same stored procedure can be called by multiple triggers.
[SOURCE]
"DATATYPE":"DB2TRIGGER"
**************************************************************
* THE DB2TOKEN VALUE IS USED TO ASSOCIATE THE DB2 TRIGGER *
* STORED PROCEDURE TO THIS INSTANCE OF IRONSTREAM. *
* THE VALUE SPECIFIED MUST BE 16 CHARACTERS. *
**************************************************************
"DB2TOKEN":"abcdefghijklmnol"
For detailed descriptions of the DLP parameters, refer to the “Data Loss Prevention
Parameters” section in Chapter 7, “Manually Setting Ironstream Parameters”.
Topics:
• “Capturing Sequential Data” on page 16-1
• “Sequential File Forwarding Example” on page 16-2
• “Batching FILELOAD Data” on page 16-3
"KEYS"
"KEY":"0123456789ABCDEF"
"SYSTEM"
"NAME":"SSD9"
"FILELOAD":"FILE0001"
"FILELOAD":"FILE0002"
"DESTINATION"
"INDEX":"zfile"
"TYPE":"TCP"
"IPADDRESS":"192.168.1.1"
"PORT":"9997"
"IPADDRESS":"192.168.1.2"
"PORT":"9998"
"SSL":"YES"
"KEYRING":"/u/sdf/mykey.kdb"
"PASSWORD":"MyKey"
"STASH_FILE":""
"CERTIFICATE":"CASPLUNK2"
"SOURCE"
"DATATYPE":"FILELOAD"
"TRANSLATE_TABLE":"SSDF1141"
"BATCH_AND_BREAK_ON_CONDITION":"<character string>"
"BATCH_COUNTER":"NO" | "YES"
"BATCH_RECORDS":"YES" | "NO"
Data forwarded by an Ironstream instance in a set of batch steps is sent to a destination in
each event where the BATCH keyword is sent. The additional keyword is "STEP" and is
assigned the relative number of the step of the batch job invoking Ironstream.
As an example, if Ironstream were to be invoked in the first step of a batch job, the keyword
would be set to "STEP":"1". The maximum value for STEP is 255. This value should be used
with the BATCH value to uniquely identify batches of data.
If instead you were to use the batching facility, but specifying NO for the
BATCH_RECORDS statement and YES for the BATCH_COUNTER statement, the records
would be sent as individual records to the destination but with the Batch metadata added as
illustrated below:
Table 16-2: FILELOAD Batching Example 2
This chapter describes how to capture z/OS LPAR system performance metrics to send to a
destination.
Topics:
• “Overview” on page 17-1
• “Configuring Ironstream for System State Forwarding” on page 17-2
• “System-level Data Fields Forwarded to Destinations” on page 17-2
Overview
The SYSTEMSTATE data source captures z/OS LPAR system performance metrics.
• This data source provides low overhead data collection that gets information from
system control blocks every few seconds and sends this state data as a record to a
destination.
• System state information includes metrics like MSUs, CPU utilization, storage
utilization, and paging rates.
• Ironstream produces a dashboard with metrics, including the 4HRA (four-hour rolling
average) of MSUs for each LPAR mapped against the CEC-defined MSU capacity.
• Optionally, the TRANSLATE_TABLE parameter can be used to specify a user-defined
EBCDIC to ASCII translate table.
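A minimal SOURCE section for this data source might look like the following. The DATATYPE value shown is an assumption for illustration; the TRANSLATE_TABLE value is the sample table name used elsewhere in this manual:

```
[SOURCE]
"DATATYPE":"SYSTEMSTATE"
"TRANSLATE_TABLE":"SSDF1141"
```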
This chapter describes how to configure the Ironstream API to capture application
information for analysis and visualization.
Topics:
• “Overview of the Ironstream API” on page 18-2
• “Defining the IRONSTREAM_API Data Type” on page 18-3
• “Using the Single-send API” on page 18-5
• “Using the Multi-send API” on page 18-12
• “Troubleshooting the Ironstream API” on page 18-17
• “Ironstream API Coding Examples” on page 18-21
System Requirements
The Ironstream API requires the availability of a Monoplex or Sysplex Cross System
Coupling Facility (XCF) service. For more information, see the MVS Programming: Sysplex
Services Guide or contact your system programmer.
"CLASS":"nnn" - At least one CLASS entry is required. The nnn value can be any value in
the range of 128–255. Values in the range of 1–127 are reserved for use by
Ironstream.
"TYPE":"ttt" - Optionally, you can define one or more TYPE entries under a CLASS entry.
The ttt value can be any value in the range of 1-255.
"SUBTYPE":"sss" - Optionally, you can define one or more SUBTYPE entries under a TYPE
entry. The sss value can be any value in the range of 1-255.
Parameter Description
NUMPARM 4-byte integer field with the number of parameters.
This value must be at least eight, the number of
mandatory parameters. If the optional CCSID
parameter is passed, the parameter count must be nine.
REQUEST 4-byte character string of "SEND".
CLASS Identifies the Ironstream instance running with the
same class.
1-byte value in the range 128–255.
TYPE Identifies the Ironstream instance running with the
same CLASS and TYPE combination.
1-byte value in the range of 1-255. A value of 0
indicates no type is assigned.
SUBTYPE Identifies the Ironstream instance running with the
same CLASS, TYPE, and SUBTYPE combination.
1-byte value in the range of 1-255. A value of 0
indicates no type is assigned.
DATA Address of the EBCDIC or ASCII data to be sent.
Maximum length is 384K.
LENGTH 4-byte integer field with the length of the data to be
sent.
RETCODE 4-byte field in which the return code is stored.
REASON 4-byte field in which the reason code is stored.
CCSID 4-byte field that contains the CCSID code.
CCSID optionally specifies whether the data is
ASCII or EBCDIC. When specified, the parameter
count must equal nine. Valid CCSID values are:
• 0037, 1047, 0500, and 1141 are supported for
EBCDIC to ASCII translation.
• 0437 (ASCII) – When specified, Ironstream will
not translate ASCII, but will instead perform the
translation based on the TRANSLATE_TABLE
parameter specified in the Ironstream job.
For more information on translation tables, see
“"TRANSLATE_TABLE":"table"” on page 7-17.
Note: All the above are 4-byte address pointers (for 31 bit) or 8-byte address pointers (for 64
bit).
For descriptions of the SSDFAPI parameter formats, see “Single-send API Parameter List
Format” on page 18-9.
If the caller is in 64-bit mode, all parameters are passed as 64-bit addresses. If the caller is
not in 64-bit mode, all parameters are passed as 31-bit addresses.
Table 18-2: Ironstream API Environment
Register Conventions
When calling the SSDFAPI routine, the following registers must be set.
Table 18-3: Ironstream API Input Registration
Convention Description
R1 Contains the address of the parameter list.
R13 Contains the address of an 18-word save area
(AMODE31) or 36-word save area (AMODE64).
R14 Contains the return address.
R15 Contains the address of the SSDFAPI routine.
On return from the SSDFAPI routine, the registers are set as follows:
Table 18-4: Ironstream API Output Registration
Convention Description
R0–R1 Used as work registers.
R2–R13 Unchanged.
R14 Used as work register.
R15 Return code.
Address Description
+0 Fullword that contains the total count of mandatory
parameters, plus any optional parameters.
+4 4-byte string that contains the SEND request.
+8 4-byte string that contains the CLASS value.
+12 4-byte string that contains the TYPE value.
+16 4-byte string that contains the SUBTYPE value.
+20 The start of the EBCDIC or ASCII data to send.
+24 Fullword that contains the length of the data.
+28 Fullword that on return is set with the return code.
+32 Fullword that on return is set with the reason code.
+36 (Optional) Fullword that specifies the CCSID code
page number of the collected data. If this option is
not specified, the user data is treated as EBCDIC.
//STUBLIB DD DISP=SHR,DSN=hlq.SSDF011.R14.LOADLIB
//SYSLIN DD *
.
.
.
INCLUDE STUBLIB(SSDFAPI)
.
.
.
NAME YOURPGM(R)
/*
INIT Request
The INIT request initializes the Multi-send API environment and finds the Ironstream
targets through the Ironstream configuration file’s CLASS/TYPE/SUBTYPE parameters. If
the initialization is successful, the Multi-send API will return a HANDLE that must be
supplied on all subsequent calls to the Multi-send API using the environment created. The
handle is a pointer to a list of all the target Ironstream instances (or a name/token pair) for
the CLASS/TYPE/SUBTYPE combination. RACF authorization is performed during the
INIT process.
CALL SSDFPAPI,(NUMPARM,REQUEST,CLASS,TYPE,SUBTYPE,TOKEN,RETCODE,RSNCODE)
SEND Request
The SEND request is the Multi-send API call used to send data to the Ironstream instances.
Callers can call the SEND routine multiple times by using the HANDLE returned by the
INIT request.
CALL SSDFPAPI,(NUMPARM,REQUEST,TOKEN,DATA,LENGTH,RETCODE,RSNCODE,CCSID)
All of the above parameters are mandatory except CCSID. Only ASCII and EBCDIC CCSID
values are allowed. This is similar to the Single-send API.
TERM Request
The TERM request is used to terminate the Multi-send API environment established with
the INIT request. This request will free any storage or resources allocated.
CALL SSDFPAPI,(NUMPARM,REQUEST,TOKEN,RETCODE,RSNCODE)
Note: All the above are 4-byte address pointers (for 31-bit) or 8-byte address pointers (for
64-bit).
For descriptions of the Multi-send SSDFPAPI parameter formats, see “Multi-send API
Parameter List Format” on page 18-15.
Address Description
+0 Fullword that contains the total count of mandatory
parameters, plus any optional parameters.
+4 4-byte string that contains the INIT request.
+8 4-byte string that contains the CLASS value.
+12 4-byte string that contains the TYPE value.
+16 4-byte string that contains the SUBTYPE value.
+20 Fullword that is empty when calling the Multi-send API.
On return from the stub module, it contains the address
(TOKEN) that must be used on subsequent SEND requests.
+24 Fullword that on return is set with the return code.
+28 Fullword that on return is set with the reason code.
Address Description
+0 Fullword that contains the total count of mandatory
parameters, plus any optional parameters.
+4 4-byte string that contains the SEND request.
+8 Fullword whose value is populated by the stub module after
successful completion of INIT. (TOKEN)
+12 The start of the EBCDIC or ASCII data to send.
+16 Fullword that contains the length of the data.
+20 Fullword that on return is set with the return code.
+24 Fullword that on return is set with the reason code.
+28 Fullword (optional) that specifies the CCSID code page
number of the collected data (0037 for EBCDIC and 0437 for
ASCII). If not specified, the data is treated as EBCDIC.
Address Description
+0 Fullword that contains the total count of mandatory
parameters, plus any optional parameters.
+4 4-byte string that contains the TERM request.
+8 Fullword that contains the token address (TOKEN).
+12 Fullword that on return is set with the return code.
+16 Fullword that on return is set with the reason code.
//STUBLIB DD DISP=SHR,DSN=hlq.SSDF011.R14.LOADLIB
//SYSLIN DD *
.
.
.
INCLUDE STUBLIB(SSDFPAPI)
.
.
.
NAME YOURPGM(R)
/*
** RETURN CODE 12, REASON CODE 22 - ZERO VALUE PASSED FOR LENGTH PARAMETER
Explanation: The LENGTH parameter cannot have a value of zero.
Action: Supply a non-zero value for the LENGTH parameter.
** RETURN CODE 4, REASON CODE 1 - ONE OR MORE INSTANCES OF IRONSTREAM NOT FOUND
DURING SEND PROCESS
Explanation: One or more of the Ironstream instances that matched the specified
CLASS/TYPE/SUBTYPE combination at INIT time could not be found during the SEND process.
Action: Verify whether all your Ironstream instances are running.
CALL SSDFAPI,(NUMPARM,REQUEST,CLASS,TYPE,SUBTYPE,DATA,LENGTH,-
RETCODE,RSNCODE,CCSID)
LTR R15,R15
BZ CALLOK
*********************************************************************
* ERROR OCCURRED, R15 HAS THE RETURN CODE *
* RETCODE AND RSNCODE WILL HAVE RETURN AND REASON CODE TOO *
*********************************************************************
LR R5,R15
WTO 'USER API ERROR IN ASM PROGRAM',ROUTCDE=(11)
B CLEANUP
*********************************************************************
* SENDING DATA SUCCESSFUL, FREE THE STORAGE WE GET AT THE BEGINNING *
*********************************************************************
CALLOK DS 0H
LR R5,R15
WTO 'USER API ASM PROGRAM SUCCESSFULLY FINISHED',ROUTCDE=(11+
)
*
CLEANUP DS 0H
*
LR R15,R5
PR ,
*
*
REQUEST DC CL4'SEND'
DATA DC CL40'TEST DATA BEING SENT TO SPLUNK USING ASM'
LENGTH DC F'40'
RETCODE DC F'0'
RSNCODE DC F'0'
CLASS DC X'80' ONLY SUPPORT FROM X'80' TO X'FF' FOR USER API
TYPE DC X'01' TYPE CAN BE FROM X'00' TO X'FF' FOR USER API
SUBTYPE DC X'01' SUBTYPE CAN BE FROM X'00' TO X'FF'
CCSID DC F'0037'
NUMPARM DC F'9'
**************************************************************
* *
* REGISTER EQUATES *
* *
**************************************************************
R0 EQU 0
R1 EQU 1
R2 EQU 2
R3 EQU 3
R4 EQU 4
R5 EQU 5
R6 EQU 6
R7 EQU 7
R8 EQU 8
R9 EQU 9
R10 EQU 10
R11 EQU 11
R12 EQU 12
R13 EQU 13
R14 EQU 14
R15 EQU 15
END BIBDASM1
LA R5,TYPE
ST R5,PARMADDR+12
LA R5,SUBTYPE
ST R5,PARMADDR+16
LA R5,DATA
ST R5,PARMADDR+20
LA R5,LENGTH
ST R5,PARMADDR+24
LA R5,RETCODE
ST R5,PARMADDR+28
LA R5,REASON
ST R5,PARMADDR+32
LA R5,CCSID
ST R5,PARMADDR+36
CALL SSDFAPI,(NUMPARM,REQUEST,CLASS,TYPE,SUBTYPE, -
DATA,LENGTH,RETCODE,REASON,CCSID),MF=(E,PARMADDR)
LTR R15,R15
BZ CALLOK
*********************************************************************
* ERROR OCCURRED, R15 HAS THE RETURN CODE *
* RETCODE AND RSNCODE WILL HAVE RETURN AND REASON CODE TOO *
*********************************************************************
LR R5,R15
WTO 'USER API ERROR IN ASM PROGRAM',ROUTCDE=(11)
B CLEANUP
*********************************************************************
* SENDING DATA SUCCESSFUL, FREE THE STORAGE WE GET AT THE BEGINNING *
*********************************************************************
CALLOK DS 0H
LR R5,R15
WTO 'USER API ASM PROGRAM SUCCESSFULLY FINISHED',ROUTCDE=(11+
)
*
CLEANUP DS 0H
LA R0,WORKLEN
FREEMAIN RC,LV=(R0),A=(R8)
*
LR R15,R5
PR ,
DROP R8,R12
*
*
MYCLASS EQU X'80' ONLY SUPPORT FROM X'80' TO X'FF' FOR USER API
MYTYPE EQU X'01' TYPE CAN BE FROM X'01' TO X'FF' FOR USER API
MYSUBTP EQU X'01' SUBTYPE CAN BE FROM X'01' TO X'FF'
MYCCSID DC F'037' CCSID EBCDIC (0037) OR ASCII (0437)
MYREQUST DC CL4'SEND'
MYDATA DC CL40'TEST DATA BEING SENT TO SPLUNK USING ASM'
*********************************************************************
* TEMPORARY WORK AREA DSECT *
*********************************************************************
SAVEWORK DSECT
NUMPARM DS F
REQUEST DS F
DATA DS CL80
LENGTH DS F
RETCODE DS F
REASON DS F
CLASS DS XL1
TYPE DS XL1
SUBTYPE DS XL1
CCSID DS F
PARMADDR CALL ,(NUMPARM,REQUEST,CLASS,TYPE,SUBTYPE,DATA,LENGTH, -
RETCODE,REASON,CCSID),MF=L
WORKLEN EQU *-SAVEWORK
**************************************************************
* *
* REGISTER EQUATES *
* *
**************************************************************
R0 EQU 0
R1 EQU 1
R2 EQU 2
R3 EQU 3
R4 EQU 4
R5 EQU 5
R6 EQU 6
R7 EQU 7
R8 EQU 8
R9 EQU 9
R10 EQU 10
R11 EQU 11
R12 EQU 12
R13 EQU 13
R14 EQU 14
R15 EQU 15
END BIBDASM2
PR ,
*
*
REQUEST DC CL4'SEND'
DATA DC CL40'TEST DATA BEING SENT TO SPLUNK USING ASM'
LENGTH DC F'40'
RETCODE DC F'0'
RSNCODE DC F'0'
CLASS DC X'80' ONLY SUPPORT FROM X'80' TO X'FF' FOR USER API
TYPE DC X'01' TYPE CAN BE FROM X'00' TO X'FF' FOR USER API
SUBTYPE DC X'01' SUBTYPE CAN BE FROM X'00' TO X'FF'
CCSID DC F'0037'
NUMPARM DC F'9'
**************************************************************
* *
* REGISTER EQUATES *
* *
**************************************************************
R0 EQU 0
R1 EQU 1
R2 EQU 2
R3 EQU 3
R4 EQU 4
R5 EQU 5
R6 EQU 6
R7 EQU 7
R8 EQU 8
R9 EQU 9
R10 EQU 10
R11 EQU 11
R12 EQU 12
R13 EQU 13
R14 EQU 14
R15 EQU 15
END BIBDAS64
numparm=9;
request= "SEND";
class=0x80;
type=0x01;
subtype=0x01;
ccsid=37;
funcV *stub;
stub = ironstream_load();
if (stub == NULL)
{
printf("Error: Fetch of SSDFAPI Failed\n");
}
else
{
printf("Executing the fetched module\n");
rc = ironstream_send(&numparm,&class,&type,&subtype,data,
&length,&retcode,&rsncode,&ccsid);
if (rc != 0)
printf("C PGM ERROR, RETURN CODE %d, REASON CODE %d\n",
retcode,rsncode);
else
printf("USER API C PROGRAM SUCCESSFULLY FINISHED");
return rc;
}
}
REQUEST
CLASSV
TYPEV
SUBTYPE
DATAV
LENGTHV
RETCODE
RSNCODE
CCSID.
IF (RETCODE EQUAL ZEROS AND RSNCODE EQUAL ZEROS) THEN
DISPLAY "USER STUB COBOL PROGRAM SUCCESSFULLY FINISHED"
ELSE
DISPLAY "COBOL PGM ERROR: "
DISPLAY "RETURN CODE " RETCODE ", REASON CODE " RSNCODE
END-IF.
A200-EXIT.
EXIT.
rc1 = CALL_IRONSTREAM(NUMPARM,REQUEST,CLASS,TYPE,SUBTYPE,DATA,LENGTHA,
,RETCODE,RSNCODE,CCSID)
if (rc1==0) then do
Say 'USER API REXX PROGRAM SUCCESSFULLY FINISHED'
end
EXIT
CALL_IRONSTREAM:
NUMPARM = d2c(arg(1),4)
CLASS = d2c(arg(3),1)
TYPE = d2c(arg(4),1)
SUBTYPE = d2c(arg(5),1)
LENGTHA = d2c(arg(7),4)
RETCODE = d2c(arg(8),4)
RSNCODE = d2c(arg(9),4)
CCSID = d2c(arg(10),4)
RETCODD = c2d(RETCODE,4)
RSNCODD = c2d(RSNCODE,4)
******************************************************************
01 K-LEN PIC 99 COMP VALUE 5 .
01 K-VAL PIC X(05) .
PROCEDURE DIVISION.
MAIN-START.
SET NUMPARM-PTR TO ADDRESS OF NUMPARM.
SET REQUEST-PTR TO ADDRESS OF REQUEST.
SET CLASSV-PTR TO ADDRESS OF CLASSV .
SET TYPEV-PTR TO ADDRESS OF TYPEV .
SET SUBTYPE-PTR TO ADDRESS OF SUBTYPE.
SET DATAV-PTR TO ADDRESS OF DATAV .
SET LENGTHV-PTR TO ADDRESS OF LENGTHV.
SET RETCODE-PTR TO ADDRESS OF RETCODE.
SET RSNCODE-PTR TO ADDRESS OF RSNCODE.
SET CCSID-PTR TO ADDRESS OF CCSID.
A200-SEND-PARA.
MOVE "SEND" TO REQUEST
MOVE "SOURCE1 IRONSTREAM_API FROM COBOL *CICS*" TO DATAV
MOVE 40 TO LENGTHV
PERFORM A400-SEND-PARA.
MOVE 0 TO RETURN-CODE .
EXEC CICS RETURN END-EXEC.
MAIN-END .
STOP RUN.
GOBACK.
A400-SEND-PARA.
EXEC CICS LINK PROGRAM('SSDFCAPI')
COMMAREA(APIPARM) END-EXEC.
IF (RETCODE EQUAL ZEROS AND RSNCODE EQUAL ZEROS) THEN
NEXT SENTENCE
ELSE
DISPLAY "COBOL PGM ERROR: "
DISPLAY "RETURN CODE " RETCODE ", REASON CODE " RSNCODE
END-IF.
A400-SEND-PARA-END.
*********************************************************************
* ERROR OCCURRED, R15 HAS THE RETURN CODE *
* RETCODE AND RSNCODE WILL HAVE RETURN AND REASON CODE TOO *
*********************************************************************
LR R5,R15
WTO 'USER API ERROR IN ASM PROGRAM',ROUTCDE=(11)
B CLEANUP
*********************************************************************
* SENDING DATA SUCCESSFUL, FREE THE STORAGE WE GET AT THE BEGINNING *
*********************************************************************
LHI R10,0
CALLOK DS 0H
**SEND CALL
MVC REQUEST,=CL4'SEND'
CALL SSDFPAPI,(NUMPARM,REQUEST,TOKEN,DATA,LENGTH,RETCODE,RSNC-
ODE,CCSID)
LTR R15,R15
JNZ CALLTERM
L R7,RETCODE
L R8,RSNCODE
AHI R10,1
J CALLOK
*
CALLTERM DS 0H
MVC REQUEST,=CL4'TERM'
CALL SSDFPAPI,(NUMPARM,REQUEST,TOKEN,RETCODE,RSNCODE)
*
LR R5,R15
WTO 'USER API ASM PROGRAM SUCCESSFULLY FINISHED',ROUTCDE=(11)
*
CLEANUP DS 0H
LR R15,R5
PR ,
*
*
REQUEST DC CL4'INIT'
DATA DC CL40'HELLO THIS IS PERSISTENT API TEST'
LENGTH DC F'40'
TOKEN DC F'0'
RETCODE DC F'0'
RSNCODE DC F'0'
CLASS DC X'80' ONLY SUPPORT FROM X'80' TO X'FF' FOR USER API
TYPE DC X'01' TYPE CAN BE FROM X'00' TO X'FF' FOR USER API
SUBTYPE DC X'02' SUBTYPE CAN BE FROM X'00' TO X'FF'
NUMPARM DC F'8'
CCSID DC F'0037'
TIMERID DS F
STIMERMS STIMERM SET,MF=L
**************************************************************
* *
* REGISTER EQUATES *
* *
**************************************************************
R0 EQU 0
R1 EQU 1
R2 EQU 2
R3 EQU 3
R4 EQU 4
R5 EQU 5
R6 EQU 6
R7 EQU 7
R8 EQU 8
R9 EQU 9
R10 EQU 10
R11 EQU 11
R12 EQU 12
R13 EQU 13
R14 EQU 14
R15 EQU 15
END PAPIASM1
*********************************************************************
* ERROR OCCURRED IN INIT PHASE *
*********************************************************************
LR R5,R15
WTO 'USER API ERROR IN INIT CALL',ROUTCDE=(11)
B CLEANUP
**SEND CALL
CALLSEND DS 0H
MVC REQUEST,=CL4'SEND'
CALL SSDFPAPI,(NUMPARM,REQUEST,TOKEN,DATA,LENGTH,RETCODE,RSNC-
ODE,CCSID)
*
LTR R15,R15
BZ CALLTERM
*********************************************************************
* ERROR OCCURRED IN SEND PHASE *
*********************************************************************
LR R5,R15
WTO 'USER API ERROR IN SEND CALL',ROUTCDE=(11)
B CLEANUP
*TERM CALL
CALLTERM DS 0H
MVC REQUEST,=CL4'TERM'
CALL SSDFPAPI,(NUMPARM,REQUEST,TOKEN,RETCODE,RSNCODE)
*
LTR R15,R15
BZ CALLOK
*********************************************************************
* ERROR OCCURRED IN TERM PHASE *
*********************************************************************
LR R5,R15
WTO 'USER API ERROR IN TERM CALL',ROUTCDE=(11)
B CLEANUP
CALLOK DS 0H
LR R5,R15
WTO 'USER API ASM PROGRAM SUCCESSFULLY FINISHED',ROUTCDE=(11)
*
CLEANUP DS 0H
*
LR R15,R5
PR ,
*
*
REQUEST DC CL4'INIT'
DATA DC CL40'TEST DATA BEING SENT FROM ASM 64 BIT'
LENGTH DC F'40'
RETCODE DC F'0'
RSNCODE DC F'0'
TOKEN DC F'0'
CLASS DC X'80' ONLY SUPPORT FROM X'80' TO X'FF' FOR USER API
TYPE DC X'01' TYPE CAN BE FROM X'00' TO X'FF' FOR USER API
SUBTYPE DC X'02' SUBTYPE CAN BE FROM X'00' TO X'FF'
CCSID DC F'0037'
NUMPARM DC F'8'
**************************************************************
* *
* REGISTER EQUATES *
* *
**************************************************************
R0 EQU 0
R1 EQU 1
R2 EQU 2
R3 EQU 3
R4 EQU 4
R5 EQU 5
R6 EQU 6
R7 EQU 7
R8 EQU 8
R9 EQU 9
R10 EQU 10
R11 EQU 11
R12 EQU 12
R13 EQU 13
R14 EQU 14
R15 EQU 15
END PAPIAS64
char subtype;
char *data;
int token;
int length;
int retcode;
int rsncode;
int ccsid;
int rc;
numparm=9;
token = 0;
request= "SEND";
class=0x80;
type=0x01;
subtype=0x02;
ccsid=37;
/* data and length must be set before the SEND call; this payload
   is an illustrative example */
data = "TEST DATA BEING SENT FROM C";
length = 27;
funcV *stub;
stub = ironstream_load();
if (stub == NULL)
{
printf("Error: Fetch of SSDFAPI Failed\n");
}
else
{
rc = ironstream_init(&numparm,&class,&type,&subtype,
&token,&retcode,&rsncode);
if (rc == 0)
printf("INIT completed successfully\n");
else
{
printf("INIT ERROR, RETURN CODE %d, REASON CODE %d\n",
retcode,rsncode);
return rc;
}
rc = ironstream_send(&numparm,&token,data,
&length,&retcode,&rsncode,&ccsid);
if (rc == 0)
printf("SEND completed successfully\n");
else
{
printf("SEND ERROR, RETURN CODE %d, REASON CODE %d\n",
retcode,rsncode);
return rc;
}
rc = ironstream_term(&numparm,&token,&retcode,&rsncode);
if (rc == 0)
printf("TERM completed successfully\n");
else
{
printf("TERM ERROR, RETURN CODE %d, REASON CODE %d\n",
retcode,rsncode);
return rc;
}
return rc;
}
}
A300-SEND-PARA.
MOVE 8 TO NUMPARM
MOVE "SEND" TO REQUEST
MOVE '{"TYP":"TXT","TEXT":"TEST DATA FROM COBOL"}'
TO DATAV
MOVE LENGTH OF DATAV TO LENGTHV
MOVE 37 TO CCSID
MOVE "SSDFPAPI" TO PGM-NAME
CALL PGM-NAME USING NUMPARM
REQUEST
TOKEN
DATAV
LENGTHV
RETCODE
RSNCODE
CCSID.
IF (RETCODE EQUAL ZEROS AND RSNCODE EQUAL ZEROS) THEN
DISPLAY "SEND COMPLETED SUCCESSFULLY"
ELSE
DISPLAY "ERROR IN SEND CALL"
DISPLAY "RETURN CODE " RETCODE ", REASON CODE " RSNCODE
END-IF.
A300-EXIT.
EXIT.
A400-TERM-PARA.
MOVE 5 TO NUMPARM
MOVE "TERM" TO REQUEST
MOVE "SSDFPAPI" TO PGM-NAME
CALL PGM-NAME USING NUMPARM
REQUEST
TOKEN
RETCODE
RSNCODE.
IF (RETCODE EQUAL ZEROS AND RSNCODE EQUAL ZEROS) THEN
DISPLAY "TERM COMPLETED SUCCESSFULLY"
ELSE
DISPLAY "ERROR IN TERM CALL"
DISPLAY "RETURN CODE " RETCODE ", REASON CODE " RSNCODE
END-IF.
A400-EXIT.
EXIT.
EXIT
/* INIT ROUTINE */
CALL_INIT:
NUMPARM = d2c(arg(1),4)
CLASS = d2c(arg(3),1)
TYPE = d2c(arg(4),1)
SUBTYPE = d2c(arg(5),1)
RETCODE = d2c(arg(7),4)
RSNCODE = d2c(arg(8),4)
RETCODD = c2d(RETCODE,4)
RSNCODD = c2d(RSNCODE,4)
/* SEND ROUTINE */
CALL_SEND:
NUMPARM = d2c(arg(1),4)
LENGTHA = d2c(arg(5),4)
RETCODE = d2c(arg(6),4)
RSNCODE = d2c(arg(7),4)
CCSID = d2c(arg(8),4)
RETCODD = c2d(RETCODE,4)
RSNCODD = c2d(RSNCODE,4)
exit RETCODD
end
RETURN RETCODD
/* TERM ROUTINE */
CALL_TERM:
NUMPARM = d2c(arg(1),4)
RETCODE = d2c(arg(4),4)
RSNCODE = d2c(arg(5),4)
RETCODD = c2d(RETCODE,4)
RSNCODD = c2d(RSNCODE,4)
This chapter describes how to modify your log4j configuration files so that Ironstream can collect log4j log records.
Topics:
• “Overview of Log4j” on page 19-1
• “Defining the Log4j Parameters” on page 19-2
• “Sample Log4j Configurations” on page 19-2
• “How to use PATTERN in the Log4j Reader Facility” on page 19-4
Overview of Log4j
On OMVS, you must change your log4j configuration file (log4j.xml or log4j.properties,
log4j2.xml or other files defined in log4j.configurationFile) to collect log4j log records and
send them to a destination.
The SDFAppender is used to collect Log4j 1.x log records on OMVS and send these records to
Ironstream through a named pipe. The SDF2Appender is used to collect Log4j 2.x records.
The named pipe must be unique for each Ironstream instance on your system. The
SDFAppender.class and SDF2Appender.class are included in the jar file in
your_directory/sdf, where your_directory is the zFS root name that you specified in
Panel 6-1, “Initial Configuration,” on page 6-11.
The SDFAppender and SDF2Appender both support all log4j pattern layouts and filters.
Since they append date/time information automatically, you don’t need to add date/time in
the pattern layout.
Note that WebSphere Application Server allows you to configure the log4j logging utility
at the server level or the application level. You must configure the log4j components,
together with Ironstream’s log4j appender, at the application level.
This chapter describes how to configure Ironstream to gather IMS log records, either
synchronously or asynchronously. It also lists and describes the IMS log record fields
supported in this release of Ironstream.
Topics:
• “Overview of IMS Log Record Forwarding” on page 20-2
• “Synchronous IMS Log Gathering” on page 20-3
• “Asynchronous IMS Log Gathering” on page 20-4
• “IMS Log Record Extraction Process” on page 20-5
• “IMS Log Record Processing” on page 20-7
• “IMS Log Record Field Descriptions” on page 20-8
• “Messages Issued by IMS Log Records” on page 20-85
Excluded Functionality
The following functionality is excluded from this release of IMS support in Ironstream:
• Command capture
• Capture from DLI jobs
• Capture from ancillary IMS address spaces (IMS Connect, Common Queue Server)
• Ironstream’s Data Loss Prevention (DLP) function
Asynchronous log record gathering, by contrast, provides the information when the IMS
logs switch, without impact to the IMS online service.
records, you should start this address space before starting IMS.
• Use a different value for the Ironstream [SYSTEM] "NAME" than the one being used for
<ims-id>.
• IMS_SYSTEM_ID must contain the <ims-id>, which is the value used to populate the
"IMSID" JSON pair.
See the “IMS Log Record Extraction Process” on page 20-5 for details about the available
categories.
"IMPORT":"IMSLOG"
"SEQUENCE_START":"hhhhhhhhhhhhhhhh"
"SEQUENCE_END":"hhhhhhhhhhhhhhhh"
…
[DESTINATION]
…
[SOURCE]
"DATATYPE":"IMSLOG"
"IMS_SYSTEM_ID":"%SSID"
"CATEGORY":"MTOMSG,SYSSTAT,TRANSTAT,TRANDET"
// ENDIF
• Use a different value for the Ironstream [SYSTEM] "NAME" than the one being used for
<ims-id>.
• IMS_SYSTEM_ID must contain the <ims-id>, which is the value used to populate the
"IMSID" JSON pair.
Log sequence numbers are optional (they default to the start and end of the current file)
because they are intended for exceptional use (when data has gone missing). If you specify a
log sequence number, it must be 16 hexadecimal characters with leading zeros (this is how
they appear in Ironstream messages).
See the “IMS Log Record Extraction Process” on page 20-5 for details about the available
categories.
System Segment
LU 6.1 Segment
Security Segment
Multi-system Segment
APPC Segment
Notice that the “APPC segment” may alternatively contain OTMA information.
APPC Fields
LUP_USER_ID_IND 1 character Indicates source of userid
OTMA Fields
LUY_LTERM 8 characters transaction
Data Segment
An APPC segment is present on the start record; it is described in “APPC Segment” on
page 20-16.
An APPC segment may be present; it is described in “APPC Segment” on page 20-16.
An APPC segment may be present; it is described in “APPC Segment” on page 20-16.
An APPC segment may be present; it is described in “APPC Segment” on page 20-16.
An APPC segment may be present; it is described in “APPC Segment” on page 20-16.
MQ Segment, if present
MQGET decimal number of MQGET calls Y Y
(End of MQ Segment)
DATETIME    YYYY-MM-DD HH:MM:SS.xx+hhmm   Log record timestamp to 1/100th    Y Y Y
                                          of a second, in UTC with offset
                                          to local time
LOGRC_STCK  YYYY-MM-DD HH:MM:SS.uuuuuu    Log record timestamp in UTC        Y Y Y
LOGRC_SEQ   decimal                       Log record sequence number         Y Y Y
Note that Ironstream does not currently format the FA record extension.
This section provides instructions for setting up the Data Collection Extension (DCE)
parameters and its associated data types.
• “Configuring the DCE Parameters”
• “Setting Up USS File Collection”
• “Setting Up the RMF Data Forwarder”
Chapter 21 Configuring the DCE
Parameters
This chapter covers the global parameters required to activate the basic operation of
Ironstream’s Data Collection Extension (DCE), and the cluster parameters that define a
group of Ironstream systems to forward data to the same destination.
Topics:
• “Overview of DCE” on page 21-1
• “DCE Configuration Files” on page 21-2
• “Syntax for DCE Configuration Parameters” on page 21-5
• “Ironstream Forwarder Configuration Files” on page 21-5
Overview of DCE
The DCE is a component of Ironstream that provides extensions for collection of data from a
variety of sources. It currently provides support for offloading UNIX Systems Services (USS)
files and collecting Resource Measurement Facility (RMF) III metrics.
Each DCE instance has its own configuration file. These configuration files largely contain
system-level parameters; the detailed specifications of the RMF III metrics to be collected
and of the USS files and directories to be monitored are defined using IDT.
This chapter identifies the system level parameters that are common to both RMF III and
USS. You should read this chapter first.
• For details about configuring USS, see “Setting Up USS File Collection,” on page 22-1.
• For details about configuring RMF III, see “Setting Up the RMF Data Forwarder,” on
page 23-1.
DCE requires the availability of a Monoplex or Sysplex Cross System Coupling Facility
(XCF) service. For more information, see the MVS Programming: Sysplex Services Guide or
contact your system programmer.
*---------------------------------------------------------*
* Data Collection Extension for USS *
*---------------------------------------------------------*
Define Globals
InstanceName USS1
Maxthreads 64
*---------------------------------------------------------*
* Define Ironstream Clusters *
*---------------------------------------------------------*
Define IronstreamCluster ClusterU
XCFGroup USSXCFG1 XCFMember USSXCFM1
XCFGroup USSXCFG1 XCFMember USSXCFM2
XCFGroup USSXCFG1 XCFMember USSXCFM3
*---------------------------------------------------------*
*---------------------------------------------------------*
* Data Collection Extension *
*---------------------------------------------------------*
Define Globals
InstanceName RMF1
Maxthreads 4
*---------------------------------------------------------*
* Define Ironstream Clusters *
*---------------------------------------------------------*
Define IronstreamCluster ClusterR
XCFGroup RMFXCFG1 XCFMember RMFXCFM1
*---------------------------------------------------------*
As you can see, the same parameters are specified in both examples; only the parameter
values differ.
The following sections describe the DCE parameters.
• “Global Parameters” on page 21-3
• “Ironstream Cluster Parameters” on page 21-3
• “Include Parameter Group” on page 21-4
Global Parameters
Global parameters follow the mandatory “Define Globals” statement that marks the start of
the global parameter group.
Define Globals
This statement marks the start of the Globals group of parameters; any
parameters that follow it up to the next Define statement or the end of the
configuration file are assumed to be Globals parameters. Parameters in this
group are:
InstanceName instance_name
The InstanceName parameter, which can be up to eight characters long, assigns a
unique name, instance_name, to each instance of DCE on an LPAR. This enables
you to run multiple DCEs in an LPAR should you wish to do so (although this is
not required even with multiple data sources going to multiple Ironstream
systems).
When you run multiple instances of DCE, the drop-down Ironstream menu
available from the Ironstream Desktop enables you to choose the instance for
which you want to access the various display panels.
MaxThreads nn
This parameter defines the maximum number of threads, nn, that can be active
at any one time and thereby determines the maximum number of concurrent
offloads. Any value from 1 to 64 may be specified and the default when this
parameter is omitted is 16.
You may have noticed in the sample configurations in Figure 21-1 and Figure 21-2 that USS
has a value of 64 for MaxThreads; whereas, it is 4 for RMF III. There are many reasons why
USS requires many more threads, largely related to the numbers and characteristics of the
files you are monitoring and parameters associated with tailing. This is explained in detail
in “Setting Up USS File Collection,” on page 22-1.
Include member_name
This optional parameter enables you to combine sets of DCE parameters from
other configuration members in the same data set as the primary configuration
member. You can specify as many Include parameters as required.
For example, you could create member DCEUSS with just the USS global and XCF
parameters and specify this as the member name in the DCE USS started task.
In the same input library, you could then create member USSDFLTS with the USS
default parameters and add the statement INCLUDE USSDFLTS in member
DCEUSS.
"KEYS"
"KEY":"0123456789ABCDEF"
"SYSTEM"
"NAME":"USS1"
"SOURCE"
"DESTINATION"
"INDEX":"your_USS_index"
"TYPE":"TCP"
"IPADDRESS":"xxx.xxx.xxx.xxx"
"PORT":"nnnnn"
"SSL":"NO"
"SOURCE"
"DATATYPE":"XCF"
"XCFGROUP":"USSXCFG1"
"XCFMEMBER":"USSXCFM1"
There needs to be a separate forwarder started task for each XCF member defined in
Figure 21-1. The only differences in the configuration file for the other two forwarder tasks
are in the NAME and XCFMEMBER parameters. Note that there is no relationship between
the SYSTEM NAME defined in the forwarder and the InstanceName defined in DCE, but
each forwarder NAME must be unique within an LPAR.
"KEYS"
"KEY":"0123456789ABCDEF"
"SYSTEM"
"NAME":"RMF1"
"SOURCE"
"DESTINATION"
"INDEX":"your_RMF_index"
"TYPE":"TCP"
"IPADDRESS":"xxx.xxx.xxx.xxx"
"PORT":"nnnnn"
"SSL":"NO"
"SOURCE"
"DATATYPE":"XCF"
"XCFGROUP":"RMFXCFG1"
"XCFMEMBER":"RMFXCFM1"
This chapter describes how to configure DCE to monitor and offload Unix System Services
(USS) files to Ironstream.
Topics:
• “Overview of USS File Collection” on page 22-1
• “Summary of DCE’s USS Offload Functions” on page 22-4
• “Configuring DCE for USS File Offload” on page 22-4
• “Duplicate USS File Detection” on page 22-12
• “USS File Tailing Process” on page 22-13
• “Dynamically Modifying USS Processing” on page 22-14
Defaults group level. The offloaded data can be split further by specifying other Splunk
indexes at the Directory group level.
During the scanning process DCE looks for files that are eligible for offloading to Ironstream
according to filters that you define. A filter may select a specific file or it can be generic and
so select multiple files. Once a file has been selected it is offloaded to a defined Ironstream
system. Provision is made for defining one or more backup Ironstream systems so if the
primary target Ironstream is unavailable, then files can be offloaded to a backup.
Define USSDefaults
This statement marks the start of the USS Defaults group of parameters; any
parameters that follow it up to the next Define statement or the end of the
configuration file are assumed to be USS Defaults parameters. Parameters in this
group are:
Index index_name
The required Index parameter specifies the name of the Splunk index where you
want to offload data from USS directories. In order for DCE USS to start, you
must specify the Splunk index specified in the Ironstream XCF data type
configuration that is paired with the Ironstream forwarder for USS, or you must
specify a different Splunk index that will override the Ironstream XCF
configuration. The index specified here can also be overridden if another Splunk
index is specified at the directory level.
Tip! Since you can specify the same directory more than once, and apply different
filters so that different files are selected, it’s possible to have different files in the
same directory that are given different Splunk Source and Sourcetype metadata
and which are also sent to different Splunk indexes.
Filter filter_name
The Filter parameter specifies the name, filter_name, that is applied to all defined
USS directories when no other filter name is specified at the directory level (see
“Define USSFilter filter_name” on page 22-8). The name can be a maximum of 23
bytes long. A filter name of AllFiles can be specified, which is a default filter
provided in DCE USS and which uses a mask of *.
Important! There is no default when this parameter is omitted, which means
that a filter name must be specified at the USS directory level, otherwise no files
will be selected for offload.
Tailing No | Yes
The Tailing parameter specifies whether to use file tailing. The default is No,
which means no tailing is performed, in which case the values in TailingDelay
and TailingWait are ignored.
The Tailing, TailingDelay, and TailingWait parameters together define DCE’s
Tailing feature, as described in “USS File Tailing Process” on page 22-13.
TailingDelay 60 | nn
The TailingDelay parameter specifies the total number of seconds between 1 and
600 that DCE waits once a USS file offload is complete (end-of-file has been
detected) to check whether more data has been appended to the file. The default
when this parameter is omitted is 60 seconds.
The number of times the check is made during this period is determined by the
TailingWait parameter described next. The TailingDelay value would normally be
a multiple of the TailingWait value.
If additional records are added to a file within the TailingDelay period these are
offloaded to Ironstream without a file close/open being performed. This process is
repeated until the TailingDelay period expires without further records being
added to the file, at which point the file is closed and will be rescanned as usual
when the ScanFrequency interval next expires.
TailingWait 15 | nn
Once the tailing process starts for an offloaded file, the check for additional
records being added is made at the expiry of each TailingWait interval. The
TailingWait value is a number of seconds between 1 and 600 and should be less
than the TailingDelay value. Normally the TailingDelay value would be a
multiple of the value specified for this parameter.
StatusHistory 30 | nnn
This parameter specifies the default number of days between 1 and 365 for which
DCE retains status information for offloaded files. When this parameter is
omitted the default is 30 days.
This parameter is relevant to the Duplicate File Detection feature of DCE since it
means that a duplicate can still be detected if the original file from which the copy
was made is deleted from the directory. Once the period of days specified here is
passed the status information is lost and the duplicate can no longer be detected.
See the section “Duplicate USS File Detection” on page 22-12 for further
information.
Recursive No | Yes
The Recursive parameter specifies whether DCE searches all subdirectories of a
defined USS directory (see “USS Directory Parameters” on page 22-10) when
scanning for matching files. Files are selected for offload when they match the
criteria specified in the defined or default filter name.
The default when this parameter is omitted is ‘No’ so only the single specified
directory is scanned by default. To scan all subdirectories, specify ‘Yes’ for this
parameter.
TargetCluster cluster_name
The TargetCluster parameter specifies the name of the Ironstream cluster to
which USS file data will be offloaded. See the section “Ironstream Cluster
Parameters” on page 21-3 for a description of the Cluster parameter group.
There is no default assumed when this parameter is omitted; in this event you
must specify the TargetCluster parameter for each USSDirectory parameter
group, see “USS Directory Parameters” on page 22-10.
Here is a sample USS Defaults group definition:
Define USSDefaults
Source PATHNAME
Sourcetype USSFile
Index usslogs
Filter AllFiles
ScanFrequency 120
Tailing Yes
TailingDelay 30
TailingWait 1
ChecksumLength 1024
StatusHistory 60
Encoding AUTO
Recursive Yes
TargetCluster Cluster1
(page 10). The name can be a maximum of 23 bytes long. A filter name of AllFiles
should not be specified because this is a default filter provided in DCE USS,
which uses a mask of *.
IncludeFile *.log
IncludeFile *.error*
IncludeFile *.xml
ExcludeFile IBMUSER*
ExcludeFile TEST*
IncludeFile plx1.syslog.log
IncludeFile plx2.syslog.log
The available parameters are shown here for completeness only. For a full description, see
the section “USS Defaults Parameters” on page 22-5.
Filter filter_name
Index index_name
Tailing No | Yes
TailingDelay 60 | nn
TailingWait 15 | nn
StatusHistory 30 | nnn
Recursive No | Yes
TargetCluster cluster_name
Filter SyslogDFilter
Filter Log4jFilter
Recursive Yes
TailingDelay 5
ScanFrequency 1800
Source PATHNAME
Sourcetype USSFile
Index usslogsdir1
This Define USSDirectory statement example uses a long path name, split across lines,
based on this directory path:
/sync1/support/plx3/zen/user/zenplx3/ZENV21/registry/ZenUserSettings
/support/plx3/zen/user
/zenplx3/ZENV21
/registry/ZenUserSettings
Filter SyslogDFilter
The path syntax is free-format, but the path must be broken at directory boundaries, and
each continuation line must start with a forward slash character. The path must start on the
USSDirectory line and has a maximum length of 255 characters.
This directory path example uses a period as a special character, using both single and
double quotes as delimiters.
'/my documents'
'/myuser"'
"'/.help"
bytes of every file will always be enough to uniquely identify it. In this case it would be
advantageous to reduce the ChecksumLength parameter to 50 since there would be a (small)
reduction in CPU usage. On the other hand, if you notice that one or two files pass the
duplicate file check and are forwarded to Splunk when they ought not to have been, then you
could usefully increase the value specified for ChecksumLength. Be aware that this will
cause an (again small) increase in CPU used by the algorithm but will help to prevent
duplicate files being forwarded.
How It Works
When a USS file is selected for offload, DCE reads the data from the file and offloads it to
Ironstream for forwarding to Splunk. If tailing is enabled, when end-of-file (EOF) is
detected, rather than closing the file and waiting for the scanning interval to pass and then
re-opening the file and checking for additional data, DCE waits for the interval defined by
the TailingDelay parameter (this parameter defaults to 60 seconds if it is omitted and may
be specified in both the USSDefaults and USSDirectory parameter group).
During this wait period, DCE will perform a number of checks to see whether more data has
been appended to the file according to the value specified for the TailingWait parameter. For
example, if the default values are used there will be four such checks since the TailingDelay
default is 60 (seconds) and the TailingWait value is 15 (seconds). At each check, if DCE finds
that there has been more data appended it is offloaded immediately after which DCE again
waits for the specified tailing wait period. This process repeats until the tailing delay expires
without any additional data being offloaded.
Only if the time specified by the TailingDelay parameter expires without any new data being
appended to the file will it be closed and DCE will then wait until the expiry of the defined
directory scanning interval (which would normally be longer than the tailing delay period)
before opening and checking the file again.
that most of those threads would be in continuous use, thereby leaving few threads for other
file offloads. In this situation it may be advisable to increase the value of MaxThreads.
Clicking the USS File Status item invokes the USS File Status panel. This provides an
overall view of the USS files that have been selected for offload to the selected Ironstream
cluster according to your filtering definitions. Panel 22-3 is a (partial) example:
Selecting USS Defaults enables you to view the current default settings and adjust the
maximum number of threads for concurrent USS file offloads:
Clicking the Edit Directory icon opens the Directory Scan Attributes panel for the
selected directory. This panel displays the directory scan values for offloading files that are
set at the USSDefault level for all directories:
Selecting USS Filters displays the currently defined filters and enables you to modify and
delete them. Panel 22-7 shows a (partial) example:
This chapter describes how to configure the RMF Data Forwarder to collect Resource
Measurement Facility III system performance and utilization data.
Topics:
• “Overview of the RMF Data Forwarder” on page 23-1
• “Configuring the RMF Data Forwarder” on page 23-2
• “Configuring DCE RMF Parameters” on page 23-2
• “Defining Security Settings” on page 23-5
• “Setting the RMF Filters in IDT” on page 23-6
• “Sample Scenario for Setting RMF Filters” on page 23-10
"KEYS"
"KEY":"0123456789ABCDEF"
"SYSTEM"
"NAME":"RMF1"
"DESTINATION"
"INDEX":"xhrmf"
"TYPE":"TCP"
"IPADDRESS":"nnn.nnn.nnn.nnn"
"PORT":"nnnnn"
"SSL":"NO"
"SOURCE"
"DATATYPE":"XCF"
"XCFGROUP":"RMFXCFG2"
"XCFMEMBER":"RMFXCFM1"
The Configurator Tool creates RMF forwarder JCL in the member you specified as the RMF
forwarder STC name.
Define RMFSettings
The parameters in the RMFSettings group define values used to configure and monitor the
RMF Data Forwarder, and to control the filtering of RMF fields.
Define RMFSettings
This statement marks the start of the RMFSettings group of parameters; any
parameters that follow it up to the next Define statement, or the end of the
configuration file, are assumed to be RMFSettings parameters. Parameters in
this group are:
IPAddress ipaddress
This mandatory parameter specifies the IPv4 address (up to 15 characters) of the
RMF Distributed Data Server (DDS).
Port port
This mandatory parameter specifies the HTTP_PORT used by the RMF
Distributed Data Server (DDS).
Userid userid
This is an SAF user ID that has the required access to RMF DDS performance
data.
ScanFrequency frequency
This optional parameter specifies how often, in seconds, the RMF server is to be
contacted for retrieval of data.
A value between 5–3600 can be specified. The default is 60.
DefaultSelection Off | On
This parameter specifies whether on a COLD start all resources are selected (On)
for collection or none are selected (Off). The default is Off.
MetricSelection Off | On
This parameter specifies whether on a COLD start all metrics are selected (On)
for collection or none are selected (Off). The default is Off.
MaxNumMetrics max_value
This parameter specifies the maximum number of entries when requesting metric
data results from the RMF DDS. This is also known as the number of list
elements. Where there are multiple entries in a resource list, then only the first
max_value metrics with the highest values will be forwarded. Some metrics are
single items, while some metrics have multiple items.
A value between 1–1000 can be specified. The default is 20.
For example, MaxNumMetrics=250 means that if metric_id has multiple items,
then only the first (largest-valued) 250 are to be sent: metric_id_R0001 to
metric_id_R0250. If there are more entries in the list with lower values, then they
will be ignored.
Note: When the maximum XCF record size is exceeded, message SDF0507S will
be issued. Reducing the MaxNumMetrics value is a simple mechanism to
potentially eliminate SDF0507S messages.
TargetCluster cluster_name
The TargetCluster parameter specifies the name of the Ironstream cluster to
which RMF file data is offloaded.
DataFormat format
This optional parameter specifies the format of the data transmitted to
Ironstream, and hence, to a destination. The format can be JSON or XML. The
default is JSON.
Define RMFSettings
IPAddress 192.168.61.25
Port 8803
UserID rmfuser
ScanFrequency 120
DefaultSelection Off
MetricSelection Off
MaxNumMetrics 30
TargetCluster SDFRMF
DataFormat JSON
Panel 23-2 illustrates the RMF Filters display panel with Processor metrics selected:
Table 23-1 describes the controls available on the RMF Filter panel:
Table 23-1: RMF Filters Panel Controls
It may take some time to understand precisely how to set the element and metric switch icons
in conjunction with the RMF Elements and RMF Metrics panels to gather the metrics you
require, so here are some guidelines:
• The Metrics Selected button is particularly useful in this regard because it invokes
the Selected RMF Metrics panel, which displays all the metrics that can be collected
and forwarded to Ironstream.
• The Metrics column provides a count of the number of metrics selected for collection.
When metrics are selected for a parent element, which itself has more than one item
selected for collection (examples are SSIDs, LCUs and Volumes), multiply the count of
the parent resource by the metric count displayed to arrive at the total number of
metrics selected.
• The Metrics Selected button performs this calculation automatically, so the count it
displays is accurate.
Panel 23-7: RMF Metrics for Volume Panel - With 3 Volumes Selected
Individual metrics can be activated by clicking the red switch icons. Alternatively, all
metrics can be selected by clicking the icon next to the Description column heading.
Click Close to return to the main filter panel.
Panel 23-8 shows that 25 metrics are selected, which was the result of activating all
metrics.
Panel 23-8: RMF Metrics for Volume Panel - With 25 Metrics Selected
Panel 23-9: RMF Metrics for Volume Panel - With 75 Metrics Selected
Panel 23-9 shows the 25 metrics selected for each of the three volumes selected for ZOS2,
so the Metrics Selected button confirms “75 Metrics Selected”.
Note: Changes apply immediately, as soon as an icon changes from red to green or vice
versa, and take effect on the next DCE RMF collection cycle.
Clicking the Metrics Selected button opens the Selected RMF Metrics panel, which lists all
75 selected metrics for ZOS2.
This section explains how Ironstream can be integrated with Splunk premium
applications.
• “Splunk Enterprise Security and Ironstream”
Chapter 24 Splunk Enterprise Security and
Ironstream
This chapter describes how Ironstream can be configured to work with Splunk Enterprise
Security.
Topics:
• “About Splunk Enterprise Security and Ironstream” on page 24-1
• “Intrusion Detection” on page 24-2
• “TSO Log-on Activity” on page 24-3
• “TSO Account Activity” on page 24-3
• “FTP Sessions” on page 24-4
• “FTP Change Analysis” on page 24-5
• “IP Traffic Analysis” on page 24-5
• “Network Management/User-Defined Notification” on page 24-6
The Ironstream TA-ES application is downloaded and installed the same way as any other
Splunk application. However, the Ironstream TA-ES application does not contain any
dashboards, unlike a standard application.
Data is supplied by the Ironstream TA-ES and surfaced on the dashboards provided within
the Splunk ES application. The Splunk dashboards display mainframe information provided
by Ironstream TA-ES alongside data from other sources, giving an organization a more
complete picture of security across its computing infrastructure.
Ironstream integration with Splunk ES currently supports the following activities within
z/OS:
• “Intrusion Detection”, including:
▪ Port scans – fast and slow
▪ Flood attacks – interface floods, SYN attacks, denial-of-service (DoS)
▪ Malformed packet detection
• “TSO Log-on Activity”
• “TSO Account Activity” – creation, update, deletion, and lockout
• “FTP Sessions” (authentications) – z/OS acting as server or client
• “FTP Change Analysis” (file activity) – file modification, download, creation, deletion
• “IP Traffic Analysis” – IP component usage (TCP, UDP, Stack, tn3270, etc.)
• “Network Management/User-Defined Notification”
Intrusion Detection
z/OS implements Intrusion Detection Services (IDS) via the Traffic Regulation Management
Daemon (TRMD), which is part of the z/OS Communications Server.
TRMD is required to detect and collect intrusion events. The daemon must be configured to
collect the following message types:
• EZZ8643I — TRMD SCAN threshold exceeded.
• EZZ8650I — TRMD ATTACK SYN flood.
• EZZ8654I — TRMD ATTACK Interface flood start: date
• EZZ8648I — TRMD ATTACK packet was discarded.
• EZZ8649I — TRMD ATTACK packet would have been discarded (packet kept).
These TRMD messages are written to SyslogD.
Ironstream collects the SyslogD messages via the Ironstream Network Monitoring
Component (NMC), breaks the message text into key/value pairs, and forwards the data to
Splunk. The key/value pairs make working with values easier in Splunk.
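The key/value extraction can be sketched as follows. This is an illustrative approximation only, not the actual NMC code, and the message shown is a shortened, hypothetical example:

```python
import re

# Illustrative sketch of breaking a TRMD SyslogD message into key/value
# pairs (not the actual NMC implementation).
def parse_kv(message: str) -> dict:
    # Capture each "key= value" pair; whitespace around '=' may vary.
    return dict(re.findall(r"(\w+)\s*=\s*(\S+)", message))

msg = "EZZ8650I TRMD ATTACK SYN flood: sipaddr= 10.1.1.5 dport= 23 count= 512"
print(parse_kv(msg))  # {'sipaddr': '10.1.1.5', 'dport': '23', 'count': '512'}
```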
For instructions on configuring the Ironstream NMC, see “Alerts and SyslogD Forwarding,”
on page 14-1.
Splunk ES Visibility
TRMD data primarily appears within the Splunk ES Intrusion Center dashboard, which
provides an overview of all network intrusion events from Intrusion Detection Systems (IDS)
and Intrusion Prevention Systems (IPS) device data.
Splunk ES Visibility
TSO log-on activity primarily appears within these Splunk ES dashboards:
• Access Center — provides a summary of all authentication events.
• Access Anomalies — displays concurrent authentication attempts from different IP
addresses and improbable travel anomalies using internal user credentials and
location-relevant data.
• Access Search — finds specific authentication events.
• Access Tracker — gives an overview of account statuses to track newly active or inactive
accounts, as well as those that have been inactive for a period of time but recently
became active.
• Default Account Activity — shows activity on “default accounts”, or accounts enabled by
default on various systems, such as network infrastructure devices, databases, and
applications.
• User Activity – Remote Access — displays remote access authentication by user. A user
performing risky web or email activity while using remote access services can be an
indicator of data exfiltration, or exploited credentials.
The SMF80 Event Code Qualifier (7) is used for LOCKOUT detection:
• SMF80EVQ = 7 — TSO user account lockout due to excessive password attempts.
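A minimal sketch of this check, assuming a decoded record exposes the qualifier as an integer (the variable name is hypothetical):

```python
# Hypothetical sketch: flag TSO lockouts from the SMF type 80 Event Code
# Qualifier, where a value of 7 indicates lockout due to excessive
# password attempts. The field name smf80evq is illustrative.
LOCKOUT_QUALIFIER = 7

def is_lockout(smf80evq: int) -> bool:
    return smf80evq == LOCKOUT_QUALIFIER

print(is_lockout(7), is_lockout(2))  # True False
```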
Splunk ES Visibility
TSO account activity primarily appears within these Splunk dashboards:
• Account Management — shows changes to user accounts, such as account lockouts,
newly created accounts, disabled accounts, and password resets.
• Default Account Activity — shows activity on “default accounts”, or accounts enabled by
default on various systems, such as network infrastructure devices, databases, and
applications.
• Endpoint Changes — uses the Splunk change monitoring system, which detects
file-system and registry changes, to illustrate changes and highlight trends in the
endpoints in your environment.
FTP Sessions
Authentication of FTP sessions is detected via SyslogD activity with the following
messages:
• EZYFS51I — CONN fails
• EZYFS52I — CONN ends
• EZYFS56I — ACCESS OK
• EZYFS57I — ACCESS fails
For instructions on configuring Ironstream to capture SyslogD data, see “Alerts and SyslogD
Forwarding,” on page 14-1.
Splunk ES Visibility
FTP log-on activity primarily appears within these Splunk ES dashboards:
• Access Center — provides a summary of all authentication events.
• Access Anomalies — displays concurrent authentication attempts from different IP
addresses and improbable travel anomalies using internal user credentials and
location-relevant data.
• Access Search — finds specific authentication events.
• Access Tracker — gives an overview of account statuses to track newly active or inactive
accounts, as well as those that have been inactive for a period of time but recently
became active.
• Default Account Activity — shows activity on “default accounts”, or accounts enabled by
default on various systems, such as network infrastructure devices, databases, and
applications.
• User Activity – Remote Access — displays remote access authentication by user. A user
performing risky web or email activity while using remote access services can be an
indicator of data exfiltration, or exploited credentials.
Splunk ES Visibility
FTP change activity appears within these Splunk ES dashboards:
• Endpoint Changes — uses the Splunk change monitoring system, which detects
file-system and registry changes, to illustrate changes and highlight trends in the
endpoints in your environment.
• Network Changes — tracks configuration changes to firewalls and other network
devices in your environment.
IP Traffic Analysis
Various IP traffic types (IP, UDP, FTP, Stack activity, etc.) are tracked via SMF 119 records
in subtype 2:
• Subtype 2 — Connection termination.
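The selection can be illustrated with a minimal sketch (the record shape here is an assumption for illustration; real SMF records are binary):

```python
# Illustrative only: keep SMF type 119 subtype 2 (connection termination)
# records for IP traffic analysis. The dict shape is an assumption, not
# the real binary SMF record layout.
records = [
    {"type": 119, "subtype": 2, "desc": "connection termination"},
    {"type": 119, "subtype": 1, "desc": "connection initiation"},
    {"type": 80,  "subtype": 2, "desc": "RACF event"},
]
ip_traffic = [r for r in records if r["type"] == 119 and r["subtype"] == 2]
print(len(ip_traffic))  # 1
```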
For instructions on configuring Ironstream to capture SMF data, see “SMF Record
Filtering,” on page 12-1.
Splunk ES Visibility
IP activity appears within these Splunk ES dashboards:
• Traffic Center — profiles overall network traffic, helps detect trends in type and
changes in volume of traffic, and helps to isolate the cause (for example, a particular
device or source) of those changes.
• Traffic Search — assists in searching network protocol data, refined by the search
filters.
• Network Changes — tracks configuration changes to firewalls and other network
devices in your environment.
Splunk ES Visibility
Alerts appear as Notable Events in these Splunk ES dashboards:
• Security Posture — provides high-level insight into the notable events across all
domains of your deployment, suitable for display in a Security Operations Center (SOC).
• Incident Review — displays notable events and their current status to gain insight into
the severity of events occurring in your system or network.
• Identity Investigator — displays information about known or unknown user identities
across a pre-defined set of event categories, such as change analysis or malware.
Topics:
• “Overview” on page 25-1
• “Management Commands” on page 25-2
• “MODIFY Commands” on page 25-2
• “SMF Real-time INMEM Commands” on page 25-5
Overview
The commands in this chapter are supported in Ironstream Version 2.1.
Note: Sample commands are written as if they were issued from an active console. When
issuing commands from within an SDSF environment, you need to prefix each command
with a slash (/).
Management Commands
The following commands are supported in Ironstream Version 2.1.
Note: Sample commands are written as if they were issued from an active console. When
issuing commands from within an SDSF environment, you need to prefix each command
with a slash (/).
STOP
The “P” command causes Ironstream to stop collecting data, process any data in the data
store or coupling facility, and then terminate.
P jobname
MODIFY Commands
The MODIFY “F” commands are used to communicate to a running Ironstream instance.
BLOCKPRINT
Prints each translated block just before it is transferred to the transmission code. The
options are YES to turn on printing and NO to turn it off. (In a change from V1.4, DEBUG
does not have to be on to set BLOCKPRINT=YES.)
F jobname,BLOCKPRINT=YES
Caution: This command can generate a lot of output very quickly and should not be used
except when directed by Syncsort Technical Support.
DEBUG
Turns debug messages on or off. The options are YES to turn on debugging messages and
NO to turn them off.
F jobname,DEBUG=YES
Caution: This command can generate a lot of output very quickly and should not be used
except when directed by Syncsort Technical Support.
DUMP
Causes an ABEND and takes a dump of memory. It is a fatal command.
F jobname,DUMP
Warning! This command could cause Ironstream to crash and should only be used at the
direction of Syncsort Technical Support.
LIST
The LIST command has five options. It is used to print a variety of informational messages,
mostly for the use of Syncsort Technical Support.
CAPTURE
The CAPTURE option will list the SMF record types being captured, or the syslog prefixes
being captured.
F jobname,LIST=CAPTURE
CBS
All the Ironstream control blocks are printed.
F jobname,LIST=CBS
MODULES
Issues message SDF0831I, which lists the metadata of modules in SYSPRINT.
F jobname,LIST=MODULES
QUEUES
The QUEUES option will print out the Data Store queue configuration and usage.
F jobname,LIST=QUEUES
TRACE
The Ironstream internal trace table is printed.
F jobname,LIST=TRACE
RECONFIGURE
The RECONFIGURE commands allow you to dynamically reconfigure a running Ironstream
instance that is collecting SMF data. There are two commands: VALIDATE and EXECUTE.
VALIDATE
This command validates the reconfiguration:
F jobname,RECONFIGURE,VALIDATE
EXECUTE
This command both validates and executes the reconfiguration:
F jobname,RECONFIGURE,EXECUTE
Command Notes:
• When a valid RECONFIGURE command is issued for either VALIDATE or EXECUTE,
the configuration is printed in the SYSPRINT file.
• When an invalid configuration change is encountered, the RECONFIGURE command will
not change the current configuration.
RECORDPRINT
Prints each record just before it is translated to ASCII. The options are YES to turn on
printing and NO to turn it off.
Note: In a change from V1.4, DEBUG does not have to be on to set RECORDPRINT=YES.
F jobname,RECORDPRINT=YES
Caution: This command can generate a lot of output very quickly and should not be used
except when directed by Syncsort Technical Support.
RESTART
Restarts a suspended task. The task name should be the same as specified in system
message SDF0655S or SDF0656S. A task is suspended if two failures occur less than ten
seconds apart.
F jobname,RESTART=taskname
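The suspension rule above can be sketched as follows (illustrative logic only, not Ironstream's internal code):

```python
# Illustrative: a task is suspended when two failures occur less than
# ten seconds apart, and then requires a RESTART command.
def should_suspend(failure_times: list) -> bool:
    return any(later - earlier < 10
               for earlier, later in zip(failure_times, failure_times[1:]))

print(should_suspend([100.0, 107.5]))  # True (7.5 seconds apart)
print(should_suspend([100.0, 115.0]))  # False (15 seconds apart)
```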
STATUS
Instructs Ironstream to print interim statistics to the SYSPRINT file (SDF0400I through
SDF0408I messages), and to the console where the command was issued. To run the
STATUS command from a system or SDSF console:
F jobname,STATUS
The SDF0601I message in Figure 25-2 that precedes the remaining responses to the
STATUS command is not written to the requesting console.
REFRESH
Dynamically adds log streams to the capture queue, unless a log stream was disconnected
using the DISCONNECT command.
F jobname,INMEM,REFRESH
DISCONNECT
Use this command if a particular log stream is no longer required to be connected to
Ironstream, or needs Ironstream to disconnect from it for maintenance.
F jobname,INMEM,DISCONNECT,<logstream>
Once a log stream is disconnected by this command, it must be reconnected by a CONNECT
command.
CONNECT
Reconnects a disconnected log stream.
F jobname,INMEM,CONNECT,<logstream>
STATUS
Displays the status of all log stream connections, including the metadata surrounding each
log stream and the SMF real-time interface as a whole.
F jobname,INMEM,STATUS
This chapter describes some operational considerations when using Ironstream, such as
message flood automation, network contention, and data store conditions.
Topics:
• “Message Flood Automation and Syslog Message Collection” on page 26-1
• “Network Contention” on page 26-1
• “Data Store Filling or Full Condition” on page 26-2
Network Contention
Ironstream is designed to transmit data from the mainframe to chosen destinations at a
high speed. However, this could cause an impact on other network traffic if a lot of data is
being moved by Ironstream and the network is very busy. To address this issue, Ironstream
provides an option to set a cap for its transmission rate.
The THROTTLING_RATE parameter in the DESTINATION section of the configuration file
can limit the amount of data transferred per second by Ironstream. When this parameter is
set, the amount of data transferred per second through the network by Ironstream will not
exceed the capped amount for each TCP connection. For more information about the
THROTTLING_RATE parameter, refer to the “DESTINATION Section” on page 7-11.
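The effect of such a cap can be estimated with simple arithmetic; for example (illustrative only, with units assumed to be bytes per second):

```python
# Illustrative arithmetic: the minimum time to move a burst of data once
# a per-connection transmission cap is in effect.
def min_transfer_seconds(bytes_to_send: int, cap_bytes_per_sec: int) -> float:
    return bytes_to_send / cap_bytes_per_sec

# A 1 GB burst over a 10 MB/s cap takes at least 100 seconds.
print(min_transfer_seconds(1_000_000_000, 10_000_000))  # 100.0
```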
This chapter contains the system messages generated by core Ironstream and the Data
Collection Extension (DCE).
Topics:
• “Overview of Ironstream Messages” on page 27-1
• “Ironstream Messages” on page 27-2
• “Data Collection Extension Messages” on page 27-76
Ironstream Messages
This chapter contains contact information for Syncsort Product Support for Ironstream
technical support.
Topics:
• “Before Calling Syncsort Product Support” on page 28-1
• “Contacting Syncsort Product Support” on page 28-2
North America
Customers in North America can contact Syncsort Product Support directly for expert advice
at the numbers listed below. Email can be used for any question that does not need an
immediate reply.
Contact information:
Syncsort Product Support
Syncsort Incorporated
2 Blue Hill Plaza #1563
Pearl River, NY 10965
USA
Phone: +1-877-700-8260 – Toll Free for the United States and Canada
+1-201-930-8260 – For outside the United States and Canada
FAX: 201-930-8284
E-mail: zos_tech@syncsort.com
Other Regions
Customers in other regions should contact their local Syncsort representative.
This section provides information about how to use Ironstream auditing reports.
• “Using the Ironstream Data Usage Reporter”
Chapter 29 Using the Ironstream Data
Usage Reporter
This chapter describes how to configure the Ironstream Data Usage Reporter.
Topics:
• “Overview” on page 29-1
• “Configuring the Report Parameters” on page 29-2
• “Using the Report TRACE Facility” on page 29-5
• “CSV File Report Format” on page 29-5
• “Overriding the Default SMF Record Number” on page 29-6
• “System Messages for the Data Usage Reporter” on page 29-7
Overview
The Ironstream Data Usage Reporter produces reports of Ironstream output data usage
from SMF records. The report reflects the data volumes that are sent from your system to a
destination. The report defaults to SYSPRINT, but it can also be output as a CSV file, which
can then be FTP’d to a workstation and imported into a spreadsheet. You can also get a
SYSPRINT and a CSV file in one run.
Ironstream records data usage by creating SMF records, using 207 as the default SMF
number for this purpose. The report program can input any number of SMF files
concatenated together. They can be from a mix of LPARs. They do not have to be in
sequence; in fact, some time frames may overlap.
Note: If you need to override the default SMF number, refer to “Overriding the Default SMF
Record Number” on page 29-6.
Ironstream Usage Summary Ironstream Data Usage - Acme Tools Company Page 1
Report Type......................Hourly
Report Breakdown.................LPARs
Date Selection
Start Date.......................Mon Mar 12 2018 02:00:00PM
End Date.........................Sun Mar 18 2018 06:00:00AM
No Active Selection Filters
LPARs encountered................ZOS3
SMF Data covers 3 days, 13 hours, 44 minutes, 50 seconds
Ironstream Hourly Usage Report Ironstream Data Usage - Acme Tools Company Page 2
...
SYSIN Parameters
Additionally, a SYSIN DD can be added if parameters are required. There are two types of
SYSIN parameters:
• SELECT parameters for filtering the scope of the reported data.
• REPORT parameters to specify preferences for the report.
SYSIN Syntax
• Each statement takes keyword=value operands, one per line. The first operand can
appear either on the statement line itself or on a separate line, as shown:
SELECT LPAR=SYSD
JOBNAME=PRODIRON
-or-
SELECT
LPAR=SYSD
JOBNAME=PRODIRON
• Keywords are not case sensitive. Values are also not case sensitive, with the exception
of the REPORT TITLE, SOURCETYPE, and INDEX values.
• Asterisks can be used in column one before statements to create comments.
• Trailing commas are not needed but are accepted.
SELECT Parameters
The SELECT parameters can be used to restrict the scope of the report.
SELECT
SMFNO=207
STARTDATE=09/14/16
ENDDATE=12/14/2016
STARTTIME=11:33.30pm
ENDTIME=05:33PM
INDEX=ProdLogs
LPAR=SYSA
JOBNAME=IRONSMF
IPADDRESS=172.30.40.126
PORT=1007
SOURCETYPE=LOG4J
The input date format must be mm/dd[/yy[yy]].
Start/end time formats can be 12 or 24 hour. For example, 5:00PM or 17.00.
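Accepting these date forms can be sketched as follows (an illustration of the mm/dd[/yy[yy]] rule, not the reporter's own parser):

```python
from datetime import datetime

# Illustrative parser for the mm/dd[/yy[yy]] date forms accepted by
# STARTDATE and ENDDATE. Not the reporter's actual implementation.
def parse_report_date(text: str) -> datetime:
    # Try two-digit years before four-digit years so "09/14/16" reads as 2016.
    for fmt in ("%m/%d/%y", "%m/%d/%Y", "%m/%d"):
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError("unrecognized date: " + text)

print(parse_report_date("09/14/16").year, parse_report_date("12/14/2016").year)
# 2016 2016
```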
REPORT Parameters
The REPORT parameters are used to organize the report by choosing the type (hourly or
daily), format, totals, and units.
REPORT
Title='This is our company name'
Type=Daily
Format=PRINT
Break=LPARS
Numbers=MB
The default title is currently: Ironstream Usage Report
Type=Hourly|Daily
Accumulate totals into Daily or Hourly slots. The default is Daily.
A Daily report type accumulates values into whole day slots; whereas, Hourly accumulates
values into one-hour slots. An Hourly report also shows daily totals.
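The difference between the two types can be sketched as follows (illustrative bucketing logic, not the reporter's code):

```python
from datetime import datetime

# Illustrative: sum byte counts into whole-day (Daily) or one-hour
# (Hourly) slots, as the report types described above do.
def accumulate(records, report_type="Daily"):
    fmt = "%Y-%m-%d %H:00" if report_type == "Hourly" else "%Y-%m-%d"
    slots = {}
    for timestamp, byte_count in records:
        key = timestamp.strftime(fmt)
        slots[key] = slots.get(key, 0) + byte_count
    return slots

records = [
    (datetime(2018, 3, 12, 14, 25), 9244),
    (datetime(2018, 3, 12, 14, 50), 17018),
    (datetime(2018, 3, 13, 9, 5), 16925),
]
print(accumulate(records, "Hourly"))
# {'2018-03-12 14:00': 26262, '2018-03-13 09:00': 16925}
```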
Format=PRINT|CSV
Physical format of report. The default is SYSPRINT. The same report can be output as a
CSV file.
Format can be specified twice for a report, once with PRINT and once with CSV, to
produce both a PRINT and a CSV output in one run, as shown here:
REPORT
Title='ACME Tools Inc.'
Format=PRINT
Format=CSV
Break=Path|LPAR|LPARS
Accumulate totals by Path or by LPAR, or by all LPARS combined.
Numbers=AUTO|Bytes|KB|MB|GB|TB
Auto formats numbers in TB/GB/MB/KB, according to their magnitude. KB, MB, GB, and
TB will use those units throughout. Bytes shows the exact numbers in bytes. This option can
be useful for testing and checking results. MB is the default.
LPAR,Jobname,Sourcetype,Index,IPAddress,Timestamp,Byte count
PLX1,SDFDSSMF,syncsortMF,zsmf,172.30.40.126:8010,Wed Nov 23 10:25:31.01,9244
PLX1,SDFDSSMF,syncsortMF,zsmf,172.30.40.126:8010,Wed Nov 23 11:00:00.00,17018
PLX1,SDFDSSMF,syncsortMF,zsmf,172.30.40.126:8010,Thu Nov 24 12:00:00.00,16925
PLX1,SDFDSSMF,syncsortMF,zsmf,172.30.40.126:8010,Thu Nov 24 16:34:00.17,18488
PLX1,SDFDSSMF,syncsortMF,zsmf,172.30.40.126:8010,Mon Nov 28 11:00:00.00,31707
PLX1,SDFDSSMF,syncsortMF,zsmf,172.30.40.126:8010,Mon Nov 28 12:00:00.00,83012
PLX1,SDFDSSMF,syncsortMF,zsmf,172.30.40.126:8010,Mon Nov 28 12:00:00.00,4651094
This can be FTP'd back to a workstation where it can be imported into a spreadsheet, as
shown in Figure 29-4:
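The CSV can also be processed directly with a short script; for example, totaling the byte counts (illustrative, using the column layout shown above):

```python
import csv
import io

# Illustrative: total the "Byte count" column of the Data Usage Reporter
# CSV. The two data rows are taken from the sample output above.
sample = """LPAR,Jobname,Sourcetype,Index,IPAddress,Timestamp,Byte count
PLX1,SDFDSSMF,syncsortMF,zsmf,172.30.40.126:8010,Wed Nov 23 10:25:31.01,9244
PLX1,SDFDSSMF,syncsortMF,zsmf,172.30.40.126:8010,Wed Nov 23 11:00:00.00,17018
"""
total = sum(int(row["Byte count"]) for row in csv.DictReader(io.StringIO(sample)))
print(total)  # 26262
```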
201 in the Ironstream configuration file, then these two members must be copied and
renamed to SSDFN201 and SSDFF201.
This chapter contains the data formats that are forwarded to a destination.
Topics:
• “Syslog Format” on page A-2
• “FILELOAD Format” on page A-3
• “SYSOUT Format” on page A-3
• “Log4j Format” on page A-4
• “Alert Format” on page A-4
• “SyslogD Format” on page A-5
Syslog Format
The fields forwarded to a destination from syslog data are as follows:
All syslog messages appear in their entirety under the keyword of MSGTXT. Multiple lines
of syslog text are merged together with an ASCII NL character sequence between them so
that when listed in a destination they appear as they did on syslog or on the console. The
RESPONSE and MSGTXT fields also contain the MSGID of a WTO and/or a COMMAND
response.
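The merging behavior can be illustrated with a short sketch (the message lines here are hypothetical):

```python
# Illustrative: multiple syslog lines are merged into a single MSGTXT
# value with an ASCII NL between them, so they display in a destination
# as they did on the console. The message text is hypothetical.
lines = ["IEF403I PRODIRON - STARTED", "IEF404I PRODIRON - ENDED"]
msgtxt = "\n".join(lines)
print(msgtxt)
```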
FILELOAD Format
The fields forwarded to a destination from sequential files are as follows:
SYSOUT Format
The keyed fields forwarded to a destination from SYSOUT files are:
Log4j Format
The keyed fields forwarded to a destination from log4j files are:
Alert Format
The keyed fields forwarded to a destination for alerts generated by the NMC components:
SyslogD Format
The keyed fields forwarded to a destination for SyslogD data are:
Anything detected in the message text that starts with the characters ‘SYSLOGD_’ is parsed
out of the message text so it can be more easily processed by a destination. Some messages
can have up to twenty additional fields that have been parsed out of the original message.
The fields SYSLOGD_ID and SYSLOGD_REPLY above are used to hold those additional fields.
Additional keywords are also generated from repeated fields in the message. This applies
primarily when fields occur for both input and for output. For example, here is a typical
SyslogD message:
SYSLOGDMSG: EZD0836I Packet permitted: 09/11/2007 15:23:06.95 sipaddr=
10.11.2.4,dipaddr = 10.81.2.2,proto =icmp(1) type= 3,code=1 Interface= 10.11.2.4 (O)
secclass= 255 dest= local len= 56 tunnelID= Y4 ifcname= MPC4124L embsipaddr= 10.81.2.2
embdipaddr= 10.81.8.8 embproto= udp(17) sport= 1050 dport= 10173 -= Interface=
10.5.1.90 (I) secclass= 255 dest= local len= 64 vpnaction= N/A tunnelID= N/A ifcname=
LNKOSA48 fragment= N
Note the input (I) and output (O) interface IP addresses in this message, which are
formed into keywords such as these:
"SYSLOGD_INTERFACE_I":"10.11.2.4"
"SYSLOGD_INTERFACE_O":"10.5.1.90"
Topics:
• “Overview of SSDFCPR” on page B-1
• “Executing SSDFCPR” on page B-1
Overview of SSDFCPR
SSDFCPR is a utility program that produces a report containing information about the
machine on which it is run. Syncsort Inc. may need this information in order to provide
appropriate license keys for Ironstream.
Executing SSDFCPR
Syncsort may request that you run the SSDFCPR utility on each LPAR where Ironstream is
licensed. Here is sample JCL:
//jobcard
//PRINT EXEC PGM=SSDFCPR
//STEPLIB DD DISP=SHR,DSN=hlq.SSDFAUTH
//SYSPRINT DD SYSOUT=*
E-mail the output from each LPAR to zos_tech@syncsort.com. Syncsort will generate license
keys for your systems based on the information in these reports.