Publish-Subscribe
Developer’s Guide
Version 7.1.1
January 2008
webMethods
6. Publishing Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
The Publishing Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Setting Fields in the Document Envelope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
About the Activation ID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
webMethods Developer provides tools to integrate resources. It enables users to build
integration solutions locally within one webMethods Integration Server or across
multiple Integration Servers all exchanging information via a Broker. This guide is for
developers and administrators who want to make use of this capability.
Note: With webMethods Developer, you can create Broker/local triggers and JMS
triggers. A Broker/local trigger is a trigger that subscribes to and processes documents
published or delivered locally or to the Broker. A JMS trigger is a trigger that receives
messages from a destination (queue or topic) on a JMS provider and then processes
those messages. This guide discusses development and use of Broker/local triggers
only. Where the term triggers appears in this guide, it refers to Broker/local triggers.
For information about creating JMS triggers, see the webMethods Integration Server JMS
Client Developer’s Guide.
Document Conventions
Convention Description
Bold Identifies elements on a screen.
Italic Identifies variable information that you must supply or change
based on your specific situation or environment. Identifies terms the
first time they are defined in text. Also identifies service input and
output variables.
Narrow font Identifies storage locations for services on the webMethods
Integration Server using the convention folder.subfolder:service.
Typewriter font Identifies characters and values that you must type exactly or
messages that the system displays on the console.
UPPERCASE Identifies keyboard keys. Keys that you must press simultaneously
are joined with the “+” symbol.
\ Directory paths use the “\” directory delimiter unless the subject is
UNIX‐specific.
[ ] Optional keywords or values are enclosed in [ ]. Do not type the [ ]
symbols in your own code.
Additional Information
The webMethods Advantage Web site at http://advantage.webmethods.com provides
you with important sources of information about webMethods products:
Troubleshooting Information. The webMethods Knowledge Base provides
troubleshooting information for many webMethods products.
Documentation Feedback. To provide feedback on webMethods documentation, go to
the Documentation Feedback Form on the webMethods Bookshelf.
Additional Documentation. Starting with 7.0, you have the option of downloading the
documentation during product installation to a single directory called
“_documentation,” located by default under the webMethods installation directory.
In addition, you can find documentation for all webMethods products on the
webMethods Bookshelf.
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
What Is the Publish-and-Subscribe Model? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
webMethods Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Basic Elements in the Publish-and-Subscribe Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Introduction
Companies today are tasked with implementing solutions for many types of integration
challenges within the enterprise. Many of these challenges revolve around application
integration (between software applications and other systems) and fall into common
patterns, such as:
Propagation. Propagation of similar business objects from one system to multiple other
systems, for example, an order status change or a product price change.
Synchronization. Synchronization of similar business objects between two or more
systems to obtain a single view, for example, real‐time synchronization of customer,
product registration, product order, and product SKU information among several
applications. This is the most common issue requiring an integration solution.
In a one‐way synchronization, there is one system (resource) that acts as a data
source and one or more resources that are targets of the synchronization.
In a two‐way synchronization, every resource is both a potential source and a
target of the synchronization. No single resource acts as the primary data
source; a change to any resource should be reflected in all other resources.
Aggregation. Information joined from multiple sources into a common destination
system, for example, communicating pharmacy customer records and prescription
transactions and Web site data into a central application and database.
The webMethods product suite provides tools that you can use to design and deploy
solutions that address these challenges using a publish‐and‐subscribe model.
webMethods Components
The Integration Server and the Broker share a fast, efficient process for exchanging
documents across the entire webMethods system.
[Diagram: resources connect through adapters to Integration Servers, which
exchange documents through the Broker.]
Integration Server
The Integration Server is the system’s central run‐time component. It serves as the entry
point for the systems and applications that you want to integrate, and is the system’s
primary engine for the execution of integration logic. It also provides the underlying
handlers and facilities that manage the orderly processing of information from resources
inside and outside the enterprise. The Integration Server publishes documents to and
receives documents from the Broker. For more information about the Integration Server,
see the webMethods Integration Server Administrator’s Guide.
Broker
The Broker forms the globally scalable messaging backbone of webMethods components.
It provides the infrastructure for implementing asynchronous, message‐based solutions
that are built on the publish‐and‐subscribe model or one of its variants, request/reply or
publish‐and‐wait.
The role of the Broker is to route documents between information producers (publishers)
and information consumers (subscribers). The Broker receives, queues, and delivers
documents.
The Broker maintains a registry of document types that it recognizes. It also maintains a
list of subscribers that are interested in receiving those types of documents. When the
Broker receives a published document, it queues it for the subscribers of that document
type. Subscribers retrieve documents from their queues. This action usually triggers an
activity on the subscriber’s system that processes the document.
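The registry-and-queue behavior described above can be sketched as follows. This is a minimal Python illustration, not the Broker's implementation; all class and method names are invented for the example:

```python
from collections import defaultdict

class MiniBroker:
    """Toy model of the Broker's document-type registry and client queues."""

    def __init__(self):
        self.subscriptions = defaultdict(list)  # document type -> subscriber client IDs
        self.queues = defaultdict(list)         # client ID -> queued documents

    def subscribe(self, client_id, doc_type):
        # Register a subscriber's interest in a document type.
        self.subscriptions[doc_type].append(client_id)

    def publish(self, doc_type, document):
        # Place a copy of the document in the queue of every subscriber.
        for client_id in self.subscriptions[doc_type]:
            self.queues[client_id].append(dict(document))

    def retrieve(self, client_id):
        # Subscribers pull documents from their own queues.
        return self.queues[client_id].pop(0) if self.queues[client_id] else None

broker = MiniBroker()
broker.subscribe("orderProcessor", "acme:purchaseOrder")
broker.publish("acme:purchaseOrder", {"orderId": "PO-1001"})
print(broker.retrieve("orderProcessor"))  # {'orderId': 'PO-1001'}
```

Retrieving a document from the queue would, on a real system, trigger an activity on the subscriber's side that processes it.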
A webMethods system can contain multiple Brokers. Brokers can operate in groups,
called territories, which allow several Brokers to share document type and subscription
information. For additional information about Brokers, see the webMethods Broker
Administrator’s Guide.
For more information about how documents flow between the Integration Server and the
Broker, see Chapter 2, “An Overview of the Publish and Subscribe Paths”.
Documents
In an integration solution built on the publish‐and‐subscribe model, applications publish
and subscribe to documents. Documents are objects that webMethods components use to
encapsulate and exchange data. A document represents the body of data that a resource
passes to webMethods components. Often it represents a business event such as placing
an order (purchase order document), shipping goods (shipping notice), or adding a new
employee (new employee record).
Each published document includes an envelope. The envelope is much like a header in an
email message. The envelope records information such as the sender’s address, the time
the document was sent, sequence numbers, and other useful information for routing and
control. It contains information about the document and its transit through your
webMethods system.
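The relationship between a document's body and its envelope can be pictured as follows. The field names in this sketch are illustrative only; they are not the exact envelope schema:

```python
import time
import uuid

# Hypothetical shape of a published document: a body of business data
# plus an envelope of routing and control information.
purchase_order = {
    "body": {"orderId": "PO-1001", "amount": 149.95},
    "envelope": {
        "pubId": "integrationServer01",  # sender's address
        "pubTime": time.time(),          # time the document was sent
        "tag": str(uuid.uuid4()),        # used to match replies to requests
        "seqNumber": 1,                  # sequence number for ordering
    },
}
```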
Note: With webMethods Developer, you can create Broker/local triggers and JMS
triggers. This guide discusses development and use of Broker/local triggers only.
Where the terms “trigger” or “triggers” appear in this guide, they refer to
Broker/local triggers.
Services
Services are method‐like units of work. They contain logic that the Integration Server
executes. You build services to carry out work such as extracting data from documents,
interacting with back‐end resources, and publishing documents to the Broker. When you
build a trigger, you specify the service that you want to use to process the documents that
you subscribe to.
For more information about building services, see the webMethods Developer User’s Guide.
Adapter Notifications
Adapter notifications notify your webMethods system whenever a specific event occurs on
an adapter's resource. The adapter notification publishes a document when the specified
event occurs on the resource. For example, if you are using the JDBC Adapter and a
change occurs in a database table that an adapter notification is monitoring, the adapter
notification publishes a document containing data from the event and sends it to the
Integration Server. Each adapter notification has an associated publishable document
type. The Integration Server assigns this document type the same name as the adapter
notification but appends “PublishDocument” to the name.
You can use triggers to subscribe to the publishable document types associated with
adapter notifications. The service associated with the publishable document type in the
trigger condition might perform some additional processing, updating, or
synchronization based on the contents of the adapter notification.
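The naming rule for the associated publishable document type can be expressed directly (the notification name below is invented for illustration):

```python
def publishable_doc_type_name(notification_name):
    """The Integration Server assigns the publishable document type the
    same name as the adapter notification, with "PublishDocument" appended."""
    return notification_name + "PublishDocument"

print(publishable_doc_type_name("customerTableNotification"))
# customerTableNotificationPublishDocument
```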
Canonical Documents
A canonical document is a standardized representation that a document might assume
while it is passing through your webMethods system. A canonical document acts as the
intermediary data format between resources.
For example, in an implementation that accepts purchase orders from companies, one of
the steps in the process converts the purchase order document to a company’s standard
purchase order format. This format is called the "canonical" form of the purchase order
document. The canonical document is published, delivered, and passed to services that
process purchase orders.
By converting a document to a neutral intermediate format, subscribers (such as adapter
services) only need to know how to convert the canonical document to the required
application format. If canonical documents were not used, every subscriber would have
to be able to decode the native document format of every publisher.
A canonical document is a publishable document type. The canonical document is used
when building publishing services and subscribed to when building triggers. In flow
services, you can map documents from the native format of an application to the
canonical format.
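The mapping role of a canonical document can be sketched as below. All field names are hypothetical; the point is that each subscriber converts only from the canonical form, never from each publisher's native form:

```python
def to_canonical(native_po):
    """Map a hypothetical native purchase-order format to the neutral
    canonical format."""
    return {
        "orderNumber": native_po["po_num"],
        "customerId": native_po["cust"],
        "totalAmount": float(native_po["amt"]),
    }

def to_target(canonical_po):
    """Each subscriber only needs to convert the canonical document to
    its own required application format."""
    return {
        "OrderNo": canonical_po["orderNumber"],
        "Customer": canonical_po["customerId"],
        "Total": canonical_po["totalAmount"],
    }

native = {"po_num": "PO-1001", "cust": "C42", "amt": "149.95"}
print(to_target(to_canonical(native)))
```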
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Overview of the Publishing Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Overview of the Subscribe Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Overview of Local Publishing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Introduction
In the webMethods system, Integration Servers exchange documents via publication and
subscription. One Integration Server publishes a document and one or more Integration
Servers subscribe to and process that document.
This chapter provides overviews of how the Integration Server interacts with the Broker
to publish and subscribe to documents, specifically
How the Integration Server publishes documents to the Broker.
How the Integration Server retrieves documents from the Broker.
How the Integration Server publishes and subscribes to documents locally.
Note: Unless otherwise noted, this guide describes the functionality and interaction of
the webMethods Integration Server version 7.1 and the webMethods Broker version
7.1.
Note: If a Broker is not configured for the Integration Server, all documents are
published locally, and delivering documents to a specific recipient is not available. For
more information about publishing documents locally, see “Overview of Local
Publishing” on page 34.
The following diagram illustrates how the Integration Server publishes or delivers
documents to the Broker when the Broker is connected.
[Diagram: the publishing path. Steps 1-7 below trace a document from the
publishing service through the dispatcher to the Broker, which places it in
guaranteed storage and in client queues X and Y.]
Step Description
1 A publishing service on the Integration Server sends a document to the
dispatcher (or an adapter notification publishes a document when an event
occurs on the resource the adapter monitors).
Before the Integration Server sends the document to the dispatcher, it validates
the document against its publishable document type. If the document is not
valid, the service returns an exception specifying the validation error.
2 The dispatcher obtains a connection from the connection pool. The connection
pool is a reserved set of connections that the Integration Server uses to publish
documents to the Broker. To publish a document to the Broker, the Integration
Server uses a connection for the default client.
3 The dispatcher sends the document to the Broker.
4 The Broker examines the storage type for the document to determine how to
store the document.
If the document is volatile, the Broker stores the document in memory.
If the document is guaranteed, the Broker stores the document in memory
and on disk.
5 The Broker routes the document to subscribers by doing one of the following:
If the document was published (broadcast), the Broker identifies subscribers
and places a copy of the document in the client queue for each subscriber.
If the document was delivered, the Broker places the document in the
queue for the client specified in the delivery request.
If there are no subscribers for the document, the Broker returns an
acknowledgement to the publisher and then discards the document. If,
however, a deadletter subscription exists for the document, the Broker
deposits the document in the queue containing the deadletter subscription.
For more information about creating deadletter subscriptions, see
webMethods Broker Client Java API Reference Guide.
A document remains in the queue on the Broker until it is picked up by the
subscribing client. If the time‐to‐live for the document elapses, the Broker
discards the document. For more information about setting time‐to‐live for a
publishable document type, see “Setting the Time‐to‐Live for a Publishable
Document Type” on page 67.
6 If the document is guaranteed, the Broker returns an acknowledgement to the
dispatcher to indicate successful receipt and storage of the document. The
dispatcher returns the connection to the connection pool.
7 The Integration Server returns control to the publishing service, which executes
the next step.
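Steps 4 and 6 above can be sketched as follows. This is a minimal Python illustration of the storage and acknowledgement rules, not the Broker's implementation:

```python
def store_document(document, storage_type, memory, disk):
    """Sketch of the Broker's storage rules: volatile documents are held
    in memory only; guaranteed documents are held in memory and on disk,
    and only guaranteed documents are acknowledged to the publisher."""
    memory.append(document)
    if storage_type == "guaranteed":
        disk.append(document)
        return "ack"      # acknowledgement returned to the dispatcher
    return None           # volatile documents are not acknowledged

memory, disk = [], []
print(store_document({"id": 1}, "guaranteed", memory, disk))  # ack
print(store_document({"id": 2}, "volatile", memory, disk))
```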
Notes:
You can configure publishable document types and Integration Server so that
Integration Server does not validate documents when they are published. For more
information about validating publishable document types, see “Specifying Validation
for a Publishable Document Type” on page 68.
If a transient error occurs while the Integration Server publishes a document, the
audit subsystem logs the document and assigns it a status of FAILED. A transient error
is an error that arises from a condition that might be resolved quickly, such as the
unavailability of a resource due to network issues or failure to connect to a database.
You can use webMethods Monitor to find and resubmit documents with a status of
FAILED.
[Diagram: publishing documents when the Broker is not available. Steps 1-7
below trace a document from the publishing service into the outbound document
store on disk and, after the connection is re‐established, on to the Broker's
guaranteed storage and client queues X and Y.]
Step Description
1 A publishing service on the Integration Server sends a document to the
dispatcher (or an adapter notification publishes a document when an event
occurs on the resource the adapter monitors).
Before the Integration Server sends the document to the dispatcher, it validates
the document against its publishable document type. If the document is not
valid, the service returns an exception specifying the validation error.
2 The dispatcher detects that the Broker is not available and does one of the
following depending on the storage type of the document:
If the document is guaranteed, the dispatcher routes the document to the
outbound document store on disk.
If the document is volatile, the dispatcher discards the document and the
publishing service throws an exception.
The Integration Server executes the next step in the publishing service.
3 When the Integration Server re‐establishes a connection to the Broker, the
Integration Server obtains a single connection from the connection pool.
4 The Integration Server automatically sends the documents from the outbound
document store to the Broker. To empty the outbound document store more
rapidly, the Integration Server sends the documents in batches instead of one at
a time.
Note: The Integration Server uses a single connection to empty the outbound
document store to preserve publication order.
5 The Broker examines the storage type for the document, determines that it is
guaranteed and stores the document in memory and on disk.
6 The Broker routes the document to subscribers by doing one of the following:
If the document was published (broadcast), the Broker identifies subscribers
and places a copy of the document in the client queue for each subscriber.
If the document was delivered, the Broker places the document in the queue
for the client specified in the delivery request.
If there are no subscribers for the document, the Broker returns an
acknowledgement to the publisher and then discards the document. If,
however, a deadletter subscription exists for the document, the Broker
deposits the document in the queue containing the deadletter subscription.
For more information about creating deadletter subscriptions, see
webMethods Broker Client Java API Reference Guide.
A document remains in the queue on the Broker until the subscribing client
picks it up. If the time‐to‐live for the document elapses, the Broker discards the
document. For more information about setting time‐to‐live for a publishable
document type, see “Setting the Time‐to‐Live for a Publishable Document Type”
on page 67.
7 The Broker returns an acknowledgement to the Integration Server to indicate
successful receipt and storage of the guaranteed document. The Integration
Server removes the document from the outbound document store.
Notes:
If you do not want published documents placed in the outbound document store
when the Broker is unavailable, you can configure Integration Server to throw a
ServiceException instead. The value of the watt.server.publish.useCSQ parameter
determines whether Integration Server places documents in the outbound document
store or throws a ServiceException.
After the connection to the Broker is re‐established, the Integration Server sends all
newly published documents (guaranteed and volatile) to the outbound document
store until the outbound store has been emptied. This allows the Integration Server to
maintain publication order. After the Integration Server empties the outbound
document store, the Integration Server resumes publishing documents directly to the
Broker.
If Integration Server makes 4 attempts to transmit a document from the outbound
document store to the Broker and all attempts fail, the audit subsystem logs the
document and assigns it a status of STATUS_TOO_MANY_TRIES.
If a transient error occurs while the Integration Server publishes a document, the
audit subsystem logs the document and assigns it a status of FAILED.
You can configure publishable document types and Integration Server so that
Integration Server does not validate documents when they are published. For more
information about validating publishable document types, see “Specifying Validation
for a Publishable Document Type” on page 68.
Tip! You can use webMethods Monitor to find and resubmit documents with a status
of STATUS_TOO_MANY_TRIES or FAILED. For more information about using
webMethods Monitor, see the webMethods Monitor documentation.
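The outbound-store behavior described in these notes can be sketched as follows. This is a hypothetical Python model; on a real server this behavior is governed by the watt.server.publish.useCSQ parameter and the server's own batching logic:

```python
class OutboundStore:
    """Toy model of client-side queuing when the Broker is unavailable."""

    def __init__(self):
        self.pending = []  # documents waiting for the Broker, in order

    def publish(self, document, storage_type, broker_available):
        if broker_available and not self.pending:
            return "sent"  # normal path: publish directly to the Broker
        if storage_type == "volatile" and not broker_available:
            # Volatile documents are discarded and the service gets an exception.
            raise RuntimeError("Broker unavailable; volatile document discarded")
        # Guaranteed documents (and, after reconnect, all new documents until
        # the store is empty) go to the outbound store to preserve order.
        self.pending.append(document)
        return "queued"

    def drain(self, batch_size=10):
        # On reconnect, send queued documents in batches, oldest first.
        batch, self.pending = self.pending[:batch_size], self.pending[batch_size:]
        return batch
```

Note how a document published while the store is still draining is queued even though the Broker is available again; this models the publication-order guarantee described above.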
The following diagram illustrates how the Integration Server and Broker handle a
synchronous request/reply.
[Diagram: the synchronous request/reply path. Steps 1-11 below trace the
request document from the publishing service to the subscribers' client queues
X and Y, and the reply documents back through the publishing server's
request/reply client queue.]
Step Description
1 A publishing service sends a document (the request) to the dispatcher. The
Integration Server populates the tag field in the document envelope with a
unique identifier that will be used to match up the reply document with this
request.
The publishing service enters into a waiting state. The service will not resume
execution until it receives a reply from a subscriber or the wait time elapses. The
Integration Server begins tracking the wait time as soon as it publishes the
document.
Before the Integration Server sends the document to the dispatcher, it validates
the document against its publishable document type. If the document is not
valid, the service returns an exception specifying the validation error and does
not enter the waiting state.
2 The dispatcher obtains a connection from the connection pool. The connection
pool is a reserved set of connections that the Integration Server uses to publish
documents to the Broker. To publish a request document to the Broker, the
Integration Server uses a connection for the request/reply client.
Note: If the Broker is not available, the dispatcher routes the document to the
outbound document store. For more information, see “Publishing Documents
When the Broker Is Not Available” on page 21.
3 The dispatcher sends the document to the Broker.
4 The Broker examines the storage type for the document to determine how to
store the document.
If the document is volatile, the Broker stores the document in memory.
If the document is guaranteed, the Broker stores the document in memory
and on disk.
5 The Broker routes the document to subscribers by doing one of the following:
If the document was published (broadcast), the Broker identifies
subscribers and places a copy of the document in the client queue for each
subscriber.
If the document was delivered, the Broker places the document in the
queue for the client specified in the delivery request.
If there are no subscribers for the document, the Broker returns an
acknowledgement to the publisher and then discards the document. If,
however, a deadletter subscription exists for the document, the Broker
deposits the document in the queue containing the deadletter subscription.
For more information about creating deadletter subscriptions, see
webMethods Broker Client Java API Reference Guide.
A document remains in the queue on the Broker until it is picked up by the
subscribing client. If the time‐to‐live for the document elapses, the Broker
discards the document. For more information about setting time‐to‐live for a
publishable document type, see “Setting the Time‐to‐Live for a Publishable
Document Type” on page 67.
6 If the document is guaranteed, the Broker returns an acknowledgement to the
dispatcher to indicate successful receipt and storage of the document. The
dispatcher returns the connection to the connection pool.
7 Subscribers retrieve and process the document.
A subscriber uses the pub.publish:reply service to compose and publish a reply
document. This service automatically populates the tag field of the reply
document envelope with the same value used in the tag field of the request
document envelope.
The pub.publish:reply service also automatically specifies the requesting client as
the recipient of the reply document.
8 One or more subscribers send reply documents to the Broker. The Broker stores
the reply documents in memory.
The Broker places the reply documents in the request/reply client queue for the
Integration Server that initiated the request.
9 The Integration Server that initiated the request obtains a request/reply client
from the connection pool and retrieves the reply documents from the Broker.
10 The Integration Server uses the tag value of the reply document to match up the
reply with the original request.
11 The Integration Server places the reply document in the pipeline of the waiting
service. The waiting service resumes execution.
Notes:
If the requesting service specified a publishable document type for the reply
document, the reply document must conform to the specified type. Otherwise, the
reply document can be an instance of any publishable document type.
A single request might receive many replies. The Integration Server that initiated the
request uses only the first reply document it retrieves from the Broker and discards
all others. "First" is arbitrary; the Broker provides no guarantee about the order in
which it processes incoming replies.
All reply documents are treated as volatile documents. Volatile documents are stored
in memory and will be lost if the resource on which the reply document is located
shuts down or if a connection is lost while the reply document is in transit.
If the wait time elapses before the service receives a reply, the Integration Server ends
the request, and the service returns a null document that indicates the request timed
out. The Integration Server then executes the next step in the flow service. If a reply
document arrives after the flow service resumes execution, the Integration Server
rejects the document and creates a journal log message stating that the document was
rejected because there is no thread waiting for the document.
You can configure publishable document types and Integration Server so that
Integration Server does not validate documents when they are published. For more
information about validating publishable document types, see “Specifying Validation
for a Publishable Document Type” on page 68.
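The tag-matching and first-reply-wins behavior described in these steps and notes can be sketched as follows (an invented Python model, not the pub.publish:reply implementation):

```python
import uuid

class RequestReply:
    """Toy model of synchronous request/reply: the tag in the envelope
    matches a reply to its request, and only the first reply is used."""

    def __init__(self):
        self.waiting = {}  # tag -> replies received for that request

    def publish_request(self, body):
        # The request body would be published to the Broker here; the tag
        # is the unique identifier placed in the document envelope.
        tag = str(uuid.uuid4())
        self.waiting[tag] = []
        return tag

    def deliver_reply(self, tag, reply):
        if tag not in self.waiting:
            # No thread is waiting for this tag: reject the document,
            # as the Integration Server does after the wait time elapses.
            return "rejected"
        self.waiting[tag].append(reply)
        return "accepted"

    def wait_for_reply(self, tag):
        replies = self.waiting.pop(tag, [])
        # The first reply retrieved wins; all other replies are discarded.
        return replies[0] if replies else None  # None models a timeout
```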
Note: For information about the subscribe path for documents that match a join
condition, see “Subscribe Path for Documents that Satisfy a Join Condition” on
page 167.
[Diagram: the subscribe path for a published document. Steps 1-6 below trace
documents from client queues X and Y on the Broker, through the dispatcher,
into trigger queues X and Y in the memory‐based trigger document store, and on
to trigger services X1, X2, Y1, and Y2.]
Step Description
1 The dispatcher on the Integration Server uses a server thread to request
documents from a trigger’s client queue on the Broker.
Note: Each trigger on the Integration Server has a corresponding client queue on
the Broker.
2 The thread retrieves a batch of documents for the trigger.
3 The dispatcher places the documents in the trigger’s queue in the trigger
document store. The trigger document store is saved in memory. The dispatcher
then releases the server thread used to retrieve the documents.
4 The dispatcher obtains a thread from the server thread pool, pulls a document
from the trigger queue, and evaluates the document against the conditions in the
trigger.
Note: If exactly‐once processing is configured for the trigger, the Integration
Server first determines whether the document is a duplicate of one that has
already been processed by the trigger. The Integration Server continues
processing the document only if the document is new.
5 If the document matches a trigger condition, the dispatcher executes the trigger
service associated with that condition.
If the document does not match a trigger condition, the Integration Server
discards the document, returns an acknowledgement to the Broker, and returns
the server thread to the server thread pool. The Integration Server also generates
a journal log message stating that the document did not match a condition.
6 After the trigger service executes to completion (success or error), one of the
following occurs:
If the trigger service executed successfully, the Integration Server returns an
acknowledgement to the Broker (if this is a guaranteed document). The
Integration Server then removes the copy of the document from the trigger
queue and returns the server thread to the thread pool.
If a service exception occurs, the trigger service ends in error and the
Integration Server rejects the document. If the document is guaranteed, the
Integration Server returns an acknowledgement to the Broker. The
Integration Server removes the copy of the document from the trigger
queue, returns the server thread to the thread pool, and sends an error
document to indicate that an error has occurred.
If a transient error occurs during trigger service execution and the service
catches the error, wraps it and re‐throws it as an ISRuntimeException, then
the Integration Server waits for the length of the retry interval and re‐
executes the service using the original document as input. If the Integration
Server reaches the maximum number of retries and the trigger service still
fails because of a transient error, the Integration Server treats the last failure
as a service error. For more information about retrying a trigger service, see
“Configuring Transient Error Handling” on page 134.
Notes:
After receiving an acknowledgement, the Broker removes its copy of the document
from guaranteed storage. The Integration Server returns an acknowledgement for
guaranteed documents only.
If the Integration Server shuts down or reconnects to the Broker before
acknowledging a guaranteed document, the Integration Server recovers the
document from the Broker when the server restarts or the connection is
re‐established. (That is, the documents are redelivered.) For more information about
guaranteed documents, see “Selecting a Document Storage Type” on page 65.
If a trigger service generates audit data on error and includes a copy of the input
pipeline in the audit log, you can use webMethods Monitor to re‐invoke the trigger
service at a later time. For more information about configuring services to generate
audit data, see the webMethods Developer User’s Guide.
It is possible that a document could satisfy more than one condition in a trigger.
However, the Integration Server executes only the service associated with the first
satisfied condition.
The processing mode for a trigger determines whether the Integration Server
processes documents in a trigger queue serially or concurrently. In serial processing,
the Integration Server processes the documents one at a time in the order in which the
documents were placed in the trigger queue. In concurrent processing, the
Integration Server processes as many documents as it can at one time, but not
necessarily in the same order in which the documents were placed in the queue. For
more information about document processing, see “Selecting Messaging Processing”
on page 128.
If a transient error occurs during document retrieval or storage, the audit subsystem
logs the document and assigns it a status of FAILED. A transient error is an error that
arises from a condition that might be resolved later, such as the unavailability of a
resource due to network issues or failure to connect to a database. You can use
webMethods Monitor to find and resubmit documents with a FAILED status. For
more information about using webMethods Monitor, see the webMethods Monitor
documentation.
You can configure a trigger to suspend and retry at a later time if retry failure occurs.
Retry failure occurs when Integration Server makes the maximum number of retry
attempts and the trigger service still fails because of an ISRuntimeException. For
more information about handling retry failure, see “Handling Retry Failure” on
page 136.
Note: If a publishing service specifies an individual trigger as the destination of the
document (that is, the publishing service specifies a trigger client ID as the destination
ID), the subscription path the document follows is the same as the path followed by a
published document.
The following diagram illustrates the subscription path for a document delivered to the
default client.
[Diagram: the subscription path for a document delivered to the default client. The
dispatcher retrieves documents from the default client queue on the Broker, places them
in guaranteed storage, and routes copies to trigger queues X and Y in the trigger
document store; server threads then pull the documents from the queues and execute
trigger services X1, X2, Y1, and Y2. The numbered steps are described in the following
table.]
Step Description
1 The dispatcher on the Integration Server requests documents from the default
client’s queue on the Broker.
Note: The default client is the Broker client created for the Integration Server.
The Broker places documents in the default client’s Broker queue only if the
publisher delivered the document to the Integration Server’s client ID.
2 A server thread retrieves documents delivered to the default client in batches.
The number of documents the thread retrieves at one time is determined by the
capacity and refill level of the default document store and the number of
documents available for the default client on the Broker. For more information
about configuring the default document store, see the webMethods Integration
Server Administrator’s Guide.
3 The dispatcher places a copy of the documents in memory in the default
document store.
4 The dispatcher identifies subscribers to the document and routes a copy of the
document to each subscriber’s trigger queue.
In the case of delivered documents, the Integration Server saves the documents
to a trigger queue. The trigger queue is located within a trigger document store
that is saved on disk.
5 The Integration Server removes the copy of the document from the default
document store and, if the document is guaranteed, returns an
acknowledgement to the Broker. The Broker removes the document from the
default client’s queue.
6 The dispatcher obtains a thread from the server thread pool, pulls the
document from the trigger queue, and evaluates the document against the
conditions in the trigger.
Note: If exactly‐once processing is configured for the trigger, the Integration
Server first determines whether the document is a duplicate of one already
processed by the trigger. The Integration Server continues processing the
document only if the document is new.
7 If the document matches a trigger condition, the Integration Server executes the
trigger service associated with that condition.
If the document does not match a trigger condition, the Integration Server
sends an acknowledgement to the trigger queue, discards the document
(removes it from the trigger queue), and returns the server thread to the server
thread pool. The Integration Server also generates a journal log message stating
that the document did not match a condition.
8 After the trigger service executes to completion (success or error), one of the
following occurs:
If the trigger service executed successfully, the Integration Server returns an
acknowledgement to the trigger queue (if this is a guaranteed document),
removes the document from the trigger queue, and returns the server
thread to the thread pool.
If a service exception occurs, the trigger service ends in error and the
Integration Server rejects the document, removes the document from the
trigger queue, returns the server thread to the thread pool, and sends an
error document to indicate that an error has occurred. If the document is
guaranteed, the Integration Server returns an acknowledgement to the
trigger queue. The trigger queue removes its copy of the guaranteed
document from storage.
If a transient error occurs during trigger service execution and the service
catches the error, wraps it, and re-throws it as an ISRuntimeException, then
the Integration Server waits for the length of the retry interval and re-executes
the service using the original document as input. If the Integration
Server reaches the maximum number of retries and the trigger service still
fails because of a transient error, the Integration Server treats the last failure
as a service error. For more information about retrying a trigger service, see
“Configuring Transient Error Handling” on page 134.
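The retry behavior described in the last item can be sketched as a simple loop. The exception class, service, and retry settings below are hypothetical stand-ins for an ISRuntimeException and a trigger's configured retry interval and maximum retry count; this is not the actual Integration Server implementation.

```python
import time

class TransientError(Exception):
    """Stand-in for an ISRuntimeException thrown by a trigger service."""

def run_with_retries(trigger_service, document, max_retries=3, retry_interval=0.01):
    """Re-execute the service with the original document until it
    succeeds or the maximum number of retries is reached."""
    attempts = 0
    while True:
        try:
            return trigger_service(document)
        except TransientError:
            if attempts >= max_retries:
                raise                    # treat the last failure as a service error
            attempts += 1
            time.sleep(retry_interval)   # wait out the retry interval

# A hypothetical service that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_service(doc):
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("resource unavailable")
    return f"processed {doc}"

print(run_with_retries(flaky_service, "order-123"))  # processed order-123
```

Note that each retry uses the original document as input, so the service must be safe to re-execute from the start.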
Notes:
The Integration Server saves delivered documents in a trigger document store located
on disk. The Integration Server saves published documents in a trigger document
store located in memory.
If the Integration Server shuts down before processing a guaranteed document saved
in a trigger document store on disk, the Integration Server recovers the document
from the trigger document store when it restarts. Volatile documents are saved in
memory and are not recovered upon restart.
If a service generates audit data on error and includes a copy of the input pipeline in
the audit log, you can use webMethods Monitor to re‐invoke the trigger service at a
later time. For more information about configuring services to generate audit data,
see the webMethods Developer User’s Guide.
It is possible that a document could match more than one condition in a trigger.
However, the Integration Server executes only the service associated with the first
matched condition.
The processing mode for a trigger determines whether the Integration Server
processes documents in a trigger queue serially or concurrently. In serial processing,
the Integration Server processes the documents one at a time in the order in which the
documents were placed in the trigger queue. In concurrent processing, the
Integration Server processes as many documents as it can at one time, but not
necessarily in the same order in which the documents were placed in the queue. For
more information about document processing, see “Selecting Messaging Processing”
on page 128.
If a transient error occurs during document retrieval or storage, the audit subsystem
logs the document and assigns it a status of FAILED. You can use webMethods
Monitor to find and resubmit documents with a FAILED status. For more information
about using webMethods Monitor, see the webMethods Monitor documentation.
You can configure a trigger to suspend and retry at a later time if retry failure occurs.
Retry failure occurs when Integration Server makes the maximum number of retry
attempts and the trigger service still fails because of an ISRuntimeException. For
more information about handling retry failure, see “Handling Retry Failure” on
page 136.
[Diagram: the path of a locally published document. A publishing service sends the
document to the dispatcher, which places a copy in each subscribing trigger queue;
server threads then pull the documents from the queues and execute the associated
trigger services. The numbered steps are described in the following table.]
Step Description
1 A publishing service on the Integration Server sends a document to the
dispatcher.
Before the Integration Server sends the document to the dispatcher, it validates
the document against its publishable document type. If the document is not
valid, the service returns an exception specifying the validation error.
2 The dispatcher does one of the following:
If triggers subscribe to the document, the dispatcher places a copy of the
document in each subscriber’s trigger queue. The dispatcher saves locally
published documents in a trigger document store located on disk.
If there are no subscribers for the document, the dispatcher discards the
document.
3 The dispatcher obtains a thread from the server thread pool, pulls the
document from the trigger queue, and evaluates the document against the
conditions in the trigger.
Note: If exactly‐once processing is configured for the trigger, the Integration
Server first determines whether the document is a duplicate of one already
processed by the trigger. The Integration Server continues processing the
document only if the document is new.
4 If the document matches a trigger condition, the dispatcher executes the trigger
service associated with that condition.
If the document does not match a trigger condition, the Integration Server
sends an acknowledgement to the trigger queue, discards the document
(removes it from the trigger queue), and returns the server thread to the server
thread pool.
5 After the trigger service executes to completion (success or error), one of the
following occurs:
If the trigger service executed successfully, the Integration Server sends an
acknowledgement to the trigger queue (if this is a guaranteed document),
removes the document from the trigger queue, and returns the server
thread to the thread pool.
If a service exception occurs, the trigger service ends in error and the
Integration Server rejects the document, removes the document from the
trigger queue, and returns the server thread to the thread pool. If the
document is guaranteed, the Integration Server sends an acknowledgement
to the trigger queue.
If a transient error occurs during trigger service execution and the service
catches the error, wraps it and re‐throws it as an ISRuntimeException, then
the Integration Server waits for the length of the retry interval and
re‐executes the service using the original document as input. If Integration
Server reaches the maximum number of retries and the trigger service still
fails because of a transient error, the Integration Server treats the last failure
as a service error. For more information about retrying a trigger service, see
“Configuring Transient Error Handling” on page 134.
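The exactly-once check mentioned in step 3 can be sketched as a duplicate filter keyed on a document identifier. The in-memory set below is a hypothetical stand-in for the document history database that Integration Server actually consults; this is a simplified sketch, not the product implementation.

```python
class ExactlyOnceFilter:
    """Process a document only if its identifier has not been seen before."""

    def __init__(self):
        self.seen = set()   # stand-in for the document history database

    def process(self, doc_id, trigger_service):
        if doc_id in self.seen:
            return "DUPLICATE"          # skip: already processed by this trigger
        self.seen.add(doc_id)
        return trigger_service(doc_id)  # document is new: run the trigger service

f = ExactlyOnceFilter()
print(f.process("uuid-1", lambda d: "NEW"))   # NEW
print(f.process("uuid-1", lambda d: "NEW"))   # DUPLICATE
```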
Notes:
You can configure publishable document types and Integration Server so that
Integration Server does not validate documents when they are published. For more
information about validating publishable document types, see “Specifying Validation
for a Publishable Document Type” on page 68.
Integration Server saves locally published documents in a trigger document store
located on disk. If Integration Server shuts down before processing a locally
published guaranteed document, Integration Server recovers the document from the
trigger document store when it restarts. Integration Server does not recover volatile
documents when it restarts.
If a subscribing trigger queue reaches its maximum capacity, you can configure
Integration Server to reject locally published documents for that trigger queue. For
more information about this feature, see the description of the
watt.server.publish.local.rejectOOS parameter in the webMethods Integration Server
Administrator’s Guide.
If a service generates audit data on error and includes a copy of the input pipeline in
the audit log, you can use webMethods Monitor to re‐invoke the trigger service at a
later time. For more information about configuring services to generate audit data,
see the webMethods Developer User’s Guide.
It is possible that a document could match more than one condition in a trigger.
However, Integration Server executes only the service associated with the first
matched condition.
The processing mode for a trigger determines whether the Integration Server
processes documents in a trigger queue serially or concurrently. In serial processing,
Integration Server processes the documents one at a time in the order in which the
documents were placed in the trigger queue. In concurrent processing, the
Integration Server processes as many documents as it can at one time, but not
necessarily in the same order in which the documents were placed in the queue. For
more information about document processing, see “Selecting Messaging Processing”
on page 128.
You can configure a trigger to suspend and retry at a later time if retry failure occurs.
Retry failure occurs when Integration Server makes the maximum number of retry
attempts and the trigger service still fails because of an ISRuntimeException. For
more information about handling retry failure, see “Handling Retry Failure” on
page 136.
You can configure Integration Server to strictly enforce a locally published
document’s time‐to‐live and discard the document before processing it if the
document has expired. For more information about this feature, see the description of
the watt.server.trigger.local.checkTTL parameter in the webMethods Integration Server
Administrator’s Guide.
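The strict time-to-live enforcement described above can be sketched as a timestamp check performed before a queued document is processed. The document fields and function names below are hypothetical illustrations, not Integration Server's actual envelope fields.

```python
import time

def is_expired(enqueue_time, ttl_seconds, now=None):
    """True if the document's time-to-live has elapsed."""
    now = time.time() if now is None else now
    return (now - enqueue_time) > ttl_seconds

def dispatch(document, trigger_service):
    """Discard an expired document instead of processing it."""
    if is_expired(document["enqueued"], document["ttl"]):
        return "DISCARDED"
    return trigger_service(document)

# A document published two minutes ago with a 60-second time-to-live.
doc = {"enqueued": time.time() - 120, "ttl": 60}
print(dispatch(doc, lambda d: "PROCESSED"))   # DISCARDED
```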
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Step 1: Research the Integration Problem and Determine Solution . . . . . . . . . . . . . . . . . . . . . . 41
Step 2: Determine the Production Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Step 3: Create the Publishable Document Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Step 4: Make the Publishable Document Types Available . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Step 5: Create the Services that Publish the Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Step 6: Create the Services that Process the Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Step 7: Define the Triggers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Step 8: Synchronize the Publishable Document Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Introduction
A publish-and-subscribe integration solution has two sides: the publishing side and
the subscribing side. The following table lists the tasks that you need to perform to
build an integration solution and indicates whether the publishing side, the
subscribing side, or both is responsible for each task.
1 Research the integration problem and determine how you want to resolve it.
(Both sides)
2 Determine the development environment. (Both sides)
3 Create the publishable document types for the documents to be published.
(Publishing side)
4 Make the publishable document types available to the subscribing side.
(Both sides)
5 Create the services that publish the documents. (Publishing side)
6 Create the services that process the published documents. (Subscribing side)
7 Define the triggers that associate the published documents with the services
that process them. (Subscribing side)
8 Synchronize the publishable document types if necessary. (Both sides)
For the integration solution to work correctly, the publishable document type on the
publishing side and the subscribing side must reference the same Broker document type.
Publishable document types must be associated with the same Broker document type
The following table describes how to make your publishable document type correspond
to the same Broker document type based on your development environment.
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Configure the Connection to the Broker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Configuring Document Stores . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Specifying a User Account for Invoking Services Specified in Triggers . . . . . . . . . . . . . . . . . . . . 49
Configuring Server Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Configuring Settings for a Document History Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Configuring Integration Server for Key Cross-Reference and Echo Suppression . . . . . . . . . . . . 53
Configuring Integration Server to Handle Native Broker Events . . . . . . . . . . . . . . . . . . . . . . . . . 53
Introduction
Before you can begin to publish and subscribe to documents, whether locally or using a
Broker, you need to specify settings for some of the Integration Server components and
services. Specifying settings consists of using the Integration Server Administrator to:
Configure the connection to the Broker (if publishing or subscribing to documents on
the Broker).
Configure the document stores where the Integration Server will save documents
until they can be published or processed.
Specify a user account for executing services specified in triggers.
Configure a document history database.
Configure a key cross‐referencing and echo suppression database.
Configure settings for handling native Broker events.
Configure other Integration Server parameters that can affect a publish‐and‐subscribe
solution.
Note: With the exception of configuring the connection to the Broker, you do not have
to configure these settings until you have finished developing an integration solution
and are ready to test the publication/subscription of a document.
Note: If you switch your Integration Server connection from one Broker to a Broker in
another territory, you may need to synchronize your publishable document types
with the new Broker. Switching your Broker connection is not recommended or
supported. For more information about synchronizing publishable document types,
see “Synchronizing Publishable Document Types” on page 74.
You can also specify a user account that you or another server administrator defined.
When the Integration Server receives a document that satisfies a trigger condition, the
Integration Server uses the credentials for the specified user account to invoke the service
specified in the trigger condition.
Make sure that the user account you select includes the credentials required by the
execute ACLs assigned to the services associated with triggers. For example, suppose
that you specify “Developer” as the user account for invoking services in triggers. The
receiveCustomerInfo trigger contains a condition that associates a publishable document
type with the service addCustomer. The addCustomer service specifies “Replicator” for the
Execute ACL. When the trigger condition is met, the addCustomer service will not execute
because the user account you selected (Developer) does not satisfy the Execute ACL
assigned to the service (Replicator).
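The mismatch in this example can be sketched as a simple membership check. The user and ACL names come from the example above; the check itself is a hypothetical simplification of how Integration Server evaluates Execute ACLs.

```python
def can_invoke(user_acls, service_execute_acl):
    """The invoking user must belong to the ACL named by the service's Execute ACL."""
    return service_execute_acl in user_acls

# The trigger runs services as "Developer", but addCustomer
# requires membership in the "Replicator" Execute ACL.
print(can_invoke({"Developer", "Everybody"}, "Replicator"))   # False: denied
print(can_invoke({"Replicator", "Everybody"}, "Replicator"))  # True: allowed
```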
For more information about setting the Run Trigger Service As User property, see the
webMethods Integration Server Administrator’s Guide.
watt.server.brokerTransport.ret
Specifies the number of times the Broker re-sends keep-alive messages before
disconnecting an unresponsive Integration Server.
watt.server.cluster.aliasList
Specifies a comma‐delimited list of aliases for remote Integration Servers in a cluster.
Integration Server uses this list when executing the remote invokes that update the other
cluster nodes with trigger management changes (such as suspending/resuming
document retrieval or document processing).
watt.server.control.controlledDeliverToTriggers.pctMaxThreshold
Specifies the trigger queue threshold at which Integration Server slows down the
delivery rate of locally published documents. This threshold is expressed as a percentage
of the trigger queue capacity.
watt.server.control.maxPersist
Specifies the capacity of the outbound document store.
watt.server.control.maxPublishOnSuccess
Specifies the maximum number of documents that the server can publish on success at
one time.
watt.server.dispatcher.comms.brokerPing
Specifies how often (in milliseconds) trigger BrokerClients should ping the Broker to
prevent connections between a trigger BrokerClient and the Broker from becoming idle
and, as a result, to prevent the firewall from closing the idle connection.
watt.server.dispatcher.join.reaperDelay
Specifies how often (in milliseconds) the Integration Server removes state
information for completed and expired joins. The default is 1800000 milliseconds (30
minutes).
watt.server.idr.reaperInterval
Specifies the initial interval at which the scheduled service
wm.server.dispatcher:deleteExpiredUUID executes and removes expired document history
entries.
watt.server.publish.local.rejectOOS
Specifies whether Integration Server should reject documents published locally, using the
pub.publish:publish or pub.publish.publishAndWait services, when the queue for the subscribing
trigger is at maximum capacity. The default is “false”.
Note: Multiple triggers can subscribe to the same document. Integration Server places
the document in any subscribing trigger queue that is not at capacity.
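The effect of this parameter can be sketched as a capacity check at publication time. The queue model below is a hypothetical simplification: when reject_oos is False the sketch simply enqueues the document, whereas Integration Server's actual default behavior is more involved.

```python
def publish_locally(document, trigger_queues, reject_oos=False):
    """Place the document in each subscribing trigger queue.
    With reject_oos=True, a queue at maximum capacity rejects the document."""
    delivered = []
    for name, q in trigger_queues.items():
        if len(q["docs"]) >= q["capacity"] and reject_oos:
            continue                     # reject: this queue is at maximum capacity
        q["docs"].append(document)
        delivered.append(name)
    return delivered

queues = {
    "fullTrigger":  {"capacity": 1, "docs": ["pending"]},
    "emptyTrigger": {"capacity": 10, "docs": []},
}
print(publish_locally("doc-1", queues, reject_oos=True))   # ['emptyTrigger']
```

As the Note above states, a document with multiple subscribers is still placed in every subscribing queue that has room; only the full queues reject it.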
watt.server.publish.useCSQ
Specifies whether Integration Server uses outbound client‐side queuing if documents are
published when the Broker is unavailable. When this parameter is set to “false” and the
publish fails, a service exception occurs.
watt.server.publish.usePipelineBrokerEvent
Specifies whether Integration Server should bypass encoding that is normally performed
when documents are published to the Broker. For more information about when to set
this property, see “Configuring Integration Server to Handle Native Broker Events” on
page 53.
watt.server.publish.validateOnIS
Specifies whether Integration Server validates published documents all the time, never,
or on a per document type basis. For more information about document validation, see
“Specifying Validation for a Publishable Document Type” on page 68.
watt.server.trigger.interruptRetryOnShutdown
Specifies whether a request to shut down the Integration Server interrupts the retry
process for a trigger service. The default is “false”. For more information about
interrupting trigger service retries, see “Trigger Service Retries and Shutdown Requests”
on page 141.
watt.server.trigger.keepAsBrokerEvent
Specifies whether Integration Server should bypass decoding that is normally performed
when documents are retrieved from the Broker on behalf of a trigger. For more
information about when to set this property, see “Configuring Integration Server to
Handle Native Broker Events” on page 53.
watt.server.trigger.local.checkTTL
Specifies whether Integration Server should strictly enforce a locally published
document’s time‐to‐live. When this parameter is set to “true,” before processing a locally
published document in a trigger queue, Integration Server determines whether the
document has expired. Integration Server discards the document if it has expired. The
default is “false”.
watt.server.trigger.managementUI.excludeList
Specifies a comma‐delimited list of triggers to exclude from the Trigger Management pages
in the Integration Server Administrator. The Integration Server also excludes these
triggers from trigger management changes that suspend or resume document retrieval or
document processing for all triggers. The Integration Server does not exclude these
triggers from changes to capacity, refill level, or maximum execution threads that are
made using the global trigger controls (Queue Capacity Throttle and Trigger Execution
Threads Throttle).
watt.server.trigger.monitoringInterval
Specifies the interval, measured in seconds, at which Integration Server executes resource
monitoring services. A resource monitoring service is a service that you create to check
the availability of resources used by a trigger service. For more information about
resource monitoring services, see Appendix B, “Building a Resource Monitoring
Service”.
watt.server.trigger.preprocess.suspendAndRetryOnError
Indicates whether Integration Server suspends a trigger if an error occurs during the
pre-processing phase of trigger execution. The pre-processing phase encompasses the
time from when the trigger retrieves the document from its local queue to the time the trigger
service executes. For more information about this property, see “What Happens When
the Document History Database Is Not Available?” on page 155 and “Document Resolver
Service and Exceptions” on page 157.
watt.server.trigger.removeSubscriptionOnReloadOrReinstall
Specifies whether Integration Server deletes document type subscriptions for triggers
when the package containing the trigger reloads or an update of the package is installed.
watt.server.xref.type
Specifies where key cross-referencing and echo suppression information is written.
In some situations, you may want to bypass this encoding or decoding step on
Integration Server and instead send and receive “native” Broker events to and from the
Broker. These situations are when you:
Migrate Enterprise business logic to Integration Server.
Use custom Broker clients written in Java, C, or COM/ActiveX.
You configure Integration Server to handle native Broker events by setting server
parameters.
1 Open the Integration Server Administrator if it is not already open.
2 In the Settings menu of the Navigation panel, click Extended.
3 Look for the watt.server.publish.usePipelineBrokerEvent property and change its value to
true.
If the watt.server.publish.usePipelineBrokerEvent property is not displayed, see the
webMethods Integration Server Administrator’s Guide for instructions on displaying
extended settings.
4 Look for the watt.server.publish.validateOnIS property and change its value to never.
5 If Integration Server is retrieving documents from the Broker on behalf of a trigger,
look for the watt.server.trigger.keepAsBrokerEvent property and change its value to true.
6 Click Save Changes.
7 Restart Integration Server.
Note: If you set the watt.server.trigger.keepAsBrokerEvent property to true and the
watt.server.publish.validateOnIS property to always or perDoc, you will receive
validation errors.
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Creating Publishable Document Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Setting Publication Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Modifying Publishable Document Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Deleting Publishable Document Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Synchronizing Publishable Document Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Importing and Overwriting References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Testing Publishable Document Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Introduction
A publishable document type is a named schema‐like definition that describes the
structure and publication properties of a particular kind of document. Essentially, a
publishable document type is an IS document type with specified publication properties
such as storage type and time‐to‐live.
In an integration solution that uses the publish‐and‐subscribe model, services publish
instances of publishable document types, and triggers subscribe to publishable document
types. A trigger specifies a service that the Integration Server invokes to process the
document. For example, you might create a publishable document type named EmpRec
that describes the layout of an employee record. You might create a trigger that specifies
that the Integration Server should invoke the addEmployeeRecord service when instances
of the EmpRec document type are received. When a service or adapter notification publishes a document of
type EmpRec, that document would be queued for the subscribers of document type
EmpRec. The Integration Server would pass the document to and invoke the
addEmployeeRecord service.
In a publication environment that includes a Broker, each publishable document type is
associated with a Broker document type. Developer provides tools that you can use to
ensure that these two document types remain synchronized.
When you build an integration solution that uses publication and subscription, you need
to create the publishable document types before you create triggers, services that process
documents, and services that publish documents.
Tip! You can distribute the publishable document types that you create to other
developers through package replication. For more information, see “Step 4: Make the
Publishable Document Types Available” on page 42 and “Managing Packages” in
the webMethods Integration Server Administrator’s Guide.
You can create publishable document types by doing the following:
Making an existing IS document type publishable. For information about creating an
IS document type, see the webMethods Developer User’s Guide.
Creating a new document type based on an existing Broker document type.
The following sections provide more information about creating publishable document
types.
Important! If you want to generate an associated Broker document type at the same
time you make the IS document type publishable, make sure that a Broker is
configured and the Integration Server on which you are working is connected to it.
For more information about configuring a connection between the Integration Server
and the Broker, see “Configuring the Server” in the webMethods Integration Server
Administrator’s Guide.
Note: You can only make an IS document type publishable if you own the lock on the
IS document type and you have write permission to the IS document type. For
information about locking elements and access permissions (ACLs), see the
webMethods Developer User’s Guide.
1 In the Navigation panel of Developer, open the IS document type that you want to
make publishable.
2 In the Properties panel, under Publication, set the Publishable property to True.
3 Next to the Storage type property, select the storage method to use for instances of this
publishable document type.
Select... To...
Volatile Specify that instances of this publishable document type are
volatile. Volatile documents are stored in memory.
Guaranteed Specify that instances of this publishable document type are
guaranteed. Guaranteed documents are stored on disk.
For more information about selecting a storage type, see “Selecting a Document
Storage Type” on page 65.
Important! For documents published to the Broker, the storage type assigned to a
document can be overridden by the storage type assigned to the client queue on
the Broker. For more information, see “Document Storage Versus Client Queue
Storage” on page 66.
4 Next to the Discard property, select one of the following to indicate how long instances
of this publishable document type remain in the trigger client queue before the
Broker discards them.
Select... To...
False Specify that the Broker should never discard instances of this
publishable document type.
True Specify that the Broker should discard instances of this
publishable document type after the specified time elapses.
In the fields next to Time to live specify the time‐to‐live value and
time units.
5 On the File menu, click Save to save your changes. Developer displays an icon beside
the document type name in the Navigation panel to indicate that it is a publishable
document type.
Notes:
In the Properties panel, the Broker doc type property displays the name of the
corresponding document type created on the Broker. Or, if you are not connected to a
Broker, this field displays “Publishable Locally Only”. (Later, when the Integration
Server is connected to a Broker, you can create a Broker document type for this
publishable document type by pushing the document type to the Broker during
synchronization.) You cannot edit the contents of this property. For more information
about the contents of this property, see “About the Associated Broker Document Type
Name” on page 62.
When you make a document type publishable, the Integration Server adds an
envelope field (_env) to the document type automatically. When a document is
published, the Integration Server and the Broker populate this field with metadata
about the document. For more information about this field, see “About the Envelope
Field” on page 64.
Once a publishable document type has an associated Broker document
type, you need to make sure that the document types remain in sync. That is, changes
in one document type must be made to the associated document type. You can update
one document type with changes in the other by synchronizing them. For information
about synchronizing document types, see “Synchronizing Publishable Document
Types” on page 74.
Important! The Broker prohibits the use of certain field names, for example, Java
keywords, @, *, and names containing white spaces or punctuation. If you make a
document type publishable and it contains a field name that is not valid on the
Broker, you cannot access and view the field via any Broker tool. However, the Broker
transports the contents of the field, which means that any other Integration Server
connected to that Broker has access to the field as it was displayed and implemented
on the original Integration Server. Use field names that are acceptable to the Broker.
See the webMethods Broker Administrator’s Guide for information on naming
conventions for Broker elements.
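To make the naming restriction concrete, the following is a minimal, illustrative sketch (not part of the product) of the kind of check the Broker applies to field names. The class name, the abbreviated keyword list, and the exact character rule are assumptions for illustration; consult the webMethods Broker Administrator's Guide for the authoritative rules.

```java
import java.util.Set;

class BrokerFieldNameCheck {
    // Abbreviated for illustration; a real check would cover every Java keyword.
    private static final Set<String> JAVA_KEYWORDS = Set.of(
        "abstract", "class", "int", "new", "public", "return", "static", "void");

    static boolean isValidBrokerFieldName(String name) {
        if (name == null || name.isEmpty()) return false;
        if (JAVA_KEYWORDS.contains(name)) return false;
        // Allow only letters, digits, and underscores; this rejects "@", "*",
        // white space, and other punctuation in a single pass.
        return name.matches("[A-Za-z_][A-Za-z0-9_]*");
    }
}
```

A name such as `orderTotal` passes this check, while `class`, `order total`, and `@id` are rejected.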
Important! If you choose to overwrite existing elements with new elements, keep in
mind that dependents of the overwritten elements will be affected. For example,
suppose the address document type defined the input signature of a flow service
deliverOrder. Overwriting the address document type might break the deliverOrder flow
service and any other services that invoked deliverOrder.
See the webMethods Integration Server Administrator’s Guide for information about
configuring the Broker. See the webMethods Developer User’s Guide for information about
locking elements and access permissions (ACLs).
1 On the File menu, click New.
2 Select Document Type and click Next.
3 In the New Document Type dialog box, do the following:
a In the list next to Folder, select the folder in which you want to save the document
type.
b In the Name field, type a name for the IS document type using a combination of
letters, numbers, and/or the underscore character. For information about naming
restrictions, see the webMethods Developer User’s Guide.
c Click Next.
4 Select Broker Document Type, and click Next. Developer opens the New Document Type
dialog box.
5 In the New Document Type dialog box, do the following:
a In the Broker Namespace field, select the Broker document type from which you
want to create an IS document type. The Broker Namespace field lists all of the
Broker document types on the Broker territory to which the Integration Server is
connected.
b If you want to replace existing elements in the Navigation panel with identically
named elements referenced by the Broker document type, select the Overwrite
existing elements when importing referenced elements check box.
Important! Overwriting the existing elements completely replaces the existing
element with the content of the referenced element. Any elements on the
Integration Server that depend on the replaced element, such as flow services,
IS document types, and specifications, might be affected. For more
information about overwriting existing elements, see “Importing and
Overwriting References” on page 84.
6 Click Finish. Developer automatically refreshes the Navigation panel and displays an
icon beside the document type name to indicate that it is a publishable document type.
Notes:
You can associate only one publishable document type with a given Broker document
type. If you try to create a publishable document type from a Broker document type
that is already associated with a publishable document type on your Integration
Server, Developer displays an error message.
In the Properties panel, the Broker doc type property displays the name of the Broker
document type used to create the publishable document type. Or, if you are not
connected to a Broker, this field displays “Publishable Locally Only”. You cannot edit
the contents of this field. For more information about the contents of this field, see
“About the Associated Broker Document Type Name” on page 62.
To create a Broker document type for a publishable document type that is publishable
locally only, push the publishable document type to the Broker during
synchronization. For more information about synchronizing, see “Synchronizing
Publishable Document Types” on page 74.
The publishable document type you create from a Broker document type has the
same publication properties as the source Broker document type.
Once a publishable document type has an associated Broker document type, you
need to make sure that the document types remain in sync. That is, changes in one
document type must be made to the associated document type. You can update one
document type with changes in the other by synchronizing them. For information
about synchronizing document types, see “Synchronizing Publishable Document
Types” on page 74.
Note: If you want instances of this publishable
document type to be published to the Broker, you
need to create a Broker document type for this
publishable document type. When the Integration
Server is connected to a Broker, you can create the
Broker document type by pushing the
publishable document type to the Broker during
synchronization. For more information about
synchronizing, see “Synchronizing Publishable
Document Types” on page 74.
Note: If an IS document type contains a field named _env, you need to delete that field
before you can make the IS document type publishable.
Each adapter notification has an associated publishable document type. When you
create an adapter notification in Developer, the Integration Server automatically
generates a corresponding publishable document type. Developer assigns the
publishable document type the same name as the adapter notification, but appends
PublishDocument to the name. You can use the adapter notification publishable
document type in triggers and flow services just as you would any other publishable
document type.
The adapter notification publishable document type is directly tied to its associated
adapter notification. In fact, you can only modify the publishable document type by
modifying the adapter notification. The Integration Server automatically propagates the
changes from the adapter notification to the publishable document type. You cannot edit
the adapter notification publishable document type directly.
When working in the Navigation panel, Developer treats an adapter notification and its
publishable document type as a single unit. If you perform an action on the adapter
notification, Developer performs the same action on the publishable document type. For
example, if you rename the adapter notification, Developer automatically renames the
publishable document type. If you move, cut, copy, or paste the adapter notification,
Developer moves, cuts, copies, or pastes the publishable document type.
For information about how to create and modify adapter notifications, see the
appropriate adapter user’s guide.
Note: Changing a Publication property causes the publishable document type to be out
of sync with the associated Broker document type. For information about
synchronizing document types, see “Synchronizing Publishable Document Types” on
page 74.
document, but only if the document was published more than once. Specify volatile
storage for documents that have a short life or are not critical.
Guaranteed storage specifies that instances of the publishable document type are stored
on disk. Resources return acknowledgements after storing or processing guaranteed
documents. Because guaranteed documents are saved to disk and acknowledged,
guaranteed documents move through the webMethods system more slowly than
volatile documents. However, if a guaranteed document is located on a resource that
shuts down, the resource recovers the guaranteed document upon restart.
webMethods components provide guaranteed document delivery and guaranteed
processing (either at‐least‐once processing or exactly‐once processing) for guaranteed
documents. Guaranteed processing ensures that once a trigger receives the document, it
is processed. Use guaranteed storage for documents that you cannot afford to lose.
Note: Some Broker document types have a storage type of Persistent. The
Persistent storage type automatically maps to the guaranteed storage type in the
Integration Server.
1 In the Navigation panel, open the publishable document type for which you want to
set the storage type.
2 In the Properties panel, next to the Storage type property, select one of the following:
Select... To...
Guaranteed Specify that instances of this publishable document type should
be stored on disk.
Volatile Specify that instances of this publishable document type should
be stored in memory.
3 On the File menu, click Save to save your changes.
Important! For documents published to the Broker, the storage type assigned to a
document can be overridden by the storage type assigned to the client queue on the
Broker. For more information, see “Document Storage Versus Client Queue Storage”
on page 66.
a volatile client queue subscribes, the Broker changes the storage type of the document
from guaranteed to volatile before placing it in the volatile client queue. The Broker does
not change the storage type of a volatile document before placing it in a guaranteed client
queue.
The following table indicates how the client queue storage type affects the document
storage type.
If document storage type is...   And the client queue storage type is...   The Broker saves the document as...
Volatile                         Volatile                                  Volatile
Volatile                         Guaranteed                                Volatile
Guaranteed                       Volatile                                  Volatile
Guaranteed                       Guaranteed                                Guaranteed
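The rule behind the table can be sketched in a few lines of plain Java. This is an illustrative model only (the class and enum names are invented, not product API): a guaranteed document placed in a volatile client queue is downgraded to volatile, and in every other case the document keeps its own storage type.

```java
class StorageTypeRule {
    enum Type { VOLATILE, GUARANTEED }

    // Illustrative only: reproduces the table's rule that the Broker
    // downgrades a guaranteed document before placing it in a volatile
    // client queue, and never upgrades a volatile document.
    static Type effectiveStorage(Type documentStorage, Type clientQueueStorage) {
        if (documentStorage == Type.GUARANTEED && clientQueueStorage == Type.VOLATILE) {
            return Type.VOLATILE; // downgraded before enqueueing
        }
        return documentStorage;   // otherwise unchanged
    }
}
```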
Note: On the Broker, each client queue belongs to a client group. The client queue
storage type property assigned to the client group determines the storage type for all
of the client queues in the client group. You can set the client queue storage type only
when you create the client group. By default, the Broker assigns a client queue storage
type of guaranteed for the client group created for Integration Servers. For more
information about client groups, see the webMethods Broker Administrator’s Guide.
1 In the Navigation panel, open the publishable document type for which you want to
set a time to live.
2 In the Properties panel, next to the Discard property, select one of the following:
Select... To...
False Specify that the Broker should never discard instances of this
publishable document type.
True Specify that the Broker should discard instances of this
publishable document type after the specified time elapses.
In the Time to live property, specify the time‐to‐live value and
units in which the time should be measured.
3 On the File menu, click Save to save your changes.
Note: Changing a publication property causes the publishable document type to be
out of sync with the associated Broker document type. For information about
synchronizing document types, see “Synchronizing Publishable Document Types” on
page 74.
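The Discard and Time to live properties combine into a simple rule, sketched below as an illustrative model (the method and parameter names are invented for this example, not product API): with Discard set to False the Broker keeps a queued document indefinitely; with Discard set to True it drops the document once the time-to-live has elapsed.

```java
class TimeToLiveRule {
    // Sketch of the discard decision for a document sitting in a trigger
    // client queue on the Broker. Times are in milliseconds.
    static boolean shouldDiscard(boolean discard, long publishedAtMillis,
                                 long ttlMillis, long nowMillis) {
        if (!discard) {
            return false; // Discard = False: never discard
        }
        // Discard = True: discard once the time-to-live has elapsed
        return nowMillis - publishedAtMillis >= ttlMillis;
    }
}
```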
Integration Server provides two settings that you can use to configure validation for
published documents.
A global setting named watt.server.publish.validateOnIS that indicates whether
Integration Server always performs validation, never performs validation, or
performs validation on a per document type basis. You can set this property using
Integration Server Administrator. For more information about setting this property,
see the webMethods Integration Server Administrator’s Guide.
A publication property for publishable document types that indicates whether
instances of a publishable document type should be validated. Integration Server
honors the value of this property (named Validate when published) only if the
watt.server.publish.validateOnIS property is set to perDoc (the default).
Note: When deciding whether to disable document validation, be sure to weigh the
advantages of a possible increase in performance against the risks of publishing,
routing, and processing invalid documents.
The following procedure explains how to enable or disable validation for individual
publishable document types.
1 In the Navigation panel in Developer, open the publishable document type for which
you want to specify validation.
2 In the Properties panel, under Publication, set the Validate when published property to
one of the following:
Select... To...
True Perform validation for published instances of this publishable
document type.
This is the default.
False Disable validation for published instances of this publishable
document type.
3 On the File menu, click Save.
Notes:
Integration Server ignores the value of the Validate when published property if the
watt.server.publish.validateOnIS property is set to always or never.
webMethods Broker can also be configured to validate the contents of a published
document. When it receives the document from an Integration Server, Broker checks
the validation level of the Broker document type associated with the published
document. If the validation level is set to Full or Open, Broker validates the document
contents. If the validation level is set to None, Broker does not validate the document
contents. By default, Broker assigns Broker document types created from a
publishable document type on an Integration Server a validation level of None. For
more information about configuring document validation on the Broker, see the
webMethods Broker Administrator’s Guide.
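The interaction between the global setting and the per-document property can be summarized as a two-level rule, sketched here as an illustrative model (the enum and method names are invented, not product API): always and never win outright; only perDoc defers to the document type's Validate when published property.

```java
class PublishValidationRule {
    // Possible values of the watt.server.publish.validateOnIS setting.
    enum GlobalSetting { ALWAYS, NEVER, PER_DOC }

    // Decide whether Integration Server validates a published instance.
    static boolean shouldValidate(GlobalSetting global, boolean validateWhenPublished) {
        switch (global) {
            case ALWAYS: return true;                  // validate everything
            case NEVER:  return false;                 // validate nothing
            default:     return validateWhenPublished; // PER_DOC: defer to the property
        }
    }
}
```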
Developer displays this message only if a Broker is configured for the Integration Server.
After you modify a publishable document type, you need to update the associated Broker
document type with the changes. For information about how to synchronize document
types, see “Synchronizing Publishable Document Types” on page 74.
Use only Developer to edit a publishable document type. Do not use Enterprise
Integrator to edit a Broker document type that corresponds to a publishable
document type. Editing a document type with Enterprise Integrator can lead to
synchronization problems. Specifically, changes that you make to certain document
types with Enterprise Integrator cannot be synchronized with the publishable
document types on Integration Server.
Changes you make to the contents of a publishable document type might require you
to modify the filter for the document type in a trigger condition. For example, if you
add, rename, or move fields you need to update any filter that referred to the
modified fields. You might also need to modify the service specified in the trigger
condition. For information about filters, see “Creating a Filter for a Document” on
page 116.
Important! You must manually update any services that invoke the pub.publish services
and specify this publishable document type in the documentTypeName or the
receivedDocumentTypeName parameter.
type unpublishable, you need to update triggers or publishing services that used the
publishable document type.
Note: If a publishing service specifies the publishable document type and you make
the document type unpublishable, the publishing service will not execute
successfully. The next time the service executes, the Integration Server throws a
service exception stating that the specified document type is not publishable. For
more information about publishing services, see “The Publishing Services” on
page 90.
1 In the Navigation panel, open the publishable document type that you want to make
unpublishable.
2 In the Properties panel, next to Publishable, select False.
3 On the File menu, click Save to save your changes. If the document type is associated
with a Broker document type, Developer displays the Delete Confirmation dialog
box. This dialog box prompts you to specify whether the associated Broker document
type should be deleted or retained.
4 If you would like to delete the associated Broker document type from the Broker, click
Yes. Otherwise, click No.
Note: You can only delete the associated Broker document type if no clients have
subscriptions for it.
Developer displays an icon beside the document type name in the Navigation panel to
indicate that it is an IS document type and cannot be published. In the Properties panel,
Developer changes the contents of the Broker doc type property to “Not Publishable”. For
more information about this field, see “About the Associated Broker Document Type
Name” on page 62.
Before you delete a publishable document type, keep the following in mind:
You can only delete the associated Broker document type if there are no subscriptions
for it.
If you intend to delete the associated Broker document type as well, make sure that
the Broker is configured and the Integration Server is connected to it.
You can only delete a publishable document type if you own the lock and have Write
permission to it. For more information about access permissions (ACLs), see the
webMethods Developer User’s Guide.
1 In the Navigation panel of Developer, select the document type you want to delete.
2 On the Edit menu, click Delete.
If you enabled the deleting safeguards in the Options dialog box, and the publishable
document type is used by other elements, Developer displays a dialog box listing all
dependent elements, including triggers and flow services.
For information about enabling safeguards to check for dependents when deleting an
element, see the webMethods Developer User’s Guide.
3 Do one of the following:
If you want to delete the publishable document type on the Integration Server, but
leave the corresponding document type on the Broker, leave the Delete associated
Broker document type on the Broker check box cleared.
If you want to delete the publishable document type on the Integration Server and
the corresponding document type on the Broker, select the Delete associated Broker
document type on the Broker check box.
4 Do one of the following:
Click... To...
Continue Delete the element from the Navigation panel. References in
dependent elements remain.
Cancel Cancel the operation and preserve the element in the Navigation
panel.
OK Delete the element from the Navigation panel. (This button only
appears if the publishable document type did not have any
dependents.)
Important! If you delete a Broker document type that is required by another
Integration Server, you can synchronize (push) the document type to the Broker
from that Integration Server. If you delete a Broker document type that is required
by a non‐IS Broker client, you can recover the document from the Broker .adl
backup file. See the webMethods Broker Administrator’s Guide for information about
importing .adl files.
Synchronization Status
Each publishable document type on your Integration Server has a synchronization status
to indicate whether it is in sync with the Broker document type, out of sync with the
Broker document type, or not associated with a Broker document type. The following
table identifies each possible synchronization status for a document type.
Status Description
Important! Switching your Integration Server connection from one Broker to a Broker
in a different territory is neither recommended nor supported. In such a switch, the
Integration Server displays the synchronization status as it was before the switch.
This synchronization status may be inaccurate because it does not apply to elements
that exist on the second Broker.
Synchronization Actions
When you synchronize document types, you decide for each publishable document type
whether to push or pull the document type to the Broker. When you push the publishable
document type to the Broker, you update the Broker document type with the publishable
document type on your Integration Server. When you pull the document type from the
Broker, you update the publishable document type on your Integration Server with the
Broker document type.
The following table describes the actions you can take when synchronizing a publishable
document type.
Action Description
The Integration Server does not automatically synchronize document types because you
might need to make decisions about which version of the document type is correct. For
example, suppose that Integration Server1 and Integration Server2 contain identical
publishable document types named Customer:getCustomer. These publishable document
types have an associated Broker document type named wm::is::Customer::getCustomer. If a
developer updates Customer:getCustomer on Integration Server2 and pushes the change to
the Broker, the Broker document type wm::is::Customer::getCustomer is updated. However,
the Broker document type is now out of sync with Customer:getCustomer on Integration
Server1. The developer using Integration Server1 might not want the changes made to the
Customer:getCustomer document type by the developer using Integration Server2. The
developer using Integration Server1 can decide whether to update the
Customer:getCustomer document type when synchronizing document types with the Broker.
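The push/pull choice in the Customer:getCustomer scenario comes down to which side's definition overwrites the other. The following is a deliberately tiny illustrative model (class and field names invented; strings stand in for document type definitions), not product code:

```java
class DocTypeSync {
    String serverVersion; // definition on this Integration Server
    String brokerVersion; // definition on the Broker

    DocTypeSync(String serverVersion, String brokerVersion) {
        this.serverVersion = serverVersion;
        this.brokerVersion = brokerVersion;
    }

    // Push to Broker: the server's version overwrites the Broker's.
    void push() { brokerVersion = serverVersion; }

    // Pull from Broker: the Broker's version overwrites the server's.
    void pull() { serverVersion = brokerVersion; }

    boolean inSync() { return serverVersion.equals(brokerVersion); }
}
```

In the scenario above, Integration Server2 pushing its change puts the Broker in sync with Server2 but leaves Server1 out of sync until the developer on Server1 chooses to pull (or to push Server1's own version back).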
Note: For a subscribing Integration Server to process an incoming document
successfully, the publishable document type on a subscribing Integration Server
needs to be in sync with the corresponding document types on the publishing
Integration Server and the Broker. If the document types are out of sync, the
subscribing Integration Server may not be able to process the incoming documents. In
this case, the subscribing Integration Server logs an error message stating that the
“Broker Coder cannot decode document; the document does not conform to the
document type, documentTypeName.”
Note: If publishable document types for this
Broker document type exist on other Integration
Servers, this action changes the synchronization
status of those publishable document types to
Updated on Broker.
Note: If publishable document types for this
Broker document type exist on other Integration
Servers, this action does not affect the
synchronization status of those publishable
document types.
Tip! If a publishable document type is in sync with
the Broker document type, set the action to Skip.
Note: For a publishable document type created for an adapter notification, you can
select Skip or Push to Broker only. A publishable document type for an adapter
notification can only be modified on the Integration Server on which it was created.
1 In the Navigation panel of Developer, select the publishable document type that you
want to synchronize.
2 Select File > Sync Document Types > Selected. Developer displays the Synchronize dialog
box.
The Synchronize dialog box displays the synchronization status of the document
type, as described in “Synchronization Status” on page 74:
3 Under Action, do one of the following:
Select... To...
Note: The result of a synchronization action depends on the synchronization status
of the document type. For more information about how the result of a
synchronization action depends on the synchronization status, see “Combining
Synchronization Action with Synchronization Status” on page 77.
Tip! When you switch the Broker configured for the Integration Server to a Broker
in a different territory, the Integration Server displays the synchronization status
as it was before the switch. This synchronization status may be inaccurate because
it does not apply to elements that exist on the second Broker. To view and
synchronize all publishable document types, use the Sync All Document Types
dialog box.
The Sync All Document Types dialog box is displayed below.
For each publishable document type that is out of sync on the Integration Server, the Sync
All Out‐of‐Sync Document Types dialog box and the Sync All Document Types dialog
box display the following information.
Field Description
Document Type The name and icon of the publishable document type. The icon
indicates the lock status of the publishable document type. If a red
check mark appears next to the publishable document type icon,
another user has locked the document type. When a publishable
document type is locked by another user, you can only push to the
Broker. If you want to pull from the Broker, you need to own the lock
on the publishable document type (green check mark) or the
publishable document type needs to be unlocked. See the webMethods
Developer User’s Guide for information about locking objects.
Status The status of the document types as described in “Synchronization
Status” on page 74.
Field Description
Action A push to the Broker, pull from the Broker, or a skip as described in
“Synchronization Actions” on page 75.
Writable Indicates whether you have write permission to the publishable
document type. You can only pull from the Broker if you have write
permission. If you do not have write permission, you can only push to
Broker. See the webMethods Developer User’s Guide for information
about ACL permissions.
Keep the following points in mind when synchronizing multiple document types using
the Sync All Out‐of‐Sync Document Types dialog box or the Sync All Document Types
dialog box.
If you want to Pull from Broker, you must have write access to the publishable
document type. The publishable document type must be either unlocked, or you
must have locked it yourself. For more information about locking elements and access
permissions (ACLs), see the webMethods Developer User’s Guide.
When you pull document types from the Broker, Developer gives you the option of
overwriting elements with the same name that already exist on the Integration Server.
The Broker document type might reference elements such as an IS schema or other IS
document types. If the Integration Server you are importing to already contains any
elements with the referenced names, you need to know if there is any difference
between the existing elements and those being imported from the Broker. If there are
differences, you need to understand what they are and how importing them will
affect any integration solution that uses them. For more information about
overwriting existing elements, see “Importing and Overwriting References” on
page 84.
For a publishable document type created for an adapter notification, you can only
select Push to Broker or Skip. A publishable document type for an adapter notification
can only be modified on the Integration Server on which it was created.
1 In Developer, do one of the following:
To view and synchronize only out‐of‐sync document types, select File > Sync
Document Types > All Out-of-Sync. Developer displays the Sync All Out‐of‐Sync
Document Types dialog box.
To view and synchronize all document types, regardless of sync status, select
File > Sync Document Types > All. Developer displays the Sync All Document Types
dialog box.
2 If you want to specify the same synchronization action for all of the publishable
document types, do one of the following:
Select... To...
3 If you want to specify a different synchronization action for each publishable
document type, use the Action column to select the synchronization action.
Select... To...
Note: The result of a synchronization action depends on the synchronization status
of the document type. For more information about how the result of a
synchronization action depends on the synchronization status, see “Combining
Synchronization Action with Synchronization Status” on page 77.
4 If you want to replace existing elements in the Navigation panel with identically
named elements referenced by the Broker document type, select the Overwrite existing
elements when importing referenced elements check box. For more information about
importing referenced elements during synchronization, see “Importing and
Overwriting References” on page 84.
5 Click Synchronize to perform the specified synchronization actions for all of the
listed publishable document types.
Testing a publishable document type provides a way for you to publish a document
without building a service that does the actual publishing. If you select a publication
action where you wait for a reply document, you can verify whether or not reply
documents are received.
Note: When you test a publishable document type, the Integration Server actually
publishes the document locally or to the Broker (whichever is specified).
If you want to test a join condition in a trigger, you might test each publishable document
type identified in the join condition. If you do this, make sure to use the same activation
number for each publishable document type that you test.
Note: If your publishable document type expects Object variables that do not have
constraints assigned or an Object defined as a byte[ ], you will not be able to enter
those values in the Input dialog box. To test these values, you must write a Java
service that generates input values for your service and a flow service that publishes
the document. Then, create a flow service that first invokes the Java service and then
the publishing flow service.
1 In the Navigation panel, open the publishable document type that you want to test.
2 Click the Test button to test the publishable document type. Developer displays the
Input for PublishableDocumentTypeName dialog box.
3 In the Input for PublishableDocumentTypeName dialog box, enter valid values for the
fields defined in the publishable document type or click the Load button to retrieve
the values from a file. For information about loading input values from a file, see the
webMethods Developer User’s Guide.
4 If you want to save the input values that you have entered, click Save. Input values
that you save can be recalled and reused in later tests. For information about saving
input values, see the webMethods Developer User’s Guide.
5 Click Next. When you enter values for constrained objects in the Input dialog box,
Integration Server automatically validates the values. If the value is not of the type
specified by the object constraint, Developer displays a message identifying the
variable and the expected type.
6 In the Run test for PublishableDocumentTypeName dialog box, select the type of
publishing for the document.
Select... To...
7 Click Next or Finish.
8 If you selected either Deliver to a specific Client or Deliver to a specific Client and wait for a
Reply, in the Run test for PublishableDocumentTypeName dialog box, select the Broker
client to which you want to deliver the document. Click Next or Finish.
Note: In this dialog box, Developer displays all the clients connected to your
Broker. The Integration Server assigns trigger clients names according to the
client prefix specified on the Settings > Broker screen of the Integration Server
Administrator.
9 If you selected a publication action in which you wait for a reply, you need to select
the document type that you expect as a reply. Developer displays all the publishable
document types on the Integration Server to which you are currently connected.
a In the Name field, type the fully qualified name of the publishable document type
that you expect as a reply or select it from the Folder list. If the service does not
expect a specific document type as a reply, leave this field blank.
b Under Set how long Developer waits for a Reply, select one of the following:
Select... To...
c Click Finish. Developer publishes an instance of the publishable document type.
Notes:
Developer displays the instance document and publishing information in the
Results panel.
If you selected a publication action in which you wait for a reply, and Developer
receives a reply document, Developer displays the reply document as the value of
the receiveDocumentTypeName field in the Results panel.
If Developer does not receive the reply document before the time specified next to
Discard after elapses, Developer displays an error message stating that the publish
and wait (or deliver and wait) has timed out. The Results panel displays null next
to the receiveDocumentTypeName field to indicate that the Integration Server did
not receive a reply document.
Service Description
pub.publish:deliver Delivers a document to a specified destination.
pub.publish:deliverAndWait Delivers a document to a specified destination and waits for
a response.
pub.publish:publish Publishes a document locally or to a configured Broker. Any
clients (triggers) with subscriptions to documents of this
type will receive the document.
pub.publish:publishAndWait Publishes a document locally or to a configured Broker and
waits for a response. Any clients (triggers) with
subscriptions for the published document will receive the
document.
pub.publish:reply Delivers a reply document in answer to a document received
by the client.
pub.publish:waitForReply Retrieves the reply document for a request published
asynchronously.
errorsTo A String that specifies the client ID to which the Integration Server
sends an error notification document (an instance of
pub.publish.notify:error) if errors occur during document processing by
subscribers.
If you do not specify a value for errorsTo, error notifications are sent to
the document publisher.
replyTo A String that specifies the client ID to which replies to the published
document should be sent. If you do not specify a replyTo destination,
responses are sent to the document publisher.
Important! When you create a service that publishes a document and
waits for a reply, do not set the value of the replyTo field in the
document envelope. By default, the Integration Server uses the
publisher ID as the replyTo value. If you change the replyTo value,
responses will not be delivered to the waiting service.
activation A String that specifies the activation ID for the published document. If a
document does not have an activation ID, the Integration Server
automatically assigns an activation ID when it publishes the document.
Specify an activation ID when you want a trigger to join together
documents published by different services. In this case, assign the same
activation ID to the documents in the services that publish the
documents. For more information about how the Integration Server
uses activation IDs to satisfy join conditions, see Chapter 9,
“Understanding Join Conditions”.
For more information about the fields in the document envelope, see the description of
the pub.publish:envelope document type in the webMethods Integration Server Built‐In Services
Reference.
You can override the default behavior by assigning an activation ID to a document
manually. For example, in the pipeline, you can map a variable to the activation field of
the document. If you want to explicitly set a document’s activation ID, you must set it
before publishing the document. When publishing the document, the Integration Server
will not overwrite an explicitly set value for the activation field.
You need to set the activation ID for a document only when you want a trigger to join
together documents published by different services. If a trigger will join together
documents published within the same execution of a service, you do not need to set the
activation ID. The Integration Server automatically assigns all the documents the same
activation ID.
Tip! If a service publishes a new document as a result of receiving a document, and
you want to correlate the new document with the received document, consider
assigning the activation ID of the received document to the new document.
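The activation ID behavior described above can be modeled in a short Python sketch. This is illustrative only, not the Integration Server API: `publish` and the envelope structure here are stand-ins that mirror the documented rule that an explicitly set activation ID is never overwritten.

```python
import itertools
from collections import defaultdict

_next_id = itertools.count(1)

def publish(document, activation=None):
    # Mirror the documented behavior: assign an activation ID only when
    # the publisher did not set one explicitly.
    if activation is None:
        activation = "auto-%d" % next(_next_id)
    return {"body": document, "env": {"activation": activation}}

# Two different services publish related documents with the same explicit
# activation ID so that a join trigger can correlate them.
order = publish({"orderID": 77}, activation="txn-77")
invoice = publish({"invoiceID": 901}, activation="txn-77")

joined = defaultdict(list)
for doc in (order, invoice):
    joined[doc["env"]["activation"]].append(doc)

assert len(joined["txn-77"]) == 2   # both documents satisfy the join
```

Documents published without an explicit activation ID each receive a fresh automatic ID, so they would not satisfy a join with documents from another service.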
Publishing a Document
When you publish a document using the pub.publish:publish service, the document is
broadcast. The service publishes the document for any interested subscribers. Any
subscribers to the document can receive and process the document.
For more information about assigning values to fields in the document envelope, see
“Setting Fields in the Document Envelope” on page 90.
4 Invoke pub.publish:publish to publish the document. This service takes the document you
created and publishes it.
The pub.publish:publish service expects to find a document (IData object) named
document in the pipeline. If you are building a flow service, you will need to use the
Pipeline tab to map the document you want to publish to document.
In addition to the document reference you map into document, you must provide the
following parameter to pub.publish:publish.
Name Description
documentTypeName A String specifying the fully qualified name of the
publishable document type that you want to publish. The
publishable document type must exist on the Integration
Server.
You may also provide the following optional parameters:
Name Description
local A String indicating whether you want to publish the
document locally. When you publish a document locally,
the Integration Server does not send the document to the
Broker. The document remains on the publishing
Integration Server. Only subscribers on the same
Integration Server can receive and process the
document.
Note: If a Broker is not configured for this Integration
Server, the Integration Server automatically publishes
the document locally. You do not need to set the local
parameter to true.
true Publish the document locally.
false Publish the document to the configured
Broker. This is the default.
delayUntilServiceSuccess A String specifying that the Integration Server will delay
publishing the document until the top‐level service
executes successfully. If the top‐level service fails, the
Integration Server will not publish the document.
Set to... To...
true Delay publishing until after the top‐level
service executes successfully.
Note: Integration Server does not return a
status when this parameter is set to true.
false Publish the document when the publish
service executes.
Note: The watt.server.control.maxPublishOnSuccess parameter controls the
maximum number of documents that the Integration Server can publish on success at
one time. You can use this parameter to prevent the server from running out of
memory when a service publishes many large documents on success. By default, this
parameter is set to 50,000 documents. Decrease the number of documents that can be
published on success to help prevent an out of memory error. For more information
about this parameter, see the webMethods Integration Server Administrator’s Guide.
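The routing effect of the local parameter can be sketched in Python. The `Server` class, `broker_subscriptions` map, and `publish` function below are an illustrative model only, not the actual Integration Server or Broker implementation:

```python
class Server:
    """Minimal model of an Integration Server with local trigger queues."""
    def __init__(self):
        self.local_queues = {}   # doc_type -> list of received documents

broker_subscriptions = {}        # doc_type -> list of subscribing Servers

def publish(publisher, doc_type, document, local=False, broker_configured=True):
    # With no Broker configured, the document is automatically published
    # locally, as the note above describes.
    if local or not broker_configured:
        targets = [publisher] if doc_type in publisher.local_queues else []
    else:
        targets = broker_subscriptions.get(doc_type, [])
    for server in targets:
        server.local_queues.setdefault(doc_type, []).append(document)
    return len(targets)

a, b = Server(), Server()
a.local_queues["Orders:newOrder"] = []
b.local_queues["Orders:newOrder"] = []
broker_subscriptions["Orders:newOrder"] = [a, b]

publish(a, "Orders:newOrder", {"id": 1}, local=True)   # stays on server a
publish(a, "Orders:newOrder", {"id": 2})               # goes via the Broker
assert len(a.local_queues["Orders:newOrder"]) == 2
assert len(b.local_queues["Orders:newOrder"]) == 1
```

A local publish reaches only subscribers on the publishing server; a Broker publish reaches every subscribing server.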
You can construct the service to publish all the requests first and then collect the
replies. This approach can be more efficient than publishing a request, waiting for a
reply, and then publishing the next request.
You can use the pub.publish:publishAndWait service to build a service that performs a
synchronous or an asynchronous request/reply. If you need a specific client to respond to
the request for information, use the pub.publish:deliverAndWait service instead. For more
information about using the pub.publish:deliverAndWait service, see “Delivering a Document
and Waiting for a Reply” on page 100.
For information about how the Integration Server and Broker process a request and reply,
see “Publishing Documents and Waiting for a Reply” on page 23.
Important! When you create a service that publishes a document and waits for a
reply, do not set the value of the replyTo field in the document envelope. By
default, the Integration Server uses the publisher ID as the replyTo value. If you
change the replyTo value, responses will not be delivered to the waiting service.
Name Description
documentTypeName A String specifying the fully qualified name of the
publishable document type that you want to publish an
instance of. The publishable document type must exist on
the Integration Server.
You may also provide the following optional parameters:
Name Description
receiveDocumentTypeName A String specifying the fully qualified name of the
publishable document type expected as a reply. This
publishable document type must exist on your
Integration Server.
If you do not specify a receiveDocumentTypeName
value, the service uses the first reply that it receives
for this request.
Important! If you specify a document type, you need to
work closely with the developer of the subscribing
trigger and the reply service to make sure that the
reply service sends a reply document of the correct
type.
local A String indicating whether you want to publish the
document locally. When you publish a document
locally, the document remains on the publishing
Integration Server. The Integration Server does not
publish the document to the Broker. Only subscribers
on the same Integration Server can receive and
process the document.
Set to... To...
true Publish the document locally.
false Publish the document to the configured
Broker. This is the default.
waitTime A String specifying how long the publishing service
waits (in milliseconds) for a reply document. If you do
not specify a waitTime value, the service waits until it
receives a reply. The Integration Server begins
tracking the waitTime as soon as it publishes the
document.
async A String indicating whether this is a synchronous or
asynchronous request.
Set to... To...
true Indicate that this is an asynchronous
request. The Integration Server publishes
the document and then executes the next
step in the service.
false Indicate that this is a synchronous request.
The Integration Server publishes the
document and then waits for the reply. The
Integration Server executes the next step in
the service only after it receives the reply
document or the wait time elapses. This is
the default.
Note: The tag value produced by the pub.publish:publishAndWait service is the same
value that the Integration Server places in the tag field of the request document’s
envelope.
The pub.publish:waitForReply service expects to find a String named tag in the pipeline.
(The Integration Server retrieves the correct reply by matching the tag value provided
to the waitForReply service to the tag value in the reply document envelope.) If you are
building a flow service, you will need to use the Pipeline tab to map the field
containing the tag value of the asynchronously published request to tag.
7 Process the reply document. The pub.publish:publishAndWait (or pub.publish:waitForReply)
service produces an output parameter named receivedDocument that contains the
reply document (an IData object) delivered by a subscriber.
If the waitTime interval elapses before the Integration Server receives a reply, the
receivedDocument parameter contains a null document.
Note: A single publish and wait request might receive many response documents.
The Integration Server that published the request uses only the first reply
document it receives from the Broker. (If provided, the document must be of the
type specified in the receiveDocumentTypeName field of the pub.publish:publishAndWait
service.) The Integration Server discards all other replies. “First” is arbitrary:
there is no guarantee of the order in which the Broker processes incoming
replies. If you need a reply document from a specific client, use the
pub.publish:deliverAndWait service instead.
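The asynchronous request/reply flow, with the tag value correlating a request to its reply, can be sketched as follows. The function names and the `_pending` map are illustrative stand-ins for the pub.publish:publishAndWait (async=true) and pub.publish:waitForReply services:

```python
import itertools

_tags = itertools.count(100)
_pending = {}   # tag -> reply document, filled in by a subscriber's reply

def publish_and_wait(request):
    """Model of an asynchronous publishAndWait: place the tag in the
    request envelope and return it immediately instead of blocking."""
    tag = str(next(_tags))
    request.setdefault("env", {})["tag"] = tag
    _pending[tag] = None
    return tag

def reply(request_env, reply_doc):
    # The reply carries the request's tag so the waiter can match it.
    _pending[request_env["tag"]] = reply_doc

def wait_for_reply(tag):
    # None models the case where no reply arrived before the wait time.
    return _pending.get(tag)

req = {"body": {"sku": "A1"}}
tag = publish_and_wait(req)
reply(req["env"], {"price": "9.99"})
assert wait_for_reply(tag) == {"price": "9.99"}
```

The tag returned by the publishing step must be mapped to waitForReply, just as the Pipeline tab mapping described above requires.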
Delivering a Document
Delivering a document is much like publishing a document, except that you specify the
client that you want to receive the document. The Broker routes the document to the
specified subscriber. Because only one client receives the document, delivering a
document essentially bypasses all the subscriptions to the document on the Broker.
Name Description
documentTypeName A String specifying the fully qualified name of the
publishable document type that you want to publish. The
publishable document type must exist on the Integration
Server.
destID A String specifying the client ID to which you want to
deliver the document. You can deliver the document to an
individual trigger client or to the default client (an
Integration Server). You can view a list of the clients on
the Broker by:
Using the Broker Administrator.
Testing the publishable document type and selecting
one of the deliver options.
Note: If you specify an incorrect client ID, the Integration
Server delivers the document to the Broker, but the
Broker never delivers the document to the intended
recipient and no error is produced.
You may also provide the following optional parameters:
Name Description
delayUntilServiceSuccess A String specifying that the Integration Server will
delay publishing the document until the top‐level
service executes successfully. If the top‐level service
fails, the Integration Server will not publish the
document.
Set to... To...
true Delay publishing until after the top‐
level service executes successfully.
false Publish the document when the
pub.publish:deliver service executes. This is
the default.
Note: The watt.server.control.maxPublishOnSuccess parameter controls the
maximum number of documents that the Integration Server can publish on success at
one time. You can use this parameter to prevent the server from running out of
memory when a service publishes many large documents on success. By default, this
parameter is set to 50,000 documents. Decrease the number of documents that can be
published on success to help prevent an out of memory error. For more information
about this parameter, see the webMethods Integration Server Administrator’s Guide.
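Delivery to a single client, including the silent drop for an incorrect client ID noted above, can be sketched in Python. The queue names and `deliver` function are illustrative only:

```python
# Client queues on the Broker; the names here are hypothetical examples.
queues = {"orderTrigger_client": [], "auditTrigger_client": []}

def deliver(dest_id, document):
    """Model of pub.publish:deliver: route to exactly one client queue,
    bypassing all subscriptions to the document type."""
    if dest_id in queues:
        queues[dest_id].append(document)
    # An unknown destID is dropped without an error, mirroring the
    # note above about incorrect client IDs.

deliver("orderTrigger_client", {"orderID": 1})
deliver("noSuchClient", {"orderID": 2})        # silently lost
assert len(queues["orderTrigger_client"]) == 1
assert len(queues["auditTrigger_client"]) == 0
```

Note that other subscribers to the document type receive nothing: delivery targets only the queue named by destID.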
A service can implement a synchronous or asynchronous request/reply.
In a synchronous request/reply, the publishing service stops executing while it waits
for a response to a published request. The publishing service resumes execution when
a reply document is received or the specified waiting time elapses.
In an asynchronous request/reply, the publishing service continues to execute after
publishing the request document. The publishing service must invoke another service
to wait for and retrieve the reply document.
If you plan to build a service that publishes multiple requests and retrieves multiple
replies, consider making the requests asynchronous. You can construct the service to
publish all the requests first and then collect the replies. This approach can be more
efficient than publishing a request, waiting for a reply, and then publishing the next
request.
You can use the pub.publish:deliverAndWait service to build a service that performs a
synchronous or an asynchronous request/reply. This service delivers the request
document to a specific client. If multiple clients can supply the requested information,
consider using the pub.publish:publishAndWait service instead. For more information about
using the pub.publish:publishAndWait service, see “Publishing a Document and Waiting for a
Reply” on page 94.
Important! When you create a service that delivers a document and waits for a reply, do
not set the value of the replyTo field in the document envelope. By default, the
Integration Server uses the publisher ID as the replyTo value. If you set the replyTo
value, responses may not be delivered to the waiting service.
Name Description
documentTypeName A String specifying the fully qualified name of the
publishable document type that you want to publish. The
publishable document type must exist on the Integration
Server.
destID A String specifying the client ID to which you want to
deliver the document.
Note: If you specify an incorrect client ID, the Integration
Server delivers the document to the Broker, but the Broker
never delivers the document to the intended recipient and
no error is produced.
You may also provide the following optional parameters:
Name Description
receiveDocumentTypeName A String specifying the fully qualified name of the
publishable document type expected as a reply. This
publishable document type must exist on your
Integration Server.
If you do not specify a receiveDocumentTypeName
value, the service uses the first reply document it
receives from the client specified in destID.
Important! If you specify a document type, you need to
work closely with the developer of the subscribing
trigger and the reply service to make sure that the
reply service sends a reply document of the correct
type.
waitTime A String specifying how long the publishing service
waits (in milliseconds) for a reply document. If you do
not specify a waitTime value, the service waits until it
receives a reply. The Integration Server begins tracking
the waitTime as soon as it publishes the document.
async A String indicating whether this is a synchronous or
asynchronous request.
Set to... To...
true Indicate that this is an asynchronous
request. The Integration Server publishes
the document and then executes the next
step in the service.
false Indicate that this is a synchronous request.
The Integration Server publishes the
document and then waits for the reply.
The Integration Server executes the next
step in the service only after it receives the
reply document or the wait time elapses.
This is the default.
Each asynchronously published request produces a tag field. If the tag field is not
linked to another field, the next asynchronously published request (that is, the next
execution of the pub.publish:publishAndWait service or the pub.publish:deliverAndWait
service) will overwrite the first tag value.
Note: The tag value produced by the pub.publish:deliverAndWait service is the same
value that the Integration Server places in the tag field of the request document’s
envelope.
Note: All reply documents are treated as volatile documents. Volatile documents are
stored in memory. If the resource on which the reply document is stored shuts down
before processing the reply document, the reply document is lost. The resource will
not recover it upon restart.
Tip! If you want a reply service to send documents to the publisher of documentA and
the publisher of documentB, invoke the pub.publish:reply service once for each document.
That is, you need to code your service to contain one pub.publish:reply service that
responds to the publisher of documentA and a second pub.publish:reply service that
responds to the sender of documentB.
2 Create a document reference to the publishable document type that you want to use as the reply
document. You can accomplish this by:
Declaring a document reference in the input signature of the replying service
—OR—
Inserting a MAP step in the replying service and adding the document reference
to Pipeline Out. You must immediately link or assign a value to the document
reference. If you do not, Developer automatically clears the document reference
the next time it refreshes the Pipeline tab.
Note: If the publishing service requires that the reply document be an instance of a
specific publishable document type, make sure that the document reference
variable refers to this publishable document type.
Name Description
documentTypeName A String specifying the fully qualified name of the
publishable document type for the reply document. The
publishable document type must exist on the Integration
Server.
Important! Services that publish or deliver a request and wait for a reply can specify
a publishable document type to which reply documents must conform. If the
reply document is not of the type specified in receiveDocumentTypeName
parameter of the pub.publish:publishAndWait or pub.publish:deliverAndWait service, the
publishing service will not receive the reply. You need to work closely with the
developer of the publishing service to make sure that your reply document is an
instance of the correct publishable document type.
You may also provide the following optional parameters.
Name Description
receivedDocumentEnvelope A document (IData object) containing the envelope of
the received document. By default, the Integration
Server uses the information in the received document’s
envelope to determine where to send the reply
document.
If the service executes because two or more documents
satisfied an All (AND) join condition, the Integration
Server uses the envelope of the last document that
satisfied the join condition. If you want the Integration
Server to always use the envelope from the same
document type, link the envelope of that publishable
document type to receivedDocumentEnvelope. If you
want each document publisher to receive a reply
document, you must invoke the pub.publish:reply service
for each document in the join.
delayUntilServiceSuccess A String specifying that the Integration Server will
delay publishing the reply document until the top‐
level service executes successfully. If the top‐level
service fails, the Integration Server will not publish the
reply document.
Set to... To...
true Delay publishing until after the top‐level
service executes successfully.
false Publish the document when the
pub.publish:reply service executes. This is the
default.
6 Build a trigger. For this service to execute when the Integration Server receives
documents of a specified type, you need to create a trigger. The trigger needs to
contain a condition that associates the publishable document type used for the
request document with this reply service. For more information about creating a
trigger, see Chapter 7, “Working with Triggers”.
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Overview of Building a Trigger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Creating a Trigger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Setting Trigger Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Modifying a Trigger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Deleting Triggers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Testing Triggers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Introduction
Triggers establish subscriptions to publishable document types and specify how to
process instances of those publishable document types. When you build a trigger, you
create one or more conditions. A condition associates one or more publishable document
types with a single service. The publishable document type acts as the subscription piece
of the trigger. The service is the processing piece. When the trigger receives documents to
which it subscribes, the Integration Server processes the document by invoking the
service specified in the condition. Triggers can contain multiple conditions.
Note: With webMethods Developer, you can create Broker/local triggers and JMS
triggers. A Broker/local trigger is a trigger that subscribes to and processes documents
published/delivered locally or to the Broker. A JMS trigger is a trigger that receives
messages from a destination (queue or topic) on a JMS provider and then processes
those messages. This guide discusses development and use of Broker/local triggers
only. Where the terms “trigger” or “triggers” appear in this guide, they refer to
Broker/local triggers.
When you build a trigger, you use the upper half of the editor to create, delete, and order
conditions. You use the lower half of the editor to create a condition by selecting the
publishable document types to which you want the trigger to subscribe and the service
you want the Integration Server to execute when it receives instances of those documents.
Service Requirements
The service that processes a document received by a trigger is called a trigger service. A
condition specifies a single trigger service.
Before you can enable a trigger, the trigger service must already exist on the same
Integration Server. Additionally, the input signature for the trigger service needs to have
a document reference to the publishable document type. The name for this document
reference must be the fully qualified name of the publishable document type. The fully
qualified name of a publishable document type conforms to the following format:
folder.subfolder:PublishableDocumentTypeName
For example, suppose that you want a trigger to associate the Customers:customerInfo
publishable document type with the Customers:addToCustomerStore service. On the
Input/Output tab of the service, the input signature must contain a document reference
named Customers:customerInfo.
Trigger service input signature must contain a document reference to the publishable document type
If you intend to use the service in a join condition (a condition that associates multiple
publishable document types with a service), the service’s input signature must have a
document reference for each publishable document type. The names of these document
reference fields must be the fully qualified names of the publishable document type they
reference.
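The naming requirement above can be illustrated with a small Python model of trigger dispatch. The `dispatch` function and condition structure are illustrative only, not the Integration Server implementation; the key point is that the input pipeline is keyed by the fully qualified document type name:

```python
def fully_qualified(folder, subfolder, name):
    # The documented format: folder.subfolder:PublishableDocumentTypeName
    return "%s.%s:%s" % (folder, subfolder, name)

# A condition associates a publishable document type with a single service.
conditions = [
    {"name": "Condition1",
     "doc_type": "Customers:customerInfo",
     "service": "Customers:addToCustomerStore"},
]

def dispatch(doc_type, document):
    """Model: find the first matching condition and build the trigger
    service's input pipeline, keyed by the fully qualified name."""
    for cond in conditions:
        if cond["doc_type"] == doc_type:
            pipeline = {doc_type: document}   # input keyed by the FQ name
            return cond["service"], pipeline
    return None, None

svc, pipe = dispatch("Customers:customerInfo", {"custID": 7})
assert svc == "Customers:addToCustomerStore"
assert "Customers:customerInfo" in pipe
```

If the trigger service's input signature used any other field name, the document would not be bound to its parameter, which is why the document reference must be named for the publishable document type.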
Tip! You can insert a document reference into the input signature of the target service
by dragging the publishable document type from the Navigation panel to the input
side of the service’s Input/Output tab.
Tip! You can copy and paste the fully qualified document type name from the
Navigation panel to the document reference field name. To copy the fully qualified
name, right‐click the document type in the Navigation panel, and select Copy. To paste
the fully qualified name in for the field name, right‐click the document reference
field, select Rename, and press CTRL + V.
You can configure the trigger service to generate audit data when it executes by setting
Audit properties for the trigger service. If a trigger service generates audit data and
includes a copy of the input pipeline in the audit log you can use webMethods Monitor to
re‐invoke the trigger service at a later time. For information about creating services,
declaring input and output signatures, and configuring service auditing, see the
webMethods Developer User’s Guide.
Trigger Validation
When you save a trigger, the Integration Server evaluates the trigger, and specifically the
conditions in the trigger, to make sure the trigger is valid. If the Integration Server
determines that the trigger or a condition in the trigger is not valid, Developer displays
an error message and prompts you to cancel the save or continue the save with a disabled
trigger. The Integration Server considers a trigger to be valid when each of the following
is true:
The trigger contains at least one condition.
Each condition in the trigger specifies a unique name.
Each condition in the trigger specifies a service.
Each condition in the trigger specifies one or more publishable document types.
If multiple conditions in the trigger specify the same publishable document type, the
filter applied to the publishable document type must be the same in each condition.
For more information about creating filters, see “Creating a Filter for a Document” on
page 116.
The syntax of a filter applied to a publishable document type is correct.
The trigger contains no more than one join condition.
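The validity rules above can be expressed as a short checklist function. This Python sketch is illustrative, with an assumed condition structure; it is not how Integration Server validates triggers internally:

```python
def validate_trigger(conditions):
    """Check the documented validity rules; return a list of problems
    (an empty list means the trigger is valid)."""
    problems = []
    if not conditions:
        problems.append("trigger has no conditions")
    names = [c.get("name") for c in conditions]
    if len(names) != len(set(names)):
        problems.append("condition names are not unique")
    seen_filters = {}
    joins = 0
    for c in conditions:
        if not c.get("service"):
            problems.append("%s: no service" % c.get("name"))
        doc_types = c.get("doc_types", [])
        if not doc_types:
            problems.append("%s: no publishable document type" % c.get("name"))
        if len(doc_types) > 1:
            joins += 1           # a join condition names several doc types
        for dt in doc_types:
            f = c.get("filters", {}).get(dt)
            if dt in seen_filters and seen_filters[dt] != f:
                problems.append("%s: filters differ across conditions" % dt)
            seen_filters.setdefault(dt, f)
    if joins > 1:
        problems.append("more than one join condition")
    return problems

ok = [{"name": "Condition1", "service": "s", "doc_types": ["a:b"]}]
assert validate_trigger(ok) == []
```

A trigger with two join conditions, for example, fails the last rule and could only be saved disabled.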
Creating a Trigger
A trigger defines a subscription to one or more publishable document types. In a trigger,
the conditions that you create associate one or more publishable document types with a
service.
When you create a trigger, keep the following points in mind:
The publishable document types and services that you want to use in conditions must
already exist. For more information about requirements for services used in triggers,
see “Service Requirements” on page 111.
A trigger can subscribe to publishable document types only. A trigger cannot
subscribe to ordinary IS document types. For information about making IS
document types publishable, see “Making an Existing IS Document Type
Publishable” on page 57.
Multiple triggers (and multiple conditions within a trigger) can reference the same
publishable document type. At run time, for each trigger, the Integration Server
invokes the service specified for the first condition that matches the publishable
document type criteria.
A trigger can contain only one join condition (a condition that associates more than
one publishable document type with a service). A trigger can contain multiple
simple conditions (a condition that associates one publishable document type with a
service).
Each condition in a trigger must have a unique name.
You can save only valid triggers. For more information about requirements for a valid
trigger, see “Trigger Validation” on page 113.
Important! When you create triggers, work on a stand‐alone Integration Server instead
of an Integration Server in a cluster. Creating, modifying, disabling, and enabling
triggers on an Integration Server in a cluster can create inconsistencies in the
corresponding trigger client queues on the Broker.
To create a trigger
1 On the File menu, click New.
2 In the New dialog box, select Trigger, and click Next.
3 In the New Trigger dialog box, do the following:
a In the list next to Folder, select the folder in which you want to save the trigger.
b In the Name field, type a name for the trigger using any combination of letters
and/or the underscore character. For a list of reserved words and symbols, see
“Naming Rules for webMethods Developer Elements” on page 210.
c Click Next.
4 In the newTriggerName dialog box, select Broker/Local trigger.
5 Click Finish.
Developer generates the new trigger and displays it in the Developer window.
Developer automatically adds an empty condition named “Condition1” to the trigger.
6 In the editor, use the following procedure to build a condition.
a In the Name field, type the name you want to assign to the condition. Developer
automatically assigns each condition a default name such as Condition1 or
Condition2. You can keep this name or change it to a more descriptive one. You
must specify a unique name for each condition within a trigger.
b In the Service field, enter the fully qualified service name that you want to
associate with the publishable document types in the condition. You can type in
the service name, or click to select the service from the Select dialog box.
Note: An XSLT service cannot be used as a trigger service.
f If you specified more than one publishable document type in the condition, select
a join type.
For more information about join types and join conditions, see Chapter 9,
“Understanding Join Conditions”.
7 In the Properties panel, specify the join time‐out period, trigger queue capacity,
document processing mode, and document delivery attempts in case of error. For
more information about trigger properties, see “Setting Trigger Properties” on
page 121.
8 In the Properties panel, under Permissions, specify the ACLs you want to apply to the
trigger, if any. See the webMethods Developer User’s Guide for instructions for this task.
9 On the File menu, click Save to save the trigger.
Notes:
Integration Server validates the trigger before saving it. If Integration Server
determines that the trigger is invalid, Developer prompts you to save the trigger in a
disabled state. For more information about valid triggers, see “Trigger Validation” on
page 113.
Integration Server establishes the subscription locally by creating a trigger queue for
the trigger. The trigger queue is located in the trigger document store. Documents
retrieved by the server remain in the trigger queue until they are processed.
If you are connected to the Broker, Integration Server registers the trigger
subscription with the Broker by creating a client for the trigger on the Broker.
Integration Server also creates a subscription for each publishable document type
specified in the trigger conditions and saves the subscriptions with the trigger client.
If you are not connected to a Broker when you save the trigger, the trigger will only
receive documents published locally. When you reconnect to a Broker, the next time
Integration Server restarts, the Integration Server will create a client for the trigger on
the Broker and create subscriptions for the publishable document types identified in
the trigger conditions. The Broker validates the filters in the trigger conditions when
Integration Server creates the subscriptions.
If a publishable document type specified in a trigger condition does not exist on the
Broker (that is, there is no associated Broker document type), Integration Server still
creates the trigger client on the Broker, but does not create any subscriptions. The
Integration Server creates the subscriptions when you synchronize (push) the
publishable document type with the Broker.
You can also use the pub.trigger:createTrigger service to create a trigger. For more
information about this service, see the webMethods Integration Server Built‐In Services
Reference.
state. The following filter will match only those documents where the value of age is
greater than 65 and the value of state is equal to FL.
%age% > 65 and %state% == "FL"
Both the Broker and Integration Server evaluate a document against a subscription’s filter
upon receiving the document. The Broker evaluates the filter to determine whether the
received document meets the filter criteria. If the document meets the filter criteria, the
Broker will place the document in the subscriber’s client queue. If the document does not
meet the criteria specified in the filter, the Broker discards the document.
After Integration Server receives a document and determines that the document type
matches a trigger condition, it applies the filter to the document. If the document meets
the filter criteria, Integration Server executes the trigger service specified in the trigger
condition. If the document does not meet the filter criteria, the Integration Server discards
the document. Integration Server also creates a journal log entry stating:
No condition matches in trigger triggerName for document documentTypeName
with activation activationID.
Filters can be saved with the subscription on the Broker and with the trigger on the
Integration Server. The location of the filter depends on the filter’s syntax, which is
evaluated at design time. For more information about filter evaluation at design time and
run time, see “Filter Evaluation at Design Time” below.
Note: The Broker saves as much of a filter as possible with the subscription. For
example, suppose that a filter consists of more than one expression, and only one of
the expressions contains syntax the Broker considers invalid. The Broker saves the
expressions it considers valid with the subscription on the Broker. (The Integration
Server saves all the expressions.)
Tip! You can use the Broker Administrator to view the filters saved with a
subscription.
For more information about naming conventions and restrictions for Broker elements, see
“Naming Rules for webMethods Broker Document Fields” on page 210. For more
information about filter syntax and the Broker, see “Conditional Expressions” appendix
in the webMethods Developer User’s Guide.
1 In the Navigation panel of Developer, open the trigger.
2 In the top half of the editor, select the condition containing the publishable document
type to which you want to apply the filter.
3 In the lower half of the editor, next to the publishable document type for which you
want to create a filter, enter the filter in the Filter field.
The Integration Server provides syntax and operators that you can use to create
expressions for use with filters. For more information, see the “Conditional
Expressions” appendix in the webMethods Developer User’s Guide.
4 On the File menu, click Save to save your changes to the trigger. The Integration Server
and Broker save the filter with the subscription.
Notes:
If the Integration Server is not connected to a Broker when you save the trigger, the
Broker evaluates the filter the next time you enable and save the trigger after the
connection is re‐established or when you synchronize the document types specified
in the trigger.
If you need to specify nested fields in the filter, you can copy a path to the Filter field
from the document type. Select the field in the document type, right‐click, and select
Copy. You can then paste the path into the Filter field. However, you must add % as a
prefix and suffix to the copied path.
If multiple conditions in the trigger specify the same publishable document type, the
filter applied to the publishable document type must be the same in the conditions. If
the filters are not the same, Developer displays an error message when you try to
save the trigger.
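For example, suppose the document type contains a hypothetical nested field customer/address/state (the field name and value here are illustrative only). After copying the path and adding the % prefix and suffix, the filter might look like this:

```
%customer/address/state% == "FL"
```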
Integration Server will never execute serviceA. Whenever Integration Server receives
documentA, the document satisfies ConditionAB, and Integration Server executes serviceAB.
Important! An ordered scenario assumes that documents are published in the correct
order and that you set up the trigger to process documents serially. For more
information about building services that publish documents, see Chapter 6,
“Publishing Documents”. For more information about specifying the document
processing for a trigger, see “Selecting Document Processing” on page 128.
If you create one trigger for each of these conditions, you could not guarantee that the
Integration Server would invoke services in the required order even if publishing
occurred in that order. Additionally, specifying serial dispatching for the trigger ensures
that a service will finish executing before the next document is processed. For example,
the Integration Server could still be executing addCustomer when it receives the documents
customerOrder and customerBill. If you specified concurrent dispatching instead of serial
dispatching, the Integration Server might execute the services addCustomerOrder and
billCustomer before it finished executing addCustomer. In that case, the addCustomerOrder and
billCustomer services would fail.
1 In the Navigation panel, open the trigger to which you want to add a condition.
2 In the top half of the editor, click to add a condition. Developer automatically
assigns the condition a default name, such as Condition2.
3 Define the condition as described in step 6 in “To create a trigger” on page 114.
4 On the File menu, click Save to save the trigger.
If the Integration Server considers the trigger invalid, Developer displays a message
indicating why the trigger is invalid and gives you the option of saving the trigger in
a disabled state.
1 In the Navigation panel, open the trigger.
2 In the top half of the editor, select the condition to be moved.
Note: You cannot disable a trigger during trigger service execution.
To disable a trigger
1 In the Navigation panel, open the trigger you want to disable.
2 In the Properties panel, under General, set the Enabled property to False.
3 On the File menu, click Save to save the trigger in a disabled state.
In the Navigation panel, Developer changes the color of the trigger icon to gray to
indicate that it is disabled.
Tip! You can also suspend document retrieval and document processing for a trigger.
Unlike disabling a trigger, suspending retrieval and processing does not destroy the
client queue. The Broker continues to enqueue documents for suspended triggers.
However, the Integration Server does not retrieve or process documents for
suspended triggers. For more information about suspending triggers, see the
webMethods Integration Server Administrator’s Guide.
To enable a trigger
1 In the Navigation panel, open the trigger you want to enable.
2 In the Properties panel, under General, set the Enabled property to True.
3 On the File menu, click Save to save the trigger.
If the Integration Server determines that a trigger is not valid, Developer prevents you
from saving the trigger in an enabled state. Developer resets the Enabled property to
False.
for the trigger client. When you re‐enable the trigger on any server in the cluster, all the
queued documents that did not expire will be processed by the cluster.
To disable a trigger in a cluster of Integration Servers, disable the trigger on each
Integration Server in the cluster, and then manually remove the document subscriptions
created by the trigger from the Broker. For more information about deleting document
subscriptions on the Broker, see the webMethods Broker Administrator’s Guide.
Important! Disabling triggers in a cluster in a production environment is not
recommended. If you must make the trigger unavailable, delete the trigger from each
server and then delete the trigger client queue on the Broker. For more information
about deleting triggers in a cluster, see “Deleting Triggers in a Cluster” on page 143.
The implications of a join time‐out are different depending on the join type.
When the time‐out period elapses, the next document in the trigger queue that satisfies
the All (AND) condition causes the time‐out period to start again. The Integration Server
places the document in the database and assigns a status of “pending” even if the
document has the same activation ID as an earlier document that satisfied the join
condition. The Integration Server then waits for the remaining documents in the join
condition.
For more information about All (AND) join conditions, see Chapter 9, “Understanding Join
Conditions”.
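The time‐out behavior described above can be sketched as a small simulation. This is an illustrative model only, not the Integration Server implementation; the class name, method names, and parameters are invented for the example:

```python
import time

class AndJoin:
    """Illustrative model of an All (AND) join: the trigger service runs only
    after one document of each required type arrives with the same activation
    ID inside the join time-out window. Not the actual implementation."""

    def __init__(self, required_types, timeout_s, now=time.monotonic):
        self.required = set(required_types)
        self.timeout_s = timeout_s
        self.now = now
        self.pending = {}    # activation ID -> (window start, {type: document})

    def receive(self, activation_id, doc_type, document):
        started, docs = self.pending.get(activation_id, (self.now(), {}))
        if self.now() - started > self.timeout_s:
            # Time-out elapsed: the next matching document restarts the window.
            started, docs = self.now(), {}
        docs[doc_type] = document
        self.pending[activation_id] = (started, docs)
        if self.required <= set(docs):
            del self.pending[activation_id]
            return docs      # join satisfied: the trigger service would run
        return None          # still "pending": wait for the remaining types
```

Passing a fake clock via the `now` parameter makes the expiry behavior easy to observe without waiting in real time.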
1 In the Navigation panel, open the trigger for which you want to set the join time‐out.
2 In the Properties panel, under General, next to Join expires, select one of the following:
Select... To...
True Indicate that the Integration Server stops waiting for the other
documents in the join condition once the time‐out period elapses.
In the Expire after property, specify the length of the join time‐out
period. The default time period is 1 day.
False Indicate that the join condition does not expire. The Integration
Server waits indefinitely for the additional documents specified in
the join condition. Set the Join expires property to False only if you
are confident that all of the documents will be received.
Important! A join condition is persisted across server restarts. To
remove a waiting join condition that does not expire, disable, then
re‐enable and save the trigger. Re‐enabling the trigger effectively
recreates the trigger.
3 On the File menu, click Save to save the trigger.
The capacity and refill level also determine how frequently the Integration Server
retrieves documents for the trigger and the combined size of the retrieved documents,
specifically:
The greater the difference between capacity and refill level, the less frequently the
Integration Server retrieves documents from the Broker. However, the combined size
of the retrieved documents will be larger.
The smaller the difference between capacity and refill level, the more frequently the
Integration Server retrieves documents. However, the combined size of the retrieved
documents will be smaller.
When you set values for capacity and refill level, you need to balance the frequency of
document retrieval with the combined size of the retrieved documents. Use the following
guidelines to set values for capacity and refill level for a trigger queue.
If the trigger subscribes to small documents, set a high capacity. Then, set refill level
to be 30% to 40% of the capacity. The Integration Server retrieves documents for this
trigger less frequently, however, the small size of the documents means that the
combined size of the retrieved documents will be manageable. Additionally, setting
the refill level to 30% to 40% ensures that the trigger queue does not empty before the
Integration Server retrieves more documents. This can improve performance for
high‐volume and high‐speed processing.
If the trigger subscribes to large documents, set a low capacity. Then, set the refill
level to slightly less than the capacity. The Integration Server retrieves
documents more frequently, however, the combined size of the retrieved documents
will be manageable and will not overwhelm the Integration Server.
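A small simulation illustrates the trade‐off described in these guidelines. The class and numbers below are illustrative only, not part of Integration Server:

```python
class TriggerQueueSim:
    """Illustrative model of how Capacity and Refill level drive document
    retrieval from the Broker (not the Integration Server implementation)."""

    def __init__(self, capacity, refill_level):
        self.capacity = capacity
        self.refill_level = refill_level
        self.queue = 0           # documents currently in the trigger queue
        self.retrievals = 0      # trips to the Broker
        self.batch_sizes = []    # number of documents fetched per trip

    def run(self, documents_to_process):
        processed = 0
        while processed < documents_to_process:
            if self.queue <= self.refill_level:
                batch = self.capacity - self.queue   # refill up to capacity
                self.queue += batch
                self.retrievals += 1
                self.batch_sizes.append(batch)
            self.queue -= 1                          # process one document
            processed += 1
        return self.retrievals, self.batch_sizes

# Wide gap between capacity and refill level: fewer, larger retrievals.
retrievals_wide, batches_wide = TriggerQueueSim(10, 4).run(100)
# Narrow gap: more frequent, smaller retrievals.
retrievals_narrow, batches_narrow = TriggerQueueSim(10, 9).run(100)
```

With the default values (capacity 10, refill level 4), 100 documents are fetched in 17 trips; with a refill level of 9, the same documents require a trip for nearly every document.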
Note: You can specify whether Integration Server should reject documents published
locally, using the pub.publish:publish or pub.publish.publishAndWait services, when the queue
for the subscribing trigger is at maximum capacity. For more information about this
feature, see the description for the watt.server.publish.local.rejectOOS parameter in
the webMethods Integration Server Administrator’s Guide.
1 In the Navigation panel, open the trigger for which you want to specify trigger queue
capacity.
2 In the Properties panel, under Trigger queue, in the Capacity property, type the
maximum number of documents that the trigger queue can contain. The default is 10.
3 In the Refill level property, type the number of unprocessed documents that must
remain in this trigger queue before the Integration Server retrieves more documents
for the queue from the Broker. The default is 4.
The Refill level value must be less than or equal to the Capacity value.
4 On the File menu, click Save to save the trigger.
Note: At run time, if retrieving and processing documents consumes too much
memory or too many server threads, the server administrator might need to
temporarily reduce the capacity and refill levels for trigger queues. The server
administrator can use the Integration Server Administrator to gradually decrease the
capacity and refill levels of all trigger queues. The server administrator can also use
the Integration Server Administrator to change the Capacity or Refill level values for a
trigger. For more information, see the webMethods Integration Server Administrator’s
Guide.
Note: The Integration Server returns acknowledgements for guaranteed documents
only. The Integration Server does not return acknowledgements for volatile
documents.
You can increase the number of document acknowledgements returned at one time by
changing the value of the Acknowledgement Queue Size property. The acknowledgement
queue is a queue that contains pending acknowledgements for guaranteed documents
processed by the trigger. When the acknowledgement queue size is greater than one, a
server thread places a document acknowledgement into the acknowledgement queue
after it finishes executing the trigger service. Acknowledgements collect in the queue
until a background thread returns them as a group to the sending resource.
If the Acknowledgement Queue Size is set to one, acknowledgements will not collect in the
acknowledgement queue. Instead, the Integration Server returns an acknowledgement to
the sending resource immediately after the trigger service finishes executing.
The Integration Server maintains two acknowledgement queues for a trigger. The first
queue is an inbound or filling queue in which acknowledgements accumulate. The
second queue is an outbound or emptying queue that contains the acknowledgements
the background thread gathers and returns to the sending resource.
The value of the Acknowledgement Queue Size property determines the maximum number
of pending acknowledgements in each queue. Consequently, the maximum number of
pending acknowledgements for a trigger is twice the value of this property. For example,
if the Acknowledgement Queue Size property is set to 10, the trigger can have up to 20
pending document acknowledgements (10 acknowledgements in the inbound queue and
10 acknowledgements in the outbound queue).
If the inbound and outbound acknowledgement queues fill to capacity, the Integration
Server blocks any server threads that attempt to add an acknowledgement to the queues.
The blocked threads resume execution only after the Integration Server empties one of
the queues by returning the pending acknowledgements to the sending resource.
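The dual‐queue behavior can be sketched as follows. This is an illustrative model of the behavior described above; the class and method names are invented for the example, and blocking is modeled by returning False:

```python
from collections import deque

class AckQueues:
    """Illustrative model of the inbound (filling) and outbound (emptying)
    acknowledgement queues described above; not the actual implementation."""

    def __init__(self, size):
        self.size = size             # the Acknowledgement Queue Size property
        self.inbound = deque()       # filling queue
        self.outbound = deque()      # emptying queue

    def add_ack(self, ack):
        """A server thread adds an acknowledgement after the trigger service
        finishes. Returns False when both queues are full (the real server
        would block the thread instead)."""
        if len(self.inbound) < self.size:
            self.inbound.append(ack)
            return True
        if not self.outbound:
            # The full inbound queue becomes the outbound batch.
            self.outbound, self.inbound = self.inbound, deque([ack])
            return True
        return False

    def flush(self):
        """The background thread returns the outbound batch to the sender."""
        batch = list(self.outbound)
        self.outbound.clear()
        return batch

    def pending(self):
        return len(self.inbound) + len(self.outbound)
```

In this model, a queue size of 10 allows up to 20 pending acknowledgements, matching the example in the text.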
Increasing the size of a trigger’s acknowledgement queue can provide the following
benefits:
Reduces network traffic. Returning acknowledgements one at a time for each
guaranteed document that is processed can result in a high volume of network traffic.
Configuring the trigger so that the Integration Server returns several document
acknowledgements at once can reduce the amount of network traffic.
Increases server thread availability. If the size of the acknowledgement queue is set to 1
(the default), the Integration Server releases the server thread used to process the
document only after returning the acknowledgement. If the size of the
acknowledgement queue is greater than 1, the Integration Server releases the server
thread used to process the document immediately after the thread places the
acknowledgement into the acknowledgement queue. When acknowledgements
collect in the queue, server threads can be returned to the thread pool more quickly.
If a resource or connection failure occurs before acknowledgements are sent or processed,
the transport redelivers the previously processed, but unacknowledged documents. The
number of documents redelivered to a trigger depends on the size of the trigger’s
acknowledgement queue. If exactly‐once processing is configured for the trigger, the
Integration Server detects the redelivered documents as duplicates and discards them
without re‐processing them. For more information about exactly‐once processing, see
Chapter 8, “Exactly‐Once Processing”.
1 In the Navigation panel, open the trigger for which you want to specify trigger queue
capacity.
2 In the Properties panel, under Trigger queue, in the Acknowledgement Queue Size
property, type the maximum number of pending document acknowledgements for
the trigger. The value must be greater than zero. The default is 1.
3 On the File menu, click Save to save the trigger.
Serial Processing
In serial processing, the Integration Server processes the documents in the trigger queue
one after the other. The Integration Server retrieves the first document in the trigger
queue, determines which condition the document satisfies, and executes the service
specified in the trigger condition. The Integration Server waits for the service to finish
executing before retrieving the next document from the trigger queue.
In serial processing, the Integration Server processes documents in the trigger queue in
the same order in which it retrieves the documents from the Broker. That is, serial
document processing maintains publication order. However, the Integration Server
processes documents in a trigger queue with serial dispatching more slowly than it
processes documents in a trigger queue with concurrent processing.
Note: Serial document processing is equivalent to the Shared Document Order mode
of “Publisher” on the Broker.
Tip! If your trigger contains multiple conditions to handle a group of published
documents that must be processed in a specific order, use serial processing.
The following illustration and explanation describe how serial document processing
works in a clustered environment.
[Illustration: The Broker holds documents A1–A2 (PublisherA), B1–B3 (PublisherB), and C1–C2 (PublisherC) and distributes them to the processCustomerInfo trigger queues on the clustered servers ServerX and ServerZ; callouts 1–6 correspond to the steps below.]
Step Description
1 ServerX retrieves the first two documents in the queue (documents A1 and B1)
to fill its processCustomerInfo trigger queue to capacity. ServerX begins processing
document A1.
2 ServerZ retrieves the documents C1 and C2 to fill its processCustomerInfo trigger
queue to capacity. ServerZ begins processing the document C1.
Even though document B2 is the next document in the queue, the Broker does
not distribute document B2 from PublisherB to ServerZ because ServerX
contains unacknowledged documents from PublisherB.
3 ServerX finishes processing document A1 and acknowledges document A1 to
the Broker.
4 ServerX requests 1 more document from the Broker. (The processCustomerInfo
trigger has a refill level of 1.) The Broker distributes document B2 from
PublisherB to ServerX.
5 ServerZ finishes processing document C1 and acknowledges document C1 to
the Broker.
6 ServerZ requests 1 more document from the Broker. The Broker distributes
document A2 to ServerZ.
ServerZ can process a document from PublisherA because the other server in
the cluster (ServerX) does not have any unacknowledged documents from
PublisherA. Even though document B3 is the next document in the queue, the
Broker does not distribute document B3 to ServerZ because ServerX contains
unacknowledged documents from PublisherB.
Note: The Broker and Integration Servers in a cluster cannot ensure that serial triggers
process volatile documents from the same publisher in the order in which the
documents were published.
Note: When documents are delivered to the default client in a cluster, the Broker and
Integration Servers cannot ensure that documents from the same publisher are
processed in publication order. This is because the Integration Server acknowledges
documents delivered to the default client as soon as they are retrieved from the
Broker.
Concurrent Processing
In concurrent processing, Integration Server processes the documents in the trigger
queue in parallel. That is, Integration Server processes as many documents in the trigger
queue as it can at the same time. The Integration Server does not wait for the service
specified in the trigger condition to finish executing before it begins processing the next
document in the trigger queue. You can specify the maximum number of documents the
Integration Server can process concurrently.
Concurrent processing provides faster performance than serial processing. The
Integration Server processes the documents in the trigger queue more quickly because the
Integration Server can process more than one document at a time. However, the more
documents the Integration Server processes concurrently, the more server threads the
Integration Server dispatches, and the more memory the document processing consumes.
Additionally, for concurrent triggers, the Integration Server does not guarantee that
documents are processed in the order in which they are received.
Note: Concurrent document processing is equivalent to the Shared Document Order
mode of “None” on the Broker.
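The difference between the two processing modes can be sketched with a bounded thread pool, where the pool size plays the role of the Max execution threads property. This is an illustrative model only, not how Integration Server is implemented:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

def process_documents(documents, trigger_service, max_execution_threads=1):
    """Illustrative model: with max_execution_threads=1 this behaves like
    serial processing (one document at a time, publication order preserved);
    a larger value behaves like concurrent processing (order not guaranteed)."""
    results = []
    lock = threading.Lock()

    def run(doc):
        output = trigger_service(doc)
        with lock:
            results.append(output)

    with ThreadPoolExecutor(max_workers=max_execution_threads) as pool:
        for doc in documents:
            pool.submit(run, doc)
    # Leaving the "with" block waits for all submitted documents to finish.
    return results

# Serial mode preserves the order in which documents were retrieved.
serial_results = process_documents([1, 2, 3], lambda doc: doc, max_execution_threads=1)
```

With more than one worker thread, all documents are still processed, but the order in which their results arrive is no longer deterministic.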
1 In the Navigation panel, open the trigger for which you want to specify document
processing.
2 In the Properties panel, next to Processing mode, select one of the following:
Select... To...
Serial Specify that Integration Server should process documents in the
trigger queue one after the other.
Concurrent Specify that Integration Server should process as many documents
in the trigger queue as it can at once.
In the Max execution threads property, specify the maximum number
of documents that Integration Server can process concurrently.
Integration Server uses one server thread to process each document
in the trigger queue.
3 If you selected serial processing and you want Integration Server to suspend
document processing and document retrieval automatically when a trigger service
ends with an error, under Fatal error handling, select True for the Suspend on Error
property.
For more information about fatal error handling, see “Configuring Fatal Error
Handling” on page 133.
4 On the File menu, click Save to save the trigger.
Note: Integration Server Administrator can be used to change the number of
concurrent execution threads for a trigger temporarily or permanently. For more
information, see the webMethods Integration Server Administrator’s Guide.
Important! Any documents that existed in the trigger client queue before you
changed the dispatching mode will be lost because the Integration Server
recreates the associated trigger client queue on the Broker.
If you change the document processing mode when the Integration Server is not
connected to the configured Broker, Developer displays a message stating that the
operation cannot be completed.
If the Integration Server on which you are developing triggers does not have a
configured Broker, you can change the document processing mode at any time
without risking the loss of documents.
Document processing and document retrieval remain suspended until one of the
following occurs:
You specifically resume document retrieval or document processing for the trigger.
You can resume document retrieval and document processing using the Integration
Server Administrator, built‐in services (pub.trigger:resumeProcessing or
pub.trigger:resumeRetrieval), or by calling methods in the Java API
(com.wm.app.b2b.server.dispatcher.trigger.TriggerFacade.setProcessingSuspended()
and
com.wm.app.b2b.server.dispatcher.trigger.TriggerFacade.setRetrievalSuspended()).
Integration Server restarts, the trigger is disabled and then re‐enabled, or
the package containing the trigger reloads. (When Integration Server suspends
document retrieval and document processing for a trigger because of an error,
Integration Server considers the change to be temporary. For more information about
temporary vs. permanent state changes for triggers, see the webMethods Integration
Server Administrator’s Guide.)
For more information about resuming document processing and document retrieval, see
the webMethods Integration Server Administrator’s Guide and the webMethods Integration
Server Built‐In Services Reference.
Note: Integration Server does not automatically suspend triggers because of transient
errors that occur during trigger service execution. For more information about
transient error handling, see “Configuring Transient Error Handling” on page 134.
Automatic suspension of document retrieval and processing can be especially useful for
serial triggers that are designed to process a group of documents in a particular order. If
the trigger service ends in error while processing the first document, you might not want
the trigger to proceed with processing the subsequent documents in the group. If
Integration Server automatically suspends document processing, you have an
opportunity to determine why the trigger service did not execute successfully and then
resubmit the document using webMethods Monitor.
By automatically suspending document retrieval as well, Integration Server prevents the
trigger from retrieving more documents. Because Integration Server already suspended
document processing, new documents would just sit in the trigger queue. If Integration
Server does not retrieve more documents for the trigger and Integration Server is in a
cluster, the documents might be processed more quickly by another Integration Server in
the cluster.
Note: You can configure fatal error handling for serial triggers only.
1 In the Navigation panel, open the trigger for which you want to specify document
processing.
2 In the Properties panel, under Fatal error handling, set the Suspend on error property to
True if you want Integration Server to suspend document processing and document
retrieval automatically when a trigger service ends with an error. Otherwise, select
False. The default is False.
3 On the File menu, click Save to save the trigger.
A transient error is an error that arises from a temporary condition that might be resolved or
corrected quickly, such as the unavailability of a resource due to network issues or failure
to connect to a database. Because the condition that caused the trigger service to fail is
temporary, the trigger service might execute successfully if the Integration Server waits
and then re‐executes the service.
You can configure transient error handling for a trigger to instruct Integration Server to
wait a specified time interval and then re‐execute a trigger service automatically when
an ISRuntimeException occurs. Integration Server re‐executes the trigger service using
the original input document.
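The retry behavior can be sketched as follows. TransientError stands in for ISRuntimeException, and the function name and signature are invented for the example; this is a model of the described behavior, not the Integration Server implementation:

```python
import time

class TransientError(Exception):
    """Stand-in for ISRuntimeException; illustrative only."""

def run_with_retries(trigger_service, input_doc, max_retry_attempts, retry_interval_ms):
    """Illustrative model of the retry behavior described above: on a
    transient error the service is re-executed with the original input
    document. A service exception (any other error) is not retried."""
    attempt = 0
    while True:
        try:
            return trigger_service(input_doc)
        except TransientError:
            if attempt >= max_retry_attempts:
                raise            # retries exhausted; treated as a failure
            attempt += 1
            # A journal message like [ISS.0014.0031D] would be logged here.
            time.sleep(retry_interval_ms / 1000.0)
```

Note that the same input document is passed on every attempt, and that a non‐transient exception propagates immediately without any retry.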
If a transient error occurs and the trigger service does not use pub.flow:throwExceptionForRetry
or ISRuntimeException() to catch the error and throw an ISRuntimeException, the trigger
service ends in error. Integration Server will not retry the trigger service.
Adapter services built on Integration Server 6.0 or later, and based on the ART
framework, detect and propagate exceptions that signal a retry if a transient error is
detected on their back‐end resource. This behavior allows for automatic retry when the
service functions as a trigger service.
Note: Integration Server does not retry a trigger service that fails because a service
exception occurred. A service exception indicates that there is something functionally
wrong with the service. A service can throw a service exception using the EXIT step.
For more information about the EXIT step, see the webMethods Developer User’s Guide.
The following sections provide more information about each retry failure handling
option.
Step Description
1 Integration Server makes the final retry attempt and the trigger service fails
because of an ISRuntimeException.
2 Integration Server treats the last trigger service failure as a service exception.
3 Integration Server rejects the document.
If the document is guaranteed, Integration Server returns an acknowledgement
to the Broker.
If a trigger service generates audit data on error and includes a copy of the input
pipeline in the audit log, you can use webMethods Monitor to re‐invoke the
trigger service manually at a later time. Note that when you use webMethods
Monitor to process the document, it is processed out of order. That is, the
document is not processed in the same order in which it was received (or
published) because the document was acknowledged to its transport when the
retry failure occurred.
4 Integration Server processes the next document in the trigger queue.
In summary, the default retry failure behavior (Throw exception) allows the trigger to
continue with document processing when retry failure occurs for a trigger service. You
can configure audit logging in such a way that you can use webMethods Monitor to
submit the document at a later time (ideally, after the condition that caused the transient
error has been remedied).
Step Description
1 Integration Server makes the final retry attempt and the trigger service fails
because of an ISRuntimeException.
2 Integration Server suspends document processing and document retrieval for
the trigger temporarily.
The trigger is suspended on this Integration Server only. If the Integration
Server is part of a cluster, other servers in the cluster can retrieve and process
documents for the trigger.
Note: The change to the trigger state is temporary. Document retrieval and
document processing will resume for the trigger if Integration Server restarts,
the trigger is enabled or disabled, or the package containing the trigger reloads.
You can also resume document retrieval and document processing manually
using Integration Server Administrator or by invoking the
pub.trigger:resumeRetrieval and pub.trigger:resumeProcessing public services.
3 Integration Server rolls back the document to the trigger document store. This
indicates that the required resources are not ready to process the document and
makes the document available for processing at a later time. For serial triggers, it
also ensures that the document maintains its position at the top of the trigger queue.
4 Optionally, Integration Server schedules and executes a resource monitoring
service. A resource monitoring service is a service that you create to determine
whether the resources associated with a trigger service are available. A resource
monitoring service returns a single output parameter named isAvailable.
5 If the resource monitoring service indicates that the resources are available (that
is, the value of isAvailable is true), Integration Server resumes document retrieval
and document processing for the trigger.
If the resource monitoring service indicates that the resources are not available
(that is, the value of isAvailable is false), Integration Server waits a short time
interval (by default, 60 seconds) and then re‐executes the resource monitoring
service. Integration Server continues executing the resource monitoring service
periodically until the service indicates the resources are available.
Tip! You can change the frequency at which the resource monitoring service
executes by modifying the value of the
watt.server.trigger.monitoringInterval property.
6 After Integration Server resumes the trigger, Integration Server passes the
document to the trigger. The trigger and trigger service process the document
just as they would any document in the trigger queue.
Note: At this point, the retry count is set to 0 (zero).
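The suspend-and-monitor cycle in steps 4 through 6 amounts to a simple polling loop. The following Python sketch is an illustrative model only, not Integration Server code: the service and resume callbacks are stand-ins for the real trigger machinery, and only the isAvailable output parameter and the 60-second default interval come from the text above.

```python
import time

# 60 seconds is the default polling frequency, controlled by the
# watt.server.trigger.monitoringInterval server property.
MONITORING_INTERVAL = 60

def monitor_and_resume(resource_monitoring_service, resume_trigger,
                       interval=MONITORING_INTERVAL, sleep=time.sleep):
    """Poll the resource monitoring service until it reports isAvailable=true,
    then resume document retrieval and processing for the trigger."""
    attempts = 0
    while True:
        # The resource monitoring service returns a single output
        # parameter named isAvailable ("true" or "false").
        result = resource_monitoring_service()
        attempts += 1
        if result.get("isAvailable") == "true":
            resume_trigger()   # retrieval and processing resume; retry count resets to 0
            return attempts
        sleep(interval)        # wait, then re-execute the monitoring service
```

In this model, injecting the sleep function makes the loop testable without real delays.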
1 In the Navigation panel, open the trigger for which you want to configure retry
behavior.
2 In the Properties panel, under Transient error handling, in the Retry until property, select
one of the following:
Select... To...
Note: If a trigger is configured to retry until successful and a
transient error condition is never remedied, a trigger service
enters into an infinite retry situation in which it continually re‐
executes the service at the specified retry interval. Because you
cannot disable a trigger during trigger service execution and you
cannot shut down the server during trigger service execution, an
infinite retry situation can cause the Integration Server to
become unresponsive to a shutdown request. For information
about escaping an infinite retry loop, see “Trigger Service Retries
and Shutdown Requests” on page 141.
Note: If you want Integration Server to suspend the trigger
and retry it later, you must provide a resource monitoring
service that Integration Server can execute to determine
when to resume the trigger. For more information about
building a resource monitoring service, see Appendix B,
“Building a Resource Monitoring Service”.
Integration Server generates the following journal log message between retry
attempts:
[ISS.0014.0031D] Service serviceName failed with ISRuntimeException. Retry x of y
will begin in retryInterval milliseconds.
If you do not intend to configure service retry for a trigger, set the Max retry attempts property to
0. This can improve the performance of services invoked by the trigger.
You can invoke the pub.flow:getRetryCount service within a trigger service to determine
the current number of retry attempts made by the Integration Server and the
maximum number of retry attempts allowed for the trigger service. For more
information about the pub.flow:getRetryCount service, see the webMethods Integration
Server Built‐In Services Reference.
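The retry behavior described above, re-executing the trigger service after a transient error up to Max retry attempts and logging the journal message between attempts, can be modeled as a loop. This is a hedged Python sketch, not product code: the function signature is invented, the service name placeholder is omitted from the log message, and only the retry semantics and the message shape come from the text.

```python
class ISRuntimeException(Exception):
    """Models the transient error that makes a trigger service eligible for retry."""

def run_with_retries(service, max_retry_attempts, retry_interval_ms, log,
                     sleep_ms=lambda ms: None):
    """Execute a trigger service, retrying on ISRuntimeException up to
    max_retry_attempts times. The service receives the current retry count,
    which is what pub.flow:getRetryCount would report during execution."""
    retry_count = 0
    while True:
        try:
            return service(retry_count)
        except ISRuntimeException:
            if retry_count >= max_retry_attempts:
                raise   # retries exhausted; the error propagates
            retry_count += 1
            log("[ISS.0014.0031D] Service failed with ISRuntimeException. "
                "Retry %d of %d will begin in %d milliseconds."
                % (retry_count, max_retry_attempts, retry_interval_ms))
            sleep_ms(retry_interval_ms)
```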
false Indicate that Integration Server should not interrupt the trigger service retry
process to respond to a shutdown request. The Integration Server shuts down
only after it makes all the retry attempts or the trigger service executes
successfully. This is the default value.
Important! If watt.server.trigger.interruptRetryOnShutdown is set to “false”
and a trigger is set to retry until successful, a trigger service can enter into an
infinite retry situation. If the transient error condition that causes the retry is
not resolved, Integration Server continually re‐executes the service at the
specified retry interval. Because you cannot disable a trigger during trigger
service execution and you cannot shut down the server during trigger service
execution, an infinite retry situation can cause Integration Server to become
unresponsive to a shutdown request. To escape an infinite retry situation, set
the watt.server.trigger.interruptRetryOnShutdown to “true”. The change
takes effect immediately.
true Indicate that Integration Server should interrupt the trigger service retry
process if a shutdown request occurs. Specifically, after the shutdown request
occurs, Integration Server waits for the current service retry to complete. If
the trigger service needs to be retried again (the service ends because of an
ISRuntimeException), the Integration Server stops the retry process and shuts
down. Upon restart, the transport (the Broker or, for a local publish, the
transient store) redelivers the document to the trigger for processing.
Note: If the trigger service retry process is interrupted and the transport
redelivers the document to the trigger, the transport increases the redelivery
count for the document. If the trigger is configured to detect duplicates but
does not use a document history database or a document resolver service to
perform duplicate detection, Integration Server considers the redelivered
document to be “In Doubt” and will not process the document. For more
information about duplicate detection and exactly‐once processing, see
Chapter 8, “Exactly‐Once Processing”.
Note: When you change the value of the
watt.server.trigger.interruptRetryOnShutdown parameter, the change takes effect
immediately.
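The effect of watt.server.trigger.interruptRetryOnShutdown can be modeled by adding a shutdown check to the retry loop. This is an illustrative Python sketch, not product code; the returned outcome tokens are invented labels for the three possible results.

```python
class ISRuntimeException(Exception):
    """Transient error that makes the trigger service eligible for retry."""

def retry_with_shutdown_check(service, max_retries, interrupt_on_shutdown,
                              shutdown_requested):
    """Retry a failing trigger service. When interrupt_on_shutdown is True and a
    shutdown has been requested, stop after the current attempt completes so the
    server can shut down; the transport later redelivers the document."""
    attempts = 0
    while True:
        try:
            return ("done", service())
        except ISRuntimeException:
            if interrupt_on_shutdown and shutdown_requested():
                return ("interrupted", None)   # stop retrying; shut down now
            if attempts >= max_retries:
                return ("exhausted", None)     # all retry attempts made
            attempts += 1
```

With the parameter set to "false", the loop above never checks the shutdown flag, which is why a retry-until-successful trigger can leave the server unresponsive to a shutdown request.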
Modifying a Trigger
After you create a trigger, you can modify it by changing or renaming the condition,
specifying different publishable document types, specifying different trigger services, or
changing trigger properties. To modify a trigger, you need to lock the trigger and have
write access to the trigger.
If your integration solution includes a Broker, the Broker needs to be available when
editing triggers. Editing triggers when the Broker is unavailable can cause the trigger and
its associated trigger client on the Broker to become out of sync. Do not edit any of the
following trigger components when the configured Broker is not available:
Any publishable document types specified in the trigger. That is, do not change the
subscriptions established by the trigger.
Any filters specified in the trigger.
The trigger state (enabled or disabled).
The document processing mode (serial or concurrent processing).
If you edit any of these trigger components when the Broker is unavailable, Developer
displays a message stating that saving your changes will cause the trigger to become out
of sync with its associated Broker client. If you want to continue, you will need to
synchronize the trigger with its associated Broker client when the connection to the
Broker becomes available. To synchronize, use Developer to disable the trigger, re‐enable
the trigger, and save. This effectively recreates the trigger client on the Broker.
Important! Once you set up a cluster of Integration Servers, avoid editing any of the
triggers in the cluster. You can edit selected trigger properties (capacity, refill level,
and maximum execution threads) using the Integration Server Administrator and
synchronize these changes across a cluster. Do not edit any other trigger properties.
For more information about editing trigger properties using the Integration Server
Administrator, see the webMethods Integration Server Administrator’s Guide.
Deleting Triggers
When you delete a trigger, Integration Server deletes the document store for the trigger
and the Broker deletes the client for the trigger. The Broker also deletes the document
type subscriptions for the trigger.
To delete a trigger, you must lock it and have write access to it. See the webMethods
Developer User’s Guide for information about locking and access permissions (ACLs).
To delete a trigger
1 Select the trigger in the Navigation panel.
2 On the Edit menu, click Delete.
3 In the Delete Confirmation dialog box, click OK.
Note: You can also use the pub.trigger:deleteTrigger service to delete a trigger. For more
information about this service, see the webMethods Integration Server Built‐In Services
Reference.
Testing Triggers
You can test a trigger using tools provided in Developer. Testing the trigger enables you
to make sure that the service executes and that the data types for the inputs and the filter,
if any, are valid. When you test a trigger, you test it locally; that is, there is no Broker
involvement. Additionally, the document containing the input data is not routed through
the dispatcher and trigger queue as with a published document.
Note: When you test a trigger that contains multiple conditions, you must test the
conditions one at a time. When a condition specifies more than one publishable
document type, the Integration Server does not perform join processing. That is, the
Integration Server does not assign an activation ID to each document. The Integration
Server runs the service directly with the specified document values as inputs.
1 In the Navigation panel of Developer, open the trigger.
2 On the Test menu, click Run.
Note: When you test a trigger condition for the first time, and until you select the
Don’t show this again check box, Developer displays an informational message
about activation IDs. The Integration Server uses activation IDs at run time for
triggers that contain join conditions. The Integration Server does not require
activation IDs for testing and debugging a trigger condition. For information
about activation IDs, see “About the Activation ID” on page 91.
3 If the trigger contains only one condition and one document type, skip to step 7.
4 If the trigger contains only one condition and multiple document types, skip to step 6.
5 If the trigger contains multiple conditions, in the Run test for triggerName dialog box,
select the condition that you want to test and click OK. You can only test one condition
at a time.
6 In the Input for triggerName dialog box, select any document type listed and click Edit.
7 In the Input for triggerName dialog box, enter valid values for the fields defined in the
document type and click OK. The Integration Server validates values after you click
OK to test the condition, as described in step 9.
8 If there are additional document types listed in the condition and the join type is All
(AND), repeat step 6 and step 7 for each additional document type. You must enter
values for all document types in an All (AND) join condition before you can test the
trigger condition; otherwise, Developer displays an error message. Join conditions of
type Any (OR) or Only one (XOR) require you to enter values for only one document
type.
9 Click OK to test the condition.
If the Integration Server runs the service successfully, Developer displays the
results in the Results panel.
If Integration Server cannot test the condition successfully, Developer displays an
error message. Testing might fail if Developer cannot match a filter string, or if
one or more values are invalid.
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
What Is Document Processing? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Overview of Exactly-Once Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Extenuating Circumstances for Exactly-Once Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Exactly-Once Processing and Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Configuring Exactly-Once Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Building a Document Resolver Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Viewing Exactly-Once Processing Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Introduction
This chapter explains what exactly‐once processing is within the context of the
Integration Server, how the Integration Server performs exactly‐once processing, and
how to configure exactly‐once processing for a trigger.
Note: Guaranteed document delivery and guaranteed document processing are not
the same thing. Guaranteed document delivery ensures that a document, once
published, is delivered at least once to the subscribing triggers. Guaranteed document
processing ensures that a trigger makes one or more attempts to process the document.
The following section provides more information about how the Integration Server
ensures exactly‐once processing.
Note: Exactly‐once processing and duplicate detection are performed for guaranteed
documents only.
Integration Server uses duplicate detection to determine the document’s status. The
document status can be one of the following:
New. The document is new and has not been processed by the trigger.
Duplicate. The document is a copy of one already processed by the trigger.
In Doubt. Integration Server cannot determine the status of the document. The trigger
may or may not have processed the document before.
To resolve the document status, Integration Server evaluates, in order, one or more of the
following:
Redelivery count indicates how many times the transport has redelivered the document
to the trigger.
Document history database maintains a record of all guaranteed documents processed
by triggers for which exactly‐once processing is configured.
Document resolver service is a service created by a user to determine the document
status. The document resolver service can be used instead of or in addition to the
document history database.
The steps the Integration Server performs to determine a document’s status depend on
the exactly‐once properties configured for the subscribing trigger. For more information
about configuring exactly‐once properties, see “Configuring Exactly‐Once Processing”
on page 160.
The table below summarizes the process Integration Server follows to determine a
document’s status and the action the server takes for each duplicate detection method.
1 Check Redelivery Count
Integration Server examines the redelivery count for the document.
Redelivery Count Action
0 If using document history, Integration Server proceeds to 2 to check the document history database.
If document history is not used, Integration Server considers the document to be NEW. Integration Server executes the trigger service.
>0 If using document history, Integration Server proceeds to 2 to check the document history database.
If document history is not used, Integration Server proceeds to 3 to execute the document resolver service.
If neither document history nor a document resolver service is used, Integration Server considers the document to be IN DOUBT.
-1 (Undefined) If using document history, proceed to 2 to check the document history database.
If document history is not used, proceed to 3 to execute the document resolver service.
Otherwise, the document is NEW. Execute the trigger service.
2 Check Document History
If a document history database is configured and the trigger uses it to maintain a
record of processed documents, Integration Server checks for the document’s
UUID in the document history database.
UUID Exists? Action
No. Document is NEW. Execute trigger service.
Yes. Processing completed. Document is a DUPLICATE. Acknowledge document and discard.
Yes. Processing started. If provided, proceed to 3 to invoke the document resolver service. Otherwise, document is IN DOUBT.
3 Execute Document Resolver Service
If a document resolver service is specified, Integration Server executes it to determine the document status.
Status Action
NEW Execute trigger service.
DUPLICATE Acknowledge document and discard.
IN DOUBT Acknowledge and log document.
Note: The Integration Server sends In Doubt documents to the audit subsystem for
logging. You can resubmit In Doubt documents using webMethods Monitor. The
Integration Server discards Duplicate documents. Duplicate documents cannot be
resubmitted. For more information about webMethods Monitor, see the webMethods
Monitor documentation.
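The decision process summarized in the table above can be condensed into one function. The Python sketch below is illustrative only and makes the following assumptions: history_lookup returns None when no entry exists for the document's UUID, "completed" when a completed entry exists, and "processing" when only a processing entry exists; resolver models the document resolver service and returns a final status. None of these names are product APIs.

```python
NEW, DUPLICATE, IN_DOUBT = "NEW", "DUPLICATE", "IN_DOUBT"

def resolve_status(redelivery_count, use_history=False,
                   history_lookup=None, resolver=None):
    """Model of how Integration Server determines a guaranteed document's status."""
    # 1 Check redelivery count.
    if not use_history:
        if redelivery_count == 0:
            return NEW                       # first delivery, no history: new
        if resolver is not None:
            return resolver()                # 3 Execute document resolver service
        # No history and no resolver: >0 is In Doubt, -1 (undefined) is New.
        return IN_DOUBT if redelivery_count > 0 else NEW
    # 2 Check document history.
    entry = history_lookup()
    if entry is None:
        return NEW                           # no UUID entry: not processed before
    if entry == "completed":
        return DUPLICATE                     # processing already completed
    # "processing" entry only: resolver decides, otherwise In Doubt.
    return resolver() if resolver is not None else IN_DOUBT
```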
The following sections provide more information about each method of duplicate
detection.
Redelivery Count
The redelivery count indicates the number of times the transport (the Broker or, for local
publishing, the transient store) has redelivered a document to the trigger. The transport
that delivers the document to the trigger maintains the document redelivery count. The
transport updates the redelivery count immediately after the trigger receives the
document. A redelivery count other than zero indicates that the trigger might have
received and processed (or partially processed) the document before.
For example, suppose that your integration solution consists of an Integration Server and
a Broker. When the server first retrieves the document for the trigger, the document
redelivery count is zero. After the server retrieves the document, the Broker increments
the redelivery count to 1. If a resource (Broker or Integration Server) shuts down before
the trigger processes and acknowledges the document, the Broker will redeliver the
document when the connection is re‐established. The redelivery count of 1 indicates that
the Broker delivered the document to the trigger once before.
The following table identifies the possible redelivery count values and the document
status associated with each value.
‐1 The resource that delivered the document does not maintain a
document redelivery count. The redelivery count is undefined.
The Integration Server uses a value of ‐1 to indicate that the
redelivery count is absent. For example, a document received
from a Broker version 6.0 or 6.0.1 does not contain a redelivery
count. (Brokers version 6.0.1 and earlier do not maintain
document redelivery counts.)
If other methods of duplicate detection are configured for this
trigger (document history database or document resolver
service), the Integration Server uses these methods to
determine the document status. If no other methods of
duplicate detection are configured, the Integration Server
assigns the document a status of New and executes the trigger
service.
0 This is most likely the first time the trigger received the
document.
If the trigger uses a document history to perform duplicate
detection, Integration Server checks the document history
database to determine the document status. If no other
methods of duplicate detection are configured, the server
assigns the document a status of New and executes the trigger
service.
> 0 The number of times the resource redelivered the document to
the trigger. The trigger might or might not have processed the
document before. For example, the server might have shut
down before or during processing. Or, the connection between
Integration Server and Broker was lost before the server could
acknowledge the document. The redelivery count does not
provide enough information to determine whether the trigger
processed the document before.
If other methods of duplicate detection are configured for this
trigger (document history database or document resolver
service), the Integration Server uses these methods to
determine the document status. If no other methods of
duplicate detection are configured, the server assigns the
document a status of In Doubt, acknowledges the document,
uses the audit subsystem to log the document, and writes a
journal log entry stating that an In Doubt document was
received.
Integration Server uses redelivery count to determine document status whenever you
enable exactly‐once processing for a trigger. That is, setting the Detect duplicates property
to true indicates that the redelivery count will be used as part of duplicate detection.
Note: You can retrieve a redelivery count for a document at any point during trigger
service execution by invoking the pub.publish:getRedeliveryCount service. For more
information about this service, see the webMethods Integration Server Built‐In Services
Reference.
Does not exist. Assigns the document a status of New and executes the trigger service. The absence of the UUID indicates that the trigger has not processed the document before.
Exists in a “processing” entry and a “completed” entry. Assigns the document a status of Duplicate. The existence of the “processing” and “completed” entries for the document’s UUID indicates that the trigger already processed the document successfully. The Integration Server acknowledges the document, discards it, and writes a journal log entry indicating that a duplicate document was received.
Exists in a “processing” entry only. Cannot determine the status of the document conclusively. The absence of an entry with a “completed” status for the UUID indicates that the trigger service started to process the document, but did not finish. The trigger service might still be executing, or the server might have unexpectedly shut down during service execution.
If a document resolver service is specified, Integration Server invokes it. If a document resolver service is not specified for this trigger, Integration Server assigns the document a status of In Doubt, acknowledges the document, uses the audit subsystem to log the document, and writes a journal log entry stating that an In Doubt document was received.
Exists in a “completed” entry only. Determines the document is a Duplicate. The existence of the “completed” entry indicates that the trigger already processed the document successfully. The Integration Server acknowledges the document, discards it, and writes a journal log entry indicating that a Duplicate document was received.
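The cases above reduce to a small rule over the set of history entries recorded for a UUID. This sketch is illustrative only; "RESOLVER" is an invented token meaning "invoke the document resolver service if one is specified, otherwise treat the document as In Doubt".

```python
def check_document_history(entries):
    """entries: the set of status entries recorded for a document's UUID in the
    document history database, e.g. set(), {"processing"}, or
    {"processing", "completed"}. Returns the document status per the table above."""
    if not entries:
        return "NEW"         # no UUID entry: trigger has not processed the document
    if "completed" in entries:
        return "DUPLICATE"   # a completed entry means processing already finished
    return "RESOLVER"        # processing started but never completed
```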
Note: The server also considers a document to be In Doubt when the document’s
UUID (or, in the absence of a UUID, the value of trackID or eventID) exceeds 96
characters. The Integration Server then uses the document resolver service, if
provided, to determine the status of the document. For more information about how
the Integration Server handles a document missing a UUID, see “Documents without
UUIDs” on page 155.
For information about configuring the document history database, refer to the
webMethods Installation Guide.
true If the document history database is properly configured,
Integration Server suspends the trigger and schedules a system
task that executes a service that checks for the availability of the
document history database. Integration Server resumes the
trigger and re‐executes it when the service indicates that the
document history database is available.
If the document history database is not properly configured,
Integration Server suspends the trigger but does not schedule a
system task to check for the database’s availability and will not
resume the trigger automatically. You must manually resume
retrieval and processing for the trigger after configuring the
document history database properly.
The default value is true.
false If a document resolver service is specified, Integration Server
executes it to determine the status of the document. Otherwise,
Integration Server assigns the document a status of In Doubt,
acknowledges the document, and uses the audit subsystem to
log the document.
For more information about the
watt.server.trigger.preprocess.suspendAndRetryOnError property, see the webMethods
Integration Server Administrator’s Guide.
If specified, Integration Server executes the document resolver service to determine the
document’s status. Otherwise, the Integration Server logs the document as In Doubt.
Note: The watt.server.idr.reaperInterval property determines the initial execution
frequency for the wm.server.dispatcher:deleteExpiredUUID service. After you define a JDBC
connection pool for Integration Server to use to communicate with the document
history database, change the execution interval by editing the scheduled service.
You can also use Integration Server Administrator to clear expired document history
entries from the database immediately.
1 Open Integration Server Administrator.
2 From the Settings menu in the Navigation panel, click Resources.
3 Click Exactly Once Statistics.
4 Click Remove Expired Document History Entries.
A window of time exists where checking the redelivery count and the document history database does not
conclusively determine whether a trigger processed a document before. For example:
If a duplicate document arrives before the trigger finishes processing the original
document, the document history database does not yet contain an entry that indicates
processing completed. Integration Server assigns the second document a status of In
Doubt. Typically, this is only an issue for long‐running trigger services.
If Integration Server fails before completing document processing, the transport
redelivers the document. However, the document history database contains only an
entry that indicates document processing started. Integration Server assigns the
redelivered document a status of In Doubt.
You can write a document resolver service to determine the status of documents received
during these windows. How the document resolver service determines the document
status is up to the developer of the service. Ideally, the writer of the document resolver
service understands the semantics of all the applications involved and can use the
document to determine the document status conclusively. If processing an earlier copy of
the document left some application resources in an indeterminate state, the document
resolver service can also issue compensating transactions.
If provided, the document resolver service is the final method of duplicate detection.
For more information about building a document resolver service, see “Building a
Document Resolver Service” on page 162.
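As an example of the kind of logic a document resolver service might contain, the sketch below resolves status for a hypothetical order-processing integration. Everything here is an assumption invented for illustration: the orderId field, the orders_db store, and the "committed" state are not part of the product API, and a real resolver would be written as an Integration Server service against your own application state.

```python
def order_document_resolver(document, orders_db):
    """Illustrative document resolver: decide a document's status by checking
    whether the order it carries was already recorded by the trigger service.
    orders_db maps order IDs to their processing state (hypothetical names)."""
    order_id = document["orderId"]
    state = orders_db.get(order_id)
    if state is None:
        return "NEW"         # no trace of the order: safe to process
    if state == "committed":
        return "DUPLICATE"   # fully processed before: acknowledge and discard
    # Partially processed: issue a compensating transaction so the trigger
    # service can safely run again, then report the document as new.
    del orders_db[order_id]
    return "NEW"
```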
If the document resolver service ends with an ISRuntimeException, and the
watt.server.trigger.preprocess.suspendAndRetryOnError property is set to true,
Integration Server suspends the trigger and schedules a system task to execute the
trigger’s resource monitoring service (if one is specified). Integration Server resumes
the trigger and retries trigger execution when the resource monitoring service
indicates that the resources used by the trigger are available.
If a resource monitoring service is not specified, you will need to resume the trigger
manually (via the Integration Server Administrator or the pub.trigger:resumeProcessing
and pub.trigger:resumeRetrieval services). For more information about configuring a
resource monitoring service, see Appendix B, “Building a Resource Monitoring
Service”.
If the document resolver service ends with an ISRuntimeException, and the
watt.server.trigger.preprocess.suspendAndRetryOnError property is set to false,
Integration Server assigns the document a status of In Doubt, acknowledges the
document, and uses the audit subsystem to log the document.
If the document resolver service ends with an exception other than an
ISRuntimeException, Integration Server assigns the document a status of In Doubt,
acknowledges the document, and uses the audit subsystem to log the document.
Note: Time drift occurs when the computers that host the clustered servers
gradually develop different date/time values. Even if the Integration Server
Administrator synchronizes the computer date/time when configuring the
cluster, the time maintained by each computer can gradually differ as time passes.
To alleviate time drift, synchronize the cluster node times regularly.
In some circumstances the Integration Server might not process a new, unique document
because duplicate detection determines that the document is a duplicate. For example:
If the publishing client assigns two different documents the same UUID, the
Integration Server detects the second document as a duplicate and does not process it.
If the document resolver service incorrectly determines the status of a document to be
duplicate (when it is, in fact, new), the server discards the document without
processing it.
Important! In the above examples, the Integration Server functions correctly when
determining the document status. However, factors outside of the control of the
Integration Server create situations in which duplicate documents are processed or
new documents are marked as duplicates. The designers and developers of the
integration solution need to make sure that clients properly publish documents,
exactly‐once properties are optimally configured, and that document resolver
services correctly determine a document’s status.
If a trigger does not need exactly‐once processing (for example, the trigger service simply
requests or retrieves data), consider leaving exactly‐once processing disabled for the
trigger. However, if you want to ensure exactly‐once processing, you must use a
document history database or implement a custom solution using the document resolver
service.
Note: On start up, Developer queries the Integration Server to determine the
Broker version to which it is connected. If an exception occurs during this check,
Developer assumes the Broker does not track document redelivery counts.
Stand‐alone Integration Servers cannot share a document history database. Only a
cluster of Integration Servers can (and must) share a document history database.
Make sure the duplicate detection window set by the History time to live property is
long enough to catch duplicate documents but does not cause the document history
database to consume too many server resources. If external applications reliably
publish documents once, you might use a smaller duplicate detection window. If the
external applications are prone to publishing duplicate documents, consider setting a
longer duplicate detection window.
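The History time to live property effectively defines how long a UUID entry participates in duplicate detection. The pruning it implies can be sketched as below; the dictionary-of-timestamps representation is an assumption for illustration, not how the document history database is actually stored.

```python
def prune_expired_entries(history, now, time_to_live):
    """Remove document history entries older than the duplicate-detection
    window. history maps UUID -> insertion timestamp (seconds); entries whose
    age exceeds time_to_live no longer participate in duplicate detection."""
    expired = [uuid for uuid, ts in history.items() if now - ts > time_to_live]
    for uuid in expired:
        del history[uuid]
    return len(expired)
```

A shorter window keeps the table small; a longer window catches late duplicates at the cost of more server resources, which is the trade-off described above.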
If you intend to use a document history database as part of duplicate detection, you
must first install the document history database component and associate it with a
JDBC connection pool. For instructions, see the webMethods Installation Guide.
1 In the Navigation panel, open the trigger for which you want to configure exactly‐
once processing.
2 In the Properties panel, under Exactly Once, set the Detect duplicates property to True.
3 To use a document history database as part of duplicate detection, do the following:
a Set the Use history property to True.
b In the History time to live property, specify how long the document history database
maintains an entry for a document processed by this trigger. This value
determines the length of the duplicate detection window.
4 To use a service that you create to resolve the status of In Doubt documents, specify
that service in the Document resolver service property.
5 On the File menu, click Save.
1 In the Navigation panel, open the trigger for which you want to configure exactly‐
once processing.
2 In the Properties panel, under Exactly Once, set the Detect duplicates property to False.
Developer disables the remaining exactly‐once properties.
3 On the File menu, click Save.
Exactly-Once Statistics
The Integration Server saves exactly‐once statistics in memory. When the server restarts,
the statistics will be removed from memory.
Note: The exactly‐once statistics table might not completely reflect all the duplicate
documents received via the following methods: delivery to the default client, local
publishing, and from a 6.0.1 Broker. In each of these cases, the Integration Server
saves documents in a trigger queue located on disk. When a trigger queue is stored
on disk, the trigger queue immediately rejects any documents that are copies of
documents currently saved in the trigger queue. The Integration Server does not
perform duplicate detection for these documents. Consequently, the exactly‐once
statistics table will not list duplicate documents that were rejected by the trigger
queue.
1 Start webMethods Integration Server and open the Integration Server Administrator.
2 Under the Settings menu in the navigation area, click Resources.
3 Click Exactly-Once Statistics.
1 Start webMethods Integration Server and open the Integration Server Administrator.
2 Under the Settings menu in the navigation area, click Resources.
3 Click Exactly-Once Statistics.
4 Click Clear All Duplicate or In Doubt Document Statistics.
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Join Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Subscribe Path for Documents that Satisfy a Join Condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Join Conditions in Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Introduction
Join conditions are conditions that associate two or more document types with a single
trigger service. Typically, join conditions are used to combine data published by different
sources and process it with one service.
Join Types
The join type that you specify for a join condition determines whether the Integration
Server needs to receive all, any, or only one of the documents to execute the trigger
service. The following table describes the join types that you can specify for a condition.
The Subscribe Path for Documents that Satisfy an All (AND) Join
Condition
When the Integration Server receives a document that satisfies an All (AND) join condition,
it stores the document and then waits for the remaining documents specified in the join
condition. The Integration Server invokes the trigger service if each of the following
occurs:
- The trigger receives an instance of each document specified in the join condition.
- The documents have the same activation ID.
- The documents arrive within the specified join time-out period.
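The All (AND) join behavior described above can be sketched as a small model. This is illustrative only, not the Integration Server's actual implementation; the class and method names are invented. The trigger service runs only when an instance of every document type arrives with the same activation ID inside the join time-out window; an expired partial join is discarded and the join starts over.

```python
import time

class AndJoin:
    """Illustrative model of an All (AND) join condition: fires the trigger
    service once an instance of every document type has arrived with the
    same activation ID within the join time-out period."""

    def __init__(self, doc_types, timeout_seconds, trigger_service):
        self.doc_types = set(doc_types)
        self.timeout = timeout_seconds
        self.trigger_service = trigger_service
        self.pending = {}  # activation ID -> (start time, {doc type: document})

    def receive(self, doc_type, activation_id, document, now=None):
        now = time.monotonic() if now is None else now
        start, docs = self.pending.get(activation_id, (now, {}))
        if now - start > self.timeout:
            start, docs = now, {}  # earlier partial join expired; start over
        docs[doc_type] = document
        if self.doc_types <= docs.keys():
            self.pending.pop(activation_id, None)
            self.trigger_service(docs)  # join complete: run the trigger service
            return True
        self.pending[activation_id] = (start, docs)
        return False
```

In this model, a `documentB` that arrives after the time-out simply restarts the join, mirroring the behavior described in the notes below the diagram.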
The following diagram illustrates how the Integration Server receives and processes
documents for All (AND) join conditions. In the following example, trigger X contains an All
(AND) join condition that specifies that documentA and documentB must be received for the
trigger service to execute.
Subscribe path for documents that satisfy an All (AND) join condition
Step Description
1 The dispatcher on the Integration Server uses a server thread to request
documents from a trigger’s client queue on the Broker.
2 The thread retrieves a batch of documents for the trigger, including documentA
and documentB. Both documentA and documentB have the same activation ID.
3 The dispatcher places documentA and documentB in the trigger’s queue in the
trigger document store. The dispatcher then releases the server thread used to
retrieve the documents.
4 The dispatcher obtains a thread from the server thread pool, pulls documentA
from the trigger queue, and evaluates the document against the conditions in
the trigger. The Integration Server determines that documentA partially satisfies
an All (AND) join condition. The Integration Server moves documentA from the
trigger queue to the join manager.
The Integration Server starts the join time‐out period.
Note: If exactly‐once processing is configured for the trigger, the Integration
Server first determines whether the document is a copy of one already
processed by the trigger. The Integration Server continues processing the
document only if the document is new.
5 The join manager saves documentA to the ISInternal database. The Integration
Server assigns documentA a status of “pending.” The Integration Server returns
an acknowledgement for the document to the Broker and returns the server
thread to the server thread pool.
6 The dispatcher obtains a thread from the server thread pool, pulls documentB
from the trigger queue, and evaluates the document against the conditions in
the trigger. The Integration Server determines that documentB partially satisfies
an All (AND) join condition. The Integration Server sends documentB from the
trigger queue to the join manager.
7 The join manager determines that documentB has the same activation ID as
documentA. Because the join time‐out period has not elapsed, the All (AND) join
condition is completed. The join manager delivers a join document containing
documentA and documentB to the trigger service specified in the condition.
8 The Integration Server executes the trigger service.
9 After the trigger service executes to completion (success or error), one of the
following occurs:
- If the service executes successfully and documentB is guaranteed, the
Integration Server acknowledges receipt of documentB to the Broker. The
Integration Server then removes the copy of documentA from the database
and removes the copy of documentB from the trigger queue. The Integration
Server returns the server thread to the thread pool.
- If a service exception occurs, the service ends in error and the Integration
Server rejects the join document. If documentB is guaranteed, the Integration
Server acknowledges receipt of documentB to the Broker. The Integration
Server then removes the copy of documentA from the database and removes
the copy of documentB from the trigger queue. The Integration Server
returns the server thread to the thread pool and sends an error notification
document to the publisher.
- If the trigger service catches a transient error, wraps it, and re-throws it as
an ISRuntimeException, then the Integration Server waits for the length of
the retry interval and re-executes the service using the original document as
input. If the Integration Server reaches the maximum number of retries and
the trigger service still fails because of a transient error, the Integration
Server treats the last failure as a service error. For more information about
retrying a trigger service, see “Configuring Transient Error Handling” on
page 134.
Note: A transient error is an error that arises from a condition that might correct
itself later, such as a network issue or an inability to connect to a database.
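The retry behavior for transient errors can be sketched as a small loop. This is an illustrative model, not the Integration Server's implementation; `TransientError` stands in for ISRuntimeException, and the function name is invented. When the maximum number of retries is exhausted, the last failure propagates, which corresponds to the Integration Server treating it as a service error.

```python
import time

class TransientError(Exception):
    """Stands in for ISRuntimeException: an error from a condition that
    might correct itself later, such as a dropped database connection."""

def run_with_retries(service, doc, max_retries, retry_interval_seconds,
                     sleep=time.sleep):
    attempts = 0
    while True:
        try:
            return service(doc)
        except TransientError:
            if attempts >= max_retries:
                raise  # retries exhausted: treat the last failure as a service error
            attempts += 1
            sleep(retry_interval_seconds)  # wait out the retry interval
```

The `sleep` parameter is injectable so the retry interval can be skipped in tests.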
Notes:
- If the join time-out period elapses before the other documents specified in the join
condition (in this case, documentB) arrive, the database drops documentA.
- If documentB had a different activation ID, the join manager would move documentB to
the database, where it would wait for a documentA with a matching activation ID.
- If documentB arrived after the join time-out period started by the receipt of documentA
had elapsed, documentB would not complete the join condition. The database would
have already discarded documentA when the join time-out period elapsed. The join
manager would send documentB to the database and wait for another documentA with
the same activation ID. The Integration Server would restart the join time-out period.
- The Integration Server returns acknowledgements for guaranteed documents only.
- If a transient error occurs during document retrieval or storage, the audit subsystem
sends the document to the logging database and assigns it a status of FAILED. You
can use webMethods Monitor to find and resubmit documents with a FAILED status.
For more information about using webMethods Monitor, see the webMethods
Monitor documentation.
- If a trigger service generates audit data on error and includes a copy of the input
pipeline in the audit log, you can use webMethods Monitor to re-invoke the trigger
service at a later time. For more information about configuring services to generate
audit data, see the webMethods Developer User’s Guide.
- You can configure a trigger to suspend and retry at a later time if retry failure occurs.
Retry failure occurs when the Integration Server makes the maximum number of retry
attempts and the trigger service still fails because of an ISRuntimeException. For
more information about handling retry failure, see “Handling Retry Failure” on
page 136.
The Subscribe Path for Documents that Satisfy an Only one (XOR)
Join Condition
When the Integration Server receives a document that satisfies an Only one (XOR)
condition, it executes the trigger service specified in the join condition. For the duration
of the join time‐out period, the Integration Server discards documents if:
- The documents are of the type specified in the join condition, and
- The documents have the same activation ID as the first document that satisfied the
join condition.
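The Only one (XOR) behavior can likewise be sketched as a small model (illustrative only; class and method names are invented). The first document for an activation ID executes the trigger service and starts the time-out period; later documents with the same activation ID are discarded until the time-out elapses, after which a new join begins.

```python
import time

class XorJoin:
    """Illustrative model of an Only one (XOR) join condition: the first
    document for an activation ID runs the trigger service; documents with
    the same activation ID are discarded for the rest of the time-out."""

    def __init__(self, timeout_seconds, trigger_service):
        self.timeout = timeout_seconds
        self.trigger_service = trigger_service
        self.completed = {}  # activation ID -> time the join completed

    def receive(self, activation_id, document, now=None):
        now = time.monotonic() if now is None else now
        completed_at = self.completed.get(activation_id)
        if completed_at is not None and now - completed_at <= self.timeout:
            return "discarded"  # duplicate within the time-out window
        self.completed[activation_id] = now  # (re)start the time-out period
        self.trigger_service(document)
        return "executed"
```

As the notes below describe, a document arriving after the time-out has elapsed executes the trigger service again and starts a new time-out period.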
The following diagram illustrates how the Integration Server receives and processes
documents for Only one (XOR) join conditions. In the following example, trigger X contains
an Only one (XOR) join condition that specifies that either documentA or documentB must be
received for the trigger service to execute. The Integration Server uses whichever
document it receives first to execute the service. When the other document specified in
the join condition arrives, the Integration Server discards it.
Subscribe path for documents that satisfy an Only one (XOR) condition
Step Description
1 The dispatcher on the Integration Server uses a server thread to request
documents from the trigger’s client queue on the Broker.
2 The thread retrieves a batch of documents for the trigger, including documentA
and documentB. Both documentA and documentB have the same activation ID.
3 The dispatcher places documentA and documentB in the trigger’s queue in the
trigger document store. The dispatcher then releases the server thread used to
retrieve the documents.
4 The dispatcher obtains a thread from the server thread pool, pulls documentA
from the trigger queue, and evaluates the document against the conditions in
the trigger. The Integration Server determines that documentA satisfies an Only
one (XOR) join condition. The Integration Server moves documentA from the
trigger queue to the join manager.
The Integration Server starts the join time‐out period.
Note: If exactly‐once processing is configured for the trigger, the Integration
Server first determines whether the document is a copy of one already
processed by the trigger. The Integration Server continues processing the
document only if the document is new.
5 The join manager saves the state of the join for this activation in the ISInternal
database. The state information includes a status of “complete”.
6 The Integration Server completes the processing of documentA by executing the
trigger service specified in the Only one (XOR) condition.
7 After the trigger service executes to completion (success or error), one of the
following occurs:
- If the service executes successfully and documentA is guaranteed, the
Integration Server returns an acknowledgement to the Broker. The
Integration Server removes the copy of the document from the trigger
queue and returns the server thread to the thread pool.
- If a service exception occurs, the service ends in error and the Integration
Server rejects the document. If documentA is guaranteed, the Integration
Server returns an acknowledgement to the Broker. The Integration Server
removes the copy of the document from the trigger queue, returns the
server thread to the thread pool, and sends the publisher an error document
to indicate that an error has occurred.
- If the trigger service catches a transient error, wraps it, and re-throws it as
an ISRuntimeException, the Integration Server waits for the length of the
retry interval and re-executes the service. If the Integration Server reaches
the maximum number of retries and the trigger service still fails because of
a transient error, the Integration Server treats the last failure as a service
error. For more information about retrying a trigger service, see
“Configuring Transient Error Handling” on page 134.
Note: A transient error is an error that arises from a condition that might correct
itself later, such as a network issue or an inability to connect to a database.
8 The dispatcher obtains a thread from the server thread pool, pulls documentB
from the trigger queue, and evaluates the document against the conditions in
the trigger. The Integration Server determines that documentB satisfies the Only
one (XOR) join condition. The Integration Server sends documentB from the trigger
queue to the join manager.
9 The join manager determines that documentB has the same activation ID as
documentA. Because the join time‐out period has not elapsed, the Integration
Server discards documentB. The Integration Server returns an acknowledgement
for documentB to the Broker.
Notes:
- If documentB had a different activation ID, the join manager would move documentB to
the database and execute the trigger service specified in the Only one (XOR) join
condition.
- If documentB arrived after the join time-out period started by the receipt of documentA
had elapsed, the Integration Server would invoke the trigger service specified in the
Only one (XOR) join condition and start a new time-out period.
- The Integration Server returns acknowledgements for guaranteed documents only.
- If a transient error occurs during document retrieval or storage, the audit subsystem
sends the document to the logging database and assigns it a status of FAILED. You
can use webMethods Monitor to find and resubmit documents with a FAILED status.
For more information about using webMethods Monitor, see the webMethods
Monitor documentation.
- If a trigger service generates audit data on error and includes a copy of the input
pipeline in the audit log, you can use webMethods Monitor to re-invoke the trigger
service at a later time. For more information about configuring services to generate
audit data, see the webMethods Developer User’s Guide.
- You can configure a trigger to suspend and retry at a later time if retry failure occurs.
Retry failure occurs when the Integration Server makes the maximum number of retry
attempts and the trigger service still fails because of an ISRuntimeException. For
more information about handling retry failure, see “Handling Retry Failure” on
page 136.
The following diagram illustrates the basics of performing data synchronization with
webMethods software.
Step Description
1 The source resource makes a data change.
2 The Integration Server receives notification of the change on the source. For
example, in the above illustration, an adapter checks for changes on the source.
When the adapter recognizes a change on the source, it sends an adapter
notification that contains information about the change that was made. The
adapter might either publish the adapter notification or directly invoke a
service passing it the adapter notification.
For more information about adapter notifications, see the guide for the specific
adapter that you are using with a source resource.
3 A service that you create receives the notification of change on the source. For
example, it receives the adapter notification. This service maps data from the
change notification document to a canonical document. A canonical document is
a common business document that contains the information that all target
resources will require to incorporate data changes. The canonical document has
a neutral structure among the resources. For more information, see “Canonical
Documents” on page 181.
After forming the canonical document, the service publishes the canonical
document. The targets subscribe to the canonical document.
4 On a target, the trigger that subscribes to the canonical document invokes a
service that you create. This service maps data from the canonical document
into a document that has a structure that is native to the target resource. The
target resource uses the information in this document to make the equivalent
change.
5 The target resource makes a data change that is equivalent to the change that
the source initiated. If the target resource is using an adapter, an adapter service
can be used to make the data change.
For more information about adapter services, see the guide for the specific
adapter that you are using with a target resource.
Structure of customer data in a CRM system:
- Customer ID
- Customer Name
  - First
  - Last
- Customer Address
  - Line1
  - Line2
  - City
  - State
  - ZipCode
  - Country
- Customer Payment Information
- Customer Account

Structure of customer data in a Billing system:
- Account ID
- Account Type
- Billing Account Owner
  - Surname
  - First
- Billing Address
  - Number
  - Street
  - AptNumber
  - CityOrTown
  - State
  - Code
  - Country
- Billing Preferences
Data in an application contains a key value that uniquely identifies an object within the
application. In the example above, the key value that uniquely identifies a customer
within the CRM system is the Customer ID; similarly, the key value in the Billing system
is the Account ID. The key value in a specific application is referred to as that
application’s native ID. In other words, the native ID for the CRM system is the Customer
ID, and the native ID for the Billing system is the Account ID.
Canonical Documents
Source resources send documents to notify other resources of data changes. You set up
processing to map the data from these notification documents into a canonical document.
A canonical document is a document with a neutral structure, and it encompasses all
information that resources will require to incorporate data changes. Use of canonical
documents:
Simplifies data synchronization between multiple resources by eliminating the need for
every resource to understand the document structure of every other resource.
Resources need only include logic for mapping to and from the canonical document.
Without a canonical document, each resource needs to understand the document
structures of the other resources. If a change is made to the document structure of one
of the resources, logic for data synchronization would need to change in all resources.
For example, consider keeping data synchronized between three resources: a CRM
system, Billing system, and Order Management system. Because the systems map
directly to and from each other’s native structures, a change in one resource’s structure
affects all resources. If the address structure was changed for the CRM system, you
would need to update data synchronization logic in all resources.
When you use a canonical document, you limit the number of data synchronization
logic changes that you need to make. Using the same example where the address
structure changes for the CRM system, you would only need to update data
synchronization logic in the CRM system.
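The mapping logic that a canonical document isolates can be sketched as a pair of functions. This is an illustrative model; the field names for the CRM record, the Billing record, and the canonical document are invented for the example. If the CRM structure changes, only `crm_to_canonical` changes; the Billing side is untouched.

```python
def crm_to_canonical(crm_doc, canonical_id):
    """Map a (hypothetical) CRM-native customer record into a neutral
    canonical structure that every target resource knows how to read."""
    return {
        "canonicalId": canonical_id,
        "name": {"first": crm_doc["firstName"], "last": crm_doc["lastName"]},
        "address": {"city": crm_doc["city"], "country": crm_doc["country"]},
    }

def canonical_to_billing(canonical_doc, account_id):
    """Map the canonical document into a (hypothetical) Billing-native
    record; this function never needs to know the CRM structure."""
    return {
        "accountId": account_id,
        "ownerSurname": canonical_doc["name"]["last"],
        "cityOrTown": canonical_doc["address"]["city"],
        "country": canonical_doc["address"]["country"],
    }
```

Each resource supplies only its own two mapping functions, to and from the canonical structure.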
Note: The field names listed in the table below are not the actual column names used
in the cross‐reference database component. They are the input variable names used
by the key cross‐referencing built‐in services that correspond to the columns of the
cross‐reference database component.
The cross‐reference table includes the fields described in the following table:
Fields Description
appId A string that contains the identification of a resource, for example,
“CRM System”. You assign this value.
objectId A string that contains the identification of the object that you
want to keep synchronized, for example, “Account”. You assign this
value.
nativeId The native ID of the object for the specific resource specified by
appId. You obtain this value from the resource.
canonicalKey The canonical ID that is common to all resources for identifying the
specific object. You can assign this value, or you can allow a built‐in
service to generate the value for you.
For example, to synchronize data between a CRM system, Billing system, and Order
Management system, you might have the following rows in the cross‐reference table:
Native ID is DAN0517.
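The cross-reference table and the key cross-referencing services can be modeled in a few lines. This is an illustrative in-memory sketch, not the actual database-backed implementation; the method names mirror the built-in services (createXReference, getCanonicalKey, getNativeId), and the generated-key format is invented. As in the built-in services, a failed lookup returns an empty string.

```python
import itertools

class CrossReferenceTable:
    """Illustrative in-memory model of the cross-reference table."""

    def __init__(self):
        self.rows = []  # each row: (appId, objectId, nativeId, canonicalKey)
        self._seq = itertools.count(1)

    def create_xreference(self, app_id, object_id, native_id, canonical_key=None):
        if canonical_key is None:
            canonical_key = f"CAN-{next(self._seq)}"  # generate a canonical ID
        self.rows.append((app_id, object_id, native_id, canonical_key))
        return canonical_key

    def get_canonical_key(self, app_id, object_id, native_id):
        for a, o, n, c in self.rows:
            if (a, o, n) == (app_id, object_id, native_id):
                return c
        return ""  # no row found: empty string, as the built-in service returns

    def get_native_id(self, app_id, object_id, canonical_key):
        for a, o, n, c in self.rows:
            if (a, o, c) == (app_id, object_id, canonical_key):
                return n
        return ""
```

The source creates the first row (assigning the canonical ID); each target then inserts its own row linking that canonical ID to its native ID.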
Step Description
1 As described in “Data Synchronization with webMethods” on page 178, when
a source makes a data change, the source sends a document to notify other
resources of the change. A service that you create receives this document. Your
service builds the canonical document that describes the change.
2 When forming the canonical document, to determine the value to use for the
canonical ID, your service invokes a built‐in service. This built‐in service
inspects the cross‐reference table to locate the row that contains the native ID
from the source document. The built‐in service then returns the corresponding
canonical ID from the cross‐reference table. For more information about the
built‐in services you use, see “Setting Up Key Cross‐Referencing in the Source
Integration Server” on page 194.
3 A service that you create on the target receives the canonical document. When a
target receives the canonical document, it needs to determine the native ID of
the object that the change affects. To determine the native ID, your service on
the target invokes a built‐in service. This built‐in service inspects the cross‐
reference table to locate the row that contains the canonical ID and the
appropriate resource identified by the appId and object identified by objectId.
The built‐in service then returns the corresponding native ID from the
cross‐reference table. For more information about the built‐in services you use,
see “Setting Up Key Cross‐Referencing in the Target Integration Server” on
page 198.
The source resource receives the canonical document that its own Integration Server published
Step Description
1 A data change occurs on a source, and the source resource sends a notification
document.
2 The source Integration Server builds and publishes a canonical document.
3 Because the source is also a target, it (as well as all other targets) subscribes to
the canonical document via a trigger. As a result, the source receives the
canonical document it just published. Logic on the source Integration Server
uses the canonical document to build an update document to send to the
source. The source receives the update document and makes the data change
again. Because the source made this data change, it once again acts as a source
to notify targets of the data change. The process starts again with step 1.
In addition to the source immediately receiving the canonical document that it formed, it can also
receive the canonical document many more times because other targets build and publish
the canonical document after making the data change that the source initiated. See below
for an illustration of this circular updating.
Circular updating between a source resource and a target resource
Step Description
1 A data change occurs on a source, and the source sends a notification
document.
2 The source Integration Server builds and publishes a canonical document.
3 A target receives the canonical document and makes the equivalent change.
4 Because the target made a data change, it sends a notification document for the
data change.
5 The target Integration Server builds and publishes a canonical document.
6 The source receives the notification of the change that was made by the target
and makes the change, again. This results in the process starting again with
step 1.
To avoid the circular updating, the Integration Server provides you with the following
tools to perform echo suppression:
- The isLatchClosed field in the cross-reference table, which you use to keep track of
whether a resource has already made a data change.
- Built-in services that you use to manipulate the isLatchClosed field in the
cross-reference table to:
  - Determine the value of the isLatchClosed field, and
  - Set the value of the isLatchClosed field.
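The latch logic can be sketched as a small model. This is illustrative only; `LatchTable` stands in for the isLatchClosed column of the cross-reference table, and `notify_targets` stands in for the notification service's use of pub.synchronization.latch:isLatchClosed and pub.synchronization.latch:openLatch. A closed latch means the change is an echo: the service re-opens the latch and does not publish.

```python
class LatchTable:
    """Illustrative model of the isLatchClosed column used for echo
    suppression. A closed latch means the change was already applied."""

    def __init__(self):
        self.closed = {}  # (appId, objectId, canonicalKey) -> bool

    def is_latch_closed(self, app_id, object_id, canonical_key):
        return self.closed.get((app_id, object_id, canonical_key), False)

    def close_latch(self, app_id, object_id, canonical_key):
        self.closed[(app_id, object_id, canonical_key)] = True

    def open_latch(self, app_id, object_id, canonical_key):
        self.closed[(app_id, object_id, canonical_key)] = False

def notify_targets(latch, app_id, object_id, canonical_key, publish):
    """Notification-service logic: if the latch is closed, the change is an
    echo, so re-open the latch and skip publishing; otherwise publish."""
    if latch.is_latch_closed(app_id, object_id, canonical_key):
        latch.open_latch(app_id, object_id, canonical_key)
        return False  # echo suppressed: no canonical document published
    publish(canonical_key)
    return True
```

The update service on the target would call `close_latch` after applying the change, which is what makes the echoed notification a no-op.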
The following diagram illustrates how to use the isLatchClosed field for echo
suppression during data synchronization.
Using the latch for echo suppression: when the latch is closed, the update service does not update and the notification service does not send a notification
Step Description
1 A data change occurs on a source, and the source sends a notification
document.
2 A notification service that you create to notify targets of a data change invokes
the pub.synchronization.latch:isLatchClosed built‐in service to determine whether the
latch for the object is open or closed. Initially for the source, the latch for the
object that was changed is open. This indicates that updates can be made to this
object.
The object is identified in the cross‐reference table by the following cross‐
reference table fields and the latch is considered open because isLatchClosed is
false:
appId objectId canonicalKey isLatchClosed
Because the isLatchClosed column is initially set to false, the latch is open
and updates can be made to this object. To make the update, the update
service maps information from the canonical document to a native document
for the target resource and sends the document to the target. The target
resource uses this document to make the equivalent change.
The update service uses the pub.synchronization.latch:closeLatch built‐in service to
close the latch. This indicates that updates cannot be made to this object and
prevents a circular update. After the latch is closed, the cross‐reference table
fields are as follows:
appId objectId canonicalKey isLatchClosed
6 Because the target is also a source, when it receives notification of a data
change, it attempts to notify other targets of the data change. A notification
service that you create to notify targets of a data change invokes the
pub.synchronization.latch:isLatchClosed built‐in service to determine whether the
latch for the object is open or closed.
Finding that the latch is closed, which indicates that the change has already
been made, the notification service does not build the canonical document. The
notification service simply invokes the pub.synchronization.latch:openLatch built‐in
service to re‐open the latch. Because the latch is now open, future updates can
be made to the object. After the latch is open, the cross‐reference table fields are
as follows:
appId objectId canonicalKey isLatchClosed
Data synchronization using a canonical document: notification from the source resource to the source Integration Server, publication through the Broker, and an update from the target Integration Server to the target resource
To define the structure for canonical documents, you include a superset of all the fields
that are required to keep data synchronized between the resources. Additionally, you
must include a field for the canonical ID.
The following table lists options for defining the structure of a canonical document:
After determining the fields that you need in the canonical document, use
webMethods Developer to define a publishable document type for the canonical
document. For more information about how to create publishable document types, see
Chapter 5, “Working with Publishable Document Types”.
Note: For an overview of key cross‐referencing, including the problem key
cross‐referencing solves and how key cross‐referencing works, see “Key Cross‐
Referencing and the Cross‐Reference Table” on page 182.
The following diagram highlights the part of data synchronization that this section
addresses.
The data synchronization flow, with the source Integration Server’s key cross-referencing highlighted
Service Description
createXReference Used by the source to assign a canonical ID and add a row to the
cross‐reference table to create a cross‐reference between the
canonical ID and the source’s native ID. To assign the value for the
canonical ID, you can either specify the value as input to the
createXReference service or have the createXReference service generate
the value.
insertXReference Used by a target to add a row to the cross‐reference table to add the
target’s cross‐reference between an existing canonical ID (which the
source added) and the target’s native ID.
getCanonicalKey Retrieves the value of the canonical ID from the cross‐reference table
that corresponds to the native ID that you specify as input to the
getCanonicalKey service.
getNativeId Retrieves the value of the native ID from the cross‐reference table
that corresponds to the canonical ID that you specify as input to the
getNativeId service.
Step Description
1 Obtain the canonical ID for the object if there is an entry for the object in the
cross-reference table.
Invoke the pub.synchronization.xref:getCanonicalKey service to locate a row in the
cross-reference table for the object. Pass the getCanonicalKey service the
following inputs that identify the object:
In this input variable... Specify...
appId The identification of the application (e.g., CRM system).
objectId The string that you assigned to identify the object (e.g.,
Account). This string is referred to as the object ID.
nativeId The native ID from the notification document (e.g., adapter
notification), which was received as input to your service.
If the getCanonicalKey service finds a row in the cross‐reference table that
matches the input information, it returns the value of the canonical ID in the
canonicalKey output variable. If no row is found, the value of the
canonicalKey output variable is blank (i.e., an empty string). For more
information about the getCanonicalKey service, see the webMethods Integration
Server Built‐In Services Reference.
2 Split logic based on whether the canonical ID already exists.
Use a BRANCH flow step to split the logic. Set the Switch property of the
BRANCH flow step to canonicalKey.
3 Build a sequence of steps to execute when the canonical ID does not already exist.
Under the BRANCH flow step is a single sequence of steps that should be
executed only if a canonical ID was not found. Note that the Label property
for the SEQUENCE flow step is set to blank. At run time, the server matches
the value of the canonicalKey variable to the Label field to determine whether
to execute the sequence. Because the canonicalKey variable is set to blank
(i.e., an empty string), the label field must also be blank.
Important! Do not use $null for the Label property. An empty string is not
considered a null.
4 Assign a canonical ID and add a row to the cross-reference table.
Invoke the pub.synchronization.xref:createXReference service, passing it the
following inputs:
In this input variable... Specify...
appId The identification of the application (e.g., CRM system).
objectId The object type (e.g., Account).
nativeId The native ID from the notification document (e.g., adapter
notification), which was received as input to your service.
canonicalKey (optional) The value you want to assign the canonical ID.
If you do not specify a value for the canonicalKey input variable, the
createXReference service generates a canonical ID for you. For more
information about the createXReference service, see the webMethods
Integration Server Built‐In Services Reference.
5 Build the canonical document.
Map fields from the notification document (e.g., adapter notification) to the
fields of the canonical document. Make sure you map the canonical ID
generated in the last step to the canonical ID field of the canonical
document.
The notification document has the structure that you previously defined
with a publishable document type. See “Defining How a Source
Resource Sends Notification of a Data Change” on page 191. Similarly, the
canonical document has the structure that you previously defined with a
publishable document type. See “Defining the Structure of the Canonical
Document” on page 193.
Note: Although this sample logic shows only a single MAP flow step, you
might need to use additional flow steps or possibly create a separate service
to build the canonical document.
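The flow steps above can be sketched as ordinary code. This is an illustrative model; the function name is invented, and the injected `xref_lookup` and `xref_create` callables stand in for the pub.synchronization.xref:getCanonicalKey and pub.synchronization.xref:createXReference services. The empty-string check corresponds to the BRANCH flow step with a blank Label.

```python
def notification_service(xref_lookup, xref_create, build_canonical, publish,
                         app_id, object_id, native_id, notification_doc):
    """Sketch of the source notification service: look up the canonical ID,
    create the cross-reference if none exists, then build and publish the
    canonical document."""
    canonical_key = xref_lookup(app_id, object_id, native_id)      # step 1
    if canonical_key == "":                                        # steps 2-4
        canonical_key = xref_create(app_id, object_id, native_id)
    canonical = build_canonical(notification_doc, canonical_key)   # step 5
    publish(canonical)
    return canonical
```

A second change to the same object finds the existing row, so the same canonical ID is reused instead of created again.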
The data synchronization flow, with the target Integration Server’s key cross-referencing highlighted
To set up key cross‐referencing in the target Integration Server:
1 Create a trigger that subscribes to the canonical document that the source Integration Server
publishes. On the target Integration Servers, define a trigger that:
- Subscribes to the publishable document type that defines the canonical document.
- Defines the trigger service to be the service that builds a native document for the
target resource.
For more information, see Chapter 7, “Working with Triggers”.
2 Create an IS document type that defines the structure of the document that the target
Integration Server needs to send to the target resource to notify it of a data change.
For more information about how to create IS document types, see the webMethods
Developer User’s Guide.
3 Create the trigger service that uses the canonical document to build the target native document
and sends the native document to the target resource. The service receives the canonical
document, which contains the description of the data change to make. However,
typically the target resource will not understand the canonical document. Rather, the
target resource requires a document in its own native format.
The service can build the native document for the target resource by mapping
information from the canonical document to the target resource’s native document
format. Make sure you include the native ID in this document. To obtain the native
ID, invoke the pub.synchronization.xref:getNativeId built-in service. If the native ID is
cross-referenced with the canonical ID in the cross-reference table, this service returns
the native ID. If no cross-reference has been set up for the object, you will need to
determine the best way to obtain the native ID.
After forming the native document, the trigger service interacts with the target
resource to make the data change.
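The target-side trigger service just described can be sketched as follows. This is an illustrative model; the function name is invented, `get_native_id` stands in for pub.synchronization.xref:getNativeId, and raising an error when no cross-reference exists is only one of the possible choices the text leaves open.

```python
def update_service(get_native_id, map_to_native, send_to_target,
                   app_id, object_id, canonical_doc):
    """Sketch of the target trigger service: resolve the native ID from the
    canonical ID, map the canonical document into the target's native
    format, and send it to the target resource."""
    native_id = get_native_id(app_id, object_id, canonical_doc["canonicalId"])
    if native_id == "":
        # No cross-reference yet; how to obtain the native ID is up to you.
        raise LookupError("no cross-reference exists for this object")
    native_doc = map_to_native(canonical_doc, native_id)
    send_to_target(native_doc)
    return native_doc
```

The mapping function is the only part that knows the target's native structure, keeping the canonical document neutral.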
Note: For a description of the built‐in services that webMethods provide for key
cross‐referencing, see “Built‐In Services for Key Cross‐Referencing” on page 194.
The following shows sample logic for the update service. See the table after the diagram
for more information.
[Flow diagram: sample logic for the update service, steps 1 through 6]
Step Description
1 Obtain the native ID for the target object if there is an entry for the target object in the
cross-reference table.
Invoke the pub.synchronization.xref:getNativeId service to locate a row in the cross‐
reference table for the target object. If the row already exists, the row contains
the native ID for the target object. Pass the getNativeId service the following
inputs that identify the target object:
In this input variable...    Specify...
appId The identification of the application (e.g., Billing system).
objectId The object type (e.g., Account).
canonicalKey The canonical ID from the canonical document, which was
received as input to your service.
If the getNativeId service finds a row that matches the input information, it
returns the value of the native ID in the nativeId output variable. If no row is
found, the value of the nativeId output variable is blank (i.e., an empty string).
For more information about the getNativeId service, see the webMethods
Integration Server Built‐In Services Reference.
2 Split logic based on whether a native ID was obtained for the target resource.
Use a BRANCH flow step to split the logic. Set the Switch property of the
BRANCH flow step to nativeId, to indicate that you want to split logic based
on the value of the nativeId pipeline variable.
3 Build a sequence of steps to execute when the native ID is not obtained.
Under the BRANCH flow step is a single sequence of steps to perform only if a
native ID was not found. Note that the Label property for the SEQUENCE flow
step is set to blank. At run time, the server matches the value of the nativeId
variable to the Label property to determine whether to execute the sequence.
Because the nativeId variable is set to blank (i.e., an empty string), the Label
property must also be blank.
Important! Do not use $null for the Label property. An empty string is not
considered null.
4 Create a cross-reference entry for the target object.
Invoke the pub.synchronization.xref:insertXReference service to add a row to the
cross‐reference table. Pass the insertXReference service the following inputs that
identify the target object:
In this input variable...    Specify...
appId The identification of the application (e.g., Billing system).
objectId The object type (e.g., Account).
nativeId The native ID for the object in the target resource. You must
determine what the native ID should be.
canonicalKey The canonical ID from the canonical document, which was
received as input to your service.
For more information about the insertXReference service, see the webMethods
Integration Server Built‐In Services Reference.
5 Build the native document for the target resource.
Use a MAP flow step to map information from the canonical document to the
target resource's native document format. Make sure you include the native ID
in the native document.
Note: Although this sample logic shows only a single MAP flow step, you might
need to use additional flow steps or possibly create a separate service to build
the native document for the target resource.
6 Invoke a service to send the native document to the target resource, so the target
resource can make the equivalent change.
Create a service that sends the native document to the target.
If you use an adapter with your target resource, you can use an adapter service
to update the target resource. For more information about adapter services, see
the documentation for your adapter.
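The six steps above can be sketched end to end. This is a plain-Python simulation only: the dictionary stands in for the cross-reference table, the "CAN-" to "BIL-" ID scheme is an invented convention, and no webMethods services are called.

```python
# Hypothetical sketch of the update service logic (steps 1-6 above).

xref = {}  # (appId, objectId, canonicalKey) -> nativeId

def update_service(canonical: dict) -> dict:
    """Build the native document for a hypothetical Billing system."""
    key = ("Billing", "Account", canonical["canonicalKey"])

    # Step 1: look up the native ID (getNativeId returns "" when no row exists).
    native_id = xref.get(key, "")

    # Steps 2-3: branch on the blank native ID, as the BRANCH/SEQUENCE steps do.
    if native_id == "":
        # Determine the native ID (application-specific; invented here),
        # then record the cross-reference, as insertXReference would (step 4).
        native_id = canonical["canonicalKey"].replace("CAN-", "BIL-")
        xref[key] = native_id

    # Step 5: map canonical fields to the native format, including the native ID.
    native_doc = {"accountNo": native_id, "balance": canonical["balance"]}

    # Step 6: the caller would hand native_doc to an adapter service.
    return native_doc
```

On a second change to the same object, the cross-reference row already exists, so the service reuses the recorded native ID instead of creating a new one.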
Note: For an overview of echo suppression, including information about how echo
suppression solves the problem of circular updating, see “Echo Suppression for N‐
Way Synchronizations” on page 185.
Service Description
closeLatch Closes the latch for the specified canonical ID, application ID (appId),
and object type (objectId). To close the latch, the isLatchClosed field of
the cross‐reference table is set to true. A closed latch indicates that the
resource described in the cross‐reference row cannot be acted upon
until the latch is open using the openLatch service.
isLatchClosed Determines whether the latch is open or closed for the specified
canonical ID, application ID (appId), and object type (objectId). To
check the status of the latch, the service uses the isLatchClosed field
of the cross‐reference table. The output provides a status of true (the
latch is closed) or false (the latch is open).
openLatch Opens the latch for the specified canonical ID, application ID (appId),
and object type (objectId). To open the latch, the isLatchClosed field of
the cross‐reference table is set to false. An open latch indicates that the
resource described in the cross‐reference row can be acted upon.
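The three latch services can be modeled with a small sketch. This is an illustration of the semantics in the table above, not the webMethods API; treating a row with no recorded state as open is an assumption of this model.

```python
# Simplified model of the latch services. The latch state lives in the
# isLatchClosed field of the cross-reference row; these functions mirror
# closeLatch, openLatch, and isLatchClosed for a single
# (appId, objectId, canonicalKey) row.

latch_state = {}  # (appId, objectId, canonicalKey) -> isLatchClosed (bool)

def close_latch(app_id, object_id, canonical_key):
    # closeLatch: set isLatchClosed to true for the row.
    latch_state[(app_id, object_id, canonical_key)] = True

def open_latch(app_id, object_id, canonical_key):
    # openLatch: set isLatchClosed to false for the row.
    latch_state[(app_id, object_id, canonical_key)] = False

def is_latch_closed(app_id, object_id, canonical_key) -> bool:
    # isLatchClosed: report the field; an unrecorded row is treated
    # as open here (an assumption of this sketch).
    return latch_state.get((app_id, object_id, canonical_key), False)
```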
The following diagram highlights the part of data synchronization that this section
addresses.
[Diagram: Source Resource to Source Integration Server, through the Broker, to Target Integration Server and Target Resource. When the latch is open, the notification service sends the notification; when the latch is closed, it does not publish the notification.]
[Flow diagram: sample logic for the notification service, steps 1 through 6]
Step Description
1 Determine whether the latch is open or closed for the object being changed.
Invoke the pub.synchronization.latch:isLatchClosed service to locate a row in the
cross‐reference table for the object that has changed and for which you want to
send notification. Pass the isLatchClosed service the following inputs that identify
the object:
In this input variable...    Specify...
appId The identification of the application (e.g., CRM system).
objectId The object type (e.g., Account).
canonicalKey The canonical ID.
The isLatchClosed service uses the isLatchClosed field from the matching row to
determine whether the latch is open or closed. If the isLatchClosed field is ‘false’, the
latch is open, and the isLatchClosed service returns ‘false’ in the isLatchClosed
output variable. If the isLatchClosed field is ‘true’, the latch is closed, and the service
returns ‘true’. For more information about the isLatchClosed service, see the
webMethods Integration Server Built‐In Services Reference.
2 Split logic based on whether the latch is open or closed.
Use a BRANCH flow step to split the logic. Set the Switch property of the
BRANCH to isLatchClosed, to indicate that you want to split logic based on the
value of the isLatchClosed pipeline variable.
3 Build a sequence of steps to execute when the latch is open.
Because the Label property for the SEQUENCE flow step is set to false, this
sequence of operations is executed when the isLatchClosed variable is false,
meaning the latch is open. When the latch is open, the target resources have not
yet been notified of the data change. This sequence of steps builds and publishes
the canonical document.
4 Close the latch for the object.
When the latch is open, the first step is to close the latch. By closing the latch
before publishing the canonical document, you remove any chance that the
Integration Server will receive and act on the published canonical document.
To close the latch, invoke the pub.synchronization.latch:closeLatch service. Pass the
closeLatch service the same input variables that were passed to the
pub.synchronization.latch:isLatchClosed service in step 1 above. For more information
about the closeLatch service, see the webMethods Integration Server Built‐In Services
Reference.
Important! If multiple resources will make changes to the same object simultaneously
or near simultaneously, echo suppression cannot guarantee successful updating. If
you expect simultaneous or near simultaneous updates, you must take additional
steps:
1 When defining the structure of the canonical document, include a tracking field
that identifies the source of the change.
2 In the notification service, include a filter or a BRANCH step that tests the source
field to determine whether to send the notification.
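The notification-side latch check (steps 1 through 4) can be sketched as follows. This is a hedged simulation, not webMethods code: the `published` list stands in for publishing the canonical document to the Broker, and re-opening the latch inside the closed branch is an assumption about where the openLatch call belongs in the full pattern.

```python
# Hypothetical sketch of the notification service's latch logic:
# latch open  -> close the latch, then publish the canonical document;
# latch closed -> the change is an echo of an earlier update, so
#                 re-open the latch and publish nothing.

def notification_service(latch, key, canonical, published):
    if not latch.get(key, False):    # steps 1-2: isLatchClosed -> BRANCH
        latch[key] = True            # step 4: closeLatch before publishing
        published.append(canonical)  # publish the canonical document
    else:
        latch[key] = False           # echo: open the latch, suppress publish
```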
The following diagram highlights the part of data synchronization that this section
addresses.
[Diagram: Source Resource to Source Integration Server, through the Broker, to Target Integration Server and Target Resource. When the latch is open, the update service updates the resource; when the latch is closed, it does not update the resource.]
[Flow diagram: sample logic for the update service, steps 1 through 6]
Step Description
1 Determine whether the latch is open or closed for the changed object.
Invoke the pub.synchronization.latch:isLatchClosed service to locate a row in the
cross‐reference table for the changed object. Pass the isLatchClosed service the
following inputs that identify the object:
In this input variable...    Specify...
appId The identification of the application (e.g., Billing system).
objectId The object type (e.g., Account).
canonicalKey The canonical ID.
The isLatchClosed service uses the isLatchClosed field from the matching row to
determine whether the latch is open or closed. If the isLatchClosed field is
‘false’, the latch is open, and the isLatchClosed service returns ‘false’ in the
isLatchClosed output variable. If the isLatchClosed field is ‘true’, the latch is
closed, and the service returns ‘true’. For more information about the
isLatchClosed service, see the webMethods Integration Server Built‐In Services
Reference.
2 Split logic based on whether the latch is open or closed.
Use a BRANCH flow step to split the logic. Set the Switch property of the
BRANCH to isLatchClosed, to indicate that you want to split logic based on the
value of the isLatchClosed pipeline variable.
3 Build a sequence of steps to execute when the latch is open.
Because the Label property for the SEQUENCE flow step is set to false, this
sequence of operations is executed when the isLatchClosed variable is false,
meaning the latch is open. When the latch is open, the target resource has not yet
made the equivalent data change. This sequence of steps builds and sends a
native document that the target resource uses to make the equivalent change.
4 Close the latch.
When the latch is open, close the latch before sending the native document to the
target resource. In an n‐way synchronization, a target is also a source: when the
resource receives and makes the equivalent data change, it then sends
notification of that change. By closing the latch before sending the
native document to the target resource, you remove any chance that the
Integration Server will receive and act on a notification document being sent by
the resource.
To close the latch, invoke the pub.synchronization.latch:closeLatch service. Pass the
closeLatch service the same input variables that were passed to the
pub.synchronization.latch:isLatchClosed service in step 1 above. For more information
about the closeLatch service, see the webMethods Integration Server Built‐In Services
Reference.
Important! If multiple resources will make changes to the same object simultaneously
or near simultaneously, echo suppression cannot guarantee successful updating. If
you expect simultaneous or near simultaneous updates, you must take additional
steps:
1 When defining the structure of the canonical document, include a tracking field
that identifies the source of the change.
2 In the update service, include a filter or a BRANCH step that tests the source field
to determine whether to update the object.
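The update-side half of echo suppression can be checked end to end with a small simulation. Everything here is an assumption-laden sketch, not webMethods code: `updates` stands in for sending the native document through an adapter, and `echo_notification` models what the notification side does when the resource reports back the change it just applied.

```python
# Illustrative echo-suppression cycle in an n-way setup: the update
# service closes the latch before updating the resource, so the
# notification the resource then emits is treated as an echo.

def update_side(latch, key, updates, native_doc):
    if not latch.get(key, False):  # latch open: change not yet applied
        latch[key] = True          # close latch before sending native doc
        updates.append(native_doc) # send native document to the resource
        return True
    return False                   # latch closed: suppress the update

def echo_notification(latch, key):
    # The resource reports the change it just applied. With the latch
    # closed, the notification side suppresses it and re-opens the latch;
    # with the latch open, a genuine change would be published.
    if latch.get(key, False):
        latch[key] = False
        return False  # suppressed echo
    return True       # genuine change
```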
? ' - # = ) ( . / \ ;
& @ ^ ! | } { ` > <
% * : $ ] [ " + , ~
Characters outside of the basic ASCII character set, such as multi‐byte characters
If you specify a name that violates these restrictions, Developer displays an error
message. When this happens, use a different name, or add a letter or number to the
name to make it valid.
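A checker for the restrictions above might look like the following. This is a hypothetical helper, not a Developer or Integration Server API; it enforces only the two documented rules (no reserved punctuation, no characters outside basic ASCII).

```python
# Hypothetical validator for the documented naming restrictions:
# reject reserved punctuation and any non-ASCII (e.g., multi-byte) character.

RESERVED = set("?'-#=)(./\\;&@^!|}{`><%*:$][\"+,~")

def is_valid_name(name: str) -> bool:
    return all(ch not in RESERVED and ord(ch) < 128 for ch in name)
```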
If the Integration Server determines that the syntax is valid for the Broker, it saves the
filter with the subscription on the Broker. If the Integration Server determines that the
filter syntax is not valid on the Broker or if attempting to save the filter on the Broker
would cause an error, the Integration Server saves the subscription on the Broker without
the filter. The filter will be saved only on the Integration Server.
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Service Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Overview
A resource monitoring service is a service that you create to check the availability of
resources used by a trigger. Integration Server schedules a system task to execute a
resource monitoring service after it suspends a trigger. Specifically, Integration Server
suspends a trigger and invokes the associated resource monitoring service when one of
the following occurs:
During exactly‐once processing, the document resolver service ends because of an
ISRuntimeException and the
watt.server.trigger.preprocess.suspendAndRetryOnError property is set to true (the
default).
A retry failure occurs and the configured retry behavior is to suspend and retry later.
When the resource monitoring service indicates that the resources used by the trigger are
available, Integration Server resumes the trigger.
Service Requirements
A resource monitoring service must do the following:
Use the pub.trigger:resourceMonitoringSpec as the service signature.
Check the availability of the resources used by the document resolver service and all
the trigger services associated with a trigger. Keep in mind that each condition in a
trigger can be associated with a different trigger service. However, you can only
specify one resource monitoring service per trigger.
Return a value of “true” or “false” for the isAvailable output parameter. The author of
the resource monitoring service determines what criteria make a resource available.
Catch and handle any exceptions that might occur. If the resource monitoring service
ends because of an exception, Integration Server logs the exception and continues as
if the resource monitoring service returned a value of “false” for the isAvailable output
parameter.
The same resource monitoring service can be used for multiple triggers. When the service
indicates that resources are available, Integration Server resumes all the triggers that use
the resource monitoring service.
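The contract described above can be sketched as follows. The shape is an assumption, not the pub.trigger:resourceMonitoringSpec signature itself: each resource check is modeled as a zero-argument callable, and booleans stand in for the "true"/"false" values of the isAvailable output parameter.

```python
# Sketch of a resource monitoring service: check every resource the
# trigger's services depend on, return isAvailable, and convert any
# unhandled exception into isAvailable=False, mirroring how Integration
# Server treats a monitoring service that ends with an error.

def resource_monitoring_service(checks) -> dict:
    """checks: iterable of zero-argument callables, one per resource,
    each returning True when its resource is reachable."""
    try:
        return {"isAvailable": all(check() for check in checks)}
    except Exception:
        # Documented behavior: an exception is logged and treated as
        # if the service had returned "false" for isAvailable.
        return {"isAvailable": False}
```

Because one monitoring service can serve several triggers, the checks should cover the resources of every trigger service (and document resolver service) that shares it.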
pub.synchronization.latch
  closeLatch service 188, 189
  isLatchClosed service 187, 188, 189
  openLatch service 188, 189
  target native document 205
elements, overwriting during document type synchronization 79, 82, 84
Enterprise Integrator, using to edit document types 71
envelope field
  _env 64
  pub.publish:envelope document type 64
  published documents 64
  restrictions on usage 64
  setting values 90
errors, suspending triggers for 52
errorsTo field 91
eventID field, use in duplicate detection 155
exactly-once processing
  Broker version importance 160
  configuring 160
  description 148
  disabling 161
  extenuating circumstances 158
  guaranteed storage 66
  guidelines 160
  overview of 149
  performance impact 159
  potential for duplicate document processing 158
  potential for treating new document as duplicate 159
  statistics, clearing 163
  statistics, viewing 162
F
FAILED document status 20, 23, 34, 170, 174
fatal error handling, configuring 133
field names, limitations in Broker document types 59
filters
  creating 116, 118
  naming restrictions 210
  performance impact 118
  saved on Integration Server 117
  saved on the Broker 117
  specifying for a document type 115
  where saved 117
G
getCanonicalKey service
  description 195
  example 196
getNativeId service
  description 195
  example 199
guaranteed document delivery, description 148
guaranteed processing, description 66, 148
guaranteed storage
  description 66
  document processing provided 66
  guaranteed processing 66
H
History time to live property 156
I
In Doubt documents
  description of status 149
  fate of 151
  statistics for 162
in doubt resolver. See exactly-once processing.
In Sync with Broker synchronization status 75
insertXReference service
  description 195
  example 200
Integration Server, description 13
interval, for clearing expired document history entries 156
isLatchClosed field
  description 186
  how used for echo suppression 186
  obtaining value from cross-reference table 204, 207
isLatchClosed service
  description 202
  example 204, 207
  when to use 187
ISRuntimeException 134
J
JMS trigger, definition of 9
join conditions
  activation ID 168
  cluster processing 175
  description 114, 166
  example service to build canonical document 203
  example service to build native target document 206
one-way synchronization
  description 178
  example service to build canonical document 195
  example service to build target native document 199
  opening latch 188, 205, 208
  preventing circular updates 185
  processing overview 179
  source, definition 178
  targets, definition 178
  tasks to implement 190
synchronization status
  after changing Brokers 75
  Created Locally 75, 77, 78
  for publishable document types 74
  In Sync with Broker 75, 78
  Removed from Broker 75, 78
  Updated Both Locally and on the Broker 75, 77
  Updated Locally 74, 75, 77
  Updated on Broker 74, 77
synchronizing publishable document types
  access permissions needed 79, 82
  actions 75
  Created Locally status 75
  importance of 76
  In Sync with Broker status 75
  overwriting elements 79, 82, 84
  result of overwriting 85
  result of skipping 85
  Pull from Broker action 76
  purpose of 74
  Push to Broker action 76
  Removed from Broker status 75
  result of 77
  Skip action 76
  synchronization status 74
  synchronizing a single document type 79
  synchronizing document types in a cluster 84
  synchronizing multiple document types 80
  Updated Both Locally and on the Broker status 75
  Updated Locally status 74, 75
  Updated on Broker status 74
  when to 74
synchronous request/reply, description 23, 94, 101
syntax
  for fields in Broker document types 210
T
tag field, in request/reply 24, 25, 97, 103
territories, switching 48
testing
  publishable document types 85
  triggers 144
Throw service exception option 136, 137
time drift
  description of 159
  impact on exactly-once processing 158
time to live property 67
  specifying for publishable document types 68
trackID field, use in duplicate detection 155
transient error handling, configuring 134
transient error, description 135
trigger document store
  description 49
  saved in memory 28
  saved on disk 32, 35
  storage type 33, 49
trigger queues
  capacity 125
  description 125
  handling documents when full 51
  refill level 125
trigger services
  auditing 112
  create canonical document 195
  description 111
  infinite retry loop 139
  infinite retry loop, escaping 141
  performance 141
  requirements 111
  retry count, retrieving 141
  retry requirements 135
  retrying 134
  trigger retries and service retries 140
  user account for invoking 49
  XSLT services 115
triggers
  acknowledgement queue size 127
  adding conditions 121
  capacity 125
  changing condition order 121
watt.server.dispatcher.join.reaperDelay 51
watt.server.idr.reaperInterval 51, 156
watt.server.publish.local.rejectOOS 51
watt.server.publish.useCSQ 51
watt.server.publish.usePipelineBrokerEvent 52
watt.server.publish.validateOnIS 69
watt.server.trigger.interruptRetryOnShutdown 52, 141
watt.server.trigger.keepAsBrokerEvent 52
watt.server.trigger.local.checkTTL 52
watt.server.trigger.managementUI.excludeList 52
watt.server.trigger.monitoringInterval 52
watt.server.trigger.preprocess.suspendAndRetryOnError 52, 155, 157
watt.server.trigger.removeSubscriptionOnReloadOrReinstall 53
watt.server.xref.type 53
webMethods Broker 13
webMethods Integration Server 13
X
XOR join. See Only one (XOR) join.
XSLT service, and triggers 115