Table of Contents
Complex Event Processing...................................................................................................................9
Event processing is asynchronous.................................................................................................10
Basics of ODM Decision Server Insights...........................................................................................10
Solution..........................................................................................................................................10
Concepts.........................................................................................................................................11
Entity..............................................................................................................................................11
Event..............................................................................................................................................12
Solution Gateway...........................................................................................................................13
Inbound and outbound connectivity..............................................................................................13
Agents............................................................................................................................................14
Business Object Model (BOM)..........................................................................................................15
Time...............................................................................................................................................15
When does an event arrive?......................................................................................................17
Aggregating event and entity data.................................................................................................18
Architecture...................................................................................................................................18
Designing a DSI solution...............................................................................................................19
Sources of Knowledge on ODM DSI............................................................................................19
The IBM Knowledge Center.....................................................................................................19
Books on Event Processing.......................................................................................................19
DeveloperWorks........................................................................................................................20
Important IBM Technical Notes................................................................................................20
Release History...................................................................................................................................20
Installation..........................................................................................................................................20
FixPacks.........................................................................................................................................30
Environment preparation...............................................................................................................30
Developing a solution.........................................................................................................................34
Eclipse Insight Designer.............................................................................................................35
Naming conventions for projects and artifacts..............................................................................36
Creating a new solution.................................................................................................................37
The SOLUTION.MF file..........................................................................................................40
The solution map view...................................................................................................................40
Modeling the Business Object Model (BOM)...............................................................................40
Defining Entity Types...............................................................................................................41
Defining Event Types................................................................................................................41
Business Model Definitions......................................................................................................41
Defining Entity initializations...................................................................................................49
Defining attribute enrichments..................................................................................................49
Generated Business Object Model............................................................................................51
The structure of BOM projects.................................................................................................52
Modeling the connectivity of a solution........................................................................................52
Inbound bindings.......................................................................................................................54
Inbound endpoints.....................................................................................................................54
Outbound bindings....................................................................................................................55
Outbound endpoints..................................................................................................................55
HTTP Bindings.........................................................................................................................55
JMS Bindings............................................................................................................................55
XML event message format......................................................................................................56
Transformations........................................................................................................................56
Notes about connections...........................................................................................................57
Page 2
Bundle Activators........................................................................................................................219
The Bundle Context.....................................................................................................................219
The Bundle object........................................................................................................................219
Bundle Listeners..........................................................................................................................220
Working with services..................................................................................................................220
The OSGi Blueprint component model.......................................................................................221
Blueprint Bean manager..........................................................................................................221
Blueprint Service manager......................................................................................................222
Reference manager..................................................................................................................222
Using JPA in Blueprint............................................................................................................222
Other notes .............................................................................................................................223
Examples of Blueprint............................................................................................................223
Web Application Bundles............................................................................................................224
The OSGi Application.................................................................................................................225
Using the OSGi console...............................................................................................................225
Creating a bundle from a JAR.....................................................................................................226
Adding bundles to Liberty...........................................................................................................227
Debugging Camel apps................................................................................................................228
Debugging OSGi..........................................................................................................................228
OSGi tools....................................................................................................................................229
WebSphere Liberty...........................................................................................................................229
Configuration...............................................................................................................................229
Development................................................................................................................................231
Features........................................................................................................................................231
Deploying Applications...............................................................................................................232
Security........................................................................................................................................232
SSL Security............................................................................................................................232
DB data access.............................................................................................................................235
Adding a data source...............................................................................................................235
Accessing a DB from a Java Agent.........................................................................................240
Servlets.........................................................................................................................................241
JTA...............................................................................................................................................245
Java Persistence...........................................................................................................................245
Persistence Unit.......................................................................................................................247
Physical Annotations...............................................................................................................247
Logical Annotations................................................................................................................247
Mapping Types........................................................................................................................248
Configuration in Liberty.........................................................................................................248
Examples of JPA.....................................................................................................................249
JNDI Access.................................................................................................................................252
EJB...............................................................................................................................................252
Singleton EJBs........................................................................................................................253
JAXP............................................................................................................................................253
JAXB...........................................................................................................................................253
JMS..............................................................................................................................................254
Writing a JMS Sender.............................................................................................................259
Writing an MDB......................................................................................................................259
WebSphere MQ Access................................................................................................................266
JMX and Mbeans.........................................................................................................................267
JMX and MBean programming..............................................................................................270
XSLT Component...................................................................................................................300
Camel as a Liberty EJB...............................................................................................................300
Camel DSL in OSGi Blueprint....................................................................................................300
Camel as a Liberty OSGi environment........................................................................................300
Eclipse..............................................................................................................................................300
Importing exported projects.........................................................................................................300
Installing Eclipse Marketplace.....................................................................................................301
Installing the Liberty Developer Tools........................................................................................302
Associating an Eclipse Server View with DSI.............................................................................303
Viewing server logs.....................................................................................................................307
Using GIT with Eclipse and DSI Solutions.................................................................................307
Other related tools............................................................................................................................308
TechPuzzles......................................................................................................................................308
DSI TechPuzzle 2015-01-30........................................................................................................309
DSI TechPuzzle 2015-02-06........................................................................................................310
DSI TechPuzzle 2015-02-13........................................................................................................312
DSI TechPuzzle 2015-02-20........................................................................................................316
DSI TechPuzzle 2015-02-27........................................................................................................318
DSI TechPuzzle 2015-03-06........................................................................................................320
DSI TechPuzzle 2015-03-13........................................................................................................324
DSI TechPuzzle 2015-03-20........................................................................................................331
DSI TechPuzzle 2015-04-03........................................................................................................332
DSI TechPuzzle 2015-04-10........................................................................................................333
DSI TechPuzzle 2015-04-17........................................................................................................334
Worked Examples.............................................................................................................................336
Simple Human Resources............................................................................................................337
Experiment Scenarios.......................................................................................................................339
The Education Session ............................................................................................................339
Sales orders .................................................................................................................................339
Receiving real world sensor events ............................................................................................339
Planes arriving ............................................................................................................................339
Language puzzles ........................................................................................................................340
Collections...................................................................................................................................340
Language general.........................................................................................................................341
Things to do .....................................................................................................................................341
But what is the nature of an event? What are its attributes and what is its meaning? Let us take a
few moments to examine this idea, which will serve us well in the rest of the material.
Events have two consistent attributes associated with them.
First, every event happens at some discrete moment in time. Looking back at our sample list of
events, hopefully you can see that there will be a real-world time at which such an event occurs. By
realizing that an event happens at a specific time, we can start to apply reasoning upon the order or
sequence of events. If we say that one event happens before another, what we are saying is that
the time when the first event happened is before the time the second event happened. This sounds
simple enough but given enough events of different types, we can start to look for patterns and take
actions on those patterns.
Given that a real world event happens at a specific time we can also start to apply reasoning on an
event not happening. This is a powerful notion. Using this idea we can further enrich our
understanding and processing of events.
The second attribute of an event we wish to consider is the notion of what the event applied to.
Looking back at our list we can map our events to questions related to that event.
A passenger boards a plane: Which passenger? Which plane?
An item is added to a shopping basket: What was the item? Which shopping basket was the item
added to?
Now is the time to introduce a term that will be used throughout our study. The term is an
"entity". This term is used to describe the "what" with which an event is associated. By realizing
that every event has a corresponding entity, we now have another powerful reasoning ability. We
can now reason over the set of events that apply to an individual entity.
Recapping, when an event occurs that event happened at a specific time and is associated with a
specific entity.
Given these ideas, a style of data processing built around them emerged under the
general name "complex event processing". Complex event processing is the examination of events
arriving from potentially multiple sources and performing reasoning over those events to detect and
respond to patterns, expectations or omissions found in those events. That is a rather dry
description; to make it more concrete, the IBM ODM DSI product is an instance of a complex
event processing solution.
Once we understand that a complex event processing system can be supplied sets of events and can
then reason over these events, what next? This introduces another idea, that of performing an
action. It is all well and good to detect events but if we do nothing with the new knowledge, there
is little value. What we need to do is detect the events, reason over them and as a result perform
some action. What might that action be? A complex event processing system has to be flexible in
that respect. In the ODM DSI world, the action could be the sending of a request to another IT
system to perform a task such as sending an email, updating a database or initiating a process
instance but these are merely examples. The nature of the action is likely to be extremely varied
and as such a good complex event processing environment must be flexible in how actions can be
performed.
See also:
The Power of Events: An Introduction to Complex Event Processing in Distributed Enterprise Systems
Solution
A "Solution" is a complete deployable unit that represents what we are building. If it helps,
think of a "Solution" as a project or application. The end goal of working with ODM DSI is to
build a solution and deploy it into production. The solution is developed within an Eclipse
environment supplied by IBM called Insight Designer. Once a solution has been built, it is exported
from Eclipse into a file known as a "solution archive". This can then be deployed (installed) into a
server component known as a Decision Server.
Concepts
In Object Oriented programming, we have the idea of inherited types. For example, I could define a
"Vehicle" type as an object that has properties such as:
- Number of wheels
- Maximum passengers
- Fuel type
However if I wanted to create new types such as "Cars" or "Boats" those could be considered
"inherited" or "derived" from the base "Vehicle" type. In some programming languages (eg. Java)
we can define that the base type is not "instantiable" in its own right but instead must be extended
by some other type in order to be of use. This is known as an "abstract" type.
In DSI, a "Concept" is a generic object that has properties defined against it. It is not instantiable
by itself but rather forms the base for other types. When we further talk about things called entities
and events, we will find that they can both be derived from a concept definition.
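The object-oriented analogy above can be sketched in Java. The class names here are purely illustrative and are not part of the DSI API; they simply mirror the relationship between a Concept and the entity or event types derived from it.

```java
// Illustrative only: these classes mirror the analogy in the text and are
// not part of the ODM DSI API.
public class ConceptSketch {

    // Like a DSI "Concept": defines shared properties but cannot be
    // instantiated in its own right.
    public static abstract class Vehicle {
        public int numberOfWheels;
        public int maximumPassengers;
        public String fuelType;
    }

    // A derived, instantiable type, analogous to an entity or event type
    // that is based on a concept.
    public static class Car extends Vehicle {
        public Car() {
            numberOfWheels = 4;
            maximumPassengers = 5;
            fuelType = "petrol";
        }
    }

    public static void main(String[] args) {
        // new Vehicle() would not compile: the base type is abstract.
        Vehicle v = new Car();
        System.out.println(v.numberOfWheels); // prints 4
    }
}
```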
Entity
We have seen that a "Concept" is a model of an abstract thing. An instantiated "Entity" is a
unique instance of a named "Concept" and may have relationships to other Entities.
Think of an entity as a model of a "specific thing". For example, we have the "concept" of a car, but
we also have instances of real cars; a real car instance would be an example of an "Entity".
Every unique entity has a unique identifier associated with it to allow us to distinguish one entity
instance from another. In our example of cars, the car's unique identity may be modeled as its
number plate or VIN.
The structure of an instance of an entity must be modeled before it can be used and is modeled
using the notion of a "Business Model". An Entity may have many attributes associated with it and
each attribute is also modeled. For example, an instance of a car has attributes such as paint color,
manufacturer, year built, mileage and more. We don't want to model every conceivable attribute in
our Entity description, instead we want to only model the attributes that will be used to reason
against that Entity.
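The two ideas above, identity and selective modeling, can be illustrated with a hand-rolled Java sketch. This is not the DSI API; the class and attribute names are hypothetical. The key point is that two instances represent the same entity exactly when their unique identifiers match.

```java
import java.util.Objects;

// Illustrative only: a hand-rolled entity sketch, not the DSI API.
public class CarEntity {
    // The unique identifier that distinguishes one entity instance
    // from another (here, the car's VIN).
    public final String vin;

    // Only the attributes we intend to reason over; we deliberately do
    // not model every conceivable property of a real car.
    public String paintColor;
    public int mileage;

    public CarEntity(String vin) {
        this.vin = vin;
    }

    // Two instances are the same entity exactly when their unique
    // identifiers match, regardless of the other attribute values.
    @Override
    public boolean equals(Object other) {
        return other instanceof CarEntity && vin.equals(((CarEntity) other).vin);
    }

    @Override
    public int hashCode() {
        return Objects.hash(vin);
    }
}
```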
One of the primary purposes of ODM DSI is to maintain models of Entities at run-time.
Event
Think about something happening at some point in time. This is the core notion of an event. An
event carries with it a payload of data. This payload is considered to be the "attributes" of the event.
Each event must have a mandatory attribute that carries the date and time at which the event is
considered to have happened. The default name for this attribute is timestamp. Events are defined
within the "Business Model". A component called the "Solution gateway" is used to receive
incoming events and route them correctly for processing.
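As a minimal sketch of the shape of an event, consider the following Java class. It is purely illustrative: in DSI, event types are defined in the Business Model rather than hand-coded, and the names here are hypothetical. It shows the one structural rule from the text: a mandatory timestamp plus a payload of attributes.

```java
import java.time.Instant;
import java.util.Map;

// Illustrative only: a minimal stand-in for a modeled event; real DSI
// event types are defined in the Business Model, not hand-coded like this.
public class PurchaseEvent {
    // The mandatory attribute: the point in time at which the event is
    // considered to have happened ("timestamp" is the default name).
    public final Instant timestamp;

    // The remainder of the payload: the event's other modeled attributes.
    public final Map<String, Object> attributes;

    public PurchaseEvent(Instant timestamp, Map<String, Object> attributes) {
        this.timestamp = timestamp;
        this.attributes = attributes;
    }
}
```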
Each different event type that is modeled is considered to have a corresponding "Event type"
that allows ODM DSI to know what kind of event it is. This allows ODM DSI to perform initial
coarse grained analysis of it. For example, a purchase event may be something we are interested in
but a shopping cart abandoned event may not be useful to us at this time.
Events are delivered to Rule agents and Java agents for handling. An agent can itself emit a new
event that could be further processed by other agents.
When an Event is processed, the goal is to relate that Event to an Entity. For example, if an event
arrives saying "Lord of the rings was checked out of the library by Neil" then there are two entities
involved here. The first is the physical instance of the copy of the book and the second is the
borrower who borrowed that book. The arrival of that event should update the "Entities" being
modeled for both of these items.
Events don't simply "appear" out of nowhere. We call the source of an event the "event producer".
Conversely, events do not just disappear into the ether. They are usually destined to be given to
something else for processing. We call the destination of an event the "event consumer". ODM
DSI allows for event producers to be external systems to DSI which can then submit events to DSI
for processing. Event consumers can also be external systems to DSI which can be the target of
events emitted by DSI.
There is an additional concept that is of extreme value to us: the notion of an event
being created by DSI for consumption also by DSI. These types of events we call derived events.
Other names for what we call derived events have been "internal events" and "synthetic events".
Solution Gateway
The Solution Gateway is the entry point for events arriving from external systems.
When sending events to external systems, the Solution Gateway is not used; outbound events
instead leave through an outbound endpoint.
For the ODM DSI product, events are, in practice, sent and received over either JMS or HTTP.
When working with Systems of Record data that is owned outside of the DSI environment, it is
suggested that DSI not make those updates directly. Instead, DSI should emit an outbound event
and have some external system be responsible for updating the SoR using the event and its content
as the instructions on what to change. There are a number of good reasons for this but probably the
most important is the transactional nature of DSI. When an incoming event is being processed, it is
possible that an agent may raise an exception and the work done by the processing of the event be
rolled back. Since the work done in an agent is not under JTA (or XA) transactional control,
anything that is not under the control of DSI won't be rolled back. The emission of a new event
using the connections technologies is managed as part of the transaction and as such, the outbound
event will either happen or not happen as a result of the transactional processing of the original
event.
Agents
The phrase "Agent" is rather vague but once understood, no better phrase is likely to be found. The
idea here is that when an event arrives, some logic processing is performed by ODM DSI to reflect
what that event means. The "thing" in ODM DSI that performs this processing work is called an
"Agent". During the development of an ODM DSI solution, you will build out one or more Agents
to perform these tasks. It is the Agent that hosts the business logic that determines what should
happen when events arrive.
When an event arrives at an agent, the agent can perform a number of distinct actions:
- Emit a new event based on the arrival of the original event
- Update an existing entity using data contained in, or calculated from, the event
From an IBM ODM DSI perspective, an agent is built within the Eclipse tooling through either Java
coding, rules language definitions or scoring.
When an agent is built, it subscribes to one or more types of events, thus registering its desire and
ability to handle those types. Only those agents which subscribe to a particular type of event will receive a
copy of an instance of that event for processing. Agents that have not subscribed to a particular
type of event are simply unaware of it should such an event arrive at DSI. We can think of an agent
as having an "interface" and events that don't have the corresponding matching type don't pass
through that interface.
When an event is published, an instance of the event will be received by all agents that have a
matching interface.
An agent is associated with an entity. The entity to which the agent is associated is called the
"Bound Entity". The agent can perform all kinds of activity against the bound entity including
updating its information. Entities can have relationships with other entities. An agent can access
any other entity that has a relationship with its bound entity but in a read-only manner.
A single agent is bound to only one entity but an entity can have multiple different agents bound to
it.
When a single event is to be processed by multiple agents, each agent has a priority property that
governs its relative execution order. Agents with a higher priority will receive and process the event
prior to agents with a lower priority. Agents with equal priority will be executed in alphabetical
order by their names.
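The subscription and ordering rules described above can be sketched as a toy dispatcher. This is illustrative only, not the DSI runtime: agents subscribed to a different event type never see the event, and matching agents are ordered by descending priority with ties broken alphabetically by name.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative only: a toy dispatcher modeling the subscription and
// ordering rules described in the text; not the DSI runtime.
public class AgentDispatch {

    public static class Agent {
        public final String name;
        public final int priority;
        public final String subscribedEventType;

        public Agent(String name, int priority, String subscribedEventType) {
            this.name = name;
            this.priority = priority;
            this.subscribedEventType = subscribedEventType;
        }
    }

    // Returns the names of the agents that would receive an event of the
    // given type, in processing order: higher priority first, ties broken
    // alphabetically by agent name.
    public static List<String> deliveryOrder(List<Agent> agents, String eventType) {
        List<String> order = new ArrayList<>();
        agents.stream()
              .filter(a -> a.subscribedEventType.equals(eventType))
              .sorted(Comparator.comparingInt((Agent a) -> a.priority).reversed()
                                .thenComparing(a -> a.name))
              .forEach(a -> order.add(a.name));
        return order;
    }

    public static void main(String[] args) {
        List<Agent> agents = List.of(
                new Agent("audit", 5, "purchase"),
                new Agent("alert", 5, "purchase"),
                new Agent("fraud", 9, "purchase"),
                new Agent("cleanup", 9, "cartAbandoned"));
        // fraud (priority 9) goes first; alert and audit (both priority 5)
        // run alphabetically; cleanup never sees "purchase" events.
        System.out.println(deliveryOrder(agents, "purchase"));
        // prints [fraud, alert, audit]
    }
}
```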
There is a relationship between agents and solutions. When we get to building an agent, we will
find that it is constructed as an Insight Designer project and that the project is referenced by a
Solution. We can loosely think of a solution as being a container for a set of agent projects and as
such, becomes the deployment technique for agents (although they can be installed individually
after the solution has been installed).
See also:
Implementing Agents
Time
The very nature of ODM DSI means that we have to give a lot of thought to the concept of time.
Although it may seem redundant, we are also going to refresh our own minds on some basics of
time.
Let us start with how we measure time. When we measure things, we ascribe units to them. For
distance it is miles, for weight it is pounds, for volume we may use liters. So what then are our
units of time?
Our base unit is the second. Beyond that we have minutes, hours, days, weeks, months and years.
With this in mind, I can start to refer to durations of time. I might say "15 seconds" or "46 years"
and these refer to durations or spans of time. However, time is an odd thing ... it flows in one
direction (from the past to the future) and on that "timeline" there are individual marks. We call
those "points in time". For example, 4:27pm on July 29th, 1968 was a specific point in time.
Another point in time will be 3:14:07 on January 19th, 2038 where this one is in the future.
Although we can refer to specific points in time such as 9:20pm we run into another consideration
when contemplating geographic timezones. 9:20pm in Texas is 3:20am in London on the next day.
So simply saying 9:20pm is not sufficient to fix a point on the timeline; we need to consider which
timezone that time refers to. However, a time point is just that ... a mark on the timeline that we can
always describe as being some number of seconds in the past or some number of seconds in the
future. If the timepoint is exactly 8 hours from now and I start a stopwatch ticking down, then no
matter where I travel in the world with that stopwatch, the timepoint will happen when the
stopwatch reaches zero, irrespective of my local wall clock time.
Now let us bring in the notion of duration. Duration is the measurement of time between two time
points. It is an absolute value meaning it is not relative to an observer. It may be measured in any
appropriate time units with "seconds" being the most fine grained.
With the notions of a fixed point in time and a duration being a measurement of time between two
time points, we introduce one more concept ... the idea of the "time period".
A time period can be considered as the set of all time points between a start time and an end time.
For example:
However, given what we know, we can also define a Time Period as a start time plus a duration or
an end time minus a duration (both of these will give us fixed points in time).
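These definitions can be made concrete with Java's java.time classes. The TimePeriod class below is purely illustrative and not part of ODM DSI:

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative only: a time period is the set of all time points between
// a start time and an end time.
public class TimePeriod {
    final Instant start;
    final Instant end;

    // Defined by two fixed time points ...
    TimePeriod(Instant start, Instant end) {
        this.start = start;
        this.end = end;
    }

    // ... or, equivalently, by a start time plus a duration.
    static TimePeriod fromStartAndDuration(Instant start, Duration d) {
        return new TimePeriod(start, start.plus(d));
    }

    // A time point lies in the period if it is neither before the start
    // nor after the end.
    boolean contains(Instant t) {
        return !t.isBefore(start) && !t.isAfter(end);
    }
}
```

For example, "today" is simply fromStartAndDuration(the previous midnight, Duration.ofHours(24)).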
If all of this is making your head hurt, we are about done. Before we finish, though, consider the
following examples of durations and maybe a light bulb will go on:
today: the duration starting at the previous midnight and lasting for 24 hours.
this month: the duration starting on the 1st of the month and lasting for however many days this month contains.
one hour before closing: the duration defined as the preceding hour before the pub shuts until we can drink no more.
new year's day: the duration from midnight on January 1st to midnight on January 2nd.
Take care to note the subtle distinction between a time duration and a time period. In summary, a
time duration is a length of time anchored to no fixed time points, while a time period is a length of
time that maps out all the time points within it.
See also:
Time operators
We see that T2 is within the time period of T1 plus 24 hours and hence is eligible for the
discount
However, imagine that we have a technology outage or our network is simply slow. This means
that there will be a latency between when T2 happens and when the event may actually arrive at
ODM DSI.
To visualize this, see the following:
The second buy event did indeed happen at time T2, which is within our time period for the
discount, but because our technology was down or very slow, the event didn't arrive until after the
expiry of the discount interval. If I receive my credit card bill and don't get my expected 10%
discount, I will be upset: I bought a blue widget and really did buy a second widget within 24
hours, so I am not at fault here.
And this is where ODM DSI introduces a new time notion: every event carries with it a time stamp
recording when the event actually occurred. When the event arrives at ODM DSI, the wall clock
time at which it physically arrives is unimportant. What ODM DSI cares about is when the event
actually happened and its relationship to the rules at that point in time, not when the event
mechanically arrived at the product.
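This occurrence-time reasoning can be sketched in Java. The class and method names below are invented for illustration and are not the DSI API:

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative only: eligibility is decided from the event's occurrence
// timestamp, not from the wall clock time at which it arrives.
public class DiscountCheck {
    // The second purchase qualifies if it occurred within 24 hours of the
    // first purchase. The arrival time is deliberately ignored: only the
    // event's occurrence timestamp matters to the business rule.
    static boolean qualifies(Instant firstBuyOccurred,
                             Instant secondBuyOccurred,
                             Instant secondBuyArrived) {
        Instant expiry = firstBuyOccurred.plus(Duration.ofHours(24));
        return !secondBuyOccurred.isAfter(expiry);
    }
}
```

With this rule, a second purchase that occurred 23 hours after the first still qualifies even if its event arrives hours later.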
The time point when an event is believed to have happened is available in a rule construct called
"now".
When we think about aggregating data, we must consider the notion of "when" the calculations are
performed. These stories differ depending on whether the aggregate in question is built from event
data or entity data.
Aggregates built from events are recalculated as soon as possible after the event is processed. The
aggregate values are built from either the count of such events or data contained within the event.
Specifically, an event aggregate may not utilize data from other events or any data contained within
entities.
Aggregates built from entities are recalculated only on a configurable periodic basis.
An aggregate is also scoped by a solution.
See also:
Architecture
Let us start with the notion of an event arriving at ODM DSI. One of the first things that happens is
that ODM DSI searches for the set of agents that can process this "type" of event. We should take a
few minutes to consider this notion. Within an ODM DSI environment, there will be multiple types
of events that can be received. There will also be multiple types of business Entities that are
managed. As such, there needs to be this degree of traffic cop processing that looks at the incoming
event and chooses which (if any) agent types should receive the event.
It is the "agent descriptor file" artifact that maps types of events to types of agents.
See also:
Situation Driven Design with ODM Advanced Decision Server Insights - 2015-03-25
http://www-01.ibm.com/support/knowledgecenter/SSQP76_8.7.0/com.ibm.odm.itoa/topics/odm_itoa.html
A readable book but I get the distinct impression that it feels like a specification of some theoretical
event processing system. I could imagine this book being used as the basis for an industry
specification of some event processing language. However, it does a good job of providing the core
concepts of event processing without complicating them with any particular vendor implementation.
Considered by many to be the original source material on much of event processing, I found it to
be rather academic and hard going. It is undoubtedly very important for those who may be
implementing event processing middleware, but I am not convinced that it will be that applicable
to any but the most studious DSI consumers.
DeveloperWorks
DeveloperWorks is IBM's technical library of knowledge on its products. It regularly publishes
new articles on using DSI in interesting new ways that often clarify or provide examples of
complex areas of the product:
developerWorks - Simplify complex code with OSGi services in Decision Server Insights rules - 2015-03-25
Release History
DSI was first released at the end of 2014.
Installation
The part numbers for the components of the product are:
Description: IBM Decision Server Insights for Windows 64 bits (IM Repository) V8.7.1 multilingual
Part Number: CN5DMLL
Description: IBM Operational Decision Manager Advanced V8.7.1 for Windows Multilingual eAssembly
Part Number: CRW59ML
The prerequisites and supported packages can be found at the following IBM web page:
http://www-01.ibm.com/support/docview.wss?uid=swg27023067
However note that the above is for IBM ODM Advanced as a whole and not just the DSI sub
components.
The product can be installed through IBM Installation Manager. Installation
Manager is a tool that can be used to perform installation and update tasks. It has knowledge of a
variety of products and the file systems and directory structures in which they live.
Installation Manager can be found on the Internet here.
The supported environment for installation is:
JDK 1.7.0
If the product was downloaded from IBM, it will be contained in a "tar" file that is called:
DSI_WIN_64_BITS_IMR_V8.7.1_ML
This should be extracted into its own folder. Make sure you have sufficient disk space, as the
content is gigabytes in size. Windows does not appear to have a native "tar" file extractor, but one
can download 7Zip (http://www.7-zip.org/) to perform the extraction. In fact, I would go further
and ask that you use 7Zip rather than any other Windows based tar file extractor: I have used
another popular extractor and found that (by default) the resulting data was not what was expected.
From within the extracted content we will find a folder called "disk5" and within there, a
"repository" file which is the input data to Installation Manager. We are now ready to prepare
for the installation. Launch Installation Manager and select File > Preferences. We now
add a new Repository:
and pick the "repository.config" file from the ODM DSI extraction folder:
When we launch Installation Manager Install screens to install ODM DSI, we are first presented
with the following screen:
After selecting that we do indeed wish to install the product, we are prompted to accept the
licensing terms.
Next we are asked which directory we wish to use to host the files necessary for the product's
operation. In this example we chose C:\IBM\ODMDSI87 (the 87 is the version number).
Next we can select which options of the product to install. This is specifically the choice of which
languages will be used for messages and screens.
With the details selected, we can now confirm the final installation.
The installation will progress for a while and at the conclusion we will be presented with a
successful installation outcome.
FixPacks
It is always a good idea to see if there are any fix packs supplied by IBM for your product.
Environment preparation
The development environment for ODM DSI solutions is an Eclipse environment called "Insight
Designer".
Once Eclipse is launched, open the "Decision Insight" perspective:
Note: The following issue was resolved in 8.7.0 fix pack 1 so if you are running at that level or
beyond, you can skip setting the target platform.
Before building a solution, a very specific and quite opaque series of steps must be performed
which, generically, we call "setting the target platform". Quite why this needs to be performed
manually following an installation is not clear. It is the sort of thing that would seem to be able to
be done automatically (and transparently) for us. However, it need only be performed once per
Eclipse workspace being used and then promptly forgotten about until we create our next
workspace.
The steps can be achieved by opening the Eclipse preferences and going to Plug-in
Development -> Target Platform.
Click the Add button and from "Template" select "Insight Server".
Click Next and Finish. Once this platform has been added, make sure that it is flagged as active:
If these steps are not followed, an error similar to the following will be presented in the Eclipse
errors view:
Developing a solution
ODM DSI solutions are built through a combination of design (thought) and practical actions
(interaction with the tools). What we will consider here are the practical steps of building such a
solution.
Not all ODM DSI solutions will utilize all aspects of the technology. For example, some solutions
may need Java Agents while others simply won't. There are however certain parts of a solution
project that are common to each and every such project.
The common parts include:
Exporting a solution
ODM DSI solutions are built using an instance of the Eclipse development tool. The Eclipse
version supplied is at release level 4.2.2 which is also known by Eclipse folks as "Juno".
The overall pattern for building a new solution from scratch is:
1. Create a new Solution project (and a BOM project)
2. Create a new Business Model Definition
1. Define Entity Types
2. Define Event Types
3. Create a new Connectivity Definition
1. Complete the .cdef file
4. Create a new Rule Agent Project
1. Complete the agent.adsc file
5. Create a new Action Rule
6. Export the solution (see Exporting a solution)
7. Deploy the solution (see Deploying a solution to a DSI Server Manually)
8. Generate connectivity file (see Deploying Connectivity Configurations)
1. Edit the file
9. Deploy connectivity file (see Deploying Connectivity Configurations)
Platform: Version
Juno: 4.2
Kepler: 4.3
Luna: 4.4
After opening Eclipse for the first time, one should switch to the ODM DSI perspective. An Eclipse
perspective is the set of editors and views that are logically grouped together. The DSI perspective
provides everything needed to build DSI solutions.
To change perspective, use the Window > Open Perspective > Other menu item:
You will know which perspective you are in as it will be highlighted in the bar at the top of Eclipse:
Each of the various artifacts with which we work have icons associated with them:
Aggregate definition
Connectivity definition
BOM Model
Agent Descriptor
Java source
Manifest file
Business Modeling definition
Artifact type: Suggested naming
Solution Project: <solution>
BOM Project: <solution> BOM
Package <solution>, Name BusinessModel
Package <solution>, Name <Event>
Package <solution>.ext, Class name <Data Provider Name>
Clicking next prompts us for the name of the "BOM" project to create or use. The recommendation
is to use the same name as your solution project with a suffix of "BOM". For example, if your
project were called "Payroll" then a suggested name for the corresponding BOM project would
be "Payroll BOM".
The creation of the Solution results in three new Eclipse projects being built. They are:
<Solution> - Java Interfaces: A project that contains Java Interfaces used for
programming access to the solution.
For example:
We will be working with all of these and it may take you some time to be able to differentiate
between them so work slowly and carefully at first.
In addition to these projects, we will also be working with others including:
OSGi project
IBM-IA-ZoneId: An optional property that defines the time zone in which the solution
operates.
The solution map presents a visual indication of what steps need to be performed in order to
complete the solution. The diagram is split into a number of distinct sections corresponding to the
flow of building the solution. Specifically, first we model the solution, then we author the details
and finally we deploy the solution for operation. There are boxes corresponding to each of these
major flow steps. Within each box are summary reminders of what we can do plus links to launch
activities to perform those tasks. Help buttons are also shown beside each activity that will launch
the corresponding help pages for that activity.
Steps within the flow may be grayed-out to indicate the preceding steps must first be achieved
before we can make further progress.
For example, the modeling of the data must happen before the creation of the agents that will be
used to process that data.
When building a BOM, we will create items that represent:
Entity types
Event types
Concepts
Enumerations
Properties
Relationships
The content of the generated ".bmd" file should be edited through the Business Model Definition
Editor in Eclipse.
When initially opened, it is empty and waiting for you to enter your definitions. Each line in the
file is called a statement and must end with a period character.
When the ".bmd" file is saved, this automatically causes Eclipse to rebuild the BOM model from
the ".bmd" definition file.
An example of a statement might be:
an employee is a business entity identified by a serial number.
The step for creating a new ".bmd" file is also found in the Solution Map and may be launched
from there:
See also:
Modeling Concepts
The idea of a concept is that of an abstract data type which is a named container of properties
(attributes, fields). The properties can be simple types or relationships to other Concepts.
The syntax for modeling a concept is:
a <concept> is a concept.
for example:
an 'Address' is a concept.
To add properties to the concept definition, we can use the "has a" phrase:
a <concept> has a <property>.
for example:
an 'Address' has a 'street'.
An alternative way to define properties is to include them in the initial concept definition using the
"with" phrase:
an 'Address' is a concept with a 'street', a 'city' and a 'zip'.
which is equivalent to:
an 'Address' is a concept.
an 'Address' has a 'street'.
an 'Address' has a 'city'.
an 'Address' has a 'zip'.
We can create a new concept by extending an existing concept. The syntax for this is:
a <new concept> is a <existing concept>.
We might want to do this to create a "base definition" of a data type and then create specializations
for it.
For example:
a 'US Address' is an 'Address'.
a 'US Address' has a 'state'.
There is also the idea of an "enumeration" where we can define the possible values of a concept:
For example:
a 'Security Classification' can be one of: 'Unclassified', 'Internal Use Only', 'Confidential'.
See also:
Concepts
When we model an Entity Type what we are really doing is building a data model that will be used
by ODM DSI to represent an instance of such an entity. This data model is hierarchical in nature
and is composed of properties and relationships. Each entity type must have a property that is
considered its identity (or key). No two distinct entities may have the same value for this identity
property. The data type for the identity property must be String.
For example, if we are modeling an Entity that represents an Employee, we might choose a property
called "employeeNumber" as the identity. When an event is processed, we can use a property in
the event to locate the corresponding Entity (if one exists). The phrase "identified by"
defines the property to be used as the "key".
The syntax for modeling an entity type is:
an <entity> is a business entity identified by a <property>.
For example:
an 'Employee' is a business entity identified by a 'serial number'.
Similar to the "concept" definition, we can model properties of an entity using either the "has"
or "with" phrases:
an 'Employee' is a business entity identified by a 'serial number'.
an 'Employee' has a 'name'.
an 'Employee' has a 'department'.
or
an 'Employee' is a business entity identified by a 'serial number' with a 'name' and a 'department'.
Which style you choose is merely a matter of preference as they are functionally identical.
See also:
When our solutions are deployed, we will be sending in events for the run-time to process. Before
the run-time can receive such events, we have to model them in a similar fashion to our modeling of
concepts and entities. An event is also a data type definition that has a name and a set of properties.
However, one of the properties of an event must be of the data type "date & time" and will be
used to identify the timestamp at which the event was created. This is used by the run-time for time
based reasoning. If we don't explicitly model this timestamp, one will be provided for us.
Other properties can be modeled on the event using the "has a" and "with" syntaxes.
The syntax for modeling an event type is:
an <event> is a business event.
For example:
If we choose not to supply an explicit property to be used to hold the timestamp of the event, a
default is provided called "timestamp". If we desire to explicitly name the property to be used to
contain the timestamp, we can use the following syntax:
an <event> is a business event time-stamped by a <property>.
For example:
a 'Promotion' is a business event time-stamped by an 'approval date'.
An additional option available to us when defining events is to extend an existing event definition.
The general syntax for this is:
an <event> is an <event>.
For example:
an 'Executive Promotion' is a 'Promotion' with a 'business justification'.
See also:
Modeling Attributes
A Concept, an Entity Type and an Event Type can all have attributes. We will generically call
concept types, entity types and event types "objects". An attribute of an object is a named item
contained within its type definition.
Note: I am going to use the words attributes, properties and occasionally fields interchangeably. I
am sure that some purist will be able to educate me on the semantic differences between those
notions and I would welcome that ... however, as of the time of writing, the word I use seems to be
based on whim.
If you are familiar with Java, you won't be far wrong in thinking of an attribute of an object just like
a field in a Java class definition. The attribute is defined with both name and type. If no type is
supplied, text is assumed.
There are a couple of ways to model such attributes all of which are semantically identical.
One way is to use "with":
... with a <property>, a <property>, ..., a <property>.
... with a <property>, a <property>, ... and a <property>.
Both of the above will define named properties on the target object. There is an additional phrase
that can be used to define a boolean (true/false) property which is "can be".
Using "can be"
a <[concept|entity|event]> can be a <property>.
For example:
an 'Employee' can be 'retired'.
The data type for a property, if not explicitly specified, is "text". To specify alternative data
types, the type can follow the name of the property within parentheses. The following types are
allowed:
Type: Java type
numeric: double
integer: int
text: java.lang.String
a boolean: boolean
date: ilog.rules.brl.SimpleDate
time: java.time.LocalTime
date & time: java.time.ZonedDateTime
duration: com.ibm.ia.AbsoluteDuration
a Point: com.ibm.geolib.geom.Point
In the following example, notice the data type definition for "date of birth" and "salary":
a 'Person' is a concept.
a 'Person' has a 'name'.
a 'Person' has a 'date of birth' (date).
an 'Employee' is a 'Person' identified by a 'serial number'.
an 'Employee' has a 'department'.
an 'Employee' has a 'salary' (numeric).
When an instance of an entity or an event is created, the properties are not initially set with values.
We can set default values of a property in its definition using the syntax
(<type>, <value> by default)
For example:
an 'Employee' has a 'salary' (numeric, 0 by default).
If the property of an entity or event must have a value to make the object meaningful, we can flag
the property as being required with the syntax:
[mandatory]
for example:
an 'Employee' has a 'department' [mandatory].
Each of the properties described so far has a single value, however we can imagine an object as
being able to have a property which is a list of values.
For example, in the simple case of an Employee having a property called a "telephone
number", we might declare:
an 'Employee' has a 'telephone number'.
however it is possible that an employee may have multiple telephone numbers. We can express this
using the "has some" syntax:
a <object> has some <properties>.
For example:
an 'Employee' has some 'telephone numbers'.
Modeling Relationships
So far we have considered only the definition of properties of simple types within a model but we
can also have those properties be richer definitions such as concepts.
A relationship to a concept uses the "has" keyword:
a <[entity|event|concept]> has a <concept>.
In this case we would define an entity or event as having a named property that is an instance of the
concept, where the property takes the same name as the concept. For example:
a 'Person' has an 'Address'.
A second possibility is to explicitly name the property. For example:
a 'Person' has an 'Address', named the 'address'.
A third possibility is to provide the name and type of the concept using:
a <[entity|event|concept]> has a <property> (a <concept>).
For example:
a 'Person' has an 'address' (an 'Address').
A fourth style uses "that is". For example:
a 'Person' has an 'address' that is an 'Address'.
Each of these are semantically equivalent and simply offer alternative styles of description. Which
one to use merely becomes a matter of choice. As if this wasn't enough IBM has gone out of its
way to provide even more options. Instead of using the phrase "has", you can also specify "is
related to". The following are all equivalent:
a 'Person' is related to an 'Address'.
a 'Person' is related to an 'Address', named the 'address'.
a 'Person' is related to an 'Address' (an 'Address').
a 'Person' is related to an 'Address' that is an 'Address'.
Comments can be inserted into a ".bmd" by starting a line with two dash symbols
-- This is a comment
An XML Schema can be imported into Eclipse to define the Events and Entities. In order to allow
Eclipse to parse the content correctly, annotations must be added. These provide instructions on
how the Schema should be interpreted.
To flag a schema complex type as an event, we would add:
<annotation>
<appinfo source="http://www.ibm.com/ia/Annotation">
<event />
</appinfo>
</annotation>
The element within a complex type that is to be used as the timestamp of the event must also be
flagged:
<annotation>
<appinfo source="http://www.ibm.com/ia/Annotation">
<timestamp />
</appinfo>
</annotation>
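Combining the two fragments, a complete annotated complex type might look like the following sketch. The type and element names ("purchaseEvent", "occurredAt", "amount") are invented for illustration:

```xml
<complexType name="purchaseEvent">
  <annotation>
    <appinfo source="http://www.ibm.com/ia/Annotation">
      <event />
    </appinfo>
  </annotation>
  <sequence>
    <element name="occurredAt" type="dateTime">
      <annotation>
        <appinfo source="http://www.ibm.com/ia/Annotation">
          <timestamp />
        </appinfo>
      </annotation>
    </element>
    <element name="amount" type="double" />
  </sequence>
</complexType>
```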
When we create a solution project, one of the wizard screens allows us to create a new BOM
project. On that screen we also have the option of linking to an existing BOM project.
The newly created Solution project will have the same concept, entity and event definitions
available to it as those found in the original solution project. Changes to the .bmd will be visible in
all projects that utilize the BOM project.
Suggested language for initial business model definitions
When defining data models there are multiple ways to achieve the same definition. This is partly
due to the flexibility of the English language and the syntax and grammar associated with it.
It is suggested that while learning DSI you keep your descriptions simple. In English, one can
express an idea in a perfect, unambiguous fashion using as few words as possible.
For example, I am likely to say:
My car outside my house is a red Toyota Corolla.
as opposed to:
I have a car.
It is outside my house.
Its color is red.
It is made by Toyota.
It is a Corolla.
However, the second example contains exactly the same information as the first. One may argue
that the first example is far superior to the second but I claim that this is simply because you can see
the solution in front of you. Whenever you have the answer before you, it can immediately be
recognized as correct however when you don't yet have the answer, building "an" answer that is
correct is more important than building a perfect answer first time around.
When building an entity, I suggest the following pattern:
an <entity> is a business entity identified by a <key property>.
an <entity> has a <property>.
an <entity> has a <property>.
an <entity> has a <property>.
This pattern says that we define an entity with only its single key property and then add the
additional properties to the definition one per line.
Similarly, I advocate the construction of an event as:
an <event> is a business event.
an <event> has a <property>.
an <event> has a <property>.
an <event> has a <property>.
In addition, we can also define actions to be performed such as setting additional properties of the
entity.
For example:
As an alternative, we can define a Java class that will be automatically used to build new Entity
instances.
When an event arrives and no corresponding entity exists but we have a rule to process such
event/entity pairs, we can be assured that the entity will be created before being processed by the
rule. However, take care if the arrival of an event causes multiple entities to be created. It is not
assured that any entities created by the arrival of an event which are related to the rule processing
entity will be created prior to the rule being started.
For example, if an event E arrives which causes the creation of two entities (I1 and I2) and I1 has a
relationship to I2 then if a rule fires because of the arrival of E and the existence of I1 then the rule
may not see a reference to I2 because it may not yet have been constructed.
See also:
In the statement section of a BMD we can define enrichments which take the general form of:
an <Entity> is enriched by <A data provider>,
given
<parameter name> from <field value>,
and ...
Having made these definitions, what remains is to implement the data provider as a Java Class.
This is described in a separate section.
There is a vitally important consideration that needs to be understood when thinking about enriched
attributes. If we define an attribute as enriched, its value is only calculated when an explicit request
for the property's value is made within a DSI server agent. Once calculated, the value will not be
recalculated for a cache time period; once that period expires, the value will be re-calculated.
What this means is that, for a given entity, a property could appear to change over time without any
explicit changes being made to its value, assuming the enrichment function returns different
values over time.
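That behavior can be sketched with a minimal time-to-live cache in Java. This is purely illustrative of the caching idea, not DSI's actual implementation:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Illustrative only: an enriched attribute behaves like a cached call to
// the data provider; the provider is re-invoked only after the cache
// period has expired.
public class EnrichedValue<T> {
    private final Supplier<T> provider; // stands in for the data provider
    private final Duration ttl;         // the cache time period
    private T cached;
    private Instant fetchedAt;

    public EnrichedValue(Supplier<T> provider, Duration ttl) {
        this.provider = provider;
        this.ttl = ttl;
    }

    public T get(Instant now) {
        if (cached == null || now.isAfter(fetchedAt.plus(ttl))) {
            cached = provider.get(); // (re)calculate the value
            fetchedAt = now;
        }
        return cached;
    }
}
```

Two requests within the cache period see the same value; a request after the period expires triggers a fresh calculation.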
Another important consideration is that if one uses the REST APIs to retrieve an entity that has an
attribute that is enriched, the attribute is not returned to the client. It will not be found in the HTTP
response data. This is also true for serialized XML. It is safe to consider the attribute as found in
the entity to be not so much a value as a reference to a "function" that, when called, will return a
value.
The caching mechanisms employed can be based on the selection of an eviction algorithm.
Eviction is the action DSI will take to reclaim cache space. The two algorithms available are "time
to live" and "least recently used".
The time to live is a period measured in seconds after which the cached data record will be purged.
Think of this as timer based with the timer starting when the record is written. To enable this
feature, edit the file located at:
<DSIRoot>/runtime/wlp/templates.servers/cisContainer/grids/objectgrid.xml
The least recently used algorithm tracks access patterns to cached data and when there is a shortage
of cache storage size, DSI will select which old cache items to remove to make room for new items.
The setup of this requires editing the file:
<DSIRoot>/runtime/wlp/templates.servers/cisContainer/grids/objectgrid.xml
and making changes as defined in Knowledge Center. I don't repeat them here as I want you to
study the notes in detail for this specific recipe.
See also:
The folder called "model" is the BOM of interest to us. In the above, its content was generated
from the .bmd file.
If the .bmd contained:
an employee is a business entity identified by a serial number.
an employee has a job title.
an employee has a salary (numeric).
Take a few moments to see the relationship between the BOM model and the .bmd declaration of
that model. By default, when a change is made to the .bmd, the model will be rebuilt. If after
building a model, you decide to make manual changes to the BOM, you can disable the relationship
between the BOM model and the .bmd.
*.b2xa: A file which describes the mapping between the BOM and the XOM
Inbound Bindings
Inbound Endpoints
Outbound Bindings
Outbound Endpoints
If some major event happens (you win the lottery), I can inform you of that event by calling your
phone or sending you an email. Even if I don't know you, I know how to leverage those
communication technologies to deliver information to you. Both you and I can leverage the notion
of the binding.
However, if I grab my phone to call you or open my mail client to email you, there is still
something missing. That is the actual phone number I use to call you or your actual email address
used to reach you. The fact that you are bound to a telephone and an email account are logical
concepts, we still need additional information to reach you. And that is where the second concept
comes into play. That is the notion of the "endpoint". An endpoint is the concrete information
associated with a binding type that allows me to reach you as opposed to reaching someone else.
The endpoint information is contextual to the type of binding. A phone number is related to a
telephone binding while an email address is related to an email binding.
Returning to ODM DSI, when we build a solution we describe one or more bindings that tell ODM
DSI which sources of information to listen upon for which events. Once we have described a
binding, we have told the solution "You are able to receive events via HTTP" or "You are able to
receive events via JMS"; however, we have not yet told ODM DSI the URL path for an HTTP
event or the queue name for a JMS message. That is where we add the "endpoint"
information. For each binding we created, we must also create a corresponding endpoint definition.
The definition of these bindings and endpoints is entered in a file called a "Connectivity Definition"
which has a file suffix of ".cdef".
This file can be created within Eclipse by creating a new Connectivity Definition which is
associated with an existing solution project:
Once created, it will be located as a .cdef file within the Connectivity Definitions folder of
the solution project in Eclipse.
Within the .cdef file we define inbound and outbound endpoints and bindings. The definitions
themselves are written in a "sort of" business-like language, which is odd given that we would
expect this detailed technical file to be authored by IT staff.
The .cdef file contains both inbound and outbound bindings and endpoints.
The creation of a new .cdef file can also be found in the Solution Map:
See also:
Inbound bindings
An inbound binding describes how messages representing events arriving over either HTTP or a
JMS queue will be processed. An inbound binding describes the protocol to listen upon and also
the description of which events can be delivered to this binding. It is the endpoint that will name
the actual HTTP path or the actual JMS queue. The syntax for an inbound definition is:
define inbound binding '<name>'
[with description "<description>",]
using message format {application/xml|text/xml},
protocol {HTTP|JMS},
[
classifying messages:
if matches "<Xpath expression>"
]
{accepting any event. |
accepting events:
- <event>*. |
accepting no events.}
The classifying messages section allows us to supply an expression and have transformation rules
applied if the expression is true.
Inbound endpoints
An inbound endpoint is used to represent a source of an event. The syntax for an inbound HTTP
endpoint definition is:
define inbound HTTP endpoint '<endpoint name>'
[with description "<description>",]
using binding '<inbound binding>',
url path "<url path>"
[, advanced properties:
- 'name': "value" *
].
The url path must consist of at least two parts. For example "/x/y".
The syntax for an inbound JMS endpoint is:
define inbound JMS endpoint '<endpoint name>'
[with description "<description>",]
using binding '<inbound binding>'
[, advanced properties:
- 'name': "value" *
].
See also:
Outbound bindings
The syntax for an outbound binding is:
define outbound binding '<binding name>'
with
description "<description>",
using
message format <message format>,
protocol <JMS|HTTP>,
delivering events :
- event
- event.
Outbound endpoints
An outbound endpoint is related to an outbound binding. It describes how to transmit the outgoing
message.
The syntax for an outbound endpoint HTTP is:
define outbound HTTP endpoint '<endpoint name>'
with
description "<endpoint description>",
using
binding '<referenced binding>',
url "<endpoint url>".
For a JMS outbound endpoint, the <connection factory> is a JNDI reference to a JMS connection
factory. Similarly, the <destination> is a JNDI reference to a JMS destination (a queue or a topic).
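The <connection factory> and <destination> placeholders belong to the JMS flavor of the outbound endpoint, whose syntax listing did not survive in this text. Based on the HTTP form shown above, a plausible reconstruction (assumed shape, not verbatim product syntax) is:

```
define outbound JMS endpoint '<endpoint name>'
with
description "<endpoint description>",
using
binding '<referenced binding>',
connection factory "<connection factory>",
destination "<destination>".
```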
See also:
JMS
HTTP Bindings
For inbound, this is the URL path on the ODM DSI server to which HTTP POST requests can be
made passing in event data as the body of the message in XML format.
See also:
JMS Bindings
For inbound bindings, this will be the JMS queue.
Take care to note the separator character ('T') which separates the date from the time.
From within JavaScript, the language-supplied object called Date has a function called
toISOString() that will return a properly formatted string representation.
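As a minimal sketch (plain JavaScript, independent of DSI), the following produces such a timestamp:

```javascript
// Build an ISO-8601 timestamp; note the 'T' separator between the
// date portion and the time portion.
var ts = new Date(Date.UTC(2016, 0, 15, 9, 30, 0)).toISOString();
console.log(ts); // prints "2016-01-15T09:30:00.000Z"
```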
Transformations
For inbound bindings, we can define an XSLT style-sheet that can be used to process an incoming
XML message and transform it into a new message that is of the appropriate format for an incoming
DSI Event.
From the context menu, we can select New > Transformation File:
This will show a wizard page into which the key properties can be entered:
These include:
Solution project: The DSI solution project against which the transformation file is to be built.
Container: The folder location within the Eclipse workspace where the XSLT file will be generated.
Template: Whether the generated file transforms from XML to an event (inbound) or from an event to XML (outbound).
Event type: A list of the events defined for the solution project, one of which can be selected as the template for the transformation.
The resulting XSLT file can then be edited by someone who understands the XSLT language to
perform the transformations.
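To make the idea concrete, here is a minimal sketch of what an inbound transformation stylesheet might look like. The source element names (externalSale, amt) and the target shape (sale, amount) are invented for illustration and will differ from what the wizard generates:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical inbound transformation: rename an external XML payload
     into the element names expected by a DSI event. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/externalSale">
    <sale>
      <amount><xsl:value-of select="amt"/></amount>
    </sale>
  </xsl:template>
</xsl:stylesheet>
```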
If we send an event to an inbound connection which is not configured to accept that kind of
event, it is discarded and a log message is written to the console.
In this sample, we define a binding called "Binding1" which is going to receive XML events over
HTTP. The type of event being listened for is a "sale" event. We will listen for this on an
endpoint called "Endpoint1" which is an HTTP endpoint found at "/Sales/EP1".
define inbound binding 'Binding1'
using message format application/xml,
protocol HTTP,
accepting events :
- sale.

define inbound HTTP endpoint 'Endpoint1'
using binding 'Binding1',
url path "/Sales/EP1".
See also:
JMS
Implementing Agents
Agents are built within Insight Designer (Eclipse) through the creation of an agent project for each
agent that you desire.
The key actions that will be undertaken to build an agent will be:
Describing the rules and logic that the agent will perform when an event arrives.
When implementing an agent, you have two primary choices available to you. You can
create either a Rule Agent or a Java Agent. In both cases, you are describing what happens when an
event arrives and how it relates to the entity instance associated with (bound to) that agent. The
choice between implementing your agent as a Rule Agent or a Java Agent has a number of
considerations.
You might implement your agent as a Java Agent if:
- You need to interact with external systems accessible through Java APIs or libraries
- You are more comfortable coding in Java than in the Rule Agent language
See also:
Agents
One of the first files that we need to consider is called "agent.adsc" which is the agent
description file. The purpose of this file is to provide the following information:
What is the type of business "entity" that this agent uses as the bound entity (if any)? Note
that rule agents must have a bound entity.
What are the types of events that this agent is prepared to receive?
How do I access a specific entity instance to be associated with this agent from a specific
event type?
These definitions are made in a business-level language. When an agent project is created, a
template file called "agent.adsc" is built containing the following:
'<agent name>' is an agent related to <entity>,
[whose priority is <priority value>,]
processing events:
- <event name> [when <condition>], where <mapping> comes from <target> *
The placeholders must be completed in the editor; until we do so, the agent will be flagged as
being in error.
The first placeholder is <entity>, which describes the entity type that this agent uses as its bound
entity.
Next comes the <event name>, which names the event that triggers the processing.
Next comes the variable name that is used as the reference for the bound entity. This is the
<mapping> property.
Finally, we provide the means to access the bound entity instance from the event. This is the
<target> property.
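Putting this together, a hypothetical completed descriptor (the agent, entity, event and property names are all invented for illustration) might read:

```
'Sale Agent' is an agent related to a customer,
processing events :
- sale, where customer comes from the customerId of this sale.
```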
The values for the priority can be Low, Medium and High or a numeric value. When an event
arrives, agents with a higher priority will process an event before agents with a lower priority.
Let us now look at the following diagram. It illustrates an event arriving and three different types
of Agents in the system. The Event (like all events) has a type associated with it. In our diagram,
we say it has an event type of "X". Of the three agents, Agent A and Agent C have declared that
they are interested in being made aware of instances of event type "X". Since Agent B has not
declared such an interest, an event of type X arriving at DSI will be ignored by that agent type.
If we now further consider that DSI is managing the state of a vast number of entities, we can
think of there being an agent instance associated with each entity, responsible for updating it.
In the following diagram, think of the triangles as representing entities with their associated
identifiers and the squares representing the associated agents that are responsible for those entities.
When an event arrives at DSI and we have determined the types of agents to which those events are
to be delivered, we must now find the specific (actual) corresponding agent that is to actually
process the event. It is the agent descriptor file that describes how to locate a specific entity given
the payload that arrives with that event. Since each entity has a unique id if we can determine the
entity that we wish to work against, we can thus find the corresponding agent and we are close to
completion of this part of the story. When the agent is found, it can be delivered the event.
We thus see there is a dance at play here involving a number of players including events, agent
descriptor files, agents and entities. The agent descriptor file maps events to agents and event
content to entities. The entity is bound to the agent and hence when a specific event arrives, we can
determine which agent it should go to for processing.
The special case is when an event arrives and there is no entity yet in existence corresponding to the
incoming event data. Since there is no entity, there is no corresponding agent and since we have no
agent, how should the event be processed? The answer is that a brand new agent instance is created
that does not have an associated entity. This agent gets the event and can decide whether or not to
create the corresponding entity.
Rule Agents
A Rule Agent uses a high level (as compared to code) business rules language to describe the
processing and handling of incoming events. This language is edited within a specialized editor
within Eclipse.
To create a Rule Agent, we create a new project type instance called a Rule Agent project. We can
do this from the File > New menu:
We are then asked to enter the name of the Eclipse project to be created and select the Solution
Project which will include this Rule Agent for deployment.
Within the Eclipse workspace folder structure, the project that is created by this wizard looks as
follows:
Note that the project, when created, is flagged as containing errors. The errors will be found in a
file called "agent.adsc", which is the agent descriptor file. This file must be modified to reflect
what your agent will do and is described elsewhere.
The creation of a new Rule Agent can also be found in the Solution Map:
See also:
After completing the agent description, we can start to build out action rules. Action rules are the
individual rules that are used to describe processing upon the arrival of a corresponding event for a
Rule Agent. An Action Rule is created from the context menu:
Rules are created under the rules folder of the Rule Agent project:
When a rule is created, it can be opened in the Rule Editor within Eclipse:
A rule is composed of a variety of parts described in the specification of the rule language.
First we will look at the optional "definition part".
A definition part is the declaration of variables that exist only for the duration of the rule being
processed. You can think of these loosely as local variables.
The value assigned in a definition can come from a variety of sources including constants,
expressions and business terms.
Here are some variable definitions of constants:
definitions
set 'maxAmount' to 100000;
set 'open' to true;
set 'country' to "USA";
The next part of a rule we will look at is called the rule condition. It is composed of an "if
then else" construct. Following the "if" is a condition. If the condition
evaluates to true then the following action is performed, otherwise the action following the "else"
is performed.
The general syntax of this part is:
if
<expression>
then
<action>
Describing how expressions can be constructed will be its own section as there are many varied
considerations.
The final part of a rule is the action section. Here we define what we wish to happen based on the
outcome of expression evaluation. Think of the action as the "now do this" part of a rule.
The "if then else" nature of a rule describes what the logic will be but not when it
will be applied. To capture that information we specify which events we wish to cause the
processing of the rule.
We do this with the syntax:
when <event> occurs
For example:
when a promotion occurs
if
the salary of 'the employee' is more than 50000
then
print "He earns enough";
When a corresponding event arrives, it is processed as soon as possible. There is no delay in its
processing.
Bound entities
When an event arrives at a rule agent, we have already instructed the agent on how to find the
corresponding bound entity. However, if this is the first event associated with that entity and no
"is initialized by" statement is present in the BMD, there may not yet be a bound entity and we
may choose to create one.
At a high level, our logic would be
Creating a rule such as this (one that creates the bound entity when none exists) and giving it a
higher priority than other rules is a good idea. This will ensure that a bound entity always exists.
When we create the new entity instance, it is likely that we will want to initialize its properties.
We can do that with the following syntax:
set <boundEntity> to a new <Entity> where
the <propertyName> is <value>,
the <propertyName> is <value>,
In this situation, both the "then" part and the "else" part may end up being executed: there is a
special semantic which says that if a rule starts executing with no bound entity but ends with one,
the rule is re-evaluated with the new bound entity when it first ends.
The order of rules evaluation
When we have multiple rules in our solution, we may wish to control the order of rules evaluation.
We can do this through a property of a rule called its "priority". Each rule has a priority attribute
and if multiple rules can be evaluated when an event arrives, the rules with the higher numeric
priority value are evaluated first.
Within Eclipse, if we select a rule we can examine the "Properties View" and see and change
the property value associated with that rule:
When an event arrives and we don't already have a corresponding bound entity, then we can create
one.
The general form of this is:
set 'the variable' to a new <entity> where
the <property of the entity> is the <property of the event>;
We also have the capability to delete the bound entity from an agent. We do this by setting the
agent's bound entity variable to null.
Emitting an event
An action in a rule can emit a new event using the "emit" action. This event is then made available
to all other rules as though it had arrived externally. The emitted event will not be re-consumed by
the same agent that emitted it.
By using emitted events, we can perform a number of interesting functions.
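As an illustration of the emit action, a hypothetical rule (the event names and properties are invented, and the exact emit phrasing should be checked against the product's rule language) might look like:

```
when a sale occurs
if
the amount of this sale is more than 10000
then
emit a new large sale alert where
the customer id is the customer id of this sale ;
```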
See also:
When we are authoring rules, we can use the vocabulary and logic provided by IBM with DSI.
However there are times when we may wish to augment the vocabulary and logic. Fortunately, the
product allows us to do this very easily.
Within every Rule Agent project we find a folder called "bom". This of course refers to a "Business
Object Model". Within this folder we can create additional Business Object Models which merge
with the BOM provided at the solution level. What we define in this Rule Agent specific BOM
becomes available within the rules of the Rule Agent.
We will illustrate this with an example.
One of the actions available to us is called "print". What this action does is write string data to
the console. The "print" action expects a string as a parameter. But what if we want to send
other data types to the console such as events or entities? The simple answer is that we can't,
because they are not strings and print can only accept a string.
In a programming environment, we could "cast" the data type to a string or ask the object to return a
string representation as might be found in calling the object's "toString()" method.
So ... to illustrate, the following does not work:
when a XYZ Event occurs
then
print 'the ABC';
A syntax error is shown against the "print" action since the entity called "ABC" is not a string.
Wouldn't it be nice if we could describe our rule as follows:
when a XYZ Event occurs
then
print 'the ABC' as text ;
We can in fact do this, but we have to augment the BOM to add new constructs, in this case the
addition of "<Object> as text".
Here is how we do it.
1. From Eclipse, go to File > New > Other and create a new "BOM Entry".
2. Give the new BOM entry a name and declare it as an "empty" BOM. Make sure that you
do not use the name "model" as that is already taken. All BOMs in your project must have
distinct names.
3. Open the BOM model from within the BOM folder into the BOM editor:
6. Select the new Class and click edit to edit the settings for the class:
11. Select the new method and click "Edit" to edit the properties of the method:
12. Click the "Static" checkbox to flag the method as being static:
14. Edit the BOM to XOM mapping to return the string representation of the object.
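The BOM to XOM mapping in step 14 ultimately delegates to ordinary Java. As a sketch of the kind of static helper such a mapping could call (the class and method names here are our own, not generated by the tooling):

```java
// Hypothetical XOM-side utility backing an "<Object> as text" BOM member.
// String.valueOf handles a null Object safely, returning the string "null".
class TextUtil {
    public static String asText(Object value) {
        return String.valueOf(value);
    }
}
```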
Java Agents
We have seen that the logical idea of an agent is to be associated with an entity and to process
arriving events. When an agent is declared, we state which types of events should be able to be
delivered to it. We have spoken about an agent type called the Rule Agent but there are others. One
of the other types available to us is called the Java Agent. A Java Agent is an implementation of a
Java class that will be instantiated and called when an event arrives that is defined as being of interest to it.
Similar to the Rule Agent, the Java Agent also has an agent descriptor file which describes which
events it will listen upon. When an instance of such an event arrives, a new instance of the Java
Agent class is created and the event is passed to the process(Event) method of that agent.
What the agent then does is defined in the Java application logic of the class as created by a Java
programmer.
Unlike a Rule Agent which has to be associated with an entity, a Java agent does not have to be.
This means that a Java Agent is effectively stateless.
Within the Java Agent logic, calls can be made to update external systems of record however this is
not a recommended practice. The reason for this is that events are processed as a transaction and a
single arriving event could be presented to multiple agent instances. If any one of those agents fails
then the transaction as a whole is considered failed and all updates performed by all the agents
touched by the event are rolled back. However, if the call to the external system has already
committed, then it is possible that the call will be made multiple times with potentially undesired
results.
It is recommended that if an update to an external system is required, an event be published to
ask for that update to be performed.
See also:
To build a Java Agent, we start by creating an Eclipse Java Agent project to house the artifacts.
From the Decision Insight perspective, we can select the File > New > Java Agent
Project menu entry:
This will launch the wizard to create a new Java Agent project instance.
Once completed, a Java Agent project will have been built for us. Within the "src" folder within
the project we will find the generated Java Agent source file. This is the file we need to edit to add
our logic.
The creation of a new Java Agent can also be found within the Solution Map:
A configuration file called the agent description file must next be edited.
If the Java Agent is not related to a bound entity, we can declare such with:
<Agent> is an agent,
processing events :
- <An event>
See also:
A skeleton Java file is built for us by the Eclipse wizard when we create a new Java Agent project
instance:
package javaagent1;

import com.ibm.ia.common.AgentException;
import com.ibm.ia.agent.EntityAgent;
import com.ibm.ia.model.Event;

public class MyAgent extends EntityAgent {
    @Override
    public void process(Event event) throws AgentException {
        // TODO Add logic to handle the event
    }
}
We will code the body of the process(Event) method to implement the custom logic for this
agent.
The Java class we are building extends an IBM supplied class called EntityAgent. This
provides the environment in which we are working. The architectural model of an agent is that it
can be associated with an entity.
When the process(Event) method is called to process the arriving event, the parameter passed
in is an instance of Event. The object actually supplied will be an instance of one of the event
types that this agent declared it supports. We can use the Java instanceof operator
against the supplied event to determine which specific type of event has actually been supplied.
Once we know the actual type, we can cast the incoming parameter to an instance of the actual
event type received.
Here is a sample:
public void process(Event event) throws AgentException {
    JEvent jEvent;
    if (event instanceof JEvent) {
        jEvent = (JEvent) event;
    } else {
        printToLog("Not a JEvent");
        return;
    }
    ABC thisABC = getBoundEntity();
    if (thisABC == null) {
        thisABC = createBoundEntity();
    }
    thisABC.setKey(jEvent.getEventKey());
    thisABC.setFieldABC1(jEvent.getFieldJ2());
    updateBoundEntity(thisABC);
    printToLog("MyAgent Java finished");
} // End of process()
See also:
Within a Java Agent, we commonly wish to create new instances of events, concepts and entities.
We can achieve this through the notion of the ConceptFactory. A ConceptFactory is a Java object
which has construction methods for each of the events, concepts and entities defined within a single
BMD.
For example, if we have a BMD that defines a concept called "MyConcept", an event called
"MyEvent" and an entity called "MyEntity", then we will find that a new class called
"ConceptFactory" is created within the package for the BMD. This ConceptFactory will
have methods called:
createMyConcept()
createMyEvent()
createMyEntity()
There will be a variety of signatures for these methods. Upon calling these methods, an instance of
an object representing the corresponding item will be returned.
Within a Java Agent, one gets the ConceptFactory itself by using the Agent-defined method
called getConceptFactory(), which takes the Class representing the ConceptFactory
contained in the BMD.
Things get a little interesting with the objects returned by a ConceptFactory based on their
definitions. If a property in an object is a List then we have extra functions. Specifically, a list
property will have:
setXXX(List)
List getXXX()
addTo_XXX(item)
removeFrom_XXX(item)
clear_XXX()
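This accessor pattern can be sketched as a plain Java class (the property name "tags" and the element type String are invented for illustration; the real classes are generated from the model):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the accessors a generated concept with a list-valued
// property "tags" would expose (names assumed, not generated output).
class MyConcept {
    private List<String> tags = new ArrayList<>();

    public List<String> getTags() { return tags; }
    public void setTags(List<String> tags) { this.tags = tags; }
    public void addTo_tags(String item) { tags.add(item); }
    public void removeFrom_tags(String item) { tags.remove(item); }
    public void clear_tags() { tags.clear(); }
}
```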
To emit a new event we can call the emit(Event) method. This of course takes as a parameter
the event that we wish to publish. We can create a new instance of such an event using a
ConceptFactory.
We can retrieve the entity using the getBoundEntity() method. If the agent does not yet have
a bound entity, the resulting reference returned will be null. We can use this to inform our code
that it should create a new instance of an entity using the createBoundEntity() method
(make sure you remember to call updateBoundEntity() to complete the creation).
It is also permissible for an agent to simply not have an associated entity. This is considered an
unbound agent. In this case, we define the Java class as extending "Agent" as opposed to
"EntityAgent".
A specific Entity instance object has setter and getter methods for each of the properties defined to
it. These are get<Property>() and set<Property>(value).
If a bound entity instance is modified or created, we must use the
updateBoundEntity(Entity) method to commit the changes.
If we wish to disassociate a bound entity from the agent, we can use the
deleteBoundEntity() method.
Here is an example of accessing an entity which, if it doesn't exist, is created:
public void process(Event event) throws AgentException {
    System.out.println(this.agentName + ": Serialized event: " +
        getModelSerializer().serializeEvent(DataFormat.GENERIC_XML, event));
    NewClass newClass = (NewClass) event;
    Session session = (Session) getBoundEntity();
    // Test to see if we have an existing entity
    if (session == null) {
        System.out.println("No session entity!");
        session = (Session) createBoundEntity();
        session.setSessionName(newClass.getSessionName());
        updateBoundEntity(session);
        System.out.println("Created a new Session: " +
            getModelSerializer().serializeEntity(DataFormat.GENERIC_XML, session));
    } else {
        System.out.println(this.agentName + ": Existing Serialized entity: " +
            getModelSerializer().serializeEntity(DataFormat.GENERIC_XML, session));
    }
} // End of process
JavaAgent lifecycle
From within an instance of a JavaAgent, we can determine metadata about it in a variety of
ways.
A Java Agent is implemented as an OSGi bundle and follows the technical rules associated with
OSGi. Ideally, we don't have to know the programming details of OSGi but we do need to
understand a few points. Unlike a "normal" Java application which can reference anything on a
classpath, OSGi bundles can only reference packages that are explicitly declared as being necessary
for the operation of the bundle. This may initially sound like added complexity but the reality is
that it is a good thing. By explicitly stating that a bundle has a dependency on package XYZ then,
when the bundle is loaded, the runtime can validate that XYZ is available to it. The alternative is
that a missing implementation of XYZ would only be detected at runtime, when the code first
attempts to reference it.
To build an OSGi bundle, we need to create an Eclipse OSGi Bundle Project:
In the next page of the wizard, we provide a name for the Eclipse project. In our case we are calling
it "MyBundle". We want to make some changes from the default. Specifically we do not want to
associate the bundle with an application so we un-check "Add bundle to application". In
addition, we want to add support for OSGi Blueprint so we check the box for "Generate blueprint
file".
The next page of the wizard talks about the project structure and we wish to leave that alone.
The final page of the wizard allows us to provide some core details of the OSGi bundle
configuration. An important change here is to remove the "Bundle root" definition. This
changes the location of the OSGi configuration data in the generated project.
The project generated at the conclusion of the wizard should look as follows:
We can now implement the Java code within our project. Here we will build a simple example.
Create a package called "com.kolban" and create a Java interface within called "Greeting".
package com.kolban;
public interface Greeting {
public String greet(String name);
}
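The implementation class that the blueprint configuration later references (com.kolban.impl.GreetingImpl) does not survive in this text. Here is a plausible minimal version; the interface is repeated without package declarations so the sketch is self-contained, and the greeting text is an assumption:

```java
// Greeting interface repeated here (sans package) so the sketch is
// self-contained; in the project it lives in the com.kolban package.
interface Greeting {
    String greet(String name);
}

// Plausible minimal implementation; in the project this class is
// com.kolban.impl.GreetingImpl. The exact greeting text is an assumption.
class GreetingImpl implements Greeting {
    @Override
    public String greet(String name) {
        return "Hello " + name;
    }
}
```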
We have now completed the code level implementation of our Java function. We could easily
extend this by adding additional interfaces and implementation classes to this project. We will stop
here simply because we are merely illustrating a technique.
What remains in this project is to define what is exposed by the OSGi bundle that this module
implements. The nature of OSGi is to hide implementations. What then does this service wish to
expose? The answer is the interface only.
We want to open the MANIFEST.MF file contained in the META-INF folder using the Eclipse
manifest editor.
Next we switch to the Runtime tab and define which of the Java packages we are exposing from this
bundle. In our case it will be "com.kolban".
We must also define that the "bin" folder of the build will be included in the Classpath of the
bundle. In the Classpath area, click Add.. and select "bin/":
Our next task is to modify the build.properties. This instructs Eclipse how to build our
solution. The easiest way to achieve this is to switch to build.properties and edit the content
to look as follows:
At this point, were we to install this OSGi bundle into an OSGi framework, users would be able to
retrieve the interface called com.kolban.Greeting. A skilled reader might at this point ask
"What use is getting an interface when I need access to an implementation?". We could have also
exposed the "com.kolban.impl" package, but this defeats the value of OSGi, which is to ensure
that only the logical function is exposed and not the implementation details. From an OSGi
standpoint, what we now want is an OSGi service that will return us an implementation when
needed. This is where the OSGi Blueprint story comes into play.
Open the OSGI-INF/blueprint/blueprint.xml file in the Eclipse OSGi Blueprint editor:
Our first step will be to define a bean that refers to our implementation of the service we wish to
expose:
In the new blueprint folder, create a file called "blueprint.xml". The content of this file
should be:
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <bean id="GreetingImplBean" class="com.kolban.impl.GreetingImpl" />
  <service ref="GreetingImplBean" id="GreetingImplBeanService"
      interface="com.kolban.Greeting"></service>
</blueprint>
This instructs the OSGi runtime to create a service (id "GreetingImplBeanService") that exposes
an interface of type "com.kolban.Greeting" and that, when requested, will construct and return
an instance of "com.kolban.impl.GreetingImpl". This declarative "magic" is the goodness
provided by OSGi.
And that concludes the construction of our OSGi bundle for usage in other projects. If you are a
skilled Java programmer and also knowledgeable in OSGi, these steps make sense. I anticipate that
many folks will be new to OSGi development when approaching building DSI solutions. If this
recipe is followed, then chances are good that you will be able to carry on without much more OSGi
knowledge. However, I do recommend studying some more OSGi as your time permits. The
likelihood is that you won't actually use any more than what has been described here but I feel that
if you understand more about what you are building, you will just "feel" better about it all.
So now that we have built our OSGi module, how do we deploy it to the DSI runtime? There are a
few ways to achieve that; the one that we will illustrate first is the simplest. All we need to do is
pick the solution that will use it and include our module in its Project References:
When the solution is deployed, this will now bring our bundle in with it.
Finally, we come to the payoff. We can now create a Java Agent and in that Java Agent actually
leverage our new bundle. Because a Java Agent is itself an OSGi Bundle, we must edit the
MANIFEST.MF of the Java Agent and declare that we are importing the "com.kolban" package:
We can now code a call to our service from the Java code contained within our Java Agent. Here is
an example of using such:
import org.osgi.framework.BundleContext;
import org.osgi.framework.FrameworkUtil;
import org.osgi.framework.ServiceReference;
import com.ibm.ia.agent.EntityAgent;
import com.ibm.ia.common.AgentException;
import com.ibm.ia.model.Entity;
import com.ibm.ia.model.Event;
import com.kolban.Greeting;
See also:
OSGi
When building a Java Agent, there are likely going to be times when your custom Java code wishes
to leverage code contained in JARs that are not part of the Liberty environment. For example, you
may wish to use the many Apache Commons libraries. Unfortunately, because of the OSGi
environment one can't simply "place the JAR" somewhere and hope that it will be found. The
following recipe illustrates how to use 3rd party JARs with your Java Agent.
First, add the JAR into your Java Agent project. By this I mean literally add the JAR into the
folder structure of your project. Next, you want to add the JAR to your project's build path.
Finally, we need to update the MANIFEST.MF to add the JAR to the Classpath:
Having just looked at building reusable OSGi services and seeing how we can invoke those from a
Java Agent, we can now look at another interesting way in which they can be used.
ODM DSI is related to the other ODM family of products including the rules engines. They share
some common concepts such as the Business Object Model (BOM). In ODM, one can define a
BOM and relate that to a XOM that implements code. If we squint a little, we can just about see
that OSGi services are interfaces to functions in much the same way as the methods of a Java
class. Thinking along those lines, it is possible to create a new BOM project that references the
OSGi services we have defined and then leverage the BOM language in the rules of a DSI Rule
Agent.
For example, imagine we have a piece of Java code that has the following signature:
int randomNumber(int lower, int upper)
When called, it returns a random number between lower and upper inclusive. Wouldn't it be great if
we could formulate a DSI Rule Agent rule that might say something like:
set the assigned space of the car to a random number between 1 and 10;
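As a concrete sketch, the Java side of such a service might look like the following. The method name and signature mirror the one above; the class name and everything else are assumptions for illustration:

```java
import java.util.Random;

// Hypothetical XOM implementation backing the "random number" verbalization.
public class RandomService {
    private static final Random RANDOM = new Random();

    // Returns a random number between lower and upper, inclusive.
    public static int randomNumber(int lower, int upper) {
        return lower + RANDOM.nextInt(upper - lower + 1);
    }
}
```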
Let us look in more detail and see how we can achieve that.
At a high level, the steps involved will be:
The creation of an ODM Rule project (this is not the same as a Rule Agent project).
Let us start with the creation of a new Rule Project. We will find this in the Decision Server
Insights set of projects. Note that there is no quick way to create a new project of this type; it does
not show up in the New Project entries of the Solution Explorer context menu.
Creating a new Rule Project begins a quite extensive set of wizard pages which we will show in the
following pages. The first page asks for a template for the new rule project. We only have one
choice here which is a "Standard Rule Project".
We are now asked to give a name to our new rule project. Choose what is appropriate to yourself.
Next we are asked what project references this new rule project should have. At this point we do
not select anything.
A BOM can be related to a XOM and here we specify the project that contains our OSGi service.
Next we are asked about something called the dynamic execution object model. To be honest, I
have no idea what this means but for our purposes, we can simply skip over it.
We have the opportunity to name folders in our new project that will be used for distinct purposes.
We are happy with the defaults.
At the conclusion of this page, we will have created our new project. We must now open the
properties of this project and change the "Rule Engine" property. There are two choices and the
default appears to be "Classic rule engine". We must change this to "Decision engine".
Now that we have a BOM project that can act as a container for our BOM artifacts, it is time to
create a BOM entry. Again this has to be performed through the File > New menu as there is no
quick create in any of the context menus for this option.
We can keep the defaults which specify that we are going to create a BOM from a XOM.
Since we wish to create the BOM from a XOM, we need to tell the project about that XOM. Click
on the Browse XOM... button to bring up our choices:
From the choices we will see the OSGi project that we referenced during the construction of the
Rule Project.
Now that we have asked the tooling to introspect the OSGi project, we are presented with the Java
classes contained within it so that we can determine which ones to expose to the business user. We
should select any interfaces that are exposed as OSGi services and that we wish to make available.
Having picked our interfaces, we now pick the methods within those interfaces to expose:
The result of all of this will be the final rule project that will look as follows:
We now need to open the BOM model and, for each exposed class, map it to its service name by
adding a new custom property named "OSGi.service" whose value is the OSGi service name.
For the methods that we exposed, we need to flag them as static and change the verbalization as
appropriate.
We have now completed the steps necessary to allow us to use the new BOM language and what
remains is to actually use it. If we pick a Rule Agent project and add our Rule Project as a
reference:
We will find that the verbalizations described in our new BOM are usable within our Rule Agent
language:
See also:
developerWorks - Simplify complex code with OSGi services in Decision Server Insights rules - 2015-03-25
At runtime, if the Java Agent is not behaving itself, we have some options for debugging.
We can insert logging statements. The EntityAgent class provides printToLog(String)
which will log the content to the WAS logs.
During development, we can log/dump the value of an entity using the model serializer. For
example:
System.out.println("Serialized entity: " +
getModelSerializer().serializeEntity(DataFormat.GENERIC_XML, entity));
System.out.println("Serialized event: " +
getModelSerializer().serializeEvent(DataFormat.GENERIC_XML, event));
If we believe that the agent may be throwing exceptions, wrap your logic in a try/catch and
catch the exception yourself. You can then log it and re-throw it. This will give you a
stack trace showing the exact location of the problem.
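The pattern is the ordinary Java catch-log-rethrow idiom; here is a minimal sketch (the method names are illustrative only, not DSI APIs — in a real Java Agent the try block would wrap the agent's event-processing logic and the logging would go through printToLog()):

```java
public class RethrowExample {
    public static void process() throws Exception {
        try {
            risky();
        } catch (Exception e) {
            // Log the exception with its stack trace before re-throwing so the
            // exact failing location is captured in the logs.
            e.printStackTrace();
            throw e;
        }
    }

    // Stand-in for agent logic that can fail.
    static void risky() throws Exception {
        throw new IllegalStateException("boom");
    }
}
```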
If the Java Agent uses Java packages outside the default, these packages must be registered in the
Java Agent's MANIFEST.MF.
By clicking the Add button, we are prompted for packages (not JARs and not Classes but
packages) to be available from our Java Agent.
Java functions mapped to rule language
With the ability to expose Java functions as rule language, we can now explore how things map.
If a parameter of a Java method is a java.util.List, an array or a
java.util.Collection, then we can pass in a DSI collection.
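For example, a service method like the following (a hypothetical signature, not taken from the product documentation) has the kind of List parameter that a DSI collection could be passed to:

```java
import java.util.List;

// Hypothetical exposed service method; its List parameter illustrates a
// signature that a DSI collection can bind to.
public class Statistics {
    public static double average(List<? extends Number> values) {
        if (values.isEmpty()) {
            return 0.0;
        }
        double sum = 0.0;
        for (Number n : values) {
            sum += n.doubleValue();
        }
        return sum / values.size();
    }
}
```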
Attaching a source level Debugger
Eclipse has the capability to perform source level debugging. This means that we can set
breakpoints within the Java source code of a Java Agent and when it is reached, the debugger gets
control and shows us that we have reached that point. We can also examine (and change) the values
of variables in effect.
To perform this task, we must start the WLP server in debug mode. We can do this from the Servers
view.
Once the server has been started (from Eclipse) there is continued communication between Eclipse
and the server. Any time we add a breakpoint on a Java Agent source statement, execution will
pause when the breakpoint is reached.
Renaming a Java Agent class or package
When you create a Java Agent you are prompted for the package and class name of that agent.
Should you decide that you want to rename it later, take care. Refactoring it in Eclipse appears to
show no errors and everything compiles cleanly, but errors will surface. The name of the package
and of the class are also contained in the agent descriptor file and in the blueprint.xml file
associated with the agent. Currently, these need to be manually edited so that references to the old
package/class reflect the newly chosen package/class.
When created, one specifies a name for the aggregate and which BOM project (within a solution) it
will live within.
The definition of a new Global Aggregate can also be found within the Solution Map:
An aggregate definition is mechanically created in files with file type ".agg" found within the
aggregates folder of a BOM project. When we open an aggregate definition file, the Eclipse
aggregate editor allows us to define the logic that will be used to calculate the aggregate.
Now we can start creating the aggregate definition itself. There is one aggregate definition per file.
The general syntax for an aggregate definition for an event aggregate is:
define '<aggregate name>' as <expression> [, where <event filter>]
while the general syntax for an aggregate definition for an entity aggregate is:
define '<aggregate name>' as <expression> [, where <event filter>,]
evaluated <evaluation schedule>.
The aggregation is primarily defined by the expression which is used to describe how the multiple
values are to be combined. The aggregation expression functions are pre-defined and are:
the total <object collection> - The sum total of a numeric field across objects
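Putting the general syntax together with the kind of filter shown later, a hypothetical event aggregate using "the total" might read as follows (the event type "sales event" and its "amount" attribute are assumptions for illustration, not part of any real BOM):

```
define 'coffee total' as the total amount of all sales events,
where the type of the sales event is 'Coffee'
```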
The evaluation schedule for entity aggregates defines when the entity aggregate value will be
recalculated. It has a fearsome syntax diagram which accommodates many permutations.
Remember that a schedule is not needed for aggregations of events as those aggregations are
recalculated every time an event arrives.
However, in general we can specify either a date/time governed by month, day of the month, day of
the week or hour of the day or combinations thereof. In addition, we can specify simple repeating
intervals such as every minute, hour, day or week or multiples thereof.
Here are some examples:
An aggregate value must always be numeric. It appears that there are also restrictions on what
may be used to calculate aggregate values. At this time, it appears that the aggregate can only be
built from the values of entity properties or event properties and not upon any computation
associated with them. To make this clear, we can sum, average and calculate the min and max of
properties but not computed properties.
The implication of this is that some items that we think we should be able to aggregate can't be. For
example, if a property field is of type duration, that can't be converted into a number and then
aggregated.
See also:
The "where" clause allows us to filter in or out events for inclusion in the aggregate calculation.
For example, a sales event at a coffee shop may be for coffee or cakes. If we wanted to aggregate
the total of coffee sales, we may wish to define:
define 'coffee_total' as ...
where the type of sales event is 'Coffee'
An interesting question arises if we consider asking for an aggregate value before we have
accumulated enough information. For example, if we have newly started a solution and we wish to
determine if the current sale is close to the average, what does it mean if we have no data about the
previous sales yet calculated?
To answer this question, event aggregates have the notion of a default value which will be used
whenever an aggregate value is needed and we don't (yet) have enough data.
define 'average wait time' as ...
, defaulting to 3 if there is less than 30 minutes of event history
Once there is sufficient data, the default value will no longer be used and the actual calculated value
will take effect.
The evaluation expression can be built in a very wide variety of ways. The following syntax
diagram can be navigated to show different permutations.
The "where" clause allows us to filter in or out entities for inclusion in the aggregate calculation.
For example, if we want to know the average balance of gold customers
define 'average gold balance' as ...
where the 'customer score' is 'Gold'
Since a global entity aggregate is calculated periodically, we can now introduce the concept of an
aggregate calculation "Job". The execution of a "job" is what we call the act of recalculating the
aggregate value. If we were to examine the DSI server messages, we might see the following
produced each time a job executes:
CWMBG0466I:
CWMBG0828I:
CWMBG0807I:
CWMBG0815I:
CWMBG0209I:
CWMBG0222I:
CWMBG1003I:
CWMBG0228I:
CWMBG0229I:
CWMBG1004I:
CWMBG0223I:
CWMBG0210I:
CWMBG0813I:
Note: For my taste, these messages being written into the messages files each time a job runs is
way too much, and it should ideally be possible to switch them off. Personally, I don't need to see
that something I expected to happen has indeed happened without any problems. I would expect to
see messages logged if something bad happened, such as an exception or other failure, but I don't
particularly want to see my log cluttered when all works as desired. I like my logs to be records of
one-time notifications or errors.
If one doesn't want an entity aggregate computed on a periodic basis, one can ask that the
calculation job be run explicitly. One way to achieve that is through a custom Java Agent. The Java
Agent API provides a method called getJobService() which returns an instance of
com.ibm.ia.global.jobs.JobService. This object has a method on it called
submitJob(name) which will queue that job for asynchronous execution. Note that this is a
Java Agent API and is not available to an external Java app. If you need to invoke job control from
an external app, you must use the JMX APIs.
DSI also provides a rich command called "jobManager" which can be found in the
<DSI>/runtime/ia/bin folder. This command has a variety of options including:
update
stop
Let us take a moment to look specifically at the "jobManager run" command. Like many of
the jobManager functions, its first two mandatory parameters are the job name and the solution
name.
The aggregate job names can be found in the "globalQueries.var" file inside the aggregates
folder of the BOM project:
What is important here is the mapping between the Name property and the Verbalization. For the
purposes of DSI, the Verbalization is the name of the aggregate you modeled in Eclipse. This is the
name known to the developers and designers of a solution. The Name property is what we are
going to call the "Job Name" when we think of jobs. There is an encoding or mapping that will take
us from a verbalization to a job name but that is not important here. Instead, think of it like this:
"I have created an aggregate definition in a '.agg' file and that aggregate definition has a name. If
I now open the globalQueries definition file, I can find that name in the 'Verbalization'
column and from there read back to the 'Name' column to now find the corresponding 'Job Name'"
As to why we have this level of indirection, I have no idea. If I were to guess, it is because a
verbalization name is meant to be high level, yet for some internal technical reason the Job Name
can't use the same characters (eg. spaces or underscores) that are allowed in the verbalization. It
may be awkward to have a level of indirection, but it isn't a show stopper and over time we may
learn more about why we have this state of affairs.
Since an aggregate definition is modeled in a BOM project and a BOM project is contained within a
Note that the command returns immediately as the job is scheduled to run. The command doesn't
wait for the job to complete.
To determine the outcome or status of the job, we can run "jobManager info <jobName>
<solution>". By default it will return the details for the last instance of that type of job
submitted but we can also supply a jobId that is returned when a job is started to examine a specific
job instance.
The "jobManager list" command lists previously submitted jobs including their jobIds and
their status.
Special characters (eg. '_') are encoded as '$xNN$' where NN is the codepoint in decimal.
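A sketch of what such an encoding could look like in Java follows. This is only an illustration of the stated scheme, not IBM's actual implementation; in particular, which characters count as "special" is an assumption here:

```java
// Encode any non-alphanumeric character as $xNN$, where NN is the decimal
// codepoint, per the scheme described above.
public class JobNameEncoder {
    public static String encode(String verbalization) {
        StringBuilder sb = new StringBuilder();
        for (char c : verbalization.toCharArray()) {
            if (Character.isLetterOrDigit(c)) {
                sb.append(c);
            } else {
                sb.append("$x").append((int) c).append('$');
            }
        }
        return sb.toString();
    }
}
```

For example, '_' has decimal codepoint 95, so "coffee_total" would encode to "coffee$x95$total".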
projects that relate to ODM DSI. If we have a workspace which contains many projects, things can
become cluttered very quickly.
Within that dialog, we can check the box next to "Closed projects". What this says is that any
projects which are closed will not be shown in the view. We can then close any projects that we
aren't working on at the moment. These projects remain in the workspace but they are "hidden"
from the current view.
To close projects, we can select one or more of them and from the context menu, select "Close
Project".
The projects will be closed and the resources related to them unloaded from Eclipse. The Solution
Explorer view will then update to no longer show them. If we want to reveal them again, we can
un-check the filter that hides closed projects and re-open them.
When opening a Solution project, we will also be asked if we wish to open related projects. This
will restore all the projects related to a solution.
Initialization Extensions
In both these cases, they are implemented as Java code. This Java Code lives inside yet another
ODM DSI Eclipse project type called an "Extension Project". This type of project can be
import com.ibm.ia.model.Event;
import com.ibm.ia.extension.EntityInitializer;
import com.ibm.ia.extension.annotations.EntityInitializerDescriptor;
import com.kolban.ENTITY1;

@EntityInitializerDescriptor(entityType = ENTITY1.class)
public class EXT1 extends EntityInitializer<ENTITY1> {

    @Override
    public ENTITY1 createEntityFromEvent(Event event) throws ComponentException {
        ENTITY1 entity = super.createEntityFromEvent(event);
        // TODO Initialize the attributes of the entity that depend on the event
        return entity;
    }

    @Override
    public void initializeEntity(ENTITY1 entity) throws ComponentException {
        super.initializeEntity(entity);
        // TODO Initialize the attributes of the entity
    }
}
The class contains two methods that can be fleshed out. These methods are called:
createEntityFromEvent
initializeEntity
The first method is createEntityFromEvent(). This is passed in a copy of the event that is
causing the entity to be created and can be used to construct an entity from the content of the event.
The method is responsible for building and populating the new entity which is returned.
The second method is called initializeEntity() and is passed a reference to the entity built
in createEntityFromEvent. The entity can be further updated.
During development, we can log/dump the value of an entity using the model serializer. For
example:
System.out.println("Serialized entity: " +
getModelSerializer().serializeEntity(DataFormat.GENERIC_XML, entity));
See also:
import dptest.DPTEST;
import dptest.DPTESTRequest;
import dptest.DPTESTResponse;
import dptest.ConceptFactory;
The way to read this is that there is a method called processRequest() that takes as input a
Java object called request. The method is responsible for returning a response object. The
request object contains the input values defined in the Data Provider definition in the BMD,
while the response object contains the return values defined in the Data Provider definition in the
BMD.
It is up to us how we choose to implement this Java code.
See also:
Deploying a solution
After we have built a solution, we will want to deploy it to a DSI server for execution and testing.
The overview of this procedure is that we export the solution from Eclipse into an archive file.
The archive packages the solution as "OSGi bundles", which are the units deployable to the DSI
server. We can export either a complete solution or just a single agent. The file type for an
exported archive is ".esa".
Exporting a solution
To export a solution, have a file system directory at hand into which the archive file will be stored.
From the Eclipse environment, select Export and then choose Insight Designer >
Solution Archive.
Supply the name of the solution you wish to export and the directory and file into which the
solution archive will be written. I recommend that the name of the file be the same as the name of
the solution:
The result of the export will be the archive file which has the file suffix of ".esa" which is an
acronym of "enterprise subsystem archive".
The export of a solution can also be found within the Solution Map view illustrated next:
We can also export a solution using a command line statement. The format of this is:
eclipse -data <workspace> -application com.ibm.ia.designer.core.automation -exportSolution <Solution Name> -esa <archiveFileName>.esa
This command is useful for un-attended or automated deployments but is not one I recommend for
normal development as the execution takes much longer than the other techniques.
When we work within Eclipse to build solutions, we will find that we have a number of Eclipse
projects. We will have projects for:
Rule Agents
Java Agents
Solutions
BOMs
The solution project is identified as a solution project through an icon decoration:
However a question that should be on our minds is "Which Eclipse projects comprise our
solution?". If we have an Eclipse workspace in front of us, we will see many projects but it won't
be clear which ones are related to any given solution.
To determine which Eclipse projects are associated with a solution, we can open the "Project
References" on the properties of the solution. This will show all the Eclipse projects available
to us and show, by check-box marks, which ones are included in the solution:
This opens a wizard called "Configure and Deploy" where we can create a deployment
configuration. This is an artifact that remembers our settings so that when we wish to redeploy in
the future, we don't have to re-enter as many details. We are first asked to give the deployment
configuration a name and also to supply the path to the "Administrator toolkit". This is simply the
path to <DSI Root>/runtime/ia/bin. I am not sure why we are asked for this; I would have
thought it would be a constant.
The next page of the wizard asks for the connection properties used to reach the DSI server:
These are the same properties used by the DSI TestDriver java classes.
Notice also the first check box. This allows us to specify a properties file which we may have
pre-prepared that contains all of our details. Personally, I recommend getting into the habit of
using that.
Following completion, the solution project will be exported and immediately deployed for
execution. A new entry will be seen within the solution folder which is the deployment
configuration:
For subsequent redeployments, we will find the previously defined configuration within the context
menu for the Deploy action:
You should not assume the solution is ready immediately after deployment; wait for the message:
CWMBD0060I: Solution <Solution Name> ready.
Arguments                     Description
delete "${string_prompt}"     Delete a solution
list                          List solutions
list local                    List solutions
stop "${string_prompt}"       Stop a solution
undeploy "${string_prompt}"   Undeploy a solution
For advanced users, the question of "What happens when we deploy a solution" is a valid one.
Knowledge here can aid in debugging of all sorts and is likely going to be needed eventually. To
fully understand what happens one needs to understand WebSphere Liberty to some degree.
First, the files that comprise the solution are extracted from the ".esa" file. These are stored in the
directory called:
<DSI>/runtime/solutions/lib
The JAR files stored there have names of the following forms, where the trailing numbers are a
version and timestamp:
<Solution>.<Agent Name>_numbers
<Solution>.modelExecutable_numbers
<Solution>.solutionbundle_numbers
There is nothing in these files that you should consider modifying yourself. They are described
only so that you know that they exist and can validate an install or cleanup.
The next file of interest to us is:
<DSI>/runtime/solutions/features/<Solution>-<Version>.mf
This is a Java manifest file. Again, it should never be hand modified. However, reading it we will
find an entry called "Subsystem-Content" which seems to map to the JAR files and seems to show
what files actually constitute the solution. This could be useful if you knew a solution name and
wanted to validate that all the expected implementation files were present.
Finally there are changes made to the Liberty master server.xml file located at:
<DSI>/runtime/wlp/user/servers/<serverName>/server.xml
Two changes are made. First, in the <featureManager> stanza, a new entry is added for the newly
installed solution. It will have the format:
<feature>solutions:Solution:Version</feature>
Second, an <ia_runtimeSolutionVersion> entry is added that records the solution name and its
current version.
The act of undeploying a solution does not delete the files but merely removes the entries from
server.xml. If we have previously undeployed a solution and want that solution restored without
changing the files, we can run:
solutionManager deploy local <fileName.esa> --activateOnly=true
The addition of the --activateOnly=true flag causes the solution to be deployed without
changing the solution implementation files.
To delete the solution implementation files, see the solutionManager delete command.
See also:
Deleting a solution
Redeploying a solution
During development, we may wish to make changes to a solution and redeploy them for retesting.
If we export a new solution archive, we can redeploy the solution with the command:
solutionManager redeploy <solution name>
Running this command logs some console messages. Be sure to wait for the command to
complete before attempting additional work. An example of the messages might be:
Solution successfully stopped: MySolution
Solution successfully undeployed for server: cisDev
Deleted MySolution-0.0.mf
Solution successfully deleted: MySolution-0.0
You must use the "--clean" option when restarting servers
Server configuration file successfully updated for server: cisDev
You should not assume the solution is ready after redeployment until the message:
CWMBD0060I: Solution <Solution Name> ready.
Stopping a solution
A solution can be stopped using the following solutionManager command:
solutionManager stop <solution name>
Upon success, a confirmation message is displayed.
This script also has properties:
--host=name
--port=value
--username=value
--password=value
--trustStoreLocation=value
--trustStorePassword=value
Undeploying a solution
A solution can be un-deployed using the solutionManager script.
solutionManager undeploy local <solution name>-<version>
The name of the solution must include the version number. Before a solution can be undeployed, it
must first be stopped.
Upon a successful un-deploy, the following message is displayed:
Solution successfully undeployed for server: <server name>
Following an un-deploy, the Liberty server.xml has its <featureManager> entry and its
<ia_runtimeSolutionVersion> element removed. The physical deployed files found in:
<DSI>/runtime/solutions/lib
remain in place.
This script also has properties:
--host=name
--port=value
--username=value
--password=value
--trustStoreLocation=value
--trustStorePassword=value
See also:
Stopping a solution
Deleting a solution
When one deploys a solution, a set of files are placed into WLP directories so that it may read and
use them. The following command will delete the files corresponding to the named solution.
solutionManager delete <Solution>-<Version>
The files are removed from the following directories:
<DSI>/runtime/solutions/lib
<DSI>/runtime/solutions/features
Running this command lists the files that were deleted. For example a typical output may be:
Deleted Basic.solutionbundle_0.0.0.20150106134921.jar
Deleted Basic.modelExecutable_0.0.0.20150106134921.jar
Deleted Basic.Basic_Rule_Agent_0.0.0.20150106134921.jar
Deleted Basic-0.0.mf
Solution successfully deleted: Basic-0.0
You must use the "--clean" option when restarting servers
Notice the indication to start the server in clean mode. If you are starting the server through
Eclipse, there is an option that will cause the appropriate start mode on next start:
If you find yourself opening lots of Windows Explorer windows and navigating to these folders to
delete files, consider installing the Eclipse plugin called "Remote System Explorer End-User
Runtime". Once installed, you can open an Eclipse view called "Remote System Details". This
allows one to view a file system folder (local or remote) and perform actions on files such as
delete and rename. The benefit is that you can perform a variety of file manipulation tasks
without ever leaving Eclipse.
The following is a screen shot of the Remote System Details view in action:
Deploying agents
When we deploy a solution, all the agents associated with that solution are also deployed.
However, there are times when we wish to simply update the solution with new or modified agents.
We don't want to replace the whole solution. We can achieve this finer grained modification by
exporting a file that contains just a single agent project and then deploy just that agent project.
The export of an agent archive can also be found within the Solution Map:
See also:
Deploying a solution
See also:
I use Eclipse to edit this XML file and can access it immediately from the Servers view after
having pointed/defined a WLP server instance.
Once you have the file open for editing, there are two areas that you want to look at. The
first is the <featureManager> container. If you have solutions deployed that you want
to get rid of, delete the lines that reference them. They will be of the form:
<feature>solutions:Solution Name-Version</feature>
The second set of entries in the file are those that have the following format:
<ia_runtimeSolutionVersion currentVersion="Solution Name-Version" solutionName="Solution
Name" />
Again, these should simply be deleted and the server.xml file saved.
3. Clean the solutions directory.
When solutions are deployed, artifact files (primarily JAR files and ".mf" files) are copied
into the solutions folder found at:
<DSIRoot>/runtime/solutions/lib
and
<DSIRoot>/runtime/solutions/lib/features
You should delete the files as needed. Don't delete the features folder but feel free to
delete its content.
4. Restart the server in clean mode.
You can now restart the server in clean mode. From the command line this means adding
the "--clean" flag to the start command. I use Eclipse to start my DSI server and before
starting, I flag "Clean Server on Next Start":
Once started, you should find that your DSI server is clean again and has nothing left over from
previous tests and runs.
Event history
When an event arrives at DSI for processing, we understand that the event is delivered to an agent
and the agent determines what to do. What then happens to the event after processing?
The answer is that the events are stored in memory (RAM) for a period of time. These historic
events are available for logic within Rule Agents. Note that these historic events are not available
to Java Agents.
The default period of time is one year but this can be altered through the
solution_properties.xml file on a solution by solution basis.
The property is called "maxHorizon" for the solution as a whole and
"maxHorizon_<AgentName>" for configuration based upon a specific agent.
An example of modification might be the addition of:
<property name="maxHorizon">P10D</property>
The coding of the duration that specifies how long to keep the events is in a time unit defined in the
ISO 8601 specification (Durations).
The form of this is:
P[n]Y[n]M[n]DT[n]H[n]M[n]S
Where:
Y is the number of years
M is the number of months
D is the number of days
T is the time designator that separates the date portion from the time portion
H is the number of hours
M is the number of minutes
S is the number of seconds
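While DSI parses these values itself, the format can be sanity-checked with the standard java.time classes, which implement the same ISO 8601 duration grammar (split across Duration for day/time units and Period for year/month units):

```java
import java.time.Duration;
import java.time.Period;

public class HorizonCheck {
    public static void main(String[] args) {
        // "P10D" keeps events for ten days; Duration handles day/time units.
        Duration tenDays = Duration.parse("P10D");
        System.out.println(tenDays.toDays());   // 10

        // Year/month units are date-based, so java.time models them as a Period.
        Period oneYear = Period.parse("P1Y");
        System.out.println(oneYear.getYears()); // 1
    }
}
```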
This produces a dialog from which we can select the solution that contains our connectivity
definitions and the name of the XML file to contain our results:
The next page of the wizard allows us to select which definitions we wish to generate:
The second mechanism for creating the configuration XML file is through a command line
approach.
We must run a command called "connectivityManager". The format of the command is:
connectivityManager generate config <esa file> <config xml file>
What these steps do is generate an XML file. But what is in this file?
It contains a series of IBM Liberty profile configuration definitions that will be applied to
our ODM DSI servers. When applied, they will make the appropriate definitions that cause the
server to start listening on the connection channels we have defined.
For example, if we have defined an inbound HTTP entry, the XML file will contain:
<server>
<!--Application definition for inbound connectivity application for solution:
Connectivity_Tests-->
<application location="Solution2-inbound.ear">
<application-bnd>
<security-role name="iaEventSubmitter"/>
</application-bnd>
</application>
<ia_inboundHttpEndpoint endpoint="Solution2/MyHTTPEndpoint"/>
</server>
What this tells WLP is that there is a new application found in "Solution2-inbound.ear" and
that the application should run. This application is generated by ODM DSI and starts listening
for incoming HTTP requests; when they arrive, it causes them to be processed as events.
!!Important!!
The XML file generated from the command line connectivityManager needs to be
manually edited to uncomment the definitions. Why this is not performed for us by the command
line tool is unknown.
Finally, the configuration needs to be deployed with the command:
connectivityManager deploy local <esa file> <config xml file>
In addition, messages will also be logged to the Liberty console. For example:
CWWKG0016I: Starting server configuration update.
CWWKG0028A: Processing included configuration resource:
C:\IBM\ODMDSI87\runtime\wlp\usr\servers\cisDev\JSTDTests-config.xml
CWWKG0017I: The server configuration was successfully updated in 0.041 seconds.
CWWKZ0018I: Starting application JSTDTests-inbound.
SRVE0169I: Loading Web Module: web-JSTDTests (JSTDTests JSTDTests-0.0).
SRVE0250I: Web Module web-JSTDTests (JSTDTests JSTDTests-0.0) has been bound to default_host.
CWWKT0016I: Web application available (default_host): http://win7-x64:9086/JSTDTests/
CWWKZ0001I: Application JSTDTests-inbound started in 0.117 seconds.
See also:
Exporting a solution
Authentication credentials
Endpoint URL
We can control those through the addition of a new XML element of the form:
<ia_outboundHttpEndpoint endpoint="<endpoint name>"
url="<URL value>"
user="<user name>"
password="<user password>"
/>
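A filled-in example might look like the following (the endpoint name, URL and credentials here are purely hypothetical):

```xml
<ia_outboundHttpEndpoint endpoint="Solution2/MyOutboundEndpoint"
    url="http://localhost:8080/events"
    user="tester"
    password="tester"
/>
```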
If we define an inbound JMS entry, two sets of WLP definitions are found in the generated XML
configuration file. One for binding to WLP JMS and one for binding to MQ JMS.
For example, for WLP JMS, the XML file will contain:
<!--WebSphere Application Server default messaging provider activation specification-->
<jmsActivationSpec id="Solution2-inbound/in2ep/in2ep" authDataRef="Solution2-inbound/in2ep/in2epauthData">
<properties.wasJms destinationRef="Solution2-inbound/in2ep/in2ep" />
</jmsActivationSpec>
<!--Authentication alias for activation specification Solution2-inbound/in2ep/in2ep -->
<authData id="Solution2-inbound/in2ep/in2ep-authData" user="" password="" />
<!--WebSphere Application Server default messaging provider queue-->
<jmsQueue id="Solution2-inbound/in2ep/in2ep" jndiName="Solution2-inbound/in2ep/in2ep">
<properties.wasJms queueName="inputQ"/>
</jmsQueue>
Take note of the "queueName" property in the "jmsQueue" definition. This is the name of the
messaging engine queue that will be watched for messages.
See also:
JMS
Once the deployment of this configuration has been performed and there are no errors in the log, we
should see that the source queue is open for incoming messages:
See also:
Testing a solution
Once a solution is built and deployed, the next logical thing we will want to do is test that solution.
We have a number of ways to achieve this.
From the wizard, we are prompted for the name of the file that is to be created:
After completion, a new file will be found in the Event Sequences folder:
The icon IBM selected to represent an event sequence file is a couple of the "pins" used by the
browser based Insight Inspector web tool to indicate the presence of events.
Once an event sequence file has been created, it may be opened and we will be shown an editor.
Within the editor we can describe a set of events and their payload which can subsequently be
submitted to a DSI server for processing. Because the events are saved in a file, we can repeat the
generation of these events as many times as we desire. The order in which the events are defined in
the file is the order in which they will be submitted to DSI for processing.
The syntax of the file is a DSI style business language.
emit a new <event>, time-stamped <date & time>
com.ibm.ia.admin.tools.jar
com.ibm.ia.common.jar
com.ibm.ia.gateway.jar
com.ibm.ia.testdriver.jar
commons-codec.jar
engine-api.jar
engine-runtime.jar
objectgrid.jar
restConnector.jar
There is one final JAR that needs to be added and that is the Solution Java Model project.
From the Java Build Path setting, the entries will look similar to the following:
At the conclusion, the Java project will have a set of References Libraries:
With the project environment ready, we can now construct our test client. Create a Java class to
host the test driver.
When the test driver runs, it needs information in order for it to operate. This information is
supplied in the form of a set of name/value properties. These can be supplied either through a file
or as a Java Properties object.
To run the test driver, we can build a properties file that describes how to connect to DSI. The name
of the properties file must be "testdriver.properties". The directory in which it is
contained must be supplied in the Java runtime property called "testdriver_home". This can
be added to the Java command line with:
-Dtestdriver_home=<directory path>
The port numbers for your environment can be found in the configuration file described here:
If you fail to point to the testdriver.properties, an error similar to the following will be
presented:
Description
solutionName
catalogServerEndpoints
host
port
connectTimeout
The amount of time to wait before retrying if the connection to the server fails.
username
password
The password for the userid used to connect TestDriver to the DSI server.
trustStorePassword
The password for the Java Key Store security keys file. The default for this is
"tester".
trustStoreLocation
The location of the Java Key Store file that contains the security keys needed to
contact DSI. The default for this is
<DSIRoot>/runtime/wlp/usr/servers/<Server
Name>/resources/security/key.jks.
disableSSLHostnameVerification
logLevel
One of:
OFF
SEVERE
WARNING
INFO
FINE
FINER
FINEST
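Putting these together, a minimal testdriver.properties might look like the following. All of the values are examples drawn from a local development install; adjust them for your own environment:

```
solutionName=Solution2
host=localhost
port=9449
catalogServerEndpoints=localhost:2815
connectTimeout=30000
username=tester
password=tester
trustStoreLocation=C:/IBM/ODMDSI87/runtime/wlp/usr/servers/cisDev/resources/security/key.jks
trustStorePassword=tester
disableSSLHostnameVerification=true
logLevel=INFO
```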
As an alternative to supplying a properties file and a pointer to that file, one can supply a Java
Properties object instantiated and populated with the correct values. This can be passed as a
parameter to the constructor of the TestDriver. For example:
Properties connectionProperties = new Properties();
connectionProperties.setProperty(DriverProperties.RUNTIME_HOST_NAME, "localhost");
connectionProperties.setProperty(DriverProperties.HTTP_PORT, "9449");
connectionProperties.setProperty(DriverProperties.CATALOG_SERVER_ENDPOINTS, "localhost:2815");
connectionProperties.setProperty(DriverProperties.DISABLE_SSL_HOSTNAME_VERIFICATION, "true");
connectionProperties.setProperty(DriverProperties.TRUSTSTORE_PATH,
"C:\\IBM\\ODMDSI87\\runtime\\wlp\\usr\\servers\\cisDev\\resources\\security\\key.jks");
connectionProperties.setProperty(DriverProperties.TRUSTSTORE_PASSWORD, "tester");
connectionProperties.setProperty(DriverProperties.ADMIN_USERNAME, "tester");
connectionProperties.setProperty(DriverProperties.ADMIN_PASSWORD, "tester");
TestDriver testDriver = new TestDriver(connectionProperties);
Personally, I prefer this method when building personal tests as it is one less set of artifacts
(properties files and directory pointers) to worry about. However, it does mean that the code has
to be changed if you change environments or pass it to someone else. Use your judgment on which
style is better for you.
When we run the client, we will find that it is extremely chatty as it logs a lot of information to the
console.
create<EventType>(ZonedDateTime)
create<EntityType>(idP1)
get<PropertyName>()
set<PropertyName>(value)
The property-name portion of each accessor is capitalized, just as with Java Beans. For example:
getF1()
setF2(value)
getTimestamp()
Retrieving an entity
We can use the fetchEntity() method to retrieve a specific entity.
See also:
fetchEntity(entityTypeClass, entityId)
TestDriver Methods
The core of the Java test client is an IBM supplied class called com.ibm.ia.TestDriver.
This class provides all the functions one needs to write such a program. Full documentation on
these methods can be found in the product documentation references. The class has the following
methods:
addDebugReceiver(DebugReceiver)
Register a receiver for server transmitted debug information. A DebugReceiver object is passed
as a parameter. This is an instance of a class that implements
com.ibm.ia.testdriver.DebugReceiver. This interface has one method defined as:
void addDebugInfo(DebugInfo instance, String sourceAgent)
This method is invoked by the framework when the DSI runtime tells the TestDriver that
something happened. IBM provides a sample implementation of this interface that queues the
debug items for subsequent examination. This sample class is called:
com.ibm.ia.testdriver.IADebugReceiver
Now we can look at what an instance of DebugInfo contains. It has the following methods:
getEventId() - The id of the event. The full event details can be retrieved using the
TestDriver.getAgentEvent() method.
Some setup is also required in the DSI server before debug information is returned. Specifically, we
must set up the debugPort property on which the server is listening. For example:
propertyManager set debugPort=6543
Running this command adds a debugPort="value" attribute to the <ia_runtime> element in
server.xml.
For example:
<ia_runtime debugPort="6543"/>
getAgentEvent(DebugInfo)
removeDebugReceiver(r)
connect()
Connect the TestDriver to the DSI server. The properties used for connections are the current
properties associated with the instance of the TestDriver. The solution identified in the current
properties is used as the solution to work against.
See also:
disconnect()
connect(timeout)
Connect the TestDriver to the DSI server supplying a timeout. A value of 0 means use no
timeout value.
connect(solutionName)
Connect the TestDriver to the DSI server supplying the solution name. The supplied solution
name takes precedence over any solution currently associated with the TestDriver through its
properties.
See also:
disconnect()
connect(solutionName, timeout)
Connect the TestDriver to the DSI server supplying the solution name and timeout.
See also:
disconnect()
createRelationship(entity, key)
Create a relationship object populated with the entity type and key. Note that this does NOT create
any new entities but rather simply creates a new Relationship object.
createRelationship(t)
Create a relationship object populated with the entity type and key derived from the entity object
instance. Note that this does NOT create any new entities but rather simply creates a new
Relationship object.
deleteAllEntities()
Delete all the entities for the given solution. This effectively resets the solution to an empty state
discarding all the entities.
See also:
loadEntities(entities)
loadEntity(entity)
loadEntity(entity)
deleteAllEntities(entityType)
Delete all entities for a given entity type. Note that the entity type is a String which includes
both the package and the class name of the entity type. It is not a Java Class object.
See also:
loadEntities(entities)
loadEntity(entity)
deleteEntity(entityType, entityId)
deleteEntity(entityType, entityId)
See also:
loadEntities(entities)
loadEntity(entity)
endTest()
disconnect()
connect()
fetchEntity(entityTypeClass, entityId)
This method retrieves an entity from the DSI server. If changes are made to the entity they are not
written back to the DSI server until a call is made to updateEntity().
The input parameters to this method are:
See also:
updateEntity(entity)
getAgentEvent(DebugInfo)
See also:
addDebugReceiver(DebugReceiver)
getConceptFactory(conceptFactoryClass)
Retrieve the concept factory object that is used to create instances of concepts, entities and events.
The input parameter is the name of the ConceptFactory class. For example, if our BOM exists in
the package "com.kolban" then the parameter to be passed to this method would be
com.kolban.ConceptFactory.class.
getEventFactory()
Retrieve an instance of EventFactory that can be used to create instances of events. It isn't
clear when one would create events from an event factory vs creating events from a concept factory.
getModelSerializer()
Retrieve an instance of the Model Serializer that can be used to serialize entities and events to XML
documents.
See also:
getProductId()
Return a string representation of the name and version of the DSI product.
getProperties()
Retrieve the instance of the SolutionGateway object that is used by the TestDriver.
getSolutionProperty()
isRuntimeReady()
isSolutionReady()
Testing seems to show that this is true when the TestDriver is connected and false when not
connected. This can be used by tooling to determine if a connection is needed.
loadEntities(entities)
See also:
loadEntity(entity)
deleteAllEntities()
deleteAllEntities(entityType)
deleteEntity(entityType, entityId)
loadEntity(entity)
See also:
loadEntities(entities)
deleteAllEntities()
deleteAllEntities(entityType)
deleteEntity(entityType, entityId)
removeDebugReceiver(r)
This method removes a debug receiver from the TestDriver. It is assumed that a previous call
to addDebugReceiver() using the same receiver object was made. See the documentation for
addDebugReceiver for more notes on using this capability.
See also:
addDebugReceiver(DebugReceiver)
getAgentEvent(DebugInfo)
resetSolutionState()
Resets the solution discarding any event history that may have previously been recorded.
setGatewayMaxSubmitDelay()
setProperties()
startRecording()
Start recording processing information for display within Insight Inspector. Once called, the
runtime will start recording information until requested to stop by a call to stopRecording().
A REST command can also be used to request a start.
See also:
stopRecording()
stopRecording()
Stop recording data that was previously requested by a call to startRecording(). Following a
stop, the data can be examined from the browser based Insight Inspector. A REST command can
also be used to request a stop.
See also:
startRecording()
submitEvent(event)
This method submits an event to the DSI server for processing. The parameter that is passed is an
instance of an event.
toXMLBytes()
updateEntity(entity)
Having previously retrieved an entity, this method will update it back in the DSI server.
See also:
fetchEntity(entityTypeClass, entityId)
validateProperties()
Validate the properties. Not quite sure what that would mean.
TestDriver = Java.type("com.ibm.ia.testdriver.TestDriver");
Properties = Java.type("java.util.Properties");
ConceptFactory = Java.type("com.kolban.ConceptFactory");
Ev1 = Java.type("com.kolban.Ev1");
We may wish to create instances of model objects through JavaScript. Here is an example of
creating and submitting a single event.
var ConceptFactory = Java.type("com.kolban.ConceptFactory");
var ZonedDateTime = Java.type("org.threeten.bp.ZonedDateTime");
var conceptFactory = testDriver.getConceptFactory(ConceptFactory.class);
var event1 = conceptFactory.createEVENT1(ZonedDateTime.now());
event1.setF1("XYZ");
event1.setF2("ABC");
testDriver.submitEvent(event1);
print("Done!");
If we have many entities to create, another option is to define the entities in JSON and use a
small piece of JavaScript to build the entities from the data. For example:
var ConceptFactory = Java.type("aggregate_tests.ConceptFactory");
// Variable "testDriver" is initialized to your TestDriver instance.
var conceptFactory = testDriver.getConceptFactory(ConceptFactory.class);
testDriver.deleteAllEntities();
var data = [{
id: "Blue Widget",
quantity: 5,
description: "Blue Widgets"
},{
id: "Red Widget",
quantity: 6,
description: "Red Widgets"
},{
id: "Green Widget",
quantity: 17,
description: "Green Widgets"
}];
for (var i = 0; i < data.length; i++) {
    var stockItem = conceptFactory.createStockItem(data[i].id);
    stockItem.setQuantity(data[i].quantity);
    stockItem.setDescription(data[i].description);
    testDriver.loadEntity(stockItem);
}
print("Done!");
See also:
loadEntity(entity)
Start recording
Stop recording
If we try to start recording while recording is already active, we get a 503 status returned. If we
try to stop recording when there is no recording in progress, we also get a 503 status returned.
After having recorded some solution execution, we can open the IBM DSI Insight Inspector tool by
opening a browser to:
https://<hostname>:<port>/ibm/insights
Upon clicking a solution, we are shown a chart and tables of the recorded data that is available for
examination:
At the top we have a time-line which we can scroll across. Markers show events being processed or
emitted and by which rule agent. Selecting a marker shows us the event and entity data at that point
in time.
Buttons are available to allow us to zoom in and zoom out within the timeline.
If we take a new recording, we can refresh the browser to see the new data.
See also:
startRecording()
stopRecording()
Video - How do I use Insight Inspector in IBM ODM V8.7 Decision Server Insights to debug problems? - 2015-04-17
We can export the schema file to a local temporary file and then copy it into our XML project or we
can export directly into the workspace folder for the XML project and refresh the project. Either
way we end up with a new XML schema file in our XML project:
With the schema file available to us, we now wish to create an instance of an XML document that
conforms to the schema.
The result will be an instance of an XML document that conforms to the desired model.
<?xml version="1.0" encoding="UTF-8"?>
<m:HireEvent xmlns:m="http://www.ibm.com/ia/xmlns/default/MyBOM/model"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.ibm.com/ia/xmlns/default/MyBOM/model model.xsd ">
<m:employee>m:employee</m:employee>
<m:serialNumber>m:serialNumber</m:serialNumber>
<m:timestamp>2001-12-31T12:00:00</m:timestamp>
</m:HireEvent>
It is an instance of this XML that needs to be sent to ODM DSI for processing. The way we send
the event is determined by how the solution is listening for incoming events. The choices available
are HTTP or JMS.
For HTTP, we can send a REST request to the inbound path of the ODM DSI server:
POST <hostname>:<port>/<path>
with the body of the post set to be the XML document. A tool such as postman can be used:
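If you prefer code to a GUI tool, a minimal Java 11 sketch of the same POST might look like this. The port and endpoint path are assumptions based on the earlier inbound HTTP example, and note the Content-Type header, which matters:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class SubmitEvent {
    public static void main(String[] args) throws Exception {
        // The event XML document produced from the schema
        String xml = Files.readString(Path.of("HireEvent.xml"));

        // POST it to the solution's inbound HTTP endpoint
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9086/Solution2/MyHTTPEndpoint"))
                .header("Content-Type", "application/xml")
                .POST(HttpRequest.BodyPublishers.ofString(xml))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```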
See also:
JAXB
JMS
com.ibm.ws.sib.client.thin.jms_8.5.0.jar
com.ibm.jaxws.thinclient_8.5.0.jar
Next we built a Java project in Eclipse referencing these JARs. The JVM for this project must be
the JVM supplied by WAS.
Here now is the complete logic for sending a JMS message from a file:
package com.kolban;
import java.io.RandomAccessFile;
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import com.ibm.websphere.sib.api.jms.JmsConnectionFactory;
import com.ibm.websphere.sib.api.jms.JmsFactoryFactory;
import com.ibm.websphere.sib.api.jms.JmsQueue;
public class Test1 {
    public static void main(String[] args) {
        Test1 test1 = new Test1();
        test1.run();
    }

    public void run() {
        try {
            JmsFactoryFactory jff = JmsFactoryFactory.getInstance();
            JmsConnectionFactory jcf = jff.createConnectionFactory();
            jcf.setProviderEndpoints("localhost:7276");
            jcf.setBusName("any");
            JmsQueue queue = jff.createQueue("Default.Queue");
            Connection conn = jcf.createConnection();
            conn.start();

            // Read the file containing the event XML
            RandomAccessFile f = new RandomAccessFile(
                "C:\\Projects\\ODMCI\\ODMCI_WorkSpace\\XML Data\\Solution2\\XYZEvent.xml", "r");
            byte data[] = new byte[(int) f.length()];
            f.readFully(data);
            f.close();

            // Send the file content as a JMS text message
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            TextMessage tm = session.createTextMessage(new String(data));
            MessageProducer producer = session.createProducer(queue);
            producer.send(tm);
            conn.close();
            System.out.println("Done!");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
As you can see, there aren't many lines to it but it involves a whole lot of function. Some areas to
note when reading it are:
The setProviderEndpoints() method supplies the host and port on which ODM DSI
is running and listening for incoming external messages.
We create the JMS connection and JMS queue without using JNDI (as is commonly done
with JMS applications) because WLP doesn't support external JNDI access.
The name of the JMS queue to which we are writing is "Default.Queue". This is
the default queue. The name of an alternate queue may be used but must match the
definitions in the .cdef file.
allowing you to view and analyze their content. This can be exceptionally useful if you are working
with emitting events over HTTP and want to validate that the data payload of the request is what
you expect it to be.
The tool is supplied in both source and as a runnable Java jar. From the command line, we can start
the tool with:
java -jar Mockey.jar
This will launch a web page served up by Mockey into which you can define your settings
including:
When the requests arrive at Mockey, its history page shows the list of seen requests and allows you
to drill down into their content. For example:
See also:
version. The edition of the tool that I downloaded was "SoapUI-x64-5.0.0.exe" which is a
full installer.
When sending requests, ensure you add "Content-Type: application/json" or
"application/xml" to each request. Failure to do this seems to result in a 200 OK response
but with no content.
See also:
REST Requests
Operations
ODM DSI runs on top of the IBM WebSphere Liberty Profile (WLP) runtime platform. To operate
ODM DSI, some knowledge of WLP will help. ODM DSI expects some level of configuration
to be performed against WLP to achieve certain tasks. These include:
JMS configuration
Some of the scripts supplied by ODM DSI expect connection parameters. These can be supplied
on the command line or placed in a properties file. The default properties file can be found at:
<ROOT>/runtime/ia/etc/connection.properties
See also:
WebSphere Liberty
What you will find there will be templates for servers of type:
cisCatalog
cisContainer
cisInbound
cisOutbound
defaultServer
These templates contain the bootstrap.properties, jvm.options and server.xml
(amongst other things) for the new Liberty server that will be created.
An alternative to using the command line tooling is to use the Liberty developer tools found inside
Eclipse. It is my preference to learn and use these tools when I can. Some comment that learning
the command line tools means that you can execute those commands under any circumstances, and
that is undeniably true. However, for me, life is too short to try to memorize such things; merely
knowing that they exist when needed is enough. To use the developer tools to create a new
server instance, open up the Servers view and select New > Server from the context menu:
server instance, open up the Servers view and select New > Server from the context menu:
You will now be offered a list of the types of servers you can create. Choose the Liberty profile
server type:
Next you can create a new server definition and supply a name for your new server as well as
selecting the template type from the pull-down list of available templates:
With the name and template defined, you are now presented with an overview of what is going to be
defined in your new server instance. Clicking Finish will create its definitions:
You may wish to modify the bootstrap.properties file to change any relevant port numbers.
And that is it. Nothing complex here and creating and deleting new server entries is really that easy.
See also:
KC Server command
solutionAutoStart
maxEventProcessingThreads
maxAgentTransactionRetries
engineCacheSize
agentDisableSaveState
debugPort
logSuppressionThreshold
logSuppressionThresholdPeriod
logInitialSuppressionPeriod
logMaxSuppressionPeriod
logMaxTrackedMessages
The propertyManager command has options for get, set and list to work with the properties.
The "list" command lists the names of all the properties that can be changed.
Executing a "get" before a set may return a message that the property does not exist.
DSI Security
See also:
components either locally (within the DSI server) or remotely over the network. It is WebSphere
Liberty that provides the underlying JMX framework; however, DSI has plugged itself into that
area nicely.
The JMX domain to which the DSI components belong is called "com.ibm.ia".
The primary beans of interest to us are:
Name
Object Name
AgentStats
com.ibm.ia:name=IA-PARTITION-X, partition=X,type=AgentStats
ConnectivityManager
com.ibm.ia:type=ConnectivityManager
DataLoadManager
com.ibm.ia:type=DataLoadManager
GlobalProperties
com.ibm.ia:type=GlobalProperties
JobManager
com.ibm.ia:type=JobManager
JobManagerDebug
com.ibm.ia:type=JobManagerDebug
OutboundBufferManager
com.ibm.ia:type=OutboundBufferManager
ServerAdmin
com.ibm.ia:type=ServerAdmin
Solutions
com.ibm.ia:type=Solutions
DSI is documented as supporting the MXBeans technology, which provides very easy access to the
attributes and operations of MBeans.
For example:
ObjectName objectName = new ObjectName("com.ibm.ia:type=JobManager");
JobManagerMXBean bean = JMX.newMXBeanProxy(connection, objectName, JobManagerMXBean.class);
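The proxy pattern itself is standard JMX and can be tried without DSI at all. Here is a self-contained sketch against one of the JDK's own platform MXBeans; for DSI you would obtain the MBeanServerConnection from a JMX connector to the Liberty server instead:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;
import javax.management.JMX;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MXBeanDemo {
    public static void main(String[] args) throws Exception {
        // The local platform MBean server stands in for the remote connection
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Same proxy pattern as the DSI JobManager example, applied to a
        // standard JDK MXBean so that it can run anywhere
        ObjectName name = new ObjectName("java.lang:type=Runtime");
        RuntimeMXBean runtime = JMX.newMXBeanProxy(server, name, RuntimeMXBean.class);
        System.out.println(runtime.getVmName());
    }
}
```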
JMX AgentStats
Attributes
AgentTime - long - The amount of time taken to process all agent calls.
EngineCacheHits - long
EngineCacheMisses - long
EventStats - List<InvocationStats>
Operations
getEventStats
InvocationStats getEventStats(String type)
getAgentStats
Data Structures
InvocationStats
String type - The class name of the agent or event.
int count - The number of times an agent or event type processed an event.
long time - How long the agent or event type has spent processing events.
JMX ConnectivityManager
Attributes
Operations
Data Structures
JMX DataLoadManager
Attributes
GridOnline - boolean
LoadComplete - boolean
Operations
loadData
int loadData()
checkLoadProgress
boolean checkLoadProgress()
setGridOnline
boolean setGridOnline()
JMX GlobalProperties
Attributes
Operations
Data Structures
JMX JobManager
The JobManager provides access to entity aggregate Job Management. This includes the ability
to query jobs and schedules as well as finding their outcomes.
Attributes
ActiveJobCount - int
ActiveJobs - JobRunId[]
JobRunIds - JobRunId[]
JobRunInfos - JobRunInfo[]
QueuedJobs - JobRunInfo[]
Operations
getActiveJobs
JobRunId[] getActiveJobs(String solutionName)
getJobRunInfos
JobRunInfo[] getJobRunInfos(JobRunId[] jobRunIds)
submitJob
JobRunId submitJob(String jobName, String solutionName, String description,
List<JobParameter> params)
updateJobSchedule
boolean updateJobSchedule(String jobName, String solutionName, String intervalString, String
crontabString)
getJobSchedule
String getJobSchedule(String jobName, String solutionName)
removeJobSchedule
boolean removeJobSchedule(String jobName, String solutionName)
abortJobByName
void abortJobByName(String jobName, String solutionName)
abortJob
void abortJob(String runJobId, String jobName, String solutionName)
getJobRunInfo
JobRunInfo getJobRunInfo(String jobName, String solutionName)
JobRunInfo getJobRunInfo(String jobRunId, String jobName, String solutionName)
Data Structures
JobRunId
String id
String jobName
String solutionName
String systemId
JobRunInfo
Date abortStartTime
Date creationTime
String description
Date endTime
JobRunId id
JobOrigin jobOrigin
JobResultInfo jobResultInfo
long runDuration
Date startTime
JobStatus status
boolean abandoned
boolean restart
JobOrigin
String name
JobStatus
Enum
ABORTED
ABORTING
CANCELLED
COMPLETED
CREATED
FAILED
FAILED_SUBMISSION
QUEUED
RUNNING
SKIPPED_AS_DUPE
STARTING
TIMED_OUT
JobResultInfo
JobRunId id
String message
String resultCode
JMX OutboundBufferManager
Attributes
Operations
Data Structures
JMX ServerAdmin
Attributes
Operations
Data Structures
JMX Solutions
The MBean Object Name is:
com.ibm.ia:type=Solutions
Attributes
Solutions - List<Solution>
Operations
deploySolution
SolutionStatus deploySolution(String fileName, boolean exportOnly, boolean activateOnly,
boolean forceActivate, boolean redeploy)
undeploySolution
SolutionStatus undeploySolution(String solutionName)
revertSolution
SolutionStatus revertSolution(String solutionName)
activateSolution
SolutionStatus activateSolution(String solutionName)
stopSolution
SolutionStatus stopSolution(String solutionName)
getProperty
String getProperty(String solutionName, String propertyName)
setProperty
boolean setProperty(String solutionName, String propertyName, String propertyValue)
getProperties
List<String> getProperties(String solutionName)
setProperties
boolean setProperties(String solutionName, Map<String, String> properties)
getSolutionVersion
String getSolutionVersion(String solutionName)
isDeployed
boolean isDeployed(String solutionName)
isReady
boolean isReady(String solutionName)
Data Structures
Solution
String currentVersion
String name
SolutionStatus
String message
boolean success
which can be applied to a DB2 database to create the appropriate definitions in a target database.
Although the file is oriented towards DB2, it appears to be pretty generic SQL and can thus be
applied to most database systems. Although the data stored in the tables is black-box, we can list
the different tables it creates. These are:
ENTITIES
OUTBOUNDEVENTS
INBOUNDEVENTS
JOBRESULTS
EVENTQUERY
JOBHISTORY
RULESETS
DELAYEDEVENTS
To enable persistence, we must edit a configuration file that belongs to objectgrid. This file can be
found at:
<ROOT>/runtime/wlp/usr/servers/<server name>/grids/objectgrid.xml
DelayTimerPlugins
EventQueuePlugins
EntityPlugins
RulesetsPlugins
OutboundQueuePlugins
JobResultsPlugins
JobHistoryPlugins
EventQueryPlugins
Design Considerations
When building solutions, from time to time there will be considerations that may not be
This rule would reduce the quantity of the stock if we have enough stock on hand.
A second rule may read:
when a sale occurs
if
'the stock item' is not null
and the quantity of the sale details is more than the quantity of 'the stock item'
then
print "Not enough stock - transaction id is " + the transaction id ;
emit a new no stock where
the sale details is the sale details of 'this sale' ,
the transaction id is the transaction id of 'this sale' ,
the arrival date is the timestamp of 'this sale' ;
Sounds fine; however, there is a fatal flaw in this design: the rules are all fired for a
matching event. Here is an example of when things go wrong.
Imagine the initial quantity of stock is "6" items. Now imagine that an order for "5" items arrives.
When the first rule fires, the condition is true and the quantity is reduced to "1" (6-5). Now, the
second rule fires but the new current quantity in stock is now "1" and hence its condition is also
true as it appears that we need "5" items but only have "1" on hand.
Our core mistake here was that rules can modify the state of an entity and when a rule is evaluated,
it is the immediate and current value of the entity that is presented to the rule. If preceding rules
have modified the entity's attributes then these new values will be seen by subsequent rules.
Is this an error? I think not but it does mean that we have to be extremely cautious when
thinking about rule conditions if rules can modify the values that those conditions depend upon.
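To make the ordering hazard concrete outside of the rule language, here is a plain-Java analogy. This is not DSI rule semantics, just the same arithmetic:

```java
public class RuleOrderPitfall {
    static int stock = 6;

    // Rule 1: if there is enough stock, reduce the quantity on hand
    static void ruleOne(int saleQty) {
        if (saleQty <= stock) {
            stock -= saleQty;
        }
    }

    // Rule 2: if there is not enough stock, complain
    static boolean ruleTwo(int saleQty) {
        return saleQty > stock;
    }

    public static void main(String[] args) {
        int saleQty = 5;
        ruleOne(saleQty);                   // stock goes from 6 to 1
        boolean noStock = ruleTwo(saleQty); // sees the ALREADY UPDATED stock of 1
        System.out.println(stock);          // prints 1
        System.out.println(noStock);        // prints true: the spurious "no stock"
    }
}
```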
For the rules outlined, we can solve the puzzle with an "else" construct giving us a working rule
of:
when a sale occurs
if
the quantity of the sale details is at most the quantity of 'the stock item'
then
set the quantity of 'the stock item' to the quantity of 'the stock item' - the quantity of the
sale details of this sale ;
else
print "Not enough stock - transaction id is " + the transaction id ;
emit a new no stock where
the sale details is the sale details of 'this sale' ,
the transaction id is the transaction id of 'this sale' ,
Obviously the "when" part is required. There isn't much point in having an event driven rule if we
don't associate it with an event to start it. Similarly, the "then" part is required. There isn't much
point in having a Rule detect an event if that rule doesn't do anything with the notification.
See also:
Rule Agents
Terms in scope
When writing rules, we have various terms in scope. These include:
The fields in the incoming event. These can be referenced simply by the field names and the
context of the event is assumed.
The fields in the associated bound entity. These can be referenced simply by the field names
and the context of the entity is assumed.
When using implicit context, we may end up with ambiguous phrasing. For example, consider an
Event with a property called "key" and an Entity with a property also called "key". In a phrase we
can no longer use "the key" because that phrase is no longer unique.
Instead, we must further qualify the reference. For example, we could write "the key
of myEvent" or "the key of myEntity".
when the phone rings then answer the call and have a conversation.
when the doorbell rings then get up off the couch and answer the door.
when the wife yells then immediately stop what you were doing and see what she wants.
In each of these cases, we are declaring a rule of logic to follow on the occasion of such an event
happening. This is the nature of the "when" part of a rule.
The general syntax is:
when <event> occurs [, called <varname>]
Page 181
[where <condition>]
In its simplest form, we need only supply the name of the event to respond to:
when the doorbell rings occurs
Within the remainder of the rule, we can refer to an implicitly created variable that holds the event
that caused the processing to begin.
For example:
when XXX occurs ...
we can then refer to "this XXX" in our rule as the event that kicked us off. We can optionally define
a new local variable to also hold this reference:
when XXX occurs, called YYY ...
Then "this XXX" and "YYY" refer to the same event and we can use both variables interchangeably.
Upon arrival of the event, we may immediately decide that we want to ignore it. Maybe we can
determine this from the content of the payload. This concept is handled in the rules through the use
of the "where" part. If the condition following "where" is false, any further processing is
disregarded in this rule for this event instance.
For example to ignore payment overdue events that are less than five dollars, we could define:
when a payment overdue occurs where the amount of this payment overdue is more than 5
then
A second format of the "when" construct is the notion that we may wish to delay processing an
event for a period of time. At first this sounds and feels odd. Why would we want to do that?
Consider the following English language notions:
when it has been a month since the last time I spoke to my boss
When the event is finally processed, we must consider what the value of "now" will be. The
semantics define it to be at least "the time the event was produced plus the calendar duration
specified".
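This semantic can be sketched in plain Java using the standard java.time types. This is an illustration only; the class and method names are invented, not part of DSI.

```java
import java.time.Period;
import java.time.ZonedDateTime;

public class DelayedNow {
    // Illustrative sketch: when processing of an event is deferred by a
    // calendar duration, the rule's "now" is at least the event's
    // production time plus that duration.
    public static ZonedDateTime earliestNow(ZonedDateTime produced, Period delay) {
        return produced.plus(delay);
    }
}
```

For an event produced at 2015-05-18 3:00pm delayed by one month, the earliest "now" works out to 2015-06-18 3:00pm.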
if
<condition>*
The "if" instruction evaluates a condition and performs a set of actions only if the condition is
Page 182
true. The "if" instruction is always used in conjunction with the "then" construct and sometimes
with the "else" construct. The use of "if" is optional. If omitted, then the "then" instructions are
always performed when a corresponding event occurs.
then
<action>*
else
<action>*
The "then" instruction is mandatory but the "else" part is optional and only used when an "if"
instruction is present. The "then" instruction is the syntactic introduction of the statements to be
executed when an event is recognized.
These are actions that can be performed as result of a preceding event. Think of it as classic "cause
and effect". When we define event based rules, we are actually describing a series of actions to
perform when a previous event happens. The detection of the event is important but so is the
description of the actions that we are to perform. Within DSI, we can declare a rich set of actions
that can be performed.
Setting a variable's value to null effectively deletes any previous content that variable had. The
previous content can no longer be accessed after this step. When the variable is the bound entity
instance associated with an agent then that will terminate the relationship of the agent to the
instance.
We can use arithmetic in numeric calculations:
set <variable> to <variable> + 5;
Page 183
Note that we don't use the set statement to set the values of booleans. Instead we use the "make
it" statement.
See also:
Variable values
For example:
make it true that the oven is on
or
make it false that the oven is on
When my wife tells me she is pregnant, tell my friends that I can't see them anymore.
When I hear thunder an hour before I want to go fishing, call my buddy to bring extra beer.
These new events can be directed back into DSI for further processing or they can be sent outbound
from DSI to an external party to notify them that something has to be done.
The general syntax of this is:
emit <an event>;
Typically, a new instance of an event is constructed here which includes its population. For
example:
emit a new MyEvent where
the key is "myKey",
the field1 is "My Value";
See also:
Emitting an event
See also:
Variable values
Page 184
When performed, this action causes the specified string value to be written to the DSI console log.
Examining the log we will find an entry that looks like:
I CWMBD9751I: Rule Agent <Rule Agent Name> print: <string>
For example, if an object called XYZ has a property called PQR that is a collection, we could
code:
clear the PQR of this XYZ;
For example, if an object called XYZ has a property called PQR that is a collection, we could
code:
add a new ABC to PQR of XYZ;
For example, if an object called XYZ has a property called PQR that is a collection, we could
code:
remove 'myPQR' from PQR of XYZ;
Page 185
for each <object> [called <varname>] in <collection> :
- <action>*;
For example, to print out the timestamps of previous events we might use:
when an EVENT1 occurs
definitions
set 'previous events' to all EVENT1s
;
then
print "The total number of EVENT1s seen has been: " + the number of EVENT1s;
for each EVENT1 called myEvent, in 'previous events' :
- print "Previous event was at : " + the timestamp of myEvent ;
Variable values
A variable accepts values of its declared type. This includes the usual items such as
strings, numbers and booleans.
Strings are provided as text between double quotes as in:
"London"
Numbers are written literally as in:
77
3.141
-1
If the variable is a modeled business object, then we can assign the target variable the value of
another variable.
Alternatively, we can create a new instance of a business object. This is achieved through the use of
the "new" construct.
The syntax of this is:
new <object> where
For example:
set 'this employee' to a new Employee where the 'serial number' is "ABC";
Another special value is a string that contains the name of the current rule that is being evaluated. It
is accessed through the syntax:
the name of this rule
of 2015-05-18 3:00pm. From the agent's perspective, since the arriving timestamp has an earlier
timestamp than its current "now" value, it leaves "now" as the later value and "now" remains at
2015-05-18 4:00pm.
Putting this more formally, the "now" attribute of an agent contains the latest timestamp value of
events that the agent instance has seen. The value of "now" only increases and never decreases.
Because "now" is taken from the latest timestamp of an arriving event, it is important that the values
of the timestamp in incoming events be correct. For example, if an event arrives with an invalid
timestamp (say 2115-05-18) then the agent will incorrectly interpret now as that far date in the
future and this may adversely affect subsequent event processing.
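This "now" bookkeeping can be sketched in plain Java. This is an illustration of the semantics described above, not product code; the class name is invented.

```java
import java.time.ZonedDateTime;

public class AgentClock {
    // Illustrative sketch of the "now" bookkeeping described above: "now"
    // is the latest event timestamp seen so far and never moves backwards.
    private ZonedDateTime now;

    public void onEvent(ZonedDateTime eventTimestamp) {
        // Advance "now" only if the arriving timestamp is later; an event
        // with an earlier timestamp leaves "now" untouched. Note that a
        // bogus far-future timestamp would permanently drag "now" forward.
        if (now == null || eventTimestamp.isAfter(now)) {
            now = eventTimestamp;
        }
    }

    public ZonedDateTime getNow() {
        return now;
    }
}
```

Feeding in a 4:00pm event followed by a 3:00pm event leaves "now" at 4:00pm, matching the scenario above.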
Time operators
We are used to thinking of classic arithmetic operators such as plus ('+') and minus ('-') resulting in
new numeric values. ODM DSI, because it is heavily dependent upon time, has a wealth of time
operators. In order to understand these properly, make sure that you understand the concepts of a
time point, a time duration and a time period before reading further.
In summary:
A time point is a specific instant on a time line
A time duration is an abstract length of time that has no relationship to an actual time line
A time period is the set of all time points between two specific time points
Here are some of the time concepts provided by ODM DSI:
Concept - Description
now - A time point
today - A time period
yesterday - A time period
tomorrow - A time period
Page 187
Page 188
Some further time phrases include:
the calendar year <year number> - A time period
the calendar month <month name> <year number> - A time period
<time point collection> before <period> - A collection
See also:
Time
Time Expressions
Expression construction
Logical expressions
An expression evaluates to either true or false. An expression can itself be composed of other
expressions combined together using "and" and "or".
There are some other specialized expressions. The first is true if all the expressions are true, which
is similar to "and" but expressed in a different format:
all of the following conditions are true:
- <condition>*,
The next is true if any one of the expressions is true, which is similar to "or" but expressed in a
different format:
any of the following conditions are true:
- <condition>*,
We can also say that an expression is true if all of another set of expressions are false:
none of the following conditions are true:
- <condition>*,
Numeric expressions
Numeric expressions describe relationships between numbers. In classic programming, we use
symbols such as "=" and ">" but in rules, we express these concepts in words. Since we are so used
to the use of symbols, the following table illustrates the symbols first followed by the equivalent
expressions as rules:
Page 189
Symbol - DSI Expression
n1 = n2 - n1 is n2
n1 != n2 - n1 is not n2
n1 >= n2 - n1 is at least n2
n1 <= n2 - n1 is at most n2
n1 < n2 - n1 is less than n2
n1 > n2 - n1 is more than n2
String expressions
String expressions are true/false expressions that work against string data types. They can be used
where an expression is valid.
Phrase - Example
<text> is empty - true for "", false for "ABC"
Time Expressions
<date> is at the same time as <date>
<date> is after <period>
<date> is before <period>
<date> is during <period>
<date> is within same calendar <calendar unit> as <date>
<date> is within <calendar duration> before <date>
<date> is within <calendar duration> after <date>
<date> is within <duration> before <date>
<See more>
Page 190
See also:
Time operators
Time
Aggregation expressions
Counting expressions
Now things start to get tricky. We can start to build expressions that "reason" over collections.
there are <number> <object> - There are exactly <number> objects in our
history.
Page 191
there is no <object>
In the following table, let "count" be the number of instances of <object> and X be a number.
Notion - DSI Construct
count == 0 - there is no <object>
count == 1 - there is a <object>
count == X - there are X <object>s
count >= X - there are at least X <object>s
count <= X - there are at most X <object>s
count < X - there are less than X <object>s
count > X - there are more than X <object>s
count - the number of <object>s
Geospatial expressions
the distance between <a geometry> and <a geometry> in <a length unit>
all <geometries> within a distance of <a number> <a length unit> to <a geometry>
the <a number> nearest line strings among <line strings> to <a geometry>
Page 192
See also:
Geometry
From a modeling perspective, think of DSI waking up each second and, for each of these rules,
evaluating the condition (hopefully that is not what actually happens as polling would be inefficient
but for our model and clarity, simply assume that is what happens).
If the condition becomes true, then the action is performed. The rule will not perform the action
again until the condition subsequently becomes false and then becomes true once more. This can be
thought of as behaving like a flip-flop. The rule will continue to be evaluated and executed forever
more.
A curious item to note is that the rule will not start evaluating until the Rule Agent has been woken
at least once because of a previous event.
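The flip-flop behavior described above can be sketched in plain Java. This is illustrative only: the action fires on a false-to-true transition of the condition and not again until the condition has gone false in between.

```java
public class FlipFlopRule {
    // Illustrative sketch of the evaluation model described above: the
    // action runs only when the condition transitions from false to true.
    private boolean previous = false;
    private int firings = 0;

    public void evaluate(boolean condition) {
        if (condition && !previous) {
            firings++; // the action: performed once per false-to-true edge
        }
        previous = condition;
    }

    public int getFirings() {
        return firings;
    }
}
```

Evaluating the sequence false, true, true, false, true fires the action exactly twice: once per rising edge, never while the condition simply stays true.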
See also:
then a rule being processed for entity with id="a" would see three events while a rule for
id="b" would see one event even though the DSI system has seen a total of four E1 events. This
makes sense but it is always good to validate that this is in fact what happens.
Within a Rule Agent we can access all the events (associated with a single entity) using the syntax
"all Xs" where "X" is the name of the event.
If we iterate over all the events using a for each loop, an interesting question is: what order are they
in? Will the most recent or the oldest event come first? The answer is probably that we
shouldn't assume any ordering. Experimenting seems to show that the loop starts with the most
recent event; however, it is the current event that is at the end of the loop.
So what we see is:
En-1, En-2, En-3, .... E2, E1, En
Interesting huh?
It might not be immediately clear what this means. Let us parse it apart piece by piece and see what
we can find. It begins with an event trigger which basically says that the rule will never do
anything until an instance of EVENT1 is seen.
We then have a most interesting definition statement. The statement reads:
set myEvent to an EVENT1;
This feels unusual ... what does it mean? The way to interpret this is that we are setting the local
variable called "myEvent" to an instance of a historic and previously processed EVENT1. Ahh ...
you might say ... and your next question would sensibly be "but which historic event?" and here the
answer gets very strange. The answer becomes "all of them ... one at a time".
If from a clean state, I send in an event EVENT1(e1="a", T=T1) then nothing would be logged
as we have not yet seen an event. If I send in a second event EVENT1(e1="b", T=T2), we
would have a single print statement logged reading:
Event instance: a that was seen at T1
If I send in a third event EVENT1(e1="c", T=T3), we would see two new print statements
reading:
Event instance: a that was seen at T1
Event instance: b that was seen at T2
Pause here ... notice that we sent in one new event which caused the rule to be run once but yet we
see two print statements.
Page 194
The value of myCount will initially be one. This means that the current event is included in the
count of events associated with the entity, not just previous events.
However, this will not include the current event that caused the rule to fire. This may be what you
want but experience seems to be saying that you will likely want to include the current event as
well. If this is correct, the following code will work:
set 'total' to the total amount of all auto approves after 4 weeks before now;
Debugging a solution
Here are some tips and techniques for debugging a solution.
Always examine the messages.log file from the server. This can be found in the
<ROOT>/runtime/wlp/user/servers/cisDev/logs directory. A good tool for tailing
this file on Windows is logexpert or within Eclipse one can use "Log Viewer".
Some of the more interesting messages to look for include:
We can use "print" statements in the action sections to log information for debugging. A special
phrase called "the name of this rule" is the string representation of the current rule.
An important feature of the product is the ability to control trace flags. These can be set in the
server.xml file using the <logging> entry. Switching on all aspects of trace is probably too
much. Here are some suggested entries for different types of problems:
Page 195
See also:
Logging Events
When an event is received within DSI, it is delivered to appropriate agents for processing. During
debugging, we may wish to see the events being delivered to the system. One possible way to
achieve this is the creation of a Java Agent that listens on all kinds of events and merely logs the
incoming event for display. We can achieve this by creating a Java Agent with an agent descriptor
that looks like:
'solution1.log_all.LogAll' is an agent,
processing events :
- event
This definition declares an agent with no associated entity that processes all types of events.
The Java code implementation of the agent could then be:
package solution1.log_all;

import com.ibm.ia.agent.EntityAgent;
import com.ibm.ia.common.AgentException;
import com.ibm.ia.common.DataFormat;
import com.ibm.ia.model.Entity;
import com.ibm.ia.model.Event;
This logs the XML document corresponding to the event to the console.
Examining a problem
If we examine the logs, we may see messages similar to the following:
E Aggregate_Tests:: CWMBD9304E: Fatal error detected during event processing for event
[aggregate_tests.Sale:4F0C74EA283B7094EE11E4FFDF752083] by agent [Aggregate_Tests_-_RA1] on
partition [4]. Abandoning Event
By itself, finding this is huge. You now know whether or not the event contains what you expect it
to contain. In addition, you will find the event Id which you can use for correlation if there are
multiple events being processed concurrently.
Understanding messages
Messages written to the consoles and traces in many cases have IBM message codes associated with
them. The format of these messages is:
<Product ID><Message Number><Severity>
Where:
Product ID is the identifier for a product. Here are some of the codes that you will find in
IBM DSI:
CWOBJ WebSphere Extreme Scale core components
CWPRJ Extreme scale Entity projector
CWWSM HTTP Session manager
CWXQY Query Engine
CWXSA Extension point
CWXSB XsByteBuffer
CWXSC Console
CWXSI Command Line
CWXSR Log Analyser
CWMBx Decision Server Insights
CWWKF Liberty Kernel
CWWKS Liberty Security
CWWKO - ?
CWWKE - ?
CWWKZ - ?
SRVE WebSphere web container
TRAS WebSphere tracing and logging
SESN HTTP Session Manager
SSLC SSL channel security
TCPC TCP Channel
WSBB XsByteBuffer
The message number is the unique id of this message within the message area.
The severity is a single character code indicating the nature of the message. The code will
be one of:
I Informational
Page 197
W Warning
E Error
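As a small illustration, the message format can be decomposed with a few lines of Java. The class name and regular expression here are mine, not part of the product.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MessageCode {
    // Illustrative decomposition of an IBM message code such as
    // "CWMBD9751I" into product ID, message number and severity.
    private static final Pattern FORMAT = Pattern.compile("([A-Z]+?)(\\d+)([IWE])");

    public static String[] parse(String code) {
        Matcher m = FORMAT.matcher(code);
        if (!m.matches()) {
            throw new IllegalArgumentException("Not an IBM message code: " + code);
        }
        return new String[] { m.group(1), m.group(2), m.group(3) };
    }
}
```

Parsing "CWMBD9751I" yields the product ID "CWMBD", message number "9751" and severity "I" (informational).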
Geometry
DSI has special support for geometry. What this means is that we can reason about interesting
geometrical knowledge such as:
The DSI support for these is based around some concepts that are related to geometry. These are:
A point - a location in "coordinate space". For example, the X/Y coordinates of something
on a graph or the latitude/longitude of a place on the Earth.
A line string - an ordered sequence of points describing a line composed of smaller lines
between each pair of consecutive points.
A linear ring - a line string where the first and last points are considered to be joined by a
line segment, closing the ring.
A vertex - ???
A polygon - a linear ring where we consider it to define not just the boundary but
everything inside the boundary as well.
In addition, DSI provides knowledge of units of geometrical measurement including length and area
units.
The geometry support is implemented within the product by a set of Java classes and interfaces
under the package com.ibm.geolib. Some of the more important are:
Warning - the data types in the geometry package are not serializable Java objects.
A core class in our story is the com.ibm.geolib.GeoSpatialService. From this class we
have factories to create some of the base items:
GeometryFactory geometryFactory = GeoSpatialService.getService().getGeometryFactory();
For example:
Point point = geometryFactory.getPoint(longitude, latitude);
See also:
Geospatial expressions
Wikipedia - Latitude
What this is telling us is that we can augment our own rules projects with additional BOM entries
and concepts.
Page 199
See also:
Business Object Model BOM
Modeling the Business Object Model (BOM)
Generated Business Object Model
REST Requests
ODM DSI responds to external REST requests. When sending requests, set the Content-Type
header to "application/xml". When receiving the response, we can ask for either XML or
JSON data as a result. This is achieved with the HTTP Accept header being one of:
application/xml
application/json
For each of the GET REST requests, optional additional parameters can be supplied. These include:
group Causes the returned data to be returned as "pages" where the page size is defined
by the max property.
The REST requests should be sent to the server (and port) of the DSI server. The ports can be
configured as per:
It is interesting to note that there is no pre-built REST API for submitting an event for processing.
Page 200
However, a solution developer can easily create an HTTP connection definition which will perform
the same task.
See also:
{
"solutions": [
{
"name": "FastTrackSolution",
"version": "FastTrackSolution-0.1"
},
{
"name": "MySolution",
"version": "MySolution-0.3"
}
]
}
{
"$class" : "com.ibm.ia.admin.solution.EntityTypes",
"entityTypes" : [
"com.kolban.Employee"
],
"query" : "?solution=MySolution"
}
Note that what is returned is literally a list of entity types. There is no data on their structure
returned.
The "<entity type name>" property is the full package name of the entity type, for example
"com.kolban.Employee".
Page 201
{
"$class" : "Collection[com.kolban.Employee]",
"entities" : [
{
"$class" : "com.kolban.Employee",
"$idAttrib": "serialNumber",
"age" : null,
"firstName" : null,
"jobTitle" : null,
"salary" : 0.0,
"secondName" : null,
"serialNumber" : "123"
}
]
}
Be cautious with entity attributes that are defined as enriched. Their values are not calculated and
returned in the response. They will simply not appear in the returned data.
{
"$class" : "com.kolban.Employee",
"age" : null,
"firstName" : null,
"jobTitle" : null,
"salary" : 0.0,
"secondName" : null,
"serialNumber" : "123"
}
Of note in this object is a field called "$class" which contains the Java class name that represents
this object.
The body of the PUT request contains an XML object of the form:
<object xmlns:xsd="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://www.ibm.com/ia/Entity" type="<entity type name>">
<attribute name="<attribute>">
<null />
</attribute>
<attribute name="<attribute>">
<string><Value></string>
</attribute>
...
</object>
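One way to build such a PUT body from Java is with the standard DOM and transformer APIs. The helper class below is an illustrative sketch; the entity type and attribute names are examples only.

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class EntityXmlBuilder {
    private static final String NS = "http://www.ibm.com/ia/Entity";

    // Builds a PUT body of the shape shown above for one string-valued
    // attribute. The entity type and attribute names are examples only.
    public static String build(String entityType, String attrName, String value)
            throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element object = doc.createElementNS(NS, "object");
        object.setAttribute("type", entityType);
        doc.appendChild(object);

        Element attribute = doc.createElementNS(NS, "attribute");
        attribute.setAttribute("name", attrName);
        Element string = doc.createElementNS(NS, "string");
        string.setTextContent(value);
        attribute.appendChild(string);
        object.appendChild(attribute);

        StringWriter out = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }
}
```

Using DOM rather than string concatenation keeps the namespace declaration correct and escapes any special characters in the attribute values.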
Page 202
A HTTP response code of 404 means that we could not find the instance to delete. On success (200
OK), the response is the value of the entity before it was deleted.
This will return a list of aggregates defined for the solution. Each entry in the list will be an object
with a property named "defvar<Aggregate Name>" and have the current value of the
aggregate.
For example:
[
{
"defvarmy$x95$aggregate": 5.0
}
]
See also:
This request will return a single named aggregate. What is returned is an object with the single
property named for the aggregate with the aggregate value. For example:
{
"defvarmy$x95$aggregate": 5.0
}
See also:
REST Programming
REST is a straightforward technique providing web services through simple HTTP requests. The
following are some notes on REST programming in different environments.
Page 203
The response data from DSI is best served in JSON. A relatively new specification for JSON
processing in Java is available through JSR 353.
See:
Charting entities
Over time and as events arrive and are processed at DSI, we can imagine that DSI will build up
knowledge contained in the form of entities. Each entity will represent or model some distinct thing
and will have attributes associated with it. Such an array of data lends itself well to being charted.
Here is an example of a potential chart:
In this example, each column represents an underwriter and the height of a bar represents their
probability of approving a loan. As new events arrive indicating whether or not they approved
loans, their probability of approval will be recalculated and saved as a property of the entity. When
the graph is refreshed, the new data associated with the entity will be visually reflected in the chart.
DSI doesn't come with any charting capabilities but does provide a series of REST exposed APIs
including one called "List Entity Instances". Using this API, we can pass in the solution in which
an entity is defined and also the name of the entity type we wish to query and what will be returned
is a list of the entities known to DSI including their values. From this raw data, we can feed it into
a JavaScript charting package such as "jqPlot" to visualize the chart.
See also:
Page 204
jqPlot
Patterns
When we build out rules, the chances are high that the "flavor" of the rule has been written before.
Let us look at the simplest rules:
Taking these as a whole, we see that despite their apparent differences, they are all very similar.
They have the following in common:
When X Event happens, then do Y Action
This is what we consider a pattern. In principle, all rules will conform to one or more patterns. The
following describe some of the more common (and in some cases trivial) patterns that we come
across. They may be used as future references should you need to implement something similar.
Alternatively, they may be used as a study aid to ensure that you understand what is happening
when you read them.
Notice that we guard the action with a check to ensure that we are not already bound.
Page 205
Page 206
Sources of Events
In our journey so far we have considered only a couple of sources of events and how those can be
delivered to ODM DSI. Specifically, we have looked at XML formatted data arriving over REST or
JMS. Now we look at some additional sources of events and see how they can be used in this arena.
An application performs a SQL Insert into the database which is recorded in a table which is
"magically" published as an event to the event cloud.
If we limit our consideration to IBM's DB2 database, we find that it has some elegant technology
that makes this story possible. First, we begin by examining the notion of a DB "trigger". A trigger
is the database's automatic execution of database side logic whenever it detects a modification to a
table.
The reference documentation on DB2 triggers can be studied in detail; for our purposes, we will only
consider a subset. Examine the following:
CREATE TRIGGER <Trigger Name>
AFTER INSERT ON <Table Name>
REFERENCING NEW AS N
FOR EACH ROW
<Statement>
This will execute a statement once for each row that is inserted. The variable "N" will contain the
new row values. What remains now is to determine a good statement to execute that will
cause an event to be emitted.
IBM's DB2 has native WebSphere MQ support. This means that we can write a message directly
into a queue from within SQL.
The DB2 function called "MQSEND" can put an arbitrary string message in a queue. For our
purposes, the format of the function is:
MQSEND('<service name>', <message data>)
We don't explicitly name the queue, instead we refer to the queue by its handle of "service name"
which is a lookup on a table called "DB2MQ.MQSERVICE" which contains the actual queue target.
Page 207
Unfortunately, this support seems to require DB2 Federation support, which appears to be a
separate product ... so for the purpose of this section, we will look and see if there isn't an
alternative approach available to us.
As an alternative to using messaging to send events, we can use REST requests. Within a DB2
environment, we can write procedures in Java which, when called, will execute a method from
within a Java class. If that custom Java code were to emit a REST request, we would have all the
parts we need. The DB2 procedure could then be invoked as a result of a trigger that would send
the event via REST correctly formatted.
What follows is a worked example:
First we create a Java class that looks as follows:
package com.kolban.odmci;

import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.io.Writer;
import java.net.HttpURLConnection;
import java.net.URL;
import java.sql.Clob;
import java.sql.SQLException;

public class DB2Procedures {
// Log target for stack traces; in a real deployment this might be a file.
private static PrintWriter log = new PrintWriter(System.err);

public static void sendEvent(String url, Clob eventClob) throws SQLException {
try {
String event = eventClob.getSubString(1L, (int) eventClob.length());
publishEvent(url, event);
} catch (Exception e) {
e.printStackTrace(log);
log.flush();
}
}

private static void publishEvent(String urlStr, String event) throws Exception {
URL url = new URL(urlStr);
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setDoOutput(true);
conn.setUseCaches(false);
conn.setAllowUserInteraction(false);
conn.setRequestProperty("Content-Type", "application/xml");
OutputStream out = conn.getOutputStream();
Writer writer = new OutputStreamWriter(out, "UTF-8");
writer.write(event);
writer.close();
out.close();
if (conn.getResponseCode() != 200) {
throw new IOException(conn.getResponseMessage());
}
conn.disconnect();
} // End of publishEvent
} // End of class
With the JAR made known to DB2, we can now create a procedure that calls the JAR:
create procedure sendEvent(IN url varchar(100), IN eventText clob)
language java
parameter style java
no sql
fenced threadsafe
deterministic
external name 'ODMCIPROCS:com.kolban.odmci.DB2Procedures!sendEvent'
This procedure takes two parameters. The first is the URL that ODM DSI is listening upon for
incoming HTTP events. The second parameter is the event payload itself. We have chosen a
"CLOB" data type as this has an unbounded size and we didn't want to limit the size of the XML
payload message.
At this point we now have a Java procedure that sends data to ODM DSI as an event payload and
we are able to call it as a DB2 statement. What finally remains is for us to build a trigger such that
an insertion of a new row into a table will cause the event to be sent where the payload of the event
Page 208
The resulting trigger is registered on a table called "T1" which has columns "col1", "col2" and
"col3".
See also:
Writing DB2 Java Procedures and FunctionsDB2 TriggersDB2 and XMLMaking a REST call from Java
Page 209
Now let us contrast this with how we might model a DSI event. Imagine we created the following
definition in a BMD:
a BPMBO is a business event.
a BPMBO has an 'a' (text).
a BPMBO has a 'b' (integer).
a BPMBO has a 'c' (date & time).
a BPMBO can be 'd'.
Obviously the XML exposed by BPM is not the same XML expected by DSI, so how can we handle
this? Fortunately, DSI supports XSLT transformation. If we can build an XSLT stylesheet, we can
map from the BPM generated XML to the expected DSI XML. The XSLT stylesheet mechanisms
of the connectivity definition can be leveraged for this.
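Such a mapping can be sketched with the standard Java XSLT APIs. The stylesheet and element names below are invented for illustration; a real mapping would target the element names of the actual BPM payload and the actual DSI event model.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class BpmToDsi {
    // Hypothetical stylesheet: reshapes a BPM-generated <BPMBO> payload into
    // a <dsiEvent> shape. Both element names are invented for illustration.
    private static final String XSLT =
        "<xsl:stylesheet version='1.0' "
      + "    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "  <xsl:output method='xml' omit-xml-declaration='yes'/>"
      + "  <xsl:template match='/BPMBO'>"
      + "    <dsiEvent>"
      + "      <a><xsl:value-of select='a'/></a>"
      + "      <b><xsl:value-of select='b'/></b>"
      + "    </dsiEvent>"
      + "  </xsl:template>"
      + "</xsl:stylesheet>";

    public static String transform(String bpmXml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(XSLT)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(bpmXml)),
                new StreamResult(out));
        return out.toString();
    }
}
```

In DSI itself the stylesheet would be attached to the connectivity definition rather than invoked by hand; the sketch simply shows the kind of mapping the stylesheet performs.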
An example piece of Java code for an implementation of a Java service might be:
package kolban;

import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.HttpURLConnection;
import java.net.URL;
Page 210
public class SendEvent { // class name lost at the page break; name assumed
public static void sendEvent(String url, String message) {
try {
URL urlObj = new URL(url);
HttpURLConnection conn = (HttpURLConnection) urlObj.openConnection();
conn.setRequestMethod("POST");
conn.setDoOutput(true);
conn.setUseCaches(false);
conn.setAllowUserInteraction(false);
conn.setRequestProperty("Content-Type", "application/xml");
OutputStream out = conn.getOutputStream();
Writer writer = new OutputStreamWriter(out, "UTF-8");
writer.write(message);
writer.close();
out.close();
if (conn.getResponseCode() != 200) {
throw new IOException(conn.getResponseMessage());
}
conn.disconnect();
} catch (Exception e) {
e.printStackTrace();
}
} // End of sendEvent
} // End of class
// End of file
This Java code can then be packaged in a Jar and the Jar added to a BPM Process App as a server
managed file. Next we can define a BPM integration service (in this case called BPM Send Event)
which contains a Java component:
The configuration of the Java component can then point to the Java code:
Where the url is the URL of the endpoint of the DSI call and the message is an XML document.
Now, from within a BPM BPD, we can invoke the Integration Service as a step in the process:
Page 211
It is important to note that the Java coding and the creation of the Integration Service are a one-time
deal which can be easily imported as-is from the IBM samples. A designer of a BPM solution can
simply "use" the BPM Send Event service without ever having to know how it works.
The XSLT mapping still has to be performed by hand to map the fields in the BPM business object
to the fields in the expected incoming event but that is not a complex procedure. If demand became
high enough, it is likely that task could even be automated with some code that was given both a
BPM business object definition and a DSI event definition ... but we aren't going to go any further
down that path here.
It is the SolutionGateway object that is the key to the majority of our functions.
The SolutionGateway provides a variety of "submit()" methods that can be used to submit
an event for ODM DSI processing. The event object passed must be created by an
Destinations of Events
Not only does DSI have the ability to accept events as input, it can also transmit events generated
from within DSI outbound to external systems. In this section we consider some examples of how
this might be used to interconnect with interesting systems.
EJB Deployment
One solution for deployment is to build a Singleton Session Bean which encapsulates the Camel
logic and rules. This can then be deployed to Liberty as an application which starts once deployed.
For example, the following is an EJB which, when deployed to Liberty, will start when Liberty
starts and handle the Camel processing (this example omits the Camel logic but you can see
where it goes):
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Singleton;
import javax.ejb.Startup;

import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

@Singleton
@Startup
public class EJB1 {
    private CamelContext context;

    /**
     * Default constructor.
     */
    public EJB1() {
    }

    @PostConstruct
    public void applicationStartup() {
        System.out.println("Application Starting");
        runCamel();
    }

    @PreDestroy
    public void applicationShutdown() {
        try {
            System.out.println("Application ending");
            context.stop();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void runCamel() {
        try {
            context = new DefaultCamelContext();
            // Camel code here.
            context.start();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Now that we have a framework for running Camel inside of DSI, we are open to all the capabilities
of Camel itself. Specifically, the ability to read from JMS queues and transform data.
For example, let us imagine that DSI is writing to a queue called "Q1" that has an associated JMS
Connection Factory registered in JNDI as "jms/CF". We could handle that with:
@Resource(name = "jms/CF")
private ConnectionFactory cf;
// ...
context = new DefaultCamelContext();
context.addComponent("jms", JmsComponent.jmsComponentAutoAcknowledge(cf));
context.addRoutes(new RouteBuilder() {
@Override
public void configure() throws Exception {
from("jms:Q1"). //
to("file:C:/Projects/ODMDSI/junk/camel/outdir");
}
});
OSGI deployment
Another technique for deploying a Camel solution is to use OSGi bundles. From a Liberty
perspective, this is likely to be the best technique even if it requires a tad more setup to get running.
Using this story, we will deploy Camel as a set of OSGi bundles and then add an extra bundle to
own the Camel route. Thankfully, Camel is already fully OSGi compliant and simply placing the
necessary Camel supplied JARs in an appropriate bundle repository is sufficient to register Camel
for use. One downside, and it is one that is likely present in other techniques, is that some of the
components pre-supplied by Camel rely on Spring and we really want to avoid using Spring inside
an OSGi framework. The most immediate implication of that is the JMS component which is
heavily built on top of Spring. Thankfully, after a few hours work, we were able to come up with a
brand new custom component that provides generic JMS access without any dependency at all on
Spring.
BPM exposes the ability to perform both of these tasks as REST APIs, including one to start a new
process instance.
BPM requires that REST requests be authenticated when they arrive. As such, we must set up the
outbound HTTP request with a user ID and password.
The design of the MDB is the interesting part. It will receive a JMS TextMessage that will
contain the XML representation of the emitted event. BPM can receive either XML or JSON
encoded data. My preference would be to pass JSON to BPM which will mean that we will have to
convert the XML to a JSON string.
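As a sketch of how such a conversion might be done using only the JDK (the class name and element names here are invented, and this handles only a flat, single-level XML document):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class XmlToJson {
    // Convert a flat XML document (one level of child elements, each
    // containing only text) into a JSON object string.
    public static String convert(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList children = doc.getDocumentElement().getChildNodes();
        StringBuilder json = new StringBuilder("{");
        boolean first = true;
        for (int i = 0; i < children.getLength(); i++) {
            Node child = children.item(i);
            if (child.getNodeType() != Node.ELEMENT_NODE) {
                continue; // skip whitespace and comment nodes
            }
            if (!first) {
                json.append(",");
            }
            first = false;
            json.append("\"").append(child.getNodeName()).append("\":\"")
                .append(child.getTextContent()).append("\"");
        }
        return json.append("}").toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(convert(
            "<sale><customerId>C1</customerId><amount>10</amount></sale>"));
    }
}
```

A real implementation would also need to escape JSON special characters and handle nested elements; a library such as Jackson would normally be used instead.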
BPM Advanced includes support for Service Component Architecture (SCA). Included in this capability is the ability for BPM to listen for
incoming messages on a variety of inbound transports including HTTP, JMS, MQ, files and many
others. When a message arrives, the message can be transformed via a rich mediation
transformation engine and then emitted onwards. The destination of the message can be a variety of
targets including the BPM process runtime. Putting this another way, BPM Advanced can receive
messages over a variety of protocols, transform the content of those messages and then use the
arrival of the message plus its content to start a BPM process.
This sounds very much like what we need in order to start a process instance from a DSI emitted
event.
Let us now look at a schematic of how this would work. We start by realizing that an SCA module
can be deployed as part of a BPM process app. Here is an example:
What we are illustrating here is an SCA module that listens on an incoming HTTP transport and,
when a message arrives, its content is transformed and then used as the input to a new instance of a
BPD process.
The reason this helps us is that when an event is emitted from DSI, it can be emitted over an HTTP
transport. If the endpoint of the DSI HTTP connection is mapped to the input of the SCA module's
HTTP SCA Export then when an event is emitted by DSI, an instance of this SCA module will be
fired up and given the event XML document as input.
Our next puzzle is to consider how the event payload of the emitted event from DSI can be used as
input to the BPM process. This is actually extremely simple and elegant. When we model an event
in DSI, we can then export the model of that event as an XML Schema Definition (XSD). That
schema can then be imported into BPM Advanced and used as the model for data arriving at the
SCA module. Since we will already have the modeled data that is expected to be supplied as input
into the BPM process, the mediation transformation can be used to map the DSI event data to the
BPM process input data. This is achieved using graphical modelers and is extremely easy to do:
Because we are doing the transformation at "receiver makes good", there is no need to use XSLT
transformation at the DSI side of the house.
OSGi
Throughout the documentation and usage of ODM DSI we see references to something called
"OSGi". It is useful to spend a few moments discussing this.
First, we won't be covering OSGi in detail. It is far too big a subject and is covered in other books
and materials. What we will be looking to capture here are the core notes on using OSGi with
ODM DSI.
A simplistic way of thinking of the value of OSGi is that it encapsulates function in modules,
exposing only what is desired to be exposed and explicitly declaring what it needs.
Imagine the alternative. In Java today, I compile a file called com.kolban.MyThing.java
and I get a new file called com.kolban.MyThing.class. This could then be used to construct
instances of Java Objects. Great... that's easy enough. I can put this class file in a JAR with other
class files and give you that JAR for usage. Great so far. Now, if you want to use MyThing do
you have everything you need?
The answer, by itself, is unknown. You may find that the class expects other classes to be on the
classpath. How do you find out? You run it until it fails. With OSGi, we explicitly declare ALL the
expectations of the function and hence always know exactly what is needed in order to run it.
Versioning is another issue. What if you write a solution against MyThing at version 1 and, in
version 2, I remove a method that was previously exposed? That is obviously not good practice on
my part but it is perfectly legal from a Java language perspective. OSGi allows us to declare
versions of dependencies. Try to include two versions of com.kolban.MyThing.class on
one classpath and see how far you get.
The core benefits of OSGi are:
Dynamic replacement
Export-Package The set of packages exposed to other bundles. Multiple packages are
comma separated.
Bundle-Activator The class that implements the activator for the bundle
Bundle-ClassPath The bundle internal classpath. This is where classes inside the
bundle look for class resolution. This has a default of "." which means the root of the
bundle JAR.
Bundle-Description
Bundle-DocURL
Bundle-Category
Bundle-Vendor
Bundle-ContactAddress
Bundle-Copyright
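To make these headers concrete, here is a hypothetical MANIFEST.MF sketch (the bundle name, packages and versions are invented for illustration):

```text
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.kolban.mything
Bundle-Version: 1.0.0
Bundle-Activator: com.kolban.mything.Activator
Bundle-ClassPath: .
Export-Package: com.kolban.mything.api;version="1.0.0"
Import-Package: org.osgi.framework;version="1.6.0"
```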
See also:
OSGi Alliance
IBM redbook - Getting Started with the Feature Pack for OSGi Applications and JPA 2.0 SG24-7911-00 2010-12-02
OSGi in practice
developerWorks - Getting Started with OSGi Applications: Bundle Lifecycle (Part 1) 2012-07-27
developerWorks - Getting Started with OSGi Applications: OSGi Services and Servlets (Part 2) 2012-07-30
developerWorks - Getting Started with OSGi Applications: Blueprint Container (Part 3) 2012-07-30
developerWorks - Getting Started with OSGi Applications: Bundle Repositories (Part 4) - 2012-08-01
developerWorks - Developing enterprise OSGi applications for WebSphere Application Server 2010-07-14
developerWorks - Best practices for developing and working with OSGi applications - 2010-07-14
Bundle Activators
A Bundle Activator is a class which implements the BundleActivator interface. It provides a
way for a bundle to interact with the lifecycle of the OSGi framework. The interface has two
methods that need to be implemented: start(BundleContext context), called when the
bundle is started, and stop(BundleContext context), called when it is stopped.
Bundles which include activators must also specify additional information in the MANIFEST.MF
including:
Bundle-Activator
Bundle Listeners
A Bundle Listener is a class which implements the BundleListener interface.
Using registerService(), a bundle can offer itself up to the OSGi service registry for
utilization. The object returned is a ServiceRegistration object which can be used to update
the properties of a previously registered service. This also includes
ServiceRegistration.unregister() which unregisters the service.
As a consumer of a service, one would use the getServiceReference() call to retrieve a
ServiceReference object. Note that the ServiceReference is not the same as the usable
service itself. In order to get a handle to the target service one must make a call to
getService() passing in the previously received ServiceReference.
When we are finished using a service, we can call ungetService() to tell the
framework that we are done with our reference. This allows us to be good citizens.
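The Greeting interface under discussion was lost to a page break in the source; a minimal reconstruction, consistent with the implementation class shown further on, would be:

```java
// Minimal reconstruction of the Greeting interface: a single
// operation that greets the named person.
public interface Greeting {
    void greet(String name);
}
```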
I can write a Java program that uses this interface pretty easily. For example:
public void main() {
    Greeting greeting;
    // Create a greeting
    // code to create a greeting here
    greeting.greet("Bob Jones");
}
As a user of the interface, I don't have to know how it is implemented ... but ... if you look closely, I
appear to be responsible for creating an instance of the Greeting interface. Typically, this would
mean that there is some class that looks as follows:
public class Greeting_impl implements Greeting {
public void greet(String name) {
System.out.println("Hello " + name);
}
}
This works and is commonly how it is done, but now something rather ugly has happened. I have
now exposed an implementation class to my programmers. Instead of this, I would have liked the
implementation of my Greeting to be injected. I would like it not to be tightly coupled to my code.
This is where the OSGi Blueprint story comes into play.
Blueprint XML files are placed in the folder OSGI-INF/blueprint. The commonly chosen
name for the XML file is "blueprint.xml". If one doesn't want to use OSGI-INF/blueprint as
the folder, the folder name can be specified in the Bundle-Blueprint entry in MANIFEST.MF.
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
<service id="MyGreeting" interface="com.kolban.Greeting">
<bean class="com.kolban.Greeting_impl"/>
</service>
</blueprint>
Within a blueprint XML file, we can define several different major concepts.
See also:
developerWorks - Building OSGi applications with the Blueprint Container specification 2009
argument
class
property
factory-method
The name of a method to be called to construct an instance if the factory pattern is being used.
scope
Either singleton or prototype. With singleton, the same object is returned each time an instance is
needed. With prototype, a new object instance is created.
init-method
destroy-method
A method to be called on the bean before it is destroyed. This is only applicable for beans of scope
type singleton.
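A bean entry exercising several of these attributes might look like the following sketch (the class, method and property names are invented):

```xml
<bean id="myBean" class="com.kolban.MyBean"
      scope="singleton"
      init-method="setup"
      destroy-method="cleanup">
    <!-- Constructor argument and property injection -->
    <argument value="hello"/>
    <property name="count" value="5"/>
</bean>
```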
A reference to a bean.
interface
auto-export
service-properties
ranking
Reference manager
id
interface
xmlns:bptx="http://aries.apache.org/xmlns/transactions/v1.0.0"
xmlns:jpa="http://aries.apache.org/xmlns/jpa/v1.0.0"
This will inject an instance of an EntityManagerFactory into the bean. We can edit this in
the Blueprint XML editor with:
See also:
Java Persistence
Examples of Blueprint
Injecting a service reference
Imagine that we have a bundle that exposes a service for a Java class called
"com.mytest.MyClass". Now imagine that we wish to reference an instance of that service in
our current bean. In our current blueprint.xml we could define:
<reference id="ref1" interface="com.mytest.MyClass">
</reference>
<bean class="com.xyz.MyBean">
<property name="myClass" ref="ref1" />
</bean>
This will cause the injection of an instance of MyClass into MyBean via:
class MyBean {
    ...
    public void setMyClass(MyClass myClass) {
        ...
    }
    ...
}
4. Build a servlet.
5. Deploy as a WAR.
Here is an example of the MANIFEST.MF that might be generated:
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: WAB1
Bundle-SymbolicName: WAB1
Bundle-Version: 1.0.0.qualifier
Bundle-ClassPath: WEB-INF/classes
Bundle-RequiredExecutionEnvironment: JavaSE-1.7
Web-ContextPath: /WAB1
Import-Package: javax.el;version="2.0",
javax.servlet;version="2.5",
javax.servlet.annotation,
javax.servlet.http;version="2.5",
javax.servlet.jsp;version="2.0",
javax.servlet.jsp.el;version="2.0",
javax.servlet.jsp.tagext;version="2.0"
Notice that the Web-ContextPath supplies the context path for the module.
Application-Name
Application-SymbolicName
Application-Version
Application-Content
Application-ExportService
An OSGi application is packaged as a ZIP file with extension of ".eba" (Enterprise Bundle
Archive).
Once enabled, we can interact with the OSGi console by telneting to it:
telnet localhost 5471
ss - list the installed bundles and their states
ss name - list only the bundles whose name matches
start id - start the bundle with the given id
stop id - stop the bundle with the given id
diag id - show diagnostic (resolution) information for the bundle
install URL - install a bundle from the given URL
uninstall id - uninstall the bundle
bundle id - show details of the bundle
headers id - show the manifest headers of the bundle
services filter - list registered services matching the filter
packages filter - list imported/exported packages matching the filter
refresh id - refresh the bundle
See also:
This entry creates a server.xml definition into which fileset references can be added.
Debugging OSGi
If a bundle can't be found, we may get a message similar to the following:
00000035 com.ibm.ws.app.manager.esa.internal.DeploySubsystemAction
A CWWKZ0404E: An exception was generated when trying to resolve the contents of the application
MyBundles. The exception text from the OSGi framework is:
Unable to resolve Bundle1_1.0.0.201503012241.jar:
missing requirement org.apache.aries.subsystem.core.archive.ImportPackageRequirement:
namespace=osgi.wiring.package,
attributes={},
directives={
filter=(&(osgi.wiring.package=q2015_03_01)(version>=0.0.0))
},
resource=Bundle1_1.0.0.201503012241.jar
It will not be formatted as nicely as the above but will instead be written as one line of text. In the
example above, we are basically being told that an attempt to resolve a package called
"q2015_03_01" failed while trying to load the bundle contained in the JAR file called
"Bundle1_1.0.0.*.jar".
Here is another larger example:
0000003e com.ibm.ws.app.manager.esa.internal.DeploySubsystemAction
A CWWKZ0404E: An exception was generated when trying to resolve the contents of the application
Camel1. The exception text from the OSGi framework is:
Unable to resolve OSGITest1_1.0.0.201503012251.jar:
missing requirement org.apache.aries.subsystem.core.archive.ImportPackageRequirement:
namespace=osgi.wiring.package, attributes={},
directives={filter=(&(osgi.wiring.package=org.apache.camel.blueprint)(version>=2.14.1))},
resource=OSGITest1_1.0.0.201503012251.jar
[caused by:
Unable to resolve org.apache.camel.camel-blueprint;2.14.1;osgi.bundle:
missing requirement org.apache.aries.subsystem.obr.internal.FelixRequirementAdapter:
namespace=osgi.wiring.package,
attributes={},
directives={cardinality=single, filter=(&(osgi.wiring.package=org.apache.camel.builder)
(version>=2.14.1)(version<=2.14.2)(!(version=2.14.2))), resolution=mandatory},
resource=org.apache.camel.camel-blueprint;2.14.1;osgi.bundle
[caused by: Unable to resolve org.apache.camel.camel-core;2.14.1;osgi.bundle:
missing requirement org.apache.aries.subsystem.obr.internal.FelixRequirementAdapter:
namespace=osgi.wiring.package,
attributes={},
directives={cardinality=single, filter=(&(osgi.wiring.package=org.slf4j)(version>=1.6.0)
(version<=2.0.0)(!(version=2.0.0))), resolution=mandatory},
resource=org.apache.camel.camel-core;2.14.1;osgi.bundle
[caused by: Unable to resolve slf4j.api;1.6.6;osgi.bundle:
missing requirement org.apache.aries.subsystem.obr.internal.FelixRequirementAdapter:
namespace=osgi.wiring.package,
attributes={},
directives={cardinality=single, filter=(&(osgi.wiring.package=org.slf4j.impl)
(version>=1.6.0)), resolution=mandatory},
resource=slf4j.api;1.6.6;osgi.bundle
]
]
]
As you can see this quickly becomes a very complex challenge. Again, this turned out to be a
missing package called "org.slf4j.impl".
Here is an example of a failure at application install time:
0000003b com.ibm.ws.app.manager.esa.internal.DeploySubsystemAction
A CWWKZ0403E: A management exception was generated when trying to install the application Camel1
into an OSGi framework. The error text from the OSGi framework is:
Resource does not exist:
org.apache.aries.subsystem.core.archive.SubsystemContentRequirement:
namespace=osgi.identity,
attributes={},
directives={filter=(&(osgi.identity=OSGITest1)(type=osgi.bundle)(version>=1.0.0))},
resource=org.apache.aries.subsystem.core.internal.SubsystemResource@90612196
OSGi tools
Bndtools
Bnd
WebSphere Liberty
The IBM WebSphere Liberty Core is the WAS environment used to host ODM DSI. As of DSI
v8.7, the version of Liberty is v8.5.5.
An instance of a server can be created with "server create <serverName>".
See also:
WASdev Community
KnowledgeCenter 8.5.5
Redbook Configuring and Deploying Open Source with WebSphere Application Server Liberty Profile - SG24-8194-00 - 2014-04-03
Redbook WebSphere Application Server Liberty Profile Guide for Developers SG24-8076-01 2013-08-23
Redbook WebSphere Application Server v8.5 Administration and Configuration Guide for Liberty Profile SG24-8170-00 - 2013-08-27
Configuration
The Liberty profile is configured through a file called "server.xml" which can be found at:
<Liberty>/usr/servers/<Server>/server.xml
The configuration can also be edited through an Eclipse view called "Runtime Explorer". Once this
is opened, we are presented with a list of servers:
By right-clicking on the "server.xml" entry and selecting open, we can open an editor for the
server properties:
wlp.user.dir - ${wlp.install.dir}/usr
server.config.dir - ${wlp.user.dir}/servers/<Server>/
server.output.dir
shared.app.dir - ${wlp.user.dir}/shared/apps
shared.config.dir - ${wlp.user.dir}/shared/config
shared.resource.dir - ${wlp.user.dir}/shared/resources
Development
The free Eclipse plugins for Liberty can be found at the IBM download site. The proper name of
these components is "Liberty Profile Developer Tools for Eclipse". Dropping
those on the Eclipse platform starts the installation.
An alternative source for the package is to use the Eclipse Marketplace and search for "Liberty":
Note: As of ODM DSI v8.7, this package is pre-installed in the Eclipse environment provided with
the product.
Features
To keep the Liberty profile as compact and performant as possible, only the features that you will
use need be added to the server. These are defined in the <featureManager> element.
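For example, a server hosting servlets and using JDBC might declare (the exact feature list depends on what your server actually uses):

```xml
<featureManager>
    <feature>servlet-3.0</feature>
    <feature>jdbc-4.0</feature>
</featureManager>
```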
Deploying Applications
Applications can be deployed in a variety of ways. The most commonly used one is to drop the
archive for the application into a known directory that is being monitored.
By default this is <ROOT>/runtime/wlp/usr/servers/<serverName>/dropins
Another is to explicitly define the application within the server.xml file.
The server.xml definition is called <application> which has the following properties:
location
id
name
type
context-root
autoStart
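A sketch of such an entry (the names and paths are invented for illustration):

```xml
<application id="myApp" name="MyApp" type="war"
    location="MyApp.war" context-root="/myapp" autoStart="true"/>
```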
Security
SSL Security
When making HTTPS requests to a DSI server, the browser (or client) must trust the certificate
presented by the DSI server. This means retrieving the DSI certificate and adding it to the trust
store for the browser (client).
For Java clients, an excellent way to achieve this is through the KeyStore Explorer tool.
Immediately after launch it looks as follows:
We can now open security stores from the File > Open menu entries. For a typical Java JVM
the trust store will be found in the file called:
<JVM>/lib/security/cacerts
When you try to open it, you will be prompted for a password:
Once loaded, you will be shown the certificates contained within. To add a certificate for the WLP
server, click on the browser icon:
when prompted, enter the hostname and port number for your DSI server:
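Alternatively, the same import can be performed from the command line with the JDK keytool utility. This is a sketch: the alias and certificate file name are placeholders, and "changeit" is the default cacerts password:

```shell
keytool -import -trustcacerts -alias dsiserver \
    -file dsi-server.crt \
    -keystore <JVM>/lib/security/cacerts -storepass changeit
```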
DB data access
Java EE applications can use JDBC to query databases.
If the JDBC feature is not installed, you will be prompted to add it:
The above was performed using the Liberty configuration editor. The result is the following XML
fragment in the server.xml configuration file:
<dataSource jndiName="jdbc/TEST">
<jdbcDriver>
<library>
<fileset dir="C:\Program Files\IBM\SQLLIB\java"></fileset>
</library>
</jdbcDriver>
<properties.db2.jcc databaseName="TEST"/>
</dataSource>
Note: the following has been shown to work; the above may need tailoring:
<dataSource jdbcDriverRef="DB2JDBCDriver" jndiName="jdbc/TEST" type="javax.sql.DataSource">
<properties.db2.jcc databaseName="TEST" password="{xor}Oz1tPjsyNjE=" portNumber="50000"
serverName="localhost" traceDirectory="C:/Projects/ODMCI/Trace" traceFile="trace.txt" traceLevel="5"
user="db2admin"/>
<containerAuthData password="{xor}Oz1tPjsyNjE=" user="db2admin"/>
</dataSource>
<jdbcDriver id="DB2JDBCDriver">
<library>
<fileset dir="C:/Program Files/IBM/SQLLIB/java" includes="db2jcc4.jar db2jcc_license_cu.jar"/>
</library>
</jdbcDriver>
The "Sale" event carries three attributes:
customerId
amount
description
Our goal here is that on detection of a "Sale" event we wish to save the details of the sale in a
database.
Here is the code for a process method in a Java Agent that will do just that:
public void process(Event event) throws AgentException {
    if (!(event instanceof Sale)) {
        printToLog("Not a Sale event");
        return;
    }
    Sale saleEvent = (Sale) event;
    printToLog("We have received a sale event: " + saleEvent.getCustomerId() + ", " +
        saleEvent.getAmount() + ", " + saleEvent.getDescription());
    try {
        InitialContext ic = new InitialContext();
        DataSource ds = (DataSource) ic.lookup("jdbc/TEST");
        // Use a PreparedStatement rather than string concatenation to
        // avoid SQL injection, and close the resources when done.
        try (Connection con = ds.getConnection();
             PreparedStatement stmt = con.prepareStatement(
                 "insert into SALES (ID, AMOUNT, DESCRIPTION) values (?, ?, ?)")) {
            stmt.setString(1, saleEvent.getCustomerId());
            stmt.setObject(2, saleEvent.getAmount());
            stmt.setString(3, saleEvent.getDescription());
            stmt.execute();
            System.out.println("We executed a SQL Insert of a sale event");
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Before we can deploy the Java Agent, there is one more thing we must do. Since we are leveraging
additional Java EE packages such as JNDI and JDBC, we must tell the Java Agent project that we
have a dependency upon them.
To perform this task, first we examine our Java Agent project and locate the META-INF folder and
the MANIFEST.MF file within it.
Next we open this file in the Manifest editor. We now switch over to the Dependencies tab. In the
imported Packages area, add two packages:
Save and close the MANIFEST.MF file and we are ready to deploy.
Servlets
From Eclipse, we can create a servlet using the following recipe:
1. Open the Java EE perspective
2. Create a new web project
6.
7. The entry added is the JAR called com.ibm.ws.javaee.servlet.*.jar that is found
in <DSIRoot>/runtime/wlp/dev/api/spec.
8.
JTA
InitialContext ctx = new InitialContext();
UserTransaction userTran = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
userTran.begin();
// do some work
userTran.commit();
Java Persistence
The current Liberty supports JPA 2.0 (JSR 317). It is not yet at JPA 2.1 (JSR 338).
To flag a class as an Entity we use the @Entity annotation.
The primary key within the entity has the @Id annotation.
@Entity
public class MyClass {
    @Id
    private String key;
    private String x;
    private String y;
    // Getters and setters ...
}
name The name of the entity. Also the name of the table used to house persisted entities.
The default for this element is the name of the class.
EntityManager
An EntityManager is created from an EntityManagerFactory. An
EntityManagerFactory has associated with it a collection of settings called the "persistence
unit" that declare how EntityManager instances should interact with the persistence provider.
The EntityManagerFactory itself comes from an object called Persistence.
EntityManagerFactory myEntityManagerFactory =
Persistence.createEntityManagerFactory("MyPersistenceUnit");
When querying entities, we do not use standard SQL but instead something called the Java
Persistence Query Language (JP QL). An object called Query encapsulates a query. A Query is
obtained from the EntityManager. To execute the query and get the results, we can use the
getResultList() method found on the Query object.
For example:
TypedQuery<MyClass> query = myEntityManager.createQuery("SELECT e FROM MyClass e", MyClass.class);
List<MyClass> myClasses = query.getResultList();
Persistence Unit
The persistence unit is the configuration associated with the EntityManagerFactory object
that describes how to work with the back-end data store. For a Java SE environment, it is an XML
document that is called "persistence.xml". A persistence unit is a named entity.
Here is a sample file:
<persistence>
<persistence-unit name="MyPersistenceUnit"
transaction-type="RESOURCE_LOCAL" >
<properties>
<property name="javax.persistence.jdbc.driver"
value="<Class name of JDBC Driver>" />
<property name="javax.persistence.jdbc.url"
value="<JDBC URL>" />
<property name="javax.persistence.jdbc.user"
value="<Userid>" />
<property name="javax.persistence.jdbc.password"
value="<Password>" />
</properties>
</persistence-unit>
</persistence>
The name attribute of the persistence-unit is what is used when we create the
EntityManagerFactory via Persistence.createEntityManagerFactory().
Using DI, we can obtain an EntityManager using:
@PersistenceContext(unitName="MyPersistenceUnit")
EntityManager myEntityManager;
Physical Annotations
@Table The name and schema of the table to be used for an entity.
For example:
@Table(name="MYTABLE", schema="DB2ADMIN")
By default, the column name assumed for the DB is the same as that of the field.
@Lob Defines a field as representing either a CLOB or a BLOB.
@Enumerated When used with an enumeration type field, defines how the field should be stored
in the DB. Choices are EnumType.ORDINAL or EnumType.STRING.
@Temporal Used to define how Java time/date types are mapped to DB time/date types. Options
are TemporalType.DATE, TemporalType.TIME, TemporalType.TIMESTAMP.
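Pulling these physical annotations together, a sketch of an annotated entity might look as follows (the table, fields and enum are invented for illustration):

```java
import javax.persistence.*;

@Entity
@Table(name = "MYTABLE", schema = "DB2ADMIN")
public class MyRecord {
    @Id
    private String key;

    @Lob
    private byte[] photo;            // stored as a BLOB

    @Enumerated(EnumType.STRING)
    private Status status;           // stored as the enum constant name

    @Temporal(TemporalType.TIMESTAMP)
    private java.util.Date created;  // mapped to a DB TIMESTAMP

    // Getters and setters ...
}

enum Status { NEW, CLOSED }
```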
Logical Annotations
Flagging a field with @Basic declares it as being mapped using basic JPA mapping. Since this is
the default, adding this annotation does nothing other than provide documentation.
The eagerness of retrieving the value of a field can be supplied with the "fetch" element.
@Basic(fetch=FetchType.LAZY)
Id fields can have their values generated during a creation request. There are multiple schemes
available to us including:
GenerationType.AUTO
GenerationType.TABLE
GenerationType.SEQUENCE
GenerationType.IDENTITY
Mapping Types
Mapping of fields to columns is supported for most Java data types.
An annotation of @ManyToOne defines the field that follows it as a relationship. In order to access
the target of the relationship, our source table must have a column that is used to contain the foreign
key. This can be supplied with the @JoinColumn annotation:
@ManyToOne
@JoinColumn(name="FK_1")
private MyOtherObject myOtherObject;
A relationship can also be one-to-one and is identified as such using @OneToOne. The source
entity will have @JoinColumn and the target entity will have a mappedBy element on the
@OneToOne annotation.
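A sketch of the two sides of such a one-to-one relationship (the entity and column names are invented):

```java
import javax.persistence.*;

@Entity
class Person {
    @Id
    private String id;

    // Owning side: the PERSON table holds the foreign key column.
    @OneToOne
    @JoinColumn(name = "PASSPORT_FK")
    private Passport passport;
}

@Entity
class Passport {
    @Id
    private String id;

    // Inverse side: mapped by the "passport" field on Person.
    @OneToOne(mappedBy = "passport")
    private Person owner;
}
```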
Configuration in Liberty
To use JPA in Liberty, the jpa-* feature must be added.
The Liberty implementation of JPA is based on Apache OpenJPA.
See also:
developerWorks - Developing and running data access applications for the Liberty profile using WebSphere Application Server Developer
Tools for Eclipse - 2012-12-05
developerWorks - JPA with Rational Application Developer 8 and WebSphere Application Server 8 2011-06-28
developerWorks - Using the Java Persistence API 2.0 services with WebSphere Process Server V7, Part 1: Generating the data model
2010-12-08
developerWorks - Using the Java Persistence API 2.0 services with WebSphere Process Server V7, Part 2: Generating the JPA entities
2010-12-08
developerWorks - Using the Java Persistence API 2.0 services with WebSphere Process Server V7, Part 3: Creating a stateless session EJB
2010-12-08
developerWorks - Using the Java Persistence API 2.0 services with WebSphere Process Server V7, Part 4: Creating an SCA client - 2010-12-08
developerWorks - Using the Java Persistence API 2.0 services with WebSphere Process Server V7, Part 5: Creating a BPEL process
2010-12-08
developerWorks - Using the Java Persistence API 2.0 services with WebSphere Process Server V7, Part 6: Generating the user interface 2010-12-08
Examples of JPA
Calling from an OSGi Servlet and bundles
In this example, we will assume that we have a DB table that contains customer records. Since our
story is all made up anyway, the schema for the table looks as follows:
The bundle will need to import (among others) the following packages:
javax.persistence
javax.sql
javax.transaction
Next we want to create an interface that will eventually expose our JPA writer ... the interface is
called TestJPA
package testjpa;
public interface TestJPA {
public void write();
}
In the META-INF folder, we need to create a persistence.xml file that defines the JPA persistence
unit:
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0"
xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
<persistence-unit name="MyPersistenceUnit"
transaction-type="JTA">
<jta-data-source>jdbc/MyDataSource</jta-data-source>
</persistence-unit>
</persistence>
Finally, we can use our bundle. Create a Web bundle with a servlet that contains:
package testweb;

import java.io.IOException;

import javax.servlet.ServletConfig;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

import testjpa.TestJPA;

@WebServlet("/TestWeb")
public class TestWeb extends HttpServlet {
    private static final long serialVersionUID = 1L;
    private TestJPA testJPA;

    public TestWeb() {
        super();
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        System.out.println("TestWeb Called");
        testJPA.write();
    }
}
JNDI Access
EJB
Liberty supports EJB 3.1. In order to use EJBs in Liberty, the feature called ejbLite must be
added.
Singleton EJBs
A singleton EJB allows us to define an EJB that can be started and stopped when the application as
a whole is deployed. This allows us to run background tasks. Initialization can be run in the
method annotated with @PostConstruct and release of resources in the method annotated with
@PreDestroy.
package ejb1;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Singleton;
import javax.ejb.Startup;
/**
* Session Bean implementation class EJB1
*/
@Singleton
@Startup
public class EJB1 {
/**
* Default constructor.
*/
public EJB1() {
}
@PostConstruct
public void applicationStartup() {
System.out.println("Application Starting");
}
@PreDestroy
public void applicationShutdown() {
System.out.println("Application ending");
}
}
JAXP
The Java API for XML Processing (JAXP) is supported in Liberty at the 1.4 level (JSR 206).
To build a DOM from XML, the following is an example:
DocumentBuilder documentBuilder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
ByteArrayInputStream bais = new ByteArrayInputStream(text.getBytes());
Document document = documentBuilder.parse(bais);
System.out.println("We have a document: " + document);
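Once a Document has been built, the DOM API can be used to pull data back out. The following self-contained sketch (the XML content and element names are invented for illustration) parses a string and reads the text of an element:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class DomExample {
    // Parse an XML string into a DOM Document and return the text of the first <title> element.
    static String firstTitle(String xml) {
        try {
            DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document document = builder.parse(
                new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return document.getElementsByTagName("title").item(0).getTextContent();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(firstTitle("<book><title>DSI</title></book>")); // prints DSI
    }
}
```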
JAXB
To generate the Java bindings from an XML Schema we use the xjc command. This is typically
supplied with the Java SDK. For Liberty, it is supplied in the <WLP>/bin/jaxb folder.
Typically we would use the form:
xjc -p <package> <file.xsd>
ObjectFactory.java
package-info.java
<Class Names>.java
JAXB creates Java classes that are models of the XML elements and data types. These classes
conform to the Java Beans conventions and have appropriate getters and setters for their content.
To create XML from an object consider:
package sensor1;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import sensor1.xml.ObjectFactory;
import sensor1.xml.Usage;
public class Sensor1 {
public static void main(String[] args) {
// TODO Auto-generated method stub
ObjectFactory of = new ObjectFactory();
Usage usage = of.createUsage();
usage.setId("myId");
try {
JAXBContext jaxbContext = JAXBContext.newInstance(Usage.class);
Marshaller jaxbMarshaller = jaxbContext.createMarshaller();
jaxbMarshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
//jaxbMarshaller.marshal(usage, System.out);
jaxbMarshaller.marshal(of.createUsage(usage), System.out);
} catch (Exception e) {
e.printStackTrace();
}
}
}
JMS
In order to use JMS, we need to enable some WLP features:
wasJmsClient-1.1: allows us to make JMS client calls within a WLP application.
wasJmsServer-1.0: enables the JMS provider implemented inside WLP.
jndi-1.0: the Java Naming and Directory Interface, which is where JMS resources make
themselves available.
<featureManager>
<feature>jndi-1.0</feature>
<feature>wasJmsServer-1.0</feature>
<feature>wasJmsClient-1.1</feature>
...
</featureManager>
With the inclusion of wasJmsServer, WLP will now perform the services of a JMS
provider. This means that WLP can host both queues and topics and act as a repository of
messages. In order for a message to be placed within a queue, we must first define those queues.
The resources for real physical queues within a WLP are defined under the Messaging Engine
category:
Once a Messaging Engine has been defined, we can add child attributes such as queues:
Once the queue entry has been added, we can specify details including the name of the physical
queue to create:
This is the same as making the following resource definitions within server.xml:
<messagingEngine>
<queue id="Q1" sendAllowed="true"/>
</messagingEngine>
By default WLP's messaging engine listens on port 7276 for insecure connections and 7286 for
secure connections. These will accept requests from any hosts.
An element called <wasJmsEndpoint> can be used to change these ports. For example:
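A minimal sketch of such a definition, with illustrative port values, might be:

```xml
<wasJmsEndpoint host="*"
                wasJmsPort="7280"
                wasJmsSSLPort="7290" />
```

The host attribute can be narrowed from "*" to restrict which hosts may connect.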
Once messaging engine definitions have been made and the server started, we can use JMX to
examine the state of both the messaging engine as well as the queues defined upon it.
Here is a JMX tree for the messaging engine:
Now that we have an internal messaging engine that is hosting a queue, we need to define the
corresponding JMS entries to refer to it from a JMS logical perspective.
First, we look at the JMS Queue Connection Factory.
This has a definition of:
<jmsQueueConnectionFactory jndiName="jms/qcf1" />
Next we look at the JMS queue definition. First we add a JMS Queue entry.
With the JMS definition we can now map it to the messaging engine queue:
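Expressed directly in server.xml, such a JMS queue definition mapped to the messaging engine queue might look like the following (the id value is illustrative):

```xml
<jmsQueue id="jmsQ1" jndiName="jms/Q1">
    <properties.wasJms queueName="Q1" />
</jmsQueue>
```

The queueName attribute names the physical queue hosted by the messaging engine, while jndiName is what applications look up.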
See also:
JMS Bindings
Chapter 6 Messaging Applications Redbook: WAS Admin and Config Guide for Liberty Profile - SG24-8170
import javax.annotation.Resource;
import javax.jms.*;
@Resource(name="jms/QCF1")
private QueueConnectionFactory myQCF;
@Resource(name="jms/Q1")
private Queue q1;
QueueConnection qconn = myQCF.createQueueConnection();
QueueSession qsess = qconn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
QueueSender qsender = qsess.createSender(q1);
TextMessage textMessage = qsess.createTextMessage("Hello World");
qsender.send(textMessage);
qsender.close();
qsess.close();
qconn.close();
Writing an MDB
A Message-Driven Bean (MDB) is a Java EE component that passively watches a JMS destination
(queue or topic) for incoming messages.
To create an MDB, use the Eclipse developer tools.
1. Switch to the Java EE Eclipse perspective
2. Create a new EJB project
At the conclusion of these steps, two new Eclipse projects will be found:
com.ibm.ws.javaee.ejb.*.jar
com.ibm.ws.javaee.jms.*.jar
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
/**
* Message-Driven Bean implementation class for: MDB1
*/
@MessageDriven(activationConfig = {
@ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
@ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/Q1") },
mappedName = "jms/Q1")
public class MDB1 implements MessageListener {
/**
* Default constructor.
*/
public MDB1() {
}
/**
* @see MessageListener#onMessage(Message)
*/
public void onMessage(Message message) {
System.out.println("We have received a message");
}
}
The WLP features required are:
jmsMdb
jndi
wasJmsClient
wasJmsServer
Create a record within it of type "WebSphere Embedded JMS Activation Specification" that points
to the JMS queue:
Outbound endpoints
WebSphere Application Server Liberty Profile Guide for Developers Chapter 5.4
WebSphere MQ Access
A Liberty application can interact with an MQ provider through JMS APIs. To allow this, two
features of the Liberty profile must be enabled:
wmqJmsClient-1.1
jndi-1.0
Next we must define a variable to specify the location of the MQ RAR file. This is normally found
at <MQROOT>/java/lib/jca/wmq.jmsra.rar. A suitable definition might look like:
<variable name="wmqJmsClient.rar.location" value="<MQROOT>/java/lib/jca/wmq.jmsra.rar" />
Once the localConnector feature is defined and the server running, we can use the Java supplied
tool called "jconsole" to examine the MBeans.
From within <ROOT>/jdk/bin we will find a command called "jconsole". When launched, it
will take a few seconds to start up while it looks for Java processes on your local
machine. Once found, it will present a dialog similar to the following:
In the "Local Process" section, we want to look for the process that corresponds to ODM DSI.
We will find that its name is similar to "ws-server.jar batch-file start cisDev".
We select that entry and click "Connect". We may get a prompt saying that a secure connection
failed, asking whether we would like to retry with an insecure connection. If we say yes, jconsole
is now attached to the ODM DSI server. From here, we have a wealth of features:
However, in the context of this section, the real power of jconsole is that it provides an MBean
examiner:
See also:
Java Jconsole
name: null
state: UNLOCKED
systemMessageId: 5B080744CB34578C_1500001
transactionId: null
type: JMS
The available trace levels (from higher detail to lower detail) are:
all
finest
finer
fine
detail
config
info
audit
warning
severe
fatal
Be cautious about switching on too much trace, as it can dramatically slow down your system's
operations.
Experience seems to show that merely changing the server.xml file will cause WLP to re-read
and honor trace setting changes.
During development, I choose to have the following trace entries switched on to fine:
com.ibm.ws.config.xml.internal.ConfigRefresher
com.ibm.ws.kernel.feature.internal.FeatureManager
com.ibm.ia.runtime.SolutionProviderMgr
ObjectGrid*
ObjectGridReplication
ObjectGridPlacement
ObjectGridRouting
plus many more
If one switches on ALL trace, one will drown in information. Switching on "*=all" is not worth
it. At a minimum, you are going to want to turn off:
com.ibm.ws.objectgrid.*
com.ibm.ws.xs.*
com.ibm.ws.xsspi.*
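Putting these recommendations together, a server.xml trace specification sketch (the component list is abbreviated and the chosen levels are illustrative) might look like:

```xml
<logging traceSpecification="*=info:
    com.ibm.ws.config.xml.internal.ConfigRefresher=fine:
    com.ibm.ia.runtime.SolutionProviderMgr=fine:
    com.ibm.ws.objectgrid.*=off:
    com.ibm.ws.xs.*=off:
    com.ibm.ws.xsspi.*=off" />
```

Entries later in the specification override earlier ones, so the noisy ObjectGrid packages can be silenced even under a broad default.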
From within the Explore entry we can get a list of the applications installed and perform work
against them including starting and stopping.
See also:
Knowledge Center Administering the Liberty profile using Admin Center 8.5.5
Replica shards are shards where data will be replicated. In the event of the "loss" of the primary
shard, one of the replica shards can be promoted to be the new primary.
Thinking back to the earlier mention of a Partition, we now see that a Partition can be defined as a
collection of Shards distributed over a collection of Grid Containers. For a particular partition, one
of the shards will be considered the primary shard while the others are replicas.
The following diagram illustrates an example of our story. Ignore the "numbers" of things; we can
have more than two JVMs, Grid Containers, Partitions and Shards. This is just an example.
Imagine we have a client application that wishes to retrieve a piece of data. The client would have
to know which partition contains the data and hence which shard contains the primary data and
hence which Grid Container a request should be sent to. That's a lot of knowledge. A component of
WXS called the Catalog Server maintains that information on behalf of the solution.
Having data managed by WXS doesn't have meaning unless something is going to access that data.
The something is termed a "Grid client". The Grid client contacts the Catalog Server and retrieves
from it data known as the "route table". This information allows the client to know which partitions
contain which data so that when a request is made to retrieve data, the client can direct the request
to the correct partition.
The Catalog Server is not a passive component merely telling clients where everything lives.
Instead, it is the Catalog Server that determines "good" placement for shards across all the Grid
Containers available to the Catalog Server. The Catalog Server uses policy rules defined by
administrators when making these decisions. It is also the Catalog Server that is responsible for
changes in the topology when changes are detected such as the loss of a Grid Container or the
arrival of a new Grid Container.
As of DSI v8.7, the embedded WebSphere eXtreme Scale is at v8.6.
See also:
Redbook WebSphere eXtreme Scale v8.6 Key Concepts and Usage Scenarios SG24-7683-01 - 2013
Client APIs
From an API perspective, there are a number of access mechanisms a client application can use to
access the Grid.
ObjectMap API
In this model, the grid appears as a Java Map with the ability to put and get objects. This manifests
itself as:
map.put(key, value)
map.get(key)
IBM DB2
There is a wealth of material on using IBM DB2 found in manuals, articles and the web, and we
will not try to replicate that here. However, in this section we will make notes on useful DB2
areas that may be of relevance to working with ODM DSI. These notes should not be considered
definitive on the subjects but merely examples of the areas as potentially used by ODM DSI.
Export the above as a JAR file. In this example we exported to db2test.jar. Next we followed
the instructions to import the JAR into DB2:
db2 connect to TESTDB user db2admin using db2admin
db2 "call sqlj.install_jar('file:C:/Projects/ODMCI/JAR Files/db2test.jar','TEST1')"
With this done, we now have a new stored procedure called "testp" which, when called, will
look up the class called "DB2TEST" contained in the Java package called "com.kolban" that is
located in the JAR with handle "TEST1" and call the method called "db2test".
When debugging Java procedures, we have not yet found where the Java console output
appears. The workaround has been to log to our own PrintWriter object.
See also:
developerWorks - Solve common problems with DB2 UDB Java stored procedures - 2005-10-27
You can't run the sqlj.install_jar command from a Data Studio environment.
To replace the JAR, use the command:
db2 "call sqlj.replace_jar('file:<path to jar>', '<JAR handle>')"
After replacing a JAR, if the signatures have changed, we may wish to ask DB2 to refresh its
classes:
db2 "call sqlj.refresh_classes()"
DB2 Triggers
The notion behind a trigger is that when a table is modified, we may wish to become aware of that
modification and perform some action. This allows us to write functions that are executed when an
external application modifies a table, without having to re-code or otherwise interfere with the
operation of that external application.
CREATE TRIGGER <Trigger Name>
AFTER INSERT ON <Table Name>
REFERENCING NEW AS <Variable Name>
FOR EACH ROW
<Statement>
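As a concrete sketch of the template above (the table and column names are invented for illustration), a trigger that records each newly inserted customer row in an audit table might be:

```sql
CREATE TRIGGER NEW_CUSTOMER_TRG
AFTER INSERT ON CUSTOMERS
REFERENCING NEW AS N
FOR EACH ROW
INSERT INTO CUSTOMER_AUDIT (ACCOUNT_NUMBER, AUDIT_TIME)
VALUES (N.ACCOUNT_NUMBER, CURRENT TIMESTAMP)
```

The REFERENCING NEW AS N clause makes the just-inserted row's columns addressable inside the trigger body.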
The call xmlelement(name "X", 'Y') will build the XML element <X>Y</X>. The first parameter is
the name that the element will use. If we wish an element to have a namespace prefix, we would
include that here. For example:
xmlelement(name "m:X", 'Y')
Calls to xmlelement may also be nested, so that an inner call building <A>B</A> produces a child
element inside the outer <X>...</X>.
See also:
developerWorks - Overview of DB2's XML Capabilities: An introduction to SQL/XML functions in DB2 UDB and the DB2 XML
Extender - 2003-11-20
IBM MQ
IBM's MQ product is an industrial-strength messaging and queuing platform including a runtime
engine and a rich set of APIs. MQ can act as the source and destination of ODM DSI events as an
alternative transport to a JMS provider.
Installation of MQ
Administering WebSphere MQ
An Eclipse based tool called "WebSphere MQ Explorer" is provided with MQ. This can be used to
perform a wide variety of administration tasks.
Disabling MQ Security
During testing, we may wish to disable MQ security checks. Open the properties of the queue
manager:
Another good tool for putting messages to MQ is called "rfhutil" which can be found here:
http://www-01.ibm.com/support/docview.wss?rs=171&uid=swg24000637
as part of the MQ IH03 SupportPac.
A package: This is the namespace in which other objects will exist.
Classes: This is the definition of a business object. There can be many class definitions. Do not
think of this as a Java class, even though it is tempting.
Attributes: Each class can contain attributes, where an attribute has a name and a data type.
Methods: Each class can contain methods, which are functions that can be called to return a
value. The methods can be supplied with parameters.
IlrObjectModel
The heart of this is a class called IlrObjectModel which is the in memory representation of the
BOM. An instance of this can be constructed by reading a .bom file through the
IlrJavaSerializer class.
From the IlrObjectModel we can retrieve classes:
IlrModelElement
A BOM model is made up from model elements. These are the lowest level of the core concepts.
From elements come all the higher level items.
IlrNamespace
A namespace defines a scope used to enclose other items.
The IlrNamespace inherits from IlrModelElement and hence has a name and other
attributes. Specific to IlrNamespace we have:
List getClasses() Obtain a list of all the classes belonging to this namespace.
IlrType
Methods include:
IlrClass
This interface represents a class: a collection of attributes and methods.
Since IlrClass inherits from IlrModelElement we can obtain the class's name and
namespace.
From the IlrClass we can work with attributes:
Iterator allMethods() Iterate all the methods in this class and in superclasses.
String getName() - For a class this is the class name with no namespace.
IlrAttribute
This interface represents an attribute in a class.
Methods include:
See also:
IlrClass
IlrDynamicActualValue
This class represents an actual value for a type.
Java
The Java programming language is well understood and documented thoroughly elsewhere. In this
section of the book, we are going to make notes about certain patterns that may be useful in an
ODM DSI environment.
If we look inside this JAR we find it contains items such as the following:
In this example, EV1 is an event, CONCEPT1 is a concept and ENTITY1 and ENTITY2 are
entities. These are the names that the developer chose and are not keywords. Each of these classes
represents an artifact that we could use in our custom Java solution.
Given a JAR file of this format, we can now examine its content to look for Java classes that
represent events and entities. If we examine each entry and ask its Java Class what interfaces each
entry implements, we find that:
We can thus use this knowledge to determine which are events, which are entities and which are
simply of no interest to us.
Now if we assume that we have identified an Event or Entity of interest to us, our next question
would be "What are the properties of this object?".
We can use the Java Bean introspection capabilities to answer that question.
Assume we have a Java object of type "Class" that represents one of these objects, we can obtain
its BeanInfo by using:
BeanInfo beanInfo = Introspector.getBeanInfo(myObjectClass);
From the BeanInfo, we can now ask for the set of properties contained within it using:
PropertyDescriptor propDesc[] = beanInfo.getPropertyDescriptors();
Now that we have the knowledge about what is contained within this model, how then should we
create and populate instances? The answer is not to attempt to instantiate these directly. Instead we
should ask the DSI environment to do so for us.
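Putting the introspection steps together, the following self-contained sketch lists a bean's property names; the Withdrawal bean here is an invented stand-in for a generated event or entity class:

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.List;

public class BeanProps {
    // A simple bean standing in for a generated model class (hypothetical).
    public static class Withdrawal {
        private String accountNumber;
        public String getAccountNumber() { return accountNumber; }
        public void setAccountNumber(String accountNumber) { this.accountNumber = accountNumber; }
    }

    // Return the property names discovered through Java Bean introspection.
    static List<String> propertyNames(Class<?> beanClass) {
        try {
            // Passing Object.class as the stop class excludes Object's "class" property.
            BeanInfo beanInfo = Introspector.getBeanInfo(beanClass, Object.class);
            List<String> names = new ArrayList<>();
            for (PropertyDescriptor pd : beanInfo.getPropertyDescriptors()) {
                names.add(pd.getName());
            }
            return names;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(propertyNames(Withdrawal.class)); // prints [accountNumber]
    }
}
```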
See also:
Java BeanInfo
Java PropertyDescriptor
Camel
Camel apps need to be linked with the following:
camel-core-*.jar
slf4j-api-*.jar
Here is a sample app that watches a directory for files and when one arrives, writes a copy in
another:
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

CamelContext context = new DefaultCamelContext();
context.addRoutes(new RouteBuilder() {
@Override
public void configure() throws Exception {
from("file:C:/Projects/ODMDSI/junk/camel/indir?noop=true"). //
to("file:C:/Projects/ODMDSI/junk/camel/outdir");
}
});
context.start();
Thread.sleep(10000);
context.stop();
System.out.println("Camel1 ending ...");
See also:
JavaDoc
Processor
Processor gives you complete Java-level control over working with the content of messages.
public class MyProcessor implements Processor {
public void process(Exchange exchange) throws Exception {
// do something...
}
}
See also:
Processor
Transform
Bean
The bean() mechanism allows us to use an arbitrary Java bean to perform the processing.
Enricher
In this pattern, the message is enriched from content drawn from elsewhere.
Data Formats
XMLJSON
Example:
XmlJsonDataFormat xmlJsonDataFormat = new XmlJsonDataFormat();
xmlJsonDataFormat.setSkipNamespaces(true);
xmlJsonDataFormat.setRemoveNamespacePrefixes(true);
from("file:C:/Projects/ODMDSI/junk/camel/indir?noop=true"). //
marshal(xmlJsonDataFormat). //
to("file:C:/Projects/ODMDSI/junk/camel/outdir");
The XMLJSON data format requires the following JARs:
camel-xmljson*.jar
json-lib*.jar
commons-lang
commons-beanutils
commons-collections
commons-logging
ezmorph
xom
See also:
JSON-lib
Camel components
Direct Component
This component is used to link a producer and consumer in different routes together.
See also:
Direct Component
File Component
See also:
File Component
JMS Component
The JMS Component can send or receive messages to JMS. The format of the component can be:
jms:[queue:|topic:]destinationName[?options]
For example:
jms:queue:myQueue
jms:topic:myTopic
In order to use JMS, the camel JAR for JMS must be added. This is:
camel-jms*.jar
JMS Component
Stream Component
XSLT Component
Camel as a Liberty EJB
If we think of Camel as an embeddable service that can be used to perform mediations, our next
puzzle is how would we leverage that with Liberty?
One way is to create a Singleton Session EJB. When Liberty starts, it will start a single instance of
that Session EJB which will contain the logic to register and start a Camel processing service.
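A sketch of that idea, combining the Singleton EJB pattern shown earlier with the Camel file route example (the package name and directory paths are illustrative), might look like:

```java
package camelstart;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Singleton;
import javax.ejb.Startup;

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

// A startup singleton that owns a Camel context for the life of the application.
@Singleton
@Startup
public class CamelStarter {
    private CamelContext context;

    @PostConstruct
    public void start() {
        try {
            context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() throws Exception {
                    // The route itself is illustrative; any mediation could go here.
                    from("file:/tmp/indir?noop=true").to("file:/tmp/outdir");
                }
            });
            context.start();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @PreDestroy
    public void stop() {
        try {
            context.stop();
        } catch (Exception e) {
            // best effort during application shutdown
        }
    }
}
```

Because the bean is @Singleton @Startup, the route runs for as long as the application is deployed, and is cleanly stopped when it is removed.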
Eclipse
Importing exported projects
If you are supplied a ZIP file containing exported projects from Insight Designer, you can import
those into your Eclipse workspace using the Eclipse importer.
This will open a dialog from which we can select the WebSphere Application Server V8.5 Liberty
profile:
On the next page, we will be asked to supply the path to the Liberty runtime. The path entered
should be <ODM DSI Root>/runtime/wlp:
The final page allows us to select the server instance we wish to use:
The culmination of these steps will be the appearance of an entry in the Servers view:
Server start/stop
Configuration
We can also flag that we wish the server to be started in "clean" mode the next time it is launched:
JARs Search: Search JAR files in the file system looking for classes.
TechPuzzles
Back in the early 2000s part of my job was to study some specific IBM product very deeply until I
became competent at it and then assist fellow IBMers to learn and use the product as well. My
thinking on that became one of writing down notes which then become books (you are reading an
example of that just now). In talking to colleagues I asked them about their experiences in learning
and the common response I heard was "unless we actually practice something, we forget what we
read in a few days or weeks". I completely agree with that notion and so do many others. To that
end a number of folks provide "tutorials" that are keyboard exercises where the student follows the
bouncing ball and enters exactly what the tutorial asks. These are great if one is super new at a
product, being hand-held through getting something working is indispensable. However I believe
that there are limitations to tutorials. Tutorials are extremely time consuming for their authors to
create and as such, tutorials can't be expected to cover that many areas. Next is that a student's
knowledge grows with time. After all, if he didn't improve after study and working tutorials then
there would be something wrong. As a student's knowledge increases, the value of tutorials
decreases. The student will follow the steps saying to himself "I know this already" before finally
getting to any new materials. And lastly, what is to me the biggest point of all: tutorials
inevitably get followed "parrot fashion". This means that a student can follow the instructions to
get something working without actually thinking. If the student doesn't think, I could argue that
there will be little retention of knowledge.
With these thoughts in mind, as I was sitting bored-mindless at a conference, I came up with an idea
that I called "TechPuzzles". The idea here is that a technical puzzle involving a product (DSI for
example) is posed and the student has to use their knowledge and skills to solve it. The author of
the puzzle flags it as requiring a certain level of skill in order for it to be solved within an hour
(walking away from a completed puzzle in an hour or less is an essential goal). Possible skill levels
would be:
Novice
Competent
Proficient
Expert
Master
A TechPuzzle would be a written puzzle which may be augmented with diagrams and code and/or
data assets. The reader of the puzzle should be able to take that description and then go forth and
attempt to solve it. This would engage the student in a far more interesting fashion.
However, there is much more to the TechPuzzle notion. As well as a puzzle being presented, each
TechPuzzle will also have a potential solution. This solution is a full description (but not tutorial)
of an answer to the TechPuzzle. It will include thinking as well as any necessary assets that would
allow a student to get the solution running. In addition to having a solution provided, a forum
thread accompanies each TechPuzzle where students can discuss amongst themselves questions and
answers related to that specific puzzle. This will include monitoring by the TechPuzzle author for
any questions.
The solution supplied with the TechPuzzle may not be the "best" solution and perhaps the students
or others can suggest improvements or new ways of thinking.
TechPuzzles need to be produced on a regular, frequent cadence such as once a week. A suggestion is
to publish a new TechPuzzle on a Friday morning but withhold the solution until the publication of
the next TechPuzzle one week later. This gives students who want to challenge themselves a week
to try and come up with a solution on their own knowing that there is no published solution to act as
a safety net. Of course, the student will always have a back catalog of puzzles to work with as
desired so needn't work with the latest for the week and end up stuck and frustrated.
For DSI, the entry stakes into DSI TechPuzzles are:
Description
Your bank has determined that if three or more ATM withdrawals against an account happen within
an hour, that is a good indication of potential fraud.
Your challenge is to model withdrawal events being transmitted from an ATM to the bank and the
detection of three or more events on a particular account within the space of an hour.
Solution
When we look at this puzzle, we will find that we need to track withdrawals against accounts. This
means we need to model the "account" entity as that will be the target of the withdrawal events. In
addition, we need to model the notion of the withdrawal event itself. When you study the model
definition, you may be surprised to see that neither the account nor the withdrawal contain any
significant data. The reason for that is that our story simply doesn't need anything further than the
notion that accounts exist and withdrawals happen.
Model Definition
an account is a business entity identified by an accountNumber.
a withdrawal is a business event.
a withdrawal has an accountNumber.
Rule Definition
when a withdrawal occurs
if
the number of withdrawals after 1 hours before now is more than 2
then
print "Fraud? - We have seen too many withdrawals for account #" + the accountNumber;
Test Script
We create an entity representing the account and then submit three events with different times to
represent three different events arriving.
var ConceptFactory = Java.type("tp_2015_01_30.ConceptFactory");
var ZonedDateTime = Java.type("org.threeten.bp.ZonedDateTime");
var LocalDateTime = Java.type("org.threeten.bp.LocalDateTime");
var ZoneId = Java.type("org.threeten.bp.ZoneId");
testDriver.deleteAllEntities("tp_2015_01_30.Account");
testDriver.resetSolutionState();
var conceptFactory = testDriver.getConceptFactory(ConceptFactory.class);
var myEntity = conceptFactory.createAccount("AN123");
testDriver.loadEntity(myEntity);
var myEvent1 = {
$class: "tp_2015_01_30.Withdrawal",
accountNumber: "AN123",
timestamp: ZonedDateTime.of(LocalDateTime.of(2015, 01, 10, 15, 0, 0), ZoneId.systemDefault())
};
var myEvent2 = {
$class: "tp_2015_01_30.Withdrawal",
accountNumber: "AN123",
timestamp: ZonedDateTime.of(LocalDateTime.of(2015, 01, 10, 15, 10, 0), ZoneId.systemDefault())
};
var myEvent3 = {
$class: "tp_2015_01_30.Withdrawal",
accountNumber: "AN123",
timestamp: ZonedDateTime.of(LocalDateTime.of(2015, 01, 10, 15, 20, 0), ZoneId.systemDefault())
};
testDriver.submitEvent(myEvent1);
testDriver.submitEvent(myEvent2);
testDriver.submitEvent(myEvent3);
print("End of script");
Description
When one builds a solution for DSI, it is important to be able to test it. There are a number of
excellent tools and utilities growing up around the DSI product but it is still important to understand
how to test using the out of the box capabilities. The core to testing is to be able to use the DSI
product supplied Java class called TestDriver. This puzzle will test your ability to use that
feature.
In this puzzle, your task is to model an entity called Stock which has a key attribute called `stock
number` with additional attributes of `quantity` (the count of items on hand) and `location` (where
in the warehouse the stock can be found).
We won't consider events in this story.
After having modeled the entity and deployed the solution, use the TestDriver API to create an
instance of an entity and use the REST API to validate that the entity was created.
Solution
The solution to this week's puzzle hinges on your ability to understand the Java class called
TestDriver.
The core documentation for this can be found in the Knowledge Center at:
The model definition is pretty straightforward and, as we see, the puzzle had no need for events in
order to complete. I encourage you to read each line of the Java code in the solution and for each of
the (only) five references to the TestDriver class and its object, read the corresponding JavaDoc
for the method and validate that you completely understand what it does.
Once you have created your entity by running your code, you next need to validate that it is indeed
present within the DSI runtime. The DSI REST API can perform that test for you:
https://localhost:9443/ibm/ia/rest/solutions/TechPuzzle_DSI___2015_02_06/entitytypes/tp_2015_02_06.Stock/entities
The result will be the XML Document representing the entity. For example, in Chrome it looks
like:
Java TestDriver
package tp_2015_02_06;
import com.ibm.ia.testdriver.TestDriver;
public class Test {
public static void main(String[] args) {
try {
TestDriver testDriver = new TestDriver();
testDriver.connect();
testDriver.deleteAllEntities();
ConceptFactory conceptFactory =
testDriver.getConceptFactory(tp_2015_02_06.ConceptFactory.class);
Stock stock = conceptFactory.createStock("ABC123");
stock.setQuantity(25);
stock.setLocation("Row F");
testDriver.loadEntity(stock);
System.out.println("Entity created!!");
} catch (Exception e) {
e.printStackTrace();
}
}
}
Description
You manage the operations of a trucking company. Your trucks periodically transmit their GPS
location. You have received complaints from some customers that their products are spoiled when
they arrive because the temperature in the interior of the truck was either too warm or too cold.
Your insurance premiums are already high and you want to reduce refunds and claims.
Speaking with your tech guys, they tell you they cannot add sensors to transmit temperature
information, but they do have a suggestion. If you know where a truck is, you can contact the
weather service and ask for the current external air temperature. Experience says that the air
temperature outside the truck can be assumed to be the same inside the truck.
A web service has been found that, given a latitude and longitude pair (a position), returns the
weather at that location. This includes the temperature.
Your challenge as a DSI solution designer is to build a DSI solution which detects when the
temperature of a particular truck is outside of its range when an event indicating its location arrives.
An example service that supplies weather data based on location can be found at:
Mashape Ultimate Weather
Solution
After studying the weather service, we find that it is exposed via REST with a request format of:
GET https://tehuano-ultimate-weather-v1.p.mashape.com/api/obs/{latitude}/{longitude}
This means that if we can submit such a request, we can get the data we want. With this in mind,
we ask ourselves ... how do we submit a REST request when a DSI event arrives? One way to
achieve this is to leverage a Java Agent implementation and make the REST call from within the
context of Java code.
Since we are making SSL calls to the server, we must also add the certificate for the target server
into our SSL key store.
The Web Service we are using is from the Mashape provider and requires that we get a free key to
be able to use their services. This key needs to be entered into the code.
What we exercised
JSON processing
Model Definitions
a truck ping: an incoming event that says we have been told the location of a truck.
a temperature issue is a business event.
a temperature issue has a truck id.
a temperature issue has an external temperature (numeric).
a temperature issue has a temperature issue reason.
A core part of our story is the Java code contained within our Java Agent. This code makes a REST
call to a weather service to obtain the temperature at the specified location. Once we know this, we
can ask if it is within the range of acceptable temperatures and, if not, emit an appropriate event.
package techpuzzle_dsi__20150213.techpuzle_dsi__20150213_java_agent_truck;
imports ...
public class TruckAgent extends EntityAgent<Entity> {
private final static String MASHAPE_KEY = "XXXXXX";
@Override
public void process(Event event) throws AgentException {
try {
Truck truck = (Truck) getBoundEntity();
if (truck == null) {
System.out.println("No such truck!");
return;
}
TruckPing truckPing = (TruckPing)event;
double coordinates[] = truckPing.getLocation().getCoordinates();
String result = send("https://tehuano-ultimate-weather-v1.p.mashape.com/api/obs/" +
coordinates[1] + "/" + coordinates[0]);
if (result == null) {
System.out.println("No weather service result.");
return;
}
JsonReader jsonReader = Json.createReader(IOUtils.toInputStream(result,
Charset.defaultCharset()));
JsonObject jo = jsonReader.readObject();
double temp = Double.parseDouble(jo.getString("temp_f"));
System.out.println("Temp is " + temp);
if (temp > truck.getMaximumTemperature() || temp < truck.getMinimumTemperature()) {
System.out.println("We have a temperature event!");
ConceptFactory conceptFactory = getConceptFactory(ConceptFactory.class);
TemperatureIssue temperatureIssue =
conceptFactory.createTemperatureIssue(ZonedDateTime.now());
Test Script
ConceptFactory = Java.type("tp_2015_02_13.ConceptFactory");
ZonedDateTime = Java.type("org.threeten.bp.ZonedDateTime");
IADebugReceiver = Java.type("com.ibm.ia.testdriver.IADebugReceiver");
Thread = Java.type("java.lang.Thread");
GeoSpatialService = Java.type("com.ibm.geolib.GeoSpatialService");
testDriver.submitEvent(truckPing);
print("Event sent!");
Description
When building a DSI solution we handle incoming events and also emit new events based on the
processing of those events. In this puzzle your goal will be to write a DSI solution that accepts an
incoming event and outputs a new event. But how do you know that the new emitted event is
actually fired and that its content is correct?
Your challenge is to use the IBM supplied TestDriver Java class to show that new outbound
events are published and show their content.
Solution
The TestDriver class of DSI provides the capability to register a callback function that can be
invoked when the DSI runtime publishes an event. This callback is passed all the pertinent
information about that event and we can use that information for debugging or other tasks.
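The actual TestDriver classes appear in the listing later in this puzzle; as a plain-Java illustration of the idea only (this is not the real TestDriver API), a driver that hands every published event to registered callbacks might be sketched as:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Generic sketch of the callback-registration pattern described above.
// Tests register a receiver; every published event is handed to each receiver.
public class EventPublisher {
    private final List<Consumer<String>> receivers = new ArrayList<>();

    // Register a callback to be invoked for every published event.
    public void addReceiver(Consumer<String> receiver) {
        receivers.add(receiver);
    }

    // The runtime would call this when it publishes an outbound event.
    public void publish(String event) {
        for (Consumer<String> receiver : receivers) {
            receiver.accept(event);
        }
    }
}
```

A test would register a receiver that records each event, submit an input event, and then assert on what was recorded.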
The documentation for this capability can be found in the Knowledge Center:
Our chosen model definitions may surprise you. They are merely one input event and one output
event. There are no entity definitions. That is because we are going to build a Java Agent
to act as the recipient of the incoming event and the publisher of the outgoing event. Java Agents
do not need a corresponding bound entity. Again, our solution is illustrative and exercises our
knowledge of writing test drivers; it is not necessarily representative of real-world business
solutions.
an Input Event is a business event.
an Input Event has a i1.
an Input Event has a i2.
an Input Event has a i3.
an Output Event is a business event.
an Output Event has a o1.
an Output Event has a o2.
an Output Event has a o3.
Agent Descriptor
'techpuzzle_dsi__20150220.techpuzzle__dsi__20150220__java_agent.MyAgent' is an agent,
processing events :
- Input Event
import org.threeten.bp.ZonedDateTime;
import tp_2015_02_20.ConceptFactory;
import tp_2015_02_20.InputEvent;
import tp_2015_02_20.OutputEvent;
import com.ibm.ia.agent.EntityAgent;
import com.ibm.ia.common.AgentException;
import com.ibm.ia.model.Entity;
import com.ibm.ia.model.Event;
The core of our solution is the development of a TestDriver application that registers a callback
listener for emitted events.
package tp_2015_02_20;
import java.util.Properties;
import org.threeten.bp.ZonedDateTime;
import com.ibm.ia.common.debug.DebugInfo;
import com.ibm.ia.model.Event;
import com.ibm.ia.testdriver.DebugReceiver;
import com.ibm.ia.testdriver.DriverProperties;
import com.ibm.ia.testdriver.TestDriver;
Description
Events noting pressure changes in a steam pipe are published whenever there is a change. If the
pressure passes a threshold for that pipe, we wish to emit an alert event. However, we do not want
to keep sending new alerts after the first one until the pressure has dropped back below the threshold,
at which point the alert "resets" and future events that show we are above the threshold will once
more cause alerts.
How can we design this?
Solution
There are potentially many ways to solve this puzzle. One way would be, on receipt of a pressure
too high event, to look at the preceding event. If that preceding event would not have caused an
alert, then we are good to send a new event.
However, that was not the technique that was chosen. Instead, we keep an "alerted"
state value associated with the pipe entity. We modeled it as a boolean, with true meaning we are
in an alerted state and false meaning we are not. When a pressure too high event arrives,
we emit a new alert only if we are not already in the alerted state. We also set the pipe's entity state
to alerted. When an event arrives that shows an ok pressure and we are in the alerted state, we reset
the state.
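The latch just described can be sketched in plain Java (class and member names here are illustrative; the real solution keeps this state on the pipe entity and expresses the logic in the two rules shown below):

```java
// Minimal sketch of the "alerted" latch: emit an alert only on the first
// threshold crossing, and re-arm once the pressure drops back below it.
public class PipeLatch {
    private boolean alerted = false;
    private final double alertThreshold;

    public PipeLatch(double alertThreshold) {
        this.alertThreshold = alertThreshold;
    }

    /** Returns true only when a new alert should be emitted. */
    public boolean onPressureChange(double pressure) {
        if (!alerted && pressure >= alertThreshold) {
            alerted = true;  // enter the alerted state; suppress further alerts
            return true;     // first crossing above the threshold: emit an alert
        }
        if (alerted && pressure < alertThreshold) {
            alerted = false; // pressure recovered; future crossings alert again
        }
        return false;
    }
}
```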
Model Definitions
The pipe is an entity that contains its alerted state. We also define two events. One is an incoming
event (pressure change) and the other is an outgoing event (alert).
Agent Descriptor
The following is a Rule Agent rule that associates a pipe entity with a pressure change event.
when a pressure change occurs
if
it is not true that 'the pipe' is alerted and
the pressure value of this pressure change is at least the alert threshold of 'the pipe'
then
make it true that 'the pipe' is alerted;
emit a new alert where
the pipe id is the pipe id of 'the pipe' ,
the reason is "Pressure too high";
The following is a Rule Agent rule that associates a pipe entity with a pressure change event. As we
can see from this puzzle, we can have multiple rules associated with the same entity/event pair. Can
you understand why we need two rules?
when a pressure change occurs
if
'the pipe' is alerted and
the pressure value of this pressure change is less than the alert threshold of 'the pipe'
then
make it false that 'the pipe' is alerted;
print "Pipe reset" ;
The following is a JavaScript script used with DSI Toolkit for testing. It creates an instance of a
pipe entity against which we can submit events.
var ConceptFactory = Java.type("tp_2015_02_27.ConceptFactory");
testDriver.deleteAllEntities();
var conceptFactory = testDriver.getConceptFactory(ConceptFactory.class);
var pipeEntity = conceptFactory.createPipe("pipe#1");
pipeEntity.setAlerted(false);
pipeEntity.setAlertThreshold(100.0);
testDriver.loadEntity(pipeEntity);
print("Pipe entity created");
The following is a JavaScript script used with DSI Toolkit for testing. It creates an instance of an
event associated with a pipe.
var ZonedDateTime = Java.type("org.threeten.bp.ZonedDateTime");
var pressureChangeEvent = {
$class: "tp_2015_02_27.PressureChange",
pipeId: "pipe#1",
pressureValue: 199.0,
timestamp: ZonedDateTime.now()
};
testDriver.submitEvent(pressureChangeEvent);
print("Event submitted!");
Description
DSI can emit events. Those events can be externalized to an outside system via their transmission
over HTTP. This puzzle asks you to build a DSI solution such that when an incoming event arrives,
a new outbound event is emitted. The outbound event is to be transmitted over HTTP. In order to
validate that the event is actually sent, we will want to transmit the event to something that can
show the receipt of data over HTTP. It is suggested that the open source project called Mockey be
used for testing. The format and content of events are not important to this puzzle; what matters is
that when an event is emitted, it is correctly transmitted over HTTP.
Solution
We have seen in previous puzzles how to receive an event and emit a new one as a result.
Hopefully we are getting the hang of doing that. What is perhaps new here is configuring DSI to
physically transmit a message corresponding to the emitted event. We can achieve that through a
Connectivity Definition. When we make the definition, we have to specify where the message will
be sent, so before we make that definition, we will examine what is needed to set up an endpoint.
The open source project called Mockey is a listener for incoming HTTP requests. It is full of riches
that we aren't going to use for this puzzle, so the chances are high that we will see more details than
we actually need.
First we run Mockey from a DOS command window with:
java -jar Mockey.jar
That will start Mockey and open a browser window ready for us to set its configuration.
Click on the Services tab and select "Create a Service". In the page that appears, we need to
provide values for "Service Name" and "Mock Service URL". Once entered, click the
"Create new service" button at the bottom of the page:
Make a note of the URL; this will be the URL to which the event emitted from DSI will be
targeted.
After creating the service definition, you have one more task: creating a scenario.
First, click the button to flag the service as "Static" and then click the link to create a scenario:
When you deploy the solution, remember to also deploy the connectivity definitions. Now when
you submit an event to DSI, you should see an entry in the Mockey history corresponding to the
emitted event from DSI.
As an alternative to Mockey, the DSI Toolbox can also be used to listen for and display DSI emitted
events over HTTP. A video tutorial illustrating this is available here.
Business Model Definitions
an EVENT1 is a business event.
an EVENT1 has a x.
an EVENT1 has a y.
an EVENT1 has a z.
an EVENT2 is a business event.
an EVENT2 has a p.
an EVENT2 has a q.
an EVENT2 has a r.
Agent Descriptor
'techpuzzle_dsi__20150306.techpuzzle_dsi__20150306__java_agent__ja1.JA1' is an agent,
processing events :
- EVENT1
Java Agent
package techpuzzle_dsi__20150306.techpuzzle_dsi__20150306__java_agent__ja1;
import org.threeten.bp.ZonedDateTime;
import tp_2015_03_06.ConceptFactory;
import tp_2015_03_06.EVENT1;
import tp_2015_03_06.EVENT2;
import com.ibm.ia.agent.EntityAgent;
import com.ibm.ia.common.AgentException;
import com.ibm.ia.model.Entity;
import com.ibm.ia.model.Event;
Mockey Configuration
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<mockservice version="1.0" xml:lang="en-US">
<proxy_settings proxy_enabled="false" proxy_url=""/>
<service default_real_url_index="0" default_scenario_id="1" description=""
error_scenario_id="null" hang_time="0" name="MyService" request_inspector_name="null"
service_response_type="1" tag="" url="MyService">
<request_inspector_json_rules enable_flag="false"/>
<response_schema enable_flag="false"/>
<scenario http_method_type="" http_resp_status_code="200" id="1" name="MyScenario" tag="">
<scenario_match scenario_match_evaluation_rules_flag="false"/>
<scenario_response/>
<scenario_response_header/>
</scenario>
</service>
</mockservice>
Description
DSI is not meant to be the system of record for permanent data. However, DSI can load data from
external sources.
Imagine that we have received an event that a customer has placed an item in their eCommerce
shopping basket and we wish to offer them additional products. We have a database table of
information on our customers that will help us make such decisions.
The table contains columns:
CUSTOMERID
NAME
AGE
GENDER
ZIPCODE
If the "add to cart" event contains a property that identifies the customer id, how can we initialize an
entity with the relevant details, assuming that we don't already have a DSI entity in
memory?
Solution
As is always the case, there is not necessarily just one possible solution and this puzzle is no
exception. We will imagine that an incoming event carries with it a property that can be used as a
key for an entity. We will also assume that the entity is not already present within the DSI runtime.
This means that we need to create such an entity when the event arrives. DSI provides exactly that
kind of capability through the technology known as an Extension Project. An Extension Project is a
Java project which is added to the DSI solution. Included within the Extension Project will be an
instance of an Entity Initialization Extension.
This will be a Java class which is annotated with an annotation of the form:
@EntityInitializerDescriptor(entityType = <EntityType Class>)
What this declares is that the class being defined is an Entity initializer for a specific type of Entity.
The code contained within this class is then responsible for populating the entity given the incoming
event. How the Java code chooses to populate the Entity is up to the designer of the class. In our
puzzle, we have a database table that contains rows of data that correspond to the content of the
Entity we would like to have. From this, it then seems that what we should do is use the key
contained in the incoming event as a database retrieval key on the table.
Within Java, there are many ways to retrieve data from a database. Common amongst these is the
technology known as JDBC. While JDBC is still extremely powerful and rich, it has been
overshadowed by other technologies within the Java environment, specifically the technology
known as the Java Persistence API (JPA). Through JPA, we achieve Object Relational
Mapping (ORM) at a very high level. ORM is the notion that from relational data contained
in a database, we can construct an instance of an object. Conversely, should we need to, if we have
an object that contains data, we can map that back to data contained in a database.
Putting it even more simply, we can create a Java class that looks similar to:
public class CustomerRecord {
private String customerId;
private String name;
private int age;
private String gender;
private String zip;
}
and simply ask JPA to populate an instance of this object from the corresponding row. In fact, we
can actually perform that request in one single Java statement.
When I sat down to write this puzzle, I already knew JDBC and knew nothing of JPA. I studied
books, papers, manuals and web sites on JPA, and especially how JPA behaves in an OSGi Liberty
environment. I found it confusing, with what appeared to be a lot of parts. At first I couldn't
understand how this was considered superior to JDBC. However, once I had sufficient
skills and knowledge under my belt, there came a point where I said "I get it!!". Setting up
JPA for DSI requires knowledge of JPA, OSGi, transactions, JTA, JDBC, Liberty configuration and
more. For the novice, it will be bewildering. However, as one's skills in these areas grow, there
comes a point where the pieces simply "snap" into place and it all comes into focus. JPA is never
going to be for the business user, and it is unlikely to be for the novice Java programmer either.
However, if someone considers themselves an enterprise Java programmer or architect, then I
would argue that competence in JPA is essential.
An example that is similar and uses JDBC can be found in the IBM Knowledge Center for DSI in
an article called "Creating data provider extensions".
At a high level, the architecture of our chosen solution looks as follows:
Initially, an incoming DSI event arrives. DSI sees this event and realizes that it has no
corresponding entity among its existing entities. It then decides that it needs to create a new entity
and calls the solution-defined Entity Initializer Java code to build the new entity. This code then
calls a separate Java module that we will also write. We call this Java module the Data Accessor; it
encapsulates the access to back-end data. Since we are running in an OSGi environment, we can
take advantage of all the power of OSGi, so the module will be designed as an OSGi Bundle. Since
we also have JPA at our disposal, we will leverage JPA to map the data in the tables that we need to
a POJO Java object that will be populated within the Data Accessor. The Data Accessor will then
return the POJO to the Entity Initializer, which will complete the task of building the final entity.
There is nothing that says that this is the mandatory architecture; in fact, we could have "lumped"
all the logic into just the Entity Initializer. However, I believe that the decomposition provides
for better design and looser coupling and, besides that, it provides an excellent framework for
knowledge building in a variety of different disciplines.
Model Definition
a 'customer details' is a business entity identified by a customer id.
a 'customer details' has a name.
a 'customer details' has an age (integer).
a 'customer details' has a gender.
a 'customer details' has a zip.
This defines an entity type called 'customer details' that represents our entity. In addition, a simple
event called 'cart creation' is defined which passes in a 'customer id'. Our plan now is that when a
'cart creation' event arrives, we wish to create a 'customer details' entity populated from the
database.
To cause an Entity Initializer to be called, we add the following into the BMD statements
definitions:
a customer details is initialized from a cart creation , where this customer details comes from the
customer id of this cart creation .
Extensions Project
import com.ibm.ia.common.ComponentException;
import com.ibm.ia.extension.EntityInitializer;
import com.ibm.ia.extension.annotations.EntityInitializerDescriptor;
import com.ibm.ia.model.Event;
@EntityInitializerDescriptor(entityType = CustomerDetails.class)
public class EntityInit extends EntityInitializer<CustomerDetails> {
private DataAccessor dataAccessor;
@Override
public CustomerDetails createEntityFromEvent(Event event) throws ComponentException {
CustomerDetails entity = super.createEntityFromEvent(event);
return entity;
}
@Override
public void initializeEntity(CustomerDetails entity) throws ComponentException {
super.initializeEntity(entity);
System.out.println("EntityInit: initializeEntity called");
CustomerRecord customerRecord = dataAccessor.read(entity.getCustomerId());
entity.setAge(customerRecord.getAge());
entity.setGender(customerRecord.getGender());
entity.setName(customerRecord.getName());
entity.setZip(customerRecord.getZip());
Most of the class was created for us through the Eclipse wizard; however, there are two areas that
stand out. The first is the creation of a method called "setDataAccessor". This is a Java bean
setter that takes an object of type DataAccessor. We will talk about this object in detail shortly,
but for now understand that an instance of this class is responsible for reading data from a database.
The second stand-out is the use of the dataAccessor bean property in the
initializeEntity method. It is there that we request the data from the database for use in
populating the entity.
The style of programming here is known as dependency injection. We have not explicitly requested
the creation of a DataAccessor object; instead, it has been injected into our class. The question
now becomes: "Who caused the injection of the DataAccessor?"
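Before answering that, the shape of setter injection itself can be shown with a minimal, self-contained sketch (the DataAccessor interface here is a simplified stand-in for the one defined later in this puzzle, and the lambda stands in for the wiring the blueprint container performs):

```java
// Minimal illustration of setter-based dependency injection, the style the
// blueprint container uses. All types here are simplified stand-ins.
public class SetterInjectionDemo {

    // Stand-in for the real DataAccessor service interface.
    interface DataAccessor {
        String read(String id);
    }

    static class EntityInit {
        private DataAccessor dataAccessor;

        // The container calls this bean setter to inject the service reference.
        public void setDataAccessor(DataAccessor dataAccessor) {
            this.dataAccessor = dataAccessor;
        }

        // Uses whatever accessor was injected; it never constructs one itself.
        public String initialize(String key) {
            return dataAccessor.read(key);
        }
    }

    public static void main(String[] args) {
        EntityInit init = new EntityInit();
        // In the real solution the blueprint container performs this wiring.
        init.setDataAccessor(id -> "record-for-" + id);
        System.out.println(init.initialize("C1"));
    }
}
```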
We modified the blueprint.xml for the Extension project. It now looks as follows:
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
<reference id="ref1" interface="tp_2015_03_13.data.DataAccessor">
</reference>
<bean class="tp_2015_03_13.extension.EntityInit" id="EnitityInitBean">
<property name="dataAccessor" ref="ref1"/>
</bean>
<service id="EntityInitService" interface="com.ibm.ia.extension.spi.EntityInitializerService"
ref="EnitityInitBean">
<service-properties>
<entry key="solution_name">
<value type="java.lang.String">TechPuzzle_DSI___2015_03_13</value>
</entry>
<entry key="solution_version">
<value type="java.lang.String">TechPuzzle_DSI___2015_03_13-0.0</value>
</entry>
</service-properties>
</service>
</blueprint>
What this does is define a service reference (ref1) which basically says "find me a service that
returns a DataAccessor object instance". In the bean definition, we then add:
<property name="dataAccessor" ref="ref1"/>
which says: the bean (the EntityInit class) has a property called "dataAccessor"; set the value
of that property to be the object returned from calling the service that returns a DataAccessor.
Cool huh!!
Although this article shows raw XML, please realize that when working through Eclipse, there
are high-level wizards and panels to make these linkages for us.
At this point in our story, we have finished with the DSI side of the house. We now have a DSI
solution which, when an event arrives, causes the entity initializer to be fired which calls an
instance of dataAccessor to get data and set it into the entity. If we already had the magic of
dataAccessor, we would be done. What remains now is to talk about how dataAccessor
comes into existence.
The DataAccessor is an OSGi JPA bundle. It is formed from three Java classes. The first is
merely the interface we wish to expose:
package tp_2015_03_13.data;
public interface DataAccessor {
public CustomerRecord read(String id);
}
That is pretty simple. It has one method called read that returns a CustomerRecord. The
CustomerRecord is our POJO that will be retrieved from the database. It looks like:
package tp_2015_03_13.data;
import javax.persistence.Entity;
import javax.persistence.Id;
@Entity
public class CustomerRecord {
@Id
private String customerId;
private String name;
private int age;
private String gender;
private String zip;
public String getCustomerId() {
return customerId;
}
public void setCustomerId(String customerId) {
this.customerId = customerId;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public int getAge() {
return age;
}
public void setAge(int age) {
this.age = age;
}
public String getGender() {
return gender;
}
public void setGender(String gender) {
this.gender = gender;
}
public String getZip() {
return zip;
}
public void setZip(String zip) {
this.zip = zip;
}
@Override
public String toString() {
return "customerId=" + getCustomerId() + //
", name=" + getName() + //
", gender=" + getGender() + //
", age=" + getAge() + //
", zip=" + getZip();
}
}
The only slightly interesting parts in it are the annotations. The @Entity says that this is a JPA
mapped bean and the @Id defines which property of the bean is the key in the database.
And finally, the DataAccessor implementation itself:
package tp_2015_03_13.data.impl;
import javax.persistence.EntityManager;
import tp_2015_03_13.data.CustomerRecord;
import tp_2015_03_13.data.DataAccessor;
public class DataAccessor_impl implements DataAccessor {
private EntityManager entityManager;
public void setEntityManager(EntityManager entityManager) {
System.out.println("DataAccessor - setEntityManager called: " + entityManager);
this.entityManager = entityManager;
}
@Override
public CustomerRecord read(String id) {
System.out.println(">> read " + id);
CustomerRecord customerRecord = entityManager.find(CustomerRecord.class, id);
if (customerRecord != null) {
System.out.println(customerRecord);
}
System.out.println("<< read");
return customerRecord;
}
}
This is basic JPA usage. We ask the JPA entity manager to get us some data from the database and
return our populated POJO. Note the power of JPA here: in one method call, we retrieved all the
data that we need, as an object, all ready to use.
The OSGi blueprint.xml for this bundle looks like:
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
xmlns:bptx="http://aries.apache.org/xmlns/transactions/v1.0.0"
xmlns:jpa="http://aries.apache.org/xmlns/jpa/v1.0.0">
<bean class="tp_2015_03_13.data.impl.DataAccessor_impl" id="DataAccessor_Impl">
<jpa:context property="entityManager" type="TRANSACTION"
unitname="MyPersistenceUnit" />
<bptx:transaction method="*" value="Required" />
</bean>
<service ref="DataAccessor_Impl" id="DataAccessor_ImplService"
interface="tp_2015_03_13.data.DataAccessor">
</service>
</blueprint>
A keen-eyed reader will realize that this blueprint exposes a service that provides a
DataAccessor, and if we think back, we will find that the DSI Entity Initializer wished to find a
reference to a service that provided exactly this.
Table DDL
The DDL for the table used in the solution looks like:
CREATE TABLE "DB2ADMIN"."CUSTOMERRECORD" (
"CUSTOMERID" VARCHAR(80) NOT NULL,
"AGE" INTEGER,
"GENDER" VARCHAR(20),
"NAME" VARCHAR(80),
"ZIP" VARCHAR(80)
)
DATA CAPTURE NONE
IN "USERSPACE1"
COMPRESS NO;
ALTER TABLE "DB2ADMIN"."CUSTOMERRECORD" ADD CONSTRAINT "SQL150222142731180" PRIMARY KEY
("CUSTOMERID");
Description
You travel a lot on business and choose Good Quality Airlines (GQA) as your favorite carrier. You
have racked up a lot of status with them and have reached the level of "Super Duper". For the last
three months your boss has asked you to study a new vendor product that provides complex event
processing and you haven't had the opportunity to travel at all. Today you received a phone call
from GQA asking if "everything was ok" and "was there a problem with their service". After the
call you realized that this was a perfect example of temporal event processing.
Can you model the initiation of a call to a frequent flyer under these circumstances using DSI?
Solution
Our puzzle boils down to detecting those instances where it has been some period of time (we will
choose 90 days) since a customer last took a flight with our airline. DSI has the ability to schedule
event processing forward to a time in the future after an event arrives. Imagine now that an event
arrives at DSI each time a passenger flies. For each of those events, if we say that we want to
handle it 90 days in the future, we can ask (at that time) "was this the last flight the customer
took?" If the answer is yes, then there have been no additional flights in the last 90 days and we
have a match.
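The deferred check itself is just a date comparison; a sketch in plain Java (names illustrative, the real solution expresses this in the rules below):

```java
import java.time.LocalDate;

// Sketch of the check performed 90 days after a flight event: if the flyer's
// recorded last-flight date still equals the date carried by this event, no
// newer flight has arrived in the meantime and we should call the customer.
public class LapsedFlyerCheck {
    public static boolean shouldCall(LocalDate lastFlightDate, LocalDate eventFlightDate) {
        return lastFlightDate.equals(eventFlightDate);
    }
}
```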
Model Definition
Our data model maintains an entity for each flyer, which includes the date of the last flight taken.
A 'flight taken' event indicates a flight by a flyer.
a flyer is a business entity identified by an id.
a flyer has a status.
a flyer has a 'last flight date' (date).
a flight taken is a business event.
a flight taken has a flyer id.
a flight taken has a flight id.
Agent descriptor
This rule executes immediately upon the receipt of a flight taken event. It updates the flyer's last
flight date attribute with the date of the flight in the event.
when a flight taken occurs
definitions
set flightDate to a simple date from the timestamp ;
then
set the last flight date of 'the flyer' to flightDate ;
This rule executes 90 days after receipt. It asks if there has been another flight since the one being
processed:
when a flight taken has occurred 90 days ago
definitions
set flightDate to a simple date from the timestamp ;
if
the last flight date of 'the flyer' is at the same time as flightDate
then
print "Missing - Call the customer: " + the last flight date of 'the flyer' ;
Description
We have a temperature sensor that sends in the temperature of an oven every second. We care if the
oven's temperature is too high or too low. However, the oven's temperature can fluctuate
during normal operation, and if it transiently went too high or low, that would not be a
problem. What we care about instead is the average temperature over the last minute. If that were
too high or too low, then we may have a problem. How can we model such a story in DSI?
Solution
DSI provides aggregation capabilities to calculate minimums, maximums and averages over periods
of time. In our story we are interested in the average temperature over the last minute. In a rule
definition, we can define an expression that calculates an average of a value from events over a
preceding window. Having calculated the average temperature over the last minute, we can then
determine if we are in range.
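DSI expresses this aggregation declaratively; the underlying idea of a one-minute sliding-window average can be sketched in plain Java (all names here are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sliding-window average: keep only the readings from the last minute and
// average them. Each sample is stored as {timestampMillis, temperature}.
public class WindowAverage {
    private static final long WINDOW_MILLIS = 60_000;
    private final Deque<double[]> samples = new ArrayDeque<>();

    // Record a reading and return the average over the trailing one minute.
    public double add(long timeMillis, double temperature) {
        samples.addLast(new double[] { timeMillis, temperature });
        // Evict readings that have fallen out of the one-minute window.
        while (samples.peekFirst()[0] <= timeMillis - WINDOW_MILLIS) {
            samples.removeFirst();
        }
        double sum = 0;
        for (double[] sample : samples) {
            sum += sample[1];
        }
        return sum / samples.size();
    }
}
```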
Model Definition
a refrigerator is a business entity identified by an id .
a refrigerator has a current temperature ( numeric ) .
a temperature value is a business event with
an id,
a current temperature ( numeric ) .
Description
In our building we have doors with sensors attached to them that send an event each time a door
is opened. Our challenge is to record how many times a door opens in a period of time (say
every 30 seconds for testing purposes). After each 30-second interval, we want to emit a new event
which identifies a door and the count of opens in that 30-second period.
Solution
We model an entity to represent a door. Each door has a unique door identifier and an integer count
of how many times it has opened. When an "open" event arrives, we increment the corresponding
count for the door in question. This is all straightforward DSI activity. What makes this puzzle
more interesting is the notion of scheduled rule execution that is not related to the arrival of a new
event. If we build a rule of the form shown below under "Rule scheduled", then the rule is executed
repeatedly when the time expression becomes true.
Model Definition
a door is a business entity identified by an id.
a door has an open count (integer).
an open is a business event.
an open has a door id.
Rule open
when an open occurs
then
set the open count of 'the door' to the open count of 'the door' + 1 ;
Rule scheduled
if now is in second 0 or now is in second 30
then
print "For door: " + the id of 'the door' + ", the number of opens was: " + the open count of
'the door' ;
set the open count of 'the door' to 0 ;
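In plain Java, the pair of rules above amounts to a counter that is read and reset on a schedule (an explicit method call here stands in for DSI's time expression; names are illustrative):

```java
// Count door opens, then report and reset on a schedule. A scheduler would
// invoke reportAndReset() every 30 seconds, mirroring the "Rule scheduled".
public class DoorCounter {
    private int openCount = 0;

    // Mirrors the "Rule open": each open event increments the count.
    public void onOpen() {
        openCount++;
    }

    /** Returns the count for the elapsed interval and resets it to zero. */
    public int reportAndReset() {
        int count = openCount;
        openCount = 0;
        return count;
    }
}
```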
Description
This is a real-world story from the banking industry. In this story, we have a bank at which loans
can be requested. Staff members at the bank who have the job role of "underwriter" review the
loans and flag them as either "approved" or "declined".
Here are some numbers of underwriter approvals and declines over a period of time.
Again, not so much to see. Now one last chart of the same data:
Aha!!! Now we see something interesting: underwriter u4 is approving over 70% of his loan
requests, which is much higher than the others!!
Imagine that we receive a stream of events which name an underwriter and whether they
approved or declined a loan. Our puzzle this week is to detect when an underwriter approves loans
more than 10% more often than the average underwriter does.
Solution
We model our solution with an incoming event called "loan outcome" that carries with it:
underwriter id
outcome
Next we model an entity called "underwriter" that is keyed of an "underwriter id". The only other
property for this entity is "approval statistic". This is the key to the whole story. The "approval
statistic" is the ratio of approved loans to total processed loans performed by the underwriter. For
example, if an underwriter approves 7 loans and denies 4 loans then the approval statistic will be:
7/(7+4) = 7/11 = 0.64
The higher this value, the greater the proportion of approvals to denials.
Since every underwriter has an approval statistic and there are a known number of underwriters, we
can thus calculate the average approval statistic over all our underwriters. DSI can do this with a
global entity aggregate.
Now, when a new loan outcome event arrives, we can calculate the new approval statistic for the
underwriter associated with that event and ask the question "Is this underwriter's
approval statistic 10% higher than the average approval statistic?".
Model definition
a loan outcome is a business event.
a loan outcome has an underwriter id.
a loan outcome has an outcome.
an underwriter is a business entity identified by an underwriter id.
an underwriter has an approval statistic (numeric).
Global aggregate
define 'average underwriter statistic' as the average approval statistic of all underwriters ,
defaulting to 1.0 if there are less than 3 underwriters ,
evaluated at intervals of 15 seconds
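The aggregate's defaulting behavior and the puzzle's comparison can be sketched in Python. The function names and the 10% margin follow the puzzle statement; in DSI the average would come from the global aggregate above rather than a list:

```python
def average_statistic(stats, default=1.0, minimum=3):
    # mirrors the global aggregate: fall back to the default
    # when there are fewer than `minimum` underwriters
    if len(stats) < minimum:
        return default
    return sum(stats) / len(stats)

def approves_too_often(statistic, average, margin=0.10):
    # the puzzle's test: more than `margin` above the average
    return statistic > average * (1 + margin)

avg = average_statistic([0.40, 0.55, 0.45, 0.72])
print(round(avg, 2))                  # 0.53
print(approves_too_often(0.72, avg))  # True: 0.72 > 0.53 * 1.1
```

The default of 1.0 keeps the rule quiet until enough underwriters exist for the average to be meaningful.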
Worked Examples
They say a picture is worth a thousand words, and sometimes seeing a fully worked example of
DSI can be just as illustrative. In this section we will describe some "puzzles" and how we went
about solving them. As our skills grow, we may come back to these puzzles and think of better
approaches, or even realize that the solutions presented are simply "wrong" and explain why that is the
case.
an Employee is a business entity identified by an employee id.
an Employee has a name.
an Employee has a salary (numeric).
an Employee has a level.
This says that an employee will have an "employee id", which is their company serial number; a
name (eg. Bob Jones); an annual salary (eg. $50000); and a level within the company (eg. "A", "B",
"C" etc.). Obviously there can be much more than this, but for now this is what we will concentrate
upon.
Now let us consider possible events that affect these models. The first is the "hire" event. This is
when a new employee is hired and will serve as the constructor for the entity.
Our hire event looks like:
hire(employee id, name, salary, level)
a hire is a business event.
a hire has an employee id.
a hire has a name.
a hire has a salary (numeric).
a hire has a level.
Now, how will an instance of an Employee entity be created? Do we need a rule? Here we can use
the BMD statements to say how an instance can be initialized:
an Employee is initialized from a hire,
where this Employee comes from the employee id of this hire :
- set the name of this Employee to the name of this hire
- set the salary of this Employee to the salary of this hire
- set the level of this Employee to the level of this hire
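The hire event is effectively acting as a constructor for the Employee entity. A Python analogue of that initialization (the class names and `from_hire` are ours, not DSI API):

```python
from dataclasses import dataclass

@dataclass
class Hire:  # the incoming business event
    employee_id: str
    name: str
    salary: float
    level: str

@dataclass
class Employee:  # the business entity
    employee_id: str
    name: str
    salary: float
    level: str

    @classmethod
    def from_hire(cls, hire: Hire) -> "Employee":
        # the entity key comes from the employee id of the hire;
        # the remaining properties are copied from the event
        return cls(hire.employee_id, hire.name, hire.salary, hire.level)

bob = Employee.from_hire(Hire("e42", "Bob Jones", 50000.0, "B"))
print(bob.name)  # Bob Jones
```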
where the rule associates "salary increase" events with "Employee" entities.
Again, this is all pretty straightforward. Now things get interesting. Here is a new story that gives
us pause. Let us assume that from time to time our company has special events. For
example, on the CEO's birthday, everyone who is at level "C" gets a $100 salary increase. Our first
thinking on this might be an event that looks like:
levelIncrease(level, amount)
which should be understood to mean that when submitted, all employees of a certain level have
their salary increased by a certain amount. However, how should we implement this?
The solution we came up with was to introduce a new concept and that is the idea of the
"Company". The company is a new type of entity that is composed of Employees.
We modeled this as:
a Company is a business entity identified by a 'name'.
a Company is related to some Employees.
The way to read this is that a Company has a name and has a set of employees. Simple so far.
Initially, when the company is created, it has no employees. Since we create employee instances
through "hire" events, we need to also cause the addition of that new employee into the list of
employees associated with the company. We can do this by modifying our Employee entity
constructor BMD statement to now read:
an Employee is initialized from a hire,
where this Employee comes from the employee id of this hire :
- set the name of this Employee to the name of this hire
- set the salary of this Employee to the salary of this hire
- set the level of this Employee to the level of this hire
- emit a new onboard where
the Employee is this Employee ,
the company name is "ibm" .
This uses a new type of event called an "onboard" which is defined as:
an onboard is a business event.
an onboard has a company name.
an onboard is related to an Employee.
Finally, a rule associates level increase events with a Company entity. The logic of this rule says "Find all
the employees of a given band and, for each of those employees, submit a salary increase event".
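The fan-out logic of that rule can be sketched in Python. The classes and the event tuples here are our own illustration of the idea, not DSI API:

```python
from dataclasses import dataclass, field

@dataclass
class Employee:
    employee_id: str
    level: str
    salary: float

@dataclass
class Company:
    name: str
    employees: list = field(default_factory=list)

def level_increase(company: Company, level: str, amount: float):
    # find all employees of the given band and, for each one,
    # emit a salary increase event
    return [("salary increase", e.employee_id, amount)
            for e in company.employees if e.level == level]

ibm = Company("ibm", [
    Employee("e1", "C", 50000.0),
    Employee("e2", "B", 60000.0),
    Employee("e3", "C", 52000.0),
])
events = level_increase(ibm, "C", 100.0)
print(len(events))  # 2 — one event per level "C" employee
```

The cost concern raised below is visible even in this sketch: the Company entity holds one reference per employee, so the fan-out is linear in company size.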
It is logical and elegant ... but is it "good"? That is still an open question. It is not yet clear whether
this is a good practice or an anti-pattern. We will be maintaining a Company entity which
could have thousands of references to Employees ... one per employee in the company.
It may be that modeling Employees as entities is not a good use of DSI ... but let us hope that this
example will serve at least as an illustration of rule language building.
Experiment Scenarios
The Education Session
Imagine an education session. This will be modeled as an entity. When the session is booked we
need a venue. This will be the classroom. That will be a second entity. There will be a relationship
between the two:
Session: the name of the session
Classroom: the location of the classroom
Flight number
Departure airport
Arrival airport
Arrival time
I want to know the total number of flights that were late each day!
Flight Number, Day, Arrival Delay
Looking at the sample data available here:
http://apps.bts.gov/xml/ontimesummarystatistics/src/dstat/OntimeSummaryArrivalsData.xml
Airline ontime data:
http://apps.bts.gov/xml/ontimesummarystatistics/src/index.xml
We find some very interesting information.
Flight Number, Arrival date
Date, Flight Number, Scheduled Arrival, Actual Arrival
Language puzzles
Here I collect Rule Agent language puzzles for which I do not yet have solutions.
Collections
Find the nth entry in a collection (eg. the first, the last, the last three, etc.).
Language general
When would you use the 'set' action vs the 'define' action?
Things to do ...