MAPICS Book

Jimmy Sansi

Contents

Custom Print File Overrides
Date Conversions
Purchase Order Offline Load
Inventory Audit Reports
Backing Up
Applying Program Updates
Database Tuning
System Performance
Additional Resources


Custom Print File Overrides
One of the neat features of XA is the ability to set up OS/400
print file overrides for any report without having to
customize any code. The overrides are set up in the Cross
Application Support module and can be configured in a variety
of ways, including by user, report, company, and warehouse,
depending on the report.

However, there may be times when you want to apply an
override using the XA method to a custom report or job that
is executed from within an XA environment. This can be
accomplished by configuring the report override and then
calling program AXZPOV1R to retrieve the override
parameters.

First you will need to create the report override record in the
Cross Application Support module. The printer file name is
one of the parameters you will pass to the AXZPOV1R
program when it is called.

At the most basic level there are two parameters passed
when calling AXZPOV1R: the print file override name and
the control data, such as warehouse or company number.
The following CL code calls program AXZPOV1R and retrieves
a string variable containing the OS/400 OVRPRTF report
override command.
PGM

/* PRTF OVERRIDE PROGRAM PARMS */
DCL VAR(&P1RPNM) TYPE(*CHAR) LEN(10)
DCL VAR(&P1CNTL) TYPE(*CHAR) LEN(10)
DCL VAR(&P1PFNM) TYPE(*CHAR) LEN(10)
DCL VAR(&P1AOOQ) TYPE(*CHAR) LEN(1)
DCL VAR(&P1AOAT) TYPE(*CHAR) LEN(1)
DCL VAR(&P1RTNC) TYPE(*CHAR) LEN(1)
DCL VAR(&P1CMD) TYPE(*CHAR) LEN(512)

/* Change &P1RPNM to the name of the report override */
CHGVAR VAR(&P1RPNM) VALUE('ORDACKRP ')
CHGVAR VAR(&P1CNTL) VALUE(' ')

/* Retrieve OVRPRTF command for this report */
CALL PGM(AXZPOV1R) PARM(&P1RPNM &P1CNTL &P1PFNM +
  &P1AOOQ &P1AOAT &P1RTNC &P1CMD)

/* If return code is valid then apply override */
IF COND(&P1RTNC *EQ '0') THEN(CALL PGM(QCMDEXC) +
  PARM(&P1CMD 512))

CALL PGM(ORDACK)
DLTOVR FILE(ORDACKRP)
MONMSG MSGID(CPF9841)
ENDPGM

This is a straightforward override that works best as a
report or user/report override combination. If you want to
create an override by company or warehouse, you need to
place the appropriate value into the variable P1CNTL to
signify which override to use; otherwise it should be left
blank.

Here are the return variables from program AXZPOV1R. As
shown in the sample source code, your program should
check return variable P1RTNC to make sure the call was
successful and handle any errors returned appropriately.

Output Parameter   Description

P1PFNM             Printer file name
P1AOOQ             Outq override allowed
P1AOAT             Attribute override allowed
P1RTNC             Return code (0=okay, 1=error)
P1CMD              OVRPRTF command string

Date Conversions
Dates in the XA database files are stored in a non-standard
seven-digit CYYMMDD format (for example, December 20, 2008
is stored as 1081220) that does not work well with RPG or CL
programs, and you can't easily use date operations such as
the ADDDUR or SUBDUR functions without first scrubbing the
date into a more standardized format such as MMDDYY.

Typically in most XA shops you will find old utility
programs, written by past programmers to get around the
date dilemma, that use a kludge-like combination of RPG
MOVE operands and substring logic to convert from regular
date formats to the XA format and vice versa. While these
programs do suffice, fortunately there are now a couple of
macro programs that can do the date manipulation and
conversions for you.

The first program you can use, AXZDT, performs date
conversions along with error checking. Here is the
parameter table.

Parameter    Description

RQUEST (5A)  *FILE = Convert USER to FILE
             *USER = Convert FILE to USER
             *CALC = Add FILE date to USER and store in FILE
             *DIFF = Calculate difference in days between
             USER and FILE, store in FILE

USRFMT (1N)  User date format
             0 = System value
             1 = MMDDYY
             2 = DDMMYY
             3 = YYMMDD
             4 = CYYDDD (Julian)

USER (6N)    Date in user format

FILE (7N)    Date in XA format

ERROR (1A)   Error code
             ' ' = No error
             'E' = Parameter errors
             'I' = Invalid date

Let's take a look at a sample CL program to convert the date
December 20th, 2008 to the XA file format of 1081220.
PGM
DCL VAR(&RQUEST) TYPE(*CHAR) LEN(5) VALUE('*FILE')
DCL VAR(&USRFMT) TYPE(*DEC) LEN(1 0) VALUE(1)
DCL VAR(&USER) TYPE(*DEC) LEN(6 0) VALUE(122008)
DCL VAR(&FILE) TYPE(*DEC) LEN(7 0)
DCL VAR(&ERROR) TYPE(*CHAR) LEN(1)

CALL PGM(AXZDT) PARM(&RQUEST &USRFMT &USER &FILE +
  &ERROR)
/* At this point variable &FILE = 1081220 */

ENDPGM

Now it's time to convert the XA formatted date 1081201 to
the more standard date 120108 in MMDDYY format.
PGM
DCL VAR(&RQUEST) TYPE(*CHAR) LEN(5) VALUE('*USER')
DCL VAR(&USRFMT) TYPE(*DEC) LEN(1 0) VALUE(1)
DCL VAR(&USER) TYPE(*DEC) LEN(6 0) VALUE(0)
DCL VAR(&FILE) TYPE(*DEC) LEN(7 0) VALUE(1081201)
DCL VAR(&ERROR) TYPE(*CHAR) LEN(1)

CALL PGM(AXZDT) PARM(&RQUEST &USRFMT &USER &FILE +
  &ERROR)
/* At this point variable &USER = 120108 */

ENDPGM

The other useful date program is AXZPDR; its purpose is to
return the current date and time, which makes it useful when
called in conjunction with AXZDT. Program AXZPDR takes
three parameters:

Parameter    Description

USRFMT (1N)  Date format
DTUSR (7N)   Current date
DTIME (6N)   Current time

Here is a sample CL program to call AXZPDR and retrieve the
current date and time. You will need to examine return
variable USRFMT against the parameter table for AXZDT
to find out the format the date will be returned in.
PGM
DCL VAR(&USRFMT) TYPE(*DEC) LEN(1 0) VALUE(3)
DCL VAR(&DTUSR) TYPE(*DEC) LEN(6 0)
DCL VAR(&DTIME) TYPE(*DEC) LEN(6 0)

CALL PGM(AXZPDR) PARM(&USRFMT &DTUSR &DTIME)

/* &DTUSR is equal to current date in USRFMT format */
/* &DTIME is equal to current time in HHMMSS format */

ENDPGM

A natural next step is to write a program that calls AXZPDR
to retrieve the current date and then converts it into the
format of your choice using AXZDT.
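Here is a minimal sketch of such a program; it assumes the
format code returned by AXZPDR matches the USRFMT codes
expected by AXZDT, as described above, and it follows the
parameter declarations used in the earlier samples.
PGM
DCL VAR(&RQUEST) TYPE(*CHAR) LEN(5) VALUE('*FILE')
DCL VAR(&USRFMT) TYPE(*DEC) LEN(1 0)
DCL VAR(&USER) TYPE(*DEC) LEN(6 0)
DCL VAR(&FILE) TYPE(*DEC) LEN(7 0)
DCL VAR(&ERROR) TYPE(*CHAR) LEN(1)
DCL VAR(&DTIME) TYPE(*DEC) LEN(6 0)

/* Retrieve the current date and the format it is returned in */
CALL PGM(AXZPDR) PARM(&USRFMT &USER &DTIME)

/* Convert today's date into the XA CYYMMDD file format */
CALL PGM(AXZDT) PARM(&RQUEST &USRFMT &USER &FILE +
  &ERROR)
/* If &ERROR is blank, &FILE now holds today's date in XA format */

ENDPGM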
Purchase Order Offline Load
Quite frequently the need arises to create purchase orders
from outside systems and third party programs. To fit this
requirement there is an available enhancement PTF
SH67254 for XA R6 and R7. However, if you are using an
older version of XA, there is another little-known way to
create and modify purchase orders from your own custom
programs: a collection of API programs that comes with the
Purchasing module.

Of course, nowadays the preferred way to get any data into
XA, including your own customized Integrator objects,
would be to use the System-Link module or the lesser
offline load enhancement PTF. In a pinch, though, this
method will work for creating simple, straightforward POs,
and it can create POs in an automated fashion without the
user intervention that is required when using the offline
load technique. These APIs have been tested on release 5
through release 7.

The available API programs that can be used to generate
and modify purchase orders are as follows:

AMIAI10R
Assign purchase order number. This program retrieves the
next available purchase order number from the SYSCTL file
and then increments the record. You should never access
and directly make changes to SYSCTL!

AMIAI01R
Access and maintain purchase order master file POMAST.

AMIAI02R
Access and maintain purchase order item file POITEM.

AM6AI01R
Access and maintain the purchase order comments file
POCOMT.

The parameters for each of these programs are as follows:

Parameter        Description

RTNCD            Return code
FNCTN            Function code
LCKOUT           Record locking
Data Structure   Data structure of the parameter file

The last parameter for each program is a data structure of
the corresponding file containing all the fields that can be
maintained using the API call (except for AMIAI10R, whose
last parameter can simply be a 7-character alpha field to
hold the purchase order number). To use them, simply define
the corresponding file as an external data structure and then
pass that data structure as the data structure parameter when
calling the API.

The list of files to define as the data structure for each
program is as follows:

ZDATI01R
Purchase order master file POMAST for program
AMIAI01R.

ZDATI02R
Purchase order item file POITEM for program AMIAI02R.

ZDAT601R
Purchase order comment file POCOMT for program
AM6AI01R.

The best way to get a thorough layout for each of these files,
including extended field descriptions and requirements, is by
running the File Record Layout Report AMZ14, available
from the Reports menu in the Cross Application Support
module by selecting "Specify Files." Not only will this
report list all of the fields, but it often lists the correct
data for key fields as well, including the characteristics of
those fields.
Here is some sample ILE RPG code for both the prototype
specifications and the data structure definitions used to
retrieve an available PO number and maintain the POMAST
file.
d RTVPONbr        PR                  EXTPGM('AMIAI10R')
d  RTNCD                        1
d  FNCTN                        1
d  LCKOUT                       1
d  PONBR                        7

d CRTPOM          PR                  EXTPGM('AMIAI01R')
d  RTNCD                        1
d  FNCTN                        1
d  LCKOUT                       1
d  POMRCD                             likeds(dZPOMST)

d dZPOMST       e ds                  extname(ZDATI01R) inz(*EXTDFT)

Sample calls in ILE RPG to retrieve the PO number and
create a purchase order (the p-prefixed parameter fields and
the dZPONBR field are assumed to be defined elsewhere in
the program):
RTVPONbr(pRTNCD:pFNCTN:pLCKOUT:dZPONBR);
CRTPOM(pRTNCD:pFNCTN:pLCKOUT:dZPOMST);

For more information on the API programs, including
additional APIs available to do a variety of
purchasing-related tasks, take a look at Appendix D of the
Purchasing user manual.

Inventory Audit Reports
When utilizing material requirements planning based
systems, it is critical to maintain very high levels of accuracy
in bills of materials and inventory; otherwise, inaccuracies
will cause chaos in those planning systems. Several audit
reports are available to help maintain a high degree of
inventory accuracy and the integrity of the Inventory
Management item balance file, including the Allocation
Quantity Audit Report, On Order Quantity Audit Report,
and Location Quantity Audit Report.

It is also generally a good idea, and typically requested by
accounting personnel, that some sort of perpetual inventory
report be run on a daily basis. In addition, if your XA
environments are tailored for batch/lot control, running an
additional inventory report over the location detail file
makes sense. Let's examine each of the inventory auditing
reports available.

Allocation Quantity Audit Report

The Allocation Quantity Audit report AMI9A2 processes
item balance records in the file ITEMBL, correcting the
manufacturing allocated quantity (MALQT) and pick list
requirements (PLREQ) fields by comparing the quantities to
the manufacturing data records and customer order releases.
The command SBMALQAUD to call this report was
introduced with release 6 as part of enhancement PTF
SH64159.

On Order Quantity Audit Report

The On Order Quantity Audit report AMI9C2 checks the
on-order records in the item balance file against open
manufacturing and purchase orders. The command
SBMOOQAUD to call this report was introduced starting
with release 6 as part of enhancement PTF SH64166.

Location Quantity Audit Report

The Location Quantity Audit report AMIC21 compares the
item balance file ITEMBL on-hand quantity against the
location quantity file SLQNTY and prints any differences.
This is simply a report and does not modify any records; if
you have a discrepancy between these two files, it should be
corrected by using a combination of Inventory Adjustment
(IA) and Location Adjustment (LA) transactions using the
location quantity detail and transaction entry options in the
Inventory Management module. The command
SBMLCQAUD to call this report was introduced starting
with release 6 as part of enhancement PTF SH64160.

Release 5 Considerations

Unfortunately, the enhancement PTFs to run the audit
reports were only made available for releases 6 and 7.
However, it is possible to create a CL program to call the
report programs directly and print the On Order Quantity
and Allocation Quantity audits, as in the example below.
You would need to run the Location Quantity Audit report
manually; since it does not actually modify any records, it
can be run ad hoc at any time.
/* Automate On Order Quantity and Allocation Quantity */
/* Audits in Inventory Management – YY Environment */
PGM
ADDLIBLE LIB(AMALIBY)
MONMSG MSGID(CPF0000)
STRMAPICS ENDS(YY) MENU(*NONE)
CALL PGM(AMIPVF) PARM(0 'YY')
CALL PGM(AMIPVA) PARM(0 'YY')
ENDMAPICS
ENDPGM

Inventory Transaction Register

The Inventory Transaction Register report AMV3G prints a
detailed listing of all inventory transactions processed
since the last time the register was run. It is essentially a
listing of all the data associated with each transaction that is
stored in the Inventory Transaction History file IMHIST.
The command PRTIMTXR is available for all releases so
you can automate the job.
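For example, a nightly scheduled job could print the register
for the YY environment with:

PRTIMTXR PROMPT(*NO) ENDS(YY)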

If you are running a large volume of inventory transactions,
this report will be incredibly large and is capable of
becoming thousands of pages in length, rendering it quite
useless unless you need to pinpoint a specific transaction
posted. Along with the regular register is an identical report
that lists only the transactions that are in error; this error
report can help identify issues with IM batches that need to
be corrected or investigated further.

Perpetual Inventory Report

This is simply a report by warehouse and item that queries
the item balance file and calculates the perpetual inventory
balances by multiplying the on-hand quantity by standard
cost, average cost, or last cost, depending on how you
tailored questionnaire response I006. Running some sort of
perpetual report is a good idea since it provides a "snapshot"
of the inventory value at a certain time in history; the only
other way to get this data is to look at specific inventory
transaction records stored in the IMHIST file, which would
be very cumbersome.

The stock Inventory Valuation report AMISS can be found
in the reports section of the Inventory Management module.
This report can be run ad hoc; however, it has a couple of
potential drawbacks, including the inability to run it as a
batch job (so it can't be automated as part of a scheduled job
or backup routine), and it also includes items that have zero
quantity on hand, which you may or may not want to see.
The easiest way to generate your own perpetual inventory
report would be to create a Query/400 query or, if you are
using another reporting tool, an SQL routine along the lines
of the sketch below.
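Here is a minimal sketch of such a routine. The ITEMBL field
names used (ITNBR, HOUSE, MOHTQ for on-hand quantity, and
STDUC for standard unit cost) are illustrative assumptions;
verify the actual names for your release with the File Record
Layout Report AMZ14 described earlier.

-- Perpetual inventory value by warehouse and item
-- (field names assumed; AMFLIBY is the Y environment file library)
SELECT HOUSE, ITNBR, MOHTQ, STDUC,
       MOHTQ * STDUC AS EXTVAL
  FROM AMFLIBY/ITEMBL
 WHERE MOHTQ <> 0
 ORDER BY HOUSE, ITNBR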

For a detailed inventory listing by warehouse, location, and
batch/lot, you can use the stock report Location Stock Status
Detail AMISD or write your own over file SLQNTY.

Scheduling the Reports

Since the On Order and Allocation Quantity audits directly
make changes to the item balance file, these reports need to
be run after hours or as part of the backup routine;
otherwise, they can hang while contending with record locks
from user jobs. Running the audit reports coupled with a
perpetual inventory report during a dedicated backup
provides a very clean daily cutoff where inventory
transactions are prevented from occurring, making any
reconciliation or research of inventory issues much easier.

For sample CL programs to run the audit reports and
transaction register as part of the backup routine, see the
chapter on backing up XA.

Additional Transaction Registers

Although not necessarily used for inventory accuracy
purposes, there are two additional inventory transaction
registers that should be run from time to time to help keep
database file sizes small. These can be found in the
Customer Order Management and Purchasing modules and
utilize the same report AMV3G as the IM transaction
register. The COM and PUR registers contain IW, RW, and
SA transactions related to sales orders and RP transactions
from purchase order receipts.

Backing Up
One of the more important responsibilities performed by a
system administrator is ensuring a good backup is running
daily. If the schedule of your business operations allows it,
running the XA backup routine is a good idea. Not only
does it save important data files, the backup job performs
some other maintenance tasks as well, so if your shop isn't
running twenty-four seven, or can at least spare a couple
hours of downtime, you should definitely run the XA
backup routine.

There are a couple of things that need to be done to ensure
the backup goes off without a hitch, namely closing out
transaction batches and ensuring users have logged out of
their sessions in the environment that is being backed up.
Getting users to comply with either of these tasks is almost
always easier said than done, often leaving active or
suspended batches that will stall a backup, or users who
have gone home for the day and left their sessions open.

Improperly closed transaction batches seem to occur most
often in the Inventory Management and Accounts
Receivable modules, although improperly closed batches in
the Purchasing, Production Monitoring and Control, and
Repetitive Production modules can also cause a backup to
fail. One solution to the active or suspended batch issue is
to have a person such as an operator go through the batches
and close them out prior to running the backup, or at the
end of the day when everyone has gone home. Alternatively,
you can write a custom program to process the inventory
transaction batch control file INTRNC, closing batches by
setting the field BSTAT to 'C'.

If your system is running only one XA environment, you
should also set the OS/400 system value QINACTITV
(interactive job time-out) to something other than *NONE,
as this will automatically log off idle users after a specified
number of minutes. This system value ends the sessions of
idle users who have gone home for the day, although I have
seen instances where users won't get logged out for
whatever reason. To get around this you can also end the
interactive subsystem QINTER that users are logged in
through, and also flag a backup setting to end sessions,
which will be covered in a moment.
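For example, to time out idle interactive sessions after 30
minutes (a sketch; note that the companion system value
QINACTMSGQ determines the action taken, such as *ENDJOB
or *DSCJOB):

CHGSYSVAL SYSVAL(QINACTITV) VALUE('30')
CHGSYSVAL SYSVAL(QINACTMSGQ) VALUE(*ENDJOB)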

If you are using any of the Power Architecture programs
like Power-Link or Net-Link, there is an idle timeout setting
that works exactly like the QINACTITV timeout that you
can configure through the Application Settings of the
System tab in Power-Link itself. In the backup
configuration there are also settings you will want to enable
that end inactive, active, and prestart client jobs and batch
jobs.

Auto-log off settings for Power Architecture programs.


XA Backup Options Screen.

If you are running XA release 5, you will also need to ensure
that there are no ABENDed jobs, as these will also prevent
the backup from running. This can be done by using the job
status program in the CAS module or by creating a custom
program that removes records from the JOBACT file; first
check that the job is not active, which can be done by
calling the OS/400 Retrieve Job Status API QWCRJBST.
As of release 7, checking for ABENDed jobs is no longer an
issue, except for any unattached UJOBs with ABEND
status, which is highly unlikely to occur.

If after all this you are still having problems with the backup
not getting dedicated mode, try ending and restarting the
QUSRWRK subsystem from within the pre and post backup
routines. Doing this along with ending the interactive
subsystems will essentially cut off all access to the system,
ensuring there are no jobs keeping locks on files that
prevent the backup from running. The downside of doing
so is that after the pre-backup job runs, if for any reason the
backup job fails to get dedicated mode, it just ends and
doesn't run the post-backup job; if you have shut down
all the subsystems, no one can access the system, including
the administrator, who then needs to start all the
subsystems again. One way to get around this is to create a
second interactive subsystem, requiring a specific device
name to connect, that an administrator can log into to start
the subsystems back up.
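A minimal sketch of creating such a rescue subsystem; the
subsystem name ADMINSBS, library CUSLIB, and device name
ADMIN01 are hypothetical placeholders:

CRTSBSD SBSD(CUSLIB/ADMINSBS) POOLS((1 *BASE)) +
  TEXT('Admin-only interactive subsystem')
ADDWSE SBSD(CUSLIB/ADMINSBS) WRKSTN(ADMIN01) AT(*SIGNON)
ADDRTGE SBSD(CUSLIB/ADMINSBS) SEQNBR(10) CMPVAL(*ANY) +
  PGM(QSYS/QCMD)
STRSBS SBSD(CUSLIB/ADMINSBS)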

Lastly, a detailed job log is created as part of the backup
program to be used for further problem analysis; it will be
required if requesting help from support.

Backup Methods

XA offers two methods of backing up data files. The first is
the traditional backup to tape, and the other is backing up
to disk, which saves all of the XA data files into OS/400
save files located in the library AMSLIBy (where y is the
designator for your environment). If you opt for backing up
to disk, you will either have to manually run the option to
save the backup to disk onto a physical tape or write a CL
program to accomplish the same thing. If your system is
tight on disk space, you will probably want to run the
backup to tape option; the downside to backing up to tape
is that it can take longer to process, since tape drives are
slower than hard drives.

Whether backing up to disk or to tape, if you need to save
any custom programs or libraries, you will need to
incorporate the routines to do so in the pre or post backup
commands. If backing up to tape, a simple SAVLIB
command will suffice; if backing up to disk, create a save
file and save the custom objects to it. Investigation of the
stock XA program that saves a disk backup to tape reveals
that it issues the OS/400 SAVSAVFDTA command.
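A CL routine could dump a save file to tape the same way; a
minimal sketch, where the tape device name TAP01 is an
assumption and CUSSAVF is the custom save file created in
the post-backup sample later in this chapter:

SAVSAVFDTA SAVF(AMSLIBY/CUSSAVF) DEV(TAP01)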

It is important to note that these backups pertain to XA
master and transaction data files only and will not back up
the application libraries; there are additional XA menu
options to do so, or the application libraries can be backed
up as part of a weekly or monthly save of the entire system,
performed by selecting option 21 from the OS/400 SAVE
menu.

One final word regarding the pre and post backup
commands: it is very important to write good quality code
and be sure to do some error checking within them. If
either of these programs hangs with a message wait, it will
stop the backup until the message is answered, and if the
interactive subsystems have been ended, no one will be able
to access the system.

Release Considerations

If you are running release 7 of XA, there is a new option
available to schedule and run your backup job from the
OS/400 job scheduler. Releases prior to 7 only allow you to
manually schedule a single backup job, which is placed into
a job queue, requiring you to go through the process of
scheduling the backup daily.

Tying It All Together

Regardless of which backup method you select, you should
utilize the pre and post backup commands to run your own
custom CL programs that close out active or suspended
transaction batches, end and restart subsystems, save any
custom libraries, and perform maintenance tasks. The pre
and post backup commands are specified in the backup
options program.

Backup Options for Pre and Post-Backup Commands.

A sample pre-backup program that shuts down the
appropriate subsystems, closes out the Inventory
Management transaction batches, and runs several reports;
these reports are covered more thoroughly in another
chapter. Lastly, it prints a report of the object locks held on
the SYSCTL file. The XA backup job tries to allocate
SYSCTL, and if it can't, the backup fails, so this report can
help you diagnose any remaining jobs holding up your
backup from running.
PGM

ENDSBS SBS(QINTER) OPTION(*IMMED)
DLYJOB DLY(60)

/* End Java Servers for Release 7 Only */
PSICTLUJB ENV(YY) UJOB(PSVJUP) REQUEST(*END)
DLYJOB DLY(300)
PSICTLJVS REQUEST(*LIST) ENVLIST(YY) CTLOPT(*END) +
  GLBSRV(*YES)

ENDSBS SBS(QUSRWRK) OPTION(*IMMED)
MONMSG MSGID(CPF1054)
DLYJOB DLY(300)

/* Custom program to close active and suspended IM batches */
CALL PGM(CLSBCH)

/* Print Inventory Transaction Register */
PRTIMTXR PROMPT(*NO) ENDS(YY)

/* Print Allocation Quantity Audit Report */
SBMALQAUD PROMPT(*NO) ENDS(YY)

/* Print On Order Quantity Audit Report */
SBMOOQAUD PROMPT(*NO) ENDS(YY)

/* Print Location Quantity Audit Report */
SBMLCQAUD PROMPT(*NO) ENDS(YY) RPTFMT(*SUMMARY)

/* Print report on object locks on SYSCTL file */
WRKOBJLCK OBJ(AMFLIBY/SYSCTL) OBJTYPE(*FILE) MBR(*ALL) +
  OUTPUT(*PRINT)

ENDPGM

CL code for a post-backup program that saves a custom
library and document library objects, then starts up the
interactive and batch subsystems so users can get back on
the system.
PGM

CRTSAVF FILE(AMSLIBY/CUSSAVF) TEXT('Custom Data Library')
MONMSG MSGID(CPF7302)

CRTSAVF FILE(AMSLIBY/IFS) TEXT('IFS Save File')
MONMSG MSGID(CPF7302)

SAVLIB LIB(CUSLIB) DEV(*SAVF) +
  SAVF(AMSLIBY/CUSSAVF) OUTPUT(*PRINT)
SAVDLO DLO(*ALL) DEV(*SAVF) SAVF(AMSLIBY/IFS)

STRSBS SBSD(QINTER)
STRSBS SBSD(QUSRWRK)

ENDPGM

RPG source code listing for the CLSBCH program that
closes active and suspended inventory transaction batches.
FINTRNC UP E DISK
FQSysPrt O F 132 Printer OFLIND(*IN90)
*
D Count S 5 0
*
C If Not *IN30
C Except Header
C Seton 30
C EndIf
C If BStat = 'A' Or
C BStat = 'S'
C If *IN90 = *On
C Except Header
C Setoff 90
C EndIf
C Except Detail
C Eval BStat = 'C'
C Update INTRNCTA
C Add 1 Count
C EndIf
CLR Except PrtCount
*
* Audit Report
*
OQSYSPRT E Header 2 03
O UDATE Y 9
O 42 'Records in Batch file'
O 66 'INTRNC that were active'
O 82 'and auto-closed'
O 125 'Page:'
O PAGE 130
O E Header 1
O 17 'Previous Batch'
O 40 'Month/Day Time of'
O 64 'Batch Rcd Error Rcd'
O 77 'Total'
O E Header 2
O 18 'Batch Sts Number'
O 40 'of Update Update'
O 62 'Count Count'
O 78 'Quantity'
O 101 'Total Amount'
O E Detail 1
O BSTAT 6
O BATCH Z 16
O MODAY Y 28
O UTIME 41 ' : : '
O BCNTM J 52
O BERRS J 63
O TOQTY J 81
O TOAMT J 102
O E PrtCount 2 1
O 25 'Records updated . . .:'
O Count J 34
Sample Report
Applying Program Updates
Program fixes and updates come in two different flavors:
Program Temporary Fixes (PTFs) and Program Corrective
Maintenance (PCMs). The primary difference between the
two is that a PTF generally contains individual bug fixes,
while PCMs contain an accumulation of individual PTFs
and all major program enhancements in a single package
applied at once from a CD or tape.

Before applying any XA PTFs or PCMs, you will need to get
the AFD utility, which allows you to download individual
PTFs from the Infor support site and convert them into a
format that the XA Cross Application Support module's
Program Maintenance functions can restore and apply to
your XA environments. This function is covered in the
Cross Application Support manual as well.

When it comes to applying PCMs, the task is much more
comprehensive and difficult due to the number of steps
required to ensure success. First you will want to review the
PTF status inquiry program to find out what PCM level the
environment is currently at.

The current PCM level for this release 7 XA environment is 4021.

After finding out the current PCM level, get on the Infor
support site and download informational PTF SH12472.
This PTF contains a list of all PCMs and the corresponding
pre-requisite downloads for all PCMs dating back to XA
release 4.

Once you have identified the current PCM level your
environment is running at, use the information from
SH12472 to get back on the support site and order the latest
PCM CDs. You will need to order every PCM CD and
apply them in the order listed; you cannot skip PCMs.

PCM Apply Process

When you are ready to apply a PCM, download the latest
pre-requisite PTF, as it contains the very latest code changes
and bug fixes to ensure the PCM applies without any issues.
As customers apply PCMs, new issues arise, and the pre-
requisites can change even for PCMs that were released
several months prior.

You must also carefully read the cover letter for the PCM
you are about to apply (it is part of the PCM's pre-requisite
download), as it will almost always have a list of additional
individual pre-requisite PTFs you must apply before you
can apply the PCM. This will be followed by post-apply
instructions or additional individual PTFs that must be
applied. This is a very important step; failure to follow the
cover letter can result in serious errors or cause the PCM
apply to fail altogether.

Also keep in mind that the pre-requisite PTFs required for
the PCM may themselves have additional pre-requisite
PTFs you will need to apply. Utilize the PTF Inquiry
feature in the Cross Application Support module to peruse
the PTF apply history logs and identify all pre-requisite
PTFs that need to be applied; if in doubt, go ahead and
apply the PTF again.

The PCM load and apply process can vary in length of time
depending on how many changes are contained on the CD.
It is recommended that you back up your environment (the
PCM programs will prompt you to do so) just in case
something goes awry.

Lastly, if you have modified any stock XA programs
(generally not a good idea), you will need to further evaluate
the documentation included with the PCM to find out if you
need to upgrade the modified programs.

Aside from modifications, if you are running a plain vanilla
install, the PCM apply process should go smoothly without
any major issues. However, as a precaution, if you have
sufficient available disk space, you may want to build a
secondary test environment, apply the PCM to it, and then
have users perform some preliminary testing in critical
areas to make sure everything is working correctly before
applying the PCM to your live environment. Typically any
program bugs encountered after applying a PCM have a
downloadable PTF fix available. Creating additional
environments is covered in the Cross Application Support
manual.

Database Tuning
In the not so distant past, the IT programming staff for a
company generated all of the reports, programs, and
modifications for a given software system. Because of this
dependence on highly skilled individuals, tight control could
be exerted over the design of these programs and
modifications, ensuring the performance of the system,
programs, and underlying database did not needlessly suffer
from degradation due to poor process and coding design.

At first the main performance degradation issues revolved
around users generating and running ad-hoc reports using
third party reporting tools, but with the advent of the
Integrator and Enterprise Integrator XA modules,
customization has not only become routine, it has almost
become mandatory to accomplish business goals and tasks.
While Integrator has made customization a virtual piece of
cake, this sort of customization comes at the cost of
degraded performance, especially when the ability to
customize Integrator objects is placed in the hands of power
users who don't necessarily understand the underlying
architecture, database files, and the nuances of the iSeries
platform.

The main issue lies with queries joining two or more files.
The query optimizer built into OS/400 pre-processes SQL
queries behind the scenes, selecting appropriate files and
indexes when they exist or modifying the SQL statement;
but when the optimal files or indexes do not exist, OS/400
will build temporary access paths to accommodate the query
request.

Remember that Integrator and other types of third party
reporting tools can make good business sense, but the high
degree of flexibility and customization comes at a cost in
performance and can wreak havoc if you stray too far from
the baseline functions built into XA. It's best to have
competent IT individuals with a good understanding of the
database and iSeries system work alongside users building
reports and customized objects before they are released for
mass consumption to all users. It is best to look at Integrator
as a tool to help extend the existing XA database or build
onto pieces fitting a specific business need, not to replace
the logic of the system, as that path can only lead to failure
and much pain when faced with migrating volumes of
custom code to newer versions of the software.

Utilize Batch Processing

Keep in mind that a report is a report. That is, when trying
to look at or summarize slices of data over large files, you
should utilize the batch processing feature built into OS/400
where possible; this way, large report jobs can run in the
background in a batch subsystem while users do something
else without needing to wait.
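A minimal sketch of submitting a report program to batch
with the SBMJOB command; the program, library, and job
names are hypothetical:

SBMJOB CMD(CALL PGM(CUSLIB/MYREPORT)) JOB(NIGHTRPT) +
  JOBQ(QBATCH) OUTQ(QPRINT)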

Integrator and Enterprise Integrator come with the ability to
create your own Host Jobs and Host Reports. These two
options are tailor-made for submitting jobs to batch
subsystems to update large quantities of data or to generate
reports. Make use of these features when it makes sense to
do so, and instruct users on how to use them. With the
inherent flexibility of these tools comes the possibility of
swamping a system with a bunch of interactive requests that
would be better served as batch jobs and reports. Also, on
some iSeries systems interactive jobs are limited in
processing resources by a governor built into the system
while batch jobs are not; take advantage of batch processing
when it makes sense to do so.

OS/400 System Values

The first place to look when tuning database performance is
the built-in settings that are a part of the OS/400 operating
system. There are a couple of ways you can set timeout
limits on queries made against the system to help prevent
users from consuming all of the system resources with a
malformed or badly created query. The first is by setting the
OS/400 system value Query processing time limit,
QQRYTIMLMT. This value can be set to a number of
seconds ranging from 0 to 2147352578, or *NOMAX, and it
limits the amount of time that a query can run before the
operating system steps in and ends it.

The second option is to set a similar value as part of creating
an ODBC connection to your system using the ODBC
driver that comes bundled with Client Access.

If your system has more than one processor, you will want to
look into setting the system value for parallel processing
degree, QQRYDEGREE. QQRYDEGREE, used in
conjunction with the system feature DB2 Symmetric
Multiprocessing for OS/400, allows additional processor
resources to work on query access paths to the database in
parallel; naturally this will help to improve query
performance.
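Both system values can be changed with the CHGSYSVAL
command; the ten-minute limit below is only an example, so
pick a value that suits your workload:

/* Limit ad hoc queries to 600 seconds of run time */
CHGSYSVAL SYSVAL(QQRYTIMLMT) VALUE('600')
/* Let the optimizer use parallel processing where beneficial */
CHGSYSVAL SYSVAL(QQRYDEGREE) VALUE(*OPTIMIZE)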

Indexes and Logical Files

In this chapter we will focus on ways to get the most bang
for the buck in the least amount of time. Since the majority
of un-optimized programs are going to originate from
reporting programs and Integrator objects, the quickest way
to see performance gains is to look at the default OS/400
values governing query processing and to examine the
queries, in particular the number of indexes created on the
fly by the query optimizer, as these can take long amounts
of time.

The term index is used interchangeably with logical file, as
the end result is roughly the same: a file created over a
physical file with specific key fields and sequence.

Query Optimizer

OS/400 comes with a special database optimizer feature that
resides as a virtual layer between user programs and the
integrated DB2 database, optimizing query access made
through SQL statements, the OPNQRYF command, APIs,
Query/400 query definitions, and ODBC connections. You
don't need to do anything to utilize the query optimizer, as
it is built right into the database system, running quietly in
the background without any intervention.

Essentially, when any kind of query is issued, the query
optimizer steps in and examines the query and the available
methods to access the data. As part of that examination the
optimizer assigns a score to each available method, then
selects the best scoring method to satisfy the query request.
As an example of the optimizer at work, you might issue an
SQL statement that sequentially accesses a physical file,
while the query optimizer may choose to run the statement
over a logical file if the scoring determines that is the best
access plan.

When working with a single file, the query optimization
process is rather straightforward; it is when running queries
that join two or more files that performance can become
severely impacted. The worst case scenario is when the
query optimizer is optimizing a query over two or more
very large files and cannot find an index, and instead builds
a temporary index to satisfy the request, as this is both time
and resource consuming for the system.

The query optimizer itself is a rather complex system, and
for more than just simple performance gains a detailed
explanation is way outside the scope of this book. However,
if you are going to be writing custom programs using
embedded SQL or issuing SQL statements directly against
the system, you should definitely check out the IBM
"DB2 Universal Database for iSeries Database
Performance and Query Optimization"1 manual for a
complete, detailed treatment of optimizing the database and
programs.

SQL Performance Monitor

By now you are probably wondering just how many new
indexes are going to have to be created to accommodate all
of these requests. When it comes to database tuning, an
indispensable tool is the SQL Performance Monitor feature
of iSeries Navigator.

The SQL Performance Monitor runs in the background,
collecting detailed data on all of the queries executed on the
system. Later you can run analysis reports on this data and
find out just what programs are hammering the system the
hardest by displaying a variety of the performance metrics
collected.

Like any sort of performance monitoring, you should have
an established baseline created long before users start
complaining of poor query performance and forcing a knee-
jerk reaction. Running a performance monitor collection for
about a week (outside of crunch times such as month-end
processing) should give you a general idea of the typical
system load in relation to queries.

To start the system collecting performance data, simply
right-click on SQL Performance Monitors and select New
SQL Performance Monitor, then fill out a few fields.

1 http://publib.boulder.ibm.com/infocenter/iseries/v5r4/topic/rzajq/rzajq.pdf

SQL Performance Monitor

The performance monitor will then be started. It's important
to note that the running performance monitor writes data to
a log file that can become rather large, so you will want to
keep an eye on it or turn off monitoring once you have
gathered sufficient data for analysis.

To create an analysis of the data, right-click on the
performance monitor you set up and select Analyze. This
brings up a whole host of options; the one we will
concentrate on specifically is "Index Create Information."
Check the appropriate box and click the View Results
button.

Performance Monitor Analysis Options

After a few moments, a grid-like display will pop up listing
all of the queries along with various pieces of information
about each query. In particular you will want to focus on
the "Time to create index" column.

Be on the lookout for queries that have a very lengthy "time
to create index" value. Once located, scroll to the right and
copy the SQL statement, as we will plug it into the Run
SQL Scripts screen in the next section.

Run SQL Scripts

iSeries Access also comes with the ability to directly
execute SQL statements. Beyond just simple execution, it
also comes with several great tools that allow you to dissect
and analyze an SQL query.
Simply take the SQL statement that was copied from the
SQL Performance Monitor and paste it into the Run SQL
Scripts screen. Then open the Visual Explain menu and
choose Explain.

Visual Explain runs the SQL statement through the query
optimizer and produces detailed information and a graphical
listing of the entire query.
From the Action menu, select Advisor, and it will bring up a
box detailing indexes that the query optimizer suggests you
build to try to optimize this particular query. It details the
file name and the key sequence of the fields over which to
build a new SQL index or logical file.

Index Advisor

System Performance
Here are a few general system performance tips to help keep
your system running at peak performance. Of course, no
amount of tuning is going to help an undersized system that
is running at full capacity. An in-depth discussion of
OS/400 performance analysis and tuning is beyond the
scope of this book, but visit www.redbooks.ibm.com for
more information.

Generally you can find out how the system is performing at
a glance using the WRKSYSSTS and WRKACTJOB
commands provided with OS/400. For more advanced
performance monitoring you will need the licensed features
provided on the GO PERFORM menu. Also refer to the
chapter on database tuning.

Reorganize the database files to clean out deleted records
from the database tables. XA comes with a reorganize
function built right into the Cross Application Support
module. If you have never run file reorganization before,
you will want to schedule a large chunk of downtime to do
so, as the reorg requires a dedicated environment.

Run the reclaim storage command RCLSTG. The reclaim
storage process cleans up damaged objects and requires that
your iSeries system be in dedicated mode.
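RCLSTG must be run with all subsystems ended (a restricted
state), typically from the system console; a minimal sketch:

ENDSBS SBS(*ALL) OPTION(*IMMED)
RCLSTG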

Be sure to keep current on OS/400 PTFs, in particular the
database group and Java group PTFs. You should set up
your system to automatically check for and download new
PTFs as they become available, then review the cover letter
for each downloaded fix and apply it as appropriate.

If you are planning to upgrade to XA release 7, it is
important to note that the client architecture has changed
dramatically from prior versions. R7 client programs have
been completely rewritten in Java and make use of a true
client-server technology, unlike the prior Browser versions.
Even though the Java processes have been continually
optimized since their initial release, they are much more
demanding and require more system resources than before.
So while that AS/400 model 170 may have cut the mustard
before, you will need to upgrade to something more
modern. In addition to the above suggestions, there are a lot
of OS/400 settings you will need to make that are detailed
in Infor PTF SH14413.
Additional Resources
The MAPICS-L Mailing List is the place to ask questions
and get answers from other XA professionals all over the
globe. www.midrange.com

Books

Manufacturing expert Dave Turbide has authored books on
core XA functions with the following titles: MAPICS
Production Activity Control and MAPICS Planning. These
are the books for learning how to use the shop floor control
and planning modules of XA. Dave has also written several
other books covering computers and MRP-based systems
for manufacturers. www.daveturbide.com

Tenney Publications has published books written or co-
authored by former IBM, Marcam, and MAPICS employee
Bob Tenney. Several titles are available covering the IFM
module, general ledger interfaces, Information Workplace,
Microsoft FRx, and the Presence programs.
www.tenneypubs.com

Third Party Software

American Viking Enterprises has third party add-on
modules for accounting, order entry and sales analysis,
inventory, purchasing, and more. www.avef.net

Jacana by Momentum Utilities is a suite of spool file
overlay and data mining reporting tools designed
specifically to work with the iSeries; it comes with canned
reports for XA users. www.jacana.com

Paper-Less created a series of shop floor control and
manufacturing execution modules for XA including the
MES, MDCC, and LEAP lean manufacturing modules.
www.paperlessllc.com

Fax*Star fax software is available with hooks into XA,
providing document faxing for sales orders, quotes, and
purchase orders. www.faxstar.com

Vanguard Systems has the Workflow application. Infor also
owns this system, but as of the writing of this book
Vanguard has consultants that you can hire to help develop
Workflow applications. www.vansystems.com

Infor Channel Partners

DKM Inc covers the Pacific coast region. www.dkminc.com

CISTech covers the Atlantic coast and South East and has
also developed many great XA add-on modules for
accounting, general ledger, inventory, and RF transactions.
www.cistech.net

TriMin Systems Inc covers Minnesota, Iowa, and Missouri.
www.triming.com

Guide Technologies covers Ohio, Indiana, Kentucky,
Pennsylvania, West Virginia, and New Jersey.
www.guidetechnologies.com

Watermark Solutions covers Texas, the Rocky Mountains,
and parts of the south. www.wmerp.com

The System Decision is based out of New Zealand.
www.tsdint.com
