Analysis Manager
User’s Guide
Corporate
MSC.Software Corporation
2 MacArthur Place
Santa Ana, CA 92707 USA
Telephone: (800) 345-2078
Fax: (714) 784-4056

Europe
MSC.Software GmbH
Am Moosfeld 13
81829 Munich, Germany
Telephone: (49) (89) 43 19 87 0
Fax: (49) (89) 43 61 71 6

Asia Pacific
MSC.Software Japan Ltd.
Shinjuku First West 8F
23-7 Nishi Shinjuku 1-Chome, Shinjuku-Ku
Tokyo 160-0023, JAPAN
Telephone: (81) (3)-6911-1200
Fax: (81) (3)-6911-1201
Worldwide Web
www.mscsoftware.com
Disclaimer
This documentation, as well as the software described in it, is furnished under license and may be used only in accordance with
the terms of such license.
MSC.Software Corporation reserves the right to make changes in specifications and other information contained in this document
without prior notice.
The concepts, methods, and examples presented in this text are for illustrative and educational purposes only, and are not
intended to be exhaustive or to apply to any particular engineering problem or design. MSC.Software Corporation assumes no
liability or responsibility to any person or company for direct or indirect damages resulting from the use of any information
contained herein.
User Documentation: Copyright ©2008 MSC.Software Corporation. Printed in U.S.A. All Rights Reserved.
This notice shall be marked on any reproduction of this documentation, in whole or in part. Any reproduction or distribution of this
document, in whole or in part, without the prior written consent of MSC.Software Corporation is prohibited.
The software described herein may contain certain third-party software that is protected by copyright and licensed from
MSC.Software suppliers. Contains IBM XL Fortran for AIX V8.1, Runtime Modules, (c) Copyright IBM Corporation 1990-2002,
All Rights Reserved.
MSC, MSC/, MSC Nastran, MD Nastran, MSC Fatigue, Marc, Patran, Dytran, and Laminate Modeler are trademarks or registered
trademarks of MSC.Software Corporation in the United States and/or other countries.
NASTRAN is a registered trademark of NASA. PAM-CRASH is a trademark or registered trademark of ESI Group. SAMCEF is
a trademark or registered trademark of Samtech SA. LS-DYNA is a trademark or registered trademark of Livermore Software
Technology Corporation. ANSYS is a registered trademark of SAS IP, Inc., a wholly owned subsidiary of ANSYS Inc. ACIS is a
registered trademark of Spatial Technology, Inc. ABAQUS and CATIA are registered trademarks of Dassault Systemes, SA.
EUCLID is a registered trademark of Matra Datavision Corporation. FLEXlm is a registered trademark of Macrovision
Corporation. HPGL is a trademark of Hewlett Packard. PostScript is a registered trademark of Adobe Systems, Inc. PTC, CADDS
and Pro/ENGINEER are trademarks or registered trademarks of Parametric Technology Corporation or its subsidiaries in the
United States and/or other countries. Unigraphics, Parasolid and I-DEAS are registered trademarks of UGS Corp. a Siemens
Group Company. All other brand names, product names or trademarks belong to their respective owners.
P3*2008R1*Z*ANM*Z* DC-USR
Contents
MSC Patran Analysis Manager User’s Guide
1 Overview
   Purpose
   Product Information
   What is Included with this Product?
   Integration with MSC Patran
   How this Manual is Organized
2 Getting Started
   Quick Overview
   Enabling/Disabling the Analysis Manager
   MSC Nastran Submittals
   ABAQUS Submittals
   MSC.Marc Submittals
   Generic Submittals
   The Main Form
   Invoking the Analysis Manager Manually
   Files Created
3 Submit
   Introduction
   Selecting Files
   Where to Run Jobs
   Windows Submittal
4 Configure
   Introduction
   Disk Space
      MSC Nastran Disk Space
      ABAQUS, MSC.Marc, and General Disk Space
   Memory
      MSC Nastran Memory
      ABAQUS Memory
      MSC.Marc and General Memory
   Mail
   Time
   General
   Restart
      MSC Nastran Restarts
      MSC.Marc Restarts
      ABAQUS Restarts
   Miscellaneous
      MSC Nastran Miscellaneous
      MSC.Marc Miscellaneous
      ABAQUS Miscellaneous
      General Miscellaneous
5 Monitor
   Introduction
   Running Job
      Windows Interface
   Completed Job
      Windows Interface
   Host/Queue
      Job Listing
      Host Status
      Queue Manager Log
      Full Listing
      CPU Loads
6 Abort
   Selecting a Job
   Aborting a Job
      UNIX Interface
      Windows Interface
7 System Management
   Directory Structure
   Installation
      Installation Requirements
      Installation Instructions
8 Error Messages
   Error Messages
Chapter 1: Overview
- Purpose
- Product Information
- What is Included with this Product?
- Integration with MSC Patran
- How this Manual is Organized
Purpose
MD Nastran, MSC.Marc, and MSC Patran are analysis software systems developed and maintained by
the MSC.Software Corporation. MD Nastran and MSC.Marc are advanced finite element analysis
programs used mainly for analyzing complex structural and thermal engineering problems. The core of MSC Patran is a finite element analysis pre/postprocessor. Several optional products are available with MSC Patran, including advanced postprocessing, interfaces to third-party solvers, and application modules. This document describes the MSC Patran Analysis Manager, one of these application modules.
The Analysis Manager provides interfaces within MSC Patran to submit, monitor and manage analysis
jobs on local and remote networked systems. It can also operate in a stand-alone mode directly with MD
Nastran, MSC.Marc, ABAQUS, and other general purpose finite element solvers.
At many sites, engineers have several computing options. Users can choose from multiple platforms or
various queues when jobs are submitted. In reality, the resources available to them are not equal. They
differ based on the amount of disk space and memory available, system speed, cost of computing
resources, and number of users. In networked environments, users frequently do their modeling on local
workstations with the actual analysis performed on compute servers or other licensed workstations.
The MSC Patran Analysis Manager automates the process of running analysis software, even on remote and dissimilar platforms. Files are automatically copied to where they are needed, the analysis is performed, pertinent information is relayed back to the user, and files are returned or deleted when the analysis is complete, even in heterogeneous computing environments. Time-consuming system housekeeping tasks are reduced so that more time is available for productive engineering.
The Analysis Manager replaces text-oriented submission scripts with a Motif-based, menu-driven interface (or a native interface on Windows platforms), allowing users to submit and control jobs with point-and-click ease. No programming is required. Most users are able to use it productively after a short demonstration.
Product Information
The MSC Patran Analysis Manager provides convenient and automatic submittal, monitoring, control
and general management of analysis jobs to local or remote networked systems. Primary benefits of using
the Analysis Manager are engineering productivity and efficient use of local and corporate network-wide
computing resources for finite element analysis.
The Analysis Manager has its own scheduling capability. If commercially available queueing software, such as LSF (Load Sharing Facility) from Platform Computing Ltd. or NQS, is available, the Analysis Manager can be configured to work closely with it.
This release of the MSC Patran Analysis Manager works explicitly with MD Nastran and MSC.Marc releases up to version 2006, and with versions of ABAQUS up to 6.x. It also has a general capability that allows almost any analysis application to be supported in a generic way.
For more information on how to contact your local MSC representative, see Technical Support.
What is Included with this Product?
Integration with MSC Patran
How this Manual is Organized
System Management details system administration. The individual program executables are described, as well as the necessary configuration files, installation guidelines, and requirements. This chapter is mainly for the system administrator who must install and configure the Analysis Manager.
Error Messages gives descriptions of, and solutions to, error messages.
Chapter 2: Getting Started
- Quick Overview
- Enabling/Disabling the Analysis Manager
- MSC Nastran Submittals
- ABAQUS Submittals
- MSC.Marc Submittals
- Generic Submittals
- The Main Form
- Invoking the Analysis Manager Manually
- Files Created
Quick Overview
Before Patran’s Analysis Manager can be used, it must be installed and configured by the system
administrator. See System Management for more on the installation and set-up of the module.
As part of this setup, the system administrator starts the Analysis Manager's queue manager (QueMgr) daemon or service, which is always running on a master system. The queue manager schedules all jobs submitted through the Analysis Manager. The master host is generally the system on which Patran or an analysis module was installed, but it does not have to be.
The system administrator also starts another daemon (or service) that runs on all machines configured to
run analyses, called the remote manager (RmtMgr). This daemon/service allows for proper
communication and file transfer to/from these machines.
Users who already have analysis input files prepared and are not using Patran may skip to The Main Form after reviewing the rules for input files for the various submittal types in this chapter.
When using Patran, the user generally begins by setting the Analysis Preference to the appropriate analysis code, such as MSC Nastran, from the Preferences pull-down menu on the top menu bar.
Once the Analysis Preference is set and a proper analysis model has been created in Patran, the user can
submit the job. Generally, the submittal process takes place from the Analysis application form when the
user presses the Apply button. The full interface, with access to all features of Patran's Analysis Manager, is always available, regardless of the Preference setting, from the Tools pull-down menu or from the Analysis Manager button on the Analysis form. The location of the submittal form is explained throughout this chapter for each supported analysis code.
Enabling/Disabling the Analysis Manager
MSC Nastran Submittals
To submit, monitor, and manage MSC Nastran jobs from Patran using the Analysis Manager, make sure
the Analysis Preference is set to MSC Nastran. This is done from the Preferences menu on the main
Patran form. The Analysis form appears when the Analysis toggle, located on the main Patran
application switch, is chosen. Pressing the Apply button on the Analysis application form with the
Action set to Analyze, Monitor, or Abort will cause the Analysis Manager to perform the desired
action. A chapter is dedicated to each of these actions in the manual as well as one for custom
configuration of MSC Nastran submittals.
The Analysis Manager generates the MSC Nastran File Management Section (FMS) of the input file
automatically, unless the input file already contains the following advanced FMS statements:
INIT
DBLOCATE
ACQUIRE
DBCLEAN
DBFIX
DBLOAD
DBSETDEL
DBUNLOAD
EXPAND
RFINCLUDE
ENDJOB
ASSIGN USRSOU
ASSIGN USROBJ
ASSIGN OBJSCR
ASSIGN INPUTT2
ASSIGN INPUTT4
in which case the user is prompted either to use the existing FMS as-is or to have the Analysis Manager auto-generate the FMS, incorporating what FMS is already present, with certain exceptions.
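For illustration only, a short hand-authored FMS section of the kind discussed above might look like the following. The file names and sizes here are hypothetical examples, not values the Analysis Manager generates:

```
ASSIGN MASTER='myjob.master'
INIT DBALL LOGICAL=(DBALL(250MB))
INIT SCRATCH LOGICAL=(SCRATCH(500MB))
```

An input file containing statements such as these would trigger the prompt described above rather than being rewritten silently.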
ABAQUS Submittals
Any standard ABAQUS (up to version 6.x) problem can be submitted using Patran’s Analysis Manager.
This is accomplished from the Analysis form with the Analysis Preference set to ABAQUS.
The following rules apply to ABAQUS run-ready input files for submittal:
1. The filename may not have any '.' characters except for the extension. The filename must begin
with a letter (not a number).
2. The combined filename and path should not exceed 80 characters.
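As a sketch, the two rules above can be checked before submittal with a small script. The function name is illustrative and not part of the Analysis Manager:

```python
import os
import re

def check_input_filename(path):
    """Check an input file path against the two submittal rules above."""
    # Rule 2: the combined path and filename should not exceed 80 characters.
    if len(path) > 80:
        return False
    name = os.path.basename(path)
    # Rule 1: the name starts with a letter and contains no '.' characters
    # except the one introducing the extension.
    return bool(re.fullmatch(r"[A-Za-z][^.]*\.[^.]+", name))
```

For example, `check_input_filename("myjob.inp")` passes, while `my.job.inp` and `1job.inp` fail.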
Run-ready input files prepared by Patran follow these rules. Correct and proper analysis files are created
by following the instructions and guidelines as outlined in the Patran Interface to ABAQUS Preference
Guide.
To submit, monitor, and manage ABAQUS jobs from Patran using the Analysis Manager, make sure the
Analysis Preference is set to ABAQUS. This is done from the Preferences menu on the main form. The
Analysis form appears when the Analysis toggle, located on the Patran application switch, is chosen.
Pressing the Apply button on the Analysis application form with the Action set to Analyze, Monitor,
or Abort will cause the Analysis Manager to perform the desired action. A chapter is dedicated to each
of these actions in the manual as well as one for custom configuration of ABAQUS submittals.
If multiple file systems have been defined, the Analysis Manager will generate aux_scratch and
split_scratch parameters appropriately based on current free space among all file systems for the
host on which the job is executing. See Disk Space for more information.
Restarts are handled by the Analysis Manager by optionally copying the restart (.res) file to the
executing host first, then running ABAQUS with the oldjob keyword. See Restart for more
information.
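As a rough sketch of what that amounts to on the executing host, the restart run is an ordinary ABAQUS invocation with the oldjob keyword added. This shows argument construction only; job names are hypothetical:

```python
def abaqus_restart_args(jobname, oldjob):
    """Build the argument list for an ABAQUS run restarting from `oldjob`.

    Assumes the previous job's restart (.res) file is already present on
    the executing host, as described above.
    """
    return ["abaqus", f"job={jobname}", f"oldjob={oldjob}"]
```

For example, `abaqus_restart_args("bracket_r2", "bracket_r1")` yields the usual `abaqus job=... oldjob=...` form.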
MSC.Marc Submittals
Any standard MSC.Marc (up to version 2006) problem can be submitted using Patran’s Analysis
Manager. This is accomplished from the Analysis form with the Analysis Preference set to MSC.Marc.
The following rules apply to MSC.Marc run-ready input files for submittal:
1. The filename may not have any '.' characters except for the extension. The filename must begin
with a letter (not a number).
Run-ready input files prepared by Patran follow these rules. Correct and proper analysis files are created by following the instructions and guidelines as outlined in the Marc Preference Guide.
To submit, monitor, and manage MSC.Marc jobs from Patran using the Analysis Manager, make sure the
Analysis Preference is set to MSC.Marc. This is done from the Preferences menu on the main form.
The Analysis form appears when the Analysis toggle, located on the Patran application switch, is chosen.
Pressing the Apply button on the Analysis application form with the Action set to Analyze, Monitor,
or Abort will cause the Analysis Manager to perform the desired action. A chapter is dedicated to each
of these actions in the manual as well as one for custom configuration of
MSC.Marc submittals.
Multiple file systems are not supported with MSC.Marc submittals. See Disk Space for
more information.
Restarts, user subroutines, externally referenced result (POST) and view factor files are handled by the
Analysis Manager by optionally copying these files to the executing host first, then running MSC.Marc
with the appropriate command arguments. See Restart for more information.
Generic Submittals
Aside from the explicitly supported analysis codes (MSC Nastran, MSC.Marc, and ABAQUS), almost any analysis application can be submitted, monitored, and managed using Patran's Analysis Manager general analysis management capability. This is accomplished by selecting Analysis Manager from the Tools pull-down menu on the main Patran form. This brings up the full Analysis Manager user interface, which is described in the next section, The Main Form.
When the Analysis Manager is accessed in this manner, it keys off the current Analysis Preference. If the Preference is set to MSC Nastran, MSC.Marc, or ABAQUS, the jobname and any restart information is passed from the current job to the Analysis Manager, which comes up ready to submit, monitor, or manage this job.
Any other Preference that is set must be configured correctly as described in Installation and is considered part of the general analysis management. The jobname from the Analysis form is passed to the Analysis Manager and the job is submitted with the configured command line and arguments. (How to configure this information is given in Miscellaneous and Applications.) If an analysis code is to be submitted, yet no
Analysis Preference exists for this code, the Analysis Manager is brought up in its default mode and the
user must then manually change the analysis application to be submitted via an option menu. This is
explained in detail in the next section.
On submittal of a general analysis code, the job file is copied to the specified analysis computer, the
analysis is run, and all resulting files from the submittal are copied back to the invoking computer
and directory.
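Conceptually, the sequence above amounts to the following. This is a simplified local sketch using plain file copies; the real Analysis Manager transfers files between hosts, and the function name is illustrative:

```python
import shutil
import subprocess
from pathlib import Path

def run_generic_job(job_file, work_dir, command):
    """Copy the job file to a work area, run the analysis, copy results back."""
    work = Path(work_dir)
    work.mkdir(parents=True, exist_ok=True)
    src = Path(job_file)
    staged = work / src.name
    shutil.copy(src, staged)                             # copy input to the analysis area
    subprocess.run(command + [str(staged)], check=True)  # run the analysis code
    for result in work.glob(src.stem + ".*"):            # copy resulting files back
        if result != staged:
            shutil.copy(result, src.parent / result.name)
```

The real implementation also relays status information back to the user while the job runs.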
The Main Form
UNIX Interface
Note: The rest of this form's appearance varies depending on the Action that is set. Different databoxes, listboxes, or other items are displayed in accordance with the Action/Object menu settings. Each of these is discussed in the following chapters.
Windows Interface
Note: The rest of this form's appearance varies depending on the Tab that is selected. Different databoxes, listboxes, or other items are displayed in accordance with the Tree and/or Tab settings. Each of these is discussed in the following chapters.
Queue: The main purpose of this pull-down menu is to allow a user to Exit the program, Print where appropriate, and Connect To... other queue manager daemons or services. User Settings can also be saved and read from this pull-down menu. For administrators, other items on this pull-down become available when configuring the Analysis Manager and for Starting and Stopping queue manager services. This is detailed in System Management. These items in the Queue pull-down menu are only enabled when the Administration tree tab is accessed.

Edit: Gives access to standard text Cut and Paste operations when applicable.

View: This pull-down menu mainly allows the user to update the view when jobs are being run. The Refresh (F5) option graphically updates the window when in monitoring mode. The program also automatically refreshes the screen based on the Update Speed. All Jobs or only the current User Jobs can be shown if desired.

Tools: The Options under this menu allow the user to change the default editor used when viewing result files or input files. The number of completed jobs viewable from the interface is also set here.

Windows: The main purpose of this pull-down menu is to hide or display the Status Bar and Output Window at the bottom of the window.

Help: Not currently implemented in this release.
Windows Icons
These icons appear on the main form.

Folder: The open folder icon is the same as the Connect To... option under the Queue pull-down menu, which allows you to connect to other queue manager daemons/services that may be running and accessible.

Save: The diskette icon is for saving user settings.

Printer: Allows printing when appropriate.

Paintbrush: Refreshes the window when in monitoring mode.
Invoking the Analysis Manager Manually
Argument: arg1 (start-up type)
Description: The program can be started in one of eight modes (enter the number only):
Argument: optional args (MSC Nastran): -coldstart coldstart_jobname
Description: The -coldstart parameter followed by the cold start MSC Nastran jobname indicates a restart job. Also see P3Mgr.

Argument: optional args: -runtype <0, 1 or 2>
If no arguments are provided, defaults are used (full interface (1), “.dat”, “unknown”, MSC Nastran (1)).
The arguments listed in the table above are very convenient when invoking the Analysis Manager from pre- and postprocessors such as Patran, which have access to the pertinent information that may be passed along in the arguments. It may, however, be more convenient for the user to define an alias such that the program always comes up in the same mode.
Here are some examples of invoking Patran’s Analysis Manager:
$P3_HOME/bin/p3analysis_mgr
or
$P3_HOME/p3manager_files/p3analysis_mgr 1 bdf myjob 1
or
$P3_HOME/p3manager_files/p3analysis_mgr 1 bdf myjob MSC.Nastran
This invokes Patran's Analysis Manager by specifying the entire path name to the executable, where $P3_HOME is a variable containing the Patran installation directory. The first argument specifies that the entire user interface is brought up. The input file is called myjob.bdf, and the last argument specifies that MSC Nastran is the analysis code of preference.
Files Created
Aside from the files generated by the analysis codes themselves, Patran’s Analysis Manager also
generates files, the contents of which are described in the following table.
File: Description

jobname.mon: This file contains the final monitoring or status information from a submitted job. It can be replotted using the Monitor | Completed Job selection from the main form.

jobname.tml: The Analysis Manager log file, which gives the status of the analysis job and the parameters that were used during execution.

jobname.submit: This file contains the messages that would normally appear on the screen if the job were submitted interactively. It is created when a silent (batch) submittal is performed; interactive submittals display all messages in a form on the screen.

jobname.stdout: This file contains any messages that would normally go to standard output (generally the screen) if the user had invoked the analysis code from the system prompt.

jobname.stderr: This file contains any messages from the analysis that are written to standard error. If no such messages are generated, this file does not appear.
Any or all of these files should be checked for error messages and codes if a job is not successful and it
does not appear that the analysis itself is at fault for abnormal termination.
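Such a check can be scripted; a minimal sketch follows, assuming the files sit in one directory. The error keywords and the function name are illustrative:

```python
from pathlib import Path

def scan_job_files(jobname, directory="."):
    """Return (filename, line) pairs mentioning errors in a job's output files."""
    hits = []
    for ext in (".mon", ".tml", ".submit", ".stdout", ".stderr"):
        path = Path(directory) / (jobname + ext)
        if not path.exists():  # e.g. .stderr may never be created
            continue
        for line in path.read_text(errors="replace").splitlines():
            if "error" in line.lower() or "fatal" in line.lower():
                hits.append((path.name, line.strip()))
    return hits
```

For example, `scan_job_files("myjob")` collects suspicious lines from all five files at once.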
Chapter 3: Submit
- Introduction
- Selecting Files
- Where to Run Jobs
- Windows Submittal
Introduction
The process of submitting a job requires the user to select the file and options desired. The job is
submitted to the system and ultimately executes MD Nastran, ABAQUS, MSC.Marc, or some other
application module. Patran’s Analysis Manager properly handles all necessary files and provides
monitoring capability to the user during and after job execution. See Monitor for more information on
monitoring jobs.
In Patran, jobs are submitted one of two ways: through the Analysis application form for the particular
Analysis Preference, or outside of Patran through Patran’s Analysis Manager user interface with the
Action (or tree tab in the Windows interface) set to Submit. Submitting through the Analysis form in
Patran makes the submittal process transparent to the user and is explained in Getting Started.
For more flexibility, the full user interface can be invoked from the system prompt, as explained in the previous chapter, or from within Patran by pressing the Analysis Manager button on the Analysis application form or by invoking it from the Tools pull-down menu. This gives access to more advanced and flexible features, such as submitting existing input files from different directories, changing groups or organizations (queue manager daemons/services), selecting different hosts or queues, and configuring analysis-specific items. The rest of this chapter explains these capabilities.
Below is the UNIX submittal form (see Windows Submittal for the Windows interface).
Selecting Files
The filename of the job that is currently opened will appear in a textbox of the form on the previous page.
If this is not the job to be submitted, press the Select File button and a file browser will appear.
Below is the UNIX file browser form (see Windows Submittal for the Windows interface).
All appropriate files in the selected directory are displayed in the file browser. Select the file to be run
from those listed in the file browser or change the directory path in the Filter databox and then press the
Filter button to re-display the files in the new directory indicated. An asterisk (*) serves as a wild card.
Select OK once the file is properly selected and displayed, or double-click on the selected file.
Note: The directory in the Filter databox indicates where the input file will be copied from upon
submission AND where the results files from the analysis will be copied to upon
completion. Any existing results files of the same names will be overwritten on completion
and you must have write privileges to the specified directory.
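The asterisk behaves like an ordinary shell wild card; as a sketch, the kind of matching the Filter performs can be reproduced with Python's fnmatch module (the function name is illustrative):

```python
import fnmatch

def filter_files(names, pattern):
    """Return the file names matching a Filter pattern such as '*.bdf'."""
    return [n for n in names if fnmatch.fnmatch(n, pattern)]
```

For example, `filter_files(["a.bdf", "b.dat"], "*.bdf")` keeps only the .bdf file.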
Where to Run Jobs
The submit function can also be invoked manually from the system prompt. See Invoking the Analysis
Manager Manually for details. It can be invoked in both an interactive and a batch mode.
Note: Often, the user will look into the Host/Queue listing window described in Host/Queue to see which host/queue is most appropriate (free or empty) before selecting from the list and submitting. When submitting to an LSF/NQS queue, the host is selected automatically; however, you can select a particular host with the Choose Specific Host button (not shown) if desired.
Windows Submittal
The interface on Windows platforms is quite different in appearance from that for UNIX, but the process is almost identical, and submitting through this interface is simple.
Once a file is selected, you can edit the file if necessary before submitting it. This is done by pressing the
Edit File button. By default the Notepad application is used as the editor. The default editor can be
changed under the Tools | Options menu pick as shown below.
Chapter 4: Configure
- Introduction
- Disk Space
- Memory
- Mail
- Time
- General
- Restart
- Miscellaneous
Introduction
By setting the Action to Configure on the main Patran Analysis Manager form, the user has control of a variety of options that affect job submittal. The user can customize the submitting environment by setting any of the parameters discussed in this chapter. These parameters can be saved so that all subsequent submittals use the new settings, or they can be set for a single submittal only. All of this is under the control of the user.
Disk Space
The Disk Space configuration is analysis code specific.
Note: Patran's Analysis Manager will only check for sufficient disk space if the numbers for DBALL, MASTER, and SCRATCH are provided. An error message will appear if not enough disk space is available. If these values are not specified, the job will be submitted and will run either to completion or until the disk is full and an error occurs.
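The kind of pre-submittal check being described can be sketched in a few lines. The function name and the use of megabyte units are illustrative assumptions, not the Analysis Manager's actual implementation:

```python
import shutil

def enough_disk_space(path, dball_mb, master_mb, scratch_mb):
    """Return True if `path` has room for the DBALL, MASTER, and SCRATCH files."""
    needed = (dball_mb + master_mb + scratch_mb) * 1024 * 1024  # MB -> bytes
    return shutil.disk_usage(path).free >= needed
```

If any of the three sizes is unknown, no such check can be made, which is why an unspecified job simply runs until it finishes or the disk fills.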
The Windows interface for MSC Nastran disk space is shown below.
The Windows interface for ABAQUS, MSC.Marc, or other user defined analysis disk space requirements
is shown below.
Memory
The Memory configuration is analysis code specific.
The Windows interface for MSC Nastran memory requirements is shown below:
ABAQUS Memory
After selecting the Memory option on the Object menu, the following Memory form appears.
The Windows interface for MSC.Marc or other general application memory requirements is shown
below:
Mail
The Mail configuration setting determines whether or not to have mail notification and, if so, where to
send the mail notices.
Note: In this version there is no mail notification. This feature has been disabled.
Time
Any job can be submitted to be run immediately, with a delay, or at a specific future time. The default
submittal is immediate. To change the submittal time, use the following Time form.
The Windows interface for setting job submit delay and maximum job time is specified directly on the
Submit | Job Control tab as shown below:
General
The General configuration form allows preferences to be set for a number of items as described below.
Nothing in this form is analysis specific.
Note: Items not described on this page are described on subsequent pages in this section.
The Windows interface for General settings is specified directly on the Submit | General tab as shown below:
Note: Unlike the UNIX interface, to save a default Host/Queue, you select the Host/Queue on the
Job Control tab and then save the settings under the Queue pull-down menu.
Project Directory
The project directory is a subdirectory below the Patran Analysis Manager install path where the
Analysis Manager's job-specific files are created during job execution.
Projects are a method of organizing one’s jobs and results. For instance, if a user had two different
bracket assembly designs and each assembly contained many similar if not identical parts, each assembly
file might be named assembly.dat. But to avoid interference, each file is executed out of a different
project directory.
If the first project is design1 and the second is design2, then one job is executed out of <file system(s) for selected host>/proj/design1 and the other out of <file system(s) for selected host>/proj/design2. Hence, the user could have both jobs running at the same time without any problems, even though they are labeled with the same file name. See Disk Configuration.
When the job is completely finished, all appropriate files are copied back to the originating host/directory
(the machine and directory where the job was actually submitted from).
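The project-directory layout described above can be pictured with a small path-building sketch. The file-system root used here is a hypothetical example:

```python
from pathlib import PurePosixPath

def project_job_path(file_system_root, project, jobname):
    """Build the per-project execution path that keeps same-named jobs apart."""
    return PurePosixPath(file_system_root) / "proj" / project / jobname

# Two jobs with the same file name run out of different project directories:
job_a = project_job_path("/scratch/am", "design1", "assembly.dat")
job_b = project_job_path("/scratch/am", "design2", "assembly.dat")
```

Because the project name is part of the path, the two assembly.dat jobs never collide.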
Separate User
The Separate User option allows job submittal to the selected system as a different user in case the current
user does not have an account on the selected system. This must be enabled and set up in advance by the
system administrator. In order for this to work properly, the separate user account specified in this
databox must exist on both the selected system that runs the job and the machine from which the job is
submitted. See Examples of Configuration Files for an explanation of how to set up separate user
submission.
Default Host/Queue
The Default Host/Queue, if saved, is the host/queue to which jobs are submitted directly from Patran
using the Apply button on the Analysis form. It is also the host/queue used for batch submittal from the
direct Analysis Manager command line, and the host/queue selected by default when the full Analysis
Manager interface is started. If this setting is not saved, the default host/queue is the first in the list.
Patran Database
You can specify the name of a Patran database so that a post-submit task, such as a script file, knows
which Patran database to use (for example, to automatically read results back in after a job has
completed).
Restart
The Restart configuration is analysis code specific and does not apply to General applications.
Within Patran, to perform a restart using the Analysis Manager, the job is submitted from the Analysis
application as normal; however, a restart job must be indicated. When invoking the Analysis Manager’s
main interface with a restart job from Patran, this information is passed to the Analysis Manager and the
restart jobname shows up in the Configure | Restart form. The restart job can be submitted directly from
the main form or from Patran. In either case, the restart job looks for the previous job to be restarted in
the local path and/or on the host machine. If this restart jobname is not specified, the databases must be
located on the host machine to perform a successful restart.
MSC.Marc Restarts
Restarts in MSC.Marc are quite similar to those in MSC Nastran.
ABAQUS Restarts
After selecting the Restart option on the menu, the following Restart form appears.
Miscellaneous
The Miscellaneous configuration is analysis code specific.
MSC.Marc Miscellaneous
After selecting the Miscellaneous option on the menu, the following form appears.
Note: When invoked from Patran, items requiring file locations, such as the User Subroutine, POST
file, and View Factor file, are usually passed directly into the Analysis Manager. Thus, in
this case, there is no need to reenter these items.
ABAQUS Miscellaneous
After selecting the Miscellaneous option on the menu, the following form appears.
General Miscellaneous
After selecting the Miscellaneous option on the menu, the following form appears.
Examples of some specific command lines used to invoke analysis codes are given here.
Example 1:
The first example involves the ANSYS 5 code. First the Analysis Preference must be set to ANSYS 5
from Patran’s Analysis Preference form and an input deck for ANSYS 5 must have been generated via
the Analysis application (this is done by setting the Action to Analyze, and the Method to Analysis
Deck). Then Patran’s Analysis Manager can be invoked from the Analysis main form. Note that a direct
submittal from Patran is not feasible in this and the subsequent example.
The jobfile (jobname.prp in this case) is automatically displayed as the input file and the Submit
button can be pressed. The jobfile is the only file that is copied over to the remote host with this general
analysis submittal capability.
In the host.cfg configuration file the path_name of the executable is defined. The rest of the
command line would then look like this:
-j $JOBNAME < $JOBFILE > $JOBNAME.log
If the executable path is defined as /ansys/bin/ansys.er4k50a, then the entire command
that is executed is:
/ansys/bin/ansys.er4k50a -j $JOBNAME < $JOBFILE > $JOBNAME.log
Here the executable is invoked with a parameter (-j) specifying the jobname. The input file
($JOBFILE) is redirected using the UNIX redirect symbol as the standard input and the standard output
is redirected into a file called $JOBNAME.log. The variables beginning with the $ sign are passed by
Patran’s Analysis Manager. All resulting output files are copied back to the invoking host and directory
on completion.
Example 2:
This is a more complicated example where an analysis code needs more than one input file. The general
analysis capability in Patran’s Analysis Manager only copies one input file over to the remote host for
execution. If more than one file needs to be copied over then a script must be developed for this purpose.
This example shows how Patran FEA can be submitted via a script that does the proper copying of files
to the remote host.
The Analysis Preference in Patran is set to Patran FEA and, in addition to setting the Preference, the
input file suffix is specified as .job. Patran FEA can use up to three input files: jobname.job,
jobname.ntl, and an auxiliary input file. The jobname.job file is automatically copied over to the
remote host. The auxiliary input file can be called anything and is specified in the jobname.job file.
A shell script called FeaExecute is created and placed on all hosts that allow execution of Patran FEA.
This FeaExecute script does the following:
1. Parses the jobname.job file to find the name of the auxiliary input file if it is specified.
2. Copies the auxiliary input file and the jobname.ntl file to the remote host.
3. Executes the FeaControl script, which controls actual execution of the Patran FEA job. This is
a standard script which is delivered with the Patran FEA installation.
In the Patran Analysis Manager configuration file, the FeaExecute script and its path are specified.
The input parameters for this script are:
-j $JOBNAME -h $P3AMHOST -d $P3AMDIR
which specify the jobname, the host from which the job was submitted, and the directory on that host
from which the job was submitted. With this information the job can be run successfully. The full
command that is executed on the remote host is (assuming FeaExecute is installed in /fea/bin):
/fea/bin/FeaExecute -j $JOBNAME -h $P3AMHOST -d $P3AMDIR
The FeaExecute script contents are shown for completeness:
#! /bin/sh
# Script to submit Patran FEA to a remote host via the Analysis Manager

# Define a function for displaying valid params for this script
abort_usage() {
cat 2>&1 <</
Usage: $Cmd -j Jobname -h Remote_Host -d Remote_Dir
/
exit 1
}

get_Jobname()
{
echo $1 | sed -e 's;^.*/;;' -e 's;\..*$;;'
}

# Determine the command name of this script
Cmd=`echo $0 | sed 's;^.*/;;'`

if [ $# -ne 6 ] ; then
    abort_usage
fi

while [ $# -ne 0 ] ; do
    case "$1" in
        -j) Jobname=$2 ; shift 2 ;;
        -h) remhost=$2 ; shift 2 ;;
        -d) remdir=$2 ; shift 2 ;;
        *)  abort_usage ;;
    esac
done

# Runtime determination of machine/system type
OsName=`uname -a | awk '{print $1}'`
case "$OsName" in
    SunOS)
        Rsh="rsh"
        RshN1='-n'
        RshN2=''
        ;;
    HP-UX)
        Rsh=remsh
        RshN1=''
        RshN2=''
        ;;
    AIX)
        Rsh=/usr/ucb/remsh
        RshN1=''
        RshN2='-n'
        ;;
    ULTRIX)
        Rsh=/usr/ucb/rsh
        RshN1=''
        RshN2='-n'
        ;;
    IRIX)
        Rsh=rsh
        RshN1=''
        RshN2='-n'
        ;;
    *)
        Rsh=rsh
        RshN1=''
        RshN2=''
        ;;
esac
Chapter 5: Monitor

    Introduction
    Running Job
    Completed Job
    Host/Queue

Introduction
By setting the Action to Monitor on the main Patran Analysis Manager form, the user can monitor not
only active jobs but also Host or Queue activity. In addition, graphs of completed jobs can be recalled
at any time. Each of these functions is explained in this chapter.
Each of these functions for monitoring jobs or hosts/queues is also accessible directly from the Analysis
application form within Patran. The only difference is that the full user interface of Patran Analysis
Manager is not accessed first; instead, the monitoring forms are displayed directly as explained in the
next few pages.
Note: The UNIX interface is shown above. In subsequent sections both the UNIX and the Windows
interface are shown. Monitoring in the Windows interface is done from the Monitor tree tabs.
Running Job
With the Action set to monitor a Running Job, pertinent information about a specific job that is currently
running or queued to run can be obtained. Jobs can be monitored from any host in the Analysis Manager's
configuration, not just from where they were submitted.
Note: This form is not displayed when a job is monitored directly from Patran. Instead, only the
monitoring form is displayed as shown on the next page since all the pertinent information to
monitor a job is passed in from Patran. The Windows interface is displayed further down also.
Main Index
70 Patran Analysis Manager User’s Guide
Running Job
A graph of the selected running job appears, showing how long the job has been queued and how long
it has been running.
The following table describes all the widgets that appear in this job graph.
Job Status
    This widget gives the total elapsed time in blue and the actual CPU time in red. A check mark
    appears when the job is completed successfully; otherwise, an “X” appears. The clear portion of
    the blue bar indicates the amount of time the job was queued before execution began. Elapsed
    and CPU time are reported in minutes.

Percent CPU Usage
    This widget gives the percentage of CPU being used by the analysis code at any given time. The
    maximum percentage of CPU during job execution is indicated as a grey shade which remains at
    the highest level of % CPU usage.

Total Disk Usage
    This widget gives the total amount of disk space used by the job during execution, in megabytes.

Percent Disk Usage
    This widget gives the percentage of the total disk space that this job occupies at any given time
    for all file systems. If you click on this widget with the mouse, all file systems will be shown.
    The maximum percentage of disk space used during job execution is indicated as a grey shade
    which remains at the highest level.

Job Information
    Job # - the sequential number of the job
Controls
    Remove beginning queue time - takes off the queued portion of the graphics bar, i.e., the portion
    that is not blue before the job begins.
    Suspend/Resume Job - when toggled on, the job will be indefinitely suspended. A banner across
    the CPU dial displays the word SUSPENDED while the job is suspended. Toggle the switch off
    to resume the job; the banner will be removed.
The bottom left panel lists information about the job, such as date and time of event, task name, host
name, and status. Any error and status messages will appear here. An example listing is:
Fri Jan 4 13:31:31 1994 <TASK COMPLETED> Task Name: shock
The running job function can also be invoked manually from the system prompt. See Invoking the
Analysis Manager Manually for details.
Windows Interface
For Running Jobs, when a job is submitted from the Windows interface, the user is queried as to
whether he/she wants the interface to switch automatically to the monitoring mode.
When a job is running the Monitor tree shows running jobs and jobs that have been queued.
When a Running Job in the tree structure is selected, three tabs become available that give the specific
status of the job, allow viewing of created output files, and display graphs of memory, CPU, and
disk usage.
Completed Job
This is an Analysis Manager utility that allows the user to graph a particular completed job run by the
Analysis Manager.
Note: This form is not displayed if this action is selected directly from the Analysis application
form in Patran. Instead, only the monitoring form is displayed as shown on the next page.
The Windows interface is also shown.
The .mon file is created when a job is first submitted to Patran’s Analysis Manager. Information on all
the job tasks is written to the .mon file. Time submitted, job name, job number, time actually run, time
finished and completion status are all recorded in the file, so that this Analysis Manager function can read
the file and have enough information to graph the job’s progress completely.
The explanation of the graphs on this form is identical to that of a Running Job except that the Update
slider bar does not show up since it is not applicable to a completed job.
Windows Interface
For Completed Jobs, the Windows interface displays them under the Completed Jobs tab in the
Monitor tree.
Host/Queue
Information about all hosts or queues used by Patran’s Analysis Manager and jobs submitted through the
Analysis Manager can be reviewed using the Monitor Host/Queue selection. Options available include
Job Listing, Host Status, Queue Manager Log and a Full Listing. Press the Apply
button to invoke these functions. The user can vary how often the information is updated, using the
slider control.
The Host/Queue monitoring function can also be invoked manually from the system prompt. See
Invoking the Analysis Manager Manually for details.
Job Listing
The initial application form of Monitor's Host/Queue appears as follows:
At the top of the main form for Monitor Queue is a slider labeled Update Time (Min.). Drag the slider
to the left to shorten the interval between information updates, or to the right to lengthen it. The
default interval time is 5 minutes. In the Windows interface the refresh setting is set under the
View | Update Speeds menu pick.
The update interval may be changed at any time during the use of any Monitor Queue options.
All jobs currently running in some capacity are listed. Information about each job includes: Job
Number, Job Name, Owner, and Time. The job number is a unique, sequential number that the
Analysis Manager generates for each job submitted to it. Pressing the Close button closes the
monitor form.
Host Status
When the Host Status toggle is highlighted the form appears as follows:
The status is reported on all hosts or queues used by the Analysis Manager. Information about each
host/queue includes: host/queue name (Host Name), number of jobs running (# Running), number of jobs
queued (# Queued), maximum allowed to run concurrently (Max Running), and Host Type (i.e., MSC
Nastran).
If NQS or LSF is being used, queue information is provided instead of host information. See Submit for
more information on default settings.
The update interval may be changed at any time during the use of any Monitor Queue options. The default
interval time is 5 minutes. In Windows, use the View | Update Speeds menu option.
To exit from the Monitor Queue, select the Close button on the bottom of the main form on the right.
Log files are unaffected when the form is closed.
Queue Manager Log
The most recent jobs submitted are listed, regardless of where or when they were run. Information about
each job includes: date and time of event, event description, job number, job or task name or host name,
task type or PID (process id of task), and owner. Most recent jobs are listed in the text list box from the
time the Analysis Manager’s Queue Manager daemon was started. See System Management for a
description of the Queue Manager daemon.
The update interval may be changed at any time during the use of any Monitor Queue options. The default
interval time is 5 minutes. In Windows, use the View | Update Speeds menu option.
To exit from the Monitor Queue, select the Close button on the bottom of the main form on the right.
Log files are unaffected when the form is closed.
Full Listing
When Full Listing is selected, the form appears as follows:
The Full Listing information shows all job tasks submitted. Information about each host/queue includes:
status (blue = running; red = queued), job number, task name, task type, date and time submitted, and
owner.
If an additional scheduler (LSF/Utopia) is present and being used, the queue name is shown together
with a pointer to the actual queue name.
The update interval may be changed at any time during the use of any Monitor Queue options. The default
interval time is 5 minutes.
To exit from the Monitor Queue, select the Close button on the bottom of the main form on the right.
CPU Loads
When CPU Loads is selected, the form appears as follows:
The load on the workstations and computers can be determined by inspecting this form which
periodically updates itself. The list of hosts or queues appears with the percent CPU usage, total amount
of free disk space, and available memory at that particular snapshot in time. The user may sort the hosts
by CPU UTILIZATION, FREE DISK SPACE, or AVAILABLE MEMORY, so that the host or
queue with the best situation appears at the top. Also, indicated in blue are the best hosts or queues for
each category of CPU, disk space and memory.
Chapter 6: Abort

    Selecting a Job
    Aborting a Job

Selecting a Job
This capability allows the user to terminate a running job originally submitted through Patran’s Analysis
Manager. When aborting a job, the Analysis Manager cleans up all appropriate files.
The abort function can also be invoked manually from the system prompt. See Invoking the Analysis
Manager Manually for details. A currently running job must be available.
Aborting a Job
You can only abort jobs which you own (i.e., originally submitted by you).
When a job is aborted, the analysis files are removed from where they were copied to, and all scratch and
database files are removed, unless the job is a restart from a previous run, in which case the scratch files
are removed, but the original database files from previous runs are left unaffected.
Note: When a job is aborted from within Patran, no user interface appears. The job is simply aborted
after the confirmation.
UNIX Interface
Press the Apply button on the main form with the Action set to Abort as shown on the previous page.
You are asked to confirm with,
Are you sure you wish to abort job # <jobname> ?
Press the OK button to confirm.
The Cancel button will take no action and close the Abort form.
Windows Interface
There are three ways to abort a job from the Windows interface.
1. When the job is initially submitted, a modal window appears asking whether you want to monitor
or abort the job or simply do nothing and let the job run.
2. Once the job is running, use the Job Control tab in the Monitor tree structure; there is an
Abort button on this form to terminate the job.
3. From the Monitor | Running Jobs tree structure you can right mouse click on a running job. A
pulldown menu appears from which you can select Abort.
Chapter 7: System Management

    Directory Structure
    Analysis Manager Programs
    Organization Environment Variables
    Installation
    X Resource Settings
    Configuration Management Interface
    Examples of Configuration Files
    Starting the Queue/Remote Managers

Directory Structure
The Analysis Manager has a set directory structure, configurable environment variables and other tunable
parameters which are discussed in this chapter.
The Analysis Manager directory structure is displayed below. The main installation directory is shown
as an environment variable, $P3_HOME = <installation_directory>. Typically this would be
/msc/patran200x or something similar.
where:
<org> (optional) is an additional organizational group and shares the same directory tree as default
yet will have its own unique set of configuration files. See Organization Environment Variables.
<arch> is one of:
There may be more than one <arch> directory in a filesystem. Architecture types that are not applicable
to your installation may be deleted to reduce disk space usage; however, all machine architecture types
that will be accessed by the Analysis Manager must be kept. Each one of the executables under the bin
directory is described in Analysis Manager Programs.
All configuration files are explained in detail in Examples of Configuration Files. These include org.cfg,
host.cfg, disk.cfg, lsf.cfg, and nqs.cfg.
Organization groups and their uses are described in Organization Environment Variables.
The QueMgr.log file is created when a Queue Manager daemon is started; it does not exist until this
time and, therefore, will not appear in the above directory structure until after the initial installation.
Use of this file is described in Starting the Queue/Remote Managers. The file QueMgr.rdb is also
created when a Queue Manager daemon is started and is a database containing job-specific statistics of
every job ever submitted through the Queue Manager for that particular set of configuration files or
<org>. The contents of this file can be viewed on Unix platforms using the Job_Viewer executable.
Items in the bin and exe directories are scripts that enable easier access to the main programs. These
scripts make sure that the proper environment variables are set before invoking the particular program
that resides in $P3_HOME/p3manager_files/bin/<arch>.
Note: The directories (conf, log, proj) for each set of configuration files (organizational
structure) must have read, write, and execute (777) permission for all users. Missing
permissions are a common cause of task manager errors.
Analysis Manager Programs
User Interface
The first part of the Analysis Manager is the user interface from which a user submits and monitors the
progress of jobs (P3Mgr is the executable name). This program can be executed in many different ways
and from many different locations (i.e., either locally or remotely over a network). An administration tool
also is available to easily set up and edit configuration files, and test for proper installation. (AdmMgr is
the executable name on Unix. On Windows there is no separate executable; it is part of P3Mgr.) A small
editor program (p3edit) is also part of the user interface portion and is invoked directly from the main
user interface when editing and viewing files.
Two shell scripts are actually used to invoke the Analysis Manager and the administration tool. These are
p3analysis_mgr and p3am_admin. When properly installed, these scripts automatically determine the
installation path directory structure and which machine architecture executable to use.
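The path and architecture detection these wrapper scripts perform can be sketched roughly as follows. This is an illustrative assumption, not the shipped script, and the architecture directory names used here are hypothetical:

```shell
# Hypothetical sketch of wrapper-script logic: map the machine type to an
# <arch> binary directory and build the path to the real executable.
P3_HOME=${P3_HOME:-/msc/patran200x}   # assumed default install path

arch_dir() {
    # Map a uname system name to a hypothetical <arch> directory name.
    case "$1" in
        SunOS)  echo SUNS ;;
        HP-UX)  echo HPUX ;;
        AIX)    echo AIX ;;
        *)      echo UNKNOWN ;;
    esac
}

ARCH=$(arch_dir "$(uname -s)")
# The wrapper would then exec the matching binary; here we just print it.
echo "$P3_HOME/p3manager_files/bin/$ARCH/P3Mgr"
```

The point of the wrapper is that users invoke one script regardless of platform; the script alone decides which architecture-specific binary directory to use.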
Daemons
The second part of the Analysis Manager is a series of daemons (or services on Windows) which actually
execute and control jobs. These daemons are responsible for queuing jobs, finding a host to run jobs,
moving data files to selected hosts, executing the selected analysis code, etc. Each one is described here:
Queue Manager
This is a daemon (or service on Windows) which must run all the time (QueMgr executable name). The
machine on which the Queue Manager runs is known as the master host. Generally it runs as root (or
administrator) and is responsible for scheduling jobs. The Queue Manager always has a complete account
of all jobs running and/or queued. When a request to run a job is received, the Queue Manager checks to
see what hosts are eligible to run the selected code and how many jobs each host is currently running. If
there is a host which is eligible, the Queue Manager will start up the task on that host. If the Analysis
Manager is installed along with a third party scheduling program (i.e., LSF or NQS) the Queue Manager
is responsible for communicating with the scheduling software to control job execution. In summary, the
Queue Manager is the Scheduler of the Analysis Manager environment. (Also, see Starting the
Queue/Remote Managers, Starting the Queue Manager.)
Remote Manager
There is only one Queue Manager, but there are many Remote Managers. A RmtMgr process runs on
each and every analysis machine. These are machines that are configured to run an analysis such as MSC
Nastran or MSC.Marc. A RmtMgr can also be run on each submit machine (recommended - see Job
Manager below). These are machines from which the analysis was submitted such as where Patran runs.
If the submit and analysis machines are the same host, then only one RmtMgr needs to be running. The
QueMgr and RmtMgr processes start up automatically at boot time and run continuously, but use very
little memory and CPU resources, so users will not notice performance effects. Also, these processes can
run as root (Administrator on Windows) or as any user, if these privileges are not available.
Each RmtMgr binds to a known/chosen port number that is the same for every RmtMgr machine. Each
RmtMgr process collects machine statistics on free CPU cycles, free memory and free disk space and
returns this data to the QueMgr at frequent intervals. The RmtMgr is actually used to perform a
command and return that command’s output on the host where it is running. This is essentially a remote
shell (rsh) host command, as on a Unix machine.
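As a local, illustrative stand-in for what RmtMgr does (run a command and hand its output back to the requester), consider the sketch below; a real remote equivalent would be something like `rsh somehost uname -a`. The function name and strings are assumptions for illustration only:

```shell
# Conceptual stand-in for RmtMgr's remote command execution: run a command
# string and capture everything it prints, so the output can be returned.
run_and_return() {
    sh -c "$1" 2>&1    # execute the command string, capturing all output
}

result=$(run_and_return 'echo hello from analysis host')
echo "$result"
```

RmtMgr does the same thing across the network: the QueMgr (or a JobMgr) sends it a command, and RmtMgr executes it locally and ships the output back.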
Note: It is best to run the RmtMgr service on Windows as someone other than SYSTEM (the
default if you do not do anything different). After installing the RmtMgr, use the control
panel to access the services, find the RmtMgr, and change its startup to use a
different account: something generic if it exists, or an Analysis Manager admin account. If
the RmtMgr is running as a user and not SYSTEM, then the NasMgr/MarMgr/AbaMgr/
GenMgr will run as this user and have access to Windows networking, shared drives, and
so on. If it is run as SYSTEM, then it is limited to local Windows drives, shares, etc. The
QueMgr does not do much in the way of files, so running it as SYSTEM is OK.
Job Manager
The Job Manager (JobMgr executable name) runs for the life of a job. When a user submits a job using
the Analysis Manager, the user interface tells the Queue Manager about the job and then starts a Job
Manager daemon. The Job Manager daemon receives and saves job information from the Analysis
Manager's user interface. The main purpose of the Job Manager is to record job status for monitoring and
file transfer.
During the execution of jobs, users utilizing the Analysis Manager's user interface program can
seamlessly connect to the Job Manager of their job and see what the status of the job is. In summary, the
Job Manager controls the execution of a single job and is always aware of the current status of that job.
The Job Manager runs on the submit host machine.
Note: On Windows, if a RmtMgr is running on the local machine, the JobMgr will be started
through it as usual; if a RmtMgr is NOT running, a JobMgr will be started
anyway, and the submit will still work fine. The only restriction in this latter case is that, if the
user logs off, a popup dialog appears asking if the user really wants to log off, and the job will
be terminated if the user does. This will not happen if the RmtMgr is running as a service.
MSC Nastran Manager
During execution, the NasMgr relays pertinent information (disk usage, cpu, etc.) to the Job Manager
(JobMgr), which then updates the graphical information displayed to the user. The NasMgr is also
responsible for cleaning up files and putting results back to desired locations, as well as reporting its
status to the Job Manager. This daemon runs on the analysis host machine and only for the life of the
analysis.
MSC.Marc Manager
The MSC.Marc Manager (MarMgr executable name) runs only for the life of a job. The MarMgr is
identical in function to the MSC Nastran Manager (NasMgr) except it is for execution of MSC.Marc
analyses.
ABAQUS Manager
The ABAQUS Manager (AbaMgr executable name) runs only for the life of a job. The AbaMgr is
identical in function to the MSC Nastran Manager (NasMgr) except it is for execution of ABAQUS
analyses.
General Manager
The General Manager (GenMgr executable name) runs only for the life of a job. The GenMgr is
identical in function to the MSC Nastran Manager (NasMgr) except it is for execution of general analysis
applications.
Editor
The editor (p3edit executable name) runs when requested from P3Mgr when viewing results files or
editing the input deck.
Text Manager
The Text Manager (TxtMgr executable name) is a text based interface to the Analysis Manager to
illustrate the Analysis Manager API. See Application Procedural Interface (API).
Job Viewer
The job viewer (Job_Viewer executable name) is a simple program available on UNIX platforms for
opening and viewing job statistics for the Analysis Manager’s database file. This file is generally located
in $P3_HOME/p3manager_files/default/log/QueMgr.rdb. You must run Job_Viewer
and then open the file manually.
JobMgr
Started automatically by P3Mgr/TxtMgr (or RmtMgr); no command line arguments.
RmtMgr
This is a daemon on Unix or a service on Windows, started automatically at boot time. Possible
command line arguments (also see Organization Environment Variables):
Argument            Description
-version            Prints the Analysis Manager version and exits.
-ultima             Switch to change P3_HOME to AM_HOME and p3manager_files to
                    analysis_manager, so no p3 is required in the environment.
                    (Generally not used!)
-port <####>        Port number to use. MUST be the SAME port number for ALL RmtMgrs on the
                    whole network (per QueMgr). Default is 1800 if not set.
-path <path>        Use to specify the base path for finding the Analysis Manager executables:
                    $P3_HOME/p3manager_files/bin/{arch}/*Mgr.
                    <path> is the base path $P3_HOME. The default is to use the same path the
                    program was started with, but in the case of "./RmtMgr ...." this will
                    not work. If a full path is used to start RmtMgr (as in a startup script),
                    this argument is not needed.
-orgpath <path>     Use to specify the base path for finding the Analysis Manager org tree
                    (configuration files and directories):
                    $P3_HOME/p3manager_files/{org}/{conf,log,proj}.
                    <path> is the base path $P3_HOME. Use only if the org tree path differs
                    from the -path argument.
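As an illustrative sketch only (not shipped configuration), a boot-time startup entry for RmtMgr using the arguments above might look like the following; the port value and the use of $P3_HOME are assumptions, and <arch> stands for your platform's binary directory:

```
# Illustrative boot-script excerpt; path and port are assumptions.
$P3_HOME/p3manager_files/bin/<arch>/RmtMgr -port 1800 -path $P3_HOME
```

Giving the full path and -path explicitly avoids the "./RmtMgr" caveat described in the table above.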
QueMgr (AdmMgr)
This is a daemon on Unix or a service on Windows, started automatically at boot time. Possible
command line arguments (also see Organization Environment Variables):
Note: On Unix, the AdmMgr (p3am_admin) accepts the same arguments as QueMgr.
Main Index
96 Patran Analysis Manager User’s Guide
Analysis Manager Programs
Argument Description
-version Prints the Analysis Manager version and exits.
-ultima Switch that changes P3_HOME to AM_HOME and p3manager_files to
analysis_manager, so that no "p3" is required in the environment.
(Generally not used!)
-port <####> Port number to use. The default is 1900 if not set. If using an org.cfg file,
use this argument with the -org option below to force a port number and org
name.
-path <path> Specifies the base path for finding the Analysis Manager executables:
$P3_HOME/p3manager_files/bin/{arch}/*Mgr.
<path> is the base path $P3_HOME. The default is to use the same path the
program was started with, but in the case of "./QueMgr ...." this will not
work. If a full path is used to start QueMgr (as in a startup script), this
argument is not needed.
-orgpath <path> Specifies the base path for finding the Analysis Manager org tree
(configuration files and directories):
$P3_HOME/p3manager_files/{org}/{conf,log,proj}.
<path> is the base path $P3_HOME. Use this to specify the base path of the org
tree only if it differs from the -path argument.
-org <org> Org name to use. This is the name of the directory containing the configuration
files for this Queue Manager daemon (i.e.,
$P3_HOME/p3manager_files/{org}/{conf,log,proj}). The
default is default. If using an org.cfg file, use this with the -port
option above to force a port number and org name.
-delayint <###> Default is 20 seconds. This is rarely used. Every delay_interval seconds
the QueMgr asks another host in its list of all job hosts for a status. If it has
not heard back from a host in (delay_interval * number_of_hosts
* 3) + 30 seconds, then approximately 3 round trips through the host list have
passed without a response, so the QueMgr marks the host as DOWN and will not
submit new jobs to it until it starts responding again. This flag modifies that
interval to account for network problems, etc., which may cause the Analysis
Manager to think some hosts are down when they are not really down.
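The DOWN threshold described above can be computed directly; a sketch with hypothetical values (a 20-second interval and five job hosts):

```shell
# (delay_interval * number_of_hosts * 3) + 30 seconds without a
# response marks a host DOWN; the values below are illustrative only.
delay_interval=20
number_of_hosts=5
threshold=$(( delay_interval * number_of_hosts * 3 + 30 ))
echo "$threshold"
```

With these values, a silent host is marked DOWN after 330 seconds.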
P3Mgr
This program is started by the user. If 4 arguments are present, then it is assumed that:
Argument Description
arg 1 Startup type. It is one of the following:
1 - MSC Nastran
2 - ABAQUS
3 - MSC.Marc
20 through 29 - General (user defined applications)
arg 5 X position of upper left corner of Patran right hand side interface in inches.
arg 6 Y position of upper left corner of Patran right hand side interface in inches.
arg 7 Width of Patran right hand side interface in inches.
arg 8 Height of Patran right hand side interface in inches.
The following arguments can be used alone or after the first 4 arguments above:
Argument Description
-rcf <file> rcf file to use for all GUI settings (same format as -env/-envall output)
- see Analysis Manager Environment File.
-auth <file> License file to use. Environment variable MSC_LICENSE_FILE is the
default. This can also point to a port as well as a physical license file (with
path), e.g., -auth 1700@banff
-env Prints the rcf / GUI settings for all applications.
-envall Same as -env but even more information is printed.
-extra <args> Adds extra arguments to the end of a particular command line.
-runtype <#> ABAQUS ONLY - sets the run type to:
0 - full analysis
1 - restart
2 - data check
-restart <file> ABAQUS ONLY - coldstart filename for restart.
-coldstart <file> MSC Nastran ONLY - coldstart filename for restart. MSC.Marc uses the
rcfile - see Analysis Manager Environment File.
TxtMgr
This program is started by the user to manage jobs through a simple text submittal program. Possible
arguments are:
Argument Description
-version Same as RmtMgr.
-qmgrhost <hostname> Hostname the QueMgr is running on. Default is the local machine if no
org.cfg is found.
-qmgrport <####> Port QueMgr is running on. Default is 1900 if no org.cfg is found.
-rmgrport <####> Port for ALL RmtMgrs for this org (QueMgr). Not needed unless the
Admin test feature is used and the default RmtMgr port is not being used.
-org <org> org to use. Default is default.
-orgpath <path> Same as RmtMgr. Needed for writing configuration files and/or Admin
tests if it is not the default path (default is $P3_HOME).
-auth <file> License file to use. Environment variable MSC_LICENSE_FILE is the
default.
-app <name> Application name to use. Default is MSC Nastran (or first valid app).
-rcf <file> rcf file to use for all GUI settings (same format as -env/-envall
output) - see Analysis Manager Environment File.
-p3home <path> Switch to use if $P3_HOME environment variable is not set.
-amhome <path> Switch to use if $AM_HOME environment variable is not set.
-choice <#> Startup option if not full menu:
1) submit a job
2) abort a job
3) monitor a job
9) admin test
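A non-interactive submit from a client machine might look like the following sketch; the hostname and the decision to invoke TxtMgr from $P3_HOME/bin are illustrative assumptions, not values from this guide:

```shell
# Hypothetical: point TxtMgr at the QueMgr on host "venus" and go
# straight to the job-submittal menu (-choice 1), using the default
# org and QueMgr port.
$P3_HOME/bin/TxtMgr -qmgrhost venus -qmgrport 1900 \
    -org default -choice 1
```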
#
# rc file ---
#
cfg.total_h_list[0].host_name = tavarua
cfg.total_h_list[0].arch = HP700
cfg.total_h_list[0].maxtasks = 2
cfg.total_h_list[0].num_apps = 3
cfg.total_h_list[0].sub_app[MSC.Nastran].pseudohost_name = tavarua_nast2001
cfg.total_h_list[0].sub_app[MSC.Nastran].exepath = /solvers/nast2001/bin/nast2001
cfg.total_h_list[0].sub_app[MSC.Nastran].rcpath = /solvers/nast2001/conf/nast2001rc
cfg.total_h_list[0].sub_app[ABAQUS].pseudohost_name = tavarua_aba62
cfg.total_h_list[0].sub_app[ABAQUS].exepath = /solvers/hks/Commands/abaqus
cfg.total_h_list[0].sub_app[ABAQUS].rcpath = /solvers/hks/6.2-1/site/abaqus_v6.env
cfg.total_h_list[0].sub_app[MSC.Marc].pseudohost_name = tavarua_marc2001
cfg.total_h_list[0].sub_app[MSC.Marc].exepath = /solvers/marc2001/tools/run_marc
cfg.total_h_list[0].sub_app[MSC.Marc].rcpath = /solvers/marc2001/tools/include
cfg.total_h_list[1].host_name = salani
cfg.total_h_list[1].arch = WINNT
cfg.total_h_list[1].maxtasks = 2
cfg.total_h_list[1].num_apps = 3
cfg.total_h_list[1].sub_app[MSC.Nastran].pseudohost_name = salani_nast2001
cfg.total_h_list[1].sub_app[MSC.Nastran].exepath = d:\msc\bin\nast2001.exe
cfg.total_h_list[1].sub_app[MSC.Nastran].rcpath = d:\msc\conf\nast2001.rcf
cfg.total_h_list[1].sub_app[ABAQUS].pseudohost_name = salani_aba62
cfg.total_h_list[1].sub_app[ABAQUS].exepath = d:\hks\Commands\abq621.bat
cfg.total_h_list[1].sub_app[ABAQUS].rcpath = d:\hks\6.2-1\site\abaqus_v6.env
cfg.total_h_list[1].sub_app[MSC.Marc].pseudohost_name = salani_marc2001
cfg.total_h_list[1].sub_app[MSC.Marc].exepath = d:\msc\marc2001\tools\run_marc.bat
cfg.total_h_list[1].sub_app[MSC.Marc].rcpath = d:\msc\marc2001\tools\include.bat
#
unv_config.auto_mon_flag = 1
unv_config.time_type = 0
unv_config.delay_hour = 0
unv_config.delay_min = 0
unv_config.specific_hour = 0
unv_config.specific_min = 0
unv_config.specific_day = 0
unv_config.mail_on_off = 0
unv_config.mon_file_flag = 1
unv_config.copy_link_flag = 0
unv_config.job_max_time = 0
unv_config.project_name = user1
unv_config.orig_pre_prog =
unv_config.orig_pos_prog =
unv_config.exec_pre_prog =
unv_config.exec_pos_prog =
unv_config.separate_user = user1
unv_config.p3db_file =
unv_config.email_addr = empty
#
nas_config.disk_master = 0
nas_config.disk_dball = 0
nas_config.disk_scratch = 0
nas_config.disk_units = 2
nas_config.scr_run_flag = 1
nas_config.save_db_flag = 0
nas_config.copy_db_flag = 0
nas_config.mem_req = 0
nas_config.mem_units = 0
nas_config.smem_units = 0
nas_config.extra_arg =
nas_config.num_hosts = 2
nas_host[tavarua.scm.na.mscsoftware.com].mem = 0
nas_host[tavarua.scm.na.mscsoftware.com].smem = 0
nas_host[tavarua.scm.na.mscsoftware.com].num_cpus = 0
nas_host[lalati.scm.na.mscsoftware.com].mem = 0
nas_host[lalati.scm.na.mscsoftware.com].smem = 0
nas_host[lalati.scm.na.mscsoftware.com].num_cpus = 0
nas_config.default_host = tavarua_nast2001
nas_config.default_queue = N/A
nas_submit.restart_type = 0
nas_submit.restart = 0
nas_submit.modfms = 1
nas_submit.nas_input_deck =
nas_submit.cold_jobname =
#
aba_config.copy_res_file = 1
aba_config.save_res_file = 0
aba_config.mem_req = 0
aba_config.mem_units = 0
aba_config.disk_units = 2
aba_config.space_req = 0
aba_config.append_fil = 0
aba_config.user_sub =
aba_config.use_standard = 1
aba_config.extra_arg =
aba_config.num_hosts = 2
aba_host[tavarua.scm.na.mscsoftware.com].num_cpus = 1
aba_host[tavarua.scm.na.mscsoftware.com].pre_buf = 0
aba_host[tavarua.scm.na.mscsoftware.com].pre_mem = 0
aba_host[tavarua.scm.na.mscsoftware.com].main_buf = 0
aba_host[tavarua.scm.na.mscsoftware.com].main_mem = 0
aba_host[lalati.scm.na.mscsoftware.com].num_cpus = 1
aba_host[lalati.scm.na.mscsoftware.com].pre_buf = 0
aba_host[lalati.scm.na.mscsoftware.com].pre_mem = 0
aba_host[lalati.scm.na.mscsoftware.com].main_buf = 0
aba_host[lalati.scm.na.mscsoftware.com].main_mem = 0
aba_config.default_host = tavarua_aba62
aba_config.default_queue = N/A
aba_submit.restart = 0
aba_submit.aba_input_deck =
aba_submit.restart_file =
#
mar_config.disk_units = 2
mar_config.space_req = 0
mar_config.mem_req = 0
mar_config.mem_units = 2
mar_config.translate_input = 1
mar_config.num_hosts = 2
mar_host[tavarua.scm.na.mscsoftware.com].num_cpus = 1
mar_host[lalati.scm.na.mscsoftware.com].num_cpus = 1
mar_config.default_host = tavarua_marc2001
mar_config.default_queue = N/A
mar_config.cmd_line =
mar_config.mon_file = $JOBNAME.sts
mar_submit.save = 0
mar_submit.nprocd = 0
mar_submit.datfile_name =
mar_submit.restart_name =
mar_submit.post_name =
mar_submit.program_name =
mar_submit.user_subroutine_name =
mar_submit.viewfactor =
mar_submit.hostfile =
mar_submit.iamval =
Organization Environment Variables
These variables can be set in the following manner with the C shell (if necessary):
setenv P3_HOME /msc/patran200x
setenv P3_PLATFORM HP700
or for Bourne shell or Korn shell users:
P3_HOME=/patran3
P3_PLATFORM=DECA
export P3_HOME
export P3_PLATFORM
or on Windows:
set P3_HOME=c:/msc/patran200x
set P3_PLATFORM=WINNT
In most instances, users will never have to concern themselves with these environment variables, but they
are included here for completeness. In a typical Patran installation, a file called .wrapper exists in the
$P3_HOME/bin directory which automatically determines these environment variables. The invoking
scripts, p3analysis_mgr and p3am_admin, exist as pointers to .wrapper in this bin directory; when
executed, .wrapper determines the variable values and then executes the actual scripts. For this to work
conveniently, the user should have $P3_HOME/bin in his/her path; otherwise the entire path name
must be used when invoking the programs.
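A minimal Bourne-shell sketch of the path setup this requires (the install directory is an assumed example):

```shell
# Put $P3_HOME/bin on the search path so p3analysis_mgr and
# p3am_admin (the pointers to .wrapper) resolve without full paths.
P3_HOME=/msc/patran200x
PATH=$P3_HOME/bin:$PATH
export P3_HOME PATH
```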
P3_ORG
It may be desirable to have multiple Queue Managers running (groups of systems for the Analysis
Manager to use) each with a separate organizational directory for Analysis Manager configuration files.
An optional environment variable, P3_ORG, may be set for each user to specify a separate named
organizational directory. If defined, the Analysis Manager will use it for accessing its required
configuration files and thus connect to the Queue Manager specified by P3_ORG.
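For example, a user belonging to a second organizational group might set (the org name here is hypothetical):

```shell
# Bourne/Korn shell; C shell users would instead use:
#   setenv P3_ORG engineering
P3_ORG=engineering
export P3_ORG
```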
The P3_ORG variable can be specified to change organizational groups each time the Analysis Manager
is invoked. With this last method, the user or the system administrator that starts the QueMgr never
needs to worry about assigning unique port numbers. However, it is one of the most restrictive
installation and access methods.
Note: On Windows these variables should be set under the System Control Panel so that on
reboot, the RmtMgr and QueMgr start up with these arguments. You can check the Event
Viewer under Administrative Tools in the Control Panel to verify proper startup.
Installation
Installation Requirements
The following definitions apply to this section:
1. The master host is the machine which continually runs the Analysis Manager daemon (called
QueMgr). This is also referred to as the master node.
2. The submit host is the machine from which the analysis is submitted; it is also sometimes referred
to as the client.
3. The analysis host is the machine which actually executes the analysis.
Below is an itemized list of installation requirements:
1. One master node must be chosen for each organizational group (for each Queue Manager that will
be running - typical installations have only one!).
2. The Queue Manager (QueMgr) should run as root on the master node. This is not a strict
requirement but is recommended on Unix. On Windows it can run as a user or as administrator.
3. Each node (submit and analysis hosts) in the Analysis Manager configuration must be reachable
to and from the master node via a TCP/IP network.
4. Each analysis host must have a Remote Manager (RmtMgr) running with the same port number
(for each QueMgr). It is recommended that each submit machine run one also (especially on
Windows); however, this is not a strict requirement. (This takes the place of the rsh (remsh)
remote access capabilities that were used in older versions of the Analysis Manager.)
5. The Analysis Manager software is installed from the installation media onto any machine (master,
submit, or analysis host) under the $P3_HOME/p3manager_files directory. The $P3_HOME
variable is the installation directory, typically set up as /msc/patran200x and usually
defined as an environment variable. This p3manager_files directory and tree must exist as-is and
must not be renamed.
6. Each analysis host machine in the Analysis Manager configuration must be able to see the
installation tree identically. If a RmtMgr is running, this is not an issue because the RmtMgr
knows where the Analysis Manager executables are.
7. The root user should run the administration program (p3am_admin (AdmMgr)) on the master
node to test and ensure that new users can correctly access the Analysis Manager. See
Configuration Management Interface.
Each user wishing to use the Analysis Manager must meet the following requirements:
1. Users of the Analysis Manager should have the same login name and the same user and group IDs
on all hosts/nodes in the Analysis Manager configuration. This will prevent file access problems.
In specific cases, users may run jobs on different accounts other than their own, but this must be
set up by the system administrator. This is described in Examples of Configuration Files.
2. Users must have uname in their default search path (path or PATH environment variable in the
user's .cshrc or .profile file).
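A quick per-host sanity check for both of these requirements can be sketched as:

```shell
# Verify that uname resolves through the default search path, and
# print the numeric user and group IDs, which should match on every
# host in the configuration.
command -v uname >/dev/null || echo "uname is not in PATH"
id -u
id -g
```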
Installation Instructions
1. Unload the p3manager_files directory from the installation media. (Consult the Installation
guide for more information on how this is done.)
2. Decide on a master node (typically the node the Patran software is located on), and login to that
node as root.
3. Decide which machine(s) that have MSC Nastran, MSC.Marc, ABAQUS, or other analysis
modules will be included in the Analysis Manager’s configuration. Find out where each runtime
script and/or configuration file is located on each machine (e.g., /msc/bin/nast200x and
/msc/conf/nast200xrc for MSC Nastran). Only these machines will be enabled
for later job submission, monitoring, and management.
4. Each analysis host machine that will be configured to run an analysis code must be able to see the
p3manager_files directory structure as outlined in Directory Structure. This directory
structure must also exist on the master node as well as client (submit) nodes. This can be done in
one of two ways. Either the directory structure can be copied directly to each machine so that it
can be accessed in the same manner as on the master node, or symbolic links and NFS mounts
can be created. In any case, if on one machine you type
cd $P3_HOME/p3manager_files
you should be able to do the same on all analysis nodes and see the same directory structure.
As an example of setting up a link, suppose that the machine venus is the master host and has
the installation directory structure in /venus/users/msc/patran200x. A link can be established on
venus by typing:
ln -s /venus/users/msc/patran200x /patran
This will ensure that on venus, if you type cd /patran you will be put into
/venus/users/msc/patran200x.
Now on an analysis host called jupiter, NFS mount the disk /venus/users and then type:
ln -s /venus/users/msc/patran200x /patran
This will ensure that analysis host jupiter can see the installation directory structure. Repeat
this for all analysis hosts. NFS mounts are not necessary if you wish to copy the installation
directory structures to each host separately instead of creating links.
Each submit host (hosts that submit jobs) does not necessarily need to see the directory structure
in exactly the same way as the master and analysis hosts do. They only need to be able to see an
installation directory structure to find the user interface executable (P3Mgr).
Note: The above description sounds more restrictive than it really is. In actuality, if a
RmtMgr is started on each analysis host, the directory structure can be seen because
RmtMgr knows from where it was launched and thus knows where all the Analysis
Manager executables are. However, it is still recommended to follow the above
procedure if at all possible.
5. Start up the RmtMgr daemon or service on each and every analysis node. It is recommended to
start RmtMgr on submit machines also. Starting the Queue/Remote Managers explains this
procedure. This must be done before configuration testing can be done.
6. Use the p3am_admin program to set up the configuration files. This program is located in
$P3_HOME/bin/p3am_admin
Modify Configuration Files explains the use of this program and the format of the generated
configuration files generated as a result of running this program. The configuration files will be placed
in the correct locations automatically. The generated configuration files are host.cfg, disk.cfg, and,
if LSF or NQS queuing is used, lsf.cfg and nqs.cfg.
Note: For a minimal configuration with a single Queue Manager, you should remove or
rename the file $P3_HOME/p3manager_files/org.cfg. See step 12 for more
information.
7. Test the configuration setup using p3am_admin’s testing features. Specifically, do basic tests and
network tests for each user that wishes to access the Analysis Manager. Test Configuration
explains this procedure in detail.
8. Start up the QueMgr daemon on the master node. Starting the Queue/Remote Managers explains
this procedure.
9. Add commands to the appropriate rc files for automatic start-up of the QueMgr and RmtMgr
daemons when the master, submit or analysis nodes have to be rebooted. Starting the
Queue/Remote Managers also explains this procedure.
10. Invoke the Analysis Manager user interface as a normal user and check that the installation was
performed properly. Invoking the Analysis Manager is explained in Invoking the Analysis
Manager Manually.
11. Repeat the procedure from step 2. for each organizational group (Queue Manager) you wish to
set up.
12. When more than one organizational group (Queue Manager) is to be accessed, either modify the
org.cfg file and add the port numbers and group names, or have users set the appropriate
environment variables to access them. See Organization Environment Variables for an explanation
of these variables and see Examples of Configuration Files for setting up the org.cfg file.
13. Make sure users have $P3_HOME/bin in their path. Most Analysis Manager executables can be
invoked from $P3_HOME/bin, or are links from $P3_HOME/bin that set all environment
variables. These include:
p3am_admin
p3am_viewer (Unix only)
p3analysis_mgr
QueMgr
RmtMgr
It is always safest to invoke these executables from $P3_HOME/bin.
X Resource Settings
The Analysis Manager GUI on Unix requires the use of certain X Window System Resources. The
following explains this use.
The name of the Analysis Manager X application class is P3Mgr. Therefore, to change the background
color the Analysis Manager uses to red, the following resource specification is used:
P3Mgr*background: red
The lines below belong in the P3Mgr file delivered with your installation, found in
$P3_HOME/app-defaults. This file can reside in the user’s local directory or home directory,
or its contents can be placed in .Xdefaults or /usr/lib/X11/app-defaults. It is most
convenient to place it in the user’s home directory; that way changes can be made instantly without
having to log out. These are the resources which the Analysis Manager requires in order to look and
behave like Patran.
!
! Resources for Patran Analysis Manager:
!
P3Mgr*background:white
P3Mgr*foreground:black
P3Mgr*bottomShadowColor:bisque4
P3Mgr*troughColor:bisque3
P3Mgr*topShadowColor:white
P3Mgr*highlightColor: black
P3Mgr*XmScrollBar.foreground:white
P3Mgr*XmScrollBar.background:white
P3Mgr*mon_run_trough.background:DodgerBlue
P3Mgr*mon_ok_label.foreground:DodgerBlue
P3Mgr*mon_bad_label.foreground:red
P3Mgr*que_mon_queued.background:red
P3Mgr*que_mon_run.background:DodgerBlue
P3Mgr*mon_disk_trough.background:red
P3Mgr*mon_cpu_trough.background: green
!
! End of Patran Analysis Manager Resources
!
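One common way to activate such resource settings for the current session (rather than waiting for a new login) is the standard X resource-database tool; the file location follows the installation layout described above and requires an active X session:

```shell
# Merge the delivered P3Mgr resources into the running X server's
# resource database so the changes take effect immediately.
xrdb -merge $P3_HOME/app-defaults/P3Mgr
```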
A file called p3am_admin (AdmMgr) also exists for the system administration tool X resources.
Font Handling
The Analysis Manager on Unix requires three fonts to work correctly. At start-up, the Analysis Manager
looks through the fonts available on the machine and picks out three fonts which meet its needs. You will
notice that there are no font definitions in the default Analysis Manager resources. On platforms which
utilize an R4 based version of X windows, the fonts are NOT adjustable by the user. The fonts that the
Analysis Manager calculates are used all the time.
On R5 X windows platforms, the three fonts are still calculated by the Analysis Manager, but the user
has the option of overriding the calculated fonts by using the X resources. The names of the resources to
use are as follows:
P3Mgr*fontList: *lucida-bold-r-*-14-140-*
P3Mgr*middle.fontList: *lucida-medium-r-*-14-140-*
P3Mgr*fixed.fontList: *courier-medium-r-*-12-120-*
If the user decides to change the fonts, these are the resources which need to be set. All three fonts do not
have to be changed; a single one can be adjusted by itself. The only requirement is that a fixed font is
defined for P3Mgr*fixed.fontList. It is important for this font to be fixed-width, or the interface
for the Queue Monitor will not appear correctly.
Configuration Management Interface
<path_name> = $P3_HOME/p3manager_files/bin/<arch>/
<arch> = the architecture type of the machine you wish to run on which
can be one of the following:
$P3_HOME The path where the Analysis Manager is installed. This path will be used to
locate the p3manager_files directory. For example, if /msc/patran200x is
specified, the p3am_admin (AdmMgr) program will look for the
/msc/patran200x/p3manager_files directory. Typically, the install directory is
/msc/patran200x and is defined in an environment variable called
$P3_HOME.
-org <org> This is the organizational group to be used. See Organization Environment
Variables for a description on the use of organizations. It is the name of the
directory under the p3manager_files directory that contains the
configuration files.
Both of the arguments listed above are optional. If they are not specified, the p3am_admin (AdmMgr)
program will check for the two environment variables P3_HOME and P3_ORG.
If the command line arguments are not specified, then at least the P3_HOME environment variable must
be set. The P3_ORG variable is not required. If the P3_ORG variable is not set and the -org option is not
provided, an organization of default is used. Therefore, p3am_admin (AdmMgr) will check for
configuration files in the following location:
$P3_HOME/p3manager_files/default/conf
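Putting this together, running the admin tool against a non-default organization might look like the following sketch; it relies on the environment variables described above, and the org name is an illustrative assumption:

```shell
# Hypothetical: run the admin tool for org "engineering" via the
# P3_HOME/P3_ORG environment variables instead of command arguments.
P3_ORG=engineering
export P3_ORG
$P3_HOME/bin/p3am_admin
```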
When running the p3am_admin (AdmMgr) program, it is recommended this be done on the master node
and as the root user. The p3am_admin (AdmMgr) program can be run as normal users, but some of the
testing options will not be available. In addition, the user may not have the necessary privileges to save
the changes to the configuration files or start up a Queue Manager daemon.
When p3am_admin (AdmMgr) starts up, it will take the arguments provided (or environment variables)
and check to see if configuration files already exist. The configuration files should exist as follows. The
last two are only necessary if LSF or NQS queuing is used.
$P3_HOME/p3manager_files/<org>/conf/host.cfg
$P3_HOME/p3manager_files/<org>/conf/disk.cfg
$P3_HOME/p3manager_files/<org>/conf/lsf.cfg
$P3_HOME/p3manager_files/<org>/conf/nqs.cfg
If these files exist, they will be read in for use within the p3am_admin (AdmMgr) program. If these files
are not found, p3am_admin (AdmMgr) will start up in an initial state. In this state there are no hosts,
filesystems, or queues defined and they must all be added using the p3am_admin (AdmMgr)
functionality.
Therefore, upon initial installation and/or configuration of the Analysis Manager, the p3am_admin
(AdmMgr) program will come up in an initial state and the user can build up configuration files to save.
Action Options
The initial form for p3am_admin (AdmMgr) has the following Actions/Options:
1. Modify Config Files
2. Test Configuration
3. Reconfigure Que Mgr
Note: Apply saves all configuration files: host, disk, and, if applicable, lsf or nqs.
Queue Managers set up on Windows only have a choice of the default MSC Queue type.
LSF and NQS are not supported for Queue Managers running on Windows.
Administrator User
You must also set the Admin user. This should not be root on Unix or the administrator account on
Windows, but should be a normal user name.
Configuration Version
There are three configuration versions. The functionality available for setup depends on which
version you select. Version 1 is the original.
Version 2 adds the capability of limiting the maximum number of tasks for any given
application allowed to run at any one time. If this number is exceeded, any additional submittals are
queued until the number of running tasks for that application drops below this number. This is
typically used when only so many application licenses are available, so that a job cannot be
submitted without a license being available. Otherwise the application might fail due to no license
being available.
Version 3 includes all capabilities of versions 1 and 2, and adds the ability to set up a host, made
up of a group of hosts, that will be monitored for the least loaded machines. Once a machine in that group
satisfies the loading criteria, the job is submitted to that machine.
Applications
Since the Analysis Manager can execute different applications, it needs to know which applications to
execute and how to access them. This configuration information is stored in the host.cfg file located in
the $P3_HOME/p3manager_files/default/conf directory. This portion of the host.cfg file contains the
following fields:
type An integer number used to identify the application. The user never has to worry
about this number because it is automatically assigned by the program.
program name Program names can be either:
NasMgr for executing MSC Nastran
MarMgr for executing MSC.Marc
AbaMgr for executing ABAQUS
GenMgr for executing other analysis modules.
Patran name The name of the Patran Preference to key off of when invoking the
Analysis Manager. These can be MSC Nastran, MSC.Marc, ABAQUS,
ANSYS, etc. Check to see what the exact Patran Preference spelling is and
remove any spaces. If the Preference does not exist, then the first configured
application will be used when the Analysis Manager is invoked from Patran,
after which the user can change it to the desired one.
optional args Used for generic program execution only. These specify the arguments to be
added to the invoking line when running a generic application.
MaxAppTask By default this is not set. If the configuration file version is set to 2 or 3, then
you may specify the maximum number of tasks that the given application can
run at any one time (on all machines). This is convenient when you don’t want
jobs submitted with the possibility of one or more being unable to check out
the proper licenses because too many jobs are running at once.
The p3am_admin (AdmMgr) program can be used to add and delete applications or change any field
above as shown in the forms below.
The exception to this is the Maximum Number of Tasks. On Windows, this value must be changed
manually by editing the configuration file and then restarting the Queue Manager service. On UNIX,
this can be controlled through the Administration GUI.
Adding an Application
To add an application, select the Add Application button. (On Windows, right mouse click the
Applications tree tab.) An application list form appears from which an application can be selected. If
GENERAL is selected the Application Name and Optional Args data boxes appear on the main form.
For GENERAL, enter the name of the application as it is known by the Patran Preference, without any
spaces. For example, if ANSYS 5 is a preference, then enter ANSYS5.
Enter the optional arguments that are needed to run the specified analysis code. For example, if an
executable for the MARC analysis code needs the arguments -j jobname, you can specify -j
$JOBNAME as the optional args. Arguments can be specified explicitly, such as the -j, or they can be
supplied as variables, such as the $JOBNAME. The following variables are available:
Up to 10 GENERAL applications can be added. To save the configuration, the Apply button must be
pressed and the newly added application information will be saved in the host.cfg file. On Windows this
is Save Config Settings under the Queue pull down menu.
Note: Apply saves all configuration files: host, disk, and if applicable, lsf or nqs.
Deleting an Application
To remove an application, select the Delete Application button. A list of defined applications appears.
Select the one to be deleted by clicking on the application name in the list. Then, select OK. The
application will be removed and the list of applications will disappear.
On Windows, simply select the application you want to delete from the Applications tree tab and press
the Delete button (or right-mouse click the application and select Delete).
To save the configuration, press the Apply button; the deletion is saved in the host.cfg file.
On Windows this is Save Config Settings under the Queue pull-down menu.
Note: Apply saves all configuration files: host, disk, and, if applicable, lsf or nqs.
Physical Hosts
Since the Analysis Manager can execute jobs on different hosts, it needs to know about each analysis
host. Host configuration for the Analysis Manager is done via the host.cfg file located in the
$P3_HOME/p3manager_files/default/conf directory.
This portion of the host.cfg file contains the following fields:
physical host Name of host machine for the use of the Analysis Manager
class System & O/S type:
HP700 - Hewlett Packard HP-UX
RS6K - IBM RS/6000 AIX
SGI5 - Silicon Graphics IRIX
SUNS - Sun SPARC Solaris
LX86 - Linux (MSC or Red Hat)
WINNT - Windows 2000 or XP
maximum tasks Maximum allowable concurrent job processes for this machine.
The p3am_admin (AdmMgr) program can be used to add and delete hosts or change any field above as
shown in the forms below.
Enter the name of the host in the Host Name box, and select the system/OS in the Host Type menu.
Additional hosts can be added by repeating this process.
When all hosts have been added, select Apply and the newly added host information will be saved in the
host.cfg file. On Windows this is Save Config Settings under the Queue pull down menu.
Note: Apply saves all configuration files: host, disk, and if applicable, lsf or nqs.
Deleting a Host
To remove a host from use by the Analysis Manager, select the Delete Physical Host button on the
bottom of the p3am_admin (AdmMgr) form. A list of possible hosts will appear.
Select the host to be deleted by clicking on the hostname in the list. Then, select OK. The host will be
removed from the list of hosts and the list will go away.
On Windows, simply select the Host you want to delete from the Physical Hosts tree tab and press the
Delete button (or right-mouse click the host and select Delete).
When all host configurations are ready, select Apply and the revised host.cfg file will be saved,
excluding the deleted hosts. On Windows this is Save Config Settings under the Queue pull down menu.
AM hostname Unique name for the combination of the analysis application and physical host.
It can be called anything but must be unique, for example, nas68_venus.
physical host The physical host name where the analysis application will run.
type The unique integer ID assigned to this type of analysis. This is automatically
assigned by the program and the user should not have to worry about this.
path How this machine can find the analysis application - for MSC Nastran, this is
the runtime script (typically the nast200x file), for MSC.Marc, ABAQUS, and
GENERAL applications, this is the executable location.
rcpath How this machine can find the analysis application runtime configuration file -
the MSC Nastran nast200xrc file or the ABAQUS site.env file. This is not
applicable to MSC.Marc or GENERAL applications and should be filled with the
keyword NONE.
The p3am_admin (AdmMgr) program can be used to add and delete AM hosts and change any field
above as shown by the forms below.
Adding an AM Host
An AM host is a unique name which the user will specify when submitting a job. Information contained
in the AM host is a combination of the physical host and application type along with the physical location
of that application. To add a specific AM host press the Add AM Host button. A new host description
will be created and displayed in the left scrolled window, with AM Host Name: Unknown, Physical Host:
UNKNOWN, and Application Type: Unknown.
Enter the unique name of the host in the AM Host Name box, and select the Physical Host that this
application will run on. The application is selected from the Application Type menu. Then, specify the
Configuration Location and Runtime Location paths in the corresponding boxes. The unique name
should reflect the name of the application to be run and where it will run. For example, if V68 of MSC
Nastran is to be run on host venus, then specify NasV68_venus as the AM host name.
The Runtime Location is the actual path to the executable or script to be run, such as /msc/bin/nas68 for
MSC Nastran. The Config Location is the actual path to the MSC Nastran rc (nast68rc) file or the
ABAQUS site.env file.
Additional AM hosts can be added by repeating this process.
For each AM host, at least one filesystem must be specified. Use the Add Filesystem capability in
Modify Config Files/Filesystems to specify a filesystem for each added host.
When all hosts have been added, select Apply and the newly added host information will be saved in the
host.cfg file. On Windows this is Save Config Settings under the Queue pull down menu. Note that
Apply saves all configuration files: host, disk, and if applicable, lsf or nqs.
For Group, see Groups (of hosts).
Deleting an AM Host
To remove a host from use by the Analysis Manager, select the Delete AM Host button. A list of possible
hosts will appear.
Select the host to be deleted by clicking on the hostname in the list. Then, select OK. The host will be
removed from the list of hosts and the list of hosts will go away.
On Windows, simply select the AM Host you want to delete from the AM Hosts tree tab and press the
Delete button (or right-mouse click the host and select Delete).
When all host configurations are ready, select Apply and the revised host.cfg file will be saved,
excluding the deleted hosts. On Windows this is Save Config Settings under the Queue pull down menu.
Disk Configuration
In order to define the filesystems to which scratch and database files are written, the Analysis Manager needs
a list, in the disk.cfg file, of each file system on each host that is to be used when running analyses.
This file contains a list of each host, a list of each file system for that host, and the file system type. There
are two different Analysis Manager file system types: NFS and local.
Adding a Filesystem
Use the Modify Config Files/Filesystems form to specify or add a filesystem for use by the Analysis
Manager.
Press the Add Filesystem button. Then, select a host from the list provided.
There are two types of filesystems: NFS and local. Select the appropriate type for the newly added
filesystem.
Additional filesystems can be added by repeating this process. Multiple filesystems can be added for each
host. When all filesystems have been added, select Apply and the newly added filesystem information
will be saved in the disk.cfg file.
Each host must contain at least one filesystem.
After adding a host or filesystem, test the configuration information using the Test Configuration form.
See Test Configuration.
Note: When using the Analysis Manager with LSF or NQS, you must run the administration
program and start a Queue Manager on the same machine where the LSF or NQS executables
are located.
When an AM Host is created, one filesystem is created by default (c:\temp). You can add more
filesystems to an AM Host under the Disk Space tree tab by pressing the Add button. You can change
the directory path by clicking on the Directory itself and editing it in the usual Windows manner.
The Type is changed via the pulldown menu next to the Directory name. If the filesystem is a Unix
filesystem, make sure you remove the c:, e.g., /tmp.
Deleting a Filesystem
At the bottom of the Modify Config Files/Filesystems form, select the Delete Filesystem button to
delete a filesystem from use by the Analysis Manager.
Then, select a host from the list provided, and click OK.
After selecting a host, a list of filesystems defined for the chosen host will appear. Choose the filesystem
to delete from this list and click OK.
On Windows, select the AM Host under the Disk Space tree tab and press the Delete button. The last
filesystem created is deleted.
Additional filesystems can be deleted by repeating this process. When all appropriate filesystems have
been deleted, select Apply and the updated filesystem information will be saved in the disk.cfg file. On
Windows this is Save Config Settings under the Queue pull down menu.
Queue Configuration
If the LSF or NQS scheduling system is being used at this site, the Analysis Manager can interact with it
using the queue configuration file (i.e., lsf.cfg or nqs.cfg). Ensure that LSF or NQS Queue is set for the
Queue Type field in the Modify Config Files form. See Analysis Manager Host Configurations. This sets
a line in the host.cfg file to QUE_TYPE: LSF or NQS. The Queue Manager configuration file lists
each queue name, and all hosts allowed to run MSC Nastran, MSC.Marc, ABAQUS, or other
GENERAL applications for that queue. In addition, a queue install path is required so that the Analysis
Manager can execute queue commands with the proper path.
Note: NQS and LSF are only supported by Unix platform Queue Managers. Although you can
submit to an LSF or NQS queue from Windows to a Unix platform, the Windows Queue
Manager does not support LSF or NQS submittals at this time.
Adding a Queue
To add a queue for use by the Analysis Manager, press the Add Queue button on the bottom of the
p3am_admin (AdmMgr) form. A new queue description will be created and displayed on the left panel,
with MSC Queue Name: Unknown and LSF (or NQS) Queue Name: Unknown.
Enter the queue names in the MSC Queue Name and LSF (or NQS) Queue Name boxes
provided. These names can be the same or different. In addition, the administrator must choose
one or more hosts from the listbox to the right of the specified queue name. The hosts in this
listbox appear only after an application is selected from the Application pulldown menu; only
those hosts configured to run that application will appear. These are the hosts which will
be allowed to run the analysis application when submitted to that queue.
Additional queues can be added by repeating this process. When all queues have been added, press Apply
and the new queue information will be saved in the lsf.cfg (or nqs.cfg) file.
Various pieces of information must be supplied for the Analysis Manager to communicate properly with the
queueing software. The most important is the Executable Path: enter the full path
where the NQS or LSF executables can be found. In addition, you may specify additional (optional)
parameters for the NQS or LSF executables to use if necessary. Keywords can also be used; a
description of how these keywords work can be found in General. Two keywords are available, MEM
and DISK, which evaluate to the specified minimum memory and disk space. For
example, an NQS command might have these additional parameters: -nr -lm $MEM -lf $DISK
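Expanding the MEM and DISK keywords into the final queue parameters might be sketched as follows (the helper queue_params is a hypothetical illustration, not the actual Analysis Manager code):

```python
# Hypothetical sketch: expanding the MEM and DISK keywords in the
# optional queue parameters with the job's requested minimums (in MB).
def queue_params(template, mem_mb, disk_mb):
    return (template.replace("$MEM", str(mem_mb))
                    .replace("$DISK", str(disk_mb)))
```

With the NQS example above, queue_params("-nr -lm $MEM -lf $DISK", 256, 1024) produces "-nr -lm 256 -lf 1024".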
Deleting a Queue
To remove a queue from use by the Analysis Manager, press the Delete Queue button on the bottom of
the p3am_admin (AdmMgr) form. A list of possible queues will appear.
Select the queue to be deleted by clicking on the queue name in the list. Then, select OK. The queue will
be removed from the list of queues and the list of queues will go away.
When the queue configuration is ready, select Apply and the revised lsf.cfg (or nqs.cfg) file will be saved,
excluding the deleted queues.
Groups (of hosts)
This version of the Analysis Manager supports the concept of groups of hosts. In the host.cfg file, if you
specify VERSION: 3 as the first non-commented line and also add the group/queue name at the
end of each am_host line in the AM_HOSTS section, this feature is enabled. Here is an
example:
VERSION: 3
...
AM_HOSTS:
#am_host      host   type  bin path            rc path                  group
#----------------------------------------------------------------------------
N2004_hst1    host1  1     /msc/bin/n2004      /msc/conf/nast2004rc     grp_nas2004
N2004_hst2    host2  1     /msc/bin/n2004      /msc/conf/nast2004rc     grp_nas2004
N2004_hst3    host3  1     /msc/bin/n2004      /msc/conf/nast2004rc     grp_nas2004
N2001_hst1    host1  1     /msc/bin/n2001      /msc/conf/nast2001rc     grp_nas2001
N2001_hst2    host2  1     /msc/bin/n2001      /msc/conf/nast2001rc     grp_nas2001
N2001_hst3    host3  1     /msc/bin/n2001      /msc/conf/nast2001rc     grp_nas2001
M2001_hst1    host1  3     /m2001/marc         NONE                     grp_mar2001
M2001_hst2    host2  3     /m2001/marc         NONE                     grp_mar2001
M2001_hst3    host3  3     /m2001/marc         NONE                     grp_mar2001
...
In this configuration, when you submit a job, you will also have the choice of the group name, shown
with the label 'least-loaded-grp:<group name>' to distinguish it from regular host
names. When you select a group instead of a regular host, the Analysis Manager will then decide
which host from the list of those in the group is best suited to run the job, and start it there when possible.
Here, best suited means the next available host based on several factors, including:
• Free tasks on each host (Maximum currently running jobs)
• Cpu utilization of host
• Available memory of host
• Free disk space of host
• Time since most recent job was started on host
If, in the above example, you submitted an MSC Nastran job to grp_nas2004, then there are three
machines the Analysis Manager could select to run the job: host1, host2, or host3. The Analysis Manager
will query each host for its current CPU utilization, available memory, and free disk space (as configured
in the Analysis Manager), as well as its free tasks and the time since an Analysis Manager job was last started,
and determine which machines, if any, can run the job. If more than one machine can run the job based
on the criteria above, the Analysis Manager selects the best-suited host by sorting the acceptable
hosts in a user-selectable sort order. If no machine meets the criteria, the job remains queued,
and the Analysis Manager will try again to find a suitable host at periodic intervals. The user-selectable
sort order is specified in an optional configuration file called msc.cfg. If this file does not exist,
the sort order and criteria are as follows:
• free_tasks
• cpu_util
• avail_mem
• free_disk
• last_job_time
Where the defaults for CPU utilization, available memory, and available disk are:
• CPU util: 98
• Available mem: 5 MB
• Available disk: 10 MB
Thus any host that has CPU util < 98, available mem > 5 MB, available disk > 10 MB, and at least one
free task (so it can start another Analysis Manager job) is eligible to run a job, and the best-suited host
is the first one after a sort of all eligible hosts. You can change the sort order and the defaults for CPU
util, available mem, and disk in the msc.cfg file. The msc.cfg file resides in the same location as
host.cfg and disk.cfg and has the format explained in Group/Queue Feature.
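The eligibility check and sort described above might be sketched as follows. The thresholds mirror the documented defaults, but the field names, the treatment of last_job_time as "seconds since the last job started", and the function itself are illustrative assumptions, not the Queue Manager's actual code.

```python
# Sketch of the least-loaded host selection described above.
# Defaults mirror the documented values: CPU util < 98,
# available memory > 5 MB, free disk > 10 MB, and >= 1 free task.
DEFAULT_SORT = ["free_tasks", "cpu_util", "avail_mem", "free_disk", "last_job_time"]

def pick_host(hosts, sort_order=DEFAULT_SORT,
              max_cpu_util=98, min_mem=5, min_disk=10):
    """Return the name of the best-suited eligible host, or None."""
    eligible = [h for h in hosts
                if h["free_tasks"] >= 1
                and h["cpu_util"] < max_cpu_util
                and h["avail_mem"] > min_mem
                and h["free_disk"] > min_disk]
    if not eligible:
        return None  # job stays queued; the manager retries periodically
    # Higher is better for these criteria (last_job_time is assumed to be
    # seconds since the last job started); lower is better for cpu_util.
    higher_is_better = {"free_tasks", "avail_mem", "free_disk", "last_job_time"}
    def rank(h):
        return tuple(-h[k] if k in higher_is_better else h[k]
                     for k in sort_order)
    return min(eligible, key=rank)["name"]
```

A different per-group sort order, as configured in msc.cfg, would simply be passed as the sort_order argument.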
Test Configuration
The p3am_admin (AdmMgr) program has various tests that facilitate verification of the
configuration.
Application Test
Changes to the host.cfg file dealing with defined applications can be tested by selecting the Test
Configuration/Applications option. The Applications Test form appears when the Application
Test button is pressed. On Windows, press the Test Configuration button under Administration.
AM Hosts Test
Changes to portions of the host.cfg file dealing with the AM hosts can be tested by selecting the Test
Configuration/AM Hosts option.
If a problem is detected, close the form and return to the Modify Config File form to correct the
configuration.
Queue Test
Changes to the lsf.cfg queue configuration file can be tested by selecting the Test Configuration/Queue
Configuration option. The test queue configuration form will appear.
Queue Manager
This simply allows any changes made to the configuration files during a
p3am_admin (AdmMgr) session to be applied. If the configuration files are owned by root, then you must
have root access to change them. Once they have been changed, the QueMgr must be reconfigured
in order to recognize them. Simply press the Apply button with the Restart QueMgr toggle selected.
This forces the Queue Manager to reread the configuration files. Once the Queue Manager has been
reconfigured, new jobs submitted will use the updated configuration.
If a reconfiguration is issued while jobs are currently running, then those jobs are allowed to finish before
the reconfiguration occurs. During this period, the Queue Manager is said to be in drain mode, not
accepting any new jobs until all old jobs are complete and the Queue Manager has reconfigured itself.
The Queue Manager can also be halted immediately (which kills any running job) or halted after
it is drained.
When the Queue Manager is halted, the three toggles on the right side change to one toggle to allow the
Queue Manager to be started. All configurations that are being used are shown on the left. When the
Queue Manager is halted, you may change some of the configurations on the left side, such as Port, Log
File, and Log File User before starting the daemon again. For more information on the Queue Manager
see Starting the Queue/Remote Managers.
On Windows you can Start and Stop the Queue Manager from the Queue pulldown menu when you are
in the Administration tree tab.
Or you can right mouse click the Administration tree tab and the choices to Read or Save configuration
file or Start and Stop the Queue Manager are also available.
Examples of Configuration Files
AM Hostname A unique name for the combination of the analysis application and physical
host. It can be called anything but must be unique, for example nas68_venus.
Physical Host The physical host name where the analysis application will run.
Type The unique integer ID assigned to this type of analysis. This is automatically
assigned by the program and the user should not have to worry about this.
Path How this machine can find the analysis application. For MSC Nastran, this is
the runtime script (typically the nast68 file), for MSC.Marc, ABAQUS or
GENERAL applications, this is the executable location.
rcpath How this machine can find the analysis application runtime configuration file:
the MSC Nastran nast68rc file or the ABAQUS site.env file. This is not
applicable to MSC.Marc or GENERAL applications and should be filled with
the keyword NONE.
The physical host information has the following fields associated with it:
Physical Host Name of host machine for the use of the Analysis Manager
Class Machine type (RS6K, HP700, etc.)
Note: The MaxAppTsk setting must be added manually. There is no widget in the AdmMgr to do
this. If there are NO configuration files on start up of the AdmMgr, then it will set the version
to 2 and use 1000 as the MaxAppTsk. If configuration files exist and version 2 is set, it will
honor whatever is already there and pass them through. If version 1 is set, then MaxAppTsk
is not written to the configuration files.
The application information has the following fields associated with it:
If the scheduling system is a separate package (e.g., LSF or NQS), then the Analysis Manager will submit
jobs to a provided queue. Queues are described below. Also, if the scheduler is separate from the
Analysis Manager, then the maximum tasks field is not used; all tasks are submitted through the queue,
and the queueing system will execute or hold each task according to its own configuration. An example
of a host.cfg file is given below. Each comment line must begin with a # character. All fields are
separated by one or more spaces. All fields must be present.
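A parser for this kind of file (# comments, whitespace-separated fields, "TAG:" section headers, and "TAG: value" settings) might be sketched as follows; the function name and its return structure are illustrative, not the Analysis Manager's own reader.

```python
# Sketch of parsing a host.cfg-style file: '#' lines are comments,
# fields are separated by one or more spaces, "TAG:" alone introduces
# a section, and "TAG: value" is a single setting such as VERSION.
def parse_cfg(text):
    settings = {}   # e.g. {"VERSION": "2", "QUE_TYPE": "MSC"}
    sections = {}   # e.g. {"PHYSICAL_HOSTS": [["venus", "SGI5", "2"], ...]}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comments
        fields = line.split()             # one or more spaces between fields
        if fields[0].endswith(":"):
            tag = fields[0][:-1]
            if len(fields) == 1:
                current = sections.setdefault(tag, [])
            else:
                settings[tag] = " ".join(fields[1:])
            continue
        if current is not None:
            current.append(fields)        # a row of the current section
    return settings, sections
```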
#------------------------------------------------------
# Analysis Manager host.cfg file
#------------------------------------------------------
#
#
# A/M Config file version
# Que Type: possible choices are P3, LSF, or NQS
#
VERSION: 2
ADMIN: am_admin
QUE_TYPE: MSC
#
#------------------------------------------------------
# AM HOSTS Section
#------------------------------------------------------
#
# Must start with a “P3AM_HOSTS:” tag.
#
# AM Host:
# Name to represent the choice as it will appear
# on the AM menus.
#
# Physical Host:
# Actual hostname of the machine to run the application on.
#
# Type:
# 1 - MSC.Nastran
# 2 - ABAQUS
# 3 - MSC.Marc
# 20 - User defined (General) application #1
# 21 - User defined (General) application #2
# etc. (max of 29)
#
# This field defines the application for this entry.
# Each value will have a corresponding entry in the
# “APPLICATIONS” section.
#
# EXE_Path:
# Where executable entry is made.
#
# RC_Path:
# Where runtime configuration file (if present) is found.
# Set to “NONE” if “General” application.
#
#------------------------------------------------------
# Physical Hosts Section
#------------------------------------------------------
#
# Must start with a “PHYSICAL_HOSTS:” tag.
#
# Class:
# HP700 - Hewlett Packard HP-UX
# RS6K - IBM RS/6000 AIX
# SGI5 - Silicon Graphics IRIX
# SUNS - Sun Solaris
# LX86 - Linux/
# WINNT - Windows
#
# Max:
#
# Maximum allowable concurrent tasks for this host.
#
#------------------------------------------------------
# Applications Section
#------------------------------------------------------
#
# Must start with a “APPLICATIONS:” tag.
#
# Type: See above for values
# Prog_name:
#
# The name of the Patran AM Task Manager executable to start.
#
# This field must be set to the following, based on the
# application it represents:
#
# MSC.Nastran -> NasMgr
# HKS/ABAQUS -> AbaMgr
#---------------------------------------------------------------------
#
#Physical Host Class Max
#--------------------------------------------------------------
PHYSICAL_HOSTS:
venus SGI4D 2
mars SUN4 1
#--------------------------------------------------------------
#
#
#Type Prog_name MSC P3 name MaxAppTsk [option args]
#--------------------------------------------------------------
APPLICATIONS:
1 NasMgr MSC.Nastran 3
2 AbaMgr ABAQUS 3
3 MarMgr MSC.Marc 3
20 GenMgr MYCODE 3 -j $JOBNAME -f $JOBFILE
#--------------------------------------------------------------
Venus_mycode /tmp
Venus_mycode /server/scratch nfs
#
Mars_nas68 /mars/nas_scratch
#
Mars_aba5.2 /mars/users/aba_scratch
Mars_aba5.2 /tmp
#
Mars_mycode /tmp
#---------------------------------------------------------------------
Each comment line must begin with a # character. All fields are separated by one or more spaces. All
fields must be present.
In this example, the term file system is used to define a directory that may or may not be its own file
system, that already exists, and that has permissions such that any Analysis Manager user can create
directories below it. It is recommended that the Analysis Manager file systems be directories with large
amounts of disk space, restricted to the Analysis Manager’s use, because the Analysis Manager’s
MSC Nastran, MSC.Marc, ABAQUS, and GENERAL Managers only know about their own jobs
and processes.
#
# NOTE:
# Each queue can only contain one Host of a given application
# version (i.e., if there are two version entries for
# MSC.Nastran, nas67 and nas68, then each queue
# set up to run MSC.Nastran tasks could only include
# one of these versions. To be able to submit to
# the other version, create a separate, additional
# MSC queue containing the same LSF queue name, but
# referencing the other version)
#
#
TYPE: 1
#
TYPE: 2
#------------------------------------------------------
# Patran ANALYSIS MANAGER org.cfg file
#------------------------------------------------------
#
# Org Master Host Port #
#------------------------------------------------------
default casablanca 1500
atf atf_ibm 1501
lsf_atf atf_sgi 1502
support umea 1503
Note: Any user account that is configured in this manner must exist not only on the machine
where the analysis is going to run, but also on the machine from which the job was
submitted.
The capability, or necessity, of this separate user file has largely been made obsolete. In general the
following applies:
1. On Unix machines, if RmtMgrs are running as root, then they can run the job as the user (or the
separate user specified by this file) with no problem.
2. On Unix machines, if RmtMgrs are running as a specific user, then the job will run as that user
regardless of the user (or separate user) who submitted the job.
3. On Windows, the job runs as whoever is running the RmtMgr on the PC. The user (and
separate user) is ignored.
Group/Queue Feature
This configuration file, msc.cfg, allows the default least-loaded criteria to be modified when using the
host grouping feature for automatically selecting the least-loaded machine to submit to. The file contents
look like:
SORT_ORDER: free_tasks cpu_util last_job_time avail_mem free_disk
GROUP: grp_nas2004
MIN_DISK: 10
MIN_MEM: 5
MAX_CPU_UTIL: 95
The SORT_ORDER line lists the names of the sort criteria in the order in which eligible hosts are sorted. The
remaining lines set per-group defaults: define a GROUP, MIN_DISK, MIN_MEM, and MAX_CPU_UTIL
entry for each group whose defaults you wish to change.
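Reading these per-group overrides might be sketched as follows (an illustrative assumption, not the actual parser):

```python
# Sketch of reading msc.cfg-style overrides: one global SORT_ORDER line
# plus GROUP / MIN_DISK / MIN_MEM / MAX_CPU_UTIL entries per group.
def parse_msc_cfg(text):
    sort_order, groups, current = [], {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip().lstrip(":").strip()
        if key == "SORT_ORDER":
            sort_order = value.split()
        elif key == "GROUP":
            current = groups.setdefault(value, {})
        elif current is not None:
            current[key] = float(value)   # numeric thresholds
    return sort_order, groups
```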
A group cannot contain multiple entries that use the same physical host (e.g., nast2004_host1 and
nast2001_host1 in the above example), because then the Analysis Manager would not know which to use.
In this case just create another group name (grp_nas2001, as above) and it will work as expected. You
can have different applications in the same group with no problem. In the above example you could
have used grp_nas2004 as the group name for all the MSC Nastran entries (possibly changing the group
name to make it clearer that it is for hosts which run MSC Nastran), or you can keep them separate,
with the added flexibility of defining a different sort order and util/mem/disk criteria for each
application/group.
Starting the Queue/Remote Managers
QueMgr Usage:
The Queue Manager can be manually invoked by typing
$P3_HOME/bin/QueMgr <args>
with the arguments below:
QueMgr -path $P3_HOME -org <org> -log <logfile> -port <#>
where:
$P3_HOME is the installation directory.
Only the -path argument is required, and it can be omitted if the QueMgr is started with a full path.
Starting the QueMgr as root is recommended, although it is not a strict requirement; alternatively,
run the QueMgr as a separate user, such as the administrator account.
Example:
If the Analysis Manager is installed in /msc/patran200x and the master node is an IBM RS/6000
computer, log into the master node (as root if you want) and do the following:
/msc/patran200x/bin/QueMgr -path /msc/patran200x
If the Analysis Manager is installed on a filesystem that is not local to the master node and the QueMgr
is started as root, it is recommended that the -log option be used when starting the Queue Manager. The
-log option should be used to specify a log file which should be on a filesystem local to the master node.
Writing files as root onto network mounted filesystems is sometimes not possible. Starting the QueMgr
as a normal user solves this problem.
You may want to put this command line somewhere in a script so the Queue Manager is started as root
each time the master node is rebooted. See Starting Daemons at Boot Time.
Note: There are other arguments that can be used when starting up the Queue Manager for more
flexibility. See Analysis Manager Program Startup Arguments.
RmtMgr Usage:
The Remote Manager can be manually invoked by typing
$P3_HOME/bin/RmtMgr
where:
$P3_HOME is the installation directory. No arguments are necessary unless you start it from its own
directory with ./RmtMgr, in which case you will need the -path $P3_HOME argument.
The RmtMgr should not be started as root.
Example:
If the Analysis Manager is installed in /msc/patran200x and the analysis node is an IBM RS/6000
computer, log into the analysis node as root and do the following:
/msc/patran200x/bin/RmtMgr -path /msc/patran200x
All other arguments not specified will be defaulted. You may want to put this command line somewhere
in a script so the Remote Manager is started each time the analysis node is rebooted. See Starting
Daemons at Boot Time.
Note: There are other arguments that can be used when starting up the Remote Manager for more
flexibility. See Analysis Manager Program Startup Arguments.
the /etc/rc2 method as opposed to the inittab method. These methods can vary from Unix machine to Unix
machine. If you have trouble, consult your system administrator.
Windows uses services. Manually installing and configuring these services is also described below.
Unix Method: rc
As root, the following is done, in general terms:
Create a file in /etc/rc2.d called Sxxz_p3am, where xx is a number as high as possible (say 99)
and the name z_p3am is simply a name. (The higher number indicates that it will be executed last of all
the scripts in this directory during startup.) In this file, place the script commands to start the QueMgr
and RmtMgr. You can also add the su command to start the daemons as a particular user.
Note: The location of the rc2.d directory may vary from computer to computer. Check /etc and /sbin.
Note: The script above is specific to starting the QueMgr on SGI machines. For other machines,
replace the SGI5 with the appropriate <arch> as described in Directory Structure.
The above script can also be used to stop the daemons, allowing a clean and proper exit when the machine
is shut down or rebooted. In this case, place a script in the rc0.d directory with a name of Kxx_p3am,
where xx is the lowest number, such as 01, to force it to be executed first among all the scripts in this
directory. The argument to the above script would then be stop instead of start. An example of a script
called K01_p3am is:
#! /sbin/sh
# stop QueMgr
/etc/p3am_que stop
Now create the file /etc/p3am and add the following lines:
#!/bin/sh
QueMgr=$P3_HOME/bin/QueMgr
RmtMgr=$P3_HOME/bin/RmtMgr
if [ -x $QueMgr ]
then
    $QueMgr -path $P3_HOME
fi
if [ -x $RmtMgr ]
then
    $RmtMgr
fi
where $P3_HOME is the Analysis Manager installation directory referred to throughout this manual.
You must replace it with the exact path in the above example. Make sure that
this file's protection allows for execution:
chmod 755 /etc/p3am
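Note that the K01_p3am script passes a stop argument, while /etc/p3am as shown only starts the daemons. A start/stop-capable variant can be sketched as follows (a sketch only; the kill-by-name commands are an assumption and may need adjusting per platform):

```shell
#!/bin/sh
# Sketch of an argument-handling /etc/p3am script.
# Replace P3_HOME with the exact installation path.
P3_HOME=${P3_HOME:-/msc/patran200x}
QueMgr=$P3_HOME/bin/QueMgr
RmtMgr=$P3_HOME/bin/RmtMgr

case "${1:-start}" in
start)
    if [ -x "$QueMgr" ]; then "$QueMgr" -path "$P3_HOME"; fi
    if [ -x "$RmtMgr" ]; then "$RmtMgr"; fi
    ;;
stop)
    # Terminate any running daemons by name (assumes pkill is available)
    pkill -x QueMgr 2>/dev/null || true
    pkill -x RmtMgr 2>/dev/null || true
    ;;
*)
    echo "usage: $0 {start|stop}" >&2
    ;;
esac
```

The rc2.d S script would then invoke this file with a start argument, and the rc0.d K script with a stop argument.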
Windows Method: services
Batch files exist in the directory
$P3_HOME\p3manager_files\bin\WINNT
called:
start_server.bat
start_client.bat
stop_server.bat
stop_client.bat
query_server.bat
query_client.bat
remove_server.bat
remove_client.bat
Each does exactly what its name describes for starting, stopping, querying, and removing the Queue
Manager (server) service or the Remote Manager (client) service.
If you follow the above steps, manual installation should be successful. You will still have to edit your
configuration files and then reconfigure (or stop and start) the Queue Manager to read the configuration
before you will be able to successfully use the Analysis Manager. See Configuration Management
Interface.
Chapter 8: Error Messages
Error Messages
The following are possible error messages and their corresponding explanations and possible solutions.
Only messages which are not self-explanatory are elaborated upon. If you are having trouble, please
check the QueMgr.log file, usually located in the directory
$P3_HOME/p3manager_files/<org>/log or in the directory that was specified by the -log
argument when starting the QueMgr. On Windows, check the Event Log under the Administrative
Tools Control Panel (or a system log on Unix).
Note: The directories (conf, log, proj) for each set of configuration files (organizational
structure) must have read, write, and execute (777) permission for all users. This can be the
cause of many task manager errors.
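As a concrete sketch of the permissions described in the note, assuming the default organization directory (the org name and paths are illustrative):

```shell
#!/bin/sh
# Give all users read/write/execute access to the per-organization
# conf, log and proj directories. Paths are assumptions; adjust per site.
P3_HOME=${P3_HOME:-/msc/patran200x}
ORG_DIR=$P3_HOME/p3manager_files/default
for d in conf log proj; do
    if [ -d "$ORG_DIR/$d" ]; then
        chmod 777 "$ORG_DIR/$d"
    fi
done
```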
Sometimes errors occur because the RmtMgr is running as root or administrator on Windows yet
RmtMgr is trying to access network resources such as shared drives. For this reason it is recommended
that RmtMgr (and QueMgr) be started as a normal user.
Windows
No doctemplate is loaded. Cannot create new document.
If you get this message, it is because you have not merged this file into the registry, or the path
was incorrect. See For Windows machines: in Queue Manager.
JobMgr is unable to create client communication. Possible reason is the host’s network interface is not
configured properly.
================
JobMgr is unable to query memory for the memory req config setting. Contact support personnel for
assistance.
================
P3Mgr cannot determine which port to use to connect to a valid Queue Manager. The file QueMgr.sid is not
actually used anymore. You should set P3_MASTER and P3_PORT environment variables, or use the
org.cfg file.
================
ERROR... File <> does not exist... Enter A/M to select file explicitly.
P3Mgr was asked to monitor a completed job (using the mon file) from the jobname information only
and this file cannot be found. Use the Full Analysis Manager interface and the file browser (under
Monitor, Completed Job) to select an existing mon file.
================
P3Mgr was asked to monitor a completed job and the selected mon file does not exist. Select an existing
mon file and try again.
================
ERROR... More than one active job named <> found. Request an active list
ERROR... More than one active job named <> found. Enter A/M to
explicitly select job
ERROR... No jobs named <> owned by <> are currently active
P3Mgr was asked to monitor or delete a job from the job name (and owner) and no such job can be located
in the queues. Select an active list of jobs from the Full Analysis Manager interface (Monitor, Running
Jobs).
================
ABAQUS:
ERROR... JobName <> and Restart JobName <> are identical.
ABAQUS cannot have jobs where the job name and the restart job name are the same. Change one or the
other and re-submit.
================
ERROR... JobName <> and Input Temperature File JobName <> are identical.
ABAQUS cannot have jobs where the job name and the temperature data file job name are the same.
Change one or the other and re-submit.
================
ABAQUS “RESTART” card encountered, but no filename specified. Add filename to this card and
re-submit.
MSC Nastran:
================
ERROR... File <> cannot contain more than one period in its name.
P3Mgr will currently only allow MSC Nastran jobs to contain one period in their
filename. Rename the input file to contain no more than one period and re-submit.
================
MSC.Marc:
================
================
ERROR... Total disk space req of %d (kb) cannot EVER be met. Cannot
continue.
There is not enough disk space (free or used) to honor the space requirement provided by the user so the
job will stop. Add more disk space or check the requirement specified.
================
P3edit is unable to read file completely, or is unable to read text from memory to write file.
================
RmtMgr Errors...
RmtMgr errors are returned to the connecting program and are also printed in the OS system log (syslog),
or in the Event Viewer on Windows.
================
QueMgr Errors...
ERROR... Determining computer architecture
QueMgr is unable to recognize its host architecture. Check installation and OS version compatibility.
================
QueMgr was started with an invalid port or user argument. Select a valid port or user name argument
and try to start the QueMgr again.
================
203 ERROR... Problem creating com file for Task Manager execution.
QueMgr is unable to create a com file on the eligible host(s) for execution. Possible causes are lack of
permission connecting to the eligible host(s) as the designated user (check network permission/access
using the Administration tool) or incorrect path/permission on the directory on the eligible host(s). (Use
the Administration tool to check this.) The major cause of this error is that the specified user does not
have remote shell access from the Master Host to the Analysis Host. Resolutions to this problem are to
add the Master Host name to the hosts.equiv file or the user's .rhosts file on the Analysis Host.
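An rhosts-style trust entry has the form hostname username. As a hedged sketch (the hostname and user are placeholders, and this writes to an example file rather than the live ~/.rhosts):

```shell
#!/bin/sh
# Append an rsh trust entry of the form "hostname username".
# Written to an example file here; the real file is $HOME/.rhosts.
RHOSTS=${TMPDIR:-/tmp}/.rhosts.example
echo "master-node amuser" >> "$RHOSTS"   # placeholder host and user
chmod 600 "$RHOSTS"                      # rsh ignores overly permissive files
```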
================
ERROR... User: <> can not delete job number <> which is owned by: <>
201 ERROR... User can not kill a job owned by someone else.
QueMgr was asked to delete a job from a user other than the one who submitted the job. Only the user
who submitted a job is eligible to delete it.
================
205 ERROR... Error submitting job to NQS. See Que Manager Log.
211 ERROR... Unable to submit task to NQS queue.
QueMgr received an error while trying to submit a job to an NQS queue. Check the QueMgr.log file
for the more detailed NQS error.
================
206 ERROR... Error submitting job to LSF. See Que Manager Log.
207 ERROR... Error in return string from LSF bsub. See Que Manager Log.
210 ERROR... Unable to submit task to LSF queue.
QueMgr received an error while trying to submit a job to an LSF queue. Check the QueMgr.log file
for the more detailed LSF error.
================
The process has received a signal, either from an abort (from the user) or from an internal error, and
during the shutdown procedure a second signal has occurred, indicating an error in the shutdown
procedure.
================
ABAQUS (AbaMgr):
ERROR... Unable to create local environment file
AbaMgr is unable to create a local abaqus.env file. Check file system/directory free space and
permissions.
================
GENERAL (GenMgr):
ERROR... Unable to load GENERAL configuration info
GenMgr is unable to load configuration information from internal memory. Contact support personnel
for assistance.
================
MSC Nastran (NasMgr):
NasMgr is unable to load configuration information from internal memory. Contact support personnel
for assistance.
================
MSC.Marc (MarMgr):
================
ERROR... Total disk space req of %d (kb) cannot EVER be met. Cannot
continue.
There is not enough disk space (free or used) to honor the space requirement provided by the user so the
job will stop. Add more disk space or check the requirement specified.
================
ERROR... A/M Host <> configuration file <> does not contain an absolute
path.
The AdmMgr program has found an rc file entry or an exe file entry in the host.cfg file, or a file
system in the disk.cfg file, that is not a full path. Change the entries to be fully qualified (starting
with a slash character '/').
================
ERROR... A/M Host <> does not have a valid application defined. Run
Basic A/M Host Test.
The configuration files do not contain any valid applications. Add a valid application and all its required
information and run the basic test to verify.
================
ERROR... A/M Host <> filesystem <> does not contain an absolute path.
The file system designated for the host listed is not fully qualified. Change the entry to begin with a slash
‘/’ character.
================
ERROR... A/M Host <> runtime file <> does not contain an absolute path.
The rc file designated for the host listed is not fully qualified. Change the entry to begin with a slash ‘/’
character.
================
ERROR... A/M Host name <> is used more than once within application <>.
Each application contains a list of Analysis Manager host names (which are mapped to physical host
names) and each Analysis Manager host name must be unique. The AdmMgr program has found that the
designated Analysis Manager host name is being used more than once. Change the Analysis Manager
host name for all but one of the applications and re-test.
================
The AdmMgr program requires an administrator account which is not the root account. Change the
administrator account name to something other than root and continue testing.
================
ERROR... At least one filesystem must be defined for A/M Host <>.
The configuration requires each Analysis Manager host to reference a file system. Add a reference for
this AM host and re-test.
================
ERROR... Host not specified for A/M Host <>. Run Basic A/M Host Test.
Each Analysis Manager host must reference a physical host. Provide a physical host reference and
continue testing.
================
ERROR... Physical host has not been defined for A/M Host <>. Run Basic
A/M Host Test.
Each Analysis Manager host must reference a physical host. Provide a physical host reference and
continue testing.
================
Chapter 9: Application Procedural Interface (API)
Analysis Manager API 194
Include File 204
Example Interface 230
Analysis Manager API
Assumptions:
The product is ALREADY installed and configured. A description of what it takes to install and
configure is included in System Management, but for the purpose of describing the API, assume this
for now.
A Quick Background
There are 3 machines involved in the job submit/abort/monitor cycle:
1. The QueMgr scheduling process machine, labelled the master node.
2. The user's submit (home) machine where the graphical user interface (GUI) is run and the input
files are located, labelled the submit node.
3. The analysis machine where the job actually runs, labelled the analysis node.
All 3 machines can be the same or different or any combination of these. And there are two separate
persistent processes, which are already running as part of the installation. These processes (called
daemons on Unix or services on Windows) are the:
1. QueMgr - job scheduling daemon or service
2. RmtMgr - remote command daemon or service
There is one and only one QueMgr process per site (or group or organization or network) but there are
many RmtMgr processes. A RmtMgr process runs on each and every analysis machine. A RmtMgr can
also be run on each submit machine (recommended). If the submit and analysis machines are the same
host, then only one RmtMgr needs to be running.
The QueMgr and RmtMgr processes start up at boot time automatically and run always, but use very
little memory and CPU resources, so users will not notice performance effects. Also these processes can
run as root (Administrator on Windows) or as any user, if these privileges are not available.
Each RmtMgr binds to a known/chosen port number that is the same for every RmtMgr machine. Each
RmtMgr process collects machine statistics on free CPU cycles, free memory and free disk space and
returns this data to the QueMgr at frequent intervals.
The QueMgr then maintains a sorted list of each RmtMgr machine and its capacity to report back to a
GUI/user. (A least loaded host selection is currently being developed so the QueMgr selects the actual
host for a submit based on these statistics, instead of a user explicitly setting the hostname in the GUI.)
There are a few other AM executables:
1. The TxtMgr - a simple text-based UI which is built on this API and demonstrates all these
features.
2. The JobMgr - GUI back-end processes, starts up on the same machine as the GUI (submit
machine) when a job is submitted and runs only for the life of a job. There is always 1 JobMgr
process per job.
3. The analysis family: These programs are all built on top of an additional API that implements the
many common features each must provide. The common code is shared, and the custom work for each
application is in a few separate routines: pre_app(), post_app(), abort_app()
• NasMgr - The MSC Nastran analysis process which communicates data to/from the JobMgr
and spawns the actual MSC Nastran sub-process. It also reads include files and transfers them,
adds FMS statements to the deck if appropriate, and periodically sends job resource data and
msgpop message data to the JobMgr to store off.
• MarMgr - The MSC.Marc analysis process which does the same things as NasMgr, but for the
MSC.Marc application.
• AbaMgr - The Abaqus analysis process which does the same things as NasMgr, but for the
Abaqus application.
• GenMgr - The General application analysis process, used for any other application. Does
what NasMgr does except it has no knowledge of the application and just runs it and collects
resource usage.
Each function requires some common data and some unique data. Common data include the QueMgr host
and port it is listening on and the configuration structure information. Unique data is described
further below.
Configure
The first step to any of the Analysis Manager functions is to connect to an already running QueMgr. To
do this you must first know the host and port of the running QueMgr, which is usually in the
$P3_HOME/p3manager_files/org.cfg or the
$P3_HOME/p3manager_files/default/conf/QueMgr.sid file. After that, simply call
CONFIG *cfg;
char qmgr_host[128];
int qmgr_port;
int ret_code;
int error_msg;
cfg = get_config(qmgr_host, qmgr_port, &ret_code, error_msg);
ret_code and possibly error_msg are returned for checking errors.
The CONFIG structure is defined in an include file shown below. Then initialize sub-parts of the
configuration structure by calling
init_config(cfg)
Then determine the application name/index. The application is the name of the application you plan to
work with, most likely MSC Nastran, but it could be anything that is already pre-configured. A
configuration basically includes the application name and a list of hosts and paths where it is installed,
as described in the $P3_HOME/p3manager_files/default/conf/host.cfg file, read by the
QueMgr on start up. Each application has different names and possibly different options to the Analysis
Manager functions. All application names/indexes are in the cfg structure so the GUI can ask the user
and check against the accepted list.
Then call the function of choice:
1. Submit a job
2. Abort a job
3. Monitor a specific job
4. Monitor all the hosts/queues
5. List statistics of a completed job
Submit
For submit, the GUI then needs to fill in the application structure data and make a call to submit the job.
The call may block and wait for the job to complete (maybe a very long time) or it can return
immediately. See the job info rcf/GUI settings listed below for what can be set and changed. Assuming
defaults for ALL settings, then only a jobname (input file selection), hostname and (possibly) memory
need to be set before submitting.
Then call
char *jobfile;
char *jobname; /* usually same as basename of jobfile */
int background;
int ret_code;
int job_number;
job_number = submit_job(jobfile,jobname,background,&ret_code);
This call goes through many steps: contacting the QueMgr, getting a valid reserved job number, asking
the QueMgr to start a JobMgr, etc. and then sends all the config/rcf/GUI structure info to the JobMgr.
The JobMgr runs for the life of the job and is essentially the back-end of the GUI, transferring files
to/from the user submit machine to the analysis machine (the NasMgr, MarMgr, AbaMgr or
GenMgr process).
Abort
For abort, the GUI then needs to query the QueMgr for a list of jobs, and then present this for the user
to select:
char *qmgr_host;
int qmgr_port;
JOBLIST *job_list;
int job_count;
job_count = get_job_list(qmgr_host,qmgr_port,job_list);
Once a job is chosen, a simple call deletes it:
int job_number;
char *job_user;
int ret_code;
ret_code = delete_job(job_number,job_user);
for(i=0;i<msg_count;i++)
    printf("%s",ret_string[i]);
printf("cpu time used by job = %d, mem used by job = %d, disk used by job = %d\n",
    cpu, mem, disk);
The CPU, MEM and DISK values are the current resources used by the job.
Monitor Hosts/Queues
For monitor all hosts/queues, the GUI then needs to make a call and get back all QueMgr data for the
application chosen. This gets complex. There are five different types/groups of data available. For now let's
just assume only one type is wanted. They are:
1. FULL_LIST
2. JOB_LIST
3. QUEMGR_LOG
4. QUE_STATUS
5. HOST_STATS
Each has its own syntax and set of data. For the QUE_STATUS type, the call returns an array of
structures containing the hostname, number of running jobs, number of waiting jobs, maximum jobs
allowed to run on that host, for the given (input) application.
char *qmgr_host;
int qmgr_port;
int job_count;
QUESTAT *que_info;
que_info = get_que_stats(qmgr_host,qmgr_port,&job_count);
for(i=0;i<job_count;i++)
printf("%s %d %d %d\n",
que_info[i].hostname,que_info[i].num_running,
que_info[i].num_waiting,que_info[i].maxtsk);
For FULL_LIST:
See Include File.
For JOB_LIST:
See Include File.
For QUEMGR_LOG, this is simply a character string of the last 4096 bytes of the QueMgr log file:
See Include File.
For HOST_STATS:
See Include File.
Remote Manager
On another level, a GUI could also connect to any RmtMgr and ask it to perform a command and return
the output from that command. This is essentially a remote shell (rsh) host command, as on a Unix
machine. This functionality may come in handy when extending the Analysis Manager product, for
example to network-install other MSC software. The syntax for this is as follows:
char *ret_msg;
int ret_code;
char *rmtuser;
char *rmthost;
int rmtport;
char *command;
int background; /* FOREGROUND (0) or BACKGROUND (1) */
ret_msg = remote_command(rmtuser, rmthost,
    rmtport, command, background, &ret_code);
Structures
The JOBLIST structure contains these members:
int job_number;
char job_name[128];
char job_user[128];
char job_host[128];
char work_dir[256];
int port_number;
cfg structure from config.h:
typedef struct{
char org_name[NAME_LENGTH];
char org_name2[NAME_LENGTH];
char host_name[NAME_LENGTH];
unsigned int addr;
int port;
}ORG;
typedef struct{
char prog_name[NAME_LENGTH];
char app_name[NAME_LENGTH];
char args[PATH_LENGTH];
char extension[24];
}PROGS;
typedef struct{
char pseudohost_name[NAME_LENGTH];
char host_name[NAME_LENGTH];
char exepath[PATH_LENGTH];
char rcpath[PATH_LENGTH];
int glob_index;
int sub_index;
char arch[NAME_LENGTH];
unsigned int address;
}HSTS;
typedef struct{
int num_hosts;
HSTS *hosts;
}HOST;
typedef struct{
char pseudohost_name[NAME_LENGTH];
char exepath[PATH_LENGTH];
char rcpath[PATH_LENGTH];
int type;
}APPS;
typedef struct{
char host_name[NAME_LENGTH];
int num_subapps;
APPS subapp[MAX_SUB_APPS];
int maxtsk;
char arch[NAME_LENGTH];
unsigned int address;
}TOT_HST;
typedef struct{
char queue_name1[NAME_LENGTH];
char queue_name2[NAME_LENGTH];
int glob_index;
}QUES;
typedef struct{
int num_queues;
QUES *queues;
}QUEUE;
typedef struct{
char queue_name1[NAME_LENGTH];
char queue_name2[NAME_LENGTH];
HOST sub_host[MAX_APPS];
}TOT_QUE;
typedef struct{
char file_sys_name[NAME_LENGTH];
int model;
int max_size;
int cur_free;
}FILES;
typedef struct{
char pseudohost_name[NAME_LENGTH];
int num_fsystems;
FILES *sub_fsystems;
}TOT_FSYS;
typedef struct{
char sepuser_name[NAME_LENGTH];
}SEP_USER;
typedef struct{
int QUE_TYPE;
char ADMIN[128];
int NUM_APPS;
unsigned int timestamp;
/* prog names */
PROGS progs[MAX_APPS];
/* host stuff */
HOST hsts[MAX_APPS];
int total_h;
TOT_HST *total_h_list;
/* que stuff */
char que_install_path[PATH_LENGTH];
char que_options[PATH_LENGTH];
int min_mem_value;
int min_disk_value;
int min_time_value;
QUEUE ques[MAX_APPS];
int total_q;
TOT_QUE *total_q_list;
/* file stuff */
int total_f;
TOT_FSYS *total_f_list;
}CONFIG;
The job info rcf/GUI settings (see Submit) are listed below with example values:
#
unv_config.auto_mon_flag = 0
unv_config.time_type = 0
unv_config.delay_hour = 0
unv_config.delay_min = 0
unv_config.specific_hour = 0
unv_config.specific_min = 0
unv_config.specific_day = 0
unv_config.mail_on_off = 0
unv_config.mon_file_flag = 0
unv_config.copy_link_flag = 0
unv_config.job_max_time = 0
unv_config.project_name = nastusr
unv_config.orig_pre_prog =
unv_config.orig_pos_prog =
unv_config.exec_pre_prog =
unv_config.exec_pos_prog =
unv_config.separate_user = nastusr
unv_config.p3db_file =
#
nas_config.disk_master = 0
nas_config.disk_dball = 0
nas_config.disk_scratch = 0
nas_config.disk_units = 2
nas_config.scr_run_flag = 1
nas_config.save_db_flag = 0
nas_config.copy_db_flag = 0
nas_config.mem_req = 0
nas_config.mem_units = 0
nas_config.smem_units = 0
nas_config.extra_arg =
nas_config.num_hosts = 2
nas_host[hal9000.macsch.com].mem = 0
nas_host[hal9000.macsch.com].smem = 0
nas_host[daisy.macsch.com].mem = 0
nas_host[daisy.macsch.com].smem = 0
nas_config.default_host = nas_host_u
nas_config.default_queue = N/A
nas_submit.restart_type = 0
nas_submit.restart = 0
nas_submit.modfms = 0
nas_submit.nas_input_deck =
nas_submit.cold_jobname =
#
aba_config.copy_res_file = 1
aba_config.save_res_file = 0
aba_config.mem_req = 0
aba_config.mem_units = 0
aba_config.disk_units = 2
aba_config.space_req = 0
aba_config.append_fil = 0
aba_config.user_sub =
aba_config.use_standard = 1
aba_config.extra_arg =
aba_config.num_hosts = 2
aba_host[hal9000.macsch.com].num_cpus = 1
aba_host[hal9000.macsch.com].pre_buf = 0
aba_host[hal9000.macsch.com].pre_mem = 0
aba_host[hal9000.macsch.com].main_buf = 0
aba_host[hal9000.macsch.com].main_mem = 0
aba_host[daisy.macsch.com].num_cpus = 1
aba_host[daisy.macsch.com].pre_buf = 0
aba_host[daisy.macsch.com].pre_mem = 0
aba_host[daisy.macsch.com].main_buf = 0
aba_host[daisy.macsch.com].main_mem = 0
aba_config.default_host = aba_host_u
aba_config.default_queue = N/A
aba_submit.restart = 0
aba_submit.aba_input_deck =
aba_submit.restart_file =
#
gen_config[GENERIC].disk_units = 2
gen_config[GENERIC].space_req = 0
gen_config[GENERIC].mem_units = 2
gen_config[GENERIC].mem_req = 0
gen_config[GENERIC].cmd_line = jid=$JOBFILE mem=$MEM
gen_config[GENERIC].mon_file = $JOBNAME.log
gen_config[GENERIC].default_host = gen_host_u
gen_config[GENERIC].default_queue = N/A
gen_submit[GENERIC].gen_input_deck =
#
gen_config[GENERIC2].disk_units = 2
gen_config[GENERIC2].space_req = 0
gen_config[GENERIC2].mem_units = 2
gen_config[GENERIC2].mem_req = 0
gen_config[GENERIC2].cmd_line =
gen_config[GENERIC2].mon_file = $JOBNAME.log
gen_config[GENERIC2].default_host = gen_host2_nt
gen_config[GENERIC2].default_queue = N/A
gen_submit[GENERIC2].gen_input_deck =
#
Include File
This include file (api.h) must be included in any source file using the Analysis Manager API.
#ifndef _AMAPI
#define _AMAPI
#ifdef __cplusplus
extern "C" {
#endif
#if defined(SGI5)
typedef int socklen_t;
#elif defined(DECA)
typedef size_t socklen_t;
#elif defined(HP700)
# if !defined(_ILP32) && !defined(_LP64)
typedef int socklen_t;
# endif
#elif defined(WINNT)
typedef int socklen_t;
#endif
#ifndef AM_INITIALIZE
# define AM_EXTERN extern
#else
# define AM_EXTERN
# if !defined(__LINT__)
# if !defined(__TAG_USED)
# define __TAG_USED
static char *sccsid[] =
{
"@(#) MSC Analysis Manager 2003.0.1",
"@(#) "
};
# endif /* __TAG_USED */
# endif /* __LINT__ */
#endif
#if defined(AM_INITIALIZE)
char *global_auth_msg = NULL;
int __is_checked_out = 0;
#else
extern char *global_auth_msg;
extern int __is_checked_out;
#endif
#if defined(AM_INITIALIZE)
int xxx_has_input_deck;
int hks_has_restart;
int has_extra_arg;
#else
extern int xxx_has_input_deck;
extern int hks_has_restart;
extern int has_extra_arg;
#endif
#define SOCKET_VERSION1 1
#define SOCKET_VERSION2 1
#ifndef PATH_LENGTH
# define PATH_LENGTH 400
#endif
#ifndef NAME_LENGTH
# define NAME_LENGTH 256
#endif
#ifndef MAX_STR_LEN
# define MAX_STR_LEN 256
#endif
#ifndef SOMAXCONN
# define SOMAXCONN 20
#endif
#ifdef ULTIMA
#define MSGPOP 1
#else
#ifdef MSGPOP
# undef MSGPOP
#endif
#define MSGPOPnotused 1
#endif
#define UNKNOWN_STATUS -1
#define OK_STATUS 0
#define BAD_STATUS 999
#define BLOCK_TIMEOUT 60
#define NONB_TIMEOUT 15
#define TOTAL_TO_QM_EVENTS 39
/* ---------------------------- */
#define TRANS_CONFIG 1
#define JM_QM_JOB_FINISHED 2
#define JM_QM_JOB_INIT 3
#define JM_QM_ADD_TASK 4
#define JM_QM_DB_UPDATE 19
#define JM_QM_CLEANUP_JOB 26
#define TM_QM_TASK_FINISHED 5
#define TM_QM_TASK_RUNNING 6
#define TM_QM_APP_FILES 25
#define PM_QM_REMOVE_JOB 7
#define PM_QM_FULL_LIST 8
#define PM_QM_JOB_LIST 9
#define PM_QM_QUEMGR_LOG 10
#define PM_QM_QUE_STATUS 11
#define PM_QM_JOB_SELECT_LIST 12
#define PM_QM_JOB_COMP_LIST 27
#define PM_QM_JOBNUM_REQ 13
#define PM_QM_SUSPEND_JOB 21
#define PM_QM_RESUME_JOB 22
#define PM_QM_CPU_LOADS 23
#define PM_QM_START_UP_JOBMGR 29
#define PA_QM_HALT_QUEMGR 14
#define PA_QM_DRAIN_HALT 15
#define PA_QM_DRAIN_RESTART 16
#define PA_QM_CHECK 17
#define PA_QM_GET_RECFG_TEXT 18
#define XX_QM_REQ_VERSION 20
#define RM_QM_LOAD_INFO 24
#define RM_QM_CMD_OUT 28
#define RM_XX_PROC_OUT 32
#define QM_JM_TASK_FINISHED 40
#define QM_JM_TASK_RUNNING 41
#define QM_JM_KILL_TASK 42
#define QM_JM_ACCEPT_REQUEST 43
#define TM_JM_IN_PRE 44
#define TM_JM_RUN_INFO 45
#define TM_JM_IN_POS 46
#define TM_JM_GET_FILES 62
#define TM_JM_PUT_FILES 63
#define TM_JM_CFG_STRUCTS 65
#define TM_JM_DISK_INIT 66
#define TM_JM_LOG_INFO 69
#define TM_JM_PRE_PROG 96
#define TM_JM_POS_PROG 97
#define TM_JM_SUSPEND_JOB 77
#define TM_JM_RESUME_JOB 78
#define TM_JM_ADD_COMMENT 85
#define TM_JM_RM_FILE 86
#define TM_JM_RUNNING_FILE 87
#define TM_JM_MSG_BUFFERS 95
#define XX_RM_STOP_NOW 74
#define XX_RM_RMT_CMD 81
#define XX_RM_RMT_AM_CMD 99
#define XX_RM_SEND_LOADS 82
#define XX_RM_KILL_PROCESS 83
#define XX_RM_REMOVE_FILE 84
#define XX_RM_REMOVE_AM_FILE 100
#define XX_RM_WRITE_FILE 75
#define XX_RM_PUT_FILE 109
#define XX_RM_PUL_FILE 110
#define XX_RM_PING_ME 111
#define XX_RM_GET_UNAME 112
#define XX_RM_EXIST_FILE 113
#define XX_RM_DIR_WRITEABLE 114
#define XX_RM_CAT_FILE 115
#define QM_PM_RET_CODE 47
#define QM_PM_FULL_LISTING 48
#define QM_PM_JOB_LIST 49
#define QM_PM_QUEUE_STATUS 50
#define QM_PM_QUEMGR_LOG 51
#define QM_PM_JOB_SEL_LIST 52
#define QM_PM_SEND_JOBNUM 53
#define QM_PM_NEEDS_RECFG 91
#define QM_PM_LOAD_INFO 92
#define QM_PM_JOBMGR_START 94
#define PM_JM_REQ_JOBMON 54
#define PM_JM_REQ_RUNNING_FILE 88
#define PM_JM_KILL_TRANSFERS 90
#define PM_JM_MSGDEST_REQ 101
#define PM_JM_STATS_REQ 102
#define QM_PA_INFO 67
#define QM_PA_SEND_RECFG_TEXT 68
#define JM_PM_LOG_COMMENT 55
#define JM_PM_LOG_INIT_JOB 56
#define JM_PM_LOG_TASK_SUBMIT 57
#define JM_PM_LOG_TASK_RUN 58
#define JM_PM_LOG_TASK_COMPLETE 59
#define JM_PM_LOG_JOB_FINISHED 60
#define JM_PM_TIME_SYNC 61
#define JM_PM_LOG_LINE 70
#define JM_PM_FILE_PRESENT 71
#define JM_JM_PRE_FINISHED 72
#define JM_JM_POS_FINISHED 73
#define JM_TM_RECV_SETUP 64
#define JM_TM_GIVEME_FILE 89
#define JM_TM_GIVEME_FILE2 107
#define QM_XX_REQ_VERSION 76
#define QM_TM_SUSPEND_JOB 79
#define QM_TM_RESUME_JOB 80
#define QM_TM_KILL_JOB 93
#define MAX_ORGS 28
#define MAX_APPS 30
#define MAX_SUB_APPS 50
#define MAX_GEN_APPS 10
#define LOCAL 0
#define NFS 1
#define MSC_QUEUE 0
#define LSF_QUEUE 1
#define NQS_QUEUE 2
#define MSC_NASTRAN 1
#define HKS_ABAQUS 2
#define MSC_MARC 3
#define GENERAL 20
#define MAX_NUM_FILE_SYS 20
#define UNITS_WORDS 0
#define UNITS_64BIT_WORDS 99
#define UNITS_KB 1
#define UNITS_MB 2
#define UNITS_GB 3
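Memory and disk quantities elsewhere in these structures carry a companion `*_units` field holding one of the UNITS_* codes above. A small normalization helper makes that concrete; this is only a sketch, and the 4-byte and 8-byte word sizes assumed for UNITS_WORDS and UNITS_64BIT_WORDS are guesses, not something the include file specifies:

```c
#include <assert.h>

/* Values copied from the UNITS_* defines above. */
#define UNITS_WORDS        0
#define UNITS_64BIT_WORDS 99
#define UNITS_KB           1
#define UNITS_MB           2
#define UNITS_GB           3

/* Hypothetical helper (not part of the API): normalize a size to KB.
** The 4-byte / 8-byte word sizes below are assumptions. */
static long units_to_kb(long amount, int units)
{
    switch(units){
    case UNITS_KB:          return amount;
    case UNITS_MB:          return amount * 1024L;
    case UNITS_GB:          return amount * 1024L * 1024L;
    case UNITS_WORDS:       return (amount * 4L) / 1024L;  /* assumed 32-bit words */
    case UNITS_64BIT_WORDS: return (amount * 8L) / 1024L;  /* assumed 64-bit words */
    default:                return -1;                     /* unknown units code */
    }
}
```

A caller holding, say, `mem_req` and `mem_units` from one of the configure structures could pass them straight through this helper.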
#define JOB_SUBMITTED 0
#define JOB_QUEUED 1
#define JOB_RUNNING 2
#define JOB_SUCCESSFUL 0
#define JOB_ABORTED 1
#define JOB_FAILED 2
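Note that the JOB_* codes above form two overlapping value sets (0-2 for run state, 0-2 for completion state), so which table applies depends on whether a status came from the running-job list or the completed-job list. A minimal decoder sketch; the helper name and the running/completed flag are illustrative, not part of the API:

```c
#include <assert.h>
#include <string.h>

/* Values copied from the JOB_* defines in the include file. */
#define JOB_SUBMITTED  0
#define JOB_QUEUED     1
#define JOB_RUNNING    2
#define JOB_SUCCESSFUL 0
#define JOB_ABORTED    1
#define JOB_FAILED     2

/* Hypothetical helper (not part of the API): decode a status code.
** 'completed' selects which of the two overlapping value sets applies. */
static const char *job_status_str(int status, int completed)
{
    if(completed){
        switch(status){
        case JOB_SUCCESSFUL: return "successful";
        case JOB_ABORTED:    return "aborted";
        case JOB_FAILED:     return "failed";
        }
    }else{
        switch(status){
        case JOB_SUBMITTED: return "submitted";
        case JOB_QUEUED:    return "queued";
        case JOB_RUNNING:   return "running";
        }
    }
    return "unknown";
}
```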
#define FILE_STILL_DOWNLOADING 1
#define FILE_DOWNLOAD_COMPLETE 0
/* ---------------------------- */
#define IC_CLEAN 0
#define IC_CANT_GET_ADDRESS -100
#define IC_CANT_OPEN_HOST_FILE -101
#define IC_CANT_ALLOC_MEM -102
#define IC_NOT_ENUF_HOSTS -103
#define IC_CANT_OPEN_QUE_FILE -104
#define IC_MISSING_FIELDS -105
#define IC_CANT_FIND_HOST -106
#define IC_ADD_QUE_ERROR -107
#define IC_NOT_ENUF_QUES -108
#define IC_CANT_FIND_QUE -109
#define IC_NO_QUE_TYPE -110
#define IC_UNKNOWN_QUE_TYPE -111
#define IC_NO_QUE_PATH -112
#define IC_CANT_FIND_MACH -113
#define IC_BAD_MAXTSK -114
#define IC_TOO_FEW_QUE_APPS -115
#define IC_BAD_APP_TYPE -116
#define IC_NOT_ENUF_SUB_HOSTS -117
#define IC_BAD_PORT -118
#define IC_NO_ADMIN -119
#define IC_BAD_ADMIN -120
#define ID_CLEAN 0
#define ID_CANT_OPEN_DISK_FILE -150
#define ID_CANT_GET_ADDRESS -151
#define ID_CANT_ALLOC_MEM -152
#define ID_CANT_FSTAT -153
#define ID_NOT_ENUF_FSYS -154
#define ID_NOT_ENUF_SUBS -155
#define ID_CANT_FIND_HOST -156
#define IU_CLEAN 0
#define IU_CANT_ALLOC_MEM -180
#define TIME_SYNC 99
#define LOG_COMMENT 100
#define LOG_INIT_JOB 101
#define LOG_TASK_SUBMIT 102
#define LOG_TASK_RUN 103
#define LOG_TASK_COMPLETE 104
#define LOG_JOB_FINISHED 105
#define LOG_DISK_INIT 106
#define LOG_DISK_UPDATE 107
#define LOG_CPU_UPDATE 108
#define LOG_DISK_SUMMARY 109
#define LOG_DISK_FS_SUMMARY 110
#define LOG_CPU_SUMMARY 111
#define LOG_LOGLINE 112
#define LOG_FILE_PRESENT 113
#define LOG_TASK_SUSPEND 114
#define LOG_TASK_RESUME 115
#define LOG_RUNNING_FILE 116
#define LOG_RUNNING_DONE 117
#define LOG_MEM_UPDATE 118
#define LOG_MEM_SUMMARY 119
/* ---------------------------- */
typedef struct{
char file_sys_name[PATH_LENGTH];
int disk_used_pct;
int disk_max_size_mb;
}JOB_FS_LIST;
typedef struct{
char filename[PATH_LENGTH];
int sizekb;
}FILE_LIST;
typedef struct{
char org_name[NAME_LENGTH];
char org_name2[NAME_LENGTH];
char host_name[NAME_LENGTH];
unsigned int addr;
int port;
}ORG;
typedef struct{
char prog_name[NAME_LENGTH];
char app_name[NAME_LENGTH];
int maxapptsk;
char args[PATH_LENGTH];
char extension[24];
}PROGS;
typedef struct{
char pseudohost_name[NAME_LENGTH];
char host_name[NAME_LENGTH];
char exepath[PATH_LENGTH];
char rcpath[PATH_LENGTH];
int glob_index;
int sub_index;
int maxapptsk;
char arch[NAME_LENGTH];
unsigned int address;
}HSTS;
typedef struct{
int num_hosts;
HSTS *hosts;
}HOST;
typedef struct{
char pseudohost_name[NAME_LENGTH];
char exepath[PATH_LENGTH];
char rcpath[PATH_LENGTH];
int maxapptsk;
int type;
}APPS;
typedef struct{
char host_name[NAME_LENGTH];
int num_subapps;
APPS subapp[MAX_SUB_APPS];
int maxtsk;
char arch[NAME_LENGTH];
unsigned int address;
}TOT_HST;
typedef struct{
char queue_name1[NAME_LENGTH];
char queue_name2[NAME_LENGTH];
int glob_index;
}QUES;
typedef struct{
int num_queues;
QUES *queues;
}QUEUE;
typedef struct{
char queue_name1[NAME_LENGTH];
char queue_name2[NAME_LENGTH];
HOST sub_host[MAX_APPS];
}TOT_QUE;
typedef struct{
char file_sys_name[NAME_LENGTH];
int model;
int max_size;
int cur_free;
}FILES;
typedef struct{
char pseudohost_name[NAME_LENGTH];
int num_fsystems;
FILES *sub_fsystems;
}TOT_FSYS;
typedef struct{
char sepuser_name[NAME_LENGTH];
}SEP_USER;
/* ---------------------------- */
typedef struct{
int QUE_TYPE;
char ADMIN[128];
int NUM_APPS;
int config_file_version;
char prog_version[32];
/* prog names */
PROGS progs[MAX_APPS];
/* host stuff */
HOST hsts[MAX_APPS];
int total_h;
TOT_HST *total_h_list;
/* que stuff */
char que_install_path[PATH_LENGTH];
char que_options[PATH_LENGTH];
int min_mem_value;
int min_disk_value;
int min_time_value;
QUEUE ques[MAX_APPS];
int total_q;
TOT_QUE *total_q_list;
/* file stuff */
int total_f;
TOT_FSYS *total_f_list;
int total_u;
SEP_USER *total_u_list;
int qmgr_port;
int rmgr_port;
char qmgr_host[256];
}CONFIG;
/************************************************************************/
/* Defines for setting the different values of the config structure */
/************************************************************************/
#define CONFIG_VERSION 1
#define NO_JOB_MON 0
#define START_JOB_MON 1
#define SUBMIT_NOW 0
#define SUBMIT_DELAY 1
#define SUBMIT_SPECIFIC 2
#define SUNDAY 0
#define MONDAY 1
#define TUESDAY 2
#define WEDNESDAY 3
#define THURSDAY 4
#define FRIDAY 5
#define SATURDAY 6
#define MAIL_OFF 0
#define MAIL_ON 1
#define UI_MGR_MAIL 0
#define MASTER_MAIL 1
#define MAX_PROJ_LENGTH 16
typedef struct{
#ifndef CRAY
int pad1;
#endif
int version;
#ifndef CRAY
int pad2;
#endif
int job_mon_flag;
#ifndef CRAY
int pad3;
#endif
int time_type;
#ifndef CRAY
int pad4;
#endif
int delay_hour;
#ifndef CRAY
int pad5;
#endif
int delay_min;
#ifndef CRAY
int pad6;
#endif
int specific_hour;
#ifndef CRAY
int pad7;
#endif
int specific_min;
#ifndef CRAY
int pad8;
#endif
int specific_day;
#ifndef CRAY
int pad9;
#endif
int mail_on_off;
#ifndef CRAY
int pad10;
#endif
int bogus;
#ifndef CRAY
int pad11;
#endif
int mon_file_flag;
#ifndef CRAY
int pad12;
#endif
int copy_link_flag;
#ifndef CRAY
int pad13;
#endif
int job_max_time;
#ifndef CRAY
int pad14;
#endif
int bogus1;
char project_name[128];
char orig_pre_prog[256];
char orig_pos_prog[256];
char exec_pre_prog[256];
char exec_pos_prog[256];
char separate_user[128];
char p3db_file[256];
char email_addr[256];
} Universal_Config_Info;
/* ---------------------------- */
typedef struct {
char host_name[128];
int num_running;
int num_waiting;
int maxtsk;
char stat_str[64];
}Que_List;
typedef struct {
char msg[2048];
}Msg_List;
typedef struct {
int job_number;
char job_name[128];
char job_user[128];
char job_submit_host[128];
char am_host_name[128];
char job_proj[128];
char work_dir[256];
int application;
int port_number;
char job_run_host[128];
char sub_time_str[128];
int jobstatus;
}Job_List;
typedef struct {
char host_name[128];
int cpu_util;
int free_disk;
int avail_mem;
int status;
}Cpu_List;
/************************************************************************/
/* */
/* MSC.Nastran specific configuration structures. */
/* */
/************************************************************************/
/*
** mck 6/12/98 - change to 0, so they don't get added unless you type something ...
**
#define CONFIG_DEFAULT_SMEM ( (DEFAULT_BUFFSIZE-1) * 100 )
#define CONFIG_DEFAULT_MEM 8000000
*/
#define CONFIG_DEFAULT_SMEM 0
#define CONFIG_DEFAULT_MEM 0
#define NAS_NONE 0
#define NO 0
#define YES 1
#define SINGLE 1
#define MULTI 2
typedef struct {
#ifndef CRAY
int pad1;
#endif
int host_index;/* Global Host Index.*/
#ifndef CRAY
int pad2;
#endif
float mem; /* stored as whatever. */
#ifndef CRAY
int pad3;
#endif
float smem; /* stored as whatever. */
#ifndef CRAY
int pad4;
#endif
int num_cpus; /* Number of CPUs on machine. */
/* ... remaining Nas_Config_Host fields lost at a page break ... */
} Nas_Config_Host;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int application_type;/* Should be set to MSC_NASTRAN*/
#ifndef CRAY
int pad2;
#endif
int default_index;/* Index just within Nas List*/
#ifndef CRAY
int pad3;
#endif
int disk_master; /* stored as KB. */
#ifndef CRAY
int pad4;
#endif
int disk_dball; /* stored as KB. */
#ifndef CRAY
int pad5;
#endif
int disk_scratch; /* stored as KB. */
#ifndef CRAY
int pad6;
#endif
int disk_units; /* see defines below */
#ifndef CRAY
int pad7;
#endif
int scr_run_flag;
#ifndef CRAY
int pad8;
#endif
int save_db_flag;
#ifndef CRAY
int pad9;
#endif
int copy_db_flag;
#ifndef CRAY
int pad10;
#endif
float mem_req; /* stored as whatever */
#ifndef CRAY
int pad11;
#endif
int mem_units;
#ifndef CRAY
int pad12;
#endif
int smem_units;
#ifndef CRAY
int pad13;
#endif
int num_hosts;
#ifndef CRAY
int pad14;
#endif
int bogus;
char default_host[128];/* uihost_name is saved here*/
char default_queue[128];/* queue_name1 is saved here*/
char mem_req_str[64];
char extra_arg[256];
Nas_Config_Host *host_ptr;
} Nas_Configure_Info;
typedef struct {
#ifndef CRAY
int pad1;
#endif
/* ... remaining Nas_Submit_Info fields lost at a page break ... */
} Nas_Submit_Info;
/************************************************************************/
/* */
/* ABAQUS specific configuration structures. */
/* */
/************************************************************************/
/*
** mck - 6/12/98 change to 0 so they don't get added unless you type something ...
**
#define DEFAULT_PRE_BUF 400000
#define DEFAULT_PRE_MEM 1000000
#define DEFAULT_MAIN_BUF 2000000
#define DEFAULT_MAIN_MEM 6000000
*/
#define DEFAULT_PRE_BUF 0
#define DEFAULT_PRE_MEM 0
#define DEFAULT_MAIN_BUF 0
#define DEFAULT_MAIN_MEM 0
#define ABA_NONE 0
#define ABA_RESTART 1
#define ABA_CHECK 2
typedef struct {
#ifndef CRAY
int pad1;
#endif
int host_index; /* Global Host Index. */
#ifndef CRAY
int pad2;
#endif
int num_cpus; /* Number of CPUs on machine. */
#ifndef CRAY
int pad3;
#endif
float pre_buf;/* stored as whatever.*/
#ifndef CRAY
int pad4;
#endif
float pre_mem;/* stored as whatever.*/
#ifndef CRAY
int pad5;
#endif
float main_buf;/* stored as whatever.*/
#ifndef CRAY
int pad6;
#endif
float main_mem;/* stored as whatever.*/
char pre_buf_str[64];
char pre_mem_str[64];
char main_buf_str[64];
char main_mem_str[64];
char host_name[128]; /* Real Host Name (host_name) */
} Aba_Config_Host;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int application_type; /* Should be set to HKS_ABAQUS */
#ifndef CRAY
int pad2;
#endif
int default_index;/* Index just within Aba List*/
#ifndef CRAY
int pad3;
#endif
int copy_res_file;
#ifndef CRAY
int pad4;
#endif
int save_res_file;
#ifndef CRAY
int pad5;
#endif
float mem_req; /* stored as whatever */
#ifndef CRAY
int pad6;
#endif
int mem_units;/* One of the defines above*/
#ifndef CRAY
int pad7;
#endif
int disk_units;/* One of the defines above*/
#ifndef CRAY
int pad8;
#endif
int space_req;/* stored as KB.*/
#ifndef CRAY
int pad9;
#endif
int append_fil;/* 0 = no 1 = yes*/
#ifndef CRAY
int pad10;
#endif
int num_hosts;
#ifndef CRAY
int pad11;
#endif
int use_standard; /* 0 = no 1 = yes */
char default_host[128];/* uihost_name is saved here */
char default_queue[128];/* queue_name1 is saved here */
char user_sub[128];
char mem_req_str[64];
char extra_arg[256];
Aba_Config_Host *host_ptr;
} Aba_Configure_Info;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int submit_index;/* Index just within Aba list*/
#ifndef CRAY
int pad2;
#endif
int specific_index; /* see description below */
#ifndef CRAY
int pad3;
#endif
int restart;
#ifndef CRAY
int pad4;
#endif
int bogus;
char aba_input_deck[256]; /* full path and filename */
char restart_file[256];
} Aba_Submit_Info;
/* The "specific_index" variable is only used when the queuing type is */
/* not P3_QUEUE (i.e. it is LSF). If it is -1, that means the */
/* task can be submitted to any host in the defined queue. If */
/* "specific_index" has a value other than -1, then it is an index into */
/* the host list (the host list for the application, not the global index) */
/* of the specific host the task should be submitted to. */
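The convention described above (-1 means any host in the queue, otherwise an application-local host index) can be captured in a small validation helper. The function name and its range check are illustrative only, not part of the API:

```c
#include <assert.h>

/* Hypothetical helper (not part of the API): validate a specific_index
** per the convention documented above. -1 means "any host in the defined
** queue"; otherwise it must index the application's own host list
** (0 .. num_hosts-1). */
static int specific_index_valid(int specific_index, int num_hosts)
{
    if(specific_index == -1)
        return 1;                       /* any host in the defined queue */
    return specific_index >= 0 && specific_index < num_hosts;
}
```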
/************************************************************************/
/* */
/* MSC.Marc specific configuration structures. */
/* */
/************************************************************************/
#define MAR_NONE 0
#define MAR_RESTART 1
typedef struct {
#ifndef CRAY
int pad1;
#endif
int host_index; /* Global Host Index. */
#ifndef CRAY
int pad2;
#endif
int num_cpus; /* Number of CPUs on machine. */
#ifndef CRAY
int pad3;
#endif
int bogus;
char host_name[128]; /* Real Host Name (host_name) */
} Mar_Config_Host;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int application_type; /* Should be set to MSC_MARC */
#ifndef CRAY
int pad2;
#endif
int default_index; /* Index just within Mar List */
#ifndef CRAY
int pad3;
#endif
int disk_units; /* One of the defines above */
#ifndef CRAY
int pad4;
#endif
int space_req; /* stored as KB. */
#ifndef CRAY
int pad5;
#endif
/* ... remaining Mar_Configure_Info fields lost at a page break ... */
} Mar_Configure_Info;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int submit_index; /* Index just within Mar list */
#ifndef CRAY
int pad2;
#endif
int rid; /* Flag: restart file (-rid filename) */
#ifndef CRAY
int pad3;
#endif
int pid; /* Flag: post_name (-pid filename) */
#ifndef CRAY
int pad4;
#endif
int prog; /* Flag: program_name (-prog progname) */
#ifndef CRAY
int pad5;
#endif
int user; /* Flag: user_subroutine_name (-user subname)*/
#ifndef CRAY
int pad6;
#endif
int save; /* Flag: save executable (0/1) (-save yes/no) */
#ifndef CRAY
int pad7;
#endif
int vf; /* Flag: viewfactor file (-vf vfname) */
#ifndef CRAY
int pad8;
#endif
int nprocd; /* Number processes or domains (-nprocd #) */
#ifndef CRAY
int pad9;
#endif
int host; /* Flag: hostfile (-host hostfilename) */
#ifndef CRAY
int pad10;
#endif
int iam; /* Flag: iam flag for licensing (-iam iamtag) */
#ifndef CRAY
int pad11;
#endif
int specific_index; /* see description below */
/* All files should have full path and filename */
char datfile_name[256]; /* input deck */
char restart_name[256]; /* restart file */
char post_name[256]; /* post file */
char program_name[256]; /* program file */
char user_subroutine_name[256]; /* user subroutine file */
char viewfactor[256]; /* viewfactor file */
char hostfile[256]; /* hostfile */
char iamval[256]; /* iam licensing tag - no file involved */
} Mar_Submit_Info;
/* The "specific_index" variable is only used when the queuing type is */
/* not P3_QUEUE (i.e. it is LSF). If it is -1, that means the */
/* task can be submitted to any host in the defined queue. If */
/* "specific_index" has a value other than -1, then it is an index into */
/* the host list (the host list for the application, not the global index) */
/* of the specific host the task should be submitted to. */
/************************************************************************/
/* */
/* GENERAL specific configuration structures. */
/* */
/************************************************************************/
typedef struct {
#ifndef CRAY
int pad1;
#endif
int host_index; /* Global Host Index. */
#ifndef CRAY
int pad2;
#endif
int bogus;
char host_name[128]; /* Real Host Name (host_name) */
} Gen_Config_Host;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int application_type;/* Should be set to GEN - RANGE */
#ifndef CRAY
int pad2;
#endif
int default_index;/* Index just within Gen List*/
#ifndef CRAY
int pad3;
#endif
int disk_units;/* One of the defines above*/
#ifndef CRAY
int pad4;
#endif
int space_req;/* stored as KB.*/
#ifndef CRAY
int pad5;
#endif
int mem_units;/* One of the defines above*/
#ifndef CRAY
int pad6;
#endif
float mem_req;/* stored as whatever */
#ifndef CRAY
int pad7;
#endif
int num_hosts;
#ifndef CRAY
int pad8;
#endif
int translate_input;
char default_host[128];/* uihost_name is saved here */
char default_queue[128];/* queue_name1 is saved here */
char cmd_line[256]; /* command line to run with */
char mon_file[256]; /* log file to monitor */
char mem_req_str[64];
Gen_Config_Host *host_ptr;
} Gen_Configure_Info;
typedef struct {
#ifndef CRAY
int pad1;
#endif
int submit_index;/* Index just within Gen list*/
#ifndef CRAY
int pad2;
#endif
int specific_index; /* see description below */
char gen_input_deck[256]; /* full path and filename */
} Gen_Submit_Info;
/* The "specific_index" variable is only used when the queuing type is */
/* not MSC_QUEUE (i.e. it is LSF). If it is -1, that means the */
/* task can be submitted to any host in the defined queue. If */
/* "specific_index" has a value other than -1, then it is an index into */
/* the host list (the host list for the application, not the global index) */
/* of the specific host the task should be submitted to. */
/* ---------------------------- */
/*
** api globals ...
*/
#ifdef AM_INITIALIZE
AM_EXTERN int gbl_nwrk_timeout_secs = BLOCK_TIMEOUT;
AM_EXTERN int api_use_this_host = 0;
#else
AM_EXTERN int gbl_nwrk_timeout_secs;
AM_EXTERN int api_use_this_host;
#endif
AM_EXTERN CONFIG *cfg;
AM_EXTERN ORG *org;
AM_EXTERN int num_orgs;
AM_EXTERN Universal_Config_Info ui_config;
AM_EXTERN Nas_Configure_Info nas_config;
AM_EXTERN Nas_Submit_Info nas_submit;
AM_EXTERN Aba_Configure_Info aba_config;
AM_EXTERN Aba_Submit_Info aba_submit;
AM_EXTERN Mar_Configure_Info mar_config;
AM_EXTERN Mar_Submit_Info mar_submit;
AM_EXTERN Gen_Configure_Info gen_config[MAX_GEN_APPS];
AM_EXTERN Gen_Submit_Info gen_submit[MAX_GEN_APPS];
AM_EXTERN char api_this_host[256];
AM_EXTERN char api_user_name[256];
AM_EXTERN char api_application_name[64];
AM_EXTERN int api_application_index;
/*
* api functions ...
*/
/*
* init - MUST BE FIRST api_* call made by application ...
*/
extern int api_init(char *out_str);
/*
* just to set the global timeout for communication ...
*/
extern int api_get_gbl_timeout();
extern int api_set_gbl_timeout(int secs);
/*
* reads an org.cfg file if possible and builds the ORG struct for list of QueMgrs ...
*/
extern ORG *api_read_orgs(char *dir,int *num_orgs,int *status);
/*
* contacts running QueMgr and builds cfg struct ...
*/
extern CONFIG *api_get_config(char *qmgr_host,int qmgr_port,int *status,char *out_str);
/*
* reads *.cfg files and builds cfg struct (No QueMgr process involved) ...
*/
extern CONFIG *api_read_config(CONFIG *cfg,char *path,char *orgname,int *status,char
*out_str);
/*
* reads *.cfg files (without building path) and builds cfg struct (No QueMgr process involved) ...
*/
extern CONFIG *api_read_config_fullpath(CONFIG *cfg,char *path,int *status,char *out_str);
/*
* writes *.cfg files from cfg struct (No QueMgr process involved) ...
*/
extern void api_write_config(CONFIG *cfg,char *path,char *orgname,int *status,char
*out_str);
/*
* tries to contact running QueMgr and check if timestamp is ok ...
* returns 0 if all ok ...
*/
extern int api_ping_quemgr(char *qmgr_host,int qmgr_port,unsigned int timestamp,char
*out_str);
/*
 * initializes UI config structs (nas, aba, gen[] submit and config) ...
*/
extern void api_init_uiconfig(CONFIG *cfg);
/*
* gets logged in user name
*/
extern char *api_getlogin(void);
/*
* checks on job data deck and returns possible question for UI to ask, setting answer for
* submit call below ...
*/
extern int api_check_job(char *ques_text,char *ans1_text,char *ans2_text,char *out_str);
/*
* submits job (needs filled in UI config and submit structs as well as global cfg struct) ...
*/
extern int api_submit_job(char *qmgr_host,int qmgr_port,char *jobname,int background,int
*job_number,char *base_path,int *jmgr_port,int answer,char *out_str);
/*
* gets list of all running jobs from QueMgr ...
*/
extern Job_List *api_get_runningjob_list(char *qmgr_host,int qmgr_port,int *job_count,char
*out_str);
/*
* gets initial socket for later on api_mon_job_* calls ...
*/
/*
* gets all messages of sev level and lower from JobMgr ...
*/
extern Msg_List *api_mon_job_msgs(int msg_sock,char *ui_host,int msg_port,int sev_level,
int *num_msgs,char *out_str);
/*
* gets current job statistics and run status ...
*/
extern JOB_FS_LIST *api_mon_job_stats(int msg_sock,char *ui_host,int msg_port,
int *cpu,int *pct_cpu,
int *mem, int *pct_mem,
int *dsk,int *pct_dsk,
int *elapsed,int *status,
int *num_fs,int *retcod,
char *out_str);
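api_mon_job_stats returns one JOB_FS_LIST entry per file system in use (num_fs of them). Since each entry carries a percentage and a maximum size, the absolute usage can be derived. The sketch below uses a local copy of the struct from this include file; PATH_LENGTH is an assumed value here, and the helper itself is illustrative, not part of the API:

```c
#include <assert.h>

/* Local copy of JOB_FS_LIST from the include file; PATH_LENGTH is an
** assumed value here. */
#define PATH_LENGTH 256
typedef struct{
    char file_sys_name[PATH_LENGTH];
    int disk_used_pct;
    int disk_max_size_mb;
}JOB_FS_LIST;

/* Hypothetical helper (not part of the API): absolute MB in use on one
** file system, derived from the percentage and the maximum size. */
static int fs_used_mb(const JOB_FS_LIST *fs)
{
    return (fs->disk_max_size_mb * fs->disk_used_pct) / 100;
}
```

A monitoring loop would call this for each of the num_fs entries returned.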
/*
* gets last 100 lines of job mon file ...
*/
extern char *api_mon_job_mon(int msg_sock,char *ui_host,int msg_port,char *out_str);
/*
* returns list of files active while job is running ...
*/
extern FILE_LIST *api_mon_job_running_files_list(int msg_sock,char *ui_host,int msg_port,
int *num_files,char *out_str);
/*
** returns general info about a job from a mon_file ...
*/
extern Job_List *api_com_job_gen(char *sub_host,char *mon_file,char *out_str);
/*
* gets job statistics and run status from mon file
*/
extern JOB_FS_LIST *api_com_job_stats(char *sub_host,char *mon_file,
int *cpu,int *pct_cpu_avg,int *pct_cpu_max,
int *mem,int *pct_mem_avg,int *pct_mem_max,
int *dsk,int *pct_dsk_avg,int *pct_dsk_max,
int *elapsed,int *status,
int *num_fs,int *retcod,
char *out_str);
/*
* gets last 100 lines of job mon file ...
*/
extern char *api_com_job_mon(char *sub_host,char *mon_file,char *out_str);
/*
* returns list of files from mon file ...
*/
extern FILE_LIST *api_com_job_received_files_list(char *sub_host,char *mon_file,int
*num_files,char *out_str);
/*
* starts file download ...
*/
extern int api_download_file_start(int msg_sock,int job_number,char *filename,char *out_str);
/*
 * checks on download file status ...
*/
extern int api_download_file_check(int job_number,char *filename,int *filesizekb);
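The start/check pair above suggests a polling pattern: call api_download_file_start once, then call api_download_file_check until it stops returning FILE_STILL_DOWNLOADING. The sketch below replaces the real check call with a mock so the control flow is self-contained; mock_check and its countdown are stand-ins, not part of the API:

```c
#include <assert.h>

/* Values copied from the FILE_* defines in the include file. */
#define FILE_STILL_DOWNLOADING 1
#define FILE_DOWNLOAD_COMPLETE 0

/* Mock stand-in for api_download_file_check(): reports "still
** downloading" a fixed number of times, then "complete". */
static int mock_check(int *remaining, int *filesizekb)
{
    if(*remaining > 0){
        (*remaining)--;
        return FILE_STILL_DOWNLOADING;
    }
    *filesizekb = 1024;                 /* pretend final size */
    return FILE_DOWNLOAD_COMPLETE;
}

/* Polling loop shaped like a real api_download_file_check() caller;
** returns the number of polls it took to observe completion. */
static int poll_until_complete(int *remaining, int *filesizekb)
{
    int polls = 0;
    while(mock_check(remaining, filesizekb) == FILE_STILL_DOWNLOADING)
        polls++;
    return polls + 1;                   /* count the final, completing poll */
}
```

A real caller would sleep between polls rather than spin, and would pass the job number and filename through to the actual API call.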
/*
* returns all jobs for all hosts and apps from a running QueMgr ...
*/
extern Que_List *api_mon_que_full(char *qmgr_host,int qmgr_port,int *num_tsks,char
*out_str);
/*
* gets last 4k bytes of QueMgr log file ...
*/
extern char *api_mon_que_log(char *qmgr_host,int qmgr_port,char *out_str);
/*
* gets all hosts statistics ...
*/
extern Cpu_List *api_mon_que_cpu(char *qmgr_host,int qmgr_port,char *out_str);
/*
* gets list of last 25 or so completed jobs from QueMgr ...
*/
extern Job_List *api_get_completedjob_list(char *qmgr_host,int qmgr_port,int *job_count,char
*out_str);
/*
* abort job ...
*/
extern int api_abort_job(char *qmgr_host,int qmgr_port,int job_number,char *job_user,char
*out_str);
/*
* reads rc file and overrides all UI settings found ..
*/
extern int api_rcfile_read(char *rcfile,char *out_str);
/*
 * writes rc file from UI settings ...
*/
extern int api_rcfile_write( char *rcfile,char *out_str);
extern int api_rcfile_write2(FILE *stream,int short_or_long);
/*
* calls admin test procedure(s) and returns status and msgs ...
*/
extern char *api_admin_test(char *orgpth,char *orgnam,int rport,int *status,char *out_str);
/*
* just to get home dir ...
*/
extern void api_get_home_dir(char *home_dir);
/*
* to reconfig quemgr ...
*/
extern char *api_reconfig_quemgr(char *qmgr_host,int quemgr_port,int *status,char *out_str);
/*
* license checkout and return ...
*/
extern int api_checkout_license(char *license_file);
extern void api_release_license(void);
#ifdef __cplusplus
}
#endif
#endif /* _AMAPI */
Example Interface
This is the actual source file of the TxtMgr, which uses the Analysis Manager API and the previously
shown api.h include file.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#ifndef WINNT
# include <unistd.h>
# include <sys/time.h>
# include <sys/uio.h>
# include <sys/socket.h>
# include <netinet/in.h>
# include <netdb.h>
#else
# include <winsock.h>
#endif
#include <time.h>
#define AM_INITIALIZE 1
#include "api.h"
int dont_connect = 0;
int has_qmgr_host;
int has_qmgr_port;
int has_org;
int has_orgpath;
char lic_file[256];
char org_name[256];
char binpath[256];
char orgpath[256];
char qmgr_host[256];
int qmgr_port;
int rmgr_port;
int msg_sock = -1;
int msg_port = -1;
int msg_sock_job = -1;
int auto_startup;
char sys_rcf_file[256];
char usr_rcf_file[256];
int has_cmd_rcf;
char cmd_rcf_file[256];
/* ==================== */
#define SUBMIT 1
#define ABORT 2
#define WATCHJOB 3
#define WATCHQUE_LOG 4
#define WATCHQUE_FULL 5
#define WATCHQUE_CPU 6
#define LISTCOMP 7
#define RCFILEWRITE 8
#define ADMINTEST 9
#define RECONFIG 10
#define QUIT 11 /* must be highest defined number type */
#define NOTVALID 9999
/* ==================== */
#ifdef WINNT
BOOL console_event_func(DWORD dwEvent)
{
if(dwEvent == CTRL_LOGOFF_EVENT)
return TRUE;
#ifdef DEBUG
fprintf(stderr,"\nbye ...");
#endif
fprintf(stderr,"\n");
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return FALSE;
}
#endif
/* ==================== */
/* Signature and declarations reconstructed: the original lines for
** leafname() were lost at a page break in this listing. */
void leafname(char *input_string,char *output_string)
{
int string_length;
int found;
int i;
char temp_string[256];
/*********************************************************************/
/* First get rid of the leading path (if any). */
/*********************************************************************/
string_length = strlen(input_string);
if(string_length < 1){
output_string[0] = '\0';
return;
}
found = 0;
for(i = string_length - 1; i >= 0; i--){
if( (input_string[i] == '/') || (input_string[i] == '\\') ){
found = 1;
strcpy(temp_string, &input_string[i + 1]);
break;
}
}
if(found == 0)
strcpy(temp_string, input_string);
/*********************************************************************/
/* Now get rid of the extension (if any). */
/*********************************************************************/
string_length = strlen(temp_string);
if(string_length < 1){
output_string[0] = '\0';
return;
}
strcpy(output_string, temp_string);
return;
}
/* ==================== */
int submit_job(void)
{
int background;
int i;
int lenc;
int submit_index;
int job_number;
int jmgr_port;
char job_name[256];
int mem;
char job_fullname[256];
int srtn;
char out_str[2048];
int ans;
char ques_text[512];
char ans1_text[32];
char ans2_text[32];
background = 0;
/*
** if not auto_startup, ask for details ...
*/
if(auto_startup == 0){
background = 1;
/*
** ask jobname ...
*/
printf("\nEnter job name: ");
scanf("%s",job_name);
/*
** ask memory ...
*/
printf("\nEnter memory (in set units): ");
scanf("%d",&mem);
/*
** print list of hosts from QueMgr ...
** and ask for which to submit to ...
*/
if(cfg->QUE_TYPE == MSC_QUEUE){
printf("\nhosts:\n");
printf("index name\n");
printf("------------\n");
for(i=0;i<cfg->hsts[api_application_index-1].num_hosts;i++){
printf("%-5d %s\n",i+1,cfg->hsts[api_application_index-1].hosts[i].pseudohost_name);
}
printf("\nEnter host index: ");
scanf("%d",&submit_index);
submit_index--;
printf("\n");
}else{
printf("\nqueues:\n");
printf("index name\n");
printf("------------\n");
for(i=0;i<cfg->ques[api_application_index-1].num_queues;i++){
printf("%-5d %s -> %s\n",i+1,cfg->ques[api_application_index-1].queues[i].queue_name1,
cfg->ques[api_application_index-1].queues[i].queue_name2);
}
printf("\nEnter queue index: ");
scanf("%d",&submit_index);
submit_index--;
printf("\n");
}
/*
** set up config/submit struct info ...
*/
strcpy(job_fullname,job_name);
lenc = (int)strlen(job_fullname);
for(i=0;i<lenc;i++){
if(job_fullname[i] == '\\')
job_fullname[i] = '/';
}
leafname(job_fullname,job_name);
if(api_application_index == MSC_NASTRAN){
sprintf(nas_submit.nas_input_deck,"%s",job_fullname);
nas_config.mem_req = (float)mem;
nas_submit.submit_index = submit_index;
}else if(api_application_index == HKS_ABAQUS){
sprintf(aba_submit.aba_input_deck,"%s",job_fullname);
aba_config.mem_req = (float)mem;
aba_submit.submit_index = submit_index;
}else if(api_application_index == MSC_MARC){
sprintf(mar_submit.datfile_name,"%s",job_fullname);
mar_config.mem_req = (float)mem;
mar_submit.submit_index = submit_index;
}else{
sprintf(gen_submit[api_application_index-GENERAL].gen_input_deck,"%s",job_fullname);
gen_config[api_application_index-GENERAL].mem_req = (float)mem;
gen_submit[api_application_index-GENERAL].submit_index = submit_index;
}
}else{
/*
** leave all config and submit struct settings alone, as
** the rcf/override settings are ASSUMED to be all correct ...
** (just get job_name for use below ...)
*/
if(api_application_index == MSC_NASTRAN){
leafname(nas_submit.nas_input_deck,job_name);
}else if(api_application_index == HKS_ABAQUS){
leafname(aba_submit.aba_input_deck,job_name);
}else if(api_application_index == MSC_MARC){
leafname(mar_submit.datfile_name,job_name);
}else{
leafname(gen_submit[api_application_index-GENERAL].gen_input_deck,job_name);
}
}
ans = NO;
srtn = api_check_job(ques_text,ans1_text,ans2_text,out_str);
srtn = api_submit_job(qmgr_host,qmgr_port,job_name,background,&job_number,binpath,
&jmgr_port,ans,out_str);
if(out_str[0] != '\0'){
printf("%s",out_str);
}
return srtn;
}
/* ==================== */
int abort_job(void)
{
int srtn;
Job_List *jr_ptr = NULL;
int num_running_jobs;
int job_number;
char j_numstr[100];
int found;
char job_user[256];
char job_name[256];
char proj_name[256];
int i;
char out_str[2048];
jr_ptr = api_get_runningjob_list(qmgr_host,qmgr_port,&num_running_jobs,out_str);
if(num_running_jobs == 0){
printf("\nNo active jobs found\n");
return 0;
}
job_number = -1;
if(auto_startup == 0){
/*
** present list to user ...
*/
printf("\nRunning jobs ....\n\n");
printf("num jobname jobuser project amhost runhost subtime\n");
printf("----------------------------------------------------------------------------------------------------------\n");
for(i=0;i<num_running_jobs;i++){
printf("%-4d %-20s %-20s %-20s %-20s %-20s %-20s\n",jr_ptr[i].job_number,
jr_ptr[i].job_name,
jr_ptr[i].job_user,
jr_ptr[i].job_proj,
jr_ptr[i].am_host_name,
jr_ptr[i].job_run_host,
jr_ptr[i].sub_time_str);
}
for(i=0;i<100;i++)
j_numstr[i] = '\0';
printf("\nEnter job number: ");
scanf("%s",j_numstr);
if( (j_numstr[0] == 'q') || (j_numstr[0] == 'Q') || (j_numstr[0] == '0') ){
free(jr_ptr);
return 0;
}
sscanf(j_numstr,"%d",&job_number);
found = 0;
for(i=0;i<num_running_jobs;i++){
if(job_number == jr_ptr[i].job_number){
found = 1;
break;
}
}
if(!found){
printf("Error, job number %d not in list\n",job_number);
free(jr_ptr);
return 1;
}
printf("\n");
}else{
if(api_application_index == MSC_NASTRAN){
leafname(nas_submit.nas_input_deck,job_name);
}else if(api_application_index == HKS_ABAQUS){
leafname(aba_submit.aba_input_deck,job_name);
}else if(api_application_index == MSC_MARC){
leafname(mar_submit.datfile_name,job_name);
}else{
leafname(gen_submit[api_application_index-GENERAL].gen_input_deck,job_name);
}
strcpy(proj_name,ui_config.project_name);
/*
** search list for match and set job_number ...
*/
job_number = -1;
for(i=0;i<num_running_jobs;i++){
if(strcmp(jr_ptr[i].job_name,job_name) == 0){
if(strcmp(jr_ptr[i].job_proj,proj_name) == 0){
job_number = jr_ptr[i].job_number;
break;
}
}
}
}
strcpy(job_user,api_user_name);
srtn = api_abort_job(qmgr_host,qmgr_port,job_number,job_user,out_str);
if(out_str[0] != '\0'){
printf("%s",out_str);
}
free(jr_ptr);
return srtn;
}
/* ==================== */
int watch_job(void)
{
Job_List *jr_ptr = NULL;
int num_running_jobs;
int check;
char job_host[128];
int job_port = 0;
char j_numstr[100];
int found;
int srtn;
char *log_str;
char job_name[256];
char proj_name[256];
char sfile[256];
int i;
int job_number;
int sev_level;
char out_str[2048];
int num_msgs;
Msg_List *msg_ptr = NULL;
int cpu, pct_cpu;
int mem, pct_mem;
int dsk, pct_dsk;
int elapsed;
int status;
FILE_LIST *file_list = NULL;
int num_files = 0;
int file_index;
int sizekb;
int num_fs;
JOB_FS_LIST *job_fs_list;
jr_ptr = api_get_runningjob_list(qmgr_host,qmgr_port,&num_running_jobs,out_str);
if(num_running_jobs == 0){
printf("\nNo active jobs found\n");
return 0;
}
if(jr_ptr == NULL){
return 1;
}
job_number = -1;
if(auto_startup == 0){
/*
** present list to user ...
*/
printf("\nRunning jobs ....\n\n");
printf("num jobname jobuser project amhost runhost subtime\n");
printf("----------------------------------------------------------------------------------------------------------\n");
for(i=0;i<num_running_jobs;i++){
printf("%-4d %-20s %-20s %-20s %-20s %-20s %-20s\n",jr_ptr[i].job_number,
jr_ptr[i].job_name,
jr_ptr[i].job_user,
jr_ptr[i].job_proj,
jr_ptr[i].am_host_name,
jr_ptr[i].job_run_host,
jr_ptr[i].sub_time_str);
}
for(i=0;i<100;i++)
j_numstr[i] = '\0';
printf("\nEnter job number: ");
scanf("%s",j_numstr);
if( (j_numstr[0] == 'q') || (j_numstr[0] == 'Q') || (j_numstr[0] == '0') ){
free(jr_ptr);
return 0;
}
sscanf(j_numstr,"%d",&job_number);
found = 0;
for(i=0;i<num_running_jobs;i++){
if(job_number == jr_ptr[i].job_number){
job_port = jr_ptr[i].port_number;
strcpy(job_host,jr_ptr[i].job_submit_host);
found = 1;
break;
}
}
if(!found){
printf("Error, job number %d not in list\n",job_number);
free(jr_ptr);
return 1;
}
}else{
if(api_application_index == MSC_NASTRAN){
leafname(nas_submit.nas_input_deck,job_name);
}else if(api_application_index == HKS_ABAQUS){
leafname(aba_submit.aba_input_deck,job_name);
}else if(api_application_index == MSC_MARC){
leafname(mar_submit.datfile_name,job_name);
}else{
leafname(gen_submit[api_application_index-GENERAL].gen_input_deck,job_name);
}
strcpy(proj_name,ui_config.project_name);
/*
** search list for match and set job_number ...
*/
job_number = -1;
for(i=0;i<num_running_jobs;i++){
if(strcmp(jr_ptr[i].job_name,job_name) == 0){
if(strcmp(jr_ptr[i].job_proj,proj_name) == 0){
job_port = jr_ptr[i].port_number;
strcpy(job_host,jr_ptr[i].job_submit_host);
break;
}
}
}
free(jr_ptr);
#ifdef DEBUG
fprintf(stderr,"posa\n");
#endif
/*
** get msg socket if needed ...
*/
if( (msg_sock < 0) || (msg_sock_job != job_number) ){
#ifdef DEBUG
fprintf(stderr,"posa1\n");
#endif
msg_sock = api_mon_job_init(job_host,job_port,&msg_port,out_str);
if(msg_sock < 0){
msg_port = -1;
msg_sock_job = -1;
printf("%s",out_str);
return 1;
}else{
msg_sock_job = job_number;
}
}
#ifdef DEBUG
fprintf(stderr,"posb\n");
#endif
/*
** get severity if not auto ...
*/
sev_level = 3;
if(auto_startup == 0){
#ifdef MSGPOP
if(api_application_index == MSC_NASTRAN){
printf("Enter message severity level >=: ");
scanf("%d",&sev_level);
printf("\n");
}
if(sev_level < 0) sev_level = 0;
if(sev_level > 3) sev_level = 3;
#endif
}
#ifdef DEBUG
fprintf(stderr,"posc\n");
#endif
/*
** get monitor info ...
*/
msg_ptr = api_mon_job_msgs(msg_sock,api_this_host,msg_port,sev_level,&num_msgs,out_str);
if(num_msgs < 0){
printf("%s",out_str);
return 2;
}
#ifdef DEBUG
fprintf(stderr,"posd\n");
#endif
if(msg_ptr == NULL){
printf("%s",out_str);
return 3;
}else if(num_msgs == 0){
printf("\nNo messages at this time ...\n\n");
}else{
/*
** msg format is "severity@sevbuf@msgtxt" ... sevbuf is string "NULL" when severity=0
*/
for(i=0;i<num_msgs-1;i++){
printf(" %s\n",msg_ptr[i].msg);
}
free(msg_ptr);
}
#ifdef DEBUG
fprintf(stderr,"pose\n");
#endif
job_fs_list =
api_mon_job_stats(msg_sock,api_this_host,msg_port,&cpu,&pct_cpu,&mem,&pct_mem,
&dsk,&pct_dsk,&elapsed,&status,&num_fs,&srtn,out_str);
if(srtn != 0){
printf("%s",out_str);
}else{
printf("job stats:\n");
if(status == JOB_SUBMITTED){
printf("cpu=%d, %%cpu=%d, mem=%d, %%mem=%d, disk=%d, %%disk=%d, elapsed=%d, status=%s\n",
cpu,pct_cpu,mem,pct_mem,dsk,pct_dsk,elapsed,"submitted");
}else if(status == JOB_QUEUED){
printf("cpu=%d, %%cpu=%d, mem=%d, %%mem=%d, disk=%d, %%disk=%d, elapsed=%d, status=%s\n",
cpu,pct_cpu,mem,pct_mem,dsk,pct_dsk,elapsed,"queued");
}else if(status == JOB_RUNNING){
printf("cpu=%d, %%cpu=%d, mem=%d, %%mem=%d, disk=%d, %%disk=%d, elapsed=%d, status=%s\n",
cpu,pct_cpu,mem,pct_mem,dsk,pct_dsk,elapsed,"running");
}else{
printf("cpu=%d, %%cpu=%d, mem=%d, %%mem=%d, disk=%d, %%disk=%d, elapsed=%d, status=%s\n",
cpu,pct_cpu,mem,pct_mem,dsk,pct_dsk,elapsed,"unknown");
}
/*
printf("total num filesys = %d\n",num_fs);
for(i=0;i<num_fs;i++){
fprintf(stdout," %s max=%d usage=%d\n",job_fs_list[i].file_sys_name,
job_fs_list[i].disk_max_size_mb,
job_fs_list[i].disk_used_pct);
}
*/
printf("\n");
#ifdef DEBUG
fprintf(stderr,"posf\n");
#endif
}
log_str = api_mon_job_mon(msg_sock,api_this_host,msg_port,out_str);
if(log_str == NULL){
printf("%s",out_str);
}else{
printf("mon file contents:\n");
printf("%s",log_str);
free(log_str);
}
file_list =
api_mon_job_running_files_list(msg_sock,api_this_host,msg_port,&num_files,out_str);
#ifdef DEBUG
printf("api_mon_job_running_files_list: num_files = %d\n",num_files);
#endif
if(num_files == 0)
return 0;
for(i=0;i<num_files;i++){
if(i == 0){
printf("\ndownloadable files: (use q to quit)\n");
printf("index job file size (kb)\n");
printf("--------------------------------------------------\n");
}
get_leaf_and_extention(file_list[i].filename,sfile);
printf("%-10d%-30s %d\n",i+1,sfile,file_list[i].sizekb);
}
for(i=0;i<100;i++)
j_numstr[i] = '\0';
printf("\nEnter file index to download: ");
scanf("%s",j_numstr);
if( (j_numstr[0] == 'q') || (j_numstr[0] == 'Q') || (j_numstr[0] == '0') ){
free(file_list);
return 0;
}
sscanf(j_numstr,"%d",&file_index);
if(file_index == 0){
free(file_list);
return 0;
}
check = 0;
if(file_index < 0){
check = 1;
file_index *= -1;
}
if(check){
srtn = api_download_file_check(job_number,file_list[file_index-1].filename,&sizekb);
#ifdef DEBUG
printf("check returns %d\n",srtn);
#endif
if(srtn == FILE_STILL_DOWNLOADING){
printf("File %s is still being transferred\n",file_list[file_index-1].filename);
}else if(srtn == FILE_DOWNLOAD_COMPLETE){
printf("File %s transfer complete !\n",file_list[file_index-1].filename);
}
}else{
srtn = api_download_file_start(msg_sock,job_number,file_list[file_index-1].filename,out_str);
if(srtn != 0){
printf("File download (%s) start failed, error = %d (%s)",
file_list[file_index-1].filename,srtn,out_str);
}
}
free(file_list);
return 0;
}
/* ==================== */
int watch_que(int which)
{
if(which == WATCHQUE_LOG){
log_str = api_mon_que_log(qmgr_host,qmgr_port,out_str);
if(log_str == NULL){
printf("%s",out_str);
return 1;
}
printf("\n");
printf("%s",log_str);
free(log_str);
return 0;
}else if(which == WATCHQUE_FULL){
ql_ptr = api_mon_que_full(qmgr_host,qmgr_port,&num_tasks,out_str);
if(num_tasks == 0){
printf("\nNo active jobs found\n");
return 0;
}
for(i=0;i<num_tasks;i++){
printf("%-35s%-6d%-6d%-6d %s\n",
ql_ptr[i].host_name,ql_ptr[i].num_running,ql_ptr[i].num_waiting,
ql_ptr[i].maxtsk,ql_ptr[i].stat_str);
}
}
free(ql_ptr);
return 0;
}else if(which == WATCHQUE_CPU){
cpu_ptr = api_mon_que_cpu(qmgr_host,qmgr_port,out_str);
if(cpu_ptr == NULL){
printf("%s",out_str);
return 1;
}
for(i=0;i<cfg->total_h;i++){
printf("%-35s%-12d%-12d%-12d\n",
cpu_ptr[i].host_name,cpu_ptr[i].cpu_util,cpu_ptr[i].avail_mem,cpu_ptr[i].free_disk);
}
free(cpu_ptr);
return 0;
}else{
printf("\nError, invalid selection\n");
return 1;
}
/*NOTREACHED*/
}
/* ==================== */
int list_complete(void)
{
int num_completed_jobs;
Job_List *jc_ptr = NULL;
Job_List *jc_ptr2 = NULL;
char *mon_msgs = NULL;
int num_files;
FILE_LIST *fl_list = NULL;
int srtn;
int i;
int job_number;
char j_numstr[100];
int found;
char out_str[2048];
char sfile[256];
char mon_file[256];
int cpu_secs, pct_cpu_avg, pct_cpu_max;
int mem_kbts, pct_mem_avg, pct_mem_max;
int dsk_mbts, pct_dsk_avg, pct_dsk_max;
int elapsed,status;
int num_fs;
JOB_FS_LIST *job_fs_list;
jc_ptr = api_get_completedjob_list(qmgr_host,qmgr_port,&num_completed_jobs,out_str);
if(num_completed_jobs == 0){
printf("\nNo completed jobs found\n");
return 0;
}
if(jc_ptr == NULL){
return 1;
}
job_number = -1;
if(auto_startup == 0){
/*
** present list to user ...
*/
printf("\nCompleted jobs ....\n\n");
printf("num jobname username subtime\n");
printf("------------------------------------------------------\n");
for(i=0;i<num_completed_jobs;i++){
printf("%-4d %-20s %-20s %-20s\n",jc_ptr[i].job_number,
jc_ptr[i].job_name,jc_ptr[i].job_user,jc_ptr[i].sub_time_str);
}
printf("\n");
}else{
found = -1;
for(i=0;i<num_completed_jobs;i++){
if(job_number == jc_ptr[i].job_number){
found = i;
break;
}
}
printf("Job AM hostname: %s\n",jc_ptr[found].am_host_name);
printf("Job run host: %s\n",jc_ptr[found].job_run_host);
if(jc_ptr[found].jobstatus == JOB_SUCCESSFUL)
printf("Job complete status: success\n");
else if(jc_ptr[found].jobstatus == JOB_ABORTED)
printf("Job complete status: aborted\n");
else if(jc_ptr[found].jobstatus == JOB_FAILED)
printf("Job complete status: failed\n");
else
printf("Job complete status: unknown\n");
/* ------------ */
sprintf(mon_file,"%s/%s.mon",jc_ptr[found].work_dir,jc_ptr[found].job_name);
jc_ptr2 = api_com_job_gen(jc_ptr[found].job_submit_host,mon_file,out_str);
if(jc_ptr2 == NULL){
printf("%s",out_str);
free(jc_ptr);
return 1;
}
if(jc_ptr[found].job_number != jc_ptr2->job_number){
printf("\nJob numbers do not match -\n");
printf( "assuming newer job with same .mon file is currently running\n");
printf( "so no additional job info is available\n");
free(jc_ptr);
free(jc_ptr2);
return 1;
}
printf("\ngeneral info:\n");
printf("num jobname jobuser amhost runhost subtime status\n");
printf("-----------------------------------------------------------------------------------------------------------\n");
printf("%-4d %-20s %-20s %-20s %-20s %-30s %-6d\n",jc_ptr2->job_number,
jc_ptr2->job_name,
jc_ptr2->job_user,
jc_ptr2->am_host_name,
jc_ptr2->job_run_host,
jc_ptr2->sub_time_str,
jc_ptr2->jobstatus);
/* ------------ */
job_fs_list = api_com_job_stats(jc_ptr[found].job_submit_host,mon_file,
&cpu_secs,&pct_cpu_avg,&pct_cpu_max,
&mem_kbts,&pct_mem_avg,&pct_mem_max,
&dsk_mbts,&pct_dsk_avg,&pct_dsk_max,
&elapsed,&status,
&num_fs,&srtn,out_str);
if(srtn < 0){
printf("%s",out_str);
if( (num_fs > 0) && (job_fs_list != NULL) ){
free(job_fs_list);
}
free(jc_ptr);
free(jc_ptr2);
return 1;
}
printf("\njob stats:\n");
printf("cpu(sec)=%d, %%cpu(avg)=%d, %%cpu(max)=%d\n",cpu_secs,pct_cpu_avg,pct_cpu_max);
printf("mem(kb) =%d, %%mem(avg)=%d, %%mem(max)=%d\n",mem_kbts,pct_mem_avg,pct_mem_max);
printf("dsk(mb) =%d, %%dsk(avg)=%d, %%dsk(max)=%d\n",dsk_mbts,pct_dsk_avg,pct_dsk_max);
printf("elapsed =%d, status=%d\n",elapsed,status);
/*
printf("total num filesys = %d\n",num_fs);
for(i=0;i<num_fs;i++){
fprintf(stdout," %s max=%d usage=%d\n",job_fs_list[i].file_sys_name,
job_fs_list[i].disk_max_size_mb,
job_fs_list[i].disk_used_pct);
}
printf("\n");
*/
/* ------------ */
mon_msgs = api_com_job_mon(jc_ptr[found].job_submit_host,mon_file,out_str);
if(mon_msgs == NULL){
printf("Error, unable to determine mon file msgs\n%s\n",out_str);
free(jc_ptr);
free(jc_ptr2);
return 1;
}
/* ------------ */
fl_list =
api_com_job_received_files_list(jc_ptr[found].job_submit_host,mon_file,&num_files,out_str);
#ifdef DEBUG
printf("api_com_job_received_files_list: num_files = %d\n",num_files);
#endif
/* ------------ */
free(jc_ptr);
free(jc_ptr2);
return 0;
}
/* ==================== */
int write_rcfile(void)
{
int srtn;
char out_str[2048];
if(has_cmd_rcf){
srtn = api_rcfile_write(cmd_rcf_file,out_str);
if(srtn != 0){
printf("%s",out_str);
return 1;
}else{
printf("\nSettings successfully written to rc file <%s>\n",cmd_rcf_file);
}
}else{
printf("\nWarning, no -rcf file specified so cannot write settings\n");
}
return 0;
}
/* ==================== */
int admin_test(void)
{
int status;
char *test_str = NULL;
char out_str[2048];
test_str = api_admin_test(orgpath,org_name,rmgr_port,&status,out_str);
if(status != 0){
printf("\nAdmin test returns %d, text = %s",status,out_str);
}
if(test_str != NULL){
printf("\n%s",test_str);
free(test_str);
}
return 0;
}
/* ==================== */
int reconfig_quemgr(void)
{
int status;
char *recfg_str = NULL;
char out_str[2048];
/*
** if user is Admin then ...
*/
if(strcmp(api_user_name,cfg->ADMIN) != 0){
printf("\nError, user <%s> is not the Admin <%s>, so cannot reconfig\n",
api_user_name,cfg->ADMIN);
return 0;
}
recfg_str = api_reconfig_quemgr(qmgr_host,qmgr_port,&status,out_str);
if(status != 0){
printf("\nReconfig returns %d, text = %s",status,out_str);
}
if(recfg_str != NULL){
printf("\n%s",recfg_str);
free(recfg_str);
}
}
return 0;
}
/* ==================== */
void print_menu(void)
{
printf("\n");
printf("Enter selection:\n");
printf(" 1). submit a job\n");
printf(" 2). abort a job\n");
printf(" 3). monitor a job\n");
printf(" 4). show QueMgr log file\n");
printf(" 5). show QueMgr jobs/queues\n");
printf(" 6). show QueMgr cpu/mem/disk\n");
printf(" 7). list completed jobs\n");
printf(" 8). write rcfile settings\n");
printf(" 9). admin test\n");
printf(" 10). admin reconfig QueMgr\n");
printf(" 11). quit\n");
printf("\n");
printf("choice: ");
return;
}
/* ==================== */
int get_response(void)
{
char bogus[100];
int choice;
(void)scanf("%s",bogus);
if( (bogus[0] == 'q') || (bogus[0] == 'Q'))
choice = QUIT;
else
choice = atoi(bogus);
return choice;
}
/* ==================== */
int doit(int choice)
{
int srtn;
if(choice == QUIT)
return -1;
#ifdef DEBUG
printf("choice made was: %d\n",choice);
#endif
if(dont_connect){
if(choice != ADMINTEST){
printf("\nError, only valid option with -nocon is Admin test\n");
return 0;
}
}
if(choice == SUBMIT){
srtn = submit_job();
}else if(choice == ABORT){
srtn = abort_job();
}else if(choice == WATCHJOB){
srtn = watch_job();
}else if(choice == WATCHQUE_LOG){
srtn = watch_que(choice);
}else if(choice == WATCHQUE_FULL){
srtn = watch_que(choice);
}else if(choice == WATCHQUE_CPU){
srtn = watch_que(choice);
}else if(choice == LISTCOMP){
srtn = list_complete();
}else if(choice == RCFILEWRITE){
srtn = write_rcfile();
}else if(choice == ADMINTEST){
srtn = admin_test();
}else if(choice == RECONFIG){
srtn = reconfig_quemgr();
}else{
srtn = 0;
printf("invalid choice ?\n");
}
return srtn;
}
/* ==================== */
int main(int argc, char *argv[])
{
char home_dir[256];
char tmpstr[256];
char error_msg[256];
char tmp_host[256];
char out_str[2048];
#ifdef WINNT
int err;
WORD wVersionRequested;
WSADATA wsaData;
#endif
#ifdef WINNT
extern BOOL console_event_func(DWORD );
#endif
extern void get_home_dir(char *);
/* ------------ */
/*
** necessary windows startup socket code ...
*/
#ifdef WINNT
wVersionRequested = MAKEWORD( SOCKET_VERSION1, SOCKET_VERSION2 );
err = WSAStartup( wVersionRequested, &wsaData );
if(err != 0){
printf("Error, WSAStartup failed\n");
return 1;
}
#endif
/* ------------ */
#ifdef WINNT
/*
** console handler ...
*/
(void)SetConsoleCtrlHandler((PHANDLER_ROUTINE)console_event_func, TRUE);
#endif
/* ------------ */
/*
** get this hostname ...
*/
gethostname(api_this_host,256);
strcpy(tmp_host,api_this_host);
host_entry = (struct hostent *)gethostbyname(tmp_host);
if(host_entry != NULL)
strcpy(api_this_host,host_entry->h_name);
/* ------------ */
/*
** get this username ...
*/
user_str = api_getlogin();
strcpy(api_user_name,user_str);
/* ------------ */
/*
** assume binpath is from P3_HOME (or AM_HOME) ...
** command-line will override ...
*/
binpath[0] = '\0';
orgpath[0] = '\0';
#ifdef ULTIMA
ptr = getenv("AM_HOME");
#else
ptr = getenv("P3_HOME");
#endif
if(ptr != NULL){
strcpy(binpath,ptr);
}
#ifdef DEBUG
printf("binpath = <%s>\n",binpath);
#endif
/* ------------ */
/*
** get QueMgr host, port, app name, index, p3home (or amhome),
** and -rcf from command line args ...
*/
lic_file[0] = '\0';
has_qmgr_host = 0;
has_qmgr_port = 0;
has_org = 0;
has_orgpath = 0;
has_timout = 0;
timout = BLOCK_TIMEOUT;
strcpy(org_name,"default");
ptr = getenv("P3_ORG");
if(ptr != NULL){
strcpy(org_name,ptr);
has_org = 1;
}
qmgr_host[0] = '\0';
qmgr_port = -1111;
rmgr_port = RMTMGR_RESV_PORT;
api_application_name[0] = '\0';
api_application_index = -1;
sys_rcf_file[0] = '\0';
usr_rcf_file[0] = '\0';
has_cmd_rcf = 0;
cmd_rcf_file[0] = '\0';
strcpy(usr_rcf_file,".p3mgrrc");
get_home_dir(home_dir);
if(home_dir[0] != '\0'){
sprintf(usr_rcf_file,"%s/.p3mgrrc",home_dir);
}
#ifdef DEBUG
fprintf(stderr,"usr_rcf_file = <%s>\n",usr_rcf_file);
#endif
auto_startup = 0;
do_print = 0;
env_file[0] = '\0';
if(argc > 1){
i = 1;
while(i < argc){
if((strcmp(argv[i],"-qmgrhost") == 0) && (i < argc-1)){
has_qmgr_host = 1;
strcpy(qmgr_host,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-qmgrport") == 0) && (i < argc-1)){
has_qmgr_port = 1;
qmgr_port = atoi(argv[i+1]);
i++;
}else if((strcmp(argv[i],"-rmgrport") == 0) && (i < argc-1)){
rmgr_port = atoi(argv[i+1]);
i++;
}else if((strcmp(argv[i],"-timeout") == 0) && (i < argc-1)){
timout = atoi(argv[i+1]);
has_timout = 1;
i++;
}else if((strcmp(argv[i],"-org") == 0) && (i < argc-1)){
has_org = 1;
strcpy(org_name,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-orgpath") == 0) && (i < argc-1)){
has_orgpath = 1;
strcpy(orgpath,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-auth") == 0) && (i < argc-1)){
strcpy(lic_file,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-app") == 0) && (i < argc-1)){
strcpy(api_application_name,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-rcf") == 0) && (i < argc-1)){
has_cmd_rcf = 1;
strcpy(cmd_rcf_file,argv[i+1]);
i++;
#ifdef ULTIMA
}else if((strcmp(argv[i],"-amhome") == 0) && (i < argc-1)){
#else
}else if((strcmp(argv[i],"-p3home") == 0) && (i < argc-1)){
#endif
strcpy(binpath,argv[i+1]);
i++;
}else if((strcmp(argv[i],"-choice") == 0) && (i < argc-1)){
auto_startup = atoi(argv[i+1]);
i++;
}else if(strcmp(argv[i],"-env") == 0){
do_print = 1;
}else if(strcmp(argv[i],"-envall") == 0){
do_print = 2;
}else if((strcmp(argv[i],"-envf") == 0) && (i < argc-1)){
strcpy(env_file,argv[i+1]);
do_print = 3;
i++;
}else if((strcmp(argv[i],"-envfall") == 0) && (i < argc-1)){
strcpy(env_file,argv[i+1]);
do_print = 4;
i++;
}else if(strcmp(argv[i],"-nocon") == 0){
dont_connect = 1;
}else if(strcmp(argv[i],"-version") == 0){
fprintf(stderr,"version: %s\n",GLOBAL_AM_VERSION);
return 0;
}
i++;
}
}
#ifdef DEBUG
if(has_cmd_rcf)
fprintf(stderr,"cmd_rcf_file = <%s>\n",cmd_rcf_file);
#endif
/* ------------ */
/*
** if binpath is still empty then it's an error ...
*/
#ifdef DEBUG
printf("binpath = <%s>\n",binpath);
#endif
if(binpath[0] == '\0'){
#ifdef ULTIMA
printf("Error, AM_HOME env var not set\n");
#else
printf("Error, P3_HOME env var not set\n");
#endif
return 1;
}
#ifdef LAPI
if(lic_file[0] == '\0'){
ptr = getenv("MSC_LICENSE_FILE");
if(ptr == NULL){
ptr = getenv("LM_LICENSE_FILE");
}
if(ptr == NULL){
printf("Error, authorization file not set (MSC_LICENSE_FILE)\n");
return 1;
}
strcpy(lic_file,ptr);
}
#else
strcpy(lic_file,"empty.noauth");
#endif
/*
** change back-slashes to forward slashes for binpath ...
*/
i = 0;
j = 0;
k = (int)strlen(binpath);
while(i < k){
#ifdef DEBUG
fprintf(stderr,"txtmgr: i=%d, k=%d\n",i,k);
fprintf(stderr,"txtmgr: binpath[i] = %c\n",binpath[i]);
#endif
if(binpath[i] == '\\'){
if(i < k-1){
if(binpath[i+1] == '\\'){
i++;
}
}
tmpstr[j] = '/';
j++;
}else{
tmpstr[j] = binpath[i];
j++;
}
#ifdef DEBUG
fprintf(stderr,"HERE\n");
#endif
i++;
}
tmpstr[j] = '\0';
strcpy(binpath,tmpstr);
/*
** make sure binpath has no slash at end ...
*/
len1 = (int)strlen(binpath);
if(len1 > 0){
if( (binpath[len1-1] == '/') || (binpath[len1-1] == '\\') ){
binpath[len1-1] = '\0';
}
}
/*
** mck - add /p3manager_files (or analysis_manager) to binpath ...
*/
#ifdef ULTIMA
strcat(binpath,"/analysis_manager");
#else
strcat(binpath,"/p3manager_files");
#endif
/* ------------ */
/*
** MCK MCK MCK - get orgpath - it WILL be the same as binpath
** for the org.cfg file ...
*/
if(has_orgpath == 0){
strcpy(orgpath,binpath);
}else{
/*
** change back-slashes to forward slashes for orgpath ...
*/
i = 0;
j = 0;
k = (int)strlen(orgpath);
while(i < k){
#ifdef DEBUG
fprintf(stderr,"txtmgr: i=%d, k=%d\n",i,k);
fprintf(stderr,"txtmgr: orgpath[i] = %c\n",orgpath[i]);
#endif
if(orgpath[i] == '\\'){
if(i < k-1){
if(orgpath[i+1] == '\\'){
i++;
}
}
tmpstr[j] = '/';
j++;
}else{
tmpstr[j] = orgpath[i];
j++;
}
#ifdef DEBUG
fprintf(stderr,"HERE\n");
#endif
i++;
}
tmpstr[j] = '\0';
strcpy(orgpath,tmpstr);
/*
** make sure orgpath has no slash at end ...
*/
len1 = (int)strlen(orgpath);
if(len1 > 0){
if( (orgpath[len1-1] == '/') || (orgpath[len1-1] == '\\') ){
orgpath[len1-1] = '\0';
}
}
/*
** mck - add /p3manager_files (or analysis_manager) to orgpath ...
*/
#ifdef ULTIMA
strcat(orgpath,"/analysis_manager");
#else
strcat(orgpath,"/p3manager_files");
#endif
}
/* ------------ */
sprintf(sys_rcf_file,"%s/%s/p3mgrrc",orgpath,org_name);
#ifdef DEBUG
fprintf(stderr,"sys_rcf_file = <%s>\n",sys_rcf_file);
#endif
/* ------------ */
/*
** check env vars if not set on command-line
*/
if(qmgr_host[0] == '\0'){
qmgr_hoststr = getenv("P3_MASTER");
if(qmgr_hoststr != NULL){
strcpy(qmgr_host,qmgr_hoststr);
has_qmgr_host = 1;
}
}
if(qmgr_host[0] == '\0'){
qmgr_hoststr = getenv("MSC_AM_QUEMGR");
if(qmgr_hoststr != NULL){
strcpy(qmgr_host,qmgr_hoststr);
has_qmgr_host = 1;
}
}
if(qmgr_host[0] == '\0'){
qmgr_hoststr = getenv("QUEMGR_HOST");
if(qmgr_hoststr != NULL){
strcpy(qmgr_host,qmgr_hoststr);
has_qmgr_host = 1;
}
}
if(qmgr_port == -1111){
qmgr_portstr = getenv("P3_PORT");
if(qmgr_portstr != NULL){
qmgr_port = atoi(qmgr_portstr);
has_qmgr_port = 1;
}
}
if(qmgr_port == -1111){
qmgr_portstr = getenv("MSC_AM_QUEPORT");
if(qmgr_portstr != NULL){
qmgr_port = atoi(qmgr_portstr);
has_qmgr_port = 1;
}
}
if(qmgr_port == -1111){
qmgr_portstr = getenv("QUEMGR_PORT");
if(qmgr_portstr != NULL){
qmgr_port = atoi(qmgr_portstr);
has_qmgr_port = 1;
}
}
/* ------------ */
#ifndef ULTIMA
/*
** checkout license ...
*/
if( (do_print == 0) && (dont_connect == 0) ){
if((srtn = api_checkout_license(lic_file)) != 0){
printf("Error, Authorization failure %d.",srtn);
if(global_auth_msg != NULL){
printf(" Error msg = %s\n",global_auth_msg);
}else{
printf("\n");
}
return 1;
}
#ifdef DEBUG
fprintf(stderr,"auth_file = %s\n",lic_file);
fprintf(stderr,"checkout_license returns %d\n",srtn);
#endif
}
#endif
/* ------------ */
/*
** init api ...
*/
srtn = api_init(out_str);
if(srtn != 0){
printf("%s, error code = %d\n",out_str,srtn);
return 1;
}
/* ------------ */
/*
** adjust global network timeout if desired ...
*/
if(has_timout == 0){
timout = 30;
}
srtn = api_set_gbl_timeout(timout);
if(srtn != 0){
printf("Error, unable to set global timeout to %d secs\n",timout);
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return 1;
}
/* ------------ */
ptr = getenv("AM_THIS_HOST");
if(ptr != NULL){
if( (strcmp(ptr,"no") != 0) && (strcmp(ptr,"NO") != 0) ){
api_use_this_host = 1;
}
}
/* ------------ */
/*
** read orgs if possible (org.cfg is in binpath) ...
*/
org = NULL;
num_orgs = 0;
if( (has_qmgr_host == 0) && (has_qmgr_port == 0) ){
org = api_read_orgs(binpath,&num_orgs,&srtn);
if(srtn != 0){
printf("Warning, unable to read org.cfg file, code = %d\n",srtn);
/*
* use defaults ...
*/
strcpy(qmgr_host,api_this_host);
qmgr_port = QUEMGR_RESV_PORT;
}else{
if( (num_orgs > 0) && (org != NULL) ){
/*
** figure out which quemgr to connect to ...
*/
done = 0;
for(i=0;i<num_orgs;i++){
if(strcmp(org[i].org_name,org_name) == 0){
strcpy(qmgr_host,org[i].host_name);
qmgr_port = org[i].port;
done = 1;
break;
}
}
if( (!done) && (has_org == 0) ){
/*
** use first available ...
*/
strcpy(qmgr_host,org[0].host_name);
qmgr_port = org[0].port;
done = 1;
}else if( (!done) && (has_org == 1) ){
/*
** no match found, assume this host and all ...
*/
strcpy(qmgr_host,api_this_host);
qmgr_port = QUEMGR_RESV_PORT;
done = 1;
}
}else{
printf("Warning, unable to read org.cfg file, no orgs found\n");
/*
* use defaults ...
*/
strcpy(qmgr_host,api_this_host);
qmgr_port = QUEMGR_RESV_PORT;
done = 1;
}
}
}
#ifdef DEBUG
printf("\n");
printf("quemgr org = %s\n",org_name);
printf("quemgr host = %s\n",qmgr_host);
printf("quemgr port = %d\n",qmgr_port);
#endif
/* ------------ */
if(! dont_connect){
/*
** get config info ...
*/
cfg = api_get_config(qmgr_host, qmgr_port, &srtn, error_msg);
if(srtn != 0){
printf("Error, msg = %s, error = %d\n",error_msg,srtn);
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return 1;
}
/* ------------ */
first_real_app_str[0] = '\0';
not_first_real_app = 1;
first_real_app_num = 1;
for(i=0;i<MAX_APPS;i++){
if(not_first_real_app){
#ifdef DEBUG
fprintf(stderr,"cfg->progs[%d].app_name = <%s>\n",i,cfg->progs[i].app_name);
#endif
if(cfg->progs[i].app_name[0] != '\0'){
not_first_real_app = 0;
strcpy(first_real_app_str,cfg->progs[i].app_name);
first_real_app_num = i + 1;
break;
}
}
}
if(not_first_real_app){
/* error, no apps defined */
fprintf(stderr,"TxtMgr Error: No valid applications defined.\n");
return 1;
}
if(api_application_name[0] != '\0'){
for(i=0;i<MAX_APPS;i++){
if(strcmp(api_application_name,cfg->progs[i].app_name) == 0){
api_application_index = i + 1;
break;
}
}
}
#ifdef DEBUG
fprintf(stderr,"application_name = <%s>\n",api_application_name);
fprintf(stderr,"application_index = %d\n",api_application_index);
#endif
/* ----------- */
/* DEBUGGING
if(orgpath[0] != '\0'){
api_write_config(cfg,orgpath,"bogus",&srtn,error_msg);
if(srtn != 0){
printf("Error, unable to write config files, msg = %s, code = %d\n",error_msg,srtn);
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return 0;
}
}
DEBUGGING */
/* ----------- */
/*
** initialize config values ...
*/
api_init_uiconfig(cfg);
/*
** because we are txtmgr, reset job_mon_flag of ui_config
** to be off by default ...
*/
if(auto_startup == SUBMIT){
ui_config.job_mon_flag = 0;
}
#ifdef DEBUG_111
api_rcfile_print(1);
#endif
/* ----------- */
/*
** override some settings if needed ...
*/
(void)api_rcfile_read(sys_rcf_file,out_str);
(void)api_rcfile_read(usr_rcf_file,out_str);
if(has_cmd_rcf){
srtn = api_rcfile_read(cmd_rcf_file,out_str);
if(srtn != 0){
printf("%s\n",out_str);
}
}
#ifdef DEBUG_111
api_rcfile_print(1);
#endif
/* ----------- */
/*
** if just a print env then do it and stop ...
*/
if(do_print){
if(do_print >= 3){
if(env_file[0] != '\0'){
wp = fopen(env_file,"wb");
if(wp != NULL){
api_rcfile_write2(wp,(do_print-3));
fclose(wp);
}
}
}else{
api_rcfile_print(do_print-1);
}
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return 0;
}
} /* ! dont_connect ... */
/* ----------- */
if(auto_startup){
srtn = doit(auto_startup);
}else{
/*
** query for selection and do work ...
*/
while(1){
print_menu();
choice = get_response();
srtn = doit(choice);
#ifdef DEBUG
printf("doit(%d) returns %d\n",choice,srtn);
#endif
if(srtn < 0)
break;
}
srtn = 0;
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return srtn;
}
api_release_license();
#ifdef WINNT
WSACleanup();
#endif
return srtn;
}
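The back-slash normalization loop in the listing above (applied to both binpath and orgpath) can be exercised on its own. The sketch below reproduces that logic as a standalone helper, together with the trailing-slash trim that follows it; the function name normalize_path is illustrative and is not part of the Analysis Manager API.

```c
#include <string.h>

/* Standalone sketch of the listing's path cleanup: collapse a doubled
** back-slash into a single '/', convert a lone back-slash to '/', then
** trim one trailing slash -- mirroring the binpath/orgpath loops above. */
static void normalize_path(const char *in, char *out)
{
    int i = 0, j = 0;
    int k = (int)strlen(in);
    while(i < k){
        if(in[i] == '\\'){
            if( (i < k-1) && (in[i+1] == '\\') ){
                i++;                     /* skip the second of a doubled back-slash */
            }
            out[j++] = '/';
        }else{
            out[j++] = in[i];
        }
        i++;
    }
    out[j] = '\0';
    j = (int)strlen(out);
    if( (j > 0) && ((out[j-1] == '/') || (out[j-1] == '\\')) ){
        out[j-1] = '\0';                 /* make sure path has no slash at end */
    }
}
```

With this helper, a Windows-style path such as `C:\msc\patran\` becomes `C:/msc/patran`, which is the form the listing appends `/p3manager_files` (or `/analysis_manager`) to.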