
Knowledge Base 6418:

Planning your first 90 days as the new Remedy administrator

Adopting a new role and supporting new applications can be a challenging experience, especially when
there are unknowns about how and why things work as they do. The new administrator is justifiably
conservative, afraid to make changes for fear of what it may break, and mostly reacting to problems as
they appear. In contrast, the experienced Remedy administrator, if properly prepared, can be a sage
agent of change within your organization, recommending, evaluating, and championing improvements in
your system to reflect your evolving business needs. There is no single right path for going from one to
the other, but a frequent element in the successful journey is a bit of up front planning, evaluating what
roles will be required of the Remedy administrator and how best to facilitate them. A further reward of
this exercise is that its outputs also facilitate training for your backup administrator, or transitioning the
responsibilities to your replacement when you move on.

The paragraphs below discuss some ideas that you may want to consider in your development plan, and
reference additional resources that may be helpful.

The good news is that being an AR System administrator is similar in practice to being administrator of
other applications. The same traits which make a successful administrator are applicable: being thorough
when investigating changes, being responsive to problems reported, deploying changes in an orderly
fashion, following a plan of preventative maintenance, and preparing for disaster recovery.

Define your current environment

A good first step in assessing your needs is to define your present environment. This step may not be
necessary if you have been involved from the beginning, but in most cases it's good to come up to speed
on developments prior to your involvement. This may include putting together a summary sheet of
application and product versions, and collecting any requirements definitions, design docs, or logs
maintained by your predecessor or consultants. Also gather product manuals, media, internal
documentation, and training materials.

These are very tangible components and specific details. Another aspect of defining your current
environment involves understanding the role of these applications within your organization, what needs
they fulfill, what constituencies they serve, and how these will grow and change over time. What
features or roles of the application are areas of upcoming development? When evaluating your current
environment, try to get a grasp of both the size and shape of the application and administrative role, as
well as a feel for its progression within your organization.

Define your roles and needs to fulfill them

The next step of the evaluation should investigate your needs to successfully perform your administrative
duties. Below are some common administrator responsibilities.

- Application configuration such as adding categories, groups, etc


- Preventative maintenance, such as performance tuning, archiving
- Addressing reported problems by users
- Investigating and implementing application customization (in some cases)
- Deploying application changes or upgrades
- Backup/restore process

This needs assessment step is an important part of preparation, and should be based upon your role and
current environment as discovered earlier rather than a generic list, but below are some suggestions to
start with:

- Familiarity with configuration tools
- Knowing which configuration steps are interdependent or related, and what the tradeoffs are in
choosing one or the other
- How do the different parts of the application work together?
- Are there different modes or roles in the application?
- What diagnostics are available to troubleshoot problems?
- What diagnostic modes are available?
- Where is the application and configuration data stored?
- What tools do I need?

In preparing a list of needs, try to organize them into three categories: things you need to know from
memory, things that can be available as reference material, and tasks or things you'll have to do once to
be prepared.

Collect and assess resources

Once you've defined your needs, it's time to evaluate and collect resources to address them. These two
steps, defining your needs and evaluating available resources, are often done jointly as part of general
“looking around”, but arguably you'll get a more complete list of your needs by making that list first,
without letting the most available resources steer your impression of your needs.

Generally available resources include documentation manuals, education courses, knowledge base
entries, and a variety of other technical resources available from the SupportWeb web site.

Electronic versions of the manuals are generally available for download from SupportWeb, where you can
right click on the link and save the PDF file locally. Quite possibly, there will be additional related
manuals, either high-level conceptual overviews or advanced topics such as the Programmer's Guide,
which you will not have in hardcopy, so perusing the Documentation area of the website is a good way to
see what resources are available.

Likewise, course descriptions are available via the web site. In considering what courses to attend, go
back to your summary of needs, both in your roles and your categorization of needs (what you need to
know from memory, vs. available as reference, vs. having performed at least once.) Courses provide
coverage of the most frequently needed information, and provide a sound big picture of topics, laying
down a foundation that will be augmented by the manuals and experience administering the system.

Let's say that in your evaluation of your current environment, needs and resources, you determined that
your position will be an application administrator for the HelpDesk application, which will initially be
deployed and administered pretty much out-of-box. But within the next few months after deployment,
your role will grow to collect requested changes, to prioritize and implement them, and then to begin
work on implementing the full Asset Management application. In the review of the available manuals and
resources for the HelpDesk and Asset Management applications, and the Action Request System for those
customizations, you found the former to be too point-specific to determine the best way to approach
configuration, and the latter way too detailed to know where to start.

Since your initial role would be that of an application administrator, you may decide to attend the
HelpDesk course first, or that one plus the Asset Management course, to get a feel for both though your
focus on Asset will not come until later. The evaluation of your needs, assessment of other resources,
and some preliminary thoughts on how to set it up would make for a productive class, as you would be
able to address these questions during that time. Suggestions and ideas from the instructor or your
classmates can provide insights if you come to class having given the issues a little thought.

Having decided not to attend the “Administering the Action Request System” or “Advanced Topics”
courses just yet, you would exercise a bit of restraint in using those tools to make any customizations.
That is, you would focus on other administrative duties and diagnostics, collecting requests and
evaluating whether they can be done within the HelpDesk application itself. Perhaps a few months later,
you would schedule these additional courses on development in the Action Request System, and armed
with some specific requests and your own experience, you'd get a lot out of the courses. This is just one
approach, and others of course exist, but it's an example of how to let your needs and your timeline
determine what courses to take, when to take them, and how to get the most out of the experience.

Finally, as part of this step of familiarizing yourself with available resources, acquaint yourself with how to
access Technical Support and the specifics of your Support Plan, as well as internal system support
personnel like your system administrator or DBA. During what hours can you reach these resources, and
can you reach them after hours? How should you report problems or make requests of these resources?
What information will they need when you report problems to them?

Make a “To do” list

The last step in this process is to write a preliminary 90-day timeline for your preparation as the new
Remedy administrator. Your timeline might include weekly tasks, or a “focus for the week”, and should
include target dates for each task. Be sure to schedule classes early enough that you can position them
appropriately in your plan. The discussion above and the steps you've taken along the way should
prepare you to make an informed plan. Like all plans, it is subject to unknowns that can change it. Also,
the 90-day timeframe is fairly arbitrary; it could be as short as 30 days, but be sure to give yourself
sufficient time. Consider how much time you'll have available for your administrative duties, if you also
perform other roles for your organization.

Remember that the objective is that at the completion of your plan, you'll be confident and accomplished
in your role as administrator, able to efficiently and eagerly address problem reports and change
requests, and able to broadly answer questions and suggestions about how the system and your
implementation can address your organizational needs. So write your plan to accomplish that goal.

Also see the following knowledgebase article for some ideas in preparing your plan:

KB 6366: Suggestions for setting up a maintenance plan

A couple of additional resources of interest to new administrators:

On Diagnostics:
KB 6278: A primer on reading filter and workflow logs
KB 6161: Setting up a dev server, how and why

On Customization:
KB 6216: A checklist for unit testing application customizations
KB 6196: Suggestions for rolling out application customization
KB 5451: Customization requests addressed to Support vs. Consulting

Knowledge Base 6366:
Suggestions for setting up a maintenance plan

As many experienced administrators will attest, having a sound maintenance plan is key to both
preventing problems and resolving them quickly when they occur. Below are some suggestions for the
new Remedy administrator in putting together a maintenance plan.

A starting maintenance plan like the one suggested below might be 5 to 8 pages in length, and be used
both as a checklist of periodic duties and a place to collect important information on your system
environment. Such a document can be a big help to your backup administrator, and facilitate working
with tech support, your system administrator, DBA, network administrator, or other staff who take care
of your systems.

1. Summary sheet of application and tool versions

Troubleshooting often begins with specifying the versions of each component, so this will save a lot of
time. The suggestion here is to keep a summary sheet of your current versions, along with how you find
the version number. For example, you can check the version of most client tools by choosing a menu
option, Help | About.

For other suggestions on checking versions see Knowledge Base (KB) entries: 5630, 5469, 5407, 5375,
5376, 5377, 4962, 4462, 2866

2. Summary sheet of startup locations, scripts, etc

For similar reasons, you want to record the installation directory of products on the server, and any
startup processes or scripts that are part of the application. This helps to map what components on the
system should be protected, backed up, and/or shared, and is especially useful if the system is shared
with other applications.

See also KB 4177 for an example on the HelpDesk application.

3. System specs and names

The server name, fully qualified domain name, IP address, version of OS, server model, memory, disk
space, and similar specifications on the AR Server and database server may be useful at different times
for troubleshooting.

4. Disaster recovery/backup plan

Arguably the most important responsibility of an administrator is to be able to restore the system to a
working state if some massive problem occurs. It's a rare occurrence, with Remedy or any other system,
but precautions should be taken to ensure it can be done smoothly. If working with a DBA or system
administrator, you'd want to ensure that they have all the information they need and that your roles are
clearly defined.

Test your backup plan by restoring your production system to a test server to make sure it works as
expected, and thoroughly compare behavior with your production server. Remember that backing up
your database is of no value, unless you can successfully restore it, so a little practice is in order.
Restoring your backup to a test server (instead of your production server) allows you to test without
compromising your production server, validates your solution in the “loss of server” scenario, and can be
very useful in testing customizations on your test server by mimicking production.

Consult your DBA or system administrator regarding database and system backup strategy and DBMS
tools available.

5. Check diagnostic queues or logs

Several applications include diagnostic queues and/or logs. For example, the HelpDesk application writes
files to the RappDir directory before processing them, and similarly, the CRM Solutions applications write
records to the Run Process Queue form for processing. Periodically checking the directory or form for
accumulating files/records, or for failures when processing them, is a good way to identify problems
early. The ARServer
maintains the ARError.log file for server errors, and your web server or database server may also have
log files that are worth checking periodically.
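
As a sketch of what such a periodic check might look like, the following Python script flags queue files that have sat unprocessed past a threshold and tails the server error log. The paths, the stale-file threshold, and the log location are assumptions to adjust for your installation.

import os
import time

# Assumed locations and threshold; adjust for your installation.
QUEUE_DIR = r"C:\Program Files\Remedy\RappDir"
ERROR_LOG = r"C:\Program Files\Remedy\ARServer\arerror.log"
STALE_SECONDS = 15 * 60  # files older than 15 minutes suggest stalled processing

def check_queue(queue_dir, stale_seconds):
    # Report queue files that have sat unprocessed longer than the threshold.
    now = time.time()
    for name in os.listdir(queue_dir):
        path = os.path.join(queue_dir, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > stale_seconds:
            print("Stale queue file:", path)

def tail_errors(log_path, lines=50):
    # Print the last few lines of the server error log for review.
    with open(log_path, "r", errors="replace") as f:
        for line in f.readlines()[-lines:]:
            print(line.rstrip())

if __name__ == "__main__":
    check_queue(QUEUE_DIR, STALE_SECONDS)
    tail_errors(ERROR_LOG)

Scheduling a script like this to run periodically, and reviewing its output, covers the "identify problems early" habit this item suggests.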

6. Check available space on servers

Your database may allow for automatic growth to accommodate data, but you would still want to
periodically check system drives for available space. A common problem to watch for is running out of
swap space or space on the drive containing the Temp directory, which can cause a variety of strange
symptoms. Your system administrator may have a more complete list of system checks.
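
A minimal sketch of such a space check, using Python's standard library; the drive list and warning threshold are placeholders to adjust:

import shutil

# Drives to watch and a warning threshold; both are placeholders.
DRIVES = ["C:\\", "D:\\"]
MIN_FREE_GB = 2.0

for drive in DRIVES:
    usage = shutil.disk_usage(drive)
    free_gb = usage.free / (1024 ** 3)
    status = "LOW" if free_gb < MIN_FREE_GB else "ok"
    print(f"{drive} free: {free_gb:.1f} GB [{status}]")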

7. Server restart procedure

To ensure you have all the steps, sequence, and checks in one place, it's a good idea to write a server
restart procedure for yourself so it is available when necessary. Typically, the process involves shutting
down the services/daemons in a specified order, such as application services (rappsvc, bbqueued, etc)
then AR server processes (arserverd, etc), and then database services if all on the same system. There
may also be a step of verifying that each service/daemon is stopped, and then restarting them in the
reverse order. Most important, remember to define a few tests to ensure everything came back up
correctly, including tests of network connectivity if the system was rebooted.

If processes on other systems connect to the AR System server, such as a web server, then you may
want to add a test of those systems to your restart checklist to ensure everything is working properly
afterward. Typically a server restart would be done after hours to minimize the impact on users, so test
thoroughly to avoid coming in the next morning to waiting problems. As a further step, you can consider
automating the process, using available tools appropriate for your platform. But beware, you cannot
automate the tests that everything came up properly.

See also KB 5332, 5113, and 5059, which describe some examples and ways of sequencing the server
restart process.
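
To make the ordered shutdown and startup concrete, here is a hedged sketch for a Windows server, driving net stop/net start from Python. The service names are illustrative, based on the process names above; confirm the actual service names registered on your system, and remember that the post-restart tests described above still need to be run.

import subprocess

# Illustrative service names only; verify the registered names on your server.
STOP_ORDER = ["Remedy Application Service", "AR System Server"]

def run(cmd):
    # Echo and run a command, raising an error if it fails.
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Stop application services first, then the AR server processes.
for service in STOP_ORDER:
    run(["net", "stop", service])

# Restart in the reverse order.
for service in reversed(STOP_ORDER):
    run(["net", "start", service])

print("Services restarted; now run your post-restart tests.")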

8. Routine performance checks

You may want to define a process to either spot check performance, or check for performance impacting
problems such as leaving logs enabled. This would be a time to evaluate your archive timeframes, the
number of server processes you have defined on the server, system resources, and so forth. Performing
this check every month or few months might be appropriate.

9. Location of docs and other resources

Collecting documentation and resources in a central location is a good idea so you can reference it,
archive it, and augment it when new resources are available. Collect design documents, requirements
specifications, implementation plans, and any documentation resulting from consulting projects.

10. Maintenance log

A good practice as an administrator is to keep a log of your activities, changes, and updates to the
server. Keeping an electronic copy allows you to archive it in the location suggested above and keyword
search for specific events. Maintenance logs sometimes reveal the root cause of an obscure problem,
which might not be noticed right away. It is a particularly good idea when administration duties are
shared.

11. Special notes or differences

It may make sense to break out a summary of some special circumstances from the maintenance log.
For example, if a feature has been customized for a different purpose, or if you've been told that a
particular use is unsupported, it may make sense to keep a summary of these details. This might point to
relevant details in an uncommon problem, and be a list of considerations when evaluating an upgrade
plan.

12. Integration points to other systems

For similar reasons, you may want to keep a summary of integrations between AR System applications
and other systems or applications. For example, if there is an integration between a non-ARS Oracle
database, and a particular form in the AR System, a note of what forms, databases, and systems are
involved will be a good reminder of this integration point. The details of a successful integration often
fade with time until some unexpected change steps on it, so this is again a preventative suggestion to
avoid such a problem.

Knowledge Base 6278:
A Primer on reading filter and workflow logs

Diagnosing many different kinds of application problems, questions, and concerns begins with recreating
the behavior with the appropriate logs enabled and sending them to technical support. The reason this is
such a common step in the troubleshooting or investigative process is that it is deterministic - it tells you
what happened, when, and with what data. Even more importantly, this diagnostic approach can localize
a problem caused by customization, providing insight into functionality that was implemented by a third
party, a consultant, or a departed administrator who left no documentation behind.

But creating logs is not a panacea. There are definite situations when it is more appropriate, things you
can do to make the logs easier to decipher, and tricks to finding what you need a bit quicker. And in
the case of customization, logging may tell you what happened, without revealing how the intended
functionality is meant to work, and why it differs in some situations. Nor does taking logs definitively
correct the problem in a single step - it may localize a particular problem to a few workflow items you
must look at in Remedy Administrator, or it may just suggest the next step in troubleshooting by
localizing or ruling out potential causes of the behavior.

The paragraphs below are meant to share some of the history, science, and art of reading filter and
workflow logs, and how they are used by technical support to diagnose issues. The topics covered
include:

- A brief genealogy of logs


- Different kinds of logs, and when they are useful
- The basics of reading logs
- Sequence of operations
- Choosing the right text editor to examine logs
- Things to do to make logs more useful
- Using logs to diagnose a few common problems and questions

A brief genealogy of logs

The diagnostics available in the Action Request System have evolved over time, as the features of the
product have grown, and as requested by Technical Support to drive quicker resolution to problems.
Server-side logging, including Filter, SQL, and API logging, has existed since very early versions. Client-
side logging in Remedy User to record the activity of Active Links and Macros was introduced in version
3.2; see KB 6084 on how to enable these logs. Beginning with Action Request System 4.0, active link
and macro logging could be enabled by selecting an option directly from Remedy User (choose Tools >
Options, Advanced tab, and enable Workflow Logging).

This macro logging was fairly limited, in the sense that it would not report values passed to macros, an
omission which was corrected in Remedy User version 4.03 and later. Beginning with Remedy User
version 4.5, a method was also introduced to record server-side activity in client-side logging, thereby
reducing the amount of information recorded to the logs, which, as discussed later, is quite helpful when
reading log files. (This feature of Remedy User 4.5 is described in the ARS Server Administrator's Guide,
Ch 2.)

Remedy client-side API logging was available in Remedy User 4.01, as documented in KB 3456, but was
subsumed by the Remedy User 4.5x client trace modes (which enable API logging in Remedy User 4.5x).

On the server side, Action Request System version 4.5 introduced a more scalable implementation of
server-side processing using threads, and with it, logging to track the activity of multiple threads. Prior
to this, server logs (filter, API, SQL, etc) would be written to different log files with a file extension
indicating which server process handled the activity. For example, after enabling filter logging, you may
see several files on your server - arfilter.log, arfilter.log.390621, arfilter.log.390635 - and have to
search all of them for the desired log segment.

As of this writing, ARS 4.52 is the latest version, but later versions may include additional logging
features where appropriate, to facilitate diagnosing new functionality. Since logging capabilities have
changed, an occasional request from tech support is to use a particular version of Remedy User to enable
logging and reproduce the behavior, as it gives better diagnostic information.

Different kinds of logs, and when they are useful

The most common logs requested are Filter and Workflow (Active Link/Macro) logging. There is a fair
amount of overlap of functionality which can be implemented by Active Links and/or Filters, so a common
request is to enable both logs (on the client side, if on Remedy User 4.5) at the same time, and reproduce
the behavior so both can be examined. The most common symptoms reported are either an error
message or a field whose value is changed unexpectedly, and both of these (a Message action, a Set
Fields/Push Fields action, respectively) can be implemented by either an Active Link or Filter. A Run
Process action can also be performed via either an Active Link or Filter, so there are several cases where
these logs are appropriate. Compare the actions available when creating a new Filter, vs. a new Active
Link, and in all of these cases, it'd be appropriate to make concurrent Workflow/Filter logs.

Similarly, escalations can perform these actions, Set Fields, Run Process, etc, but these occur at a time
independent of user activity, and the errors appear in the Arerror.log instead of being presented to the
user. In these cases, an escalation log would be appropriate, typically running the log long enough for
the error or symptoms to occur, and then disabling the log.

SQL logs record the SQL commands sent from the Action Request System Server to the database server,
but do not include the result set returned from the database server. All of the data in forms, and the
definitions of the forms and workflow objects themselves, are stored in the database so SQL logging
generates a great deal of low level information. Interpreting SQL logs requires greater experience, both
with the SQL language and familiarity with the Remedy data dictionary. The Remedy data dictionary is
documented in the later chapters of the "Server Administrator's Guide" (ARS version 4.5), or the "Remedy
Administrator's Guide Volume 2" (ARS version 4.0x) but it's not expected that customers are familiar with
this low level information. SQL logging may be requested by Technical Support in situations of data
corruption, and to troubleshoot searches which return unexpected results, or just sometimes when it'd be
useful to search for a particular value that would only appear in the SQL log.

API logs show Remedy API calls made by a Remedy client (such as Remedy User, Remedy Administrator,
Remedy Import, or any API program such as RappSvc, BBExecd, etc) to the Action Request System
Server. Many but not all API calls have a subsequent SQL call to the database server. Exceptions include
checks of user or form information that may be cached in memory on the Action Request System server
and consequently require no call to the SQL server. The API calls are logged, but not all the parameters to
the API call. API logs record low-level information, which is not expected to be interpreted by customers,
except in cases where they write API programs. The Remedy Programmers Reference describes the
Remedy API calls. Most commonly, API logs will only be requested when written to the same file as the
SQL log, so the SQL calls can be correlated to an API call.

User logging records when users login and logout from the server. This may be useful to troubleshoot
cases where the number of logged-in users is different from that expected, but is mostly used
independently of the other logs.

Thread logging is new in ARS 4.5x, to track thread activity. It is used primarily to investigate
performance issues and the load balancing behavior of the ARS Server.

The basics of reading logs

There is a unique prefix at the beginning of each line, indicating which log the line is from. This is
particularly helpful when you enable multiple logs to a single file.

<FLTR> - Filter log


<API > - API log
<SQL > - SQL log
<WFLG> - Workflow log (ARS 4.5x, Active Links and macros)
<ACTL> - Active Links
<MACR> - Macro logging

Server side logs also include a server timestamp and login name of the user initiating the action on each
line. Most of the information that will concern you, however, is further to the right so scroll right to see
the workflow activity.
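
Because of these prefixes, a combined log is easy to split or filter programmatically. A minimal Python sketch (the input file name is a placeholder):

import re

# Split a combined log into one file per prefix, e.g. FLTR, SQL, ACTL.
PREFIX = re.compile(r"^<(....)>")  # four-character tag between angle brackets

streams = {}
with open("combined.log", "r", errors="replace") as log:
    for line in log:
        match = PREFIX.match(line)
        tag = match.group(1).strip() if match else "OTHER"
        streams.setdefault(tag, []).append(line)

for tag, lines in streams.items():
    with open(tag + ".log", "w") as out:
        out.writelines(lines)
    print(tag, len(lines), "lines")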

All logs begin with a line indicating the log has begun, and end with a line indicating it has been disabled.
These look something like the following:

<FLTR> /* Thu May 17 2001 10:52:03.9740 Demo */


<FLTR> Filter Trace Log -- ON
….
<FLTR> /* Thu May 17 2001 10:52:03.9840 Demo */
<FLTR> Filter Trace Log - OFF

After this, they will include a line indicating the start of some operation. For example, if a user were to
create a new request, there would be a line indicating the beginning of this operation:

Start filter processing -- Operation - CREATE


or
Start active link processing -- Operation - On Window Open

And a line indicating the end of that operation:

Stop filter processing


or
Stop active link processing - On Window Open

Note that in the Active Link/Workflow logs, the end line gives an indication of which operation is ended,
whereas the server side logs just indicate the end of the current operation.

Immediately following the line beginning the operation are two or three lines with additional information
about the operation. Below are three examples. The first is from a filter operation, selecting request # 7
from the AR 4.5 Sampler form. The second is an active link log segment, recorded when opening the
base_ReportTemplate form in a "Create a New" window. The third is another active link log, but on the
operation of selecting an item from a character menu. This snippet tells us the name of the field with the
attached menu, and indicates the menu is accessed when the form is open in a Create a New window.
As we'll discuss below, information such as the form name, user name, and the Request ID will help us
find the transaction of interest, by keyword searching for these values in the log.

<FLTR> Start filter processing -- Operation - GET


<FLTR> AR 4.5 Sampler - 000000000000007

<ACTL> Start active link processing -- Operation - On Window Open


<ACTL> For Schema - base_ReportTemplate
<ACTL> On screen type - CREATE

<ACTL> Start active link processing -- Operation - On Menu Choice


<ACTL> For Schema - base_ReportTemplate
<ACTL> Linked to field - Primary Audience (600001643)
<ACTL> On screen type - CREATE

In between the start and end lines of the operation, you will see each workflow object defined to execute
on this operation being checked, in the sequence defined by their execution orders. In filter logs, you will
also see filters that are disabled, but in workflow (active link) logs, disabled active links are not reported
in the logs.

For each workflow object checked, the log will report PASSED if the Run If qualification is true, and will
show the If actions performed. If the Run If qualification is not met, the workflow will be reported as
FAILED qualification, and any Else actions defined will be performed.

Example, from filter log:

Checking CreateMessage (500)


Passed -- perform actions

And from workflow log:

Checking base_Report: Window Close (0)


Passed qualification -- perform if actions

There can be multiple If Actions, or Else Actions, or both, defined for a filter or active link. In the log,
they will be listed in the order defined in the workflow, with the first one numbered zero. Below is an
example of an Active Link that performs three actions: a Set Fields action, a Push Fields action, and a
Change Field (Set Characteristics) action.

<ACTL> Checking SeveralIFActions (0)


<ACTL> -> Passed qualification -- perform if actions
<ACTL> 0: Set Fields
<ACTL> Assigned To (4) = Demo
<ACTL> Status (7) = 0
<ACTL> 1: Push Fields
<ACTL> To Schema User on Server @
<ACTL> 2: Set Characteristics
<ACTL> For field -- Short Description (8)
<ACTL> Change field to read-only

There are two important things to note about the log example above. First, it shows the actual values
set in the Set Fields action. This is information you cannot get by looking at the Active Link itself, since
the value might be set from a user-entered field, or from a different record than expected.

Second, the partial log above does not tell you where the data was pulled from, or how exactly the
Set Fields action is defined. The Set Fields action may have been hard-coded to the value Demo, or it
could have been defined to be populated from a keyword, a user defined field, or a field on another form.
It also doesn't tell you whether a pick list was presented to the user because multiple entries matched,
and the user selected one with a value of Demo.

Thus, making appropriate log files does not entirely replace looking at the workflow definitions, but it
does help direct your investigation by recording what happened and with what data.

Sequence of operations

Below is an example of a filter log showing a filter with several If Actions. Filter logs are more
involved than Active Link logs, because filter actions occur in multiple phases. (See ARS 4.5 Server
Administrator's Guide, Ch 5, for more information.) In the example, you can see that the Push Fields
action is defined first in the filter, but the action is actually performed after the Set Fields and Message
actions.

Checking FilterWithSeveralActions (500)


--> Passed -- perform actions
0: Push Fields
<deferred to phase 2>
1: Set Fields
Assigned To (4) = Myself
2: Message
Your request has been processed
/* Sun Jun 10 2001 15:14:51.8000 */ End of filter processing (phase 1)
/* Sun Jun 10 2001 15:14:51.8000 */ Restart of filter processing (phase 2)
/* Sun Jun 10 2001 15:14:51.8000 */ 0: Push Fields
<deferred from filter FilterWithSeveralActions>
Email Address (103) = youraddress@yourcompany.com
/* Sun Jun 10 2001 15:14:51.8000 */ Start filter processing -- Operation - SET
User - 000000000000001

The example above is simple, because there is just one filter defined on this form. In real world
situations, the first instance of "0: Push Fields" and the second instance of it, where the action is
performed in Phase 2, may be separated by several pages of other filters executing, and in Phase 2,
there may be several deferred Push Fields actions.

There are a few tricks you can use to make examining the log easier. First, note the line:
<deferred from filter FilterWithSeveralActions>
when the Push Fields action is performed. Another thing you can search for is
0: Push Fields
though this may instead find another filter that has a Push Fields action defined in it.
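
This pairing can also be automated. A short Python sketch, assuming the deferred-action markers look like the excerpts above, prints every line mentioning a deferral along with its line number so the phase 1 and phase 2 halves can be matched up:

# Locate deferred filter actions and where they are later performed.
with open("arfilter.log", "r", errors="replace") as f:
    for number, line in enumerate(f, start=1):
        if "<deferred to phase 2>" in line or "<deferred from filter" in line:
            print(f"{number:6}: {line.rstrip()}")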

Choosing the right Text Editor to view log files

All of the log files described above are text-based, and you can use any text editor to view them.

With experience, you'll find there are several features in a text editor that are useful in viewing log files:

Enable/disable word wrap - The server side logs in particular are very long, so if you have word wrap
selected in your viewer it may be hard to see what is going on in the log, as a single line will wrap in your
window to be two or three. Typically, it's preferable to disable word wrap and scroll to the right to see
the other information. The ability to specify a smaller font may also be useful, so you can fit more per
line.
Keyword search both forward and backward - Notepad allows you to easily search up or down in the file
from your current selection, as does MS Word. Wordpad only allows you to search down.

Handling of LF characters - Log files generated on a UNIX server have line feed characters, rather than
CRLF characters, to separate lines. If you copy that log file to a Windows system and view it in Notepad,
the carriage return is not added, and the lines are not separated. Viewing the file in Wordpad makes it
easier to read. If you copy/paste it into MS Word, and then copy/paste it from Word, the CR characters
are added so you can "convert" it to a format easily viewed in other text editors.
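
Rather than round-tripping through Word, a couple of lines of Python will rewrite a UNIX log with Windows line endings; the file names here are placeholders:

# Convert a UNIX (LF) log file to Windows (CRLF) line endings.
with open("arfilter.log", "r") as src:            # universal newlines: all become "\n"
    text = src.read()
with open("arfilter_crlf.log", "w", newline="\r\n") as dst:
    dst.write(text)                               # each "\n" is written as "\r\n"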

How it handles the file in memory - when viewing huge log files, you may want to use Wordpad, or an
editor that handles memory effectively, rather than one that loads the whole file into memory.

Bookmarks and color coding - Programmers' text editors often include the ability to add bookmarks or
define strings to be displayed in a different color. This may be helpful when viewing large log files, by
having Start and End operations in a particular color.

Things to do to make logs more helpful

When making logs to investigate problems or scope changes, there are several things you can do to
make them easier to examine. One is to keep them small, running the logs only long enough to capture
the targeted behavior. Server side logs can become large in a short period of time, so enabling logs after
hours or at a time when other user activity is minimized is preferable. The new ability to enable server-
side logging from Remedy User 4.5 is a big help in this, allowing you to limit server logs to activity
generated by a single user. When doing this, remember to leave the logs on long enough to capture the
behavior, confirming its completion before disabling the logs. A new tool available with ARS 4.5x allows
you to look at log files in real time, to facilitate this. See the "Server Administrator's Guide", page 2-19,
for a description of using arlogdisplay.exe.

Second, make a note of the request ID, the login name of the user, and the steps performed when the
logs are recording. This will make it much easier to search for the relevant part of the logs.

On ARS Server versions prior to 4.5, separate log files are generated for each server process, using the
RPC numbers. For example, enabling filter logging on a server with MPSO (multiple fast and list servers
enabled) may generate several log files in the same location, named arfilter.log, arfilter.log.390621,
arfilter.log.390635, etc. Remember to search/send all of these log files, as the activity may be distributed
across multiple server processes; examining all of them is necessary to verify no information is lost.
When convenient, you may want to temporarily disable MPSO (disable all fast and list servers)
before making these server side logs on servers earlier than version 4.5. Typically, that is only practical
on test servers.
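
To make sure no per-process file is overlooked, you can search them all at once. A Python sketch, assuming the arfilter.log naming shown above:

import glob
import sys

# Search every per-process filter log (arfilter.log, arfilter.log.390621, ...)
# for a keyword, so activity split across server processes is not missed.
keyword = sys.argv[1] if len(sys.argv) > 1 else "Push Fields"

for path in sorted(glob.glob("arfilter.log*")):
    with open(path, "r", errors="replace") as f:
        for number, line in enumerate(f, start=1):
            if keyword in line:
                print(f"{path}:{number}: {line.rstrip()}")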

Finally, make a note of the behavior you observe. Error messages and text may be helpful in searching
the log. If there is a difference in behavior in two different situations, such as different users or different
workstations, make two sets of logs, and note which went with which scenario. This is information that is
easy to capture at the time you are making the logs, but difficult to remember afterwards, so it's good
practice to jot down notes as you go.

Using logs to diagnose a few common problems and questions

A few common scenarios where log files are useful to diagnose or investigate functionality include:

Scenario 1: A field is being set to a particular value on submit, but you are not sure why, or where the
value is being pulled from.
Using logs: If the update to the field appears as soon as the request is saved, it may be a filter or an
active link that makes the update, but most likely not an escalation. (If the change only appears several
minutes later, then it very likely could be an escalation.) To investigate: enable both filter and active link
logs, and recreate the behavior. In searching the logs, you may search for the name of the updated field,
the field ID, or the value. Remember that selection lists or option fields will show the value as the
integer position; for example, it'll be set to 0 for the first value.
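
Searching for the field name, field ID, or value with a few lines of surrounding context quickly shows which "Checking" block the change falls under. A sketch; the log file name and search term are placeholders:

from collections import deque

TERM = "Assigned To"   # placeholder: field name, field ID, or value to find
CONTEXT = 4            # lines of context to show before each match

recent = deque(maxlen=CONTEXT)
with open("combined.log", "r", errors="replace") as f:
    for line in f:
        if TERM in line:
            for previous in recent:
                print("   ", previous.rstrip())
            print(">>>", line.rstrip())
        recent.append(line)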

Scenario 2: An error or warning message appears when you try to make a certain modification to a
request, indicating you must update some other field. You want to determine the source of this error,
and may want to disable it.

Using logs: The message is given by a Message action. As with the previous example, it can be given by
a filter, active link, or escalation, and the most likely candidates are the first two. Enable both filter and
active link logging, recreate the behavior, and search the logs for either the error message, or ":
Message" (that is "colon space Message").

Scenario 3: You want to customize or mimic the behavior of the application in one place, and want to
know what is involved.

Using logs: Since filter and active link logs show you the workflow defined on the operation, even if it
does not pass qualification, they give a more comprehensive view of what functionality may be involved
in your change. If you enable logs and see that there is significant workflow behind the operation, you
can better determine whether the change is a simple one, or more of a project.

Ref:
See the "Server Administrator's Guide, Action Request System 4.5" Chapter 2 for additional information
and example of log files.
Knowledge Base 6216:
A checklist for unit testing application customizations

When changing or adding functionality to applications, an important step in the process is to consider the
impact of unexpected data or user behavior, so you can design the appropriate reaction and develop unit
tests which exercise them. That is, put yourself in the role of a quality assurance engineer, and think
about different scenarios and corner cases, which might get you into trouble. In this process, you may
decide some corner cases are too unlikely, or too innocuous to worry about, but your checklist can be a
valuable tool to make sure you didn't forget any that are important. Below are some suggestions.

Unexpected/invalid data - What should happen if a field used in workflow is blank, or the wrong kind of
data, or too long? If the process is user-interactive, you may want to design your workflow to first check
for acceptable values, and give an error to the user otherwise. In other cases, a NULL value is
acceptable, and the qualification of workflow is defined to not execute when the value is NULL.

Users populate fields in other ways - For example, you may define a dropdown character menu to assist
users in selecting a value, but what should happen if they instead type in a value, or select an item from
a pick list (assuming you've added one)?

Users access the data in different ways - Do your changes assume the user will access requests in a
particular way, such as from a shortcut, by querying for a request ID, or opening it from a “control panel”
type form? If so, what happens if they access them in other ways?

Users change data manually - Can users cause problems by changing data manually, after selecting it?
For example, what would happen if a user selected a name from a pick list, then saw that it was
misspelled, and manually updated it? Is that a scenario that you'd rather handle by appropriate user
training, or should you protect against it?

If one match? Multiple matches? No matches? - In cases where data is accessed or updated from other
forms, how should you handle these different scenarios? Your choices are easily configured with the
workflow object, but you may want to add this to your checklist to remember to double check these
settings, or set up unit tests to verify proper behavior.

Effect on different user communities - If your application supports multiple user communities, for
example, users who submit tickets, users who work them, and managers who track metrics, which of
these groups should be affected by your changes? Even if your modifications only target one of these
groups, you may want to include unit tests to verify no adverse behavior to the others.

Effect on different clients? Platforms? Versions? - Should your modifications only apply to certain clients,
or all clients? If you use web clients, such as RemedyWeb, ARWeb, or an application-specific web client,
you may want to unit test them. Also, if you have different versions of the clients deployed, you may
want to check the behavior in them. You can use keywords such as $VERSION$ to have workflow
distinguish between different versions.

Impact on email template or automatic submissions via API programs - Should your changes apply to
these cases? If they should, remember to implement changes in filters instead of active links. If they
should not, you can use some flag field set by these mechanisms, to have workflow distinguish and
exclude these non-interactive submissions.

Effect on integrations - If you have integration to other applications, servers, or systems, you may want
to include a test to ensure your changes do not adversely affect it.

Existing data - Test with some existing requests, to see that no problems exist here. Remember that
when you create a new field on an existing form, all the existing records will have a NULL value for this
field. You may want to define a one-time escalation to update those records with a default value, if
appropriate.

What happens when an item is deleted? Archived? Reopened? - This is similar to the case of existing
data. You may want to consider cases like this in your unit testing.

Timing considerations - This is a generic testing approach, but does not come up too much in Remedy
development. You can specify an execution order in workflow, so paying attention to this during
implementation usually prevents this from being an issue.

Frustrated user testing - If there are steps in your application which may take a bit of time, you may
want to add a unit test where you act like a frustrated user, clicking on it repeatedly, just to see what
happens. This is just for your own information, to see if strange behavior or confusing behavior results.
That way you can note it before it is encountered in the field.

Slow system testing - Another good test, if some users have slower systems or small amounts of
memory.

Performance impact - This is usually best handled in design, rather than unit testing. By using efficient
queries, or considering the effect of big queries, you can avoid many performance issues that might be
hard to track down later. A performance issue is more likely to appear when there is a lot of data, so if
you are making modifications to a system where there are already many requests, you may want to add
selected unit tests to verify your design accounts for the performance concerns.

Knowledge Base 6196:
Suggestions for rolling out application customization

Q: What practices are recommended to deploy application changes?

How you deploy modifications to your applications can have a significant influence on how users receive
the changes and whether new features are fully exercised. Below are some suggestions you may want to
consider to ensure a smooth and successful deployment of application customizations. You may also
want to use this process, perhaps in a more limited sense, when making configuration changes to
applications when the changes would impact existing users.

Note: Investigating, implementing, and testing application customization are natural antecedents to
deployment, but these topics are discussed in other places.

First and foremost, deployment of application changes should be scheduled rather than done on the fly.
The process itself may be quick and easy, but the point of scheduling deployment is in preparation, and
time for contingencies if problems are encountered. As some experienced administrators will attest,
separating the steps of planning and implementation can in itself lead to more thorough preparation, and
fewer hiccups in the process.

Typically, changes are deployed after hours or late in the day so the impact on users will be minimized,
and you'll have plenty of time for unit testing, troubleshooting, or disabling changes if any unforeseen
problems arise. If you have users who work late in the day, announcing the downtime beforehand is
desirable, and can be done at the same time as notifying users of upcoming changes, as discussed in
greater depth below.

At the time that you deploy the changes, you should already have a breakdown of what items need to be
created/migrated to the server to implement the changes. This is information that should be
discovered in the process of scoping the changes on your development server, and documented in a
design document. For example, you may have a requirement to add a notification to senior management
in a particular case. Your design document may be just a summary of the requirement, relevant
assumptions or notes, and a list of the items that implement the change, including filters, active links,
fields, etc. This list should include the names of the objects to be created/migrated, so that you do not
forget any required components during deployment. Remember to use a naming and numbering
convention for your objects, so you can easily distinguish them from out of box objects.

After making the changes to the application, be sure to unit test the modifications. This is again a case
where having a set checklist ahead of time is preferable. Arguably, the best time to create unit tests is
during the design phase of planning the changes. This is the time when you can think broadly about the
effect of the changes on different users, different aspects and uses of the application, and what can go
wrong with the changes, and work through some scenarios. This preparation ahead of time, again, leads
to more efficient and comprehensive execution during your deployment.

A frequently neglected step in the deployment process is communicating the changes to the affected
users. Ideally, this information should be available prior to deployment or at least, at the time the users
will first encounter the changes. This allows them time to review the changes, and raise questions or
concerns when you can best address them. Without such notification, users may encounter the changes
at differing times, potentially seeing them as glitches in the system, or misinterpreting their intended
purpose.

To facilitate a complete and timely message, it's usually desirable to compose it beforehand rather than
late in the day after deployment. You can always revise it if appropriate prior to posting.

Typical content may include a brief description of the intent of the changes, specific examples of how to
access the new or changed functionality, and maybe a FAQ type summary of anticipated questions. This
may also include a summary of organizational or procedural changes associated with the use of the
application.

Two means of distributing this information to users are common - a broadcast email to user
constituencies, or posting a link or message within a form of the application itself. On the former,
maintaining a current list of users and their email addresses is greatly simplified if the email message can
be distributed from the AR System. This assumes that users do not login as guest accounts and their
preferred notification mechanism is Email.

On the latter approach, of posting a message or link in the application itself, this can be done in Remedy
Administrator by adding a link or message area to a commonly used form, which can be updated with the
latest updates on the application. Advantages of this approach are that the message will be accessible
even by users who do not use email, and the information will be readily accessible from the application
itself, when the information is needed. If an intranet web page is maintained for internal IT information,
adding a link to a web page is an attractive alternative, as it makes the information readily accessible,
can be referenced and updated easily, and can incorporate pictures and graphics to illustrate the
changes.

Finally, after you've completed the deployment, don't forget to update your change log, where you record
your changes to the server. Keeping an accurate change log is important in troubleshooting, planning for
future changes, and for bringing aboard new Remedy Administrators. If you've followed the suggestions
above, the entry into the change log is straightforward. You should include the information from your
design doc on the change and list of components, your standard checklist of unit testing after the
change, and optionally the notification sent out to your users to announce the change. Don't forget to
include the date of the change, and any issues that came up during deployment.

Knowledge Base 6161:
Setting up a dev server, how and why

Q: How can I set up a development or test server without costing a fortune? Why is it important?

A sound recommendation when customizing or troubleshooting applications is to first test changes on a
development server. In some cases, a development server is not available, making a simple matter much
more risky and difficult to perform. The paragraphs below examine some obvious and not-so-obvious
advantages to having a test server, and some cost considerations in setting one up to meet your needs.

First, on the advantages of keeping a development server. Anyone who has made a simple mistake that
had far-reaching consequences can attest to the necessity of having a test server to prototype changes
and test routine tasks. A good example is writing an escalation or business rule that updates requests or
sends notifications to managers, in which case misstating the qualification can cause a mass update or a
flood of emails to affected parties. Testing it out first would avoid such errors.

Less obvious reasons to have a test server include more productive workdays, efficient troubleshooting,
and better receptivity to changes in the user population. A test server allows you to make changes
during regular business hours without impacting users, and while colleagues and technical support are
available if problems or questions arise. Troubleshooting a problem on a test server is greatly simplified
by having minimal user traffic during logging, and without waiting until after hours. Furthermore, testing
changes on a development server first allows for more successful rollout of new functionality to users,
since it can be fully scoped and implemented, and introduced in a way to minimize confusion. Finally, a
test server is a required component of a healthy administrative routine of planning and preventative
maintenance.

The ideal situation, if cost were no object, would be a test server that was a duplicate of the production
server, with the same data, applications, hardware, and simulated user load. That would provide a very
accurate test bed for changes, as well as investigating performance tuning, upgrade plans, and validating
a backup/restore process.

But for minimal cost, you could instead set up an unlicensed Remedy server on a Windows NT/2000
workstation, either on the administrator's desktop or a reasonable Win-tel system, using an unlicensed
database and the same application versions as installed on the production system. Though this system
would have limitations, it would satisfy the need for the most common use of a test system -
investigating and prototyping changes to application workflow.

At present, the AR System Server and all of the Remedy Applications can be installed and used without
licenses. There are several restrictions with this: No more than three fixed user licenses can be assigned,
no more than 2000 records can be entered per form, and you cannot define multiple server processes.
The impact of this, though, is that you cannot simulate production levels of concurrent user activity, data
load, or use some of the load-balancing and scalability features of the server and applications.

Similarly, several database vendors offer products that can likewise be used in non-production
environments, with restrictions in performance, data load, and available features. Personal Oracle or SQL
Server Desktop Edition may be an economical avenue to setting up a test server. See your database
vendor license agreement and documentation for details on restrictions, requirements, and limitations.

In practice, a viable test server with the entire suite of ITSM or CRM applications, the database, and all
client tools can be installed on a typical administrator's desktop, if that system is a fairly recent system
running Windows NT/2000, preferably with 512MB or more RAM. Depending on the other purposes of
the system, acceptable response time, and productivity considerations, a better system may be
warranted.
In planning what expense and effort to put into a development system, consider the following:

It's best to match versions of software installed on the production server, except when testing upgrades.
Minimize differences between production and dev environments when possible. To this end, you may
differ from the above-mentioned minimum test server if your production server is on a UNIX platform, for
example.

If the database version used in production and development servers is the same, you can also use the
system for testing your backup/restore process. This too may warrant a deviation from the minimum
test server approach described above.

If your testing needs require you to exceed the 3 fixed user write licenses, or the 2000 requests per form
limitation of the unlicensed Remedy server, you can request temporary evaluation licenses from
Remedy. If you suspect this may be a future use of the test server, you may want to select hardware
that can support such testing, and investigate whether similar temporary licenses are available from the
database (DBMS) vendor.

Use naming conventions that make it easy to distinguish between production and development servers.
Always have client tools prompt for login, and you may want to use different account information for
production and development servers, to minimize the likelihood of accidentally making changes to the
production server.

If administration responsibilities may be shared with a backup Remedy administrator, then you may want
to put the test server on a separate system from the administrator's desktop or create a Windows login
for the backup administrator so they would have access to the test server during the administrator's
absence.

Remedy Migrator is a separately sold product that facilitates easy movement of data and workflow
between a test and production server, and can report a list of differences in objects on the two servers. You
may want to go cheap on the dev server hardware and budget for this tool or something similar if
workflow changes will be the primary use of the dev server, or alternatively, invest in a more powerful
test server and choose not to use Remedy Migrator, if backup/restore operations, upgrade planning, and
performance testing are expected to be the primary purpose of the test server.

Remedy, a BMC Software company. Remedy, the Remedy logo, and all other Remedy product or service names are registered trademarks or trademarks
of BMC Software, Inc.
© 2003 BMC Software, Inc. All rights reserved.
