OpenText™ Archive Server
Administration Guide
AR100500-01-ACN-EN-04
Rev.: 2017-June-23
This documentation has been created for software version 10.5 SP1.
It is also valid for subsequent software versions as long as no new document version is shipped with the product or is
published at https://knowledge.opentext.com.
Tel: +1-519-888-7111
Toll Free Canada/USA: 1-800-499-6544 International: +800-4996-5440
Fax: +1-519-888-0677
Support: https://support.opentext.com
For more information, visit https://www.opentext.com
Disclaimer
Every effort has been made to ensure the accuracy of the features and techniques presented in this publication. However, Open Text Corporation and its affiliates accept no responsibility and offer no warranty, whether expressed or implied, for the accuracy of this publication.
Table of Contents
Part 1 Overview 25
Part 3 Configuration 69
Introduction
OpenText Archive Server (short: Archive Server) provides a full set of services for
content and documents. Archive Server can either be used as an integral part of
OpenText Enterprise Library or as stand-alone server in various scenarios.
“Overview” on page 25
Read this part for an introduction to Archive Server, its architecture, the storage systems, and basic concepts like logical archives and pools. You will also find a short introduction to the Administration Client and its main objects.
“Configuration” on page 69
This part also describes the preparation of the system and the configuration of Archive Server: logical archives, pools, jobs, security settings, and connections to SAP and scan stations.
Audience and knowledge
This document is written for administrators of Archive Server, for the project managers responsible for the introduction of archiving, and for all readers who share an interest in administration tasks and have to ensure the trouble-free operation of Archive Server. The following knowledge is required to take full advantage of this document:
On the basis of this information you can decide which scenario you are going to use
for archiving and how many logical archives you need to configure. You can
determine the size of disk buffers and caches in order to guarantee fast access to
archived data.
What’s new?
This version features the following:
[Figure: Archive Server architecture. Applications (Enterprise Library, Document Pipeline, SAP, and others) access Archive Server services, which store content on the storage devices.]
Archive Server
Archive Server incorporates the following components for storing, managing and
retrieving documents and data:
• Document Service (DS), handles the storage and retrieval of documents and
components.
• Storage Manager (STORM), manages and controls the storage devices.
• Administration Server, provides the interface to the Administration Client
which helps the administrator to create and maintain the environment of Archive
Servers, including logical archives, storage devices, pools, etc.
Administration tools
To administer, configure and monitor the components mentioned above, you can
use the following tools:
• Administration Client is the tool to create logical archives and to perform most of
the administrative work like user management and monitoring. See also
“Important directories on Archive Server” on page 29.
• Archive Monitoring Web Client is used to monitor information regarding the
status of relevant processes, the file system, the size of the database and available
resources. This information is gathered by the Archive Monitoring Server from
Archive Server. See also “Using OpenText Archive Server Monitoring“
on page 325.
• Document Pipeline Info is used to monitor the processes in the OpenText
Document Pipeline.
Storage devices
Various types of storage devices offered by leading storage vendors can be used by
Archive Server for longtime archiving. See “Storage devices” on page 36.
<OT logging>
Directory used for Archive Server log files.
Windows default: C:\Documents and Settings\All Users\Application Data\Open Text\var\LogDir\
UNIX default: /var/adm/opentext/log/
<OT var>
Directory used for Archive Server variables.
Windows default: C:\Documents and Settings\All Users\Application Data\Open Text\var\
UNIX default: /var/adm/opentext/
Archive Server stores only the content of documents. The metadata describing the business context of the documents is stored in Enterprise Library’s metadata repository or in the leading application. The link between the metadata and the content is the unique ID mentioned above.
Archive Server represents a large virtual storage system, which can be used by
various applications. All documents that belong to a business process can be
grouped together by the concept of a logical archive. In general, a logical archive is a
collection of documents that have similar properties.
[Figure: Archiving and retrieval. During archiving, an application writes content through the logical archive’s buffer and pool to the storage device (volumes). During retrieval, the application sends a content request, and the content is delivered from the buffer, the cache, or the storage device.]
1. Content is requested by a client. For this, the client sends the unique document ID and the archive ID to Archive Server.
2. Archive Server checks whether the content consists of several components and where the components are stored.
3. If the content is still stored in the buffer or in the cache, it is delivered directly to the client.
4. If the content is already archived on the storage device, Archive Server sends a request to the storage device, gets the content, and forwards it to the application. Content is returned in chunks, so the client does not have to wait until the complete file is read. This is important for large files or if the client reads only parts of a file.
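The retrieval steps above can be sketched as follows. This is a minimal illustration of the flow, not the actual Archive Server API; all names, data structures, and the chunk size are assumptions made for the example.

```python
# Hypothetical sketch of the retrieval flow described above -- not the
# actual Archive Server API. All names and structures are assumptions.

def retrieve(doc_id, archive_id, buffer, cache, storage, chunk_size=64 * 1024):
    """Yield the content of a document in chunks, as a client would receive it."""
    key = (archive_id, doc_id)
    # Steps 2-3: if the content is still in the buffer or cache, deliver it directly.
    for tier in (buffer, cache):
        if key in tier:
            data = tier[key]
            break
    else:
        # Step 4: otherwise fetch it from the storage device.
        data = storage[key]
    # Content is returned in chunks, so the client need not wait for the whole file.
    for offset in range(0, len(data), chunk_size):
        yield data[offset:offset + chunk_size]

# Example: content already in the cache is served without touching storage.
cache = {("A1", "doc42"): b"x" * 100_000}
chunks = list(retrieve("doc42", "A1", buffer={}, cache=cache, storage={}))
```

The generator mirrors the chunked delivery: a caller can start processing the first chunk before the rest of the file has been read.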
The logical archive does not determine where and how the content is archived. The archive settings define the general aspects of data handling during archiving, retrieval, and at the end of the document lifecycle.
[Figure: Logical archives (Archive 1, Archive 2), each configured with its own pools, buffers, and settings]
• Pool(s) to specify the storage platform and to assign the buffer(s) to the
designated storage platform(s); see also “Pools and pool types” on page 38.
• Buffer(s) and disk volumes to store incoming content temporarily; see also “Disk
buffers” on page 36.
• Storage devices and storage volumes for longtime archiving of content; see also
“Installing and configuring storage devices” on page 71.
• Cache to accelerate content retrieval. Only necessary if slow storage devices are
used; see also “Caches” on page 39.
• Retention period for content; see also “Retention” on page 103.
• Compression and encryption settings; see also “Data compression” on page 100
and “Encrypted document storage” on page 155.
• Security settings and certificates; see also “Configuring the archive security
settings” on page 112.
• An Archive Cache Server, if used; see also “Configuring Archive Cache Server“
on page 225.
Documents can be retrieved quickly as long as they are in the disk buffer; the disk buffer works as a read cache in this case. Retrieval time can increase once the content has been written to the final storage platform.
Archive Server primarily supports storage devices that offer WORM functionality,
retention handling, or HSM functionality. Depending on their type, the storage
devices are connected via STORM, VI (vendor interface) or API (application
programming interface).
Below you will find criteria for choosing between single file storage and ISO images.
ISO images
• Very small files
• Same document type
• Same lifecycle
• Bulk deletion at the end of the lifecycle
• Less administration effort
• Simple backup or migration
• Partial read access to documents
Related Topics
• “Installing and configuring storage devices” on page 71
• “Creating and modifying pools” on page 117
• “Pools and pool types” on page 38
Note: For backing up the documents stored in a pool, so-called shadow pools can be assigned to the original pool; see “Creating and configuring shadow pools” on page 122.
The same storage platform can be used in different archives with different pool
types. The following pool types are currently available:
Notes
• As HDSK pools do not use a buffer, they are not intended for use in
production systems. Use them only for test purposes.
• HDSK pools cannot have shadow pools assigned.
Figure 2-4 illustrates the dependencies between pool types and storage systems.
[Figure 2-4: Pool types and storage systems. Single file storage is handled by the Document Service, container storage by STORM; both write to storage devices such as NAS, HSM, SAN, and CAS. VI: vendor interface; FS: file system interface.]
Related Topics
• “Creating and modifying pools” on page 117
• “Installing and configuring storage devices” on page 71
2.4.5 Caches
Caches are used to speed up the read access to documents. Archive Server can use
several caches: the disk buffer, the local cache volumes and an Archive Cache Server.
The local cache resides on the Archive Server and can be configured. The local cache is recommended to accelerate retrieval actions. An Archive Cache Server is intended to reduce WAN traffic and to speed up data transfer. It is installed on its own host in a separate subnet.
Related Topics
• “Configuring caches” on page 93
• “Configuring disk volumes” on page 85
• “Configuring Archive Cache Server“ on page 225
2.5 Jobs
Jobs are recurrent tasks, which are started automatically according to a time schedule or when certain conditions are met. This allows, for example, temporarily stored content to be transferred automatically from the disk buffer to the storage device. See also “Configuring jobs and checking job protocol“ on page 141.
3.2.1 Infrastructure
Within this object, you configure the infrastructure objects required for use with logical archives.
Buffers
Documents are collected in disk buffers before they are finally written to the
storage medium. To create disk buffers, see “Configuring buffers” on page 88.
To get more information about buffer types, see “Disk buffers” on page 36.
Caches
Caches are used to accelerate the read access to documents. To create caches, see
“Configuring caches” on page 93.
Storage Devices
Storage devices are used for longtime archiving. To configure storage devices,
see “Installing and configuring storage devices” on page 71.
Disk Volumes
Disk volumes are used for buffers and pools. To configure disk volumes, see
“Configuring disk volumes” on page 85.
3.2.2 Archives
Within this object, you create logical archives and pools, you can define replicated
archives for remote standby scenarios and you can see external archives of known
servers.
Original Archives
Logical archives of the selected server. To create and modify archives, see
“Configuring archives and pools“ on page 99.
Replicated Archives
Shows replicated archives; see “Logical archives” on page 99.
External Archives
Shows external archives of known servers; see “Logical archives” on page 99.
3.2.3 Environment
Within this object, you configure the environment of an Archive Server. For example, an Archive Cache Server must first be configured in the environment before it can be assigned to a logical archive.
Cache Servers
Cache servers can be used to accelerate content retrieval in a slow WAN. See “Configuring Archive Cache Server“ on page 225.
Known Servers
Known servers are used for replicating archives in remote standby scenarios.
See “Adding and modifying known servers“ on page 215.
SAP Servers
The configuration of SAP gateways and systems to connect SAP servers to
Archive Server. See “Connecting to SAP servers“ on page 201.
Scan Stations
The configuration of scan stations and archive modes to connect scan stations to
Archive Server. See “Configuring scan stations“ on page 207.
3.2.4 System
Within this object, you configure global settings for the Archive Server. You also
find all jobs and a collection of useful utilities.
Alerts
Displays alerts of the “Admin Client Alert” type. See “Checking alerts” on page 323. To receive alerts in the Administration Client, configure the events and notifications appropriately. See “Monitoring with notifications“ on page 315.
Events and Notifications
Events and notifications can be configured to get information on predefined
server events. See “Monitoring with notifications“ on page 315.
Jobs
Jobs are recurrent tasks which are automatically started according to a time
schedule or when certain conditions are met, for example, to write content from
the buffer to the storage platform. A protocol allows the administrator to watch
the successful execution of jobs. See “Configuring jobs and checking job
protocol“ on page 141.
Key Store
The certificate store is used to administer encryption certificates, security keys, and timestamps. See “Configuring a certificate for document encryption” on page 177.
Policies
Policies are a combination of rights which can be assigned to user groups. See
“Checking, creating, or modifying policies” on page 190.
Reports
Reports contains the tabs “Reports” and “Scenarios”, which display the generated reports and the available scenarios, respectively. See “Scenario reports“ on page 239.
Storage Tiers
Storage tiers designate different types of storage. See “Creating and modifying
storage tiers” on page 137.
Utilities
Utilities are tools which are started interactively by the administrator; see
“Utilities“ on page 267.
3.2.5 Configuration
Within this object, you can set the configuration variables for:
Archive Server
Shows configuration variables related to the Archive Server. This includes
Administration Server, database server, Document Service logging, Notification
Server, Archive Timestamp Server.
Monitor Server
Shows configuration variables related to the Archive Monitoring Server.
Document Pipeline
Shows configuration variables related to the Document Pipeline.
For a description of how to set, modify, delete, and search configuration variables,
see “Setting configuration variables“ on page 241.
For a complete list including short descriptions of all configuration variables, see
“Configuration parameter reference” on page 357.
Note: The Archive Center scenario is a new feature set, which is only available
when Archive Server was installed using the OpenText Archive Center
Installer.
This part is intended for the Archive Center scenario exclusively. It covers specific
aspects that are only relevant for this scenario. Note that it does not cover all aspects
that are relevant to run Archive Server in the Archive Center scenario – you still
need to pay attention to the other parts within this guide.
For more information about the Archive Center scenario, see OpenText Archive Center
- Installation and Configuration Guide (AREIA-IGD).
New collection
When the business administrator creates a collection in the Archive Center Administration client, the following happens on Archive Server:
Note: When creating collection and data source for SAP ArchiveLink, the
course of events differs. See “SAP ArchiveLink scenario” on page 53.
1. A logical archive is created including a new single file (FS) pool and a new
buffer for the new archive. If Additional Copies are used, corresponding
shadow pools and buffers are created.
The name of the logical archive is derived from a randomly generated identifier
of the collection and an appended, consecutive number.
Note: These volumes are required to achieve a running system from within
Archive Center Administration. They should be used for demonstration or
testing only.
The created pools, buffers, and volumes adhere to the following naming
convention:
Archive: <short name>-<host identifier>-<m>
Pool: P<n>
Buffer: <short name>-<host identifier>_<m>_P<p>B
Volume for pool: <short name>_<host identifier>_<m>_P<n>_<n>
Volume for buffer: <short name>_<host identifier>_<m>_P<p>B_<n>
where <m>, <n>, and <p> are consecutive, whole numbers starting with 1. <short
name> is the tenant’s Short name, which was defined when the tenant user
group was created; see “Creating tenants” on page 195.
P1 is the original pool, P2 is the first shadow pool, P3 is the second shadow pool.
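Under the stated convention, the generated names can be sketched with a small helper. The helper is illustrative only, not part of Archive Server, and the short name and host identifier used in the example are invented values.

```python
# Illustrative helper for the naming convention above. The short name and
# host identifier are made-up example values, not real defaults.

def archive_names(short_name, host_id, m, n, p):
    """Return the names generated for archive <m>, pool <n>, and buffer index <p>."""
    archive = f"{short_name}-{host_id}-{m}"                  # <short name>-<host identifier>-<m>
    pool = f"P{n}"                                           # P<n>
    buffer = f"{short_name}-{host_id}_{m}_P{p}B"             # <short name>-<host identifier>_<m>_P<p>B
    pool_volume = f"{short_name}_{host_id}_{m}_P{n}_{n}"     # volume for pool
    buffer_volume = f"{short_name}_{host_id}_{m}_P{p}B_{n}"  # volume for buffer
    return archive, pool, buffer, pool_volume, buffer_volume

# Example: first archive, original pool (P1), first buffer and volumes.
names = archive_names("acme", "srv01", m=1, n=1, p=1)
```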
Important
Do not use the automatically created disk volumes for production
systems.
3. Write and purge jobs for the new pools are created (PoolWrite_<short
name>_<host identifier>_<m>_P<p>B, Purge_<short name>_<host
identifier>_<m>_P<p>B). They are scheduled to run every full hour by default.
Furthermore, jobs for generating statistics are available to allow accounting. These jobs are enabled depending on the operating mode. For details, see “Configuring accounting and statistics“ on page 57.
New data source
When the business administrator creates a data source (within a previously created collection), the following happens on Archive Server:
Configuration changes
All archiving services monitor their configuration and stop themselves if the configuration is changed significantly. Examples of significant changes are: disabling or deleting the collection or the data source; changing values for retention, archiving mode, journaling, archiving group, or allowed domains.
The scheduler restarts the job with the changed configuration. If the collection or data source is disabled, the service is started and immediately stops again. The jobs for deleted data sources are removed.
Email scenarios
Before making any such significant changes, make sure that the email root
directory has enough disk space left to hold all emails coming in while the
archiving service is stopped. Start the archiving job manually if necessary.
Related Topics
• “Creating and configuring shadow pools” on page 122
After the business administrator has defined the name of the archive of the SAP
system in Archive Center Administration, Archive Server subsequently creates a
logical archive of identical name.
Data source
In contrast to the other scenarios (File Archiving, Email), it is not possible to create data sources directly in Archive Center Administration. ArchiveLink data sources are created as soon as an authentication certificate is sent from the SAP system to the Archive Server.
Note: You can also send the certificate by other means, for example, by using
the putCert option of the certtool command; compare “Creating a certificate
using the Certtool” on page 172.
Subfolders
When a new data source is created in Archive Center Administration, a subfolder is added to the email root directory. Its name corresponds to the journaling email address. Within this subfolder, the following types of subfolders are created:
• Inbox folder
• Journal Processor <1..n>
• Problems folders (Problems Archive, Problems Rejected, Problems Retry)
• tmp
As new emails arrive, they are stored in the Inbox folder of the corresponding
journaling email address. To avoid having too many email files in one folder, a
subfolder structure is created which reflects the date and hour when the email
arrived, with consecutively numbered folders at the lowest level:
\emaildir\<journaling address>\<year>\<month>\<day>\<UTC hour>\<1..n>
Email distribution
If the email archiving service runs, the incoming emails are then distributed to the Journal Processor folders. From there, they are deleted after successful archiving or moved to one of the Problems folders. The subfolder structure is kept throughout, that is:
\emaildir\<journaling address>\Journal Processor 01\<year>\<month>\<day>\<hour>\<1..n>
\emaildir\<journaling address>\Problems Archive\<year>\<month>\<day>\<hour>\<1..n>
\emaildir\<journaling address>\Problems Rejected\<year>\<month>\<day>\<hour>\<1..n>
\emaildir\<journaling address>\Problems Retry\<year>\<month>\<day>\<hour>\<1..n>
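The date-based layout above can be sketched as follows. The root directory and journaling address are invented, and zero-padding of month, day, and hour is an assumption, since the guide does not show concrete values.

```python
# Sketch of the date-based Inbox subfolder layout described above.
# Root path, address, and zero-padding are assumptions for illustration.
from datetime import datetime, timezone

def inbox_subfolder(root, journaling_address, arrival_utc, seq=1):
    """Build \\<root>\\<address>\\<year>\\<month>\\<day>\\<UTC hour>\\<seq>."""
    parts = [root, journaling_address,
             f"{arrival_utc.year:04d}", f"{arrival_utc.month:02d}",
             f"{arrival_utc.day:02d}", f"{arrival_utc.hour:02d}", str(seq)]
    return "\\".join(parts)

# Example: an email arriving on 2013-12-10 at 16:14 UTC.
path = inbox_subfolder("\\emaildir", "jrn@example.com",
                       datetime(2013, 12, 10, 16, 14, tzinfo=timezone.utc))
```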
The Problems Retry folder holds emails for which archiving failed a configured
number of times while they are waiting to be processed again.
Note: You can configure the number of times to process failed emails in the
Max. number of retries variable (internal name:
EC.ECA.configuration.processor.maxretries).
The Problems Rejected folder holds emails which did not match the restrictions
configured for the SMTP server, for example, none of the recipients was in the range
of allowed domains.
The Problems Archive folder holds emails that could not be archived within the
maximum number of retries.
Storage system
As with every archiving solution, the storage system for Archive Center must be planned, installed, and configured.
Tenants
In cloud scenarios, tenant user groups must be created in Administration Client. For further information, see “Creating tenants” on page 195.
Volumes
After the business administrator has created an archive (“collection”) using Archive Center Administration, the Archive Server administrator must attach disk volumes to the pools and buffers of the new archive.
Tip: Save the administrator’s contact data in the Archive Server configuration.
This information is visible in Archive Center Administration, and the business
administrator can easily contact the Archive Server administrator.
For details, see “Configuring miscellaneous Archive Center options“
on page 61.
Jobs
Check the scheduling of the new Archive Center jobs, for example, to balance peak load. While the jobs are scheduled for typical scenarios, it can be necessary to reschedule them on your system.
Background
• “Setting up the infrastructure“ on page 71
• “Configuring archives and pools“ on page 99
If the email collection has been configured only recently, examine the emails in
Problems Rejected to make sure that they have been rejected justifiably rather than
due to a configuration error. Later, you can simply delete them.
Emails building up in the Problems Archive folder may point to problems with the
Archive Server or CMIS system. If such an issue has been identified and fixed,
reprocess the emails by moving them back into the Inbox folder.
Note: The original subfolder structure must be maintained when moving the
emails. Like the folder structure, the email file names are generated from the
year, month, day, and UTC hour of their arrival and a consecutive number. If
the file name does not match the folder structure, the email is rejected.
Tip: To reprocess the mails from a problem folder, drag the year’s folder from
the corresponding Problems folder to the Inbox folder.
Depending on the number and size of emails ingested into Email Cloud Archiving,
monitor the email root directory on a daily or even hourly basis. See also Monitoring
the Email Archiving Service.
Note: The default data sources (email, file shares) are licensed per user.
Enabling transaction logs is not mandatory in these cases.
Important
If your license is based on transactions, you must enable this option.
• Enable detailed transaction logs
If set to on, every ArchiveLink and CMIS request is logged into a transaction log file. The SYS_EXPORT_TRANSACTIONLOG job writes the transactions to CSV formatted files (one file per archive and per day).
Internal name: AS.DS.TRANSACTIONLOG
• Enable storage allocation
If set to on, the amount of storage allocated to hold user data (including copies) is collected by the SYS_SNAPSHOT_STORAGE_ALLOCATION job. When this data is exported to CSV formatted files by the SYS_EXPORT_ARCHIVE_UTILIZATION job, it is augmented with the corresponding statistics data (that is, archiving and retrieval operations).
Internal name: AS.DS.STORAGE_ALLOCATION
Note: If your license is based on transactions, you should enable this option to write CSV files. Alternatively, the corresponding information can be extracted using a database report.
Background
• “Configuring jobs and checking job protocol“ on page 141
Important
Do not change the default scheduling of the accounting and statistics jobs.
Otherwise, the accounting results can become distorted.
SYS_SNAPSHOT_STORAGE_ALLOCATION job
Purpose: Creates a snapshot of the current storage allocation
Note: Calculating the storage allocation can be an expensive operation and
should be scheduled deliberately. The job is labeled for the UTC day on which
the job is running, which can differ from the scheduled time (local server time).
Default schedule: daily at 23:55
SYS_EXPORT_ARCHIVE_UTILIZATION job
Purpose: Exports for each local archive one CSV formatted file per month
displaying the data read and written by clients and the storage allocation. Used
for volume-based licensing.
Location of the files: %ECM_VAR_DIR%/statistics/<archive>/<year>/
<month>/details/<YYYYMMDD>-<archive>.atl.csv
Format of the CSV files: date; components read; components written; bytes
read (MB); bytes written (MB); storage allocated (MB)
Note: This job should run once a day and must be scheduled after the
SYS_SNAPSHOT_STORAGE_ALLOCATION job and the first run of the
SYS_CONDENSE_STATISTICS job for a UTC day as it merges these two data
sources according to date.
Default schedule: daily at 0:20
Note: If your license is based on transactions, you should run this job daily to write CSV files. Alternatively, the corresponding information can be extracted using a database report.
SYS_EXPORT_TRANSACTIONLOG job
Purpose: Exports for each local archive one CSV formatted file per day
displaying the received client requests.
Location of the files: %ECM_VAR_DIR%/statistics/<archive>/<year>/
<month>/details/<YYYYMMDD>-<archive>.dtl.csv
Format of the CSV files: date; local time; time zone; command; archive;
docID; bytes; result; user; application; IP address; interface
Note: This job can run several times per hour to avoid a backlog of entries to
export. This job is disabled in the “On-Premises” scenario.
Default schedule: every 15 min
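Assuming the semicolons in the format line above are the actual field separator, the exported files could be read as sketched below. The field names and the sample line are invented for illustration; they are not output produced by Archive Server.

```python
# Hedged reader for the transaction log CSV format listed above,
# assuming semicolon-separated fields. The sample line is invented.
import csv
import io

FIELDS = ["date", "local_time", "time_zone", "command", "archive", "docID",
          "bytes", "result", "user", "application", "ip_address", "interface"]

def read_transaction_log(text):
    """Parse semicolon-separated transaction log lines into dictionaries."""
    reader = csv.reader(io.StringIO(text), delimiter=";")
    return [dict(zip(FIELDS, (field.strip() for field in row))) for row in reader]

# Invented sample record in the documented field order.
sample = "2017-06-23; 10:15:02; UTC+2; docGet; A1; aaa123; 4096; 200; jdoe; app; 10.0.0.5; AL"
rows = read_transaction_log(sample)
```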
SYS_CONDENSE_STATISTICS job
Purpose: Combines statistics, which are sampled per minute, to hours, days, and
months to speed up access to statistics when querying days or months.
Note: Statistics are recorded according to their UTC timestamps. The server’s time zone and daylight saving time are not taken into consideration. That is, a day from a statistics point of view is usually not identical to a calendar day in the server’s or client’s time zone.
This job should run every hour.
Default schedule: hourly (0:15, 1:15, 2:15, ...)
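The note above can be illustrated briefly: a sample taken late in the evening local time may already be labeled with the next UTC day. The timestamp and UTC offset below are invented for the example.

```python
# Illustration of the UTC-based day labeling described above: a sample taken
# at 23:30 local time (UTC-2) already belongs to the next UTC day.
from datetime import datetime, timezone, timedelta

local = datetime(2017, 6, 23, 23, 30, tzinfo=timezone(timedelta(hours=-2)))
# The statistics day is derived from the UTC timestamp, not the local date.
statistics_day = local.astimezone(timezone.utc).date().isoformat()
```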
• Enter the path to the directory in which to store the billing XML files as the
Value of the Output directory for billing reports configuration variable
(internal name: BILLING_REPORT_DIR).
For a description of how to set, modify, delete, and search configuration
variables, see “Setting configuration variables“ on page 241.
XML structure
The billing information in the XML file is structured as shown in the example.
Allowed users
Only users who belong to the <tenant>_ED group are allowed to work with Archive Center Access.
Export directory
Whenever a business administrator exports documents, the documents are temporarily saved in a local directory. By default, this directory is C:\ProgramData\OpenText\var\exports (%ECM_VAR_DIR%\exports). The path can be changed during the installation of Archive Center. You can also change the path in the configuration variable Directory for the generated EDRM-XML files (internal name: AS.AS.BIZ_EXPORT_DIRECTORY).
If you change the directory, existing exports in the old directory cannot be seen in
Archive Center Access anymore unless copied to the new directory.
Important
Depending on the scenario and usage, the exported files can become very
big. Take care to provide enough disk space for the export directory.
Further information
For details about Access, see section 13 “Working with OpenText Archive Center Access” in OpenText Archive Center - Installation and Configuration Guide (AREIA-IGD).
Procedure
• “Setting configuration variables“ on page 241
The following sections can help you to avoid and solve problems related to Email
Cloud Archiving.
10.1 Logging
Log files
Email Cloud Archiving stores log files in the <OT logging> folder. In particular, the following log files exist:
• ecaImap.log
Log file of the IMAP server responsible for serving Personal Archive emails to
users
• ecaSmtp.log
Log file of the SMTP server that receives incoming journaled emails
• ecaServiceStart_<journaling email address>.log
ecaServiceStop_<journaling email address>.log
Start and stop messages of the email archiving service
• emailarch_<data source identifier>.log
Log file of the email archiving service
Protocol file
The email archiving service keeps a record of all processed emails in <OT var>\eca. The name of this CSV file is emailarch_<data source identifier>-<date>.csv. A new protocol file is started every day.
Example:
2013/12/10 16:14:39:496,Success,302,1,3,291,0,3163,30447,<52A72EAA.40808@gmaildev.opentext.com>,jrn-processor1,
2013/12/10 16:14:39:798,Duplicate,0,1,2,455,0,3189,30446,<52A72EAA.40808@gmaildev.opentext.com>,jrn-processor1,
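Judging only from the example lines, a protocol record can be split as sketched below. The timestamp, status, message ID, and processor fields are evident from the sample; the meaning of the numeric columns is not documented in this section, so they are deliberately left unlabeled.

```python
# Hedged reader for the protocol file records shown above. Only the fields
# that are evident from the example are named; the numeric columns are kept
# as an unlabeled list because their meaning is not documented here.

def parse_protocol_line(line):
    """Split one comma-separated protocol record into its visible fields."""
    parts = line.rstrip(",\n").split(",")
    return {
        "timestamp": parts[0],   # e.g. 2013/12/10 16:14:39:496
        "status": parts[1],      # e.g. Success, Duplicate
        "numbers": parts[2:9],   # undocumented numeric columns
        "message_id": parts[9],  # RFC 822 message ID in angle brackets
        "processor": parts[10],  # journal processor that handled the email
    }

rec = parse_protocol_line(
    "2013/12/10 16:14:39:496,Success,302,1,3,291,0,3163,30447,"
    "<52A72EAA.40808@gmaildev.opentext.com>,jrn-processor1,")
```

Note that this simple split assumes the message ID contains no commas, which holds for the sample records.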
Note: For further information about Java monitoring, see the Java SE
Monitoring and Management Guide on the Oracle website (http://
docs.oracle.com/javase/7/docs/technotes/guides/management/toc.html).
Enabling JConsole monitoring
To enable JConsole monitoring for an Email Archiving Service, edit the corresponding Email Cloud Archiving job.
3. In the result pane, edit the corresponding Email Archiving job (named
EmailArchiving_<host identifier>_<m>_s<q>).
Monitored attributes
The journaling service provides the following service attributes for monitoring:
Archive Rate
Number of emails archived per millisecond
Created
Time when the service was started
LastUpdated
Time of last update of the monitoring data
ProblemsArchiveCount
Number of emails in the Problems Archive folder, that is, how many emails could not be archived
ProblemsRetryCount
Number of emails in the Problems Retry folder, that is, emails that could not be archived but that the service will retry to archive
ProblemsRejectedCount
Number of emails in the Problems Rejected folder, that is, emails that are excluded from archiving (for example, because of illegal domain names of email users)
ProcessingCount
Total number of processed emails
Processors
Number of parallel processors
TotalArchiveTime
Total time used for archiving
TotalArchived
Total number of archived emails
TotalDuplicates
Total number of emails detected as duplicates
TotalRejected
Total number of rejected emails
Related Topics
• “What happens on Archive Server?“ on page 51
Before stopping and restarting the archiving job after an out of memory error,
remove the email in question from the processing folder of the affected thread and
set it aside for reprocessing later.
• Limit the number of processing threads per Email Archiving Service by editing
the Count configuration variable (internal name:
EC.ECA.configuration.processor.count; default: 5).
• Raise the maximum Java heap size.
2. Locate the :STARTPROCESS label. Change the command below the label to,
for example,
"%JAVA_HOME%\bin\java" -Xrs -Xms128m -Xmx2048m %JMXJAVAARGS%
com.opentext.eca.scenario.ServiceControl %RMIOPTION% %RMIPORT%
%*
This will set the maximum available Java heap size to 2 GB.
3. Stop and restart the archiving job for the changes to take effect.
Note: The maximum heap size is limited by the physical memory of the
host, the number of archiving services, and the memory requirements of
Archive Server and the operating system.
Before you can start configuring the archive system, in particular the logical
archives, their pools and jobs, you have to prepare the infrastructure on which the
system is based.
1. Create and configure disk volumes at the operating system level to use them as buffers, caches, or storage devices.
2. Configure the storage device for longtime archiving. Set up the connection to
the Archive Server.
3. In the Administration Client:
• Set up the connection between the storage device and Archive Server.
• Add prepared disk volumes for various uses as buffers or local storage
devices (HDSK).
• Create disk buffers and attach hard-disk volumes.
• Create caches and specify volume paths.
• Check whether the storage device is usable.
Note: Storage devices can now be connected to Archive Server using the
Administration Client.
Storage devices are configured and administered either in the Storage Devices node or in the Disk Volumes node of the Infrastructure object in the console tree. See tables 11-1 and 11-2 below for specific systems.
Storage Devices: There are two main types of devices that are connected using the
Storage Devices node:
• Container storage: virtual jukeboxes that are managed by STORM
These kinds of devices are also called “write at-once” and are described in
“Adding a write at-once (STORM) device” on page 73.
• Single file storage: hard disk-based storage devices (“Generalized Store”, GS)
that are connected with an API.
These kinds of devices are also called “single file (vendor interface)” and are
described in “Adding a single file (VI) device” on page 79.
Disk Volumes: NAS and local hard-disk devices are administered in the Disk Volumes
node; see “Configuring disk volumes” on page 85.
Important
Released and certified storage platforms can be found in the Storage
Platforms Release Notes in the Knowledge Center (https://
knowledge.opentext.com/knowledge/llisapi.dll/open/12331031).
Note: Storage devices can only be added in the Administration Client, not
edited or deleted.
Click Next.
Note: You can also restart STORM later using the corresponding
commands in the action pane.
For example, you can create multiple virtual jukeboxes and then restart
STORM once.
Number of slots
The available storage capacity is dynamically allocated as Archive Server writes data
to the device. However, the server internally works with a fixed number of available
slots to be filled. If all available slots are used, no new data can be
written to the device, because no blank area can be found.
The internal limit is sufficient in most cases, but for large installations the
limit must be raised.
If you want to put more than 1000 ISO images (the default) into one virtual jukebox,
the DS write job will return an error (not enough blank partitions); see the
Knowledge Center (https://knowledge.opentext.com/knowledge/cs.dll/open/
15536782).
The maxslots value also specifies the size of the device's SAVE file. Lowering
the maxslots value is not supported and may lead to unexpected results!
Further information: Detailed information about configuring a CFS storage device can
be found in the dedicated guide OpenText Archive Center - Compliant File System
Installation and Configuration Guide (AR-ICF).
2. On the Settings page, enter the File system path to your device, that is the
mount path of the volume in the file system. The path is a drive under Windows
and a volume directory under UNIX/Linux.
On Windows, you can either specify fully qualified paths of the form
x:\directory\ or UNC paths like \\NASserver\win_share1.
The Archive Spawner service must be able to access the path. You might have to
run the service under a dedicated user to achieve this. If you use a drive letter,
you will have to make sure that the drive is mapped at boot time before the
Spawner service is started and will not disconnect after being idle for a while.
For the latter reason, OpenText recommends using UNC paths and not mapped
network drives with drive letters.
Click Browse to open the directory browser. Select the designated directory and
click OK to confirm.
If you enter the directory path manually, ensure that a backslash is inserted in
front of the directory name if you are using volume letters (for example,
e:\vol2).
Click Test Connection to verify your settings.
5. In the action pane, click Refresh to update the view in Administration Client.
1. Add EMC Centera as write at-once (ISO) device by following the description in
“Adding a write at-once (STORM) device” on page 73.
2. On the Settings page, enter the Connection string to your device.
Prerequisites: Follow the instructions in Section 2 “Configuring SSAM” in OpenText
Archive Center - IBM TSM SSAM Installation and Configuration Guide (AR-IDR) before
continuing.
1. Add IBM TSM SSAM as write at-once (ISO) device by following the description
in “Adding a write at-once (STORM) device” on page 73.
2. On the Settings page, enter the following:
Management class
Enter the name of the policy that defines how objects are stored and
managed in TSM.
For details, see Section 2.3 “Management classes and retention initiation” in
OpenText Archive Center - IBM TSM SSAM Installation and Configuration
Guide (AR-IDR).
OPT file
Enter the path to the OPT file defining the connection parameters for TSM
SSAM. The OPT file must be located on the Archive Server host and the
path must be a server path.
For details, see Section 1.1 “TSM client configuration files” in OpenText
Archive Center - IBM TSM SSAM Installation and Configuration Guide (AR-
IDR).
You can now attach the device; see “Configuring STORM storage devices”
on page 78.
1. Add HDS HCP as write at-once (ISO) device by following the description in
“Adding a write at-once (STORM) device” on page 73.
2. On the Settings page, enter the Connection URL
(<protocol>://<namespace>.<tenant>.<cluster>:<port>/rest/<basedir>)
and the User name (the name of the Data Access Account for the namespace).
Click Set Password and enter the (unencrypted) password for the Data Access
Account.
For details, see Section 3 “HCP HTTP connection information” in OpenText
Archive Center - HDS HCP Installation and Configuration Guide (AR-IHC).
Click Test Connection to verify your settings.
3. Optional: To change the Maximum number of slots, click Advanced.
For details, see “Number of slots” on page 74.
4. Click Next and then click Finish.
The HDS HCP device is added in the result pane.
You can now attach the device; see “Configuring STORM storage devices”
on page 78.
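For reference, a filled-in connection URL following the pattern in step 2 might look like this. The namespace, tenant, cluster, and base directory names are purely illustrative placeholders:

```
https://ns1.ten1.hcp.example.com:443/rest/archive
```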
Note: To determine the name of the STORM server, select Storage Devices in
the Infrastructure object in the console tree. The name of the STORM server is
displayed in brackets behind the device name, for example: WORM
(STORM1).
To attach a device:
2. Select the designated device in the top area of the result pane.
To detach a device:
2. Select the designated device in the top area of the result pane.
This device can no longer be accessed and can be turned off. The status is set to
“Detached”.
For details about a specific device, see the section that corresponds to your device
below.
11.1.3.1 Amazon S3
This section describes the setup of Amazon Simple Storage Service (Amazon S3) as a
storage system for Archive Server. Amazon S3 can only be used as single file (VI)
device.
Note: We assume that the Amazon S3 account has been created and configured
properly.
• Access information for your Amazon S3 account is available. You will need your
Amazon Security Credentials (Access Key ID and the corresponding
Secret Access Key); see https://aws-portal.amazon.com/gp/aws/
securityCredentials.
• The storage container (the Bucket) has been created; see https://
console.aws.amazon.com/s3.
• An IP connection between Archive Server and Amazon S3 has been established.
3. On the General page of the Add Storage Device wizard, type a name for the
new device in the Storage Device name field.
Select Amazon S3 as Storage type.
Click Next.
4. On the Settings page, browse for the path to the SSL certificates.
Specify a file holding one or more CA (i.e. root) certificates in PEM format. With
those certificates, an additional check against the server’s SSL certificate is
performed to verify the identity of the peer.
Tip: You can use the certificates provided in
<OT config AS>/gs/awss3_cert.pem.
1. In the result pane, select the Amazon S3 device you created before.
2. In the action pane, click Add Connection and enter the following:
Bucket Name
Essentially the top-level directory in which your data is stored. The
Bucket Name has several limitations. For details, see
http://docs.amazonwebservices.com/AmazonS3/latest/dev/BucketRestrictions.html.
Access Key
The Access Key ID key for your Amazon S3 account. It is part of the access
credentials for your Amazon S3 account and can be found at https://aws-
portal.amazon.com/gp/aws/securityCredentials.
Secret Key
The Secret Access Key for your Amazon S3 account.
It is part of the access credentials for your Amazon S3 account and can be
found at https://aws-portal.amazon.com/gp/aws/securityCredentials.
3. Click Test Connection.
If all settings are correct, click OK to add the connection.
1. In the lower part of the result pane, select the connection you created before.
2. In the action pane, click Initialize Volume.
3. Enter a name for the new volume and click OK.
Further information: See OpenText Archive Center - Amazon S3 Installation and
Configuration Guide (AR-IAM) for details about the configuration.
Note: We assume that the Windows Azure account has been created and
configured properly.
3. On the General page of the Add Storage Device wizard, type a name for the
new device in the Storage Device name field.
Select Windows Azure as Storage type.
Click Next.
4. On the Settings page, browse for the path to the SSL certificates.
Specify a file holding one or more CA (i.e. root) certificates in PEM format. With
those certificates, an additional check against the server’s SSL certificate is
performed to verify the identity of the peer.
Tip: You can use the certificates provided in
<OT config AS>/gs/azure_cert.pem.
1. In the result pane, select the Windows Azure device you created before.
2. In the action pane, click Add Connection and enter the following:
Container name
Essentially the top-level directory in which your data is stored. <Container>
has a minimum length of 3 characters.
Account name
The name of the Windows Azure storage account. This account must be
created using the Azure Management Portal (https://
manage.windowsazure.com/?whr=live.com#Workspace/All/dashboard).
Access Key
The Primary Access Key generated after creating the Storage Account.
1. In the lower part of the result pane, select the connection you created before.
Further information: See OpenText Archive Center - Windows Azure Installation and
Configuration Guide (AR-IAZ) for details about the configuration.
Prerequisites: Follow the instructions in Section 2.1 “Centera server” in OpenText
Archive Center - EMC Centera Installation and Configuration Guide (AR-ICE) before
continuing.
3. On the General page of the Add Storage Device wizard, type a name for the
new device in the Storage Device name field.
Select EMC Centera as Storage type and Single File as Storage strategy.
Click Next.
1. In the result pane, select the EMC Centera device you created before.
2. In the action pane, click Add Connection and enter the Connection string.
1. In the lower part of the result pane, select the connection you created before.
Prerequisites: Follow the instructions in Section 2 “Configuring SSAM” in OpenText
Archive Center - IBM TSM SSAM Installation and Configuration Guide (AR-IDR) before
continuing.
3. On the General page of the Add Storage Device wizard, type a name for the
new device in the Storage Device name field.
Select IBM TSM SSAM as Storage type and Single File as Storage strategy.
Click Next.
tsmutil tool: If not already done, set the password for the TSM SSAM client and
define the file space for Archive Server on the storage device.
1. Open a command line interface and navigate to the <OT install AS>\bin
directory.
2. Run the tsmutil program in password mode to set the password for the TSM
SSAM client.
For details, see Section 4.1 “Setting the password” in OpenText Archive Center -
IBM TSM SSAM Installation and Configuration Guide (AR-IDR).
3. Run the tsmutil program in filespace mode to define the file space for
Archive Server.
For details, see Section 4.2 “Defining file space” in OpenText Archive Center -
IBM TSM SSAM Installation and Configuration Guide (AR-IDR).
1. In the result pane, select the IBM TSM SSAM device you created before.
2. In the action pane, click Add Connection and enter the following:
Filespace name
Enter the name of the file space that you defined previously using the
tsmutil program.
Management class
Enter the name of the policy that defines how objects are stored and
managed in TSM.
For details, see Section 2.3 “Management classes and retention initiation” in
OpenText Archive Center - IBM TSM SSAM Installation and Configuration
Guide (AR-IDR).
OPT file
Enter the path to the OPT file defining the connection parameters for TSM
SSAM.
For details, see Section 1.1 “TSM client configuration files” in OpenText
Archive Center - IBM TSM SSAM Installation and Configuration Guide (AR-
IDR).
3. Click OK.
1. In the lower part of the result pane, select the connection you created before.
3. On the General page of the Add Storage Device wizard, type a name for the
new device in the Storage Device name field.
Select HDS HCP as Storage type and Single File as Storage strategy.
Click Next.
1. In the result pane, select the HDS HCP device you created before.
1. In the lower part of the result pane, select the connection you created before.
Volume name
Unique name of the volume
Mount path
Mount path of the volume in the file system. The mount path is a drive
under Windows and a volume directory under UNIX/Linux.
On Windows, you can either specify fully qualified paths of the form
x:\directory\ or UNC paths like \\NASserver\win_share1.
The Archive Spawner service must be able to access the path. You might
have to run the service under a dedicated user to achieve this. If you use a
drive letter, you will have to make sure that the drive is mapped at boot
time before the Spawner service is started and will not disconnect after
being idle for a while. For the latter reason, OpenText recommends using
UNC paths and not mapped network drives with drive letters.
Click Browse to open the directory browser. Select the designated directory
and click OK to confirm.
If you enter the directory path manually, ensure that a backslash is inserted
in front of the directory name if you are using volume letters (for example,
e:\vol2).
Volume class
Select the storage medium or storage system to ensure correct handling of
documents and their retention.
Hard Disk
Hard disk volume that provides WORM functionality or that can be
used as disk buffer. Documents are written from the buffer to the
volume without additional attributes. Use this volume class for buffers.
6. Click Finish.
Create as many hard-disk volumes as you need.
Renaming disk volumes: To rename a disk volume, select it in the result pane and
click Rename in the action pane.
Note: If you want to rename a disk volume, make sure that an existing
replicated disk volume is also renamed. Then start the Synchronize_Replicates
job on the remote server. This will update the volume names on both servers.
Procedure
3. Create buffers and caches as required (see sections below for details).
4. Create logical archive(s) with pools of type Single File (FS); see “Configuring
archives and pools“ on page 99.
Preconditions: The hard disks must be partitioned at the operating system level and
then created in Administration Client. See “Creating and modifying disk volumes”
on page 86.
Purge job
Name of the Purge_Buffer job.
Number of threads
You can change the number of threads used by the Purge_Buffer job to
improve performance (1-50 threads); default: 3.
Note: If both conditions Purge documents older than ... days and Cache
documents before purging are specified, the job runs in a way which
satisfies both conditions to the greatest possible extent. Documents that are
older than n days are also deleted even if the required storage space is
available. Conversely, documents that are more recent than n days are
deleted until the required percentage of storage space is free.
7. Schedule the Purge_Buffer job. The command and the arguments are entered
automatically and can be modified later. See “Setting the start mode and
scheduling of jobs” on page 148.
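The interplay of the two purge conditions described in the note above can be sketched as follows. This is a simplified model, not the actual Purge_Buffer implementation; in particular, the assumption that further purging proceeds oldest-first is illustrative.

```python
def purge_buffer(docs, max_age_days, required_free_pct, capacity_mb):
    """Simplified sketch of the two Purge_Buffer conditions.

    docs -- list of (age_days, size_mb) tuples currently in the buffer
    Returns the documents that remain after purging.
    """
    # Condition 1: documents older than n days are always purged,
    # even if the required storage space is already available.
    remaining = [d for d in docs if d[0] <= max_age_days]

    # Condition 2: purge further documents (oldest first, assumed)
    # until the required percentage of storage space is free.
    remaining.sort(key=lambda d: d[0])  # youngest first

    def free_pct():
        used = sum(size for _, size in remaining)
        return 100.0 * (capacity_mb - used) / capacity_mb

    while remaining and free_pct() < required_free_pct:
        remaining.pop()  # drop the oldest remaining document
    return remaining
```

For example, with a 30-day age limit and 80 % required free space on a 1000 MB buffer, a 40-day-old document is removed even though space suffices, while younger documents stay until the free-space target forces further deletion.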
Modifying a disk buffer: To modify a disk buffer, select it and click Properties in
the action pane. Proceed in the same way as when creating a disk buffer. The name of
the disk buffer and the Purge_Buffer job cannot be changed.
Deleting a disk buffer: To delete a disk buffer, select it and click Delete in the
action pane. A disk buffer can only be deleted if it is not assigned to a pool.
Replicated volumes are attached to a replicated buffer on the Remote Standby Server
in the same way.
2. Select the designated disk buffer in the top area of the result pane.
3. Click Attach Volume in the action pane. A window with all available volumes
opens.
4. Select an existing volume. The volume must have been created previously; see
“Creating and modifying disk volumes” on page 86.
Related Topics
• “Creating and modifying disk volumes” on page 86
• “Creating and modifying a disk buffer” on page 88
Note: If a buffer is attached to a pool, it must have at least one attached hard-
disk volume. Thus, the last hard-disk volume cannot be detached.
2. Select the designated disk buffer in the top area of the result pane.
3. Select the volume to be detached in the bottom area of the result pane.
2. Select the designated disk buffer in the top area of the result pane.
Job name
The job name is set during buffer creation and cannot be changed.
Command
The command is set to Purge_Buffer during buffer creation.
Arguments
The argument is set to the buffer's name during buffer creation.
Start mode
Configures whether the job starts at a certain time or after a previous job
was finished. See also “Setting the start mode and scheduling of jobs”
on page 148.
5. Click Next.
7. Click Finish.
Related Topics
• “Creating and modifying jobs” on page 147
• “Setting the start mode and scheduling of jobs” on page 148
2. Select the Original Disk Buffers tab or the Replicated Disk Buffers tab,
according to the type of buffer you want to check or modify.
3. Select the designated disk buffer in the top area of the result pane.
4. Select the volume you want to check in the bottom area of the result pane.
5. Click Properties in the action pane. A window with volume information opens.
Volume name
The name of the volume
Type
Original or replicated
Capacity (MB)
Maximum capacity of the volume
Free (MB)
Free capacity of the volume
Last Backup or Last Replication
Date when the last backup or the last replication was performed. Depends
on the type of the volume.
Host
Specifies the host on which the replicated volume resides if the disk buffer
is replicated
6. Modify the volume status if necessary. To do this, select or clear the status. The
settings that can be modified depend on the volume type.
Full, Offline
These flags are set by Document Service and cannot be modified.
Write locked
No more data can be copied to the volume. Read access is possible; write
access is protected.
Locked
The volume is locked. Read or write access is not possible.
Modified
Automatically selected when the Document Service writes to an HDSK volume.
If cleared manually, Modified is selected again with the next write access.
7. Click OK.
To synchronize servers:
2. Select the designated disk buffer in the top area of the result pane.
3. Select the disk buffer you want to replicate in the bottom area of the result pane.
Note: If you want to rename a replicated disk volume, you also have to
rename the original disk volume to the same new name. Then start the
Synchronize_Replicates job on the remote server. This will update the
volume names on both servers.
6. Click Finish.
A cache must have at least one assigned hard-disk volume. It is also possible to
assign more disk volumes to a cache and to configure their priority.
Note: Do not mix up the local cache and Archive Cache Servers. See also
“Configuring Archive Cache Server” on page 225.
[Figure: Buffers and caches with their attached disk volumes, the pools they serve,
and the Purge_Buffer activity.]
Global cache
If no cache path is configured and assigned to a logical archive, the global cache is
used. The global cache is usually created during installation, but no volume is
assigned to it. To use the global cache, a volume must be assigned. See “Adding
hard-disk volumes to caches” on page 95.
Depending on the time when you want to cache documents, you select the
appropriate configuration setting:
Enable caching for the logical archive
Caching option in the archive configuration; see “Configuring the archive
settings” on page 113.
Caching when the document is written
If the Write job is performed, documents are also written to the cache.
Caching when the buffer is purged
Cache documents before purging option in the disk buffer properties. See
“Creating and modifying a disk buffer” on page 88.
Related Topics
• “Adding hard-disk volumes to caches” on page 95
• “Creating and deleting caches” on page 95
• “Defining priorities of cache volumes” on page 96
To create a cache:
1. Create the volumes for the caches on the operating system level.
7. Click Finish.
Note: If you want to change the priority of assigned hard-disk volumes, see
“Defining priorities of cache volumes” on page 96.
Deleting a cache: To delete a cache, select it and click Delete in the action pane.
It is not possible to delete a cache that is assigned to a logical archive. The
global cache cannot be deleted either.
Related Topics
• “Adding hard-disk volumes to caches” on page 95
• “Defining priorities of cache volumes” on page 96
Caution
Be aware that your cache content becomes invalid if you change the volume
priority.
2. Select the designated cache in the top area of the result pane. In the bottom area
of the result pane, the assigned hard-disk volumes are listed.
4. Click Browse to open the directory browser. Select the designated Location of
the hard-disk volume and click OK to confirm.
5. Click Finish to add the new cache volume.
Note: If you want to change the priority of hard-disk volumes, see “Defining
priorities of cache volumes” on page 96.
Related Topics
• “Configuring caches” on page 93
• “Defining priorities of cache volumes” on page 96
To delete a hard-disk volume:
Note: If you want to change the priority of hard-disk volumes, see “Defining
priorities of cache volumes” on page 96.
Related Topics
• “Configuring caches” on page 93
• “Defining priorities of cache volumes” on page 96
Caution
Be aware that your cache content becomes invalid if you change the volume
priority.
2. Select the designated cache in the top area of the result pane. In the bottom area
of the result pane, the assigned hard-disk volumes are listed.
3. Click Change Volume Priorities in the action pane. A window to change the
priorities of the volumes opens.
4. Select a volume and click the designated arrow button to increase or decrease
the priority.
5. Click Finish.
2. Select the Unavailable Volumes tab in the result pane to list all unavailable
devices.
1. Change the password on the database. Make sure to create a secure password.
2. In the console tree, expand Archive Server > Configuration and search for the
User password of database variable (internal name: AS.DBS.DBPASSWORD;
see “Searching configuration variables” on page 242).
3. Open the User password of database configuration parameter, enter the new
password and click OK.
The password is encrypted automatically.
4. For the changes to take effect, restart the Apache Tomcat and Archive Spawner
services.
1. In the console tree, expand Archive Server > Configuration and search for the
Number of minutes to wait for reconnect variable (internal name:
AS.DBS.MAXWAITTIMETORECONNECTMINUTES; see “Searching
configuration variables” on page 242).
2. Open the Number of minutes to wait for reconnect variable and enter the time
in minutes during which Archive Server tries to reconnect to the database.
Click OK.
Before you can work effectively with Archive Server, you have to perform some
configuration steps:
• Create and configure logical archives
• Create storage tiers
• Create and configure pools
• Schedule and configure jobs
• Configure security settings
• Configure the storage system
When you configure the archive system, you often have to name the configured
element. Make sure that all names follow the naming rule:
For each original archive, you give a name and configure a number of settings:
• Encryption, compression, BLOBs and single instance affect the archiving of a
document.
• Caching and Archive Cache Servers affect the retrieval of documents (see
“Configuring archive access using an Archive Cache Server” on page 235).
• Signatures, SSL and restrictions for document deletion define the conditions for
document access.
• Timestamps and certificates for authentication ensure the security of documents.
• Auditing mode, retention and deletion define the end of the document lifecycle.
Some of these settings are pure archive settings. Other settings depend on the
storage method, which is defined in the pool type. The most relevant decision
criterion for their definition is single file archiving or container archiving.
You can, of course, also use retention with container archiving. In this case,
consider the delete behavior, which depends on the storage method and media (see
“When the retention period has expired” on page 247).
Formats to compress: All important formats, including email and office formats, are
compressed by default. You can check the list and add additional formats: in
Configuration, search for the List of component types to be compressed variable
(internal name: COMPR_TYPES (row1 to rowN); see “Searching configuration variables”
on page 242).
Pools with buffer: For pools using a disk buffer, the Write job compresses the data
in the disk buffer and then copies the compressed data to the medium. After
compressing a file, the job deletes the corresponding uncompressed file.
If ISO images are written, the Write job checks whether sufficient compressed data
is available after compression as defined in Minimum amount of data to write. If so,
the ISO image is written. Otherwise, the compressed data is kept in the disk buffer
and the job is finished. The next time the Write job starts, the new data is
compressed and the amount of data is checked again.
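The ISO write decision described above can be sketched as a threshold check against the Minimum amount of data to write setting. This is an illustration only, not product code; the function and parameter names are hypothetical.

```python
def write_job_run(buffer_mb, new_data_mb, min_amount_mb):
    """One run of the Write job for an ISO pool (simplified sketch).

    buffer_mb     -- compressed data already waiting in the disk buffer
    new_data_mb   -- newly archived data, compressed by this run
    min_amount_mb -- 'Minimum amount of data to write' setting
    Returns (data left in the buffer, list of written ISO image sizes).
    """
    total = buffer_mb + new_data_mb
    if total >= min_amount_mb:
        return 0, [total]   # enough data accumulated: the ISO image is written
    return total, []        # not enough: data stays in the buffer for the next run
```

For example, with a 500 MB minimum, a run that brings the buffer to 400 MB writes nothing, while the next run that adds 150 MB triggers a 550 MB ISO image.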
HDSK pool: When you create an HDSK pool, the Compress_<archive name>_<pool name> job
is created automatically for data compression. This job is activated by default.
By default, Single Instance Archiving is disabled. You can enable it, for example, for
email archives; see “Configuring the archive settings” on page 113.
Important
• OpenText strongly recommends not using single instance in combination
with retention periods for archives containing pools for single file
archiving (FS, VI, HDSK).
• If you want to use SIA together with retention periods, consider
“Retention” on page 103.
Excluding formats from SIA: If necessary, you can exclude component types (formats)
from Single Instance Archiving. Microsoft Exchange and Lotus Notes emails are
excluded by default because their bodies are unique, although their attachments are
archived with SIA.
2. In the console tree, expand Archive Server > Configuration and search for the
List of component/application types that are NOT using SIA variable (internal
name: AS.DS.SIA_TYPES; see “Searching configuration variables”
on page 242).
3. Open the Properties window of the configuration variable and add the MIME
types to be excluded.
SIA and ISO images: Be careful when using Single Instance Archiving with ISO images:
emails can consist of several components (for example, logo, footer, attachment),
which are handled by Single Instance Archiving. With ISO images, these components
can be distributed over several images. When reading an email, several ISO images
must be accessed to read all the components and recompose the original email.
Caching for frequently used components and proper parameter settings will improve
the read performance.
SIA for emails: For emails, archiving in single-instance mode decomposes emails:
attachments are removed from the original email and stored as separate components on
Archive Server. As soon as an email is retrieved from Content Server, Archive Server
checks whether the email needs to be recomposed. If so, the appropriate attachments
are reinserted into the email, and the complete email is passed to Content Server.
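A minimal sketch of this decompose/recompose cycle, with a content-hash dictionary standing in for Archive Server's component storage. All names here are illustrative, not product APIs; the point is that identical attachments are stored only once.

```python
import hashlib

store = {}  # content hash -> attachment bytes (single instance)

def archive_email(body, attachments):
    """Decompose: store each attachment once, keyed by its content hash."""
    refs = []
    for data in attachments:
        key = hashlib.sha256(data).hexdigest()
        store[key] = data          # identical content is stored only once
        refs.append(key)
    return {"body": body, "refs": refs}

def retrieve_email(archived):
    """Recompose: reinsert the attachments before returning the email."""
    return archived["body"], [store[k] for k in archived["refs"]]

# Two emails sharing the same attachment occupy a single stored copy.
e1 = archive_email(b"mail one", [b"logo bytes"])
e2 = archive_email(b"mail two", [b"logo bytes"])
print(len(store))  # 1
```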
Important
If you use OpenText Email Archiving or Management, do not use the Email
Composer additionally.
Configuring email (de-)composing: Composing or decomposing emails can use a lot of
memory, which impacts performance. Therefore, you can configure how large emails
are handled, as described below.
12.1.3 Retention
Introduction: This part explains the basic retention handling mechanism of Archive
Server. OpenText strongly recommends reading this part if you use retention periods
for documents. For administration, see “Configuring the archive retention settings”
on page 114.
Retention period: The retention period of a document defines a time frame during
which it is impossible to delete or modify the document.
The retention period – more precisely, the expiration date of the retention period –
is a property of the document; it is stored in the database and, if possible, also
together with the document on the storage medium.
Compliance: Various regulations require storing documents for a defined retention
period. To facilitate compliance with regulations and meet the demands of companies,
Archive Server can handle retention of documents in cooperation with the leading
application and the storage subsystem. The leading application manages the retention
of documents, and Archive Server executes the requests or passes them to the storage
system.
Retention handling: Modern storage systems support retention periods at the hardware
level. Archive Server can propagate the retention period to those storage systems.
• When the retention period has expired, the leading application has to trigger the
deletion of the document. Archive Server then triggers the purge of the files on
the storage system.
If both an explicit and a default retention period are given, the explicit period
from the leading application has priority.
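This priority rule can be expressed as a one-line resolution. A sketch only, not product code; the function name is hypothetical.

```python
def effective_retention_days(explicit_days, archive_default_days):
    """The explicit retention period set by the leading application
    wins over the default configured on the archive."""
    if explicit_days is not None:
        return explicit_days
    return archive_default_days

print(effective_retention_days(3650, 1825))  # explicit wins -> 3650
print(effective_retention_days(None, 1825))  # falls back to the default -> 1825
```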
Archive Server only reacts to requests sent by the leading application; that is why
we talk about retention handling in Archive Server. This avoids the situation in
which a leading application still has index information for documents that have
already been deleted on Archive Server.
Changing the retention settings on the archive has no influence on already archived
documents. However, it is possible to prolong the retention period using the
ArchiveLink API.
Note: As regulations can change in the course of time, you can adapt the
retention period of documents by means of a complete document migration;
see “Migration” on page 271.
Handling of add-ons: Notes and annotations can be added to a document; they are
add-ons and do not change the document itself. Components that are defined as
add-ons and that can be modified during the retention period are listed in the List
of addon components variable (internal name: ADDON_NAMES (row1 to rowN); retrieve
the variable in Configuration; see “Searching configuration variables” on page 242).
Fixed retention
The retention period is known at creation time and can be propagated to the
storage system. The storage system protects against illegal deletion: neither an
application nor Archive Server is able to delete the object on the storage system
before the retention period has expired.
Variable retention
The retention period is unknown at creation time, or can change during the
document life cycle. In this case, retention periods have to be handled by the
leading application only (i.e., the leading application sets retention to
READ_ONLY), and cannot be passed to Archive Server (i.e. no retention is set at
the archive).
Retention types: Different retention types can be applied during the creation of a
document by the leading application or by inheritance of default values on the
Archive Server (see “Configuring the archive retention settings” on page 114).
Retention behavior: The following table lists settings and their impact on the
retention behavior (see “Configuring the archive retention settings” on page 114):
Deferred archiving
Deferred archiving prevents Archive Server from writing the content from
the disk buffer to the storage system until another call removes the deferred
flag from the document.
Destroy
Destroy activates overwriting the document several times before purging.
Destroy is not available for all storage systems.
Terms used The terms storage system or storage platform are used for any long-term storage device
supported by Archive Server, such as Content-Addressed Storage (CAS), Network-
Attached Storage (NAS), Hierarchical Storage Management Systems (HSM) and
others. The term delete refers to the logical deletion of a component and the term
purge is used to describe the cleanup of content on the storage system.
Related Topics
• “Configuring the archive retention settings” on page 114
• “When the retention period has expired” on page 247
Using retention periods requires thorough planning. The storage system, the pool
type in use, and other settings (Single File, ISO, BLOBs, single instance archiving,
etc.) can influence retention handling.
Tips
• If you use retention for archives with Single Instance Archiving (SIA), make
sure that documents with identical attachments are archived within a short
time frame and the documents in one archive have similar retention
periods. See also: “Single instance” on page 101.
• You cannot export volumes containing at least one document with non-
expired retention.
• If retention periods vary widely, delete requests for the documents will be
spread over a long period. In this case, single-document storage should be
preferred.
• If documents stored within the same archive have a similar retention
period, the retention will expire within a short time window for these
documents. In this case, ISO images can be used for storage.
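The tip about similar retention periods amounts to grouping documents by retention expiry before they are written to ISO images. The following is a minimal, hypothetical sketch of that idea; the helper and its names are invented for illustration and are not part of any Archive Server API:

```python
from collections import defaultdict
from datetime import date

def group_for_iso(docs, window_days=90):
    # Hypothetical helper: bucket documents so that each candidate ISO
    # image only holds documents whose retention expires within the
    # same time window.
    base = min(expiry for _, expiry in docs)
    buckets = defaultdict(list)
    for doc_id, expiry in docs:
        buckets[(expiry - base).days // window_days].append(doc_id)
    # Return the groups ordered by expiry window.
    return [buckets[key] for key in sorted(buckets)]

docs = [
    ("A", date(2025, 1, 10)),
    ("B", date(2025, 2, 1)),   # expires 22 days after A: same image
    ("C", date(2030, 6, 30)),  # expires years later: separate image
]
groups = group_for_iso(docs)   # [["A", "B"], ["C"]]
```

With such grouping, each ISO image's retention window closes quickly once its documents expire, instead of being held open by one long-lived document.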
Retention on storage systems The following table lists the storage systems and their retention handling.
Table 12-3: Retention on storage systems
For the concrete retention support of the storage system, refer to the storage release
notes.
New: With version 10.5.0 SP1, you can also use the AutoDelete job to find and
remove documents with expired retention. For more information, see “Other
jobs” on page 144.
When the retention periods of documents have expired, whether documents can
actually be deleted depends mainly on:
• the Document deletion settings for the logical archive (see Document deletion
on page 113), and
• the maintenance level of Archive Server (see “Setting the operation mode of
Archive Server” on page 350).
Deletion behavior The following lists the deletion behavior per pool type.
ISO Images
Purging a document in an ISO image cannot be completed before all documents
on the image have been deleted. Only then can the ISO image file be purged
from the storage system.
Single Instance Archiving
Be careful when using single instance archiving (SIA) and retention periods; see
also “Retention on storage systems” on page 106.
Example: An email with an attachment is archived in 2005 with a retention period of 5
years. ISO images are used. The ISO image is stored as a file on the storage system with a
retention period that is the maximum of all documents in the ISO image. Assume the
maximum is 2010.
Another email with the same attachment is archived in 2007, also with a retention period
of 5 years.
The components cannot be deleted from Archive Server since they belong to a document
whose retention has not yet expired. However, the image file on the storage system could
be purged by tools of the storage system, as the retention period of the ISO image
expires in 2010.
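The retention mismatch in this example can be traced with a few lines of illustrative Python; the helper and the variable names are invented for illustration, not part of any API:

```python
from datetime import date

def image_retention(doc_retention_dates):
    # Illustrative model: an ISO image file carries the maximum
    # retention of all documents it contains.
    return max(doc_retention_dates)

# Email 1, archived 2005 with 5 years retention; its attachment is
# stored only once on the image (single instance archiving).
image = [date(2010, 1, 1)]

# Email 2 (archived 2007, 5 years retention) reuses the attachment that
# is already on the image, so the image retention stays at 2010 even
# though the shared component is logically needed until 2012.
needed_until = date(2012, 1, 1)
at_risk = needed_until > image_retention(image)  # storage-side purge hazard
```

This is why single instance archiving combined with retention requires that documents sharing attachments arrive within a short time frame and with similar retention periods.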
BLOB
Take care when using containers such as BLOBs. A BLOB has a retention which
is the maximum retention of all documents within the BLOB.
Single documents within a BLOB can neither be copied nor purged individually;
BLOBs can only be copied or purged as a whole.
Purge process A document or component can be deleted after the retention of the document has
expired, or if no retention has been applied.
The leading application can delete a single component or delete the document.
Deleting a document implies that all components are deleted and then the document
itself. Due to the nature of storage, deletion cannot be handled within a transaction.
Purge process
ISO, BLOB
Delete requests cannot be propagated to the storage system.
The document is deleted in Archive Server. The content remains on the storage
system until all documents on the media or container have been deleted. The
DELETE_EMPTY_VOLUMES job purges the container files on the storage
system.
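The container behavior described above can be sketched as a minimal, illustrative model; the class and method names are invented, not an Archive Server API:

```python
class Container:
    # Illustrative model of an ISO image or BLOB: deletes are only
    # logical, and the container file can be purged by a cleanup job
    # once every document in it has been deleted.
    def __init__(self, doc_ids):
        self.live = set(doc_ids)

    def delete(self, doc_id):
        # Logical deletion in Archive Server; content stays on storage.
        self.live.discard(doc_id)

    def purgeable(self):
        # Only now may the cleanup job remove the container file.
        return not self.live

c = Container(["d1", "d2"])
c.delete("d1")          # d2 still live: container must stay on storage
still_needed = not c.purgeable()
c.delete("d2")          # last document gone: container may be purged
```

The key point of the model: a container's physical lifetime is bounded by its longest-lived document, not by the document being deleted.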
Single file pools
Delete requests for the components and documents initiate a synchronous purge
request on the storage system.
The following error situation can arise: the storage system reports an error
when the document or component is to be deleted.
• For documents: The document information in Archive Server is deleted (as all
component information is already deleted).
• For components: The component information in Archive Server is deleted.
Note: This is new for versions from 10.0 on. In former versions, the
leading applications received an error message and the component
information was not deleted.
Purging content In single file archiving scenarios, the content on the storage system is purged during
the delete command. Content on ISO images cannot be purged, and an additional
job is necessary to purge the content as soon as all content of the partition is deleted
from Archive Server.
The purging capabilities depend on the storage system and the pool type. The
following lists the purge behavior depending on the pool type:
ISO, BLOB
Content is purged with the DELETE_EMPTY_PARTITIONS job.
Single File (FS)
Purging is supported. Destroy is propagated to the storage
system, but not all storage systems will
execute the destruction.
Note: If the document’s retention date has changed on the original server due
to a migrate call, the new values are only held by Archive Server and not
written to the ATTRIB.ATR file, which holds the technical metadata of the
document. The ATTRIB.ATR file will only be updated if the document is
updated, for example, if a component is added on the original server or if the
document is copied to a different volume.
As soon as the updated ATTRIB.ATR has been replicated to the Remote Standby
Server, the new retention value will be known on the Remote Standby Server.
Export of volumes Export of volumes is prohibited if the volume contains document components under
retention. Exception: there is at least one logical copy of each component under
retention on another volume. This is typically the case after a VolumeMigration.
Note: Fast VolumeMigration and local backups do not create logical copies of
components.
Fast Volume Migration does not change or apply retention periods to single
documents. Only a retention period for the ISO image file is set according to the
rules listed below.
• The retention of the source image has not yet expired: The target image will
inherit the retention of the remaining period.
• The retention has already expired or was set to NONE: No retention will be
applied to the target image.
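These two rules can be expressed as a small, hypothetical helper; the function and parameter names are illustrative only:

```python
from datetime import date

def target_image_retention(source_retention, today):
    # Illustrative sketch of the two rules above. source_retention is
    # the retention date of the source ISO image, or None if it was
    # set to NONE.
    if source_retention is None or source_retention <= today:
        return None                 # expired or NONE: no retention set
    return source_retention         # remaining period is inherited

# Not yet expired: the target image keeps the same expiry date.
inherited = target_image_retention(date(2030, 1, 1), date(2025, 6, 1))
```

Since only the remaining period is inherited, the target image never carries a longer retention than the source image had.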
2. Click New Archive in the action pane. The window to create a new logical
archive opens.
Archive name
Unique name of the new logical archive. Consider the Naming rule for
archive components on page 99.
In the case of SAP applications, the archive name consists of two
alphanumeric characters (only uppercase letters and digits).
Description
Brief, self-explanatory description of the new archive.
Note: After creating the logical archive, default configuration values for all
settings are provided. If you want to change these settings, open the Properties
window and modify the settings of the corresponding tab.
General information The description of the new archive can be viewed and modified (open Properties in
the action pane and select the General tab).
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Security tab. Check the settings and modify them, if needed.
• Read documents
• Update documents
• Create documents
• Delete documents
Each permission marked for the current archive has to be checked when
verifying the signed URL. With their first request, clients evaluate the access
permissions required for the current archive and preserve this information.
With the next request, the signed URL contains the access permissions
required, if these are not in conflict with other access permission settings
(for example, set per document).
The settings determine the access rights to documents in the selected
archive which were archived without a document protection level, or if
document protection is ignored. The document protection level is defined
by the leading application and archived with the document. It defines for
which operations on the document a valid secKey is required.
See also “Activating secKey usage for a logical archive” on page 153.
Select the operations that you want to protect. Only users with a valid
secKey can perform the selected operations. If an operation is not selected,
everybody can perform it.
SSL
Specifies whether SSL is used in the selected archive for authorized,
encrypted HTTP communication between the Imaging Clients, Archive
Servers, Archive Cache Servers and OpenText Document Pipelines.
Document deletion
Here you decide whether deletion requests from the leading application are
performed for documents in the selected archive, and what information is
given. You can also prohibit deletion of documents for all archives of the
Archive Server. This central setting has priority over the archive setting.
See also: “Setting the operation mode of Archive Server” on page 350.
Deletion is allowed
Documents are deleted on request, if no maintenance mode is set and
the retention period has expired.
4. Click OK to resume.
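The idea of operation-scoped signed URLs described for the Security tab can be illustrated with a deliberately simplified sketch. This is a conceptual model using an HMAC signature; it is not the actual secKey algorithm or key handling, and all names are invented for illustration:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # stand-in only; not how secKeys derive keys

def sign_url(url, operations):
    # Bind the permitted operations into the signature, so the server
    # can later verify what the URL may be used for.
    message = (url + "&ops=" + ",".join(sorted(operations))).encode()
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def is_allowed(url, operations, signature, requested_op):
    # A request passes only if the signature is valid and the requested
    # operation is among those the URL was signed for.
    expected = sign_url(url, operations)
    return hmac.compare_digest(expected, signature) and requested_op in operations

url = "/archive/A1/doc123"
signature = sign_url(url, {"read"})   # URL signed for read access only
```

The sketch shows why an unselected operation can be performed by everybody: verification only constrains operations that are bound into the signature.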
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Settings tab. Check the settings and modify them, if needed.
Compression
Activates data compression for the selected archive.
See also: “Data compression” on page 100
Encryption
Activates data encryption to prevent unauthorized persons from accessing
archived documents.
See also: “Encrypted document storage” on page 155.
Blobs
Activates the processing of BLOBs (binary large objects).
Very small documents are gathered in a meta document (the BLOB) in the
disk buffer and are written to the storage medium together. The method
improves performance. If a document is stored in a BLOB, it can be
destroyed only when all documents of this BLOB are deleted. Thus, BLOBs
are not supported in single-file storage scenarios and should not be used
together with retention periods.
Single instance
Enables single instance archiving.
See also: “Single instance” on page 101.
Deferred archiving
Select this option, if the documents should remain in the disk buffer until
the leading application allows Archive Server to store them on final storage
media.
Example: The document arrives in the disk buffer without a retention period and
the leading application will provide the retention period shortly after. The
document must not be written to the storage media before it gets the retention
period.
Audit enabled
If auditing is enabled, all document-related actions are audited (see
“Configuring auditing” on page 335).
Cache enabled
Activates the caching of documents to the DS cache at read access.
Cache
Pull down menu to select the cache path. Before you can assign a cache
path, you must create it. (See “Creating and deleting caches” on page 95
and “Configuring caches” on page 93).
Important
After assigning a cache to an archive you must restart Archive
Server.
4. Click OK to resume.
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Retention tab. Check the settings and modify them, if needed.
No retention
Use this option if the leading application does not support retention, or if
retention is not relevant for documents in the selected archive. Documents
can be deleted at any time if no other settings prevent it.
Infinite retention
Documents in the archive can never be deleted. Use this setting for
documents that must be stored for a very long time.
Destroy (unrecoverable)
This additional option is only relevant for archives with hard disk storage.
If enabled, the system at first overwrites the file content several times and
then deletes the file.
4. Click OK to resume.
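The overwrite-before-delete idea behind the Destroy option can be sketched as follows. This is a minimal illustration for ordinary files, not the product's implementation; real storage may keep journal entries or copies that such an overwrite does not reach:

```python
import os

def destroy(path, passes=3):
    # Minimal sketch of the Destroy idea: overwrite the file content
    # several times with random data, then remove the file.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # force the overwrite to disk
    os.remove(path)
```

This also makes it plausible why Destroy is only relevant for hard disk storage: the pattern requires in-place rewriting of the stored content.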
Important
Documents with expired retention period are only deleted
• if document deletion is allowed; see “Configuring the archive security
settings” on page 112, and
• if no maintenance mode is set; see “Setting the operation mode of Archive
Server” on page 350.
Related Topics
• “Retention” on page 103
• “When the retention period has expired” on page 247
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Timestamps tab. In the Timestamps area, select one of the following
options:
Old Timestamps
Use old timestamps.
No Timestamps
No use of timestamps, i.e., Archive Server generates no timestamp for the
archived documents.
ArchiSig
Enables ArchiSig timestamp usage, i.e., an ArchiSig timestamp is generated
for the archived documents.
For a description of ArchiSig, see “Timestamps” on page 160.
4. In the Verification area, select one of the following options:
None
Timestamps are not verified. Each requested document is delivered.
Relaxed
Timestamps are verified. Each requested document is delivered. If the
timestamp cannot be verified, an auditing entry is written (if auditing is
enabled).
Strict
Timestamps are verified. Requested documents are delivered only if the
timestamp is verified.
5. Click OK to resume.
The procedure for creating and configuring a pool depends on the pool type. The
main differences in the configuration are:
• Usage of a disk buffer. All pool types, except the HDSK (write through) pools,
require a buffer.
• Settings of the Write job. The Write job writes the data from the buffer to the
final storage media. For all pool types, except the HDSK (write through) pools, a
Write job must be configured.
• Backup of documents in the pool(s) of a logical archive.
• For the ISO pool type, a backup jukebox can be created.
• Shadow pools can be created for all pool types, except for HDSK (write
through) pools. Multiple shadow pools can be assigned to a single original
pool. A Copy job copies the documents from the original pool to the shadow
pool(s) according to the current archive settings.
To determine the pool type that suits the scenario and the storage system in use, see
the Storage Platform Release Notes in the Knowledge Center (https://
knowledge.opentext.com/knowledge/llisapi.dll/open/12331031).
Background
• “Pools and pool types” on page 38
Note: HDSK pools are not intended for use in production systems but for test
purposes and special requirements. Do not use more than one HDSK pool.
3. Click New Pool in the action pane. The window to create a new pool opens.
4. Enter a unique, descriptive Pool name. Consider the naming conventions; see
Naming rule for archive components on page 99.
5. Select Write through (HDSK) and click Next.
6. Select a Storage tier (see “Creating and modifying storage tiers” on page 137).
The name of the associated compression job is created automatically.
8. Select the pool in the top area of the result pane and click Attach Volume. A
window with all available hard-disk volumes opens (see “Creating and
modifying disk volumes” on page 86).
Scheduling the compression job To schedule the associated compression job, select the pool and click Edit Compress
Job in the action pane. Configure the scheduling as described in “Configuring jobs
and checking job protocol“ on page 141.
Modifying an HDSK pool To modify pool settings, select the pool and click Properties in the action pane. Only
the assignment of the storage tier can be changed.
To create a pool:
a. Select the pool in the top area of the result pane and click Attach Volume.
A window with all available hard-disk volumes opens (see “Creating and
modifying disk volumes” on page 86).
b. Select the designated disk volume and click OK to attach it.
9. Schedule the Write job; see “Configuring jobs and checking job protocol“
on page 141.
Modifying a pool To modify pool settings, select the pool and click Properties in the action pane.
Depending on the pool type, you can modify settings or assign another buffer.
Important
You can assign another buffer to the pool. If you do so, make sure that:
• all data from the old buffer is written to the storage media,
• the backups are completed,
• no new data can be written to the old buffer.
Data that remains in the buffer will be lost after the buffer change.
Deleting a pool
If a shadow pool has been assigned to an original pool, the Delete option of the
Properties in the action pane is not available for the original pool.
Storage Selection
Storage tier
Select the designated storage tier (see “Creating and modifying storage tiers”
on page 137).
Buffering
Buffer assignment
Make sure that each buffer is assigned to one pool only (original pool or
shadow pool). Do not assign the same buffer to pools in different archives.
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring jobs and checking job protocol“ on page 141.
Original jukebox
Select the original jukebox.
Note: For some storage systems, the maximum size is not required; see the
documentation of your storage system in the Knowledge Center (https://
knowledge.opentext.com/knowledge/llisapi.dll/Open/12331031).
Note: Make sure that the size of the smallest document to be written is less
than the difference between Minimum amount of data and Maximum
volume size.
• The size of the ISO image created by the Archive Server is larger than the
Minimum amount of data value and less than the Maximum volume
size value. If an ISO image in creation does not meet this criterion, no
image is written.
• If compression is enabled for the archive, the size of the compressed
documents (components) is applicable.
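The size criterion above can be expressed as a simple check. This is an illustrative sketch; the parameter names mirror the settings described, and units are assumed to be bytes:

```python
def image_size_ok(image_size, min_amount, max_volume_size):
    # Illustrative check: an ISO image is only written if its size lies
    # between 'Minimum amount of data' and 'Maximum volume size'.
    # With compression enabled, image_size is the compressed size.
    return min_amount < image_size < max_volume_size
```

For example, an image of 500 MB passes with a minimum of 100 MB and a maximum of 1000 MB, while images of 50 MB or 2000 MB would not be written.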
Backup
Backup enabled
Enable this option if the volumes of a pool are to be backed up locally on a
second device (jukebox) of this Archive Server. During the backup operation, the
Local_Backup job only considers the pools for which backup has been enabled.
Backup jukebox
Select the backup jukebox. For virtual jukeboxes with HD-WO media, we
strongly recommend configuring the original and backup jukeboxes on
physically different storage systems.
Storage selection
Storage tier
Select the designated storage tier (see “Creating and modifying storage tiers”
on page 137).
Buffering
Buffer assignment
Make sure that each buffer is assigned to one pool only (original pool or
shadow pool). Do not assign the same buffer to pools in different archives.
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring jobs and checking job protocol“ on page 141.
Documents written in parallel
Number of documents that can be written at once.
Related Topics
• “Creating and modifying pools with a buffer” on page 119
• “Pools and pool types” on page 38
Shadow pools can be created for the following original pool types:
• FS
• ISO
• VI
Multiple shadow pools can be assigned to an original pool. The group of an original
pool and its assigned shadow pool(s) is called a pool cluster.
[Figure: pool cluster, with archive layer A1 containing original pools P1, P2, and P3]
Note: When logical archives are replicated, only the original pools are
replicated. Shadow pools assigned to the original pools are not replicated.
1. The application sends the incoming content to a logical archive. The logical
archive stores the content temporarily in the disk buffer of an original pool.
2. Write jobs copy the content from the disk buffer to the associated storage
volumes of the original pool for long-term archiving.
3. Copy jobs copy the content from the disk buffer or the attached storage volumes
of the original pool to the corresponding shadow pool(s):
• to the disk buffer of the shadow pool and then, by executing a Write job, to
the storage volumes of the shadow pool
• directly to the storage volume of the shadow pool, if the shadow pool uses an
FS-type storage volume.
The handling of Copy jobs is similar to the handling of Write jobs, except for the
error handling. The special settings for Copy jobs are described in separate
sections.
Copy jobs and copy orders Copy jobs require copy orders to copy components from the original pool to a shadow
pool. Copy orders are automatically created for documents that are newly
archived to the original pool’s buffer or storage volumes after the shadow pool was
created. However, in the following cases, specific copy orders must be explicitly
created for each document:
• The documents were archived in the storage volumes of the original pool before the
shadow pool was created.
• The documents are contained in storage volumes that were attached to the original
pool after the shadow pool was created.
The Create Copy Orders utility is provided to create the missing copy orders (see
“Creating copy orders for shadow pools” on page 130).
3. Select the original pool, which is to be backed up by a shadow pool, in the top
area of the result pane.
4. Click New Shadow Pool in the action pane. The window to create a new
shadow pool opens.
5. Enter a unique (per archive), descriptive Shadow Pool Name. Consider the
naming conventions; see Naming rule for archive components on page 99.
7. Enter the Backup and Buffering settings according to the selected shadow pool
type:
9. For FS or VI shadow pool types, select the shadow pool in the top area of the
result pane and click Attach Volume. A window with all available storage
volumes opens (see “Creating and modifying disk volumes” on page 86).
10. Select the designated storage volume and click OK to attach it.
11. Schedule the Copy job; see “Configuring jobs and checking job protocol“
on page 141.
Modifying a shadow pool To modify the shadow pool settings, select the pool and click Properties in the
action pane. Depending on the pool type, you can modify the settings.
Backup
Copy job
Enter a unique (per archive), descriptive name for the Copy job. The name can be
modified later via the Properties settings in the action pane. To schedule the
Copy job, see “Configuring jobs and checking job protocol“ on page 141.
Number of components
Maximum number of components copied during a single run of the Copy job.
Note: The Create copy orders for existing documents option is available
only when a new shadow pool is created.
Copy orders cannot be created when modifying a shadow pool’s
properties.
Buffering
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring jobs and checking job protocol“ on page 141.
Related Topics
• “Creating and modifying pools with a buffer” on page 119
• “Pools and pool types” on page 38
Backup
Copy job
Enter a unique (per archive), descriptive name for the Copy job. The name can be
modified later via the Properties settings in the action pane. To schedule the
Copy job, see “Configuring jobs and checking job protocol“ on page 141.
Number of components
Maximum number of components copied during a single run of the Copy job.
Note: The Create copy orders for existing documents option is available
only when a new shadow pool is created.
Copy orders cannot be created when modifying a shadow pool’s
properties.
Buffering
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring jobs and checking job protocol“ on page 141.
Original jukebox
Select the original jukebox.
Note: For some storage systems, the maximum size is not required; see the
documentation of your storage system in the Knowledge Center (https://
knowledge.opentext.com/knowledge/llisapi.dll/Open/12331031).
Note: Make sure that the size of the smallest document to be written is less
than the difference between Minimum amount of data and Maximum
volume size.
• The size of the ISO image created by the Archive Server is larger than the
Minimum amount of data value and less than the Maximum volume
size value. If an ISO image in creation does not meet this criterion, no
image is written.
Related Topics
• “Creating and modifying pools with a buffer” on page 119
• “Pools and pool types” on page 38
Backup
Copy job
Enter a unique (per archive), descriptive name for the Copy job. The name can be
modified later via the Properties settings in the action pane. To schedule the
Copy job, see “Configuring jobs and checking job protocol“ on page 141.
Note: The Create copy orders for existing documents option is available
only when a new shadow pool is created.
Copy orders cannot be created when modifying a shadow pool’s
properties.
Buffering
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring jobs and checking job protocol“ on page 141.
Related Topics
• “Creating and modifying pools with a buffer” on page 119
• “Pools and pool types” on page 38
Copy order utility The Create Copy Orders utility creates the copy orders if required.
There are various ways to start the copy order utility:
• Check the Create copy orders for existing documents check box when creating a
new shadow pool or attaching a new storage volume to an original pool.
• Select Archive Server > System > Utilities > Create Copy Orders.
• Select Create Copy Orders in the action pane of the original pool.
Create Copy Orders is displayed only if at least one shadow pool is defined for
the original pool.
• Select Create Copy Orders in the context menu of a volume attached to the
original pool if, at least, one shadow pool is defined for the original pool.
This option is typically used if copy orders were not created when the volume
was attached to the original pool.
Notes
• If multiple shadow pools are defined, the copy order utility creates the copy
orders for all existing shadow pools. Therefore, check Create copy orders for
existing documents only when creating the last shadow pool for an original
pool.
• Copy orders can only be created during the creation of a shadow pool. Copy
orders cannot be created when modifying a shadow pool’s properties.
• The copy order utility creates copy orders for all storage volumes of the
original pool. This may be time consuming. Always wait until the Create
Copy Orders status is FINISHED. However, working on the original pool is
possible while the copy order utility is running.
Note: If the copy order utility is started from the Attach Volume
dialog, working on the original pool is not possible while the utility is
running.
• Only one instance of the copy order utility can run at the same time.
• Restarting the server while the copy order utility is running stops the utility
before all required copy orders have been created. The copy order utility
does not resume copy order processing when the server restarts. To get all
required copy orders, you must start the copy order utility again.
1. In the New Shadow Pool window of the selected shadow pool type, check
Create copy orders for existing documents in the Backup settings.
2. Complete the Backup and Buffering settings according to the selected shadow
pool type and click Finish to create the shadow pool.
4. Click Close when the Creating Copy Orders for pool ... utility has finished.
The copy orders for all document components in the buffer and storage
volumes of the original pool have been created.
1. When the Copy job run is completed, check for copy errors.
a. Select Archive Server> System> Jobs and select the Copy job. The job
protocol shows the status of the copy errors.
• Pending copy orders are executed with the next run of the Copy job.
Run the Copy job again to clear all Pending-status copy orders.
• Failed copy orders are not executed with the next run of the Copy job.
b. To investigate failed copy orders, use the Report Shadow Copy Errors
utility (see “Report of shadow copy errors” on page 133).
2. Use the Clear Shadow Copy Errors utility to clear shadow copy errors from the
Copy job (see “Clearing shadow copy errors” on page 133).
1. Select Archive Server > System > Utilities > Report Shadow Copy Errors.
2. Enter the Archive Name and Shadow Pool Name.
3. Select the type of error report.
Note: Failed copy orders are not executed with the next run of the
Copy job.
• Detailed report of each error
Detailed report for each error, including error type, document ID, and
component name.
4. Click Run.
• Failed-status copy orders of a Copy job can be set to the Pending status.
Pending copy orders are executed with the next run of the Copy job.
• Copy orders for nonexistent components (ERROR_SOURCE_MISSING errors) can be
deleted from the Copy job; see “To delete copy orders for nonexistent
components from the Copy job:“ on page 134.
1. Select Archive Server > System > Utilities > Clear Shadow Copy Errors.
2. Enter the Archive Name and Shadow Pool Name.
3. Enter the Error Type:
• Enter a specific error type retrieved from the Report Shadow Copy Errors
report.
• Leave Error Type empty to reset all Failed-status copy orders to the
Pending status.
• Reset errors
The Failed-status copy orders of the specified Error Type are reset to the
Pending status. The Pending copy orders are executed with the next run of
the Copy job.
Note: To select Delete error, you must specify a valid error type from
the detailed Report Shadow Copy Errors report in the Error Type
field.
Caution
Delete error deletes the copy order. The copy order is no longer
executed when running the Copy job.
Use Delete error only for ERROR_SOURCE_MISSING errors, that is, to
delete copy orders for nonexistent components from the Copy job.
Contact OpenText Customer Support before deleting copy orders
for any other copy error type.
5. Click Run.
To delete copy orders for nonexistent components from the Copy job:
1. If, after re-running a Copy job, there are still Failed-status copy orders reported,
check the detailed report for ERROR_SOURCE_MISSING errors. This copy error
indicates a copy order for nonexistent components (see “Report of shadow copy
errors” on page 133).
3. Use the Clear Shadow Copy Errors utility (see “Clearing shadow copy errors”
on page 133).
Caution
Use Delete error only for ERROR_SOURCE_MISSING errors, that is, to
delete copy orders for nonexistent components from the Copy job.
Contact OpenText Customer Support before deleting copy orders
for any other copy error type.
4. Click Run.
The original pool containing the defective storage volumes is replaced by a new pool
of the same pool type. The existing shadow pool is kept as backup pool.
Note: The recovery procedure described here also works if the type of the
existing shadow pool is different from the type of the original pool.
Prerequisites
• At least one shadow pool is assigned to the original pool with the defective
storage volumes.
• Data of the original pool are contained in the original pool’s buffers and/or the
shadow pool(s).
Note: Data that are exclusively stored in the defective storage volumes, that
is, all data that are not additionally stored in a buffer or shadow pool,
cannot be recovered by this procedure and may be lost.
1. Create new storage volumes (see “Configuring disk volumes” on page 85).
a. Create new storage volumes for the recovered original pool if the pool type
is FS or VI. Do not attach the new volumes yet.
b. Create one additional local hard disk volume to be used as a temporary
disk buffer volume.
Note: If the original pool’s disk buffer is shared with other pools, for
example, in different archives, you must create spare hard disk volumes to
be used for the disk buffers of the new pool that replaces the original pool.
2. In the disk buffer of the original pool, set all storage volumes to Write locked
(see “Checking and modifying attached disk volumes” on page 91).
3. Restart the dsaux spawner service.
> spawncmd restart dsaux
4. Create a new disk buffer and attach the temporary disk buffer volume created
in Step 1b.
5. Create an additional shadow pool for the original pool (see “Creating and
configuring shadow pools” on page 122).
a. Specify the original pool’s type as pool type for the new shadow pool.
Select the Create copy orders for existing documents option (see “Creating
copy orders when defining new shadow pools” on page 131).
b. Assign the disk buffer created in Step 4 to the new shadow pool.
c. Wait until the Create Shadow Pool wizard utility has completed its run.
Assign the storage volumes created in step 1a to the new shadow pool.
d. Wait until the Create Copy Orders utility has completed its run (see
“Creating copy orders when defining new shadow pools” on page 131).
6. Copy all documents from the original pool to the existing shadow pool and to
the newly created shadow pool until no more documents can be copied. To do
so, run the following jobs:
• Purge jobs for the disk buffers of the existing and new shadow pools
7. Detach all storage volumes from the original pool (see “Detaching a volume
from a disk buffer” on page 90).
Note: For the recovery procedure, all storage volumes are considered
defective.
8. Restore the new shadow pool (created in Step 5) as original pool. Use Restore To
Original Pool in the context menu of the shadow pool.
9. If the old original pool’s disk buffer was shared with other pools: Clean up the
volumes of this disk buffer.
• Make sure that 10 minutes have passed since restarting the dsaux Spawner
service (see Step 3).
• Run the Write jobs and the Purge jobs for the pools sharing the old original
pool’s disk buffer.
10. Using the Export Volumes utility, export all hard disk volumes from the
original pool’s buffer (see “Exporting volumes” on page 250).
11. Attach the spare hard disk volumes (see step 1) to the new original pool’s disk
buffer.
12. Clean up the orphaned ds_job entries for the old original pool that no longer
exists:
a. Run clnJobs -d -x
• Business-critical
Description: Important to the enterprise, reasonable performance, good
availability
• Accessible Online Data
Description: Low access
• Nearline Data
Description: Rare access, large volumes
1. Select Storage Tiers in the System object. The present storage tiers are listed in
the result pane.
4. Click Finish.
Modifying storage tiers: To modify a storage tier, select it and click Properties in
the action pane. Proceed in the same way as when creating a storage tier.
Related Topics
Important
In case you are using Archive Cache Server, consider that a re-initialization
in secure environments can only work if the current certificates are available
on the Archive Cache Server. To avoid problems, the Update documents
security setting must be deselected before certificates are enabled; see Step 3.
To enable certificates:
1. Select the logical archive in the Original Archives or Replicated Archives object
of the console tree.
Tip: Alternatively, you can also navigate to System > Key Store >
Certificates.
2. Select the Certificates tab in the result pane.
For scenarios using an Archive Cache Server, continue with Step 3; otherwise,
continue with Step 4.
4. Select the respective certificate by its name (in the result pane).
3. In the Change Server Priorities window, select the server(s) to add from the
Related servers list on the left.
Click the button to move the selected server(s) to the Set priorities list.
4. Use the arrows on the right to define the order of the servers: Select a server and
click the up or down arrow to move the server up or down in the list, respectively.
If you want to remove a server from the priorities list, select the server to
remove and click the button.
5. Click Finish.
pagelist job: See “Configuring security settings for pagelist job” on page 145 for
further details on the pagelist job.
Command Description
Write_CD Writes data from the disk buffer to storage media as ISO images;
belongs to ISO pools.
ShadowCopy Copies the documents of an original pool to the specified shadow
pool.
AutoDelete Finds and optionally deletes all documents with expired retention;
syntax:
AutoDelete [-d <duration>] [-g <graceperiod>] <mode>
<archive>
Arguments:
• -d <duration>
Optional; max. processing time in seconds, default unlimited, min.
1s
• -g <graceperiod>
Optional; number of days since the retention has expired, default
10 d, min. 0 d
• <mode>
QUERY or Q: report number of documents to be deleted; DELETE or
D: find and destroy; REPORT or R: report deleted documents
• <archive>
Name of the logical archive
Copy_Back Transfers cached documents from the Archive Cache Server to the
Archive Server. The Copy_Back job is disabled by default and must
only be enabled for Archive Servers with “write back” mode enabled.
See “Configuring Archive Cache Server“ on page 225.
start<DPname> Starts the Document Pipelines for the import scenarios:
• Import content (documents/data) with extraction of attributes from
content (CO*),
• Import content (documents/data) and attributes (EX*),
• Import forms (FORM).
For further information, see OpenText Document Pipelines - Overview
and Import Interfaces (AR-CDP).
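As an illustration of the Arguments field described above, an AutoDelete job could
be configured like this (the archive name MYARCHIVE and the values are
hypothetical):

```
AutoDelete -d 3600 -g 30 QUERY MYARCHIVE
```

This would report (QUERY) the number of documents in the logical archive
MYARCHIVE whose retention expired at least 30 days ago, limiting the run to
3600 seconds. Switching QUERY to DELETE would actually delete them.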
The certificate is sent to the Archive Server with the putCert command or imported
with the Import Certificate for Authentication utility (see “Configuring a certificate
for authentication” on page 174). You can use the certtool utility (command line)
to create a certificate, or to generate a request to get a trusted certificate. For details,
see “Creating a certificate using the Certtool” on page 172.
Always signing URL: You can configure the pagelist job to always sign the URL.
To always sign the URL for the pagelist job:
2. Depending on the current status of the scheduler, click Start Scheduler or Stop
Scheduler in the action pane to change the status. The current status is displayed
in the first line of the Jobs tab.
To start and stop certain jobs, see “Starting and stopping jobs” on page 146.
2. Select the Jobs tab in the top area of the result pane. The jobs are listed.
4. Depending on the current status of the job, click Start or Stop in the action pane
to change the status of the job.
2. Select the Jobs tab in the top area of the result pane. The jobs are listed.
4. Click Enable or Disable in the action pane to change the status of the job.
1. To check, create, modify and delete jobs, select Jobs in the System object in the
console tree.
2. Select the Jobs tab in the top area of the result pane. The jobs are listed.
3. Select the job you want to check. The latest message of this job is listed in the
bottom area of the result pane.
4. Click Edit to check details of the job. See also “Creating and modifying jobs”
on page 147.
Pool-related Copy jobs are configured for backing up documents in a shadow pool.
See also “Creating and configuring shadow pools” on page 122. The name of a Copy
job is specified during the creation of the shadow pool and can be modified later.
To create a job:
2. Select the Jobs tab in the top area of the result pane.
3. Click New Job in the action pane. The wizard to create a new job opens.
4. Enter a name for the new job. Select the command and enter the arguments
depending on the job.
Name
Unique name of the job that describes its function so that you can
distinguish between jobs having the same command. Do not use blanks or
special characters. You cannot modify the name later.
Command
Select the job command to be executed. See also “Important jobs and
commands” on page 141.
Argument
Entries can expand the selected command. The entries in the Arguments
field are limited to 250 characters. See also “Important jobs and commands”
on page 141.
6. Depending on the start mode, define the scheduling settings or the previous job.
See also “Setting the start mode and scheduling of jobs” on page 148.
Modifying jobs: To modify a job, select it and click Edit in the action pane. Proceed
in the same way as when creating a job.
• at a certain time,
• when another job is finished,
• when another job is finished with a certain return value,
• at a certain time when a job has finished.
Start Mode
Specification of the start mode. Check the mode to define specific settings.
Scheduled
If you use this start mode, you can define the start time of the job, specified
by month, day, hour and minute. Thus, you can define daily, weekly and
monthly jobs or define the repetition of jobs by setting a frequency (hours or
minutes).
• Jobs accessing jukebox drives must not collide: different Write jobs,
Local_Backup, Synchronize_Replicates (Remote Standby Server) and
Save_Storm_Files.
• Only one drive is used for Write jobs on WORM/UDO. Therefore, only one
WORM/UDO can be written at a time. That means, only one logical archive can
be served at a time.
• Backup jobs need two drives, one for the original, one for the backup media.
The entries in the job protocol are regularly deleted by the SYS_CLEANUP_PROTOCOL
job that usually runs weekly. You can modify the maximum age and number of
protocol entries in Configuration, search for the Max. number of job protocol
entries variable (internal name: ADMS_PROTOCOL_MAX_SIZE; see “Searching
configuration variables” on page 242).
2. Select the Jobs tab in the top area of the result pane.
2. Select the Protocol tab in the top area of the result pane. All protocol entries are
listed. Protocol entries with a red icon are terminated with an error. Green icons
identify jobs that have run successfully.
3. Select a protocol entry to see detailed messages in the bottom area of the result
pane.
2. Select the Protocol tab in the top area of the result pane. All protocol entries are
listed.
14.1 Overview
Introduction: Archive Server provides several methods to increase security for data
transmission and data integrity:
Configuration and administration: The main GUI elements used for configuration
and administration of security settings include:
• The Archives node: each time a new archive is added or new pools are created,
security settings must be configured (Security tab of the Properties dialog).
• The Key Store in the System object of the console tree: used for configuration of
certificates and system keys.
Further information: You can find more information on security topics in the
“Security” folder in the Knowledge Center (https://knowledge.opentext.com/knowledge/llisapi.dll/open/15491557).
Configuration settings concerning security topics are described in more detail in
“Configuration parameter reference” on page 357, in particular:
To archive “clean” documents, you must protect the documents from viruses
before archiving; Archive Server does not perform any virus checks. To
ensure error-free operation of Archive Server, locations where documents are
stored temporarily, like disk buffer volumes, cache volumes, and Document
Pipeline directories, must not be scanned by any antivirus software while
Archive Server is using them.
Signed URLs are verified using public keys within certificates; see “Certificates”
on page 169.
If secKeys are used, the administrator must provide the necessary certificate
comprising the appropriate public key for each application, that is, send or
import the certificates comprising their public keys to the Archive Server. In
addition, the administrator must configure the usage of secKeys on the Archive
Server.
secKey usage: A secKey asserts the right of access. When a document is accessed,
Archive Server checks whether the operation requires a secKey and, if so, verifies it.
Procedure
• “Activating secKey usage for a logical archive” on page 153
• “secKeys from leading applications and components” on page 153
• “secKeys from SAP” on page 154
• “Configuring a certificate for authentication” on page 174
These signed URLs must include information on these permissions. If the secKey of
a request does not meet the permissions required by the archive, access is denied.
Each permission marked for the current archive has to be checked when verifying
the signed URL.
Activating secKey usage: Select the operations that you want to protect. Only client
applications using a valid secKey can perform the selected operations. If an
operation is not selected, everybody can perform it.
To activate secKeys:
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Security tab. Check the settings and modify them, if needed.
• Read documents
• Update documents
• Create documents
• Delete documents
4. Click OK to resume.
1. Create a certificate with the certtool utility (command line), or create the
request and send it to a trust center (see “Generate self-signed certificates”
on page 172 and “Request a certificate from a trust center ” on page 173).
Example of a result: the <key>.pem file contains the private key and is used
to sign the URL. <cert>.pem contains the public key and the certificate that
Archive Server uses to verify the signatures.
2. Store the certificate and the private key on the server of your leading
application (see the corresponding Administration Guide for details). Correct
the path, if necessary, and add the file names.
When the certificates are stored in the file system, they are recognized by
Enterprise Scan and the client programs.
Important
For security reasons, limit the read permission for these directories to
the system user (Windows) or the archive user (UNIX/Linux).
3. To provide the certificate to the Archive Server use one of the following options:
Repeat this step if you want to use the certificate for several archives.
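As a rough illustration only, a comparable self-signed key/certificate pair in PEM
format can be created with OpenSSL. This is an assumption for illustration, not the
certtool utility itself; the file names and the subject are placeholders:

```shell
# Illustration with OpenSSL, not with the certtool utility.
# Creates key.pem (private key, used by the leading application
# to sign URLs) and cert.pem (certificate with the public key,
# to be made known to the Archive Server).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem \
  -subj "/CN=LeadingApp" -days 365

# Inspect subject and validity period of the certificate:
openssl x509 -in cert.pem -noout -subject -dates
```

A certificate obtained this way is self-signed; for production, the request/trust-center
path described in “Request a certificate from a trust center” applies instead.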
Procedure
• “Activating secKey usage for a logical archive” on page 153
• “Creating a certificate using the Certtool” on page 172
• “Configuring a certificate for authentication” on page 174
Document encryption can be activated per logical archive. It is performed when the
documents are transferred to the buffer of the logical archive for temporary storage.
For document encryption, a symmetric key (system key) is used. The administrator
creates this system key and stores it in the Archive Server's keystore. The system key
itself is encrypted on the Archive Server with the Archive Server’s public key and
can then only be read with the help of the Archive Server's private key. RSA
(asymmetric encryption) is used to exchange the system key between the Archive
Server and the remote standby server.
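The key-wrapping principle described above can be sketched with OpenSSL (a
simplified illustration, not Archive Server's actual tooling; all file names are
hypothetical):

```shell
# Sketch of the key wrapping described above, using OpenSSL.
# 1. An RSA key pair stands in for the Archive Server's key pair.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out as_private.pem
openssl pkey -in as_private.pem -pubout -out as_public.pem

# 2. A random symmetric key stands in for the system key.
openssl rand -hex 32 > system.key

# 3. The system key is encrypted with the public key ...
openssl pkeyutl -encrypt -pubin -inkey as_public.pem \
  -in system.key -out system.key.enc

# 4. ... and can only be read back with the private key.
openssl pkeyutl -decrypt -inkey as_private.pem \
  -in system.key.enc -out system.key.dec
```

The asymmetric step protects only the (small) symmetric key; the documents
themselves are encrypted with the symmetric system key, which is the usual
hybrid-encryption design.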
New: From Update 2014.4 on, system keys are assigned to logical archives.
Keys per archive: Update 2014.4 changed the way system keys are used: before the
update, only one key could be active for all archives of an Archive Server; after it,
each archive uses its own key. In particular, several system keys can be used in
parallel for different archives.
Encryption for documents in HDSK pools (write through): HDSK pools do not use
a buffer. To encrypt documents, use the designated Compress_ job; see “Data
compression” on page 100.
Note: HDSK pools are not released for use in production systems. Use them
only for test purposes.
Procedure
• “Activating encryption usage for a logical archive” on page 156
• “Creating a system key for document encryption” on page 156
• “Exporting and importing system keys” on page 158
• “Configuring a certificate for document encryption” on page 177
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Security tab. Activate Encryption (mark the check box).
4. Click OK to resume.
System keys are encrypted using the encryption certificate (see “Configuring a
certificate for document encryption” on page 177).
Caution
Be sure to store the system key securely, so that you can re-import it if
necessary.
If the key gets lost, the documents that were encrypted with it can no
longer be read!
Do not delete any key if you set a newer one as current. The old key is still
used for decryption.
3. Archive Center scenario only: Define the system key folder to which the keys are
exported.
Click System Key Folder in the action pane and specify the path to the Export
folder.
You can split the contents of the key store into different files (Number of token
files, maximum: 8). Further, you can specify how many of them must be
reimported to restore the complete key store.
Notes
• Specifying the system key folder is required for the Archive Center
scenario. Business administrators can trigger the creation of a new
system key from within the Archive Center Administration web client.
In this case, the new system key is exported to the system key folder
automatically.
• Collections cannot use encryption before the system key folder is set.
4. Click Generate System Key in the action pane. A new key is generated.
5. Unless using Archive Center: Export the new system key using the recIO
command line tool and store it in a safe place (see “Exporting and importing
system keys” on page 158).
6. Make a backup of the key/certificate pair used by recIO to encrypt the system
keys:
Copy the <OT config AS>/config/setup/as.pem file and store it alongside
the exported system key in a safe place.
Important
In the case of system failure or restore scenarios it can be vital to have
backups of the system keys (and the related certificates).
7. Select the created system key and click Set as current key. A key can only be set
as current key if it has been successfully exported (see Step 5).
New documents are now encrypted with the current key, while decryption
always uses the appropriate key.
Handling for replicated archives: The Synchronize_Replicates job updates the
system keys and certificates between Archive Servers before it synchronizes the
documents. The system keys are transmitted encrypted.
If you do not want to transmit the system keys through the network, you can also
export them from the original server to an external data medium and re-import
them on the remote standby server. See “Exporting and importing system keys”
on page 158.
L
Lists the contents of the System key node (without the keys themselves) in a
table.
The user must log on.
Example:
E
Exports the contents of the System key node. Use the export in particular to
store the system keys for document encryption.
The user must log on and specify a path for the export files. The option -t NN:MM
splits the contents of the key store into MM different files (maximum: 8). At least
NN files must be reimported to restore the complete key store.
Example:
C:\Program Files\OpenText\Archive Server 10.5.0\bin>recIO E -t 3:5
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.5.0.332
IMPORTANT: -----------------------------------------------------
recIO 10.5.0.332 Copyright © 2013 Open Text Corporation
Please authenticate!
User :dsadmin
Password :
Writing keystore with 2 system-keys to 5 token-files (3 required to restore)
Token[1/5] (default = A:\ixoskey.pem )
File (CR to accept above) : Z:\share\otaskey.pem
Token[2/5] (default = A:\ixoskey.pem )
File (CR to accept above) : Z:\share\otaskey2.pem
Token[3/5] (default = A:\ixoskey.pem )
File (CR to accept above) : Z:\share\otaskey3.pem
Token[4/5] (default = A:\ixoskey.pem )
File (CR to accept above) : Z:\share\otaskey4.pem
Token[5/5] (default = A:\ixoskey.pem )
File (CR to accept above) : Z:\share\otaskey5.pem
V
Verifies the contents of the System key node against the exported files.
The user must log on and specify the path for the exported data. Then the
exported data is compared with the key store on the Archive Server.
Example:
C:\Program Files\OpenText\Archive Server 10.5.0\bin>recIO V
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.5.0.332
IMPORTANT: -----------------------------------------------------
recIO 10.5.0.332 Copyright © 2013 Open Text Corporation
Please authenticate!
User :dsadmin
Password :
Token[1/?] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey2.pem
Token[2/3] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey3.pem
Token[3/3] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey4.pem
key 1 : EB9C088BFA4F1847 : OK
key 2 : 7CB5CA683339CC60 : OK
D
Displays the information on the exported files. The information is shown in a
table.
Example:
C:\Program Files\OpenText\Archive Server 10.5.0\bin>recIO D
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.5.0.332
IMPORTANT: -----------------------------------------------------
recIO 10.5.0.332 Copyright © 2013 Open Text Corporation
Token[1/?] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey2.pem
Token[2/3] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey3.pem
Token[3/3] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey5.pem
idx ID created origin
---------------------------------------------------
1 EB9C088BFA4F1847 2014/01/14 11:58:23 <servername>
2 7CB5CA683339CC60 2014/02/20 11:41:20 <servername>
I
Imports the saved contents of the System key node.
The user must log on and specify the path for the exported data. The data in the
System key node is restored, encrypted with the Archive Server's public key
and sent to the administration server. The results are displayed. Keys already
contained in the Archive Server's store are not overwritten.
Example:
C:\Program Files\OpenText\Archive Server 10.5.0\bin>recIO I
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.5.0.332
IMPORTANT: -----------------------------------------------------
recIO 10.5.0.332 Copyright © 2013 Open Text Corporation
Please authenticate!
User :dsadmin
Password :
Token[1/?] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey5.pem
Token[2/3] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey4.pem
Token[3/3] (default = A:\ixoskey.pem)
File (CR to accept above) : Z:\share\otaskey2.pem
14.4 Timestamps
Timestamps are used to verify that documents have not been altered since archiving
time. The verification process checks these timestamps. A timestamp service is
required for this. Creating a timestamp means: The computer calculates a unique
number, a cryptographic checksum or hash value, from the content of the document.
The timestamp provider (a qualified Time Stamping Authority or Archive Timestamp
Server) adds the time to this checksum, creates a checksum of this created object and
signs the new checksum with its private key.
The signature is stored together with the document component. When a document is
requested, Archive Server can verify whether the component was modified after
storing it by looking at the signature. Archive Server needs the public key of the
timestamp provider’s certificate for verification. The OpenText products Windows
Viewer or Java Viewer can be used to display the verification result.
ArchiSig timestamps: With ArchiSig timestamps, the timestamps are not added per
document, but for containers of hash trees calculated from the documents
(Figure 14-1).
[Figure 14-1: ArchiSig hash tree. Fingerprints (hash values) are combined pairwise:
h5=Hash(h1|h2), h6=Hash(h3|h4), h7=Hash(h5|h6); the timestamp is applied to the
root hash.]
A job builds the hash tree that consists of hash values of as many documents as
configured, and adds one single timestamp. Thus, you can collect, for example, all
documents of a day in one hash tree. Only one timestamp per hash tree is required.
The verification process needs only the document and the hash chain leading from
the document to the timestamp but not the whole hash tree (Figure 14-2).
[Figure 14-2: Verification of document d1 needs only the document and its hash
chain up to the timestamp.]
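The hash-tree scheme can be sketched with ordinary shell tools. This is a
simplification for illustration only; SHA-256 and the concatenation order stand in
for whatever Archive Server actually uses, and the document contents are made up:

```shell
# Simplified hash tree as in Figure 14-1 (sha256sum stands in for
# the hash actually used by Archive Server).
H() { printf '%s' "$1" | sha256sum | cut -d' ' -f1; }

h1=$(H doc1); h2=$(H doc2); h3=$(H doc3); h4=$(H doc4)
h5=$(H "$h1$h2"); h6=$(H "$h3$h4")
h7=$(H "$h5$h6")   # root hash: the single value that gets timestamped

# Verifying doc1 needs only the document and its hash chain (h2, h6),
# not the whole tree:
v=$(H "$(H "$(H doc1)$h2")$h6")
[ "$v" = "$h7" ] && echo "doc1 verifies against the timestamped root"
```

Because only the root hash is timestamped, one timestamp covers all documents
in the tree, while any single document can still be verified from its short hash chain.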
Document timestamps: Each document component gets a timestamp when it arrives
in the archive, or more precisely: when it arrives in the disk buffer and is known to
the Document Service.
This (old) method requires a huge number of timestamps, depending on the number
of documents. Thus, it is available only for archives that used timestamps in former
Archive Server versions. You can migrate these timestamps to ArchiSig timestamps;
see “Migrating existing document timestamps” on page 169.
Configuration: You can set up signing documents with timestamps and the
verification of timestamps including the response behavior for each archive; see
“Configuring the archive settings” on page 113. Consider the recommendations
given above.
If you use both methods in parallel, the document timestamp secures the document
until the hash tree is built and signed. As this time period is short, a document
timestamp is sufficient for these documents, while the hash tree, in general, gets a
timestamp created with a certificate of an accredited provider. This trusted
certificate is used for verification.
Timestamps and hash trees may become invalid or unsafe. To prevent this, they can
be renewed. See “Renewing timestamps of hash trees” on page 168 and “Renewing
hash trees” on page 168.
Related Topics
• “Configuring Archive Timestamp Server“ on page 181
Procedure
• “Basic timestamp settings” on page 162
• “Activating and configuring timestamp usage” on page 116
• “Creating a hash tree” on page 167
• “Configuring a certificate for timestamp verification” on page 177
Configuration: The following description includes the most relevant parameters for
ArchiSig timestamps. There are further parameters; in general, you do not need to
modify those. For more information, see “Configuring connection parameters” on
page 163.
1. Select Configuration, and one by one, search for the following variables (see
“Searching configuration variables” on page 242).
3. Set the minimum and the maximum number of components per hash tree:
4. Set the pool to be used for the hash trees: Pool for timestamps variable (internal
name: TS_POOL), default: ATS_POOL
5. Check the other values. In general, you can use the default values. See
“Configuring connection parameters” on page 163.
Archive Timestamp Server: Further, you can use OpenText Archive Timestamp
Server for testing. Archive Timestamp Server is not a TSA and is not recommended
for production systems. For more information, see “Configuring Archive Timestamp
Server“ on page 181.
Example: tshost1:32001;tshost2:10318
Timestamps (old): Classic timestamps are neither supported nor recommended with
a timestamping service over the Internet. The cost would be extremely high, since
every document component is signed and you would be charged for each
timestamp. Finally, dsSign does not communicate using SSL/TLS.
14.4.2.3 Quovadis
Introduction: Quovadis offers qualified timestamps over the Internet. This kind of
service provides the highest level of trustworthiness.
Timestamps (old): Classic timestamps are neither supported nor recommended with
a timestamping service over the Internet.
Example: tshost1:32001;tshost2:10318
Connection: close
1. In the Archives object of the console tree, create a new archive (for example,
with the name ATS) and a pool named POOL to define where the hash trees are
stored.
Important
The name of the pool is determined by the Pool for timestamps
configuration variable (internal name: AS.DS.TS_POOL). Its default
value is ATS_POOL, which means that you must call the pool POOL.
If the name of the pool and the value of the variable do not fit, the job
building the hash tree will fail.
2. In the System > Jobs object of the console tree, create jobs to build the hash
trees. You need one job for each archive that uses timestamps.
See also: “Configuring jobs and checking job protocol“ on page 141.
Command
hashtree
Arguments
Archive name
Scheduling
If you use ArchiSig timestamps, schedule a nightly job. If the hash trees are
written to a storage system, make sure that the job is finished before the
Write job starts.
If you need to renew your hash trees, contact OpenText Customer Support.
You need only one new timestamp per hash tree. No access to the documents is
necessary.
To renew timestamps:
3. In the resulting list, find the distinguished subject name(s) of your timestamp
service (subject of the service’s certificate).
Note: The name of the logical archive (<archive name>) must always be
included in the dsHashTree commands.
The utility finds all timestamps for the given archive that were created with the
certificate indicated in the command. It calculates hash values for the timestamps
and builds new hash trees. Each hash tree is signed with a new timestamp.
Note: Do not delete the old time stamp server certificate. It may still be used
for another logical archive.
Important
You can migrate document timestamps only once! Never disable ArchiSig
timestamps after starting migration.
2. In a command line, run the timestamp migration tool for each pool to be
migrated:
dsReSign -p <pool name>
3. Call the hash tree creation tool for each archive with migrated timestamps:
dsHashTree <archive name>
The tools calculate hash values from the existing timestamps, build hash trees, and
get a timestamp for each tree.
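For several pools and archives, the two calls above can be wrapped in a small script.
The pool and archive names below are hypothetical; only the flags shown in this
section are used:

```
# Hypothetical pool and archive names; one dsReSign run per pool,
# then one dsHashTree run per archive with migrated timestamps.
for pool in FAX_POOL SCAN_POOL; do
    dsReSign -p "$pool"
done
dsHashTree INVOICES
```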
14.5 Certificates
Certificates: A certificate is an electronic document which uses a digital signature to
bind together a public key with information on the client issuing this public key
(information such as the name of a person or an organization, their address, and so
forth). The certificate can be used to verify that a public key belongs to an
individual; for example, an archive uses this information to verify requests based on
signed URLs from various clients.
Certificate use cases: Archive Server uses certificates for various use cases:
• Authentication certificates, used for signed URLs; see “Configuring a certificate
for authentication” on page 174.
PEM files: Privacy Enhanced Mail Security Certificate (PEM) files are encoded
certificate files used to store the public key and the certificate. Archive Server uses
various PEM files.
Certificates for Remote Standby: In a Remote Standby environment, the
Synchronize_Replicates job copies the certificates for authentication. Only enabled
certificates are copied. The certificate on the remote server is disabled after
synchronization. To enable it, follow the instructions in “Enabling a certificate”
on page 171.
To establish the validity of someone's certificate, you can rely on a third party that
has gone through the process of validating it. A Certification Authority (CA), for
example, is responsible for carefully checking, prior to issuing a certificate, that the
public key portion really belongs to the purported owner. Anyone who trusts the
CA will automatically consider any certificates signed by the CA to be valid.
To check a certificate:
2. Select the Certificates object and select the appropriate <certificate> tab in the
result pane.
All certificates of the selected certificate type are listed.
3. Select the respective tab and the designated certificate and click View
Certificate in the action pane.
General
This tab provides detailed information to identify the certificate
unambiguously: the certificate's issuer, the duration of validity, and the
fingerprint.
Certification Path
Here you can follow the certificate's path from the root to the current
certificate. A certificate can be created from another certificate. The path
shows the complete derivation chain. You can also view the parent
certificate information from here.
To enable a certificate:
2. Select the Certificates object and select the appropriate <certificate> tab in the
result pane.
All certificates of the selected certificate type are listed.
3. Select the respective certificate by its name and click Enable in the action pane.
To delete a certificate:
2. Select the Certificates object and select the appropriate <certificate> tab in the
result pane.
All certificates of the selected certificate type are listed.
3. Select the respective tab and the designated certificate and click Delete
Certificate in the action pane.
If you have to manage a large number of certificates, make sure that the AuthIDs and
the names of the certificates are unique.
Send your <requestOutFile> file to a trust center. The trust center will return a certificate including the public key. The certificate from the trust center must be in PEM format.
After using the Refresh action (System > Key Store > Certificates), the certificates
sent using putCert are displayed in Administration Client.
Note: putCert cannot be used with SSL. To transfer the certificate to the
server, switch the SSL settings for the logical archive to May use or Don’t use.
Alternatively, if provided, you can also use dsh to send the certificate to Archive
Server.
A global certificate can be imported (that is, added) and assigned to all logical archives (globally) at once. Global certificates are valid for all logical archives, including archives that are created later on. A global certificate can only be enabled or disabled globally.
• Assigned to one single archive (assigned to one archive only)
These certificates are valid for a single logical archive of the Archive Server.
Procedure
• “Importing an authentication certificate” on page 175
• “Granting privileges for a certificate” on page 176
• “Checking a certificate” on page 170
• “Enabling a certificate” on page 171
• “Generate self-signed certificates” on page 172
• “Send the certificate to an Archive Server (putCert)” on page 173
1. In the console tree, select System > Key Store > Certificates.
4. Click Browse to open the file browser for the Archive Server file system and
select the designated Certificate. Click OK to resume.
For example, a scan station may not be allowed to delete documents. Thus, the
privilege “delete documents” must not be set in the certificate that is used to
communicate with the scan station.
Important
Any change to the settings affects all archives that use this certificate!
To grant privileges:
2. Select the Certificates entry in the result pane and then the Global tab. All
imported certificates are listed.
3. Select the designated certificate and click Change Privileges in the action pane.
4. Select (set check box) the privileges you want to assign to the certificate. The
following privileges are available:
• Read documents
• Create documents
• Update documents
• Delete documents
• Pass by
This privilege is only evaluated in Enterprise Library scenarios. Pass by must
be set for the certificate of the
Pass by must not be set for all other kinds of client certificates, for example,
SAP.
1. Select the Certificates entry of the Key Store node in the System object of the
console tree.
2. Select the Encryption Certificates tab in the result pane. All available
certificates are listed.
4. Enter the path and the complete file name of the certificate or click Browse to
open the file browser. Select the designated Certificate and click OK to confirm.
Procedure
• “Generate self-signed certificates” on page 172
• “Send the certificate to an Archive Server (putCert)” on page 173
• “Importing an encryption certificate” on page 177
• “Checking a certificate” on page 170
• “Enabling a certificate” on page 171
1. Select the Certificates entry of the Key Store node in the System object of the
console tree.
2. Click Import Timestamp Certificate in the action pane.
3. Enter a new ID or select an existing ID if you want to replace an existing
certificate.
4. Click Browse to open the file browser and select the designated Certificate.
Click OK to resume.
5. Click OK to start the import.
A protocol window shows the progress and the result of the import. To check
the protocol later on, see “Checking utilities protocols” on page 268.
Procedure
• “Importing a certificate for timestamp verification” on page 178
• “Checking a certificate” on page 170
• “Enabling a certificate” on page 171
Enterprise Scan Enterprise Scan generates checksums for all scanned documents and passes them on
to Document Service. Document Service verifies the checksums and reports errors
(see “Monitoring with notifications“ on page 315). On the way from Document
Service to STORM, the documents are provided with checksums as well, in order to
recognize errors when writing to the media.
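The generate-then-verify pattern described above can be pictured with a plain hash comparison. This is an illustrative sketch only; the concrete checksum algorithm used between Enterprise Scan, Document Service, and STORM is not specified here:

```python
import hashlib

def checksum(data: bytes) -> str:
    # The algorithm (SHA-256) is an assumption; the point is the pattern.
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected: str) -> bool:
    """Recompute the checksum on arrival and compare, as the receiving
    side does before reporting an error."""
    return checksum(data) == expected

scanned_page = b"%PDF-1.4 ... scanned content ..."
cs = checksum(scanned_page)                       # computed at the scan station
assert verify(scanned_page, cs)                   # verified on the server side
assert not verify(scanned_page + b"bitflip", cs)  # corruption is detected
```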
Timestamp and checksum The leading application, or some client, can also send a timestamp (including checksum) instead of the document checksum; see “Timestamps” on page 160. Verification can check timestamps as well as checksums.
The certificates for those timestamps must be known to the Archive Server and
enabled, before the timestamp checksums can be verified (see “Importing a
certificate for timestamp verification” on page 178).
Enterprise Library only This topic describes the special treatment when using ArchiveLink connections and Enterprise Library. Signed ArchiveLink connections between external applications and Enterprise Library require that the Common Name (CN) of the certificate's Subject and the name of the client application (for example, Enterprise Library Server) are identical. This can be achieved in two ways:
• You can define the name of the application and configure the certificate
correspondingly (for example, if you set up a whole new system). Thus, use the
application name as Common Name when creating the certificate, for example,
using the Certtool (see “Creating a certificate using the Certtool” on page 172).
• You can retrieve the Subject from the certificate and use it as application ID
(name of the application); see the procedure below.
2. In the console tree, expand Archiving and Storage and log on to the Archive
Server.
3. Select the Archives > Original Archives > <archive to connect> node.
4. In the result pane, from the Certificates tab, select the imported certificate.
5. In the action pane, click View Certificate.
6. From the Subject entry, note or copy the value after CN=.
Use this value as the application ID when creating the application (<server name>
> Enterprise Library Services > Applications).
Archive Timestamp Server is installed and configured together with Archive Server.
It handles the incoming requests, creates the timestamps, and sends the reply. It
runs as an Archive Server component.
After the installation of Archive Server and Archive Timestamp Server, basic
settings of Archive Timestamp Server are preset, for example, default signature key
and certificate are provided. You can also configure other settings, if required.
Note: Archive Timestamp Server allows you to use the timestamp features independently of external software, for example, for test cases. However, it does not provide the same high security level as a trusted service provider.
Configuration and administration The configuration and administration of Archive Timestamp Server is done in the Administration Client. See “Configuration variables for Archive Timestamp Server” on page 183.
Background
• “Timestamps” on page 160
However, this method provides no security against an intruder with read access
to the server configuration.
Configuration variables You must administer the required settings using configuration variables in Administration Client. Search for the following configuration variables in the Configuration node (see “Searching configuration variables” on page 242):
3. Extract the ZIP file and copy the four PEM files to the <OT config AS>/
timestamp/ directory.
Verify the paths and filenames for each certificate (Path to the certificate 1 to 3
variables). Verify the Path to the private-key file (stampkey.pem). If required,
correct the paths.
5. Import the new certificate files: Expand Archive Server > System > Key Store >
Certificates and then click Import Timestamp Certificate to start the wizard for
each file. Import the certificates with the following IDs and in this order:
Certificate ID Certificate
CA <OT config AS>/timestamp/cert_ca.pem
ROOT <OT config AS>/timestamp/cert_root.pem
TSS <OT config AS>/timestamp/cert_tss.pem
Configuration recommendation
ArchiSig timestamps
Timestamps (old)
Example: tshost1:32001;tshost2:10318
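The example value above is a semicolon-separated list of host:port pairs. As an illustrative sketch (the function name and the default-port fallback are assumptions, not an Archive Server API), a client could split such a value like this:

```python
def parse_host_list(value: str, default_port: int = 32001):
    """Split a semicolon-separated host[:port] list into (host, port) pairs.
    Entries without a port fall back to the default timestamp port."""
    pairs = []
    for entry in value.split(";"):
        entry = entry.strip()
        if not entry:
            continue
        host, sep, port = entry.rpartition(":")
        if sep:
            pairs.append((host, int(port)))
        else:
            pairs.append((entry, default_port))
    return pairs

print(parse_host_list("tshost1:32001;tshost2:10318"))
```

A second entry allows a fallback timestamp host when the first one does not respond.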
Checking the status You can retrieve and display the general status of Archive Timestamp Server together with some details about its configuration with a standard Web browser. Enter the following URL:
http://<servername>:<port>
As <servername>, use the host name of Archive Timestamp Server and as <port>,
use the configured port. The default port is 32001.
Note: The status can only be retrieved on computers that are configured as
administration hosts in the Archive Timestamp Server setup. If Allow remote
administration from any host is enabled, the web status can be accessed from
any host.
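The status URL above can be assembled programmatically; this minimal sketch (illustrative only) builds the URL from host name and configured port:

```python
from urllib.parse import urlunsplit

def status_url(servername: str, port: int = 32001) -> str:
    """Build the status URL of Archive Timestamp Server from the
    host name and the configured port (default 32001)."""
    return urlunsplit(("http", f"{servername}:{port}", "/", "", ""))

print(status_url("tshost1"))
# To actually retrieve the page (only reachable from a configured
# administration host):
# import urllib.request
# print(urllib.request.urlopen(status_url("tshost1")).read())
```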
Timestamps (old) From the command line, enter the following command: dsSign -t
The result should be similar to this:
IMPORTANT: about to mount server WORM on host localhost, port 0, mount point /views_hs
IMPORTANT: about to mount server CDROM on host localhost, port 0, mount point /views_hs
Success!
Date/Time: Thu Jun 18 09:41:48 2015
cert 0:
expired: Wed Apr 01 02:00:00 2020
Archive Server needs a few specific administrative users to work properly. They are managed in the System object of the Archive Server. The required settings are preset during installation. Use the user management in the following cases:
• You want to change the password of the dsadmin administrator of the Archive
Server.
Important
See “Password security and settings” below for additional information on
passwords.
• You want to change settings of users, groups, or policies.
• You need a user with specific rights.
The users of the leading application are managed in other user management
systems, for example OpenText Directory Services (OTDS). To set up a connection to
Directory Services, see “Connecting to Directory Services” on page 198.
Important
Changing the password of dsadmin is also required in the OTDS scenario! Although signing in to Administration Client as dsadmin is not possible if OTDS is used, dsadmin is still used by other components.
Changing password for dsadmin A standard change password dialog is provided in the Administration Client for dsadmin users to change their password, for example, after first login. Depending on the kind of user management (OTDS or Archive Server’s built-in system), you find the dialog at a different place.
2. In the console tree, open the Archive Server > System > Users and Groups
node, and in the result pane, select the Users tab.
3. Open the Properties of the dsadmin user and change the password.
2. In the console tree, select Archive Server and in the action pane, click Set
Password.
3. Enter the old and the new password, confirm the new password and then click
OK.
Password settings You can specify a minimum length for passwords, whether a user is locked out after several unsuccessful logons, and how long the lockout lasts.
Minimum length for passwords You can define a minimum character length for passwords. If you do not set this property, the default value is eight.
1. In the console tree, expand Archive Server > Configuration and search for the
Min. password length variable (internal name: AS.DS.DS_MIN_PASSWD_LEN).
Lock out after failed logons You can define that a user is locked out after a specified number of failed attempts to log on; the default is 0 (no lockout).
1. In the console tree, expand Archive Server > Configuration and search for the
Max. retries before disabling variable (internal name: AS.
DS.DS_MAX_BAD_PASSWD).
2. In the Properties window of the variable, change the Value as required (in
number of retries).
A value of 0 means that users will never be locked out.
Unlock after failed logons You can define how long a user is locked out after a failed attempt; the default is zero seconds.
1. In the console tree, expand Archive Server > Configuration and search for the
Time after which bad passwords are forgotten variable (internal name: AS.
DS.DS_BAD_PASSWD_ELAPS).
2. In the Properties window of the variable, change the Value as required (in
seconds).
A value of 0 means that users will never be locked out.
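The three variables described above can be modeled in a few lines. This is an illustrative sketch of the intended semantics (minimum length, retry limit with 0 meaning no lockout, and the "forget" interval), not the actual Archive Server implementation:

```python
import time

class PasswordPolicy:
    """Illustrative model of the three DS configuration variables."""

    def __init__(self, min_len=8, max_retries=0, lockout_seconds=0):
        self.min_len = min_len                  # AS.DS.DS_MIN_PASSWD_LEN
        self.max_retries = max_retries          # AS.DS.DS_MAX_BAD_PASSWD
        self.lockout_seconds = lockout_seconds  # AS.DS.DS_BAD_PASSWD_ELAPS
        self.failed = {}                        # user -> (count, last failure)

    def acceptable(self, password: str) -> bool:
        return len(password) >= self.min_len

    def record_failure(self, user: str) -> None:
        count, _ = self.failed.get(user, (0, 0.0))
        self.failed[user] = (count + 1, time.time())

    def locked_out(self, user: str) -> bool:
        if self.max_retries == 0:               # 0 means: never lock out
            return False
        count, last = self.failed.get(user, (0, 0.0))
        if self.lockout_seconds and time.time() - last > self.lockout_seconds:
            return False                        # bad passwords are "forgotten"
        return count >= self.max_retries

policy = PasswordPolicy(min_len=8, max_retries=3, lockout_seconds=300)
assert not policy.acceptable("short")
for _ in range(3):
    policy.record_failure("dsadmin")
assert policy.locked_out("dsadmin")
```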
16.2 Concept
Modules To keep administrative effort as low as possible, the rights are combined in policies
and users are combined in user groups. The concept consists of three modules:
User groups
A user group is a set of users who have been granted the same rights. Users are
assigned to a user group as members. Policies are also assigned to a user group.
The rights defined in the policy apply to every member of the user group.
Users
A user is assigned to one or more user groups and is allowed to perform the functions that are defined in the policies of these groups. It is not possible to assign individual rights to individual users.
Policies
A policy is a set of rights, i.e. actions that a user with this policy is allowed to
carry out. You can define your own policies in addition to using predefined and
unmodifiable policies.
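The relationship between the three modules can be sketched as a small data model: a user's effective rights are the union of all rights granted by the policies of the user's groups. Names and structures below are illustrative, not the product's internal representation:

```python
# policy name -> set of rights
policies = {
    "ArchiveAdministration": {"create_archive", "delete_archive"},
    "Notifications": {"create_notification"},
}
# group name -> (members, assigned policy names)
groups = {
    "admins": ({"alice"}, {"ArchiveAdministration", "Notifications"}),
    "operators": ({"alice", "bob"}, {"Notifications"}),
}

def effective_rights(user: str) -> set:
    """Union of rights from every policy of every group the user
    belongs to. Rights cannot be assigned to a user directly."""
    rights = set()
    for members, group_policies in groups.values():
        if user in members:
            for policy in group_policies:
                rights |= policies[policy]
    return rights

assert effective_rights("bob") == {"create_notification"}
assert "delete_archive" in effective_rights("alice")
```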
Standard users During the installation of Archive Server, some standard users, user groups, and
policies are configured:
Tenants Tenants are special user groups intended for OpenText Archive Center. For more
information, see “Creating tenants” on page 195.
1. Create and configure the policy; see “Creating and modifying policies”
on page 191.
2. Create the user; see “Checking, creating, or modifying users” on page 192.
3. Create and configure the user group and add the users and the policies; see
“Checking, creating, or modifying user groups” on page 193.
Note: The standard policies are write-protected (read only) and cannot be
modified or deleted.
Group Description
Archive Administration Summary of rights to control creation, configuration and deletion
of logical archives.
Archive Users Summary of rights to control creation, configuration and deletion
of users and groups and their associated policies.
Notifications Summary of rights to control creation, configuration and deletion
of notifications and events.
Policies Summary of rights to control creation, configuration and deletion
of policies.
Important
Rights from the following policy groups should no longer be used. These rights are still available to ensure compatibility with policies created for former versions of Archive Server.
• Accounting
• Administration Server
• DPinfo
• Scanning Client
• Spawner
1. Select Policies in the System object in the console tree to check, create, modify
and delete policies. All available policies are listed in the top area of the result
pane. In the bottom area the assigned rights are shown as a tree view.
2. To check a policy, select it in the top area of the result pane. The assigned rights
are listed in the bottom area.
1. Select Policies in the System object in the console tree. All available policies are
listed in the top area of the result pane.
2. Click New Policy in the action pane. The window to create a new policy opens.
Name
Name of the policy. Spaces are not allowed. The name cannot be modified
after creation.
Description
Short description of the role the user can assume by means of this policy.
4. The Available Rights tree view shows all rights that are currently not
associated with the policy. Select a single right or a group of rights that should
be assigned to the policy and click Add >>.
5. To remove a right or a group of rights, select it in the Assigned Rights tree view
and click << Remove.
Modifying a policy To modify a self-defined policy, select the policy in the top area of the result pane and click Edit Policy in the action pane. Proceed in the same way as when creating a new policy. The name of the policy cannot be changed.
Deleting a policy To delete a self-defined policy, select the policy in the top area of the result pane and click Delete in the action pane. The rights themselves are not lost, only the set of them that makes up the policy. Pre-defined policies cannot be deleted.
Related Topics
• “Checking, creating, or modifying users” on page 192
• “Checking, creating, or modifying user groups” on page 193
• “Concept” on page 189
1. Select Users and Groups in the System object in the console tree to check,
create, modify and delete users.
2. Select the Users tab in the top area of the result pane to list all users.
3. To check a user, select the entry in the top area of the result pane. The groups
which the user is assigned to are listed in the bottom area.
4. To create and modify a user, see “Creating and modifying users” on page 192.
To create a user:
1. Select Users and Groups in the System object in the console tree.
2. Select the Users tab in the result pane. All available users are listed in the top
area of the result pane.
3. Click New User in the action pane. The window to create a new user opens.
4. Enter the user name and the password.
Username
Name of the user to administer the Archive Server. The name can be a
maximum of 14 characters in length. Spaces are not permitted. This name
cannot be changed subsequently.
Password
Password for the specified user.
Confirm password
Enter exactly the same input as you have already entered under Password.
Click Next.
5. Select the groups the user should be assigned to. Click Finish.
Modifying user settings To modify a user's settings, select the user and click Properties in the action pane. Proceed in the same way as when creating a new user. The name of the user cannot be changed.
Deleting users To delete a user, select the user and click Delete in the action pane.
Related Topics
1. Select Users and Groups in the System object in the console tree to check,
create, modify and delete user groups.
2. Select the Groups tab in the top area of the result pane to list all groups.
3. To check a user group, select the entry in the top area of the result pane.
Depending on the tab you selected, additional information is listed in the
bottom area:
Members tab
List of users who are members of the selected group.
Policies tab
List of policies which are assigned to the selected group.
4. To create and modify a user group, see “Creating and modifying user groups”
on page 194.
1. Select Users and Groups in the System object in the console tree.
2. Select the Groups tab in the top area of the result pane. All available groups are
listed in the top area of the result pane.
3. Click New Group in the action pane. The window to create a new group opens.
Name
A name that clearly identifies each user group. The name can be a
maximum of 14 characters in length. Spaces are not permitted.
Implicit
Implicit groups are used for the central administration of clients. If a group
is configured as implicit, all users are automatically members. If users who
have not been explicitly assigned to a user group log on to a client, they are
considered to be members of the implicit group and the client configuration
corresponding to the implicit group is used. If several implicit groups are
defined, the user at the client can select which profile is to be used.
5. Click Finish.
Modifying group settings To modify the settings of a group, select it and click Properties in the action pane. Proceed in the same way as when creating a user group.
Deleting a user group To delete a user group, select it and click Delete in the action pane. Neither users nor policies are lost; only the assignments are deleted.
Related Topics
• “Creating and modifying policies” on page 191
• “Checking, creating, or modifying users” on page 192
• “Concept” on page 189
• “Adding users and policies to a user group” on page 194
1. Select the user group in the top area of the result pane for which users and
policies should be added.
2. Select the Members tab in the bottom area. Click Add User in the action pane. A
window with available users opens.
3. Select the users which should be added to the group and click OK.
4. Select the Policies tab in the bottom area. Click Add Policy in the action pane. A
window with available policies opens.
5. Select the policies which should be added to the group and click OK.
Removing users and policies To remove a user or a policy, select it in the bottom area and click Remove in the action pane.
Note: Tenant groups were introduced with Archive Server Update 2013.2.
Groups On Archive Server, the tenant <name> is defined by the following DS user groups:
• <name> (with policy BusinessAdministration)
• <name>_ED (with policy ArchiveAccess)
• <name>_UG (with policy MyArchive)
Groups of the same name (<name>, <name>_ED, <name>_UG) are created within the
partition OTInternal in OpenText Directory Services.
Users who belong to <name> are allowed to work with the Archive Center
Administration client.
Users who belong to <name>_ED are allowed to work with the Archive Center Access
client.
Users who belong to <name>_UG are allowed to work with the My Archive client.
For the email scenarios, all IMAP users who are allowed to access their personal archives must be added to the <name>_UG group. To get emails archived at all, each user must also be a member of the OTDS group (or any of its subgroups) specified in the For Group field in Archive Center Administration.
Important
The <name>_SU group is intended for technical users only. Do not add any
human users to this group as these users would have access to ACLs and the
BCC fields of emails.
On-premises In the on-premises scenario, only one tenant is allowed per installation of Archive
Center. The scenario is defined in the Operating Mode configuration variable
(internal name: AS.AS.BIZ_OPERATING_MODE).
1. Select Users and Groups in the System object in the console tree.
2. Click New Tenant in the action pane. The window to create a new tenant opens.
Tip: The short name is used as a prefix for the names of this tenant’s
logical archives, buffers, and jobs. Thereby, you can easily sort the
corresponding lists by tenants.
4. Optional Cloud operating modes only: In the Contract ID field, you can enter any
unique, arbitrary text. This ID is used to identify the tenant when exporting the
billing information (XML file).
5. Optional Additionally to creating the tenant group, you can create the following
users:
Administration User
This user is added to the new tenant, with assigned policy
“BusinessAdministration,” and thereby is allowed to perform all tasks
related to Archive Center Administration.
Access User
This user is added to a new user group. The new group has the name <new
tenant>_ED and the assigned policy “ArchiveAccess,” and thereby is
allowed to perform all tasks related to Archive Center Access. This policy
enables the eDiscovery user to search for holds and create EDRM exports,
for example. The policy does not allow writing to archives; in particular,
setting holds is not possible.
6. Click OK.
The tenant user group with assigned policy is created.
Related Topics
• “Configuring miscellaneous Archive Center options“ on page 61
1. Select Users and Groups in the System object of the console tree.
2. Select the Users tab in the top area of the result pane and select the user. Note
the groups listed under Members in the bottom area.
3. Select the Groups tab in the top area of the result pane and select Policies in the
bottom area of the result pane.
4. Select one of the groups you noted and note also the assigned policies listed in
the bottom area.
6. Select one of the policies you noted. The associated groups of rights and
individual rights appear in the bottom area. Make a note of these.
7. Repeat Step 6 for all policies that you noted for the user group.
8. Repeat steps 4 to 7 for the other user groups which the user is a member of.
OTDS administrator
Enter the name of the OTDS administrator; default: otadmin@otds.admin
Click Next.
Click Next.
5. On the Summary page, verify your entries and click Finish. The resource is
created.
Restart Archive Server to activate the new resource.
To modify it, you must sign on to Directory Services.
Securing the connection If you configure Archive Server to connect to OTDS using SSL (that is, using https as <protocol>), the identity of the OTDS server will not be checked by default. For the most secure connection, you can force Archive Server to trust the OTDS server only if its server certificate has been issued by a trusted certification authority.
Note: The “strict” verification requires fully and properly set up trust and key stores at both the Archive Server and OTDS application servers.
The default (“lazy”) verification performs basic validity checks of the provided
certificate and checks of the server’s host name against the information in the
certificate but does not require a corresponding trust store setup.
Linking permissions and policies You can easily transfer the permissions and policies in Archive Server’s built-in user management to a corresponding user in OTDS as follows:
1. For a logged-in OTDS user, all OTDS groups are checked for whether there is a
group of the same name in the Archive Server’s built-in user management.
Note: Only the group name is important for the OTDS groups. The check
does not consider the user partition.
2. In case of matching groups, the policies assigned to the corresponding group in
the built-in user management are looked up.
3. It is checked whether the permissions of the OTDS user allow executing the desired command.
1. Create groups with the same group names in the Archive Server’s built-in user
management and in OTDS (in any user partition).
2. Assign the policies as required to the group in the built-in user management.
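The lookup described in the steps above can be sketched as follows. All identifiers, and the 'name@partition' form used to strip the partition, are hypothetical; only the group name (never the partition) is compared:

```python
def allowed(otds_groups, builtin_groups, command) -> bool:
    """Match OTDS group names against the built-in user management,
    collect the matched groups' policy rights, and check whether any
    granted right covers the desired command."""
    rights = set()
    for group in otds_groups:
        name = group.split("@", 1)[0]  # hypothetical 'name@partition' form
        if name in builtin_groups:
            for policy_rights in builtin_groups[name]:
                rights |= policy_rights
    return command in rights

# built-in group name -> list of policies (each a set of rights)
builtin = {"ArchiveAdmins": [{"create_archive", "delete_archive"}]}

assert allowed(["ArchiveAdmins@otds.admin"], builtin, "delete_archive")
assert not allowed(["Unknown@otds.admin"], builtin, "create_archive")
```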
Further information For details on OTDS, see OpenText Directory Services with the OpenText Administration Client - Installation and Administration Guide (OTDS-IGD).
If you use SAP as the leading application, you configure the connection not only in the SAP system but also in Administration Client. OpenText Document Pipeline for
DocuLink and OpenText Document Pipeline for SAP Solutions – in particular the
DocTools R3Insert, R3Formid, R3AidSel, and cfbx – require some connection
information. These Document Pipelines can send data back to the SAP server, for
example, the document ID in bar code scenarios. For these scenarios, Document
Pipeline for SAP Solutions must be installed. The basic and scenario customizing for
SAP is described in OpenText Archiving and Document Access for SAP Solutions -
Scenario Guide (ER-CCS). The configuration in the OpenText Administration Client
includes:
• “Creating and modifying SAP gateways” on page 203
• “Creating and modifying SAP system connections” on page 201
• “Assigning an SAP system to a logical archive” on page 204
3. Click SAP System Connection in the action pane. A window to configure the
SAP system opens.
Connection name
SAP system connection name with which the administered server
communicates. You cannot modify the name later.
Description
Enter an optional description (restricted to 255 characters).
Server name
Name of the SAP server on which the logical archives are set up in the SAP
system.
Client
Three-digit number of the SAP client in which archiving occurs.
Feedback user
Feedback user in the SAP system. The cfbx process sends a notification
message back to this SAP user after a document has been archived using
asynchronous archiving. A separate feedback user (CPIC type) should be
set up in the SAP system for this purpose.
Password
Password for the SAP feedback user. This is entered, but not displayed,
when the SAP system is configured. The password for the feedback user
must be identical in the SAP system and in OpenText Administration
Client.
Instance number
Two-digit instance number for the SAP system. The value 00 is usually used
here. It is required for the sapdp<xx> service on the gateway server in order
to determine the number of the TCP/IP port (<xx> = instance number) being
used.
Codepage
Specifies the encoding of the document metadata fields as defined by the
ATTRIBUTES statements in the pipeline attribute definition file (IXATTR).
This is mainly relevant for free-text fields with characters outside the 7-bit
range. A four-digit number specifies the type of character set that is used by
the functions in SAP RFC libraries. These libraries convert the metadata
from the character set specified by this setting to the character set of the
SAP server. The default value is 1100 (ISO-8859-1). Other possible values
are, for example, 4110 (UTF-8) or 8000 (Shift-JIS).
Language
Language of the SAP system; default is English. If the SAP system is
installed exclusively in another language, enter the SAP language code here.
Test Connection
Click this button to test the connection to the SAP system. A window opens
and shows the test result.
5. Click Finish.
Modifying SAP system connections To modify an SAP system, select it in the SAP System Connections tab and click Properties in the action pane. Proceed in the same way as when creating an SAP system connection.
Deleting SAP system connections To delete an SAP system, select it in the SAP System Connections tab and click Delete in the action pane.
Testing an SAP connection To test an SAP connection, select it in the SAP System Connections tab and click Test Connection in the action pane. A window opens and shows the test result.
3. Click New SAP Gateway in the action pane. A window to configure the SAP
gateway opens.
Subnet address
Specifies the address for the subnet in which an Archive Server or
Enterprise Scan is located. At least the first part of the address (for example,
NNN.0.0.0 in case of IPv4) must be specified. A gateway must be
established for each subnet.
IPv6
If you use IPv6, do not enclose the IPv6 address with square brackets.
IPv4
Enter a subnet mask, for example 255.255.255.0.
IPv6
Enter the address length, i.e. the number of relevant bits, for example
64.
Gateway address
Name of the server on which the SAP gateway runs. This is usually the SAP
server.
Gateway number
Two-digit instance number for the SAP system. The value 00 is usually used
here. It is required for the sapgwxx service on the gateway server to
determine the number of the TCP/IP port (xx = instance number; for
example, instance number = 00, sapgw00, port 3300).
5. Click Finish.
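Two of the numeric fields above can be sanity-checked in a few lines. This sketch (illustrative only, not part of the product) derives the sapgw<xx> port from the instance number, following the sapgw00/port 3300 example given above, and tests whether a host address falls into the configured subnet using Python's ipaddress module:

```python
import ipaddress

def sapgw_service(instance: str):
    """Service name and TCP port for an SAP gateway instance number.
    Port 3300 + instance follows the sapgw00 -> 3300 example."""
    nn = int(instance)
    return f"sapgw{nn:02d}", 3300 + nn

assert sapgw_service("00") == ("sapgw00", 3300)

def in_subnet(host_ip: str, subnet: str, prefix) -> bool:
    """True if the host falls into the configured subnet.
    'prefix' is the IPv4 subnet mask (for example '255.255.255.0')
    or the IPv6 address length in bits (for example 64)."""
    network = ipaddress.ip_network(f"{subnet}/{prefix}", strict=False)
    return ipaddress.ip_address(host_ip) in network

assert in_subnet("10.1.2.3", "10.1.2.0", "255.255.255.0")
assert not in_subnet("10.1.3.3", "10.1.2.0", "255.255.255.0")
assert in_subnet("2001:db8::1", "2001:db8::", 64)   # IPv6, no brackets
```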
Modifying SAP gateways To modify an SAP gateway, select it in the SAP Gateways tab and click Properties in the action pane. Proceed in the same way as when creating an SAP gateway.
Deleting SAP gateways To delete an SAP gateway, select it in the SAP Gateways tab and click Delete in the action pane.
Requirements:
• The gateway to the SAP system is created and configured; see “Creating and
modifying SAP gateways” on page 203.
• The SAP system is created and configured; see “Creating and modifying SAP
system connections” on page 201.
2. Select the Archive Assignments tab in the result pane. All archives are listed in
the top area of the result pane.
3. Select the archive to which an SAP system should be assigned. Keep in mind that an SAP system can be assigned only to original archives.
4. Click New Archive SAP Assignment in the action pane. A window to configure
the SAP archive assignment opens.
Protocol
Communication protocol between the SAP application and Archive Server.
Fully configured protocols, which can be transported in the SAP system, are
supplied with the SAP products of OpenText.
6. Click Finish.
Modifying archive assignments To modify an archive assignment, select it in the bottom area of the result pane and click Properties in the action pane. Proceed in the same way as when assigning an SAP system.
Removing archive assignments To delete an archive assignment, select it in the bottom area of the result pane and click Remove Assignment in the action pane.
There are archiving scenarios in which scan stations submit scanned content to logical archives. For these scenarios, the scan stations need information about the archiving operation: to which logical archives the documents are sent, and how the documents are to be indexed when archived. The archive mode contains this information.
Archive modes are assigned to every scan station. When a scan station starts, it
queries the archive modes that are defined for it at the specified Archive Server. The
employee at the scan station assigns the appropriate archive mode to the scanned
documents in the course of archiving.
The following details must be configured correctly to archive from scan stations:
• Archive in which the documents are stored, scenario and conditions, workflow.
See “Adding and modifying archive modes” on page 209.
• Scan station to which an archive mode applies. See “Adding a new scan host and
assigning archive modes” on page 212.
• If SAP is the leading application: the SAP system to which the barcode and the
document ID are sent, and the communication protocol and version of the
ArchiveLink interface. See “Assigning an SAP system to a logical archive”
on page 204.
For more information about archiving scenarios, see “Scenarios and archive modes”
on page 207.
You need the Document Pipelines for SAP (R3SC) for all archiving scenarios.
Note: For scenarios in which archiving is started from the SAP GUI, you do not
need an archive mode.
PS_ENCODING_BASE64_UTF8N 1
Pre-indexing to Tasks inbox of PDMS GUI
Documents are indexed in Enterprise Scan first. The archiving process archives the
document to the Transactional Content Processing Servers and creates a task in the TCP
Application Server PDMS GUI inbox for a particular user, or for any user in a particular
group.
Scenario (Opcode): DMS_Indexing; Condition: n/a; Workflow: n/a.
Extended conditions:
BIZ_ENCODING_BASE64_UTF8N
BIZ_APPLICATION<name>
User: key = BIZ_DOC_RT_USER, value = <domain>\<name>
User group: key = BIZ_DOC_RT_GROUP, value = <domain>\<name>
Late indexing to Process Inbox of TCP GUI
Archives the document to the Transactional Content Processing Servers and starts a process
with the document in the TCP GUI inbox. Documents are indexed in TCP.
Scenario (Opcode): DMS_Indexing; Condition: n/a; Workflow: <processname>.
Extended conditions:
PS_MODE = LEA_9_7_0
PS_ENCODING_BASE64_UTF8N = 1
BIZ_REG_INDEXING, BIZ_APPLICATION<name> (leave the values empty)
Late indexing to Tasks inbox of PDMS GUI
Archives the document to the Transactional Content Processing Servers and creates a task in
the TCP Application Server PDMS GUI inbox for a particular user, or for any user in a
particular group. Documents are indexed in TCP.
Scenario (Opcode): DMS_Indexing; Condition: PILE_INDEX; Workflow: n/a.
Extended conditions:
BIZ_ENCODING_BASE64_UTF8N
BIZ_APPLICATION<name>
User: key = BIZ_DOC_RT_USER, value = <domain>\<name>
User group: key = BIZ_DOC_RT_GROUP, value = <domain>\<group>
Late indexing for plug-in event
Archives the document to the Transactional Content Processing Servers and calls a plug-in
event in the TCP Application Server. Documents are indexed in TCP.
Scenario (Opcode): DMS_Indexing; Condition: PILE_INDEX; Workflow: n/a.
Extended conditions:
BIZ_ENCODING_BASE64_UTF8N
BIZ_APPLICATION<name>
BIZ_PLG_EVENT = <plugin>:<event>
5. Click Finish.
Thus you can create several archive modes, for example, if you want to assign
document types to different archives.
Modifying an archive mode: To modify the settings of an archive mode, select it in the Archive Modes tab in the result pane and click Properties in the action pane. Proceed in the same way as when adding an archive mode. For details, see “Archive Modes properties” on page 210.
Deleting an archive mode: To delete an archive mode, select it in the Archive Modes tab in the result pane. Click Delete in the action pane. If the archive mode is assigned to a scan host, it must be removed first; see “Removing assigned archive modes” on page 214.
Scenario
Name of the archiving scenario (also known by the technical name Opcode).
Scenarios apply to leading applications.
Archive name
Name of the logical archive, to which the document is sent.
Pipeline Info
Use local Pipeline configuration: The Document Pipeline configuration
installed on the client is used (the actual pipeline to be used can be remote,
though).
Use the following Remote Pipeline: The Document Pipelines can be installed
on a separate computer. The pipeline is accessed via an HTTP interface. For this
configuration, the protocol, the pipeline host, and the port must be set.
Protocol
Protocol that is used for the communication with the pipeline host. For security
reasons, HTTPS is recommended.
Pipeline host
The computer where the Document Pipeline is installed.
Port
Port that is used for the communication with the pipeline host. Use 8080 for
HTTP or 8090 for HTTPS.
Advanced tab
Workflow
Name of the workflow that will be started in Enterprise Process Services when
the document is archived. For details concerning the creation of workflows, see
the Enterprise Process Services documentation.
Conditions
These archiving conditions are available:
R3EARLY
Early archiving with SAP.
BARCODE
If this option is activated, the document can only be archived if a barcode
was recognized. For Late Archiving, this is mandatory. For Early Archiving,
the behavior depends on your business process:
• If a barcode or index is required on every document, select the Barcode
condition. This makes sure that an index value is present before
archiving. The barcode is transferred to the leading application.
• If no barcode is needed, or it is not present on all documents, do not
select the Barcode condition. In this case, no barcode is transferred to the
leading application.
PILE_INDEX
Sorts the archived documents into piles for indexing according to certain
criteria. For example, the pile can be assigned to a document group, and the
access to a document pile in a leading application like Transactional Content
Processing can be restricted to a certain user group.
INDEXING
Indexing is done manually.
ENDORSER
Special setting for certain scanners. Only documents with a stamp are
stored.
Extended Conditions
This table is used to hand over archiving conditions to the COMMANDS file, for
example, to provide the user name so that the information is sent to the correct
task inbox. The extended conditions are key-value pairs. Click Add to enter a
new condition. To modify an extended condition, select it and click Edit. Click Remove to delete the selected condition.
Related Topics
• “Adding a new scan host and assigning archive modes” on page 212
4. Click Add Scan Host in the action pane. A window with available scan hosts
opens.
Related Topics
• “Adding and modifying archive modes” on page 209
• “Adding a new scan host and assigning archive modes” on page 212
Site
Describes the location of the scan host.
Description
Brief, self-explanatory description of the scan host.
5. Click Finish.
Deleting an archive mode: To delete an archive mode, select it in the Archive Mode tab in the result pane. Click Delete in the action pane. If the archive mode is assigned to a scan host, it must be removed first; see “Adding a new scan host and assigning archive modes” on page 212.
Related Topics
• “Adding and modifying archive modes” on page 209
• “Adding additional archive modes” on page 213
• “Archive Modes properties” on page 210
4. Click Add Archive Mode in the action pane. A window with available archive
modes opens.
Related Topics
• “Adding and modifying archive modes” on page 209
• “Archive Modes properties” on page 210
3. Select the scan host for which you want to change the default archive mode.
3. Select the scan host in the top area of the result pane.
4. Select the archive mode which you want to remove in the bottom area of the
result pane.
6. Click OK to confirm.
Known servers are used to realize remote standby scenarios to increase data
security. If a server is added as a known server to the environment, all archives of
this server can be checked in External Archives in the Archives object of the console
tree. If a logical archive of a known server is replicated to the original server, this
archive can be checked in Replicated Archives in the Archives object of the console
tree. See “Configuring remote standby scenarios“ on page 219.
Note: Instead of the host name, you can also use IPv4 addresses. IPv6
addresses are not supported.
You can configure whether HTTP or HTTPS is used in the following way:
• If you only want to allow secure connections using HTTPS, set the value
of Port to 0 (zero) and specify the HTTPS port in Secure port.
• If you only want to allow connections using HTTP, set the value of
Secure port to 0 (zero) and specify the HTTP port in Port.
If both Port and Secure port are set to a value larger than 0, the
ADMS_KNOWN_SERVER_PROTOCOL variable is used to determine the used
protocol. At least one of the port values must be larger than 0.
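The selection rules above can be expressed as a small sketch (illustrative only; the function is not part of the product, and `preferred` stands in for the ADMS_KNOWN_SERVER_PROTOCOL value):

```python
def select_protocol(port, secure_port, preferred="https"):
    """Pick the protocol for a known-server connection from its port
    settings. A port value of 0 disables that protocol; if both ports
    are larger than 0, the configured preference decides."""
    if port <= 0 and secure_port <= 0:
        raise ValueError("at least one of the port values must be larger than 0")
    if port <= 0:
        return "https"
    if secure_port <= 0:
        return "http"
    return preferred
```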
To enable replication:
4. In the dialog box, click OK to enable the encryption certificate of the known
server.
Disabling replication: You can disable replication to a known server again by selecting the known server in the result pane and clicking Disable Replication in the action pane. After you have confirmed with OK, the encryption certificate of the known server is also disabled.
4. To modify the settings of a known server, proceed in the same way as when adding a known server. In addition to the settings in the New known server window, you get more information about the known server:
Version
The version number of the known server.
Startup time
The date and time when the known server was started last.
Build Information
Detailed information of the software build and revision of the known
server.
Description
Shows the short description of the known server, if available.
5. Click OK.
Modifying known server settings: To modify the settings of a known server, select it in the top area of the result pane and click Properties in the action pane. Proceed in the same way as when adding a known server.
[Figure: remote standby scenario with disk volumes, buffers (P1b, P2b, P3b), and the corresponding pools (1b, 2b, 3b) distributed across the original server and the Remote Standby Server.]
In a remote standby scenario, all new and modified documents are asynchronously
transmitted from the original archive to the replicated archive of a known server.
This is done by the Synchronize_Replicates job on the Remote Standby Server.
The job physically copies the data on the storage media between these two servers.
Therefore, the Remote Standby Server provides more data security than the local
backup of media.
With a Remote Standby Server, not the entire server is replicated but just the logical
archives. Further, it is possible to use two servers crosswise, that is one Archive
Server is the Remote Standby Server of the other and vice versa.
2. Add the Remote Standby Server as known server (see “Adding known servers”
on page 215). Ensure that Remote server is allowed to replicate from this host
is set.
3. Click OK. The Remote Standby Server is listed in Known Servers in the
Environment object of the console tree.
Important
The replicate volumes must have the same names as the original volumes.
The replicate volumes need at least the same amount of disk space.
2. Add the original server as known server (see “Adding known servers”
on page 215).
Unless the two servers mutually replicate each other's archives, you must not enable Remote server is allowed to replicate from this host.
4. Select External Archives in the Archives object in the console tree. All logical
archives of the known servers are listed.
5. Select the archive which should be replicated in the result pane and click
Replicate in the action pane.
The archive is moved to Replicated Archives. A message is shown that the pools of the replicated archive must be configured (see “Backups on a Remote Standby Server” on page 223).
6. Select the replicated archive, and then select the Server Priorities tab in the
result pane.
7. Click Change Server Priorities in the action pane. A wizard to assign the
sequence of server priorities opens (see “Changing the server priorities”
on page 139).
8. Assign the server priorities. The order should be: first the Remote Standby
Server, then the original server.
9. Select the Replicated Archives object in the console tree, and then click
Synchronize Servers in the action pane.
3. Select the archive to be replicated, and then select the Server Priorities tab in
the result pane.
5. Assign the server priorities. The order should be: first the original server, then
the Remote Standby Server.
1. On the Remote Standby Server, select the replicated archive, and then select the
Pools tab in the result pane.
2. Select the first pool in the top area. In the bottom area, the assigned volumes are
listed. Volumes that are not configured are labeled with the missing type.
a. Select the first missing volume and click Attach or Create Missing Volume
in the action pane.
b. Enter Mount Path and Device Type and click OK. Repeat this for every
missing volume.
ISO volumes
ISO volumes will be replicated by the asynchronously running
Synchronize_Replicates job (see also “ISO volumes” on page 223).
a. Select Replicated Archives in the console tree and select the designated
archive.
b. Select a replicated pool in the console tree and click Properties in the action
pane.
c. Select the backup jukebox. For virtual jukeboxes with HD-WO media,
OpenText strongly recommends configuring the original and backup
jukeboxes on physically different storage systems.
d. Configure the Synchronize_Replicates job according to your needs (see
“Setting the start mode and scheduling of jobs” on page 148).
Note: On the original Archive Server, the backup jobs can be disabled if
no additional backups should be written.
2. Select the known server whose disk buffer needs to be replicated in the top area of the result pane. The assigned disk buffers are listed in the bottom area of the result pane.
3. Select the disk buffer which needs to be replicated and click Replicate in the
action pane.
4. Enter the name of the disk buffer and click Next.
A message is shown that the disk buffer is replicated and that a volume has to be attached to this disk buffer.
5. Select Buffers in the Infrastructure object in the console tree.
6. Select the Replicated Disk Buffers tab in the result pane. The replicated buffers
are listed in the top area.
7. Select the replicated buffer in the top area. In the bottom area, the assigned
volumes are listed. Volumes which are not configured are labeled with the
missing type.
8. Select the first missing volume and click Attach or Create Missing Volume in
the action pane.
9. Enter Mount Path and click OK. Repeat this for every missing volume.
Related Topics
• “Configuring disk volumes” on page 85
• “Installing and configuring storage devices” on page 71
Note: For backup and recovery of GS, ISO (HD-WO), and FS volumes, contact
OpenText Customer Support.
2. Add a new EMC Centera or Hitachi HCP GS device as single file (VI) storage
device with a separate internal storage pool.
Note: For details, see the storage installation guides in the OpenText
Knowledge Center.
EMC Centera GS devices:
3. Select Replicated Archives in the console tree and select the designated archive.
4. Select a replicated pool in the console tree and click Properties in the action
pane.
5. Select the newly created GS single file device and confirm with OK.
Archive Cache Server distinguishes between read and write requests. For read requests, the Archive Cache Server tries to satisfy the request from its local cache instead of transferring the document over a slow WAN connection from an Archive Server. If the document is not found in the local cache, it is fetched from the Archive Server and cached for later access.
write through
In this mode, all documents are transferred to the Archive Server, but on the fly,
they are also cached in the local store to speed up later read requests.
write back
In this mode, all documents are cached in the local store of the Archive Cache Server. The Archive Server is only informed that new documents reside on the Archive Cache Server. The configured Copy_Back job later transfers these documents to the Archive Server.
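The read path and the two write modes can be summarized in a sketch (class and method names are illustrative, not the product API):

```python
class CacheServerSketch:
    """Illustrative model of how Archive Cache Server handles requests."""

    def __init__(self, mode):
        assert mode in ("write_through", "write_back")
        self.mode = mode
        self.local_cache = {}          # document id -> content
        self.pending_write_back = []   # transferred later by Copy_Back

    def read(self, doc_id, fetch_from_archive_server):
        # Serve from the local cache when possible; otherwise fetch the
        # document over the WAN and cache it for later access.
        if doc_id not in self.local_cache:
            self.local_cache[doc_id] = fetch_from_archive_server(doc_id)
        return self.local_cache[doc_id]

    def write(self, doc_id, content, send_to_archive_server):
        self.local_cache[doc_id] = content
        if self.mode == "write_through":
            # Transferred to the Archive Server immediately, cached on the fly.
            send_to_archive_server(doc_id, content)
        else:
            # Kept locally; the Copy_Back job transfers it later.
            self.pending_write_back.append(doc_id)
```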
The following figure shows a simple layout of a scenario with only one Archive
Server and one Archive Cache Server. In real environments, one Archive Cache
Server can support more than one Archive Server and one Archive Server can have
more than one Archive Cache Server attached. Clients can also access the Archive
Server directly without using Archive Cache Server. This depends on the
configuration; see “Configuring access using an Archive Cache Server” on page 234.
[Figure: clients at a remote site; document transfer and administrative calls pass over the WAN to the Document Service of the Archive Server.]
As the figure indicates, the Administration Server is central to coordinating the cache scenario. Administration Client is used to configure the settings of each Archive Cache Server and the associated clients and archives.
Important
To ensure accurate retention handling, the clock of the Archive Cache Server
must be synchronized with the clock of the Archive Server.
Topic Description
Restrictions valid for “write back”
MTA documents MTA documents can be stored, but the single documents in an MTA document cannot be accessed until they are transferred to the related Archive Server.
Attribute Search Attribute Search in print lists is not available until the content
is transferred from an Archive Cache Server to the related
Archive Server.
VerifySig The signature verification is processed for write-back items but the signer chain is not verified (no timestamp certificates are available on the related Archive Server).
Deletion behavior To avoid problems with deletion, do not use the following
archive settings:
• Original Archive > Properties > Security > Document
Deletion > Deletion is ignored (see also “Configuring the
archive security settings” on page 112)
• Archive Server > Modify Operation Mode > Documents cannot be deleted, no errors are returned (see also “Setting the operation mode of Archive Server” on page 350)
Retention behavior As long as write-back documents are only stored on the Archive Cache Server, there is no protection based on the document retention. After documents are transferred to the related Archive Server, the retention behavior becomes effective. If there is no client retention, the retention setting of the logical archive is used.
Audit There are no audit trails for documents as long as they are not
transferred to the related Archive Server.
Update Document This call is not supported for write back documents.
migrateDocument Results in an error if just the pool name or storage tier is
changed.
Important
Target archives must be enabled to be cached by this
Archive Cache Server, otherwise update calls will fail.
Versioning of components As long as components are only stored on the Archive Cache Server, there is no version control. This means that after a successful modification, the modified component is available, but the version number is not incremented. A subsequent info call still returns version “1” of the just modified component, until the component has been transferred to the related Archive Server.
Transfer and commit Write-back documents are transferred to the related Archive
Server in a two-phase process:
Description
Brief, self-explanatory description of the Archive Cache Server.
Host (client)
Physical host name of the Archive Cache Server, used by a client when
accessing Archive Cache Server.
Note: Instead of the host name, you can also use IPv4 addresses.
However, IPv6 addresses are not supported.
Host (archive server)
Note: Instead of the host name, you can also use IPv4 addresses.
However, IPv6 addresses are not supported.
The <name to use by ACS for itself> name and the Host (archive server)
name must be identical. Otherwise, problems will arise during the
write-back scenario.
http://<host>:<port><context>?...
https://<host>:<secure port><context>?...
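Assembling the base URL from these settings can be sketched as follows, using the default ports mentioned in this guide (8080 for HTTP, 8090 for HTTPS); the query string after “?” is scenario-specific and omitted here:

```python
DEFAULT_PORTS = {"http": 8080, "https": 8090}  # defaults named in this guide

def pipeline_url(protocol, host, context, port=None):
    """Build the base URL for a remote Document Pipeline host
    (illustrative helper; the query string after '?' is omitted)."""
    if port is None:
        port = DEFAULT_PORTS[protocol]
    return f"{protocol}://{host}:{port}{context}"
```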
4. Click Finish.
5. Configure the Copy_Back job. See also “Configuring jobs and checking job
protocol“ on page 141 and “Other jobs” on page 144.
Note: Be aware that this job is disabled by default. If you intend to use the
"write back" mode, enable this job.
6. Click Finish. The new Archive Cache Server is added to the environment.
Next step:
Note: If <name to use by ACS for itself> and Host (archive server) are different
from each other, it is required to rename one or the other to make them
identical. To rename the Archive Cache Server, change the value of the
MY_HOST_NAME variable in the ACS.Setup file to <name to use by ACS for
itself>.
Caution
Do not modify the host name while writing back.
• Select the Copy_Back job that is assigned to the Archive Cache Server and click
Start in the action pane. The cached documents are transferred to the related
Archive Server. A window to watch the transfer status opens.
2. Select the Archive Cache Server you want to modify and click Properties in the
action pane.
3. Modify the Archive Cache Server parameters. See also “Adding an Archive
Cache Server to the environment” on page 229.
4. Click Finish.
1. Detach the Archive Cache Server from all logical archives it is attached to. See
“Deleting an assigned Archive Cache Server” on page 238.
3. Select the Copy_Back job which is assigned to the Archive Cache Server and
click Start in the action pane. The cached documents are transferred to the
related Archive Server. A window to watch the transfer status opens.
Caution
This step ensures that pending write-back documents are transferred
to the related Archive Server. If this step fails, the Archive Cache
Server must not be deleted before the problem is solved.
7. Click Yes to confirm. The Archive Cache Server is deleted from the
environment.
Adding cache volumes: Adding a write-back volume or write-through volumes involves the same steps. There can only be one write-back volume but several write-through volumes.
For each new cache volume, two new properties are required:
• Path where the volume is located
• Volume size
a. Volume path - Add the volume path name of the new volume to the WBVOL
variable. Make sure this path already exists.
b. Volume size - Add the volume size of the new volume (in MB) to the
WBSIZE variable.
a. Volume path - Add the volume path name of the new volume to the
VOL<n> variable, where <n> is the number of the first unassigned volume.
Make sure this path already exists.
b. Volume size - Add the volume size of the new volume (in MB) to the
SIZE<n> variable, where <n> is the number of the first unassigned volume.
Note: The new volume is not yet available. See “Activating the
modification” on page 233.
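The rule of the “first unassigned volume number” can be illustrated with a small sketch (the dictionary stands in for the configured variables; only the VOL<n>/SIZE<n> names come from the text):

```python
def next_volume_index(config):
    """Return the number <n> of the first unassigned VOL<n> variable."""
    n = 1
    while f"VOL{n}" in config:
        n += 1
    return n

def add_write_through_volume(config, path, size_mb):
    """Add VOL<n>/SIZE<n> entries for a new write-through volume.
    Assumes the path already exists on disk (not checked here)."""
    n = next_volume_index(config)
    config[f"VOL{n}"] = path
    config[f"SIZE{n}"] = size_mb
    return config
```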
Resizing cache volumes: You can change the size of existing cache volumes if necessary.
Caution
Danger of loss of data: Make sure not to remove the write-back volume accidentally or to change the path of the write-back volume.
To resize volumes:
2. To resize the write-back volume, change the volume size of the volume (in MB)
in the WBSIZE variable.
3. To resize a write-through volume, change the volume size of the volume (in
MB) in the SIZE<n> variable, where <n> is the number of the volume to be
changed.
Note: The new volume size is not yet valid. See “Activating the
modification” on page 233.
Note: Running cscommand requires that a JDK or JRE is included in the PATH
environment variable.
Note: Resized volumes can be viewed only after restart of the server.
4. Copy all data from the current database location (see step 2) to the new location
(provided in step 1). The file permissions of the copy and the original must
match.
5. Configure the Archive Cache Server to use the new database location:
In the ACS.Setup file, change the value of the DERBY variable to the new
database directory name.
[Figure: subnet assignment example with Archive Server, subnet 123.235.155.0, Client 1 (123.235.155.46), and Client n (123.144.130.m).]
Important
The subnet configuration will only be evaluated by clients using the
OpenText Archive Server API.
Note: Archive Cache Server keeps track of any relevant changes to the archive
settings and is synchronized automatically.
4. Enter settings:
Cache server
The name of the Archive Cache Server assigned to this archive.
Caching enabled
If caching is enabled, one of the following modes can be set.
Write through
The Archive Cache Server will operate in “write through” mode for this
logical archive.
Write back
The Archive Cache Server will operate in “write back” mode for this
logical archive.
Note: If caching is disabled, the Archive Cache Server does not cache any
new documents for this logical archive. Instead, it acts as a proxy and
forwards all requests to Archive Server. Outstanding write-back
documents can still be retrieved.
5. Click Next and enter settings for subnet address and subnet mask/length.
The combination of subnet mask and subnet address specifies a subnet. Clients
residing in this subnet will use the selected Archive Cache Server. Typically, the
Archive Cache Server resides in the same subnet. It is possible to add more than
one subnet definition to an Archive Cache Server; see also “Subnet assignment
of an Archive Cache Server” on page 234.
Several subnets
If a client belongs to more than one subnet, it will use the Archive Cache
Server that is assigned to the best matching subnet.
Subnet address
Specifies the address for the subnet in which an Archive Cache Server is located. At least the first part of the address (for example, NNN.0.0.0 in case of IPv4) must be specified. A gateway must be established for each subnet.
IPv6
If you use IPv6, do not enclose the IPv6 address with square brackets.
IPv4
Enter a subnet mask, for example 255.255.255.0.
IPv6
Enter the address length, i.e. the number of relevant bits, for example
64.
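The “best matching subnet” rule can be illustrated with Python's ipaddress module, assuming that the most specific (longest-prefix) subnet containing the client address wins; the function itself is illustrative, not part of the product:

```python
import ipaddress

def best_matching_cache_server(client_ip, assignments):
    """assignments maps subnet strings (e.g. '10.1.0.0/16') to cache
    server names. Returns the cache server of the most specific subnet
    containing the client address, or None if no subnet matches."""
    addr = ipaddress.ip_address(client_ip)
    matching = [ipaddress.ip_network(s) for s in assignments
                if addr in ipaddress.ip_network(s)]
    if not matching:
        return None
    best = max(matching, key=lambda net: net.prefixlen)
    return assignments[str(best)]
```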
Modifying cache server settings: To modify the settings of an Archive Cache Server, select it in the top area of the result pane and click Properties in the action pane. Proceed in the same way as when configuring an Archive Cache Server.
Further information: For details on working with certificates, see “Certificates” on page 169.
To configure the certificates for write-back:
1. On the Archive Server, enable the Archive Cache Server certificate named
CS_ACS_<archive_cache_server_hostname>.
Important
The certificate must be enabled regardless of the security settings of the
archive.
2. On the Archive Server, import and enable the Archive Server certificate as global authentication certificate unless this has already been done during the Archive Server configuration.
Important
The certificate must be imported and enabled regardless of the security
settings of the archive.
2. Select the logical archive to which the Archive Cache Server is assigned.
3. Select the Cache Servers tab in the top area of the result pane and select the
Archive Cache Server. In the bottom area, the subnet definitions are listed.
4. Click New Subnet Definition in the action pane and enter settings for subnet
mask and subnet address. See also “Configuring archive access using an
Archive Cache Server” on page 235
5. Click Finish.
2. Select the logical archive to which the Archive Cache Server is assigned.
3. Select the Cache Servers tab in the top area of the result pane and select the
Archive Cache Server. In the bottom area, the subnet definitions are listed.
4. Select the subnet definitions in the bottom area of the result pane and click
Properties.
Modify the settings for subnet mask and subnet address. See also “Configuring
archive access using an Archive Cache Server” on page 235
5. Click Finish.
Note: Steps 3 to 6 are only necessary if you use an Archive Cache Server that operates in “write-back” mode.
2. Select the logical archive to which the Archive Cache Server is assigned.
3. Select the Cache Servers tab in the top area of the result pane and select the
Archive Cache Server you want to delete.
5. Deselect Caching enabled to stop caching. See also “Configuring archive access using an Archive Cache Server” on page 235.
7. Select the Copy_Back job which is assigned to the Archive Cache Server you
want to delete and click Start. The cached documents are transferred to the
related Archive Server. A window to watch the transfer status opens.
8. Select the Archive Cache Server you want to delete again and click Delete in the
action pane.
9. Click Yes to confirm. The Archive Cache Server is no longer assigned to the
logical archive.
The Reports node is used to generate reports comprising information on certain well-defined scenarios. Reports are based on scripts describing a specific scenario. A scenario is a kind of template (or order form) describing the content and the layout of a report. Running the script generates a report, an output file in HTML format. Multiple reports can be generated per scenario. Currently, reports comprise details of the archives and pools available on the Archive Server. You can use a report when asking for support; the information provided by reports can be evaluated by the service personnel.
The Reports node comprises the Reports tab and the Scenarios tab.
To generate a report:
2. Select the Scenarios tab in the top area of the result pane.
Information about a report: The following information per report is displayed in the result pane:
Name
Name of the report. The name is predefined; it is derived from the respective scenario name extended by a serial number.
Date
Date and time when the report was generated. Format: YYYY-MM-DD HH:MM:SS.
Size
Size of the HTML file displayed in kB.
Deleting reports: To delete a report, select it and click Delete in the action pane. Confirm the displayed message with OK.
To display a report:
2. Select the Reports tab in the top area of the result pane.
Within this object, you can set the configuration variables for:
• Archive Server
• Document Pipeline
• Email Cloud Archive (if OpenText Archive Center is installed)
• Monitoring Server
For a complete list including short descriptions of all configuration variables, see
“Configuration parameter reference” on page 357.
General tab
Displays the name, the current value, a short description and information
on whether a server restart is required upon modifying this variable
Advanced tab
Displays the fully qualified internal name of the variable
5. Select the General tab and modify the current value.
Resetting to default value: To reset a value to its default value, select it and click Reset to Default in the action pane. This action is enabled only if the value currently differs from the default value.
Retrieving unspecified values: In the list of configuration variables, undefined values are marked with *** Value not defined ***. In the properties window, undefined values are marked with an icon:
Example: Search for port and you will get results with port as name, as internal name and,
if set, as value.
Example: If you enter port, the result, among others, can be the following:
Note: Click on the arrow icon to the right of the search icon (see figure
below) and select Search All Configuration Variables to display all
configuration variables.
1. Select the Configuration object (or one of the objects assigned to it).
This chapter describes tasks that are relevant for storage systems: export and import,
consistency checks. If you archive documents with retention periods, you also have
to check for correct deletion of the documents and clear volumes whose documents
are deleted completely.
Document deletion: When the leading application sends the delete request for a document, the archive system works as follows:
2. The delete request is not propagated to the storage system and the content
remains in the storage. Only logically empty volumes can be removed in a
separate step.
Delete empty partitions: If documents with retention periods are stored in container files, the container volume gets the retention period of the document with the longest retention. The retention period of the volume is propagated to the storage subsystem if possible.
The volume – and the content of all its documents – can be deleted only if all
documents are deleted from the archive database. The volume is purged by the
Delete_Empty_Volumes job. It checks for logically empty volumes meeting the
conditions defined in Configuration (see “Searching configuration variables”
on page 242):
Delete volumes which have not been modified since days variable
(internal name: ADMS_DEL_VOL_NOT_MODIFIED_SINCE_DAYS)
Delete volumes which are more than percent full variable
(internal name: ADMS_DEL_VOL_AT_LEAST_FULL)
and deletes these volumes automatically. You can schedule the job and run it
automatically, or use the List Empty Volumes/Images utility to display the empty
volumes first and then start the deletion job manually (see “Checking for Empty
Volumes and Deleting Them Manually” on page 249).
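The deletion conditions above combine as follows; a minimal Python sketch of the decision the Delete_Empty_Volumes job makes, assuming hypothetical values for the two configuration variables and a simplified volume model (this is an illustration, not the actual implementation):

```python
from datetime import datetime, timedelta

# Hypothetical values for the two configuration variables described above.
NOT_MODIFIED_SINCE_DAYS = 30   # ADMS_DEL_VOL_NOT_MODIFIED_SINCE_DAYS
AT_LEAST_FULL_PERCENT = 90     # ADMS_DEL_VOL_AT_LEAST_FULL

def is_purgeable(last_modified: datetime, fill_percent: float,
                 logically_empty: bool, now: datetime) -> bool:
    """A volume is purged only if it is logically empty (all of its documents
    are deleted from the archive database) AND meets both thresholds."""
    unmodified = now - last_modified >= timedelta(days=NOT_MODIFIED_SINCE_DAYS)
    full_enough = fill_percent >= AT_LEAST_FULL_PERCENT
    return logically_empty and unmodified and full_enough

now = datetime(2017, 6, 1)
# Logically empty, untouched for 60 days, 95% full: eligible for deletion.
print(is_purgeable(datetime(2017, 4, 2), 95.0, True, now))   # True
# Same volume, but only 50% full: kept.
print(is_purgeable(datetime(2017, 4, 2), 50.0, True, now))   # False
```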
Important
To ensure correct deletion, you must synchronize the clocks of the Archive
Server and the storage subsystem, including the devices for replication.
Notes
• Not all storage systems release the space of the deleted volumes (see
documentation for your storage system).
• BLOBs are handled in the same way as container file archiving.
2. Click List Empty Volumes in the action pane. A window to start the utility
opens.
3. Enter settings.
6. Select the Delete_Empty_Volumes job and click Start in the action pane.
During export, the entries about documents and their components on the volume
are deleted from the archive database. The volume gets the internal status exported
and is treated as nonexistent. After that, you remove the ISO medium together with
its local backups from the virtual jukebox. The database entries can be restored by
importing the volume.
Important
• Do not use the Export utility for volumes belonging to archives that are
configured for single instance archiving (SIA). A SIA reference to a
document may be created long after the document itself has been stored;
the reference is stored on a newer medium than the document. SIA
documents can be exported only when all references are outdated but the
Export utility does not analyze references to the documents.
• Volumes containing at least one document with non-expired retention are
not exported.
To export volumes:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
Volume name(s)
Name of the volume(s) to be exported. You can use wildcards to export
multiple volumes at the same time.
5. Click Run. A protocol window shows the progress and the result of the export.
The export process can take some time.
Related Topics
• “Utilities“ on page 267
• “Checking utilities protocols” on page 268
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Import ISO Volume utility in the result pane and click Run in the
action pane.
3. Enter settings:
Volume name
Name of the volume(s) to be imported.
STORM server
Name of the STORM server by which the imported volume is managed.
Backup
The volume is imported as a backup volume and entered in the list of
volumes as a backup type. Not available for ISO volumes.
Arguments
Additional arguments. Not required for normal import, only for special
tasks like moving documents to another logical archive. Contact OpenText
Customer Support.
4. Click Run.
The import process can take some time. A message box shows the progress of
the import.
Related Topics
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Import HD Volume utility in the result pane and click Run in the
action pane.
3. Enter settings:
Volume name
Name of the hard-disk volume to be imported.
Base directory
Mount path of the volume.
Backup
The volume is imported as a backup volume and entered in the list of
volumes as a backup type.
Read-only
The volume is imported as a write-protected volume.
Arguments
Additional arguments. Not required for normal import, only for special
tasks like moving documents to another logical archive. Contact OpenText
Customer Support.
4. Click Run.
The import process can take some time. A message box shows the progress of
the import.
5. Select Original Archives in the Archives object in the console tree.
6. Select the designated archive and the FS or HDSK pool.
7. Click Attach Volume in the action pane.
8. Select the volume and define the priority.
9. Click Finish to attach the imported volume to the pool.
Related Topics
• “Utilities“ on page 267
• “Checking utilities protocols” on page 268
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Import GS Volume utility in the result pane and click Run in the
action pane.
3. Enter settings:
Volume name
Name of the hard-disk volume to be imported.
Base directory
Mount path of the volume.
Read-only
The volume is imported as a write-protected volume.
Arguments
Additional arguments. Not required for normal import, only for special
tasks like moving documents to another logical archive. Contact OpenText
Customer Support.
4. Click Run.
The import process can take some time. A message box shows the progress of
the import.
Related Topics
• “Utilities“ on page 267
• “Checking utilities protocols” on page 268
You can start the utilities in the System object in the console tree. When the utility is
started, a message window shows the progress of the utility.
The volume to be checked must be online. You can either only check the volume or
also attempt to repair inconsistencies.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
4. Type the volume name and specify how inconsistencies are to be handled.
Volume
Name of the volume that is to be checked.
copy document/component from other partition
The utility attempts to find the missing component on another volume. If
the component is found, it is copied to the checked volume. If not, the
component entry is deleted from the database, i.e. the component is
exported.
export component
The database entry for the missing component on the checked volume is
deleted.
Repair, if needed
Check this box if you really want to repair the inconsistencies.
If the option is deactivated, the test is performed and the result is displayed.
Nothing is copied and no changes are made to the database.
Important
Use this repair option only if you are sure that you do not need the
missing documents any longer! You may lose references to
document components that are still stored somewhere in the
archive. If in doubt, contact OpenText Customer Support.
5. Click Run.
A protocol window shows the progress and the result of the check.
Related Topics
• “Utilities“ on page 267
• “Checking utilities protocols” on page 268
The volume to be checked must be online. You can either only check the volume or
also attempt to repair inconsistencies.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
4. Type the volume name and specify how documents missing in the database are
to be handled.
Volume
Name of the volume that is to be checked.
5. Click Run.
A protocol window shows the progress and the result of the check.
Related Topics
• “Utilities“ on page 267
• “Checking utilities protocols” on page 268
To check a document:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
DocID
Type the document ID according to the Type setting.
You can determine the string form of the document ID by searching for the
document in the application (for example, on document type and object
type) and displaying the document information in Windows Viewer or in
Java Viewer.
Type
Select the type of document ID. The ID can be entered in numerical
(Number) or string (String) form.
Repair document, if needed
Check this box if you want to repair defective documents. The utility
attempts to copy the document from another volume. If this option is
deactivated, the utility simply performs the test and displays the result.
Important
Use this repair option only if you are sure that you do not need the
missing documents any longer! You may lose references to
document components that are still stored somewhere in the
archive. If in doubt, contact OpenText Customer Support.
5. Click Run.
A protocol window shows the progress and the result of the check.
Related Topics
• “Utilities“ on page 267
• “Checking utilities protocols” on page 268
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Count Documents/Components utility.
3. Click Run in the action pane.
4. Enter the name of the volume.
5. Click Run.
A protocol window shows the progress and the result of the counting.
Related Topics
• “Utilities“ on page 267
• “Checking utilities protocols” on page 268
To check a volume:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
5. Click Run.
A protocol window shows the progress and the result of the check.
Related Topics
• “Utilities“ on page 267
• “Checking utilities protocols” on page 268
Basically, you can back up archived data by means of the storage system or by means
of the Archive Server (local backup, Remote Standby). Some scenarios are
restricted to one of these methods. The backup medium should be the same type as the
original medium. For detailed information, see the Storage Platform Release Notes
in the Knowledge Center (https://knowledge.opentext.com/knowledge/llisapi.dll/
open/12331031).
Number of Partitions: 1
Number of Backups: 1
Backup Jukebox: Must be different from Original Jukebox
Backup: On for Local_Backup job
The backup concept used by Archive Server ensures that documents are protected
against data loss throughout their entire path to, through, and in the Archive Server.
There are several parts that have to be protected against data loss:
Volumes
All hard-disk volumes that can hold the only instance of a document must be
protected against data loss by RAID. Which volumes have to be protected is
described in the “Installation overview” chapter of the installation guides for
Archive Server.
Document Pipelines
The Document Pipeline of OpenText Imaging Enterprise Scan must be protected
against data loss; for details, see Section 19.2 “Backing up the Document Pipeline
directory” in OpenText Imaging Enterprise Scan - User and Administration Guide
(CLES-UGD).
Database
The database with the configuration for logical archives, pools, jobs, relations to
other Archive Servers, and leading applications must be protected against data
loss. The process depends on the type of database you are using (see “Backing
up the database” on page 262).
Storage Manager configuration
The configuration of the Storage Manager must be saved; see “Backing up and
restoring of the Storage Manager configuration” on page 264.
Data in storage systems
Data that is archived on storage systems like HSM, NAS, or CAS also needs a
backup, either by means of the storage system or with Archive Server tools; see
“Backup for storage systems” on page 258.
Archive Cache Server
If “write back” mode is enabled, the Archive Cache Server stores newly created
documents locally without saving them immediately to the destination. It is
recommended to perform regular backups of the Archive Cache Server data; see
“Backup and recovery of an Archive Cache Server” on page 264.
Directory Services
If OpenText Directory Services (OTDS) is used, OpenText recommends backing
up the OTDS server on a regular basis (for example, weekly).
To avoid data loss and extended downtimes, you, as system administrator, should
back up the database regularly and in full, and complement this full backup with a
daily backup of the log files. In general, the more backups are performed, the safer
the system is. Backups should be performed at times of low system load.
It is advisable to back up the archive database at the same time as the database of the
leading application if possible.
The database must be backed up at regular intervals. However, because its data
contents are constantly changing, all database operations are written to special files
(online and archived redo logs under Oracle, transaction logs for SQL Server). As a
result, the database can always be restored in full on the basis of the backup and
these files.
Important
During the configuration phase of installation, you can either select default
values for the database configuration or configure all relevant values. To
make sure that this guide remains easy to follow, the default values are used
below. If you configured the database with non-default values, replace these
defaults with your values.
For details on password change, see “Changing the database user password”
on page 97.
Tip: To find out whether “maintenance mode” is active, start a command line
and enter
cscommand -c isOnline
or
cscommand -c getStatistics
cscommand utility: With the Archive Cache Server installation comes a small utility (cscommand),
which allows you to activate or deactivate the maintenance mode. The commands to
activate and deactivate maintenance mode can be called from any script or batch file.
Usually, the commands are added to the script that controls your backup. You can
find cscommand in the <OT config>\Archive Cache Server\bin directory.
Note: Running cscommand requires that a JDK or JRE is included in the PATH
environment variable.
3. Start your backup. Make sure that all relevant directories are included.
4. Deactivate maintenance mode:
cscommand -c setOnline -u <username> -p <password>
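Because the maintenance-mode commands are usually added to the backup script, a wrapper along these lines can be used. This is only a sketch: the setOffline command is an assumption (only setOnline, isOnline, and getStatistics appear in this guide, so verify the activation command for your version), cscommand is assumed to be on the PATH, and the backup itself is a placeholder callback:

```python
import subprocess

# cscommand lives in <OT config>\Archive Cache Server\bin; assumed on PATH here.
CSCOMMAND = "cscommand"

def cscmd(command, user=None, password=None):
    """Build a cscommand invocation in the form shown in this guide."""
    args = [CSCOMMAND, "-c", command]
    if user and password:
        args += ["-u", user, "-p", password]
    return args

def run_backup(user, password, backup_fn):
    # Assumed counterpart of setOnline; activates maintenance mode.
    subprocess.run(cscmd("setOffline", user, password), check=True)
    try:
        backup_fn()  # back up cache volumes, write-back volume, database files
    finally:
        # Deactivate maintenance mode again, even if the backup fails.
        subprocess.run(cscmd("setOnline", user, password), check=True)
```

Running the deactivation in a finally block ensures the cache server does not stay in maintenance mode if the backup step fails.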
Directories to be backed up
Note: The directories used by Archive Cache Server are configured during the
installation.
Cache volumes: One or more cache volumes to be used for write-through caching. Not
highly critical, but useful for reducing the time to rebuild cached data.
Write-back volume: One single cache volume to be used for write-back caching. This
volume contains the following subdirectories:
dat
Components are stored here.
idx
For each document, additional information is stored here, which contains all
necessary information to reconstruct the data in case of a crash.
log
Special protocol files (one per day) are stored here. They contain
relevant information on when a document is transferred to and committed by
the Document Service.
Important
Protocol files are not deleted automatically. Ensure regular
deletion of protocol files to avoid storage problems.
Path to store database files: The absolute path to the volume where the Archive Cache Server stores
its metadata for the cached documents. Necessary for recovery.
As with “Backup of Archive Cache Server data” on page 264, you need the
cscommand in the <OT config>\Archive Cache Server\bin directory.
Note: Running cscommand requires that a JDK or JRE is included in the PATH
environment variable.
This procedure restores the Archive Cache Server to the state of a previous backup.
This means that all data from the time span between the last backup and the crash is lost.
Documents that were already transferred to the Archive Server are not affected.
If successful, this procedure recovers the current state of the Archive Cache Server.
2. If the write-back volume is still available, rename the root directory of the write-
back volume (see Step 5, <location of write back data>).
3. Copy your backup of the data to the correct location to replace the corrupt one.
If you also have a partial loss of data volumes, copy the lost data from your
backup to the correct location.
Important
Each successfully recovered document is listed on the command line
and removed from <location of write back data>. This means that
the recover operation can be performed only once.
6. If you do not get any error messages, the renamed directory (<location of
write back data>) can be deleted. Any data left in this subtree is no longer
needed for operation.
Important
If you get error messages, do not delete any data. If you cannot fix the
problem, contact OpenText Customer Support.
Utilities
Utilities are tools that are started interactively by the administrator. The following
table provides an overview of all utilities that can be reached in Utilities in the
System object in the console tree. Cross references lead to detailed
descriptions in the relevant chapters. You also find a description of how to start
utilities and how to check the utility protocol in this chapter.
Some utilities are assigned directly to objects and can be reached in the action pane.
Protocols of these utilities can also be reached in Utilities in the System object in the
console tree.
Note: Some utilities require you to enter the name of the STORM server. To
determine the name, select Storage Devices in the Infrastructure object in the
console tree. The name of the STORM server is displayed in brackets after
the device name; for example:
WORM(STORM1)
Utility: Link
Check Database Against Volume: “Checking database against volume” on page 254
Check Document: “Checking a document” on page 256
Check Volume: “Checking a volume” on page 258
Check Volume Against Database: “Checking volume against database” on page 256
Count Documents/Components: “Counting documents and components in a volume” on page 257
Export Volumes: “Exporting volumes” on page 250
Import GS Volume: “Importing GS volumes for Single File (VI) pool” on page 253
Import HD Volume: “Importing hard-disk volumes” on page 252
Import ISO Volume: “Importing ISO volumes” on page 251
Report Shadow Copy Errors: “Handling shadow copy errors” on page 132
Review Attribute Migration Errors: “Attribute migration“ on page 309
View Installed Archive Server Patches: “Viewing installed Archive Server patches” on page 344
VolMig Cancel Migration Job: “Canceling a migration job” on page 302
VolMig Continue Migration Job: “Continuing a migration job” on page 301
VolMig Fast Migration of ISO Volume: “Creating a local fast migration job for ISO volumes” on page 293
VolMig Fast Migration of remote ISO Volume: “Creating a remote fast migration job for ISO volumes” on page 294
VolMig Migrate Components on Volume: “Creating a local migration job” on page 287
VolMig Migrate Remote Volumes: “Creating a remote migration job” on page 290
VolMig Pause Migration Job: “Pausing a migration job” on page 301
VolMig Renew Migration Job: “Renewing a migration job” on page 302
VolMig Status: “Monitoring the migration progress“ on page 297
2. Select the Utilities tab in the top area of the result pane. All available utilities
are listed in the top area of the result pane.
2. Select the Utilities tab in the top area of the result pane. All available utilities
are listed in the top area of the result pane.
4. Select the Results tab in the bottom area of the result pane to check whether the
execution of the utility was successful
or
select the Message tab in the bottom area of the result pane to check the
messages created during execution of the utility.
2. Select the Protocol tab in the top area of the result pane.
To clear protocols:
2. Select the Protocol tab in the top area of the result pane.
Re-reading scripts: Utilities and jobs are read by Archive Server during the startup of the server. If
utilities or jobs are added or modified, they can be re-read. This avoids a restart of
Archive Server.
To re-read scripts:
2. Select the Protocol tab in the top area of the result pane.
1. Create copy orders for the volume components, using the VolMig Migrate
Components on Volume utility.
3. Check the migration status using the VolMig Status utility. For more
information, see “Monitoring the migration progress“ on page 297.
Attribute migration: Apart from the volume migration, you can use the attribute migration job to move
the metadata information that is stored in the ATTRIB.ATR files of archived
documents to the database; see “Attribute migration“ on page 309. In particular,
you must run the attribute migration job after upgrading to version 10.5.0.
Important
Attribute migration must be finished for all documents to be migrated.
Otherwise, the volume migration will fail.
27.2 Restrictions
The following restrictions are valid for the volume migration features:
• Remote single-file
Remote migration is only possible for volumes that are handled by STORM and
that can be mounted using NFS. Single-File volumes like HSM or HD volumes
cannot be migrated from a remote Archive Server.
• DBMS provider
Remote migration is only possible if the remote Archive Server uses the same
DBMS provider as the local Archive Server. For a cross-provider migration setup,
contact OpenText Global Technical Services.
Caution
Consider that replication and backup settings are not transferred to the
target archive during migration. Therefore, the configuration for backup
and replicated archives must be performed again for the migrated archive.
See “Configuring remote standby scenarios“ on page 219 and “Creating
and modifying pools” on page 117.
1. Select Configuration object in the console tree and search for the respective
variable (see “Searching configuration variables” on page 242).
1. Select Configuration object in the console tree, search for the respective variable
(see “Searching configuration variables” on page 242).
2. Specify the logging parameters for the volume migration:
1. If migrating from Archive Server before 10.5.0: Ensure that the attribute migration
is done for all documents to be migrated by running the
SYS_MIGRATE_ATTRIBUTES job; see “Attribute migration“ on page 309.
Important
Attribute migration must be finished for all documents to be migrated.
Otherwise, the volume migration will fail.
2. Start the Administration Client, select the dedicated logical archive and create a
new pool for the migration. See “Creating and modifying pools” on page 117.
Note: Components not listed in the ds_comp table are ignored. To ensure
that all components of one medium are listed in the ds_comp table,
OpenText recommends that you call volck first.
4. Create and schedule a job in the OpenText Administration Client for the
Migrate_Volumes command. See “Configuring jobs and checking job protocol“
on page 141.
Preconditions
<source db> =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = <source host>)
(PORT = <source port>))
(ADDRESS = (PROTOCOL = IPC)(Key = <source key>))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = <source service name>)
)
)
3. Actual read access to the media is done via NFSSERVERs. To add access to
oldarchive media, set the corresponding variable: In Configuration, search for
the NFS Server n variable (internal name: NFSSERVER<N>; see “Searching
configuration variables” on page 242) on the local server newarchive.
Add an entry for each NFSSERVER on the remote computer (at least for those
that you intend to read from). This will create access to the media on
oldarchive.
5. For the newarchive, select Archive Server > Configuration in the console tree.
6. Search for the List of mappings from remote NFSSERVER names to local
names variable in Configuration (internal name: AS.VMIG.NFSMAP_LIST;
see “Searching configuration variables” on page 242).
Open the Properties. For each remote NFSSERVER to read from, add an entry.
The syntax is:
<remote database server>:<remote NFSSERVER>:local:<local NFSSERVER
alias>
Example: If the named instance is INST_A and the DB service is ECR, the example
from above would have to be changed to
database_server_of_old_archive\INST_A:ECR:WORM:local:WORM2
database_server_of_old_archive\INST_A:ECR:CDROM:local:CDROM2
1. Create and schedule a job in the OpenText Administration Client for the
Migrate_Volumes command. See “Configuring jobs and checking job protocol“
on page 141.
2. Disable backup for the original pool to prevent the server from creating
additional (unwanted) backups in the original pool.
ORACLE DATABASE
On the local server, extend $TNS_ADMIN/tnsnames.ora to contain a section
for the remote computer.
SQL SERVER
If the database of the remote Archive Server (oldarchive) is hosted on
another server (remote database), add an SQL Alias on the target Archive
Server (newarchive) using SQL Server Configuration Manager.
As Alias Name, enter the name of the Archive Server (oldarchive) that is
the source of the migration and that is used for the NFSSERVER mapping
and for the migration job; see below.
Add an alias for SQL Native Client 10.0 Configuration and for SQL Native
Client 10.0 Configuration (32 bit). The alias names must not end with a
blank.
3. On the target Archive Server (newarchive), search for the List of mappings
from remote NFSSERVER names to local names variable in Configuration
(internal name: AS.VMIG.NFSMAP_LIST).
Open the Properties. For each remote NFSSERVER to read from, add an entry.
The syntax is:
<remote database server>:<remote NFSSERVER>:local:<local NFSSERVER
alias>
4. Required only for remote server version 10.5 or later: On the local server (new
archive), call
vmclient -h <remote_server> -u dsadmin:<password> putCert
This makes the certificate of the local server (in as.pem) known to the remote
server.
5. Required only for remote server version 10.5 or later: On the remote server (old
archive), enable the new certificate with the name of the local server.
6. On the remote server (old archive), modify the DS configuration (<OT config
AS>/DS.Setup).
Add the variable
BACKUPSERVER1 = BKCD,<newarchive>, 0
where <newarchive> is the hostname of the target Archive Server.
Important
Do not use blanks and do not type the angle brackets in the value!
Note: For remote fast migration, remote server version 9.7.1 or later is
required.
Note: In case of an error message, check and verify that the correct
certificate has been transferred previously. Compare fingerprints.
9. Disable backup for the original pool to prevent the server from creating
additional (unwanted) backups in the original pool.
To prevent more data from being copied to the migrated volume, you can set the
volume to write-locked. Read access is still possible; write access is blocked.
3. Select the Pools tab in the top area of the result pane. The attached volumes are
listed in the bottom area of the result pane.
4. Select the volume to be write locked and click Properties in the action pane.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
4. Enter appropriate settings in all fields (see Settings for local migration
on page 287).
Click Run.
• the scheduler of the Administration Server calls the job Migrate_Volumes and
• all previous jobs have been processed.
Source Volume
Specify the name of the source volume(s). The following wildcard characters
are available:
Character Description
* Wildcard: 0 to n arbitrary characters
For example, vol5* matches all volumes whose names begin with vol5,
such as vol5a, vol5c78, vol52e4r
? Wildcard: exactly one arbitrary character
For example, volx?x matches volxax to volxzx and volx0x to
volx9x
\ Used to escape wildcards (*, ?) if they are used as “real” characters in
volume names.
[] Specifies a set of volume names:
• “[ ]” can be used only once
• “,” can be used to separate numbers
• “-” can be used to specify a range
For example, [001,005-099]
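As an illustration of the wildcard syntax above, here is a Python sketch that translates such a pattern into a regular expression. This is not OpenText's implementation; in particular, the [] expansion assumes zero-padded numeric names as in the [001,005-099] example:

```python
import re

def volume_pattern_to_regex(pattern: str) -> str:
    """Translate the Source Volume wildcard syntax into a regex (illustrative)."""
    out, i = [], 0
    while i < len(pattern):
        ch = pattern[i]
        if ch == "\\" and i + 1 < len(pattern):        # escaped wildcard
            out.append(re.escape(pattern[i + 1])); i += 2
        elif ch == "*":                                # 0 to n arbitrary characters
            out.append(".*"); i += 1
        elif ch == "?":                                # exactly one character
            out.append("."); i += 1
        elif ch == "[":                                # number set, e.g. [001,005-099]
            end = pattern.index("]", i)
            alts = []
            for part in pattern[i + 1:end].split(","):
                if "-" in part:                        # range, zero-padded like its lower bound
                    lo, hi = part.split("-")
                    alts += [str(n).zfill(len(lo)) for n in range(int(lo), int(hi) + 1)]
                else:
                    alts.append(part)
            out.append("(?:" + "|".join(alts) + ")"); i = end + 1
        else:
            out.append(re.escape(ch)); i += 1
    return "^" + "".join(out) + "$"

def matches(pattern: str, name: str) -> bool:
    return re.match(volume_pattern_to_regex(pattern), name) is not None

print(matches("vol5*", "vol5c78"))            # True
print(matches("volx?x", "volxax"))            # True
print(matches("vol[001,005-099]", "vol042"))  # True
print(matches("vol[001,005-099]", "vol002"))  # False
```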
Target archive
Enter the target archive name.
Target pool
Enter the target pool name.
Migrate only components that were archived: On date or after
You can restrict the migration operation to components that were archived after
or on a given date. Specify the date here. The specified day is included.
Note: The retention date of migrated documents can only be kept or extended.
The following table provides allowed settings:
Verification mode
Select the verification mode that should be applied for volume migration. The
following settings are possible:
• None
• Timestamp
• Checksum
• Binary Compare
• Timestamp or Checksum
• Timestamp or Binary Compare
• Checksum or Binary Compare
• Timestamp or Checksum or Binary Compare
Notes
• Many documents (including all BLOB documents) do not have a checksum
or a timestamp. When migrating a volume that contains such documents or
BLOBs, it is strongly recommended to select a mode that provides “binary
compare” as a last alternative.
• If a migration job cannot be finished because the source volume contains
documents that cannot be verified using the specified verification methods,
it is possible to change the verification mode. See “Modifying attributes of a
migration job” on page 304 (-v parameter).
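The combined modes act as fallback chains: each method is tried in order, and a method that is not applicable for a component (for example, because no checksum is stored) falls through to the next one. A Python sketch of this selection logic, with hypothetical verifier callbacks (the job's internals are not documented here):

```python
def verify_component(component, mode, verifiers):
    """Try each method of a combined mode in order. 'verifiers' maps a
    method name to a callable returning True/False, or None when the
    method is not applicable for this component."""
    if mode == "None":
        return True
    for method in mode.split(" or "):
        result = verifiers[method](component)
        if result is not None:        # method applicable: its verdict is final
            return result
    return False                      # no applicable method: verification fails

# Hypothetical component without a stored timestamp or checksum (e.g. a BLOB):
verifiers = {
    "Timestamp": lambda c: None,        # not stored for this component
    "Checksum": lambda c: None,         # not stored either
    "Binary Compare": lambda c: True,   # byte-wise comparison succeeds
}
print(verify_component("blob-1", "Timestamp or Checksum", verifiers))                    # False
print(verify_component("blob-1", "Timestamp or Checksum or Binary Compare", verifiers))  # True
```

This is why a mode ending in "Binary Compare" is recommended for volumes containing BLOBs: without it, such components cannot be verified at all.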
Additional arguments
-e
Export source volumes after successful migration.
-k
Keep exported volume (export only the document entries, allow
dsPurgeVol to destroy this medium).
-i
Migrate only latest version, ignore older versions.
-A <archive>
Migrate components only from a certain archive.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
4. Enter appropriate settings in all fields (see Settings for remote migration
on page 290). Click Run.
• the scheduler of the Administration Server calls the Migrate_Volumes job and
• all previous jobs have been processed.
Source Volume
Specify the name of the source volume(s). The following wildcard characters
are available:
Character Description
* Wildcard: 0 to n arbitrary characters
For example, vol5* matches all volumes whose names begin with vol5,
such as vol5a, vol5c78, vol52e4r
? Wildcard: exactly one arbitrary character
For example, volx?x matches volxax to volxzx and volx0x to
volx9x
\ Used to escape wildcards (*, ?) if they are used as “real” characters in
volume names.
[] Specifies a set of volume names:
• “[ ]” can be used only once
• “,” can be used to separate numbers
• “-” can be used to specify a range
For example, [001,005-099]
• > 0 (days)
• 0 (none)
• -1 (infinite)
• -8 (keep old value)
Note: The retention date of migrated documents can only be kept or extended.
The following table provides allowed settings:
Verification mode
Select the verification mode that should be applied for volume migration. The
following settings are possible:
• None
• Timestamp
• Checksum
• Binary Compare
• Timestamp or Checksum
• Timestamp or Binary Compare
• Checksum or Binary Compare
• Timestamp or Checksum or Binary Compare
Notes
• Many documents (including all BLOB documents) do not have a checksum
or a timestamp. When migrating a volume that contains such documents or
BLOBs, it is strongly recommended to select a mode that provides “binary
compare” as a last alternative.
• If a migration job cannot be finished because the source volume contains
documents that cannot be verified using the specified verification methods,
it is possible to change the verification mode. See “Modifying attributes of a
migration job” on page 304 (-v parameter).
Additional arguments
-i
Migrates only latest version, ignores older versions.
-A <archive>
Migrates components only from a certain archive.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the VolMig Fast Migration of ISO Volume utility.
3. Click Run in the action pane.
4. Enter appropriate settings in all fields. Click Run.
Settings for local fast migration
Source Volume
Specify the name of the source volume(s). The following wildcard characters
are available:
Character Description
* Wildcard: 0 to n arbitrary characters
For example, vol5*, matches all volumes that name begins with
vol5; for example, vol5a, vol5c78, vol52e4r
? Wildcard: exactly one arbitrary character
For example, volx?x, matches volxax to volxzx and volx0x to
volx9x
\ Is used to escape wildcards (*, ?), if they are used as “real”
characters in volume names.
[] Specifies a set of volume names:
• “[ ]” can be used only once
• “,” can be used to separate numbers
• “-” can be used to specify a range
For example, [001,005-099]
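The wildcard semantics in the table can be sketched as a translation to regular expressions. This is an illustrative approximation of the rules above, not the product's actual matching code; in particular, the expansion of [ ] number ranges is an assumption based on the example given:

```python
import re

def volume_pattern_to_regex(pattern: str) -> str:
    """Translate a volume-name pattern (*, ?, \\, [..]) to a regex.

    Illustrative sketch only; the actual VolMig matching rules may differ.
    """
    out, i = [], 0
    while i < len(pattern):
        c = pattern[i]
        if c == "\\" and i + 1 < len(pattern):   # escaped wildcard: literal
            out.append(re.escape(pattern[i + 1]))
            i += 2
            continue
        if c == "*":
            out.append(".*")                     # 0 to n arbitrary characters
        elif c == "?":
            out.append(".")                      # exactly one character
        elif c == "[":
            end = pattern.index("]", i)
            names = []
            for part in pattern[i + 1:end].split(","):
                if "-" in part:                  # range, e.g. 005-099
                    lo, hi = part.split("-")
                    width = len(lo)
                    names += [str(n).zfill(width)
                              for n in range(int(lo), int(hi) + 1)]
                else:
                    names.append(part)
            out.append("(" + "|".join(map(re.escape, names)) + ")")
            i = end + 1
            continue
        else:
            out.append(re.escape(c))
        i += 1
    return "^" + "".join(out) + "$"

def matches(pattern: str, volume: str) -> bool:
    return re.match(volume_pattern_to_regex(pattern), volume) is not None
```

For example, matches("vol5*", "vol5c78") and matches("volx?x", "volxax") hold, while matches("volx?x", "volxaax") does not.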
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
4. Enter appropriate settings in all fields (see Settings for remote fast migration
on page 294). Click Run.
• the scheduler of the Administration Server calls the Migrate_Volumes job and
• all previous jobs have been processed.
Character Description
* Wildcard: 0 to n arbitrary characters
For example, vol5* matches all volumes whose name begins with vol5;
for example, vol5a, vol5c78, vol52e4r
? Wildcard: exactly one arbitrary character
For example, volx?x, matches volxax to volxzx and volx0x to
volx9x
\ Escapes the wildcards (*, ?) if they are used as literal characters in
volume names.
[] Specifies a set of volume names:
• “[ ]” can be used only once
• “,” can be used to separate numbers
• “-” can be used to specify a range
For example, [001,005-099]
Verification mode
Select the verification mode which should be applied for volume migration. The
following settings are possible:
• None
• Timestamp
• Checksum
• Binary Compare
• Timestamp or Checksum
• Timestamp or Binary Compare
• Checksum or Binary Compare
• Timestamp or Checksum or Binary Compare
Notes
• Many documents (including all BLOB documents) do not have a checksum
or a timestamp. When migrating a volume that contains such documents or
BLOBs, it is strongly recommended to select a mode that provides “binary
compare” as a last alternative.
• If a migration job cannot be finished because the source volume contains
documents that cannot be verified using the specified verification methods,
it is possible to change the verification mode. See “Modifying attributes of a
migration job” on page 304 (-v parameter).
Additional arguments
-d (dumb mode)
Imports document/component entries into the local database using dsTools
instead of reading them directly from the remote database. The dumb mode
disables automatic verification. Archive and retention settings cannot be
changed.
-A <archive>
Migrates components only from a certain archive. Does not work with dumb
mode (-d ).
You can display an overview of migration jobs to check the progress of migration.
Each migration job has a unique ID, optional flags, and a status. This information is
also needed to manipulate migration jobs. See “Manipulating migration jobs“
on page 301.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
• New
• In progress
• Finished
• Cancelled
• Error
5. Click Run. An overview of migration jobs with the selected job status opens.
• New (enqueued)
VolMig has not yet started to process this migration job.
• Impt (import remote DB entries)
VolMig has started replicating DB entries for archives, documents, components
and component types of volumes from a remote source.
• Prep (prepare component list)
VolMig has started to query the components on the current medium to be
migrated.
• Iso (create and write an ISO image file)
For fast migration jobs, entire ISO images are replicated at once. This state
indicates that VolMig is retrieving an ISO image file from a local or remote
volume or is writing that image file to the target storage.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Determine the ID of the migration job you want to pause via the VolMig Status
utility; see “Monitoring the migration progress“ on page 297.
5. Enter the ID of the migration job that you want to pause in the Migration Job
ID(s) field.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Determine the ID of the migration job you want to continue via the VolMig
Status utility; see “Monitoring the migration progress“ on page 297.
5. Enter the ID of the migration job that you want to continue in the Migration Job
ID(s) field.
6. Click Run. A protocol window shows the progress and the result of the
migration. The migration job is set back to the status it had before it was
paused or the error occurred.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Determine the ID of the migration job you want to cancel via the VolMig Status
utility. See “Monitoring the migration progress“ on page 297.
5. Enter the ID of the migration job that you want to cancel in the Migration Job
ID(s) field.
6. Click Run.
A protocol window shows the progress and the result. The migration job is set
to the Canc status. All copy jobs for this migration job are deleted.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Determine the ID of the migration job you want to renew via the VolMig Status
utility. See “Monitoring the migration progress“ on page 297.
5. Enter the ID of the migration job that you want to renew in the Migration Job
ID(s) field.
6. Click Run. A protocol window shows the progress and the result of the
migration. The migration job is set to the New status and is started from the
beginning.
The volume migration suite provides additional utilities to support you in
performing your migration. These utilities must be executed in a command shell.
The following sections explain the most important vmclient commands with their
corresponding attributes.
jobID
The ID of the migration job to be deleted.
jobID
The ID of the migration job to be finished.
jobID
The ID of the migration job to be modified.
attribute
The attributes which can be modified.
-e (export)
Export source volumes after successful migration.
-k (keep)
Do not set the exported flag for the volume (so dsPurgeVol can destroy it).
-r <value> (retention)
Set a new value for the retention of the migrated documents.
Not supported in Fast Migration scenarios.
old poolname
Is constructed by concatenating the source archive name, an underscore
character and the source pool name, for example, H4_worm.
new poolname
Is constructed by concatenating the target archive name, an underscore
character and the target pool name, for example, H4_iso.
-d
Update pools in ds_job only.
-v
Update pools in both ds_job and vmig_jobs.
Note: This works only for local migration scenarios. Write jobs in a remote
migration environment remain on the remote server and cannot be moved to
the local machine.
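The construction rule for the old and new pool names above can be sketched as follows (a hypothetical helper for illustration, not a vmclient function):

```python
def migration_pool_name(archive: str, pool: str) -> str:
    """Concatenate archive name, an underscore, and pool name.

    Hypothetical helper mirroring the naming rule above; not a product API.
    """
    return f"{archive}_{pool}"
```

For example, migration_pool_name("H4", "worm") yields the old pool name H4_worm, and migration_pool_name("H4", "iso") yields the new pool name H4_iso.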
jobID
The ID of the migration job whose components should be listed.
max results
The maximum number of components to list.
archive
The archive name.
pool 1
Name of the first pool.
pool 2
Name of the second pool.
archive
The archive name.
pool
The pool name.
sequence number
New number of the sequence.
sequence letter
New letter (for ISO pools only).
volume name
Name of the primary volume.
output file
File to write the output to instead of stdout.
Attribute migration
“On the fly” migration
The information within the document’s ATTRIB.ATR file is migrated to the database
upon the first access of the document. This automatic migration process does not
require any user interaction.
“Bulk” migration
In addition to the automatic migration mentioned before, a job exists that migrates
the metadata in the ATTRIB.ATR files to the database. By default, the job is scheduled
to run every Sunday at 0:30.
• Follow the procedure in “Starting and stopping jobs” on page 146 to start the
attribute migration. The name of the job is SYS_MIGRATE_ATTRIBUTES.
The job runs the AttribAtrMigrate command, which requires the following
parameters:
AttribAtrMigrate { [-t <threads>] migrate {null|err} <time to run> |
report }
where
Failed migrations
If the migration to the database failed for a document (states other than O or Y), you
can run a job to retry the migration. If the error persists, the problem must be fixed
manually. Use the Review Attribute Migration Errors utility to list the failed
documents; see below.
• Follow the procedure in “Starting and stopping jobs” on page 146 to start the
attribute migration. The name of the job is
SYS_RETRY_ATTRIBUTE_MIGRATION.
The job runs the AttribAtrMigrate command with the following parameters:
AttribAtrMigrate migrate err 60
1. To start the utility, follow the procedure in “Starting utilities” on page 268.
2. When requested, enter the number of errors to review, or leave the field empty
to use the default value (1000).
To monitor the archiving system, you can use Administration Client, Archive Server
Monitoring, and Document Pipeline Info. Administration Client and Document
Pipeline Info must be installed on the administrator's computer and can connect to
different Archive Servers and Document Pipeline hosts via the network. Archive
Monitoring Web Client is installed on the Archive Server and runs in a browser,
accessible via a URL.
Administration Client
• Checking the success of jobs, in particular of the Write and Backup jobs
• Checking for notifications according to your configuration (emails, alerts,
execution of files; see “Monitoring with notifications“ on page 315)
• Checking free disk space
By setting up a notification service, you can reduce the amount of work associated
with monitoring the archive system. The Notification Server sends notifications
when certain predefined server events occur. You can define both the events and the
type and recipient of the notification. You can also restrict the time slot in which
particular notifications are sent. For example, you can define notifications sent to the
workstation during working hours and by email to the on-call service outside
working hours. This ensures that the responsible persons are notified directly
when a particular event occurs.
1. Define the event filters to which the system should respond; see “Creating and
modifying event filters” on page 315.
2. Create the type and settings of the notifications and assign them specific event
filters; see “Creating and modifying notifications” on page 319.
Some important event filters are already predefined. You can change them and
define new event filters.
1. Select Events and Notifications in the System object in the console tree.
2. Select the Event Filters tab. All available event filters are listed in the top area of
the result pane.
3. Click New Event Filter in the action pane. The window to create a new event
filter opens.
4. Enter the conditions for the new event filter. See “Conditions for event filters”
on page 316.
5. Click Finish.
Modifying event filters
To modify an event filter, select it in the top area of the result pane and click
Properties in the action pane. Proceed in the same way as when creating a new
event filter. The name of the event filter cannot be changed.
Deleting event filters
To delete an event filter, select it in the top area of the result pane and click Delete in
the action pane.
Related Topics
• “Conditions for event filters” on page 316
• “Available event filters” on page 318
• “Creating and modifying notifications” on page 319
• “Checking alerts” on page 323
Name
A self-explanatory name
Message class
Classifies and characterizes events
• Any (all classes are recorded)
• Administration: events that affect administration
• Database: database event
• Server: server event
Component
Specifies the software component that issues the message. If nothing is specified
here, all components are recorded (Any). The most important components are:
• Administration Server: mainly monitors the execution of the jobs
• Monitor Server: reports status changes of archive components, i.e. whenever
a status display changes in Archive Monitoring Web Client
• Document Service: monitors the jds, which provides archived documents
and archives documents
• Storage Manager: reports errors that occur when writing to storage media
• Archive Timestamp Server: reports errors that occur when creating or
administering timestamps
• High Availability: reports errors associated with High Availability software
and the cluster software it uses
• Volume Migration: reports errors that occur during volume migration
• BASE DocTools: reports errors associated with BASE DocTools
• R/3 DocTools: reports errors associated with R/3 DocTools (SAP)
• Filter Service: not used
Severity
Specifies the importance.
Message codes
Specifies which message codes should be considered by the event filter. The
codes are used to filter out concrete events and are usually defined in a message
catalog, which belongs to a component. For each component, the catalog is
installed in
<OT config>\msgcat\<COMPNAME>_<lang>.cat
Example: ADMS_us.cat is the English message catalog for the Administration Server
component.
It is possible to enter the code number directly, but it is recommended and more
convenient to use the Select button, which opens a window with the currently
available message codes and their associated descriptions.
2. Click Select. A window with the currently available message codes opens. The
available message codes depend on the selected combination of message
class, component, and severity.
3. Select the designated message code and click OK to confirm. If you define a
range, select the first and the last message code (from – to).
Related Topics
User-defined events
In addition, you can define other events to get notifications when they occur. Useful
events are:
Job Error
This event records errors that are listed in the job protocol and notifies you with
a particular message. Use this configuration:
Severity: Error
Message class: Server or <any>
Component: Administration Server
Message code: 1
Severity: Error
Message class: Server or <any>
Component: Monitor Server
Message code: -
Severity: Warning
Message class: Server or <any>
Component: Monitor Server
Message code: -
Related Topics
• “Conditions for event filters” on page 316
• “Checking alerts” on page 323
• “Creating and modifying notifications” on page 319
• Alert: a passive notification type; alerts must be checked by the administrator
(see “Checking alerts” on page 323)
• Mail Message: an active notification type; when the assigned event occurs, a
message is sent
• TCL Script: an active notification type; when the assigned event occurs, a TCL
script is executed
• Message File: a passive notification type; notifications are written to a specific file
• SNMP Trap: an active notification type; notifications are sent to an external
monitoring system via the SNMP protocol
To create a notification:
1. Select Events and Notifications in the System object in the console tree.
2. Select the Notifications tab. All available notifications are listed in the top area
of the result pane.
3. Click New Notification in the action pane. The wizard to create a new
notification opens.
4. Enter the name and the type of the notification and click Next. Enter the
additional settings for the new notification event. See “Notification settings”
on page 320.
6. Select the new notification in the top area of the result pane.
7. Click Add Event Filter in the action pane. A window with available event filters
opens.
8. Select the event filters which should be assigned to the notification and click
OK.
Modifying notification settings
To modify the notification settings, select the notification in the top area of the result
pane and click Edit in the action pane. Proceed in the same way as when creating a
new notification. The name of the notification cannot be changed.
Deleting notifications
To delete a notification, select the notification in the top area of the result pane and
click Delete in the action pane.
Adding event filters
To add event filters, select the notification in the top area of the result pane. Click
Add Event Filter in the action pane. Proceed in the same way as when creating a
new notification.
Removing an event filter
To remove an event filter, select it in the bottom area of the result pane and click
Remove in the action pane. The notification events are not lost; only the assignment
is deleted.
Related Topics
• “Notification settings” on page 320
• “Checking alerts” on page 323
• “Using variables in notifications” on page 322
Name
The name should be unique and meaningful.
Notification Type
Select the type of notification and enter the specific settings. The following
notification types and settings are possible:
Alert
Alerts are notifications, which can be checked by using Administration
Client. They are displayed in Alerts in the System object in the console tree
(see “Checking alerts” on page 323).
Mail Message
Emails can be sent to respond immediately to an event or during standby time.
If you want to send notifications via SMS, note that the length of the SMS text
(including Subject and Additional text) is limited by most providers. Enter the
following additional settings:
• Sender address: Email address of the sender. It appears in the from field
in the inbox of the recipient. The entry is mandatory.
• Mail host: Name of the target mail server. The mail server is connected
via SMTP. The entry is mandatory.
• Recipient address: Email address of the recipient. If you want to specify
more than one recipient, separate them by a semicolon. The entry is
mandatory.
• Subject of the mail, $ variables can be used (see “Using variables in
notifications” on page 322). If not specified, the subject is $SEVERITY
message from $HOSTNAME/$USERNAME($TIME).
• Include Standard Text: If selected, you get an introduction in the
notification: “The preceding notification message was generated by ...”.
This introduction is followed by the message text. If you send SMS
messages, clear this check box.
• Max. Length of mail message text: Use this setting to restrict the number
of characters in the email body. If you send notifications as SMS
messages, enter a value according to the limitation of your provider.
TCL Script
Enter the name and the path of the TCL script. It is executed when the event
occurs.
Message File
The notification is written to a file. Enter name and path of the target file or
click Browse to open the file browser. Select the designated message file and
click OK to confirm.
Enter also the maximum size of the message file in bytes.
SNMP Trap
Provides an interface to an external monitoring system that supports the
SNMP protocol. Enter the information on the target system.
Text
Free text field with the maximum length of 255 characters. $ variables can be
used (see “Using variables in notifications” on page 322).
Active Period
Weekdays and time of the day at which the notification is to be sent.
Related Topics
• “Creating and modifying notifications” on page 319
$CLASS
Message class, characterizes the event
$COMP
Component that has output the message
$SEVERITY
Type of message, characterizes the importance
$TIME
Date and time when the message was output from the component (system time
of the computer on which the component is installed)
$HOST
Name of the computer on which the reported event occurred. For server
processes, “daemon” is output
$USER
Name of the user under which the processes run on the $HOST machine
$MSGTEXT
Message text from the message catalog. Important messages are listed first. If
there is no catalog message, the default text provided by the component is used
$MSGNO
Code number from the message catalog
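The $ variables can be pictured as plain text substitution. A minimal sketch using Python's string.Template, with the default mail subject quoted in “Notification settings” and illustrative event values (the Notification Server's own substitution mechanism may differ):

```python
from string import Template

# Default mail subject quoted in "Notification settings"; $ names are
# replaced with the values of the triggering event.
subject = Template("$SEVERITY message from $HOSTNAME/$USERNAME($TIME)")

# Illustrative event values (not real server output).
event = {
    "SEVERITY": "Error",
    "HOSTNAME": "alpha.opentext.com",
    "USERNAME": "daemon",
    "TIME": "2017/06/23 14:05:12",
}
print(subject.substitute(event))
# Error message from alpha.opentext.com/daemon(2017/06/23 14:05:12)
```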
Related Topics
• “Notification settings” on page 320
• “Checking alerts” on page 323
To check alerts:
1. Select Alerts in the System object in the console tree. All notifications of the
alert type are listed in the top area of the result pane.
2. Select the alert to be checked in the top area of the result pane. Alert details are
displayed in the bottom area of the result pane. The yellow icon of the alert
entry turns to grey if read.
Marking messages as read
To mark all messages as read, click Mark All as Read in the action pane. The yellow
icons of the alert entries turn grey.
Tasks
The OpenText Archive Server Monitoring Web Client provides the following
monitoring functions:
• Archive Server Statistics - Checking the archiving and retrieving activities and
the Archive Server’s read/write performance.
• Archive Server Health Status - Checking the status of the Archive Server
components.
• Checking free storage space in the log directories
• Checking free storage space in pools and volumes
• Checking the Document Service and access to unavailable volumes
• Checking the Storage Manager
• Archive Server Threat Detection - Checking quota limit violations reported for
Archive Server users.
OpenText Archive Server Monitoring is used solely to observe the global system and
to identify problem areas. Monitoring collects information about the status of
Archive Server components at regular intervals.
Monitoring cannot be used to eliminate errors, modify the configuration, or start and
stop processes.
OpenText Archive Server Monitoring can be started using the URL of the Archive
Server host, for example,
https://alpha.opentext.com:8090/archive/monitoring (see “Starting the
Archive Monitoring Web Client” on page 329).
Warning and error messages
With Administration Client, you can configure warning and error messages that are
sent when the status of Archive Server components changes (see “Monitoring with
notifications“ on page 315). You can also use external system management tools
within the scope of special project solutions.
Note: The default certificates delivered for Archive Timestamp Server expired
on 2015-05-28. To avoid error messages, you can replace the certificate files. For
more information, see “Renewing expired certificates” on page 182.
Alternatively, you can switch off the monitoring of the timestamp service by
setting the What kind of timestamp-server the script should expect
configuration variable (internal name: AS.TSTP.IXTWATCH_TS_SYSTEM) to
none.
Security
• The Archive Monitoring Web Client requires authentication.
To create a dedicated user and group for Archive Monitoring Web Client (built-
in user management):
a. Full access:
Create a new group and assign the MonitoringChangeAccess and
MonitoringReadAccess policies to it:
i. In the console tree, select Archive Server > System > Users and Groups
ii. In the action pane, click New Group.
iii. Enter a Group name and click OK.
iv. In the result pane, on the Groups tab, select the group you just created.
In the action pane, click Add Policy.
v. Select the MonitoringChangeAccess and MonitoringReadAccess
policies and click OK.
b. Read-only access:
Create a new group and assign the MonitoringReadAccess policy to it:
i. In the console tree, select Archive Server > System > Users and Groups
ii. In the action pane, click New Group.
iii. Enter a Group name and click OK.
iv. In the result pane, on the Groups tab, select the group you just created.
In the action pane, click Add Policy.
v. Select the MonitoringReadAccess policy and click OK.
The user now has full access or read-only access to the Archive Monitoring Web
Client.
a. Full access:
Create a new group and assign the MonitoringChangeAccess and
MonitoringReadAccess policies to it:
i. In the console tree, select Archive Server > System > Users and Groups
ii. In the action pane, click New Group.
iii. Enter a Group name and click OK.
iv. In the result pane, on the Groups tab, select the group you just created.
In the action pane, click Add Policy.
v. Select the MonitoringChangeAccess and MonitoringReadAccess
policies and click OK.
b. Read-only access:
Create a new group and assign the MonitoringReadAccess policy to it:
i. In the console tree, select Archive Server > System > Users and Groups
iv. In the result pane, on the Groups tab, select the group you just created.
In the action pane, click Add Policy.
4. Create a new (OTDS) group and give it exactly the same name as the group you
created before in the built-in user management. The user partition of the new
group must be a member of the access role for the Archive Server resource:
a. In the console tree, select Directory Services > User Partitions > <your user
partition>.
Tip: To verify that the user partition is a member of the access role for
the Archive Server resource, select Directory Services > Access Roles >
<your access role>. In the result pane, the user partition must be listed in
the Members tab.
5. You can add an existing user to the group or create a new one. In case of a non-
synchronized resource, you can create a user in the following way:
a. In the console tree, select Directory Services > User Partitions > <your user
partition> and in the action pane, click New User.
b. In the New User wizard, specify all required information and click Finish.
6. In the result pane, on the Groups tab, select the group you’ve just created.
The user now has full access or read-only access to the Archive Monitoring Web
Client.
Example: https://archiveserver.example.com:8090/archive/monitoring
After signing in, the Archive Server Monitoring main page displays the links to the
monitoring menus:
• Archive Server Statistics
see “Archive Server Statistics” on page 329
• Archive Server Health Status
see “Archive Server Health Status” on page 330
• Archive Server Threat Detection
see “Threats” on page 332
Diagrams show the number of Components and the Data volume handled by the
Archive Server during a specific period of time, as well as the read/write
Performance.
Note: The monitor does not provide archive-specific statistics. The monitor
diagrams refer to all Archive Server activities.
• Supported diagrams:
• Number of components handled by the Archive Server (read/write)
• Data volume (MB) handled by the Archive Server (read/write)
The component status can be Ok, Warning, or Error. Details are displayed for the
following groups:
Note: Depending on the installed Document Pipelines and the current Archive
Server configuration, the Health Status can report more status change groups.
37.4.1 Database
The monitor checks the logfiles of tools for database errors.
<jukebox_name>
Provides an overview of the volumes for each attached jukebox. The possible
status specifications are Ok, Warning or Error. Warning means that there are no
writeable volumes or no empty slots in the jukebox. Error is displayed if at least
one corrupt medium is found in a jukebox (display -bad- in Devices in
OpenText Administration Client).
The following information is displayed in Details:
37.4.3 Services
The monitor checks the Document Service, the Archive Server component that
archives documents and delivers them for display. The checks comprise the
following services:
The status of admsrv, bksrvr, tstp, and auxsrvr is Active or Error. Error means that
the component cannot be executed and must be restarted.
The status of the Storage Manager is Active if the server is running. A status of
either Can't call server, Can't connect to server, or Not active indicates that the
server is either not reachable or not running. Check the jbd.log log file for errors. If
necessary, solve the problem and start the Storage Manager again.
37.5 Threats
For each user, the monitor reports the number of components and the data volume
(number of bytes) downloaded per day during the last 30 days.
When a defined download quota limit per day and per user is exceeded, a threat
report (event) is created.
• Only one threat report will be sent per day and per user, unless the Threat
Settings are changed during the day.
• Component quota
• Data volume quota (MB)
• Block user
• Notify to
2. Specify the quota limits that, if exceeded, trigger a threat report that is displayed
in the Threats menu.
Click the Back button after each change to the Settings.
• Component quota
Maximum number of components a user has downloaded per day.
• Data volume quota (MB)
Maximum data volume in MB a user has downloaded per day.
• Block user
Specify whether a user is blocked from further downloading, when a quota
limit is exceeded.
Move the slider to the On or Off position.
• Notify to
Specify the E-MAIL SETTINGS for sending a notification message if a user
has exceeded the quota limit.
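The quota rules above can be sketched as a single check per user and day. This is a hypothetical illustration (function name, signature, and values are not Archive Server internals):

```python
def check_quota(components: int, volume_mb: float,
                component_quota: int, volume_quota_mb: float,
                block_user: bool):
    """Return (threat_reported, user_blocked) for one user's daily totals.

    Hypothetical sketch of the threat-detection rules described above.
    """
    exceeded = components > component_quota or volume_mb > volume_quota_mb
    # If Block user is On, exceeding a quota also blocks further downloads
    # (until midnight); otherwise only a threat report/warning is raised.
    return exceeded, exceeded and block_user
```

For example, a user who downloaded 1500 components against a component quota of 1000 triggers a threat report, and is additionally blocked if Block user is On.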
User history list The Threats menu displays the list of users who have exceeded the specified quota
limits.
• If Block user is set to Off for the quota limit, a warning is displayed.
• If Block user is set to On for the quota limit, further downloading activities are
blocked until midnight.
You can unblock a user’s downloading activities if the specified quota limits
were exceeded:
• Set Block user to Off for the quota limits.
• Specify higher quota limits.
• At midnight, all users’ downloading activities are automatically unblocked.
For each USER HISTORY, the record of retrieving and downloading activities is
displayed in Charts and in Table format.
38.1 Auditing
The auditing feature of Archive Server traces events of two aspects:
Important
Administrative changes are only recorded if they are done with
Administration Client. To get complete audit trails, make sure that other
ways of configuration cannot be used, for example, editing configuration
files directly. At least, such tasks must be logged by other means.
The auditing data is collected in separate database tables and can be extracted from
there with the exportAudit command to files, which can be evaluated in different
ways.
To audit the lifecycle of the documents, activate the Auditing option of the archive.
As the auditing mode is related to logical archives, enable it for each archive that is
subject to auditing.
Cleanup job
exportAudit
To extract the document-related auditing data to files, use the command
exportAudit -S
To extract the administration-related auditing data to files, use the
command
exportAudit -A
Options
With further options, you can adapt the output to your needs. For example, you
will probably want to define the timeframe for data extraction (-s and -e options).
Without these dates, you get all audit data up to the current date and time, which
can result in very large files and long export times.
Run exportAudit /? to get a list of all options.
Command:
exportAudit -S -s 2005/07/14:12:00:00 -e 2005/07/19:08:00:00 -o
csv -h -a
Event Description
EVENT_CREATE_DOC Document created
EVENT_CREATE_COMP Document component created on volid1
EVENT_UPDATE_ATTR Attributes updated
EVENT_TIMESTAMPED Document timestamped on volid1 (dsSign,
dsHashTree)
EVENT_TIMESTAMP_VERIFIED Timestamp verified on volid1
EVENT_TIMESTAMP_VERIF_FAILED Timestamp verification failed on volid1
EVENT_COMP_MOVED Document component moved from HDSK volid1
to volid2 (dsCD etc. with -d)
EVENT_COMP_COPIED Document component copied from volid1 to
volid2 (dsCD etc. without -d)
EVENT_COMP_PURGED Document component purged from HDSK volid1
(dsHdskRm)
EVENT_COMP_DELETED Component deleted from volid1
EVENT_COMP_DELETE_FAILED Component deletion from volid1 failed
EVENT_COMP_DESTROYED Component destroyed from volid1
EVENT_DOC_DELETED Document deleted
EVENT_DOC_MIGRATED Document migrated
EVENT_DOC_SET_EVENT setDocFlag with retention called
EVENT_DOC_SECURITY Security error when attempting to read doc
Related Topics
• “Searching configuration variables” on page 242
38.2 Accounting
Archive Server allows you to collect accounting data for further analysis and billing.
To use accounting:
2. Evaluate the accounting data; see “Evaluating accounting data” on page 338.
Suppressed jobs
Accounting is disabled for the following jobs by default: INFO (7), ADMINFO (25),
and SRVINFO (26). If you want to enable accounting for any of these jobs, you must
add the configuration variable ACC_SUPPRESSED_JOBS to the DS.setup file. The
value of the variable must hold all job numbers that are to be disabled for
accounting, separated by commas. A value of 0 means that no job is disabled. For
details, see the Knowledge Base article 15666398 (https://knowledge.opentext.com/
knowledge/llisapi.dll/Open/15666398).
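For example, an entry listing the default suppressed job numbers could look like this (the job numbers are those named above; the exact line syntax of DS.setup is an assumption):

```
ACC_SUPPRESSED_JOBS=7,25,26
```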
If you archive the old accounting data, you can also access the archived files. The
Organize_Accounting_Data job writes the DocIDs of the archived accounting files
into the ACC_STORE.CNT file which is located in the accounting directory (defined in
Path to accounting data files).
The tool saves the files in the <target directory> where you can use them as usual.
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the View Installed Archive Server Patches utility.
3. Click Run in the action pane.
4. In the field View patches for packages enter the package whose patches you
want to list. Leave the field empty to view all packages.
5. Click Run to start the utility.
Related Topics
• “Utilities” on page 267
• “Checking utilities protocols” on page 268
If, however, any of these parameters have been chosen inappropriately, you still can
correct them by taking the following steps:
1. Create the two correct directories in the file system and make sure that they are owned and writable by the Archive Spawner user.
2. Correct the directory settings in the configuration:
Click OK.
3. Restart the Archive Spawner processes (for details, see “Starting and stopping
Archive Server” on page 347).
Archive Administration Utilities: The Archive Administration Utilities are Archive Monitoring Web Client, Document Pipeline Info, and Administration Client. You can find a short summary of their use in “Everyday monitoring of the archive system” on page 313.
System tools: The most important error messages are also displayed in the Windows Event Viewer or in the UNIX syslog. This information is a subset of the information generated in the log files. Use these tools to see the error messages for all components in one place.
You can prevent the transfer of error messages to the system tools in general or for
single components with the setting Write error messages to Event Log / syslog; see
“Log settings for Archive Server components (Except STORM)” on page 354.
Log files record the jobs of the archive components. The number of log entries and
thus the size of the log files depend on the log level that has been set. Check the size
of the log files regularly and delete larger files. They will be automatically recreated
when Archive Server is started.
The log files for Archive Server can be found in the directory <OT logging>.
Important
Stop the Spawner before you delete the log files!
On client workstations, other log files are used. For more information, refer to the
Imaging documentation.
The Oracle database also generates log and trace files for diagnostic purposes. As
administrator, you should regularly check the size of the following files and delete
them from time to time:
Windows
<ORACLE_HOME>\network\log\listener.log (log file)
<ORACLE_HOME>\network\trace\* (trace files)
<ORACLE_HOME>\rdbms\trace\*.trc (trace files)
UNIX
$ORACLE_HOME/network/log/listener.log (log file)
$ORACLE_HOME/network/trace/* (trace files)
$ORACLE_HOME/rdbms/log/*.trc (trace files)
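A size check before deletion can be scripted, for example with find. The sketch below uses a scratch directory instead of $ORACLE_HOME so it is safe to run; the directory name and the 1 KB threshold are illustrative and should be adjusted to your system.

```shell
# Scratch directory stands in for $ORACLE_HOME/rdbms/trace.
tracedir=./trace.example
mkdir -p "$tracedir"
head -c 2048 /dev/zero > "$tracedir/big.trc"    # 2 KB sample trace file
head -c 10   /dev/zero > "$tracedir/small.trc"  # 10-byte sample trace file
# List trace files larger than 1 KB (review before deleting):
find "$tracedir" -name '*.trc' -size +1k
```

To actually delete what the find command lists, append -delete after reviewing the output.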
Archive Server and the database are automatically started by the operating system
when the hardware is started. However, there are situations in which you have to
start or stop Archive Server components manually without shutting down the
hardware, for example, when you back up the system data or when you perform
system administration tasks that require a manual stop of Archive Server
components. A restart can also help to determine the cause of a problem.
After the restart, read the log file spawner.log in the directory <OT logging>. You
can see whether all the processes have started correctly (see also “Spawner log file”
on page 351).
You can simply use the OpenText Administration Client to start and stop Archive
Server components. If the tool is not available, you can use the Windows Services, or
command line calls. Note that the order in which the components are started or
stopped is important. Call the commands in the given order.
Note: The following commands are not valid for installations in cluster
environments.
Starting
Windows Services: To start Archive Server using the Windows Services, proceed as follows:
1. To open the Windows Services, do one of the following:
Command line: To start Archive Server from the command line, enter the following commands in this order:
Stopping
Windows Services: To stop Archive Server components using the Windows Services, proceed as follows:
Command line: To stop Archive Server components from the command line, enter the following commands in this order:
Starting
Use the commands listed below to restart Archive Server after the archive system
has been stopped without shutting down the hardware.
1. Log on as root.
2. Start the archive system including the corresponding database instance with:
HP-UX /sbin/rc3.d/S910spawner start
Stopping
1. Log on as root.
HP-UX /sbin/rc3.d/S910spawner stop
AIX /etc/rc.spawner stop
Solaris /etc/rc3.d/S910spawner stop
Linux /etc/init.d/spawner stop
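In a heterogeneous environment, the table above can be folded into a small dispatch script. The sketch below hard-codes the platform name for illustration; in practice you would use "$(uname -s)", and the script paths are taken directly from the table.

```shell
# Pick the platform-specific spawner script (os hard-coded for illustration).
os=Linux   # in practice: os=$(uname -s)
case "$os" in
  HP-UX)  script=/sbin/rc3.d/S910spawner ;;
  AIX)    script=/etc/rc.spawner ;;
  SunOS)  script=/etc/rc3.d/S910spawner ;;   # uname reports Solaris as SunOS
  Linux)  script=/etc/init.d/spawner ;;
esac
echo "$script stop"
```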
Linux, HP-UX, Solaris: Under Linux, HP-UX, and Solaris, symbolic links to the startup scripts ensure that the archive system is automatically terminated when the host is shut down or rebooted.
AIX: Under AIX, insert the line sh /etc/rc.spawner stop into the /etc/rc.shutdown script to ensure automatic termination. After a new installation of AIX, this script does not exist; the system administrator must create it.
1. Under UNIX/Linux, load the Archive Server environment first: <OT config
AS>/setup/profile
2. Check the status of the process with spawncmd status (see “Analyzing
processes with spawncmd” on page 351).
Description of parameters
{start|stop}
To start or stop the specified process.
<process>
The process you want to start or stop. The name appears in the first column of
the output generated by spawncmd status.
Important
You cannot simply restart a process if it was stopped, regardless of the
reason. This is especially true for Document Service, since its processes must
be started in a defined sequence. If a Document Service process was stopped,
it is best to stop all the processes and then restart them in the defined
sequence. Inconsistencies can also occur when you start and stop the monitor
program or the Document Pipelines this way.
No maintenance mode
No restrictions to access the server.
3. Click OK.
Note: The following commands and paths for log files are not valid for
installations in cluster environments.
Note: The Spawner must be running on the computer for these commands to
take effect.
Command: Under UNIX/Linux, load the Archive Server environment first: <OT config AS>/setup/profile. In all environments, open a command line.
• exit
• reread
• restart <service>
• start <service>
• stop <service>
• startall
• status
• stopall
spawncmd status: The following table briefly describes some processes. Enter spawncmd status to get the current status.
Process Description
bksrvr Backup server process
Clnt_dp Client to monitor the Document Pipelines
Clnt_ds Client to monitor the Document Service
dp Document Pipelines
ixmonSvc Monitor server process
jbd STORM daemon
notifSrvr Notification server process
timestamp Timestamp Server
doctods, docrm, ... Various DocTools
• R means the process is running. All processes should have this status with the
exception of chkw (checkWorms), stockist and dsstockist; and under
Windows, additionally db.
• T means the process was terminated. This is the normal status of the
processes chkw (check worms), stockist, and dsstockist; and under
Windows, additionally db. If any other process has the status T, it indicates a
possible problem.
The processes chkw and db are validation processes; stockist and dsstockist are initializing processes. They are terminated automatically as soon as they have finished their tasks.
• S means the Spawner waits for the process to synchronize.
• Process ID, start and stop time.
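A quick check for unexpected "T" states can be scripted against saved spawncmd status output. The sketch below uses illustrative sample text with a process name in the first column and the status letter in the second; the real output format may differ, so adjust the field numbers accordingly.

```shell
# Illustrative sample of saved "spawncmd status" output (name, status).
status_sample='bksrvr R
chkw T
stockist T
notifSrvr T
dp R'
# Print processes in state T that are not expected to be terminated
# (chkw, stockist, dsstockist; and db under Windows).
printf '%s\n' "$status_sample" |
  awk '$2 == "T" && $1 !~ /^(chkw|stockist|dsstockist|db)$/ { print $1 }'
```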
You can find information about the DocTools in Document Pipeline Info. This
interface allows you to start and stop single DocTools and to resubmit documents
for processing.
Note: The system might write several log files for a single component, or
several components are affected by a problem. To make sure you have the
most recent log files, sort them by the date.
Log file analysis: When analyzing log files, consider the following:
• The message class, that is the error type, is shown at the beginning of a log entry.
• The latest messages are at the end of the file.
Note: In jbd.log, old messages are overwritten if the file size limit is
reached. In this case, check the date and time to find the latest messages.
• Messages with identical time label normally belong to the same incident.
• The final error message denotes which action has failed. The messages before it often indicate the reason for the failure.
• A system component can fail due to a previous failure of another component.
Check all log files that have been changed at the same or similar time. The time
labels of the messages help you to track the causal relationship.
The logging of the Storage Manager differs from the logging of other archive
components. To configure the STORM log levels, see OpenText Archive Center -
STORM Configuration Guide (AR-IST).
3. In the result pane, expand Logging. To change the log level for a certain
component, edit the configuration variable for the corresponding component in
the lower part of the result pane.
Permanent log levels: The following incidents are always written to the log files, and usually also to the Event Viewer or syslog. You cannot switch off the corresponding log levels.
• Fatal errors indicate fatal application errors that mostly lead to server crashes
(message type FTL).
• Important errors (message type IMP).
• Security errors indicate security violations such as invalid signatures (message
type SEC).
• Errors indicate serious application errors (message type ERR).
• Warnings indicate potential problem causes (message type WRN).
Log levels for troubleshooting: The following log levels are relevant for troubleshooting. You can change them in the Server Configuration; see “Setting log levels” on page 354.
Important
Higher log levels can generate a large amount of data and even can slow
down the archive system. Reset the log levels to the default values as soon as
you have solved the problem. Delete the log files only after you have
stopped the Spawner.
Time setting: In addition to the log levels, you can define the time label in the log file for each component. Normally, the time is given as hours:minutes:seconds. If you select Log using relative time, the time elapsed between one log entry and the next is given in milliseconds instead of the date, in addition to the normal time label. This is used for debugging and fine-tuning.
This is a reference of all parameters (also called variables) that are relevant for the
administration of
• Archive Server
• Document Pipeline
• Email Cloud Archiving (Archive Center scenarios)
• File Archiving (Archive Center scenarios)
• Archive Monitoring Server
• Archive Cache Server
Notes
• Parameters that are listed by the administration client, but are not described
in this reference, are provided for service purposes only and should not be
modified.
• The configuration parameter documentation uses a modular approach.
Therefore, the order of the documented components does not appear in an
alphabetical order like in the dialogs, but is grouped by certain functional
aspects.
For the individual components and building blocks described in this documentation
module, the reference lists all relevant configuration parameters and usually
provides the following information for each of them:
• Storage location: The file where the parameter is stored. This information is
for your reference; note that you should preferably access the configuration
parameters via the administration client to ensure that your settings are
consistent. See “Configuration Files” on page 360 for details.
• Variable name: The name of the parameter
• Description: The meaning of the parameter
• Type: Data type of the variable, often with upper and/or lower limit
• Predefined value
• Allowed value: Lists all allowed values, if there is a specific set of allowed
values. Note that an allowed value range can also be specified by the limits noted
with the Type information (see above).
• Protection status: Some variables are read-only for the administration client,
i.e. they can be displayed there, but cannot be changed. This is specified via the
Therefore, the references for the related parameters specify this storage location at the beginning of each page; it applies to all parameters listed on that page.
The path specified in the storage location refers to the following variables:
ECM_ARCHIVE_SERVER_CONF
Installation folder of Archive Server; the folder used on your system is listed in
the file C:\ProgramData\Open Text\conf_dirs\10AS.conf (Windows) or /
etc/opentext/conf_dirs/10AS.conf (UNIX).
ECM_DOCUMENT_PIPELINE_CONF
Installation folder of Document Pipelines; the folder used on your system is
listed in the file C:\ProgramData\Open Text\conf_dirs\20DP.conf
(Windows) or /etc/opentext/conf_dirs/20DP.conf (UNIX).
ECM_MONITOR_SERVER_CONF
Installation folder of Monitor Server; the folder used on your system is listed in
the file C:\ProgramData\Open Text\conf_dirs\80MONS.conf (Windows) or /
etc/opentext/conf_dirs/80MONS.conf (UNIX).
ECM_CACHE_SERVER_CONF
Installation folder of Cache Server; the folder used on your system is listed in the
file C:\ProgramData\Open Text\conf_dirs\30AS.conf (Windows) or /etc/
opentext/conf_dirs/30AS.conf (UNIX).
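The resolution of these placeholders can be sketched in shell. The assumption below that the conf_dirs file simply contains the configuration folder path is illustrative, and a scratch file stands in for /etc/opentext/conf_dirs/10AS.conf; check the actual file on your system.

```shell
# Scratch file stands in for /etc/opentext/conf_dirs/10AS.conf; the
# single-path file format is an assumption for illustration.
confdirs=./10AS.conf.example
printf '/opt/opentext/archive/config\n' > "$confdirs"
ECM_ARCHIVE_SERVER_CONF=$(cat "$confdirs")
echo "$ECM_ARCHIVE_SERVER_CONF"
```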
42.2 Priorities
Although some parameters can be defined in more than one place, the parameter
with the highest priority will have precedence over the same parameter with a lower
priority. The priorities are listed here.
Second priority:
The Document Service (DS) configuration parameters.
Third priority:
The COMMON configuration parameters.
The listed default values are the values included in the program code. Some of these values are not set in the setup files during the installation process. The default values are used if configuration parameters are missing or have no value in the registry or in the setup files.
Installation type
• Read-only variable
• Variable name: INSTALL_TYPE
• Description: Type of installation/configuration: “INSTALL” or “UPGRADE”
Installed version
• Read-only variable
• Variable name: INSTALL_VERS
• Description: Version as found in the version.txt file of the corresponding
package.
Archive Server
• http ("http")
• https ("https")
43.1.1.1 SYS_CLEANUP_PROTOCOL
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
43.1.1.2 Local_backup
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
43.1.1.3 Delete_Empty_Volumes
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
43.1.1.4 SYS_EXPIRE_ALERTS
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
43.1.1.5 SYS_CLEANUP_ADMAUDIT
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
43.1.2 Buffers
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
43.1.3 Archives
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
43.1.3.1 Security
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
43.1.3.2 Settings
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
• @AD=y ("on")
• @AD=n ("off")
Blobs (ADMS_ARCH_BLOBS)
• @B=y ("on")
• @B=n ("off")
Compression (ADMS_ARCH_CMP)
• Variable name: AS.ADMS.ADMS_ARCH_CMP
• Description:
This variable specifies the default value for the archive property “Compression” assigned to newly created archives. This setting can be changed later in the archive properties.
• Type: Enum
• Predefined Value: @Cmp=y
• Allowed Value:
• @Cmp=y ("on")
• @Cmp=n ("off")
Encryption (ADMS_ARCH_ENC)
• Variable name: AS.ADMS.ADMS_ARCH_ENC
• Description:
This variable specifies the default value for the archive property “Encryption” assigned to newly created archives. This setting can be changed later in the archive properties.
• Type: Enum
• Predefined Value: @E=n
• Allowed Value:
• @E=y ("on")
• @E=n ("off")
Hold (ADMS_ARCH_HOLD)
• Variable name: AS.ADMS.ADMS_ARCH_HOLD
• Description:
This variable specifies the default value for the archive property “Hold”
assigned to newly created archives.
• Type: Enum
• Predefined Value: @HLD=n
• Allowed Value:
• @HLD=y ("on")
• @HLD=n ("off")
• @SI=n ("off")
• @TSV=s ("Strict")
• @TSV=r ("Relaxed")
• @TSV=n ("No verification")
43.1.3.3 Retention
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
This variable specifies the default “retention mode” for newly created archives.
This setting can be changed later in the archive properties.
• Type: Enum
• Predefined Value: @Mode=ncmpl
• Allowed Value:
• @Mode=cmpl ("Compliance")
• @Mode=ncmpl ("Noncompliance")
• @Mode= ("None")
43.1.4 Pools
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
• on
• off
• on
• off
• on
• off
• http ("http")
• https ("https")
43.1.6 Certificates
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
43.1.7 Notifications
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
This value must be equal to the port of the application server which runs on
the same machine.
• Type: Integer (min: 0, max: 65535)
• Predefined Value: 8080
43.1.9 Directories
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\ADMS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ADMS.
Setup
• on
• off
External Frontend Plain Text Port to contact Archive Server (e.g. Loadbalancer)
(EXTERNAL_FRONTEND_HOST_PORT)
This setting specifies the external plain-text port of the frontend address, for example the load balancer address that forwards requests to Archive Server.
• Type: String
• Predefined Value: on
• Allowed Value:
• on
• off
• on
• off
• on
• off
List of host names that are allowed to send requests without user identification (without a cookie) if USER_COOKIE_INTEGRATION is set to 1 or 2. These names may be added or changed to reflect the customer environment.
• Type: Structure, consisting of subvariables - see below for details
• Sub variables:
IP Address
Max. size of a HDSK volume for which full backups are started
(HDSK_MAX_FULL_BKUP_SIZE)
43.2.5.1 Compression
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\DS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/DS.
Setup
43.2.5.2 Blobs
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\DS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/DS.
Setup
43.2.5.3 Encryption
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\DS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/DS.
Setup
• Type: Enum
• Predefined Value: SHA256
• Allowed Value:
• SHA1 ("SHA-1 (160 bit, deprecated)")
• RMD160 ("RIPEMD160 (160 bit)")
• MD5 ("MD5 (128 bit)")
• SHA256 ("SHA256 (256 bit)")
• SHA512 ("SHA512 (512 bit)")
• Description:
This is the TCP port number of the time stamp server. Popular port numbers
are 318 and 32001.
• Type: Integer (min: 0, max: 65535)
• Predefined Value: 32001
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\TstpHttp.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/
TstpHttp.Setup
Header field
• Variable name: headerfield
• Type: String
• Predefined Value: headerfield=Host
• Predefined Value: headerfield=Content-Type
• Predefined Value: headerfield=Content-Length
• Type: Enum
• Predefined Value: RMD160
• Allowed Value:
• pkcs7 ("Pkcs#7")
• ietf ("RFC 3161 (IETF)")
• on
• off
• on
• off
• on
• off
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\SiaType.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/
SiaType.Setup
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\SiaName.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/
SiaName.Setup
List of component names that are NOT stored using SIA (SIA_NAMES)
• Variable name: AS.SiaName.SIA_NAMES
• Type: Structure, consisting of subvariables - see below for details
• Sub variables:
Component name
• Variable name: compname
• Type: String
• Predefined Value: compname=REFERENCES
• Predefined Value: compname=REFERENCES2
• Predefined Value: compname=REFERENCES3
• Predefined Value: compname=INFO.TXT
• Predefined Value: compname=DATA.XML
• Predefined Value: compname=META_DOCUMENT
• Predefined Value: compname=META_DOCUMENT_INDEX
43.2.6 Directories
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\DS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/DS.
Setup
Intended size of ISO image for HSM systems, e.g. EMC (EMC_SIZE)
Min. size of blank volumes (MB, used e.g. for EMC) (CDMINBLANK)
This specifies the minimum size of the image to be copied by dsCD to an ISO
image, in percent of the capacity of the medium. If the available data is less
than this size, dsCD terminates with exit code 2, without burning the image.
• Type: Integer
• Predefined Value: 70
• off
• on
• on
• off
• on
• off
Temp directory for compressed files, used by dsWorm, dsHdsk and dsGs
(COMPR_DIR)
Time (secs) after which NFS write requests time out (NFS_WRTIMO)
43.2.10 Logging
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\DS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/DS.
Setup
If 0, users are never disabled. Otherwise, users are disabled if they enter a wrong password more than DS_MAX_BAD_PASSWD times within DS_BAD_PASSWD_ELAPS seconds.
• Type: Integer
• Predefined Value: 0
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\DS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/DS.
Setup
• Type: String
• Predefined Value: CDROM,localhost,0,/views_hs
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\DS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/DS.
Setup
• 2: NO_DSADMIN
• 4: NO_HTTP
• 8: NO_RPC
• 16: NO_DELETE
• 32: NO_DELETE_NO_ERROR
• Type: Integer (min: 0, max: 255)
• Predefined Value: 8
• Type: Enum
• Allowed Value:
• Oracle ("Oracle")
• MS SQL Server ("MS SQL Server")
• SAP HANA ("SAP HANA")
• PostgreSQL ("PostgreSQL")
• Predefined Value: 5
• Description:
Time in seconds. A message that occurs several times is not mapped to a new notification until it has not occurred for <DEFAULT_NOTS_REOCC> seconds. If the class of the message is SRV, the identification key for this message is class-comp-msgno-hostname-msgtext; otherwise, the identification key is class-comp-msgno.
• Type: Integer (min: 0)
• Predefined Value: 30
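The identification key described above can be sketched as follows; all values (class, component, message number, host, text) are illustrative.

```shell
# Illustrative values for the notification de-duplication key.
class=SRV; comp=ADMS; msgno=1234; hostname=as01; msgtext='volume full'
if [ "$class" = "SRV" ]; then
  key="$class-$comp-$msgno-$hostname-$msgtext"   # SRV messages include host and text
else
  key="$class-$comp-$msgno"                      # all other classes
fi
echo "$key"
```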
Background: During startup, ADMS takes some time before it is ready to answer connection requests from the Notification Server (= notifSrvr).
• Type: Integer (min: 0)
• Predefined Value: 200
Note: This parameter is effective only if the trace level for the
component scsi is at least 1!
• Type: Enum
• Predefined Value: 0
• Allowed Value:
• 0 ("0: no logging")
• 1 ("1: logging SCSI errors")
• 2 ("2: tracing of nearly all SCSI commands")
Number of saved logging file versions after jbd restart (includes logfile of current
jbd run) (logfile.cyclicNo)
• Variable name: AS.STORM.logfile.cyclicNo
• Description:
Specifies the number of log files to create. For every start of JBD a new log file is created; old log files are renamed from *.log to .000, .001, and so on. This number also applies to the trace file, the lastwords file, and the error file. The minimum value is 0, which means no cyclic log files: if a log file already exists, messages are appended; otherwise it is created in the common way (a value below 0 is treated as 1). A value of 1 creates exactly one log file (.log), which is erased and recreated at every start of JBD.
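The cyclic renaming can be illustrated with plain file operations; the directory and file names below are illustrative, not the real log location.

```shell
# Simulate one JBD restart: jbd.log shifts to .000, the old .000 to .001.
dir=./jbdlogs.example
mkdir -p "$dir"
touch "$dir/jbd.000" "$dir/jbd.log"
mv "$dir/jbd.000" "$dir/jbd.001"   # oldest file shifts up one suffix
mv "$dir/jbd.log" "$dir/jbd.000"   # previous run's log becomes .000
ls "$dir"
```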
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
[ID]
• Variable name: ID
• Type: String
Trace level
• Variable name: trace
• Type: Integer (min: 0, max: 4)
• Predefined Value: 1
• Predefined Value:
• ID=ixwuser
• trace=1
• lwords=4
• Predefined Value:
• ID=ixwmedia
• trace=1
• lwords=4
• Predefined Value:
• ID=ixwinout
• trace=1
• lwords=4
• Predefined Value:
• ID=ixwcache
• trace=1
• lwords=4
• Predefined Value:
• ID=cache
• trace=1
• lwords=4
• Predefined Value:
• ID=glow
• trace=1
• lwords=4
• Predefined Value:
• ID=io
• trace=1
• lwords=4
• Predefined Value:
• ID=hal
• trace=1
• lwords=4
• Predefined Value:
• ID=serial
• trace=1
• lwords=4
• Predefined Value:
• ID=scsi
• trace=1
• lwords=4
• Predefined Value:
• ID=doscsi
• trace=1
• lwords=4
• Predefined Value:
• ID=journal
• trace=1
• lwords=4
• Predefined Value:
• ID=voldb
• trace=1
• lwords=4
• Predefined Value:
• ID=file
• trace=1
• lwords=4
• Predefined Value:
• ID=config
• trace=1
• lwords=4
• Predefined Value:
• ID=utils
• trace=1
• lwords=4
• Predefined Value:
• ID=backup
• trace=1
• lwords=4
• Predefined Value:
• ID=rfs
• trace=1
• lwords=4
• Predefined Value:
• ID=jbd
• trace=1
• lwords=4
• Predefined Value:
• ID=devctl
• trace=1
• lwords=4
• Predefined Value:
• ID=fsifs
• trace=1
• lwords=4
• Predefined Value:
• ID=dyn
• trace=1
• lwords=4
• Predefined Value:
• ID=nots
• trace=0
• lwords=4
• Predefined Value:
• ID=watch
• trace=0
• lwords=4
• 0
• 1 ("Send to NOTS disabled")
• Predefined Value: 0
• Allowed Value:
• 0 ("No preserve")
• 1 ("Preserve")
Note: All sub variables of the device variable with a name starting with devfile_*
are not stored at the storage location specified above, but in a file according to the
following pattern:
%IXOS_ARCHIVE_ROOT%\config\storm\devices\<device>.dev (Windows) or
$IXOS_ARCHIVE_ROOT/config/storm/devices/<device>.dev (UNIX)
where <device> means the string specified in the device name sub variable.
Devices (devices.list)
• Variable name: AS.STORM.devices.list
• Type: Structure, consisting of subvariables - see below for details
• Sub variables:
ID
• Variable name: ID
• Description:
ID
• Type: String
ID of (cluster) node
• Variable name: ID
• Type: String
Alternate Path
• automatic
• manual
Comment
Comment Text
Device Name
Robot
Drives
Format
• Variable name: format
Raw access
• Variable name: raw
• Protection: Read-only variable
• Type: Enum
• Allowed Value:
• 0
• 1 ("enabled")
[HOST]
• Variable name: HOST
• Type: String
• Predefined Value: HOST=localhost
• Predefined Value: HOST=<empty>
• Predefined Value: HOST=<empty>
ID
• Variable name: ID
• Description:
ID
• Type: String
Limit of space usage (in MB) for backup (space must be free in defined
path)
• Variable name: size
• Description:
The maximum allowed size of the file in megabytes (MB).
• Type: Integer (min: 100, max: 100000)
• Predefined Value: 1024
• Predefined Value:
• ID=dest1
• path=<empty>
• size=1024
Accept also non-ISO9660 formats, e.g. directories larger than 64 KB (ixworm.isoFinNonStandard)
• 0
• 1 ("enabled")
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
ID
• Variable name: ID
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
Accept also non-ISO9660 formats, e.g. directories larger than 64 KB (ixworm.isoFinNonStandard)
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
• Predefined Value:
• ID=chunk1
• path=REPLACE_WITH_HASH_NAME_PATH/hashname
• size=35
• mode=mapped
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
• Predefined Value:
• ID=chunk1
• path=REPLACE_WITH_HASH_FILE_PATH/hashfile
• size=35
• mode=mapped
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
Accept also non-ISO9660 formats, e.g. directories larger than 64 KB (ixworm.isoFinNonStandard)
• Description:
If the WORM file system is getting full, there will be more and more rehashes for a new entry. This value sets the limit after which a warning is generated, but only if the number of rehashes is greater than rehashWarning. (The log level at which the warning message is output is MAX(9 + (value of this parameter - <number of rehashes>), 0).)
• Type: Integer (min: 1, max: 100)
• Predefined Value: 40
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
• Predefined Value:
• ID=chunk1
• path=REPLACE_WITH_HASH_NAME_PATH/hashname
• size=200
• mode=mapped
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
• Predefined Value:
• ID=chunk1
• path=REPLACE_WITH_HASH_FILE_PATH/hashfile
• size=200
• mode=mapped
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
• Predefined Value:
• ID=chunk1
• path=REPLACE_WITH_INODE_PATH/inodes1
• size=600
• mode=file
• Predefined Value:
• ID=chunk2
• path=REPLACE_WITH_INODE_PATH/inodes2
• size=600
• mode=file
• Predefined Value:
• ID=chunk3
• path=REPLACE_WITH_INODE_PATH/inodes3
• size=600
• mode=file
• Predefined Value:
• ID=chunk4
• path=REPLACE_WITH_INODE_PATH/inodes4
• size=600
• mode=file
• Predefined Value:
• ID=chunk5
• path=REPLACE_WITH_INODE_PATH/inodes5
• size=600
• mode=file
Accept also non-ISO9660 formats (e.g. more than 64 KB directories) (ixworm.
isoFinNonStandard)
• Description:
Directory path for the temporary files stored on disk while the files are written
by the clients. The path should never be on a network-attached disk. Note: There
must be enough space to hold (worst case) “maxOpenDatafiles” × <max.
fileSize of one WORM file>.
• Type: Path
• Predefined Value: <empty>
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
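The worst-case space requirement from the note above amounts to a simple product; the helper name is hypothetical, for illustration only:

```python
def required_temp_space_bytes(max_open_datafiles: int,
                              max_worm_file_bytes: int) -> int:
    # Worst case from the note above: every concurrently open data file
    # grows to the maximum size of one WORM file.
    return max_open_datafiles * max_worm_file_bytes
```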
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• mode=file
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
• Predefined Value:
• ID=chunk1
• path=REPLACE_WITH_HASH_FILE_PATH/hashfile
• size=700
• mode=file
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\storm
\server.cfg
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/storm/server.
cfg
ID
• Variable name: ID
• Protection: Read-only variable
• Description:
ID
• Type: String
• file ("file")
• mapped ("mapped")
• Predefined Value:
• ID=chunk1
• path=REPLACE_WITH_INODE_PATH/inodes1
• size=800
• mode=file
• Predefined Value:
• ID=chunk2
• path=REPLACE_WITH_INODE_PATH/inodes2
• size=800
• mode=file
• Predefined Value:
• ID=chunk3
• path=REPLACE_WITH_INODE_PATH/inodes3
• size=800
• mode=file
• Predefined Value:
• ID=chunk4
• path=REPLACE_WITH_INODE_PATH/inodes4
• size=800
• mode=file
• Predefined Value:
• ID=chunk5
• path=REPLACE_WITH_INODE_PATH/inodes5
• size=800
• mode=file
• Predefined Value:
• ID=chunk6
• path=REPLACE_WITH_INODE_PATH/inodes6
• size=800
• mode=file
• Predefined Value:
• ID=chunk7
• path=REPLACE_WITH_INODE_PATH/inodes7
• size=800
• mode=file
• Predefined Value:
• ID=chunk8
• path=REPLACE_WITH_INODE_PATH/inodes8
• size=800
• mode=file
• Predefined Value:
• ID=chunk9
• path=REPLACE_WITH_INODE_PATH/inodes9
• size=800
• mode=file
• Predefined Value:
• ID=chunk10
• path=REPLACE_WITH_INODE_PATH/inodes10
• size=800
• mode=file
• Predefined Value:
• ID=chunk11
• path=REPLACE_WITH_INODE_PATH/inodes11
• size=800
• mode=file
• Predefined Value:
• ID=chunk12
• path=REPLACE_WITH_INODE_PATH/inodes12
• size=800
• mode=file
• Predefined Value:
• ID=chunk13
• path=REPLACE_WITH_INODE_PATH/inodes13
• size=800
• mode=file
• Allowed Value:
• none ("same as in TS request")
• MD5 ("MD5")
• SHA1 ("SHA1")
• RMD160 ("RipeMD160")
• SHA256 ("SHA256")
• SHA512 ("SHA512")
• Description:
Because the internal clock of a computer has limited precision, this setting
provides a way to set a timeout period in hours after which the service
refuses to timestamp incoming requests. The timeout counter is reset every
time you transmit the signing key. A timeout setting of “0” disables this
feature and lets the service run without a time limit.
• Type: Integer (min: 0)
• Predefined Value: 168
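The timeout behavior described above can be sketched as follows; the class and method names are hypothetical illustrations, only the semantics (reset on key transmission, 0 disables the feature, predefined value 168 hours) come from the setting description:

```python
import time


class TimestampService:
    """Sketch of the signing-key timeout semantics described above."""

    def __init__(self, timeout_hours: int):
        self.timeout_hours = timeout_hours
        self.key_transmitted_at = time.time()

    def transmit_signing_key(self) -> None:
        # Transmitting the signing key resets the timeout counter.
        self.key_transmitted_at = time.time()

    def accepts_requests(self) -> bool:
        if self.timeout_hours == 0:
            # A setting of 0 disables the feature entirely.
            return True
        elapsed = time.time() - self.key_transmitted_at
        return elapsed < self.timeout_hours * 3600
```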
• Allowed Value:
• on
• off
• on
• off
• on
• off
• on
• off
• on
• off
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
• Description:
If true, the “offline” state is used instead of the “onTape” state when a request
accesses a component that is stored on a tape. This applies to “old” only; the
default is “false”. “Internal” setting only, do not modify unless otherwise
instructed!
• Type: Flag
• Predefined Value: false
• Allowed Value:
• true
• false
Time interval during which a pending update request may be delayed further
(JDS_ADM_REFRESH_MAXIMUM_DELAY)
• Variable name: AS.AS.JDS_ADM_REFRESH_MAXIMUM_DELAY
• Protection: Read-only variable
• Description:
After “JDS_ADM_REFRESH_MAXIMUM_DELAY” seconds a pending
update request will not be delayed further by successive update requests.
• Type: Integer (min: 0, max: 10)
• Predefined Value: 8
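The delay rule can be sketched as follows; the function name and the 1-second per-request delay are hypothetical assumptions, only the 8-second maximum comes from the setting above:

```python
def schedule_refresh(first_request_at: float, latest_request_at: float,
                     per_request_delay: float = 1.0,
                     maximum_delay: float = 8.0) -> float:
    # Each new update request postpones the pending refresh, but never
    # beyond JDS_ADM_REFRESH_MAXIMUM_DELAY seconds after the first
    # pending request arrived.
    return min(latest_request_at + per_request_delay,
               first_request_at + maximum_delay)
```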
• true
• false
• true
• false
• on
• off
• Description:
Case sensitivity flag for original email message names (default=on).
“Internal” setting only, do not modify unless otherwise instructed!
• Type: String
• on
• off
• Type: String
• Predefined Value: com.opentext.ecm.lea.filter.email.composer.
EmailComposerFilter
• Description:
Attachments in eml files smaller than this parameter will not be extracted/
decomposed!
• Type: Integer
• Predefined Value: 1024
43.9.4.1 Database
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.
Setup
43.9.4.2 Command
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.
Setup
43.9.4.3 Audit
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.
Setup
• true
• false
43.9.4.4 OTDSconnection
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.
Setup
Name of the OTDS query field: domain name (user or group) (OTDS_QUERY_NAME)
• Variable name: AS.AS.OTDS_QUERY_NAME
• Protection: Read-only variable
• Description:
This setting specifies the name of the field containing the Active Directory
attribute for the domain name, user or group.
• Type: String
• Predefined Value: oTExternalID4
This setting specifies the name of the field containing the Active Directory
attribute 'objectSid'.
• Type: String
• Predefined Value: oTExternalSID
• lazy ("lazy")
• strict ("strict")
43.9.4.5 AllowedUsers
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.
Setup
admuser1 (ADS_AllowedUsers_admuser1)
• Variable name: AS.AS.ADS_AllowedUsers_admuser1
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: <empty>
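A value for these settings can be checked against the “<UMS>/<USERGROUP>/<NAME>” pattern; this validator is a hypothetical illustration, not part of the product API:

```python
import re

# <UMS> is OTDS or DS, <USERGROUP> is USER or GROUP,
# <NAME> is the concrete user or group name.
ALLOWED_USER_RE = re.compile(r"^(OTDS|DS)/(USER|GROUP)/.+$")


def is_valid_allowed_user(value: str) -> bool:
    return bool(ALLOWED_USER_RE.match(value))
```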
admuser2 (ADS_AllowedUsers_admuser2)
• Variable name: AS.AS.ADS_AllowedUsers_admuser2
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: <empty>
admuser3 (ADS_AllowedUsers_admuser3)
• Variable name: AS.AS.ADS_AllowedUsers_admuser3
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: <empty>
admuser4 (ADS_AllowedUsers_admuser4)
• Variable name: AS.AS.ADS_AllowedUsers_admuser4
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: <empty>
admuser5 (ADS_AllowedUsers_admuser5)
• Variable name: AS.AS.ADS_AllowedUsers_admuser5
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: <empty>
aradmins (ADS_AllowedUsers_aradmins)
• Variable name: AS.AS.ADS_AllowedUsers_aradmins
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: DS/GROUP/aradmins
dsadmin (ADS_AllowedUsers_dsadmin)
• Variable name: AS.AS.ADS_AllowedUsers_dsadmin
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
ldadmins (ADS_AllowedUsers_ldadmins)
• Variable name: AS.AS.ADS_AllowedUsers_ldadmins
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: DS/GROUP/ldadmins
ldagents (ADS_AllowedUsers_ldagents)
• Variable name: AS.AS.ADS_AllowedUsers_ldagents
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: DS/GROUP/ldagents
otadsadmins (ADS_AllowedUsers_otadsadmins)
• Variable name: AS.AS.ADS_AllowedUsers_otadsadmins
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: OTDS/GROUP/otadsadmins@otds.admin
otasadmins (ADS_AllowedUsers_otasadmins)
• Variable name: AS.AS.ADS_AllowedUsers_otasadmins
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: OTDS/GROUP/otasadmins@otds.admin
otldadmins (ADS_AllowedUsers_otldadmins)
• Variable name: AS.AS.ADS_AllowedUsers_otldadmins
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: OTDS/GROUP/otldadmins@otds.admin
otldagents (ADS_AllowedUsers_otldagents)
• Variable name: AS.AS.ADS_AllowedUsers_otldagents
• Description:
This setting specifies a user or group. The value of this variable must match
the pattern “<UMS>/<USERGROUP>/<NAME>” with
• UMS can be OTDS or DS
• USERGROUP can be USER or GROUP
• NAME is the concrete user or group name
• Type: String
• Predefined Value: OTDS/GROUP/otldagents@otds.admin
43.9.4.6 Policy
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.
Setup
43.9.4.6.1 Assignments
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.
Setup
43.9.4.7 Reports
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.
Setup
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.
Setup
43.9.4.8 SolutionRegistry
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.
Setup
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.
Setup
• true
• false
This setting specifies the directory where all elastic index and config data is
stored. If this variable is empty, the default directory $ECM_VAR_DIR/es is
used.
• Type: Path
• Predefined Value: <empty>
43.9.8 Logging
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.
Setup
Administration (LOG_ADMIN)
• Variable name: AS.AS.LOG_ADMIN
• Description:
This setting specifies the log level for the “Administration” category. See also
key: “LOG_ADMIN_GROUP”. There are 4 distinct settings, each adding
additional logging from top to bottom.
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
• Info ("Errors, Warnings and Info")
• Debug ("Errors, Warnings, Info and Debug")
• Trace ("Errors, Warnings, Info, Debug and Trace")
• Allowed Value:
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\AS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/AS.
Setup
Administration (LOG_ADMIN_GROUP)
• Variable name: AS.AS.LOG_ADMIN_GROUP
• Protection: Read-only variable
• Description:
This setting specifies its own log category for “Administration” by
listing all related Java packages. “Internal” setting only, do not modify
unless otherwise instructed!
• Type: String
• Predefined Value: com.opentext.ecm.admin,com.opentext.ecm.api,com.
opentext.ecm.archiveadmin,com.opentext.ecm.container,com.
opentext.ecm.exceptions,com.opentext.ecm.rcs.script,
com.opentext.ecm.services.administration,com.opentext.ecm.
services.adminroot,com.opentext.ecm.services.archiveadmin,com.
opentext.ecm.services.asm,com.opentext.ecm.services.
leaauthentication,com.opentext.ecm.services.leaauthorization,
com.opentext.ecm.services.leanotifications,com.opentext.ecm.
services.leausergroup,com.opentext.ecm.api.srws.impl,
com.opentext.ecm.asm.webclient,com.opentext.ecm.asm.bizadmin,
com.opentext.ecm.asm.bizconfig,com.opentext.ecm.asm.bizutil,
com.opentext.ecm.services.tenantmgmt,com.opentext.ecm.as.cmis.
client
This setting specifies its own log category for the “Document Service” by
listing all related Java packages. “Internal” setting only, do not modify
unless otherwise instructed!
• Type: String
• Predefined Value: com.opentext.ecm.asm,com.opentext.ecm.mdf
• true ("enable")
• false ("disable")
OTDS attribute for history of CMIS repositories of user's personal email archives
(CMIS_REPOSITORY_HISTORY_OTDS_ATTR)
• Variable name: AS.AS.CMIS_REPOSITORY_HISTORY_OTDS_ATTR
• Description:
OTDS attribute to store the history of CMIS repository IDs of the user's
personal email archives.
• Type: String
• Predefined Value: oTACEmailCmisRepositoryIdHistory
• Description:
The number of groups that can be cached.
• Type: Integer (min: 1)
• Predefined Value: 50000
• off
Time after which an export expires and can be deleted by cleanup task
(BIZ_EXPORT_EXPIRATION)
• Variable name: AS.AS.BIZ_EXPORT_EXPIRATION
• Description:
This setting specifies the time after which an export expires and can be
deleted by the cleanup task.
• Type: Integer (min: 60, max: 10080)
• Predefined Value: 1440
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
• Description:
This setting controls the default value for the compression flag in the
property dialogs of an archive.
• on: Sets the default value to "On".
• off: Sets the default value to "Off"
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: Day
• Allowed Value:
• Day ("Day")
• Month ("Month")
• Year ("Year")
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
This setting controls the default value for the "Timestamps" flag in the
property dialogs of an archive.
• on: Sets the default value to "On".
• off: Sets the default value to "Off"
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
This setting controls the appearance and behaviour of the "Encryption" flag
in the property dialogs of an archive.
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
This setting affects only the Business Administration and will only be used if
the configuration variable BIZ_ARCHIVE_DEFAULTS_OVERWRITE is set
to "on".
• Type: Enum
• Predefined Value: supported
• Allowed Value:
• supported ("supported")
• readonly ("readonly")
• Type: Flag
• Allowed Value:
• true
• false
• true
• false
Number of check phases before a full update is executed. (frontend server only)
(REINIT_AFTER_N_PERIODS)
• Variable name: AS.ICS.REINIT_AFTER_N_PERIODS
• Protection: Read-only variable
• Description:
This setting indicates how many check phases must pass before a full re-
initialization is triggered. See also key: “LAST_REINIT_PERIOD”. Setting
this key in an “Archive Server” scenario has NO effect. “Internal” setting
only, do not modify unless otherwise instructed!
• Type: Integer (min: 2)
• Predefined Value: 30
This setting specifies the SSL port number of the backend server, depending
on the scenario. It is set by the installation.
• For the Archive Server scenario, this port is not evaluated and should point
to the local SSL port.
• For the Archive Cache Server scenario, this port should point to the SSL
port of the backend Archive Server.
• true
• false
• true
• false
Time period after which an offline backend server will be probed again
(ARCHIVEOFFLINETIME)
Time to pass until the reinitializing thread checks for administrative changes
(frontend server only) (LAST_REINIT_PERIOD)
• Type: String
• Allowed Value:
• true
• false
• Type: Flag
• Allowed Value:
• true
• false
43.12.4 Logging
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ICS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ICS.
Setup
Authentication (LOG_AUTH)
• Variable name: AS.ICS.LOG_AUTH
• Protection: Read-only variable
• Description:
This setting specifies the log level for the “Authentication” category. See also
key: “LOG_AUTH_GROUP”. There are 4 distinct settings, each adding
additional logging from top to bottom.
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
Legacy (LOG_LEGACY)
• Variable name: AS.ICS.LOG_LEGACY
• Description:
This setting specifies the log level for the “Legacy Code” category. See also key:
“LOG_LEGACY_GROUP”. There are 4 distinct settings, each adding additional
logging from top to bottom.
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
• Info ("Errors, Warnings and Info")
• Debug ("Errors, Warnings, Info and Debug")
• Trace ("Errors, Warnings, Info, Debug and Trace")
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ICS.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ICS.
Setup
Authentication (LOG_AUTH_GROUP)
This variable specifies the minimum space required to insert new documents
into the Document Pipeline directory. If the limit is reached, the Document
Pipeline rejects the registration of new documents.
• Type: Integer
• Predefined Value: 200
• Description:
This variable specifies two parameters: p1/p2.
The parameter p1 is the so-called “call timeout”. This is the overall maximum
time (in seconds) that the Document Pipeline process waits for an answer from a
DocTool it called (default: 5 sec.). After this period of time, the call is no
longer repeated and is marked as erroneous.
The parameter p2 is the “retry timeout” (in seconds), which indicates
how long the Document Pipeline process waits for an answer from a
DocTool before it repeats the call (default: 5 sec.). In case of an error, the call is
repeated as long as the “call timeout” has not yet been reached.
• Type: String
• Predefined Value: 5/5
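The p1/p2 semantics described above can be sketched as follows; the function name and the TimeoutError-based interface are hypothetical illustrations of the retry rule, not the actual Document Pipeline code:

```python
import time


def call_doctool_with_retries(call, call_timeout: float = 5.0,
                              retry_timeout: float = 5.0):
    """Retry a DocTool call per the "p1/p2" rule described above.

    call_timeout (p1): overall maximum time to wait for an answer.
    retry_timeout (p2): how long one attempt waits before repeating.
    """
    deadline = time.monotonic() + call_timeout
    while True:
        try:
            return call(timeout=retry_timeout)
        except TimeoutError:
            if time.monotonic() >= deadline:
                # Call timeout reached: the call is no longer repeated
                # and is marked as erroneous.
                raise
```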
44.1.1 Logging
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup
\COMMON.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/COMMON.
Setup
• on
• off
• on
• off
• on
• off
This variable specifies if info logging is enabled for the Document Pipeline.
• Type: Flag
• Predefined Value: off
• Allowed Value:
• on
• off
• Type: Flag
• Predefined Value: on
• Allowed Value:
• on
• off
44.2.1 Logging
Storage location:
- Windows: Configuration file: <ECM_DOCUMENT_PIPELINE_CONF>\config\setup
\DPRI.Setup
- UNIX: Configuration file: <ECM_DOCUMENT_PIPELINE_CONF>/config/setup/
DPRI.Setup
• Description:
This variable specifies the communication protocol with the Rendition
Server.
• Type: Enum
• Predefined Value: http
• Allowed Value:
• http
• https
• Description:
This variable specifies the port number on which the timestamp server
listens.
• Type: Integer (min: 0, max: 65535)
• Description:
This variable specifies the name of a protocol file for COLD pipelines. If a
name is specified, the path name of the data files in the external directory is
written into the file.
• Type: Path
• Description:
A regular expression pattern specifying all files that must be copied from the
external directory (EXT_DIR) to the document directory (DPDIR).
• Type: String
• Predefined Value: .*
This variable specifies whether the files contained in the external directory
should be removed after the document is processed (remove: on). If the
directory itself should be removed as well, use “with directory”.
• Type: Enum
• Predefined Value: off
• Allowed Value:
• on
• off
• with directory
• Type: Path
Enabled (enabled)
• Variable name: ECA.ECA.enabled
• Description:
Allows disabling the start of all email archiving services, for example for
maintenance reasons.
• Type: Flag
• Predefined Value: true
• Allowed Value:
• true
• false
45.1.1 Configuration
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ECA.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ECA.
Setup
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ECA.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ECA.
Setup
• true
• false
because the 'Archive Server' trusts the 'Archive Proxy Server' using
certificates and the SSL protocol.
• Type: Flag
• Predefined Value: false
• Allowed Value:
• true
• false
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ECA.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ECA.
Setup
• Allowed Value:
• true
• false
• true
• false
• true
• false
• true
• false
45.1.1.3 Credential
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ECA.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ECA.
Setup
45.1.1.4 Dispatcher
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ECA.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ECA.
Setup
This parameter is evaluated only for Google mailbox scan. The value has to
be a scope URL for Google Apps such as Gmail or Google Drive; the default
value is https://mail.google.com.
• Type: String
• Predefined Value: https://mail.google.com
45.1.1.5.1 session
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ECA.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ECA.
Setup
• Description:
This variable specifies the timeout of the e-mail client connection in seconds
(default: 1800). The client connection is closed by the IMAP server if the
timeout expires. Note: The timeout value must always be less than the "TTL
for OTDS Ticket" in the Archive Server configuration.
• Type: Integer (min: 1)
• Predefined Value: 1800
SMTP Server Password of TLS key store and private key (configuration.
smtpserver.keystorepassword)
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ECA.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ECA.
Setup
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ECA.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ECA.
Setup
• Allowed Value:
• yes
• no
If the value is empty, the port number of the CMIS connection is used
to send messages to the notification server. A non-empty value has higher
priority and is used as the port of the notification server.
• Type: String
• Predefined Value: <empty>
Protocol (configuration.notificationsystem.notsprotocol)
• Variable name: ECA.ECA.configuration.
notificationsystem.notsprotocol
• Description:
If the value is empty, the protocol information of the CMIS connection is
used to send messages to the notification server. A non-empty value has
higher priority and is used to send messages to the notification server.
• Type: Enum
• Predefined Value: <empty>
• Allowed Value:
• https ("https")
• http ("http")
• <empty> ("Protocol of CMIS connection")
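The fallback rule shared by the notification port and protocol settings can be sketched as follows; the function and parameter names are hypothetical illustrations of the "non-empty value wins" semantics described above:

```python
def notification_endpoint(cmis_protocol: str, cmis_port: int,
                          nots_protocol: str = "",
                          nots_port: str = "") -> tuple:
    # A non-empty notification-server value has higher priority;
    # an empty value falls back to the CMIS connection setting.
    protocol = nots_protocol if nots_protocol else cmis_protocol
    port = int(nots_port) if nots_port else cmis_port
    return protocol, port
```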
45.1.1.14 Processor
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ECA.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ECA.
Setup
Count (configuration.processor.count)
• Variable name: ECA.ECA.configuration.processor.count
• Description:
The number of worker threads (= processors). The default value is 5
processors.
• Type: Integer
• Predefined Value: 5
• false
45.1.1.15 Status
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ECA.
Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ECA.
Setup
Count (configuration.worker.count)
• true
• false
and the value is 'true', then the emails are only marked. If the value is 'false',
the email items are moved to the 'Deleted' folder.
• Type: Flag
• Predefined Value: true
• Allowed Value:
• true
• false
From (configuration.workingtime.from)
Schedule (configuration.workingtime.schedule)
To (configuration.workingtime.to)
• Variable name: ECA.ECA.configuration.workingtime.to
• Description:
The 'working time' feature is evaluated only for mailbox scans (Google
or Exchange mailboxes).
• Type: String
• Predefined Value: 00:00
45.1.2 Logging
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ECA.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ECA.Setup
• Predefined Value: 9
45.1.3 Monitoring
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ECA.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ECA.Setup
Communication port for the Archive Link interface of the Archive Server
(AL_PORT)
• Variable name: FSA.FSA.AL_PORT
• Description:
This setting specifies the communication port for the Archive Link interface
of the Archive Server. Only used in the Cost-Saving scenario (Create Shortcut
option in Business Administration).
• Type: Integer (min: 1, max: 65535)
• Predefined Value: 8090
Communication protocol for the Archive Link interface of the Archive Server
(AL_PROTOCOL)
• Variable name: FSA.FSA.AL_PROTOCOL
• Description:
This setting specifies the communication protocol for the Archive Link
interface of the Archive Server. Only used in the Cost-Saving scenario
(Create Shortcut option in Business Administration).
• Type: Enum
• Predefined Value: https
• Allowed Value:
• https ("HTTPS")
• http ("HTTP")
Expire time (in minutes) for entries in FSA user cache (LRU) (FSA_USER_LRU_TTL)
• Variable name: FSA.FSA.FSA_USER_LRU_TTL
• Protection: Read-only variable
• Description:
“Internal” setting only, do not modify unless otherwise instructed!
• Type: Integer (min: 1, max: 600)
• Predefined Value: 30
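The FSA user cache described above combines LRU eviction with a per-entry time-to-live. The toy class below illustrates that combination; the class name, capacity, and API are illustrative assumptions, not the product's internal implementation.

```python
import time
from collections import OrderedDict

class TTLCache:
    """Minimal LRU cache with per-entry time-to-live (TTL in minutes),
    illustrating the expire-and-evict behavior of an FSA-style user cache."""
    def __init__(self, ttl_minutes=30, maxsize=128):
        self.ttl = ttl_minutes * 60
        self.maxsize = maxsize
        self._data = OrderedDict()            # key -> (value, inserted_at)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, inserted = item
        if time.time() - inserted > self.ttl:  # entry older than its TTL
            del self._data[key]
            return None
        self._data.move_to_end(key)            # mark as recently used
        return value

    def put(self, key, value):
        self._data[key] = (value, time.time())
        self._data.move_to_end(key)
        if len(self._data) > self.maxsize:     # evict least recently used
            self._data.popitem(last=False)
```

For example, with `maxsize=2`, inserting a third entry evicts the least recently used one, and any entry becomes invisible once its TTL has elapsed.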
Name of the directory where the shortcut icons are stored (FSA_ICON_DIR)
• Variable name: FSA.FSA.FSA_ICON_DIR
• Protection: Read-only variable
• Description:
“Internal” setting only, do not modify unless otherwise instructed!
• Type: String
• Predefined Value: _otx_fsa_icons_
This setting specifies the amount of disk space [MB] per volume which the
underlying cache administration tries to keep free. “Internal” setting
only, do not modify unless otherwise instructed!
• Type: Integer (min: 0)
External server name (in environments with multiple host names) (MY_HOST_NAME)
• Variable name: AS.ACS.MY_HOST_NAME
• Description:
When working with different networks, domains, or hostnames, the external
server name can be specified by this setting. “Internal” setting only, do not
modify unless otherwise instructed!
• Type: String
Maximum number of worker threads to retrieve content (must not be smaller than
Minimum) (CS_MAX_WTHREADS)
• Variable name: AS.ACS.CS_MAX_WTHREADS
• Protection: Read-only variable
• Description:
This setting specifies the maximum number of worker threads the ACS
internally uses to retrieve content from the backend. “Internal” setting only,
do not modify unless otherwise instructed!
• Type: Integer (min: 1, max: 255)
• Description:
This setting specifies the second possible volume to be used for write-through
caching. The volume is specified as an absolute path name.
Recommendation: use a separate path that is used only by the Archive Cache
Server, preferably located on a separate disk or partition.
• Type: Path
• Type: Enum
• Predefined Value: http
• Allowed Value:
• https ("https")
• http ("http")
Proxy ID (BIZ_PROXYID)
• Variable name: AS.ACS.BIZ_PROXYID
• Description:
This ID uniquely identifies the Proxy installation.
• Type: String
• Description:
This setting specifies the maximum size [MB] used for the corresponding
volume.
Recommendation: ensure that there is always sufficient space to be used by
this volume.
• Type: Integer (min: 20)
48.1.1 Scheduler
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ACS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ACS.Setup
48.1.2 Logging
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ACS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ACS.Setup
Administration (LOG_ADMIN)
• Variable name: AS.ACS.LOG_ADMIN
• Description:
This setting specifies the log level for the “Administration” category. See also
key “LOG_ADMIN_GROUP”. There are four distinct settings, each adding
additional logging from top to bottom.
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
• Info ("Errors, Warnings and Info")
• Debug ("Errors, Warnings, Info and Debug")
• Trace ("Errors, Warnings, Info, Debug and Trace")
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
• Info ("Errors, Warnings and Info")
• Debug ("Errors, Warnings, Info and Debug")
• Trace ("Errors, Warnings, Info, Debug and Trace")
Scheduler (LOG_SCHED)
• Variable name: AS.ACS.LOG_SCHED
• Description:
This setting specifies the log level for the “Scheduler” category. There are four
distinct settings, each adding additional logging from top to bottom.
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
• Info ("Errors, Warnings and Info")
• Debug ("Errors, Warnings, Info and Debug")
• Trace ("Errors, Warnings, Info, Debug and Trace")
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ACS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ACS.Setup
Administration (LOG_ADMIN_GROUP)
• Variable name: AS.ACS.LOG_ADMIN_GROUP
• Protection: Read-only variable
• Description:
This setting specifies a separate log category for “Administration” by
listing all related Java packages. “Internal” setting only, do not modify
unless otherwise instructed!
• Type: String
• Predefined Value: com.opentext.ecm.asm.bizadmin,com.opentext.
ecm.asm.ca,com.opentext.ecm.as.cmis.client,com.opentext.ecm.
admin,com.opentext.ecm.persistence
This setting specifies a separate log category for the “CMIS Interface” by
listing all related java packages. “Internal” setting only, do not modify unless
instructed otherwise!
• Type: String
• Predefined Value: com.opentext.ecm.as.cmis.proxy,com.opentext.ecm.
asm.cmis
Scheduler (LOG_SCHED_GROUP)
• Variable name: AS.ACS.LOG_SCHED_GROUP
• Protection: Read-only variable
• Description:
This setting specifies a separate log category for the “Scheduler” by listing all
related Java packages. “Internal” setting only, do not modify unless otherwise
instructed!
• Type: String
• Predefined Value: com.opentext.ecm.admin.schedule,com.opentext.
ecm.scheduling
Time after which an export expires and can be deleted by cleanup task
(BIZ_EXPORT_EXPIRATION)
This setting specifies the time after which an export expires and can be
deleted by the cleanup task.
• Type: Integer (min: 60, max: 10080)
• Predefined Value: 1440
type is replaced by any set default content type. See also keys
“DS_DEFAULT_CONTENTTYPE” and
“DS_REPLACE_ILLEGAL_CONTENTTYPE”. Setting this key in an “Archive
Server” scenario has NO effect. “Internal” setting only, do not modify
unless otherwise instructed!
• Type: Flag
• Allowed Value:
• true
• false
This setting specifies the default value of the URL parameter “ixUser”, which
is used by internal requests. Setting this key in an “Archive Server” scenario
has NO effect.
• Type: String
• Description:
“Internal” setting only, do not modify unless otherwise instructed!
• Type: Integer
Number of check phases before a full update is executed. (frontend server only)
(REINIT_AFTER_N_PERIODS)
• Variable name: AS.ICS.REINIT_AFTER_N_PERIODS
• Protection: Read-only variable
• Description:
This setting indicates how many check phases must pass before a full
reinitialization is triggered. See also key “LAST_REINIT_PERIOD”. Setting
this key in an “Archive Server” scenario has NO effect. “Internal” setting
only, do not modify unless otherwise instructed!
• Type: Integer (min: 2)
• Predefined Value: 30
Time period after which an offline backend server will be probed again
(ARCHIVEOFFLINETIME)
• Variable name: AS.ICS.ARCHIVEOFFLINETIME
• Protection: Read-only variable
• Description:
If a backend server is no longer available, this setting specifies the time
period [ms] before probing it again. Setting this key in an “Archive Server”
scenario has NO effect. “Internal” setting only, do not modify unless otherwise
instructed!
• Type: Integer
Time to pass before the reinitializing thread checks for administrative changes
(frontend server only) (LAST_REINIT_PERIOD)
• Variable name: AS.ICS.LAST_REINIT_PERIOD
• Protection: Read-only variable
• Description:
This setting specifies the time period [ms] passed before a background
thread checks for any administrative changes on the backend server. Setting
this key in an “Archive Server” scenario has NO effect. “Internal” setting
only, do not modify unless otherwise instructed!
• Type: Integer
• Predefined Value: 10000
requests to the target host. There should normally be no need to set this
value.
• Type: String
48.2.4 Logging
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ICS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ICS.Setup
Authentication (LOG_AUTH)
• Variable name: AS.ICS.LOG_AUTH
• Protection: Read-only variable
• Description:
This setting specifies the log level for the “Authentication” category. See also
key “LOG_AUTH_GROUP”. There are four distinct settings, each adding
additional logging from top to bottom.
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
Legacy (LOG_LEGACY)
• Variable name: AS.ICS.LOG_LEGACY
• Description:
This setting specifies the log level for the “Legacy Code” category. See also key
“LOG_LEGACY_GROUP”. There are four distinct settings, each adding
additional logging from top to bottom.
• Type: Enum
• Predefined Value: Warn
• Allowed Value:
• Warn ("Errors and Warnings")
• Info ("Errors, Warnings and Info")
• Debug ("Errors, Warnings, Info and Debug")
• Trace ("Errors, Warnings, Info, Debug and Trace")
Storage location:
- Windows: Configuration file: <ECM_ARCHIVE_SERVER_CONF>\config\setup\ICS.Setup
- UNIX: Configuration file: <ECM_ARCHIVE_SERVER_CONF>/config/setup/ICS.Setup
Authentication (LOG_AUTH_GROUP)
Annotation
Archive Box
OpenText Archive Center term. If enabled for a File Archiving data source, all
folders and documents below the specified path are archived and replaced by a
single folder shortcut.
This option is intended for documents, and optionally folders, that need to be
archived but are no longer in daily use. Thus, the required disk space on the file
server, including the total number of files, can be reduced. This is in contrast to
the shortcut scenario where every file is replaced by an individual link.
Separate machine on which documents are stored temporarily. That way, the
network traffic in the WAN can be reduced.
Archive Center
See OpenText Archive Center.
Archive ID
Archive mode
Specifies the different scenarios for the scan client (such as late archiving with
barcode, preindexing).
Web-based administration tool for monitoring the state of the processes, storage
areas, OpenText Document Pipeline and database space of the Archive Server.
Archive Spawner
Service program which starts and terminates the processes of the archive system.
A timestamp provider signs documents by adding the time and signing the
cryptographic checksum of the document. To ensure evidence of documents, use
an external, qualified timestamp provider like Timeproof or AuthentiDate.
OpenText Archive Timestamp Server is a timestamp provider for demonstration
or testing.
See Also Time Stamping Authority (TSA).
ArchiveLink
See SAP ArchiveLink.
Buffer
Burn buffer
A special burn buffer is required for ISO pools in addition to a disk buffer. The
burn buffer is required to physically write an ISO image. When the specified
amount of data has accumulated in the disk buffer, the data is prepared and
transferred to the burn buffer in the special format of an ISO image. From the
burn buffer, the image is transferred to the storage medium in a single,
continuous, uninterruptible process referred to as “burning” an ISO image. The
burn buffer is transparent to the administration.
Cache
CMIS
See Content Management Interoperability Services (CMIS).
Collection
OpenText Archive Center term. Controls and defines all archiving activities and is
mapped to a tenant-specific logical archive.
See Also Data source.
Primary time standard by which the world regulates clocks and time. It is one of
several closely related successors to Greenwich Mean Time (GMT). For most
purposes, UTC is synonymous with GMT.
Data source
OpenText Archive Center term. Specifies the origin and properties of the
documents that are archived by a collection.
Device
Short term for a storage device in the Archive Server environment. A device is a
physical unit that contains at least storage media, but can also contain additional
software or hardware to manage the storage media. Devices include the
following:
Digital signature
Disk buffer
See Buffer.
DocID
See Document ID (DocID).
DocTools
Document ID (DocID)
DP
See Document Pipeline (DP).
DPDIR
The directory in which the documents currently being processed by a
document pipeline are stored.
DS
See Document Service (DS).
OpenText Archive Center term. Directory in which all incoming emails are
temporarily saved. If archiving of an email fails, it is kept in a subdirectory and is
not automatically deleted. In this case, depending on the number and size of the
failed emails, the directory can grow very quickly.
The location of the directory can be defined during the installation of Archive
Center and in the ECA.ECA.configuration.emailsystem.rootfolder
configuration variable.
Enterprise Scan
GMT
Greenwich Mean Time; former global time standard. For most purposes, GMT is
synonymous with UTC.
See Also Coordinated Universal Time (UTC).
Hold
Logical archives can be put on hold, which means that their documents and
components cannot be changed or deleted. Adding further documents to the
archive is still possible.
Hot standby
ISO image
An ISO image is a container file containing documents and their file system
structure according to ISO 9660. It is written at once and fills one volume.
Job
Known server
A known server is an Archive Server whose archives and disk buffers are known
to another Archive Server. Making servers known to each other provides access
to all documents archived in all known servers. Read-write access is provided to
other known servers. Read-only access is provided to replicated archives. When a
request is made to view a document that is archived on another server and the
server is known, the inquired Archive Server is capable of displaying the
requested document.
Late Archiving
In the Late Archiving with Barcode scenario, paper documents are passed through
the office and are not archived until all document-related work has been
completed. If documents are archived in this way, indexing by barcode, patch
Log file
Log level
Adjustable diagnostic level of detail on which the log files are generated.
Logical archive
Logical area on the Archive Server in which documents are stored. The Archive
Server can contain many logical archives. Each logical archive can be configured
to represent a different archiving strategy appropriate to the types of documents
archived exclusively there. An archive can consist of one or more pools. Each pool
is assigned its own exclusive set of volumes which make up the actual storage
capacity of that archive.
Media
Short term for “long term storage media” in the Archive Server environment. A
medium is a physical object: hard disks and hard disk storage systems with or
without WORM feature.
Obtains status information about archives, pools, hard disk and database space
on the Archive Server. MONS is the configuration parameter name for the
Monitor Server.
MONS
See Monitor Server (MONS).
MTA documents
A meta (MTA) document, also known as a document list, is one comprehensive
file containing several individual documents of the same file format. If indexing
information is provided for the meta document (META_DOCUMENT_INDEX
component), the individual documents can be searched for and retrieved quickly
and easily.
Notes
Pool
A pool is a logical unit, a set of volumes of the same type that are written in the
same way, using the same storage concept. Pools are assigned to logical archives.
RC
See Read Component (RC).
Part of the Document Service that provides documents by reading them from the
archive.
Remote Standby
Archive Server setup scenario including two (or more) associated Archive
Servers. Archived data is replicated periodically from one server to the other in
order to increase security against data loss. Moreover, network load due to
document display actions can be reduced since replicated data can be accessed
directly on the replication server.
Replication
Retention
SAP ArchiveLink
A standardized interface, mainly used to connect an SAP system with the archive
system.
Scan station
Workstation for high volume scanning on which the Enterprise Scan client is
installed and to which a scanner is connected. Incoming documents are scanned
here and then transferred to Archive Server or Archive Center.
SecKey
With SecKeys, you can protect the connections between a client and Archive
Server. A SecKey is an additional parameter in the URL of the archive access. It
contains a digital signature and a signature time and date. The client application
creates a signature for the relevant parameters in the URL and the expiration
time, and signs it with a private key. Archive Server verifies the signature with
the public key, and accepts requests only with a valid signature and if the
SecKey's expiration time has not been reached.
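The SecKey flow above can be sketched in a few lines. This is a conceptual illustration only: the real mechanism signs with an asymmetric private key and verifies with the public key, whereas the sketch uses an HMAC with a shared secret purely to stay self-contained; all names and the token format are assumptions.

```python
import base64
import hashlib
import hmac
import time

SECRET = b"demo-shared-secret"   # stands in for the real signing key pair

def make_seckey(url_params: str, lifetime_s: int = 300) -> str:
    """Sign the URL parameters together with an expiration time."""
    expires = int(time.time()) + lifetime_s
    payload = f"{url_params}|{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return f"{expires}.{base64.urlsafe_b64encode(sig).decode()}"

def verify_seckey(url_params: str, seckey: str) -> bool:
    """Accept the request only if the signature is valid and not expired."""
    expires_s, sig_b64 = seckey.split(".", 1)
    if int(expires_s) < time.time():      # expiration time reached
        return False
    payload = f"{url_params}|{expires_s}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64))
```

Any change to the signed URL parameters, or an elapsed expiration time, causes verification to fail, which mirrors the behavior described above.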
SIA
See Single Instance Archiving (SIA).
Single Instance Archiving means that requests to archive the same component do
not result in an additional copy of the component on Archive Server. Instead, the
component is archived only once and then referenced by subsequent instances.
SIA is mainly used if a large number of emails with identical attachments have
to be archived.
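The idea behind Single Instance Archiving can be sketched with content-hash deduplication. This toy store is an illustration of the concept only, not Archive Server's implementation; the class and method names are invented for the example.

```python
import hashlib

class ComponentStore:
    """Toy store illustrating Single Instance Archiving: identical component
    content is stored once and referenced by every later archive request."""
    def __init__(self):
        self._blobs = {}      # content hash -> stored bytes
        self._refs = {}       # content hash -> reference count

    def archive(self, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()
        if digest not in self._blobs:     # first instance: store the bytes
            self._blobs[digest] = content
        self._refs[digest] = self._refs.get(digest, 0) + 1
        return digest                     # callers keep only the reference

store = ComponentStore()
a = store.archive(b"large identical attachment")
b = store.archive(b"large identical attachment")  # second request: no extra copy
assert a == b and len(store._blobs) == 1
```

Archiving the same attachment twice yields the same reference while only one copy of the bytes is kept, which is exactly the saving SIA provides for mass email archiving.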
Slot
Spawner
See Archive Spawner.
Tenant
TSA
See Time Stamping Authority (TSA).
UTC
See Coordinated Universal Time (UTC).
Volume
• A volume is a memory area of a storage medium that contains documents.
Depending on the device type, a device can contain many volumes (for
example, real and virtual jukeboxes), or is treated as one volume (for example,
storage systems without virtual jukeboxes). Volumes are logically attached
(assigned or linked) to pools.
• Volume is a technical collective term with different meaning in STORM and
Document Service (DS). A DS volume is a virtual container of volumes with
identical documents (after the complete backup is written). A STORM volume
is a virtual container of all identical copies of a volume. For ISO volumes,
there is no difference between DS and STORM volumes.
WC
See Write Component (WC).
Windows Viewer
WORM
WORM means Write Once Read Multiple. A WORM disk supports incremental
writing. On storage systems, a WORM flag is set to prevent changes in
documents.
Component of the Document Service that carries out all possible modifications. It is
used to archive incoming documents (store them in the buffer), modify and delete
existing documents, set, modify and delete attributes, and manage pools and
volumes.
Write job