
ADSM/TSM QuickFacts

in alphabetical order, supplemented thereafter by topic discussions

as compiled by Richard Sims (r b s @ b u . e d u),


Boston University (www.bu.edu), Office of Information Technology
On the web at http://people.bu.edu/rbs/ADSM.QuickFacts

Last update: 2005/03/07

This reference was originally created for my own use as a systems programmer's
"survival tool", to accumulate essential information and references that I knew
I would have to refer to again, and quickly re-find it. In participating in the
ADSM-L mailing list, it became apparent that others had a similar need, and so
it made sense to share the information. The information herein derives from many
sources, including submissions from other TSM customers. Thus, the information
is that which everyone involved with TSM has contributed to a common knowledge
base, and this reference serves as an accumulation of that knowledge, largely
reflective of the reality of working with the TSM product as an administrator.
I serve as a compiler and contributor. This informal, "real-world" reference is
intended to augment the formal, authoritative documentation provided by Tivoli
and allied vendors, as frequently referenced herein. See the REFERENCES area at
the bottom of this document for pointers to salient publications.

Command syntax is included for the convenience of a roaming techie carrying a
printed copy of this document, and thus is not to be considered definitive or
inclusive of all levels for all platforms: refer to manuals for the syntax
specific to your environment.
Upper case characters shown in command syntax indicate that at least those
characters are required, not that they have to be entered in upper case.
I realize that I need to better "webify" this reference, and intend to do so in
the future. (TSM administration is just a tiny portion of my work, and many
other things demand my time.)

In dealing with the product, one essential principle must be kept in mind, which
governs the way the product operates and restricts the server administrator's
control of that data: the data which the client sends to a server storage pool
will always belong to the client - not the server. There is no provision on the
server for inspecting or manipulating file system objects sent by the client.
Filespaces are the property of the client, and if the client decides not to do
another backup, that is the client's business: the server shall take no action
on the Active, non-expiring files therein. It is incumbent upon the server
administrator, therefore, to maintain a relationship with client administrators
so as to be informed when a filespace has fallen into disuse and is obsolete
and discardable.

? "Match-one" wildcard character used in
Include/Exclude patterns to match any
single character except the directory
separator; it does not match to end of
string. Cannot be used in directory
or volume names.
* "Match-all" wildcard character used in
Include/Exclude patterns to match zero
or more characters, but it does not
cross a directory boundary. Cannot be
used in directory or volume names.
* (asterisk) SQL SELECT: to specify that all columns
in a table are being referenced, which
is to say the entirety of a row. As in:
SELECT COUNT(*) AS -
"Number of nodes" FROM NODES
*.* Wildcard specification often seen in
Windows include-exclude specifications.
Note that *.* means any file name with
the '.' character anywhere in the name,
whereas * means any file name.
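The distinction can be illustrated with Python's fnmatch module, whose '*' and '?' wildcards behave similarly for simple file names (though, unlike the TSM patterns described here, fnmatch's '*' does not stop at directory boundaries):

```python
from fnmatch import fnmatchcase

# '*.*' requires a '.' somewhere in the name; plain '*' matches any name.
print(fnmatchcase("a.txt", "*.*"))   # a '.' is present, so it matches
print(fnmatchcase("readme", "*.*"))  # no '.' in the name, so no match
print(fnmatchcase("readme", "*"))    # '*' alone matches any name
```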
*SM Wildcard product name first used on
ADSM-L by Peter Jodda to generically
refer to the ADSM->TSM product - which
has become adroit, given the increasing
frequency with which IBM is changing the
name of the product.
See also: ESM; ITSM
& (ampersand) Special character in the MOVe DRMedia,
MOVe MEDia, and Query DRMedia commands,
CMd operand, as the lead character for
special variable names.
% (percent sign) In SQL: With the LIKE operator, %
functions as a wildcard character which
matches any sequence of zero or more
characters. For example, pattern A%
matches any character string starting
with a capital A. See also: _
%1, %2, %3, etc. These are symbolic variables within a
MACRO (q.v.).
_ (underscore) In SQL, can be used to match exactly one
character. See also: %
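A quick illustration of the two LIKE wildcards, using SQLite from Python (note that SQLite's LIKE is case-insensitive for ASCII by default, unlike the case-sensitive behavior of some other SQL implementations; the table and data here are made up for the example):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE nodes (node_name TEXT)")
cur.executemany("INSERT INTO nodes VALUES (?)",
                [("ALPHA",), ("ABE",), ("BETA",)])

# %: any sequence of zero or more characters
starts_with_a = [r[0] for r in cur.execute(
    "SELECT node_name FROM nodes WHERE node_name LIKE 'A%' "
    "ORDER BY node_name")]

# _: exactly one character, so 'A__' means a 3-character name starting with A
three_chars = [r[0] for r in cur.execute(
    "SELECT node_name FROM nodes WHERE node_name LIKE 'A__'")]

print(starts_with_a)  # ABE and ALPHA match 'A%'
print(three_chars)    # only ABE matches 'A__'
```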
[ "Open character class" bracket character
used in Include/Exclude patterns to
begin the enumeration of a character
class. That is, to wildcard on any of
the individual characters specified.
End the enumeration with ']'; which is
to say, enclose all the characters
within brackets.
You can code like [abc] to represent the
characters a, b, and c; or like [a-c] to
accomplish the same thing. Within the
character class specification, you can
code special characters with a
backslash, as in [abc\]de] to include
the ']' char.
> Redirection character in the server
administrative command line interface,
if at least one space on each side of
it, saying to replace the specified
output file. There is no "escape"
character to render this character
"un-special", as a backslash does in
Unix. Thus, you should avoid coding
" > " in an SQL statement: eliminate at
least one space on either side of it.
Ref: Admin Ref "Redirecting Command
Output"
>> Redirection characters in the server
administrative command line interface,
if at least one space on each side of
it, saying to append to the specified
output file.
Ref: Admin Ref "Redirecting Command
Output"
{} Use braces in a file path specification
within a query or restore/retrieve to
isolate and explicitly identify the file
space name (or virtual mount point name)
to *SM, in cases where there can be
ambiguity. By default, *SM uses the
file space with the longest name which
matches the beginning of that file path
spec, and that may not be what you want.
For example: If you have two filespaces
"/a" and "/a/b" and want to query
"/a/b/somefile" from the /a file system,
specify "{/a/}somefile".
See: File space, explicit specification
|| SQL: String concatenation operator
(some SQL dialects instead treat ||
as a logical OR), as in
SELECT filespace_name || hl_name ||
ll_name AS "_______File Name________"
Note that not all SQL implementations
support || for concatenation: you may
have to use CONCAT() instead.
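For example, SQLite supports || concatenation; the literal path pieces below stand in for the filespace_name, hl_name, and ll_name columns:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# Join three path pieces the way filespace_name || hl_name || ll_name would
row = cur.execute(
    "SELECT '/home' || '/user/' || 'file.txt' AS full_name").fetchone()
print(row[0])  # the three pieces concatenated into one string
```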
- "Character class range" character
used in Include/Exclude patterns to
specify a range of enumerated characters
as in "[a-z]".
] "Close character class" character used
in Include/Exclude patterns to end the
enumeration of a character class.
\ "Literal escape" character used in
Include/Exclude patterns to cause an
enumerated character class character to
be treated literally, as when you want
to include a closing square bracket as
part of the enumerated string
([abc\]xyz]).
... "Match N directories" characters used in
Include/Exclude patterns to match zero
or more directories.
Example: "exclude /cache/.../*" excludes
all directories (and files) under
directory "/cache/".
... As a filespace name being displayed at
the server, indicates that the client
stored the filespace name in Unicode,
and the server lacks the "code page"
which allows displaying the name in its
Unicode form.
/ (slash) At the end of a filespec, in Unix means
"directory". A 'dsmc i' on a filespec
ending in a slash says to backup only
directories with matching names. To back
up files under the directories, you need
to have an asterisk after the slash
(/*). If you specify what you know to be
a directory name, without a slash, *SM
will doggedly believe it to be the name
of a file - which is why you need to
maintain the discipline of always coding
directory names with a slash at the
end.
/... In ordinary include-exclude statements,
is a wildcard meaning zero or more
directories.
/... DFSInclexcl: is interpreted as the
global root of DFS.
/.... DFSInclexcl: Match zero or more
directories (in that "/..." is
interpreted as the global root of DFS).
/* */ Used in Macros to enclose comments.
The comments cannot be nested and cannot
span lines. Every line of a comment must
contain the comment delimiters.
= (SQL) Is equal to. The SQL standard specifies
that the equality test is case sensitive
when comparing strings.
!= (not equal) For SQL, you instead need to code "<>".
<> SQL: Means "not equal".
$$ACTIVE$$ The name given to the provisional active
policy set where definitions have been
made (manually or via Import), but you
have not yet performed the required
VALidate POlicyset and ACTivate
POlicyset to commit the provisional
definitions, whereafter there will be a
policy set named ACTIVE.
Ref: Admin Guide
See also: Import

0xdeadbeef Some subsystems pre-populate allocated
memory with the hexadecimal string
0xdeadbeef (this 32-bit hex value is a
data processing affectation) so as to be
able to detect that an application has
failed to initialize an acquired storage
area with binary zeroes. Landing on a
halfword boundary can obviously lead to
getting variant "0xbeefdead".
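The halfword effect can be demonstrated by reading a 32-bit word at a 2-byte offset into memory filled with the big-endian 0xdeadbeef pattern (a minimal sketch in Python):

```python
import struct

# Fill a buffer with the 0xdeadbeef pattern, big-endian
buf = struct.pack(">4I", *([0xDEADBEEF] * 4))

aligned = struct.unpack_from(">I", buf, 0)[0]  # read on a word boundary
shifted = struct.unpack_from(">I", buf, 2)[0]  # read on a halfword boundary

print(hex(aligned))  # the pattern as written
print(hex(shifted))  # bytes straddle two words, giving the "beefdead" variant
```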
10.0.0.0 - 10.255.255.255 Private subnet address range, as defined
in RFC 1918, commonly used via Network
Address Translation behind some firewall
routers/switches. You cannot address
such a subnet from the Internet: private
subnet addresses can readily initiate
communication with each other and
servers on the Internet, but Internet
users cannot initiate contacts with
them.
See also: 172.16.0.0 - 172.31.255.255;
192.168.0.0 - 192.168.255.255
1500 Server port default number for serving
clients. Specify via TCPPort server
option and DEFine SERver LLAddress.
1501 Client port for backups (schedule).
Note that this port exists only when the
scheduled session is due: the client
does not keep a port when it is waiting
for the schedule to come around.
1510 Client port for Shared Memory.
1543 ADSM HTTPS port number.
1580 Client admin port. HTTPPort default.
See also: Web Admin
1581 Default HTTPPort number for the Web
Client TCP/IP port.
172.16.0.0 - 172.31.255.255 Private subnet address range, as defined
in RFC 1918, commonly used via Network
Address Translation behind some firewall
routers/switches. You cannot address
such a subnet from the Internet: private
subnet addresses can readily initiate
communication with each other and
servers on the Internet, but Internet
users cannot initiate contacts with
them.
See also: 10.0.0.0 - 10.255.255.255;
192.168.0.0 - 192.168.255.255
192.168.0.0 - 192.168.255.255 Private subnet address range, as defined
in RFC 1918, commonly used via Network
Address Translation behind Asante and
other brand firewall routers/switches.
You cannot address such a subnet from
the Internet: private subnet addresses
can readily initiate communication with
each other and servers on the Internet,
but Internet users cannot initiate
contacts with them.
See also: 10.0.0.0 - 10.255.255.255;
172.16.0.0 - 172.31.255.255
2 GB limit Through AIX 4.1, Raw Logical Volume
(RLV) partitions and files are limited
to 2 GB in size. It takes AIX 4.2 to
go beyond 2 GB.
2105 Model number of the IBM Versatile
Storage Server. Provides SNMP MIB
software ibm2100.mib .
www.ibm.com/software/vss
32-bit executable in AIX? To discern whether an AIX command or
object module is 32-bit, rather than
64-bit, use the 'file' command on it.
(This command references "signature"
indicators listed in /etc/magic.) If
32-bit, the command will report like:
executable (RISC System/6000) or object
module not stripped
See also: 64-bit executable in AIX?
32-bit vs. 64-bit TSM for AIX See IBM site Technote 1154486 for a
table of filesets.
3420 IBM's legacy, open-reel, half-inch tape
format, circa 1974.
Records data linearly in 9 tracks (1
byte plus odd parity). Reels could hold
as much as 2400 feet of tape.
Capacity: 150 MB
Pigment: Iron
Models 4,6,8 handle up to 6250 bpi, with
an inter-block gap of 0.3".
Reel capacity: Varies according to block
size - max is 169 MB for a 2400' reel at
6250 bpi.
3466 See also: Network Storage Manager (NSM)
3466, number of *SM servers Originally, just one ADSM server per
3466 box. But as of 2000, multiple, as
in allowing the 3466 to perform DR onto
another TSM server. (See http://www.
storage.ibm.com/nsm/nsmpubs/nspubs.htm)
3466 web admin port number 1580. You can specify it as part of the
URL, like http://______:1580 .
3480, 3490, 3490E, 3590, 3494... IBM's high tape devices (3480, 3490,
3490E, 3590, 3494, etc.) are defined in
SMIT under DEVICES then TAPE DRIVES;
not thru ADSM DEVICES. This is because
they are shipped with the tape hardware,
not with ADSM. Also, these devices use
the "/dev/rmtX" name format: all other
ADSM tape drives are of the format
"/dev/mtX".
3480 IBM's first generation of this 1/2" tape
cartridge technology, announced March
22, 1984 and available January, 1985.
Used a single-reel approach and servo
tracking pre-recorded on the tape for
precise positioning and block
addressing. Excellent start-stop
performance. The cartridge technology
would endure and become the IBM
cartridge standard, prevailing into the
3490 and 3590 models for at least 20
more years.
Tracks: 18, recorded linearly and in
parallel until EOT encountered (not
serpentine like later technologies),
whereupon the tape would be full.
Recording density: 38,000 bytes/inch
Read/write rate: 3 MB/sec
Rewind time: 48 seconds
Tape type: chromium dioxide (CrO2)
Tape length: 550 feet
Cartridge dimensions: 4.2" wide x 4.8"
high x 1" thick
Cartridge capacity: Varies according to
block size - max is 208 MB.
Transfer rate: 3 MB/s
Next generation: 3490
3480 cleaning cartridge Employs a nylon filament ribbon instead
of magnetic tape.
3480 tape cartridge AKA "Cartridge System Tape".
Color: all gray.
Identifier letter: '1'.
See also: CST; HPCT; Media Type
3480 tape drive definition Defined in SMIT under DEVICES then
TAPE DRIVES; not thru ADSM DEVICES.
This is because as an IBM "high tape
device" it is shipped with the tape
hardware, not with ADSM. Also, these
devices use the "/dev/rmtX" format: all
other ADSM tape drives are of the format
"/dev/mtX".
3490 IBM's second generation of this 1/2"
tape cartridge technology, circa 1989,
using a single-reel approach and servo
tracking pre-recorded on the tape for
precise positioning. Excellent
start-stop performance.
Media type: CST
Tracks: 18 (like its 3480 predecessor)
recorded linearly and in parallel until
EOT encountered (not serpentine like
later technologies), whereupon the tape
would be full.
Transfer rate: 3 MB/sec sustained
Capacity: 400 MB physical
Tape type: chromium dioxide (CrO2)
Tape length: 550 feet
Note: Cannot read tapes produced on
3490E, due to 36-track format of that
newer technology.
Previous generation: 3480
Next generation: 3490E
3490 cleaning cartridge Employs a nylon filament ribbon instead
of magnetic tape.
3490 EOV processing 3490E volumes will do EOV processing
just before the drive signals end of
tape (based on a calculation from IBM
drives), when the drive signals end of
tape, or when maxcapacity is reached, if
maxcapacity has been set. When the
drive signals end of tape, EOV
processing will occur even if
maxcapacity has not been reached.
Contrast with 3590 EOV processing.
3490 not getting 2.4 GB per tape? In MVS TSM, if you are seeing your 3490
cartridges getting only some 800 MB per
tape, it is likely that your Devclass
specification has COMPression=No rather
than Yes. Also check that your
MAXCAPacity value allows filling the
tape, and that the 3490 drive itself
isn't hard-configured to prevent the
host from setting a high density.
3490 tape cartridge AKA "Enhanced Capacity Cartridge System
Tape".
Color: gray top, white base.
Identifier letter: 'E'
Capacity: 800 MB native; 2.4 GB
compressed (IDRC 3:1 compression)
3490 tape drive definition Defined in SMIT under DEVICES then
TAPE DRIVES; not thru ADSM DEVICES.
This is because as an IBM "high tape
device" it is shipped with the tape
hardware, not with ADSM. Also, these
devices use the "/dev/rmtX" format:
all other ADSM tape drives are of the
format "/dev/mtX".
3490E IBM's third generation of this 1/2"
tape cartridge technology, using a
single-reel approach and servo tracking
pre-recorded on the tape for precise
positioning. Excellent start-stop
performance.
Designation: CST-2
Tracks: 36, implemented in two sets of
18 tracks: the first 18 tracks are
recorded in the forward direction until
EOT is encountered, whereupon the heads
are electronically switched (no physical
head or tape shifting) and the tape is
then written backwards towards BOT.
Can read 3480 and 3490 tapes.
Capacity: 800 MB physical; 2.4 GB with
3:1 compression.
IDRC recording mode is the default, and
so tapes created on such a drive must be
read on an IDRC-capable drive.
Transfer rate: Between host and tape
unit buffer: 9 MB/sec. Between buffer
and drive head: 3 MB/sec.
Capacity: 800 MB physical
Tape type: chromium dioxide (CrO2)
Tape length: 800 feet
Previous generation: 3490
Next generation: 3590
3490E cleaning cartridge Employs a nylon filament ribbon instead
of magnetic tape.
3490E Model F 36-track head to read/write 18 tracks
bidirectionally.
349x tape library use, define "ENABLE3590LIBRary" definition in the
server options file.
Ref: Installing the Server and
Administrative Client.
3494 IBM robotic library with cartridge tapes,
originally introduced to hold 3490 tapes
and drives, but later to hold 3590 tapes
and drives (same cartridge dimensions).
Model HA1 is high availability: instead
of just one accessor (robotic mechanism)
at one end, it has two, at each end.
The 3494 does not maintain statistics
for its volumes: it does not track how
many times a volume was mounted, how
many times it suffered an I/O error,
etc.
See also: Convenience Input-Output
Station; Dual Gripper; Fixed-home Cell;
Floating-home Cell; High Capacity Output
Facility; Library audit; Library; 3494,
define; Library Manager;
SCRATCHCATegory; Volume Categories;
Volume States
3494, access via web This was introduced as part of the IBM
StorWatch facility in a 3494 Library
Manager component called 3494 Tape
Library Specialist, available circa late
2000. It is a convenience facility that
is read-only: one can do status
inquiries, but no functional operations.
If at the appropriate LM level, the
System Summary window will show
"3494 Specialist".
3494, add tape to 'CHECKIn LIBVolume ...'
Note that this involves a tape mount.
3494, audit tape (examine its barcode 'mtlib -l /dev/lmcp0 -a -V VolName'
to assure physically in library) Causes the robot to move to the tape and
scan its barcode.
'mtlib -l /dev/lmcp0 -a -L FileName'
can be used to examine tapes en masse, by
taking the first volser on each line of
the file.
3494, CE slot See: 3494 reserved cells
3494, change Library Manager PC In rare circumstances it will be
necessary to swap out the 3494's
industrial PC and put in a new one. A
major consideration here is that the
tape inventory is kept in that PC, and
the prospect of doing a Reinventory
Complete System after such a swap is
wholly unpalatable in that it will
discard the inventory and rebuild it -
with all the tape category code values
being lost, being reset to Insert. So
you want to avoid that. (A TSM AUDit
LIBRary can fix the category codes,
but...) And as Enterprise level
hardware and software, such changes
should be approached more intelligently
by service personnel, anyway. Realize
that the LM consists of the PC, the LM
software, and a logically separate
database - which should be as manageable
as all databases can be. If you activate
the Service menu on the 3494 control
panel, under Utilities you will find
"Dump database..." and "Restore
database...", which the service
personnel should fully exploit if at all
possible to preserve the database across
the hardware change. (The current LM
software level may have to be brought up
to the level of the intended, new PC for
the database transfer to work well.)
3494, change to manual operation On rare occasions, the 3494 robot may
fail and you will need to continue
processing by switching to manual
operation. This involves:
- Go to the 3494 Operator Station and
proceed per the Using Manual Mode
instructions in the 3494 OpGuide. Be
sure to let the library Pause
operation complete before entering
Manual Mode.
- TSM may have to be told that the
library is in manual mode. You cannot
achieve this via UPDate LIBRary: you
have to define another instance of
your library under a new name, with
LIBType=MANUAL. Then do UPDate
DEVclass to change your 3590 device
class to use the library in manual
mode for the duration of the robotic
outage.
- Either watch the Activity Log, doing
periodic Query REQuest commands; or
run 'dsmadmc -MOUNTmode'. REPLY to
outstanding mount requests to inform
TSM when a tape is mounted and ready.
If everything is going right, you should
see mount messages on the tape drive's
display and in the Manual Mode console
window, where the volser and slot
location will be displayed. If a tape
has already been mounted in Manual Mode,
dismounted, and then called for again,
there will be an "*" next to the slot
number when it is displayed on the tape
drive calling for the tape, to clue you
in that it is a recent repeater.
3494, count of all volumes Via Unix command:
'mtlib -l /dev/lmcp0 -vqK'
3494, count of cartridges in There seems to be no way to determine
Convenience I/O Station this. One might think of using the cmd
'mtlib -l /dev/lmcp0 -vqK -s ff10' to
get the number, but the FF10 category
code is in effect only as the volume is
being processed on its way to the
Convenience I/O. The 3494 Operator
Station status summary will say:
"Convenience I/O: Volumes present", but
not how many. The only recourse seems to
be to create a C program per the device
driver manual and the mtlibio.h header
file to inspect the
library_data.in_out_status value,
performing an And with value 0x20 and
looking for the result to be 0 if the
Convenience I/O is *not* all empty.
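A minimal sketch of that bit test, assuming (as the text does) that bit 0x20 of library_data.in_out_status is the relevant indicator; the field name and bit position here are taken on faith from the mtlibio.h approach described above:

```python
# Hypothetical: bit 0x20 of in_out_status, per the mtlibio.h approach above.
CONVENIENCE_IO_BIT = 0x20

def convenience_io_not_empty(in_out_status: int) -> bool:
    # Per the text: an AND with 0x20 yielding 0 means the
    # Convenience I/O station is NOT all empty.
    return (in_out_status & CONVENIENCE_IO_BIT) == 0
```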
3494, count of CE volumes Via Unix command:
'mtlib -l /dev/lmcp0 -vqK -s fff6'
3494, count of cleaning cartridges Via Unix command:
'mtlib -l /dev/lmcp0 -vqK -s fffd'
3494, count of SCRATCH volumes Via Unix command:
(3590 tapes, default ADSM SCRATCH 'mtlib -l /dev/lmcp0 -vqK -s 12E'
category code)
3494, eject tape from See: 3494, remove tape from
3494, identify dbbackup tape See: dsmserv RESTORE DB, volser unknown
3494, inventory operations See: Inventory Update; Reinventory
complete system
3494, list all tapes 'mtlib -l /dev/lmcp0 -qI'
(or use options -vqI for verbosity, for
more descriptive output)
3494, manually control Use the 'mtlib' command, which comes
with 3494 Tape Library Device Driver.
Do 'mtlib -\?' to get usage info.
3494, monitor See: mtevent
3494, not all drives being used See: Drives, not all in library being
used
3494, number of drives in Via Unix command:
'mtlib -l /dev/lmcp0 -qS'
3494, number of frames (boxes) The mtlib command won't reveal this.
The frames show in the "Component
Availability" option in the 3494 Tape
Library Specialist.
3494, partition/share TSM SAN tape library sharing support is
only for libraries that use SCSI
commands to control the library robotics
and the tape management. This does *not*
include the 3494, which uses network
communication for control. Sharing of
the 3494/3590s thus has to occur via
conventional partitioning or dynamic
drive sharing (which is via the
Auto-Share feature introduced in 1999).
There is no dynamic sharing of tape
volumes: they have to be pre-assigned to
their separate TSM servers via Category
Codes.
Ref: Redpaper "Tivoli Storage Manager:
SAN Tape Library Sharing".
Redbook "Guide to Sharing and
Partitioning IBM Tape Library Data"
(SG24-4409)
3494, ping You can ping a 3494 from another system
within the same subnet, regardless of
whether that system is in the LM's list
of LAN-authorized hosts. If you cannot
ping the 3494 from a location outside
the subnet, it may mean that the 3494's
subnet is not routed - meaning that
systems on that subnet cannot be reached
from outside.
3494, remote operation See "Remote Library Manager Console
Feature" in the 3494 manuals.
3494, remove tape from 'CHECKOut LIBVolume LibName VolName
[CHECKLabel=no] [FORCE=yes]
[REMove=no]'
To physically cause an eject via AIX
command, change the category code to
EJECT (X'FF10'):
'mtlib -l /dev/lmcp0 -vC -V VolName
-t ff10'
The more recent Library Manager software
has a Manage Import/Export Volumes menu,
wherein Manage Insert Volumes claims
ejectability.
3494, RS-232 connect to SP Yes, you can connect a 3494 to an
RS/6000 SP via RS-232, though it is
uncommon, slow, and of limited distance
compared to using Ethernet.
3494, status 'mtlib -l /dev/lmcp0 -qL'
3494, steps to set up in ADSM - Define the library
- Define the drives in it
- Add "ENABLE3590LIBRARY YES" to
dsmserv.opt
- Restart the server. (Startup message
"ANR8451I 349x library LibName is
ready for operations".)
3494 Cell 1 Special cell in a 3494: it is specially
examined by the robot after the doors
are closed. You would put here any tape
manually removed from a drive, for the
robot to put away. It will read the
volser label, then examine the cell which
was that tape cartridge's last home:
finding it empty, the robot will store
the tape there. The physical location of
that cell: first frame, inner wall,
upper leftmost cell (which the library
keeps empty).
3494 cells, total and available 'mtlib -l /dev/lmcp0 -qL' lines:
"number of cells", "available cells".
3494 cleaner cycles remaining 'mtlib -l /dev/lmcp0 -qL' line:
"avail 3590 cleaner cycles"
3494 cleaning cartridge See: Cleaner Cartridge, 3494
3494 connectivity A 3494 can be simultaneously connected
via LAN and RS-232.
3494 diagnosis See: trcatl
3494 ESCON device control Some implementations may involve ESCON
connection to 3490 drives plus SCSI
connection to 3590 drives. The ESCON
3490 ATL driver is called mtdd and the
SCSI 3590 ATL driver was called atldd,
and they have shared modules between
them. One thus may be hesitant to
install atldd due to this "sharing". In
the pure ESCON drive case, the commands
go down the ESCON channel, which is also
the data path. If you install atldd,
the commands now first go to the Library
Manager, which then reissues them to
those drives. Thus, it is quite safe to
install atldd for ESCON devices.
3494 inaccessible (usually after Check for the following:
just installed) - That the 3494 is in an Online state.
- In the server, that the atldd software
(LMCPD) has been installed and that
the lmcpd process is running.
- That your /etc/ibmatl.conf is correct:
if a TCP/IP connection, specify the IP
addr; if RS/232, specify the /dev/tty
port to which the cable is attached.
- If a TCP/IP connection, that you can
ping the 3494 by both its network name
and IP address (to assure that DNS was
correctly set up in your shop).
- If a LAN connection:
- Check that the 3494 is not on a Not
Routed subnet: such a router
configuration prevents systems
outside the subnet from reaching
systems residing on that subnet.
- A port number must be in your host
/etc/services for it to communicate
with the 3494. By default, the
Library Driver software installation
creates a port '3494/tcp' entry,
which should match the default
port at the 3494 itself, per the
3494 installation OS/2 TCP/IP
configuration work.
- Your host needs to be authorized to
the 3494 Library Manager, under "LAN
options", "Add LAN host". (RS/232
direct physical connection is its
own authorization.) Make sure you
specify the full host network name,
including domain (e.g., a.b.com).
If communications had been working
but stopped when your OS was
updated, assure that it still has
the same host name!
- If an RS/232 connection:
- Check the Availability of your
Direct Attach Ports (RS-232): the
System Summary should show them by
number, if Initialized, in the "CU
ports (RTIC)" report line. If not,
go into Service Mode, under
Availability, to render them
Available.
- Connecting the 3494 to a host is a
DTE<->DTE connection, meaning that
you must employ a "null modem" cable
or connector adapter.
- Certainly, make sure the RS-232
cable is run and attached to the
port inside the 3494 that you think
it is.
- Try performing 'mtlib' queries to
verify, outside of *SM, that the
library can be reached.
Presuming 3590 drives in the 3494, make
sure your server options file includes:
ENABLE3590LIBRARY YES
3494 Intervention Required detail The only way to determine the nature of
the Int Req on the 3494 is to go to its
Operator Station and see, under menu
Commands->Operator intervention.
There is no programming interface
available to allow you to get this
information remotely.
Odd note: A vision failure does not
result in an Int Req!
3494 IP address, determine Go to the 3494 control panel.
From the Commands menu, select
"LAN options", and then
"LM LAN information".
3494 Manual Mode If the 3494's Accessor is nonfunctional
you can operate the library in Manual
Mode. Using volumes in Manual Mode
affects their status: The 3494 redbook
(SG24-4632) says that when volumes are
used in Manual Mode, their LMDB
indicator is set to "Manual Mode", as
used to direct error recovery when the
lib is returned to Auto mode. This is
obviously necessary because the location
of all volumes in the library is
jeopardized by the LM's loss of control
of the library. The 3494 Operator Guide
manual instructs you to have Inventory
Update active upon return to Auto mode,
to re-establish the current location of
all volumes.
3494 microcode level See: "Library Manager, microcode level"
3494 port number See: Port number, for 3494
communication
3494 problem: robot is dropping This has been seen where the innards of
cartridges the 3494 have gone out of alignment, for
any of a number of reasons.
Re-teaching can often solve the problem,
as the robot re-learns positions and
thus realigns itself.
3494 problem: robot misses some During its repositioning operations, the
fiducials - but not all robot attempts to align itself with the
edges of each fiducial, but after
dwelling on one it keeps on searching,
as though it didn't see it.
This operation involves the LED, which
is carried on the accessor along with
the laser (which is only for barcode
reading). The problem is that the light
signal involved in the sensing is too
weak, which may be due to dirt, an aged
LED, or a failing sensor. The signal is
marginal, so some fiducials are seen,
but not others.
3494 problems See also "3494 OPERATOR STATION
MESSAGES" section at the bottom of this
document.
3494 reserved cells A 3494 minimally has two reserved cells:
1 A 1 Gripper error recovery (1 A 3
if Dual Gripper installed).
1 A 20 CE cartridge (3590). 1 A 19 is
also reserved for 3490E, if
such cartridges participate.
_ K 6 Not a cell, but a designation
for a tape drive on wall _.
3494 scratch category, default See: DEFine LIBRary
3494 sharing Can be done with TSM 3.7+, via the
"3494SHARED YES" server option; but you
still need to "logically" partition the
3494 via separate tape Category Codes.
Ref: Guide to Sharing and Partitioning
IBM Tape Library Dataservers,
SG24-4409. Redbooks: Tivoli Storage
Manager Version 3.7.3 & 4.1: Technical
Guide, section 8.2; Tivoli Storage
Manager SAN Tape Library Sharing.
See also: 3494SHARED; DRIVEACQUIRERETRY;
MPTIMEOUT
3494 sluggish The 3494 may be taking an unusually long
time to mount tapes or scan barcodes.
Possible reasons:
- A lot of drive cleaning activity can
delay mounts. (A library suddenly
exposed to a lot of dust could
evidence a sudden surge in cleaning.)
A shortage of cleaning cartridges
could aggravate that.
- Drive problems which delay ejects or
positioning.
- Library running in degraded mode.
- lmcpd daemon or network problems which
delay getting requests to the library.
- See if response to 'mtlib' commands is
sluggish. This can be caused by DNS
service problems on the OS/2 embedded
system. (That PC is typically
configured once, then forgotten; but
DNS servers may change in your
environment, requiring the OS/2 config
to be updated.)
Use the mtlib command to get status on
the library to see if any odd condition,
and visit the 3494 if necessary to
inspect its status. Observe it
responding to host requests to gauge
where the delay is.
3494 SNMP support The 3494 (beginning with Library Manager
code 518) supports SNMP alert messaging,
enabling you to monitor 3494 operations
from one or more SNMP monitor stations.
This initial support provides more than
80 operator-class alert messages
covering:
3494 device operations
Data cartridge alerts
Service requests
VTS alerts
See "SNMP Options" in the 3494 Operator
Guide manual.
3494 status 'mtlib -l /dev/lmcp0 -qL'
3494 Tape Library Specialist Provides web access to your 3494 LM.
Requires that the LM PC have at least
64 MB of memory, be at LM code level 524
or greater, and have FC 5045 (Enhanced
Library Manager).
3494 tapes, list 'mtlib -l /dev/lmcp0 -qI'
(or use options -vqI for verbosity, for
more descriptive output)
3494 TCP/IP, set up This is done during 3494 installation,
in OS/2 mode, upon invoking the HOSTINST
command, where a virtual "flip-book"
will appear so that you can click on
tabs within it, including a Network tab.
After installation, you could go into
OS/2 and there do 'cd \tcpip\bin' and
enter the command 'tcpipcfg' and click
in the Network tab.
Therein you can set the IP address,
subnet mask, and default gateway.
3494 vision failure May be simply a dusty lens, where
cleaning it will fix the problem.
3494 volume, list state, class, 'mtlib -l /dev/lmcp0 -vqV -V VolName'
volser, category
3494 volume, last usage date 'mtlib -l /dev/lmcp0 -qE -uFs
-V VolName'
3494 volumes, list 'mtlib -l /dev/lmcp0 -qI'
(or use options -vqI for verbosity, for
more descriptive output)
3494SHARED To improve performance of allocation of
3590 drives in the 3494, introduced by
APAR IX88531... ADSM was checking all
available drives on a 3494 for
availability before using one of them.
Each check took 2 seconds and was
performed twice per drive: once while
scanning the available drives and again
for the selected drive, resulting in
needless delays in mounting a volume.
The reason for this is that in a shared
3494 library environment, ADSM
physically verifies that each drive
assigned to ADSM is available and not
being used by another application. The
problem is that if ADSM is the only
application using the assigned drives,
this extra time to physically check the
drives is not needed. This was addressed
by adding a new option, 3494SHARED, to
control sharing.
Selections:
No (default) The 3494 is not being
shared by any other application.
That is, only one or more ADSM servers
are accessing the 3494.
Yes ADSM will select a drive that is
available and not being used by any
other application. You should only
enable this option if you have more
than two (2) drives in your library.
If you are currently sharing a 3494
library with other applications, you
will need to specify this option.
See also: DRIVEACQUIRERETRY; MPTIMEOUT
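The option is a single line in the server options file; a minimal dsmserv.opt fragment (assuming the 3494 really is shared with another application, else leave the default):

```
* dsmserv.opt fragment: 3494 shared with another application,
* so check drive availability before each mount
3494SHARED YES
```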
3495 Predecessor to the 3494, containing a GM
robot like those used in car assembly.
3570 The IBM 3570 Tape Subsystem is based on
the same technology as the IBM 3590 High
Performance Tape Subsystem. It
functionally expands the capability of
tape to perform both write and
read-intensive operations. It provides
faster data access than other tape
technologies, with a time from cassette
insertion to reading/writing data of
eight seconds. The 3570 also
incorporates a high-speed search
function. The tape drive reads and
writes data in a 128-track format, four
tracks at a time. Data is written using
an interleaved serpentine longitudinal
recording format starting at the center
of the tape (mid-tape load point) and
continuing to near the end of the
tape. The head is indexed to the next
set of four tracks and data is written
back to the mid-tape load point. This
process continues in the other direction
until the tape is full.
Cartridge: The 3570 uses a unique,
robust, twin-hub tape cassette that is
approximately half the size of the
3490/3590 cartridge tapes, with a
cassette capacity of 5 GB uncompressed
and up to 15 GB per cassette with LZ1
data compaction.
Also called "Magstar MP" (where the MP
stands for Multi-Purpose), supported by
the Atape driver. Think "3590, Jr."
The tape is half-wound at load time, so
can get to either end of the tape in
half the time than if the tape were
fully wound.
Cartridge type letter: 'F' (does not
participate in the volser).
An early problem of "Lost tension" was
common, attributed to bad tapes, rather
than the tape drives.
*SM library type: SCSI Library
Product summary:
http://www.ibm.com/ibm/history/
exhibits/storage/storage_3570.html
Manuals:
http://www.ibm.com/servers/storage/
support/tape/3570/installing.html
3570 "tapeutil" for NT See: ntutil
3570, to act as an ADSM library Configure to operate in Random Mode and
Base Configuration. This allows ADSM to
use the second drive for reclamation.
(The Magstar will not function as a
library within ADSM when set to
"automatic".) The /dev/rmt_.smc SCSI
Media Changer special device allows
library style control of the 3570.
3570/3575 Autoclean This feature does not interfere with
ADSM: the 3570 has its own slot for the
cleaner that is not visible to ADSM, and
the 3575 hides the cleaners from ADSM.
3570 configurations Base: All library elements are
available to all hosts. In dual drive
models, it is selected from Drive 1 but
applies to both drives. This config is
primarily used for single host
attachment. (Special Note for dual
drive models: In this config, you can
only load tapes to Drive 1 via the LED
display panel as everything is keyed
off of Drive 1. However, you may load
tapes to Drive 2 via tapeutil if the
Library mode is set to 'Random'.)
Split: This config is most often used
when the library unit is to be
twin-tailed between 2 hosts. In this
config, the library is "split" into 2
smaller half size libraries, each to be
used by only one host. This is
advantageous when an application does
not allow the sharing of one tape drive
between 2 hosts. The "first/primary"
library consists of:
Drive 1
The import/export (priority) cell
The right most magazine
Transport Mechanism
The "second" library consists of:
Drive 2
The leftmost magazine
Transport Mechanism
3570 Element addresses Drive 0 is element 16, Drive 1 is
element 17.
3570 mode A 3570 library must be in RANDOM mode to
be usable by TSM: AUTO mode is no good.
3570 tape drive cleaning Enable Autocleaning. Check with the
library operator guide.
The 3570 has a dedicated cleaning tape
tape storage slot, which does not take
one of the library slots.
3575 3570 library from IBM.
Attachment via: SCSI-2.
As of early 2001, customers report
problem of tape media snapping: the
cartridge gets loaded into the drive by
the library but it never comes ready:
such a cartridge may not be repairable.
Does not have a Teach operation like the
3494.
Ref: Redbook: Magstar MP 3575 Tape
Library Dataserver: Multiplatform
Implementation.
*SM library type: SCSI Library
3575, support C-Format XL tapes? In AIX, do 'lscfg -vl rmt_': A drive
capable of supporting C tapes should
report "Machine Type and Model 03570C.."
and the microcode level should be at
least 41A.
3575 configuration The library should be device /dev/smc0
as reflected in AIX command
'lsdev -C tape'...not /dev/lb0 nor
/dev/rmtX.smc as erroneously specified
in the Admin manuals.
3575 tape drive cleaning The 3575 does NOT have a dedicated
cleaning tape storage slot. It takes up
one of the "normal" tape slots, reducing
the Library capacity by one.
357x library/drives configuration You don't need to define an ADSM device
for 357x library/drives under AIX: the
ADSM server on AIX uses the /dev/rmtx
device. Don't go under SMIT ADSM DEVICES
but just run 'cfgmgr'. Once the rmtx
devices are available in AIX, you can
define them to ADSM via the admin
command line. For example, assuming you
have two drives, rmt0 and rmt1, you
would use the following adsm admin
commands to define the library and
drives:
DEFine LIBRary mylib LIBType=SCSI
DEVice=/dev/rmt0.smc
DEFine DRive mylib drive1
DEVice=/dev/rmt0 ELEMent=16
DEFine DRive mylib drive2
DEVice=/dev/rmt1 ELEMent=17
(you may want to verify the element
numbers but these are usually the
default ones)
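The define sequence above lends itself to a macro: the sketch below emits the same admin commands, one DEFine DRive per rmt device, so the output can be saved to a file and run via 'dsmadmc ... macro'. The library name "mylib", devices rmt0/rmt1, and elements 16/17 are the example values from the entry above; adjust for your hardware.

```shell
# Emit TSM admin commands defining a 357x SCSI library and its drives.
lib=mylib
echo "DEFine LIBRary $lib LIBType=SCSI DEVice=/dev/rmt0.smc"
n=1
elem=16
for d in rmt0 rmt1; do
    # One DEFine DRive per device; element numbers are usually 16, 17.
    echo "DEFine DRive $lib drive$n DEVice=/dev/$d ELEMent=$elem"
    n=$((n + 1))
    elem=$((elem + 1))
done
```

Redirect the output to a file and feed it to dsmadmc as a macro, after verifying the element numbers against your library.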
3575 - L32 Magstar Library contents, Unix: 'tapeutil -f /dev/smc0 inventory'
list
358x drives These are LTO Ultrium drives.
Supported by IBM Atape device driver.
See: LTO; Ultrium
3580 IBM model number for LTO Ultrium tape
drive. A basic full-height, 5.25-inch
SCSI drive enclosure; two-line LCD
readout.
Flavors: L11, low-voltage differential
(LVD) Ultra2 Wide SCSI; H11,
high-voltage differential SCSI.
Often used with Adaptec 29160 SCSI
card (but use the IBM driver - not the
Adaptec driver).
The 3580 Tape Drive is capable of data
transfer rates of 15 MB per second with
no compression and 30 MB per second at
2:1 compression. (Do not expect to come
close to such numbers when backing up
small files: see "Backhitch".)
Review: www.internetweek.com/reviews00/
rev120400-2.htm
The Ultrium 1 drives have had problems:
- Tapes would get stuck in the drives.
IBM (Europe?) engineered a field
compensation involving installing a
"clip" in the drive. This is ECA 009,
which is not a mandatory EC; to be
applied only if the customer sees
frequent B881 errors in the library
containing the drive. The part number
is 18P7835 (includes tool). Takes about
half an hour to apply. One customer
reports having the clip but still
seeing problems, which appear to stem
from inferior cartridge construction.
- Faulty microcode. As evidenced in a
late 2003 defect where certain types
of permanent write errors, followed
by a rewind command, cause an end of
data (EOD) mark to be written at the
BOT (beginning of tape).
See also: LTO; Ultrium
3580 (LTO) cleaning cartridge life The manual specifies how much you should
expect out of a cleaning cartridge:
"The IBM TotalStorage LTO Ultrium
Cleaning Cartridge is valid for 50
uses." (2003 manual)
Customers report that if you insert a
cleaning tape when the drive is not
seeking to be cleaned, that it will not
clean the drive. However, the usage
count for the cleaning cartridge will
still be incremented. (This behavior is
subject to microcode changes.)
3581 IBM model number for LTO Ultrium tape
drive with autoloader. Houses one drive
and seven slots: five in front, two in
the rear.
*SM library type: SCSI Library
See also: Backhitch; LTO; Ultrium
3581, configuring under AIX Simply install the device driver and you
should be able to see both the drive and
medium changer devices as SCSI tape
devices (/dev/rmt0 and /dev/smc0). When
you will configure the library and drive
in TSM, use device type "LTO", not SCSI.
Ref: TSM 4.1.3 server README file
3582 IBM LTO Ultrium cartridge tape library.
Up to 2 Ultrium 2 tape drives and 23 tape
cartridges.
Requires Atape driver on AIX and like
hosts: Atape level 8.1.3.0 added support
for 3582 library.
Reportedly not supported by TSM 5.2.2.
See also: Backhitch; LTO; Ultrium
3583 IBM LTO Ultrium cartridge tape library.
Formal name: "LTO Ultrium Scalable Tape
Library 3583". (But it is only slightly
scalable: look to the 3584 for higher
capacity.)
Six drives, 18 cartridges.
Can have up to 5 storage columns, which
the picker/mounter accesses as in a
silo. Column 1 can contain a single-slot
or 12-slot I/O station. Column 2
contains cartridge storage slots and is
standard in all libraries. Column 3
contains drives. Columns 4 and 5 may be
optionally installed and contain
cartridge storage slots. Beginning with
Column 1 (the I/O station column), the
columns are ordered clockwise. The three
columns which can house cartridges do so
with three removable magazines of six
slots each: 18 slots per column, 54
slots total. Add two removable I/O
station magazines through the door and
one inside the door to total 72 cells,
60 of which are wholly inside the unit.
are reports that 2 of those 60 slots are
reserved for internal tape drive mounts,
though that doesn't show up in the doc.)
Model L72: 72 cartridge storage slots
As of 2004 handles the Ultrium 2 or
Ultrium 1 tape drive. The Ultrium 2
drive can work with Ultrium 1 media, but
at lesser speeds (see "Tape Drive
Performance" in the 3583 Setup and
Operator Guide manual).
Cleaning tapes should live in the
reserved, nonaddressable slots at the
top of silo columns (where the picker's
bar code reader cannot look).
http://www.storage.ibm.com/hardsoft/tape
/pubs/pubs3583.html
*SM library type: SCSI Library
The 3583 had a variety of early problems
such as static buildup: the picker would
run fine for a while, until enough
static built up, then it would die for
no reason apparent to the user. The fix
was to replace the early rev picker with
a newer design.
Reports indicate that IBM is rebranding
what is actually an ADIC library: IBM
and Dell OEM this library from ADIC.
Beware that many replacement parts are
refurbished rather than new.
See also: 3584; Accelis; L1; Ultrium
3583, convert I/O station to slots Via Setup->Utils->Config.
Then you have to get the change
understood by TSM - and perhaps the
operating system. A TSM AUDit LIBRary
may be enough; or you may have to incite
an operating system re-learning of the
SCSI change, which may involve rebooting
the opsys.
3583 cleaning cartridge Volser must start with "CLNI" so that
the library recognizes the cleaning tape
as such (else it assumes it's a data
cartridge). The cleaning cartridge is
stored in any slot in the library.
Recent (2002/12) updates to firmware
force the library to handle cleaning
itself and hide the cleaning cartridges
from *SM.
3583 door locked, never openable See description of padlock icon in the
3583 manual. A basic cause is that the
I/O station has been configured as all
storage slots (rather than all I/O
slots). In a Windows environment, this
may be caused by RSM taking control of
the library: disable RSM when it is not
needed. This condition may be a fluke
which power-cycling the library will
undo.
3583 driver and installation The LTO/Ultrium tape technology was
jointly developed by IBM, and so they
provide a native device driver. In AIX,
it is supported by Atape; in Solaris, by
IBMtape; in Windows, by IBMUltrium; in
HP-UX, by atdd.
1. Install the Ultrium device driver,
available from
ftp://ftp.software.ibm.com/storage
/devdrvr/<YourOpSys>/ directory
2. In NT, under Tape Devices, press ESC
on the first panel.
3. Select the Drivers tab and add your
library.
4. Select the 3583 library and click on
OK.
5. Press Yes to use the existing files.
3583 "missing slots" If not all storage cells in the library
are usable (the count of usable slots is
short), it can be caused by a corrupt
volume whose label cannot be read during
an AUDit LIBRary. You may have to
perform a Restore Volume once the volume
is identified.
3584 The high end of IBM's mid-range tape
library offerings. Formal name:
LTO UltraScalable Tape Library
Initially housed LTO Ultrium drives and
cartridges; but as of mid 2004 also
supports 3592 J1A.
Twelve drives, 72 cartridges. Can also
support DLT.
Interface: Fibre Channel or SCSI
Its robotics are reported to be much
faster than those in the 3494, making
for faster mounting of tapes. In Unix,
the library is defined as device
/dev/smc0, and by default is LUN 1 on
the lowest-number tape drive in the
partition - normally drive 1 in the
library, termed the Master Drive by CEs.
(Remove that drive and you suffer
ANR8840E trying to interact with the
library.) In AIX, 'lsdev -Cc tape'
should show all the devices.
The 3584 has a web interface
("Specialist"), but the library control
panel cannot be seen from it.
*SM library type: SCSI Library
See also: LTO; Ultrium
3584 bar code reading The library can be set to read either
just the 6-char cartridge serial
("normal" mode) or that plus the "L1"
tape cartridge identifier as well
("extended" mode).
3584 cleaning cartridge Volser must start with "CLNI" or "CLNU"
so that the library recognizes the
cleaning tape as such (else it assumes
it's a data cartridge). The cleaning
cartridge is stored in any data-tape
slot in the library (but certainly not
the Diagnostic Tape slot).
Follow the 3584 manual's procedure for
inserting cleaning cartridges.
Auto Clean should be activated.
The cleaning tape is valid for 50 uses.
When the cartridge expires, the library
displays an Activity screen like the
following:
Remove CLNUxxL1
Cleaning Cartridge Expired
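The volser prefix rule above (CLNI or CLNU marks a cleaning cartridge) is easy to encode when scripting library housekeeping; a small POSIX shell sketch (the function name is made up):

```shell
# Classify a volser the way the 3584's cleaning-tape check is described
# above: a CLNI or CLNU prefix means cleaning cartridge, else data.
is_cleaning_volser() {
    case "$1" in
        CLNI*|CLNU*) echo cleaning ;;
        *)           echo data ;;
    esac
}
is_cleaning_volser CLNU07L1    # prints: cleaning
is_cleaning_volser JKA123      # prints: data
```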
3590 IBM's fourth generation of this 1/2"
tape cartridge technology, using a
single-reel approach and servo tracking
pre-recorded on the tape for precise
positioning. Excellent start-stop
performance. Uses magneto-resistive
heads for high density recording.
Introduced: 1995
Tape length: 300 meters (1100 feet)
Tracks: 128, written 16 at a time, in
serpentine fashion. The head contains 32
track writers: As the tape moves
forward, 16 tracks are written until EOT
is encountered, whereupon electronic
switching causes the other 16 track
writers in the heads to be used as the
tape moves backwards towards BOT. Then,
the head is physically moved (indexed)
to repeat the process, until finally all
128 tracks are written as 8 interleaved
sets of 16 tracks.
Transfer rate: Between host and tape
unit buffer: 20 MB/sec with fast, wide,
differential SCSI; 17 MB/sec via ESCON
channel interface. Between buffer and
drive head: 9 MB/sec.
Pigment: MP1 (Metal Particle 1)
Note that "3590" is a special, reserved
DEVType used in 'DEFine DEVclass'.
Cartridge type letter: 'J' (does not
participate in the volser).
See publications references at the
bottom of this document.
See also: 3590E
Previous generation: 3490E
Next generation: 3590E
See also: MP1
3590, AIX error messages If a defective 3590 is continually
putting these out, making the drive
Unavailable from the 3494 console will
stop the errors.
3590, bad block, dealing with Sometimes there is just one bad area on
a long, expensive tape. Wouldn't it be
nice to be able to flag that area as bad
and be able to use the remainder of the
tape for viable storage? Unfortunately,
there is no documented way to achieve
this with 3590 tape technology: when
just one area of a tape goes bad, the
tape becomes worthless.
3590, handling DO NOT unspool tape from a 3590
cartridge unless you are either
performing a careful leader block
replacement or a post-mortem. Unspooling
the tape can destroy it! The situation
is clearances: The spool inside the
cartridge is spring-loaded so as to keep
it from moving when not loaded. The tape
drive will push the spool hub upward
into the cartridge slightly, which
disengages the locking. The positioning
is exacting. If the spool is not at just
the right elevation within the
cartridge, the edge of the tape will
abrade against the cartridge shell,
resulting in substantial, irreversible
damage to the tape.
3590, write-protected? With all modern media, a "void" in the
sensing position indicates writing not
allowed. IBM 3480/3490/3590 tape
cartridges have a thumbwheel (File
Protect Selector) which, when turned,
reveals a flat spot on the thumbwheel
cylinder, which is that void/depression
indicating writing not allowed. So,
when you see the dot, it means that the
media is write-protected. Rotate the
thumbwheel away from that to make the
media writable. Some cartridges show a
padlock instead of a dot, which is a
great leap forward in human engineering.
See also: Write-protection of media
3590 barcode Is formally "Automation Identification
Manufacturers Uniform Symbol Description
Version 3", otherwise known as Code 39.
It runs across the full width of the
label. The two recognized vendors:
Engineered Data Products (EDP) Tri-Optic
Wright Line Tri-Code
Ref: Redbook "IBM Magstar Tape Products
Family: A Practical Guide", topic
Cartridge Labels and Bar Codes.
See also: Code 39
3590 Blksize See: Block size used for removable media
3590 capacity See: 3590 'J'; 3590 'K'
See also: ESTCAPacity
3590 cleaning See: 3590 tape drive cleaning
3590 cleaning interval The normal preventive maintenance
interval for the 3590 is once every 150
GB (about once every 15 tapes).
Adjust via the 3494 Operator Station
Commands menu selection "Schedule
Cleaning", in the "Usage clean" box.
Magstar Tape Guide redbook recommends
setting the value to 999 to let the
drive incite cleaning, rather than have
the 3494 Library Manager initiate it
(apparently to minimize drive wear).
Ref: 3590 manual; "IBM Magstar Tape
Products Family: A Practical Guide"
redbook
3590 cleaning tape Color: Black shell, with gray end
notches
3590 cleaning tape mounts, by drive, Put the 3494 into Pause mode;
display Open the 3494 door to access the given
3590's control panel;
Select "Show Statistics Menu";
See "Clean Mounts" value.
3590 compression of data The 3590 performs automatic compression
of data written to the tape, increasing
both the effective capacity of the 10 GB
cartridge and the effective write
speed of the drive. The 3590's
data compression algorithm is a
Ziv-Lempel technique called IBMLZ1, more
effective than the BAC algorithm used in
the 3480 and 3490.
Ref: Redbook "Magstar and IBM 3590 High
Performance Tape Subsystem Technical
Guide" (SG24-2506)
See also: Compression algorithm, client
3590 Devclass, define 'DEFine DEVclass DevclassName
DEVType=3590 LIBRary=LibName
[FORMAT=DRIVE|3590B|3590C|
3590E-B|3590E-C]
[MOUNTLimit=Ndrives]
[MOUNTRetention=Nmins]
[PREFIX=TapeVolserPrefix]
[ESTCAPacity=X]
[MOUNTWait=Nmins]'
Note that "3590" is a special, reserved
DEVType.
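A filled-in example of the syntax above (the devclass and library names here are made-up, and the FORMAT and mount values are illustrative, not required):

```
DEFine DEVclass 3590class DEVType=3590 LIBRary=3494lib FORMAT=3590C MOUNTLimit=2 MOUNTRetention=5
```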
3590 drive* See: 3590 tape drive*
3590 EOV processing There is a volume status full for 3590
volumes. 3590 volumes will do EOV
processing when the drive signals end
of tape, or when the maxcapacity is
reached, if maxcapacity has been set.
When the drive signals end of tape, EOV
processing will occur even if
maxcapacity has not been reached.
Contrast with 3490 EOV processing.
3590 errors See: MIM; SARS; SIM; VCR
3590 exploded diagram (internals) http://www.thic.org/pdf/Oct00/
imation.jgoins.001003.pdf page 20
3590 Fibre Channel interface There are two fibre channel interfaces
on the 3590 drive, for attaching to up
to 2 hosts. Supported in TSM 3.7.3.6
Available for 3590E & 3590H drives but
not for 3590B.
3590 'J' 3590 High Performance Cartridge Tape
(HPCT), the original 3590 tape
cartridge, containing 300 meters of
half-inch tape.
Predecessor: 3490 "E"
Barcodette letter: 'J'
Color of leader block and notch tabs:
blue
Compatible drives: 3590 B; 3590 E;
3590 H
Capacity: 10 GB native on Model B drives
(up to 30 GB with 3:1 compression);
20 GB native on Model E drives (up to
60 GB with 3:1 compression);
30 GB native on Model H drives (up to
90 GB with 3:1 compression);
Notes: Has the thickest tape of the 3590
tape family, so should be the most
robust.
See also: 3590 cleaning tape; 3590 tape
cartridge; 3590 'K'; EHPCT; HPCT
3590 'K' (3590 K; 3590K) 3590 Extended High Performance Cartridge
Tape, aka "Extended length",
"double length": 600 meters of thinner
tape.
Available: March 3, 2000
Predecessor: 3590 'J'
Barcodette letter: 'K'
Color of leader block and notch tabs:
green
Compatible drives: 3590 E; 3590 H
Capacity: 40 GB native on 3590 E drives
(up to 120 GB with 3:1 compression,
depending upon the compressability of
the data);
60 GB native on Model H drives (up to
180 GB with 3:1 compression);
Hardware Announcement: ZG02-0301
Notes: The double length of the tape
spool makes for longer average
positioning times. Fragility: Because
so much tape is packed into the
cartridge, it tends to be rather close
to the inside of the shell, and so is
more readily damaged if the tape is
dropped, as compared to the 3590 'J'.
3590 microcode level Unix: 'tapeutil -f /dev/rmt_ vpd'
(drive must not be busy)
see "Revision Level" value
AIX: 'lscfg -vl rmt_'
see "Device Specific.(FW)"
Windows: 'ntutil -t tape_ vpd'
Microcode level shows up as
"Revision Level".
3590 Model B11 Single-drive unit with attached
10-cartridge Automatic Cartridge
Facility, intended to be rack-mounted
(IBM 7202 rack). Can be used as a mini
library. Interface is via integral
SCSI-3 controller with two ports.
As of late 1996 it is not possible to
perform reclamation between 2 3590 B11s,
because they are considered separate
"libraries".
Ref: "IBM TotalStorage Tape Device
Drivers: Installation and User's Guide",
<OStype> Tape and Medium Changer Device
Driver section.
3590 Model B1A Single-drive unit intended to be
installed in a 3494 library.
Interface is via integral SCSI-3
controller with two ports.
3590 Model E11 Rack-mounted 3590E drive with attached
10-cartridge ACF.
3590 Model E1A 3590E drive to be incorporated into a
3494.
3590 modes of operation (Referring to a 3590 drive, not
in a 3494 library, with a tape magazine
feeder on it.)
Manual: The operator selects Start to
load the next cartridge.
Accumulate: Take each next cartridge
from the Priority Cell, return
to the magazine.
Automatic: Load next tape from magazine
without a host Load request.
System: Wait for Load request from host
before loading next tape from
magazine.
Random: Host treats magazine as a mini
library of 10 cartridges and
uses Medium Mover SCSI cmds to
select and move tapes between
cells.
Library: For incorporation of 3590 in a
tape library server machine
(robot).
3590 performance See: 3590 speed
3590 SCSI device address Selectable from the 3590's mini-panel,
under the SET ADDRESS selection, device
address range 0-F.
3590 Sense Codes Refer to the "3590 Hardware Reference"
manual.
3590 servo tracks Each IBM 3590 High Performance Tape
Cartridge has three prerecorded servo
tracks, recorded at time of manufacture.
The servo tracks enable the IBM 3590
tape subsystem drive to position the
read/write head accurately during the
write operation. If the servo tracks are
damaged, the tape cannot be written to.
3590 sharing between two TSM servers Whether by fibre or SCSI cabling, when
sharing a 3590 drive between two TSM
servers, watch out for SCSI resets
during reboots of the servers.
If the server code and hardware don't
mesh exactly right, it's possible to get
a "mount point reserved" state, which
requires a TSM restart to clear.
3590 speed Note from 1995 3590 announcement, number
195-106: "The actual throughput a
customer may achieve is a function of
many components, such as system
processor, disk data rate, data block
size, data compressibility, I/O
attachments, and the system or
application software used. Although the
drive is capable of a 9-20MB/sec
instantaneous data rate, other
components of the system may limit the
actual effective data rate. For example,
an AS/400 Model F80 may save data with a
3590 drive at up to 5.7MB/sec. In a
current RISC System/6000 environment,
without filesystem striping, the disk,
filesystem, and utilities will typically
limit data rates to under 4MB/sec.
However, for memory-to-tape or
tape-to-tape applications, a RISC
System/6000 may achieve data rates of up
to 13MB/sec (9MB/sec uncompacted). With
the 3590, the tape drive should no
longer be the limiting component to
achieving higher performance."
See also IBM site Technote
"D/T3590 Tape Drive Performance"
3590 statistics The 3590 tape drive tracks various usage
statistics, which you can ask it to
return to you, such as Drive Lifetime
Mounts, Drive Lifetime Megabytes Written
or Read, from the Log Page X'3D'
(Subsystem Statistics), via discrete
programming or with the 'tapeutil'
command Log Sense Page operation,
specifying page code 3d and a selected
parameter number, like 40 for Drive
Lifetime Mounts. Refer to the 3590
Hardware Reference manual for byte
positions.
See also: 3590 tape drive, hours powered
on; 3590 tape mounts, by drive
3590 tape cartridge AKA "High Performance Cartridge Tape".
See: 3590 'J'
3590 tape drive The IBM tape drive used in the 3494 tape
robot, supporting 10Gbytes per cartridge
uncompressed, or typically 30Gbytes
compressed via IDRC. Uses High
Performance Cartridge Tape.
3590 tape drive, hours powered on Put the 3494 into Pause mode;
Open the 3494 door to access the given
3590's control panel;
Select "Show Statistics Menu";
See "Pwr On Hrs" value.
3590 tape drive, release from host Unix: 'tapeutil -f /dev/rmt? release'
after having done a "reserve" Windows: 'ntutil -t tape_ release'
3590 tape drive, reserve from host Unix: 'tapeutil -f /dev/rmt? reserve'
Windows: 'ntutil -t tape_ reserve'
When done, release the drive:
Unix: 'tapeutil -f /dev/rmt? release'
Windows: 'ntutil -t tape_ release'
3590 tape drive Available? (AIX) 'lsdev -C -l rmt1'
3590 tape drive cleaning The drive may detect when it needs
cleaning, at which point it will display
its need on its front panel, and notify
the library (if so attached via RS-422
interface) and the host system (AIX gets
Error Log entry ERRID_TAPE_ERR6, "tape
drive needs cleaning", or
TAPE_DRIVE_CLEANING entry - there will
be no corresponding Activity Log entry).
The 3494 Library Manager would respond
by adding a cleaning task to its Clean
Queue, for when the drive is free. The
3494 may also be configured to perform
cleaning on a scheduled basis, but be
aware that this entails additional wear
on the drive and makes the drive
unavailable for some time, so choose
this only if you find tapes going
read-only due to I/O errors.
Msgs: ANR8914I
3590 tape drive model number Do 'mtlib -l /dev/lmcp0 -D'
The model number is in the third
returned token.
For example, in returned line:
" 0, 00116050 003590B1A00"
the model is 3590 B1A.
3590 tape drive serial number Do 'mtlib -l /dev/lmcp0 -D'
The serial number is the second
returned token, all but the last digit.
For example, in returned line:
" 0, 00116050 003590B1A00"
the serial number is 11605.
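The token parsing described in the two entries above can be scripted; a sketch using POSIX shell word splitting, applied to the sample line shown above:

```shell
# Extract the drive model and serial number from one line of
# 'mtlib -l /dev/lmcp0 -D' output, per the rules in the entries above.
line=" 0, 00116050 003590B1A00"
set -- $line              # split: $1="0,"  $2=serial token  $3=model token
serial=${2%?}             # serial is all but the last digit -> 0011605
model=${3#00}             # strip the leading zero padding
model=${model%00}         # and the trailing zeros -> 3590B1A
echo "model=$model serial=$serial"
```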
3590 tape drive sharing As of TSM 3.7, two TSM servers can be
connected to each port on a twin-tailed
3590 SCSI drive in the 3494, in a
feature called "auto-sharing". Prior to
this, individual drives in a 3494
library could only be attached to a
particular server (library
partitioning): each drive was owned by
one server.
3590 tape drive status, from host 'mtlib -l /dev/lmcp0 -qD -f /dev/rmt1'
3590 tape drive use, define "ENABLE3590LIBRary" definition in the
server options file.
3590 tape drives, list From AIX: 'mtlib -l /dev/lmcp0 -D'
3590 tape drives, list in AIX 'lsdev -C -c tape -H -t 3590'
3590 tape drives, not being used in a See: Drives, not all in library being
library used
3590 tape mounts, by drive Put the 3494 into Pause mode;
Open the 3494 door to access the given
3590's control panel;
Select "Show Statistics Menu";
See "Mounts to Drv" value.
See also: 3590 tape drive, hours powered
on; 3590 statistics
3590 volume, verify Devclass See: SHow FORMAT3590 _VolName_
3590B The original 3590 tape drives.
Cartridges supported: 3590 'J' (10-30
GB), 'K' (20-60 GB)
(Early B drives can use only 'J'.)
Tracks: 128 total tracks, 16 at a time,
in serpentine fashion.
Number of servo tracks: 3
Interfaces: Two, SCSI (FWD)
Previous generation: none in 3590
series; but 3490E conceptually.
See also: 3590C
3590B vs. 3590E drives A tape labelled by a 3590E drive cannot
be read by a 3590B drive. A tape
labelled by a 3590B drive can be read by
a 3590E drive, but cannot be written by
a 3590E drive.
The E model can read the B formatted
cartridge.
The E model writes in 256 track format
only and can not write or append to a B
formatted tape.
The E model can reformat a B format tape
and then can write in the E format.
The B model can not read E formatted
data.
The B model can reformat an E format
tape and then can write in the B format:
the B model device must be a minimum
device code level (A_39F or B_731) to do
so.
3590C FORMAT value in DEFine DEVclass for the
original 3590 tape drives, when data
compression is to be performed by the
tape drive.
See also: 3590B; DRIVE
3590E IBM's fifth generation of this 1/2" tape
cartridge technology, using a
single-reel approach and servo tracking
pre-recorded on the tape for precise
positioning. Excellent start-stop
performance.
Cartridges supported: 3590 'J' (20-60
GB), 'K' (40-120 GB)
Tracks: 256 (2x the 3590B), written 16
at a time, in serpentine fashion. The
head contains 32 track writers: As the
tape moves forward, 16 tracks are
written until EOT is encountered,
whereupon electronic switching causes
the other 16 track writers in the heads
to be used as the tape moves backwards
towards BOT. Then, the head is
physically moved (indexed) to repeat the
process, until finally all 256 tracks
are written as 16 interleaved sets of 16
tracks.
Number of servo tracks: 3
Interfaces: Two, SCSI (FWD) or FC
As of March, 2000 comes with support for
3590 Extended High Performance Cartridge
Tape, to again double capacity.
Devclass: FORMAT=3590E-C (not DRIVE)
Previous generation: 3590B
Next generation: 3590H
3590E? (Is a drive 3590E?) Expect to be able to tell if a 3590
drive is an E model by visual
inspection:
- Rear of drive (power cord end) having
stickers saying "Magstar Model E" and
"2x" (meaning that the EHPC feature is
installed in the drive).
- Drive display showing like "E1A-X"
(drive type, where X indicates
extended) in the lower left corner.
(See Table 5 in 3590 Operator Guide
manual.)
3590EE Extra long 3590E tapes (double length),
available only from Imation starting
early 2000. The cartridge accent color
is green instead of blue, and the label
is 'K' instead of 'J'. Must be used with
3590E drives.
3590H IBM's sixth generation of this 1/2"
cartridge technology, using a
single-reel approach and servo tracking
pre-recorded on the tape for precise
positioning. Excellent start-stop
performance.
Cartridges supported: 3590 'J' (30-90
GB), 'K' (60-180 GB)
Capacity: 30GB native, ~90 GB
compressed
Tracks: 384 (1.5 times the 3590E)
Compatibility: Can read, but not write,
128-track (3590) and 256-track (3590E)
tapes.
Supported in: TSM 5.1.6
Interfaces: Two, SCSI (FWD) or FC
Devclass: FORMAT=3590H-C (not DRIVE)
Previous generation: 3590E
Next generation: 3592 (which is a
complete departure, wholly incompatible)
3590K See: 3590 'K'
3590L AIX ODM type for 3590 Library models.
3592 The IBM TotalStorage Enterprise Tape
Drive and Cartridge model numbers,
introduced toward the end of 2003.
The drive is only a drive: it slides
into a cradle which externally provides
power to the drive. The small form
factor more severely limits the size of
panel messages, to 8 chars.
This model is a technology leap, akin to
3490->3590, meaning that though
cartridge form remains the same, there
is no compatibility whatever between
this and what came before. Cleaning
cartridges for the 3592 drive are
likewise different.
Rather than having a leader block, as in
3590 cartridges, the 3592 has a leader
pin, located behind a retractable door.
The 3592 cartridge is IBM's first one
in the 359x series with an embedded
memory chip (Cartridge Memory): Records
are written to the chip every time the
cartridge is unloaded from a 3592 J1A
tape drive. Data is read and written to
the CM via short range radio frequency
communication and includes volser, the
media in the cartridge, the data on the
media, and tape errors. These records
are then used by the IBM Statistical
Analysis and Reporting System (SARS) to
analyze and report on tape drive and
cartridge usage and help diagnose and
isolate tape errors. SARS can also be
used to proactively determine if the
tape media or tape drive is degrading
over time. Cleaning tapes also have CM,
emphatically limiting their usage to 50
cycles. Currently, only the tape drive
has the means to interact with the CM:
in the future, the robotic picker might
have that capability.
The 3592 cartridges come in four types:
- The 3592 "JA" long rewritable
cartridge: the high capacity tape
which most customers would probably
buy.
Native capacity: 300 GB (Customers
report getting up to 1.2 TB.)
Can be initialized to 60 GB to serve
in a fast-access manner.
Works with 3592 J1A tape drive.
- The 3592 "JJ" short rewritable
cartridge: the economical choice
where lesser amounts of data are
written to separate tapes.
Native capacity: 60 GB.
Works with 3592 J1A tape drive.
- The 3592 "JW" long write-once (WORM)
cartridge.
Native capacity: 300 GB.
- The 3592 "JR" short write-once (WORM)
cartridge.
Native capacity: 60 GB.
Compression type: Byte Level Compression
Scheme Swapping. With this type, it is
not possible for the data to expand.
(IBM docs also say that the drive uses
LZ1 compression, and Streaming Lossless
Data Compression (SLDC) data compression
algorithm, and ELDC.)
The TSM SCALECAPACITY operand of DEFine
DEVClass can scale native capacity back
from the full 300 GB down to a low of
60 GB.
The 3592 cartridges may live in either a
3494 library (in a new frame type - L22,
D22, and D24 - separate from any other
3590 tape drives in the library); or a
special frame of a 3584 library.
Host connectivity: Dual ported switched
fabric 2-Gbps Fibre Channel attachment
(but online to only one host at a time).
Physical connection is FC, but the drive
employs the SCSI-3 command set for
operation, in a manner greatly
compatible with the 3590, simplifying
host application support of the drive.
As with the 3590 tape generation, the
3592 has servo information
factory-written on the tape. (Do not
degauss such cartridges. If you need to
obliterate the data on a cartridge,
perform a Data Security Erase.)
Drive data transfer rate: up to 40MB/s
Data life: 30 years
Barcode label: Consists of 8 chars, the
first 6 being the tape volser, and the
last 2 being media type ("JA").
Tape vendors: Fuji, Imation (IBM will
not be manufacturing tape)
The J1A version of the drive is
supported in the 3584 library, as of mid
2004.
IBM brochure, specs: G225-6987-01
http://www.fuji-magnetics.com/en/company
/news/index2_html
Next generation: None, as of 2004/09
3599 An IBM "machine type / model" spec for
ordering any Magstar cartridges:
3599-001, -002, -003 are 3590 J
cartridges;
3599-004, -005, -006 are 3590 K
cartridges;
3599-007 is 3590 cleaning cartridge;
3599-011, -012, -013 are 3592 cartridges
3599-017 is 3592 cleaning cartridge.
3599 A product from Bow Industries for
cleaning and retensioning 3590 tape
cartridges.
www.bowindustries.com/3599.htm
3600 IBM LTO tape library, announced
2001/03/22, withdrawn 2002/10/29.
Models:
3600-109 1.8 TB autoloader
3600-220 2/4 TB tower; 1 or 2 drives
3600-R20 2/4 TB rack; 1 or 2 drives
The 220 and R20 come with two removable
magazines that can each hold up to 10
LTO data or cleaning cartridges.
3995 IBM optical media library, utilizing
double-sided, CD-sized optical platters
contained in protective plastic
cartridges. The media can be rewritable
(Magneto-Optical), CCW (Continuous
Composite Write-once), or permanent WORM
(Write-Once, Read-Many).
Each side of a cartridge is an Optical
Volume. The optical drive has a fixed,
single head: the autochanger can flip
the cartridge to make the other side
(volume) face the head.
See also: WORM
3995 C60 Make sure Device Type ends up as WORM,
not OPTICAL.
3995 drives Define as /dev/rop_ (not /dev/op_).
See APAR IX79416, which describes
element numbers vs. SCSI IDs.
3995 manuals http://www.storage.ibm.com/hardsoft/
opticalstor/pubs/pubs3995.html
3995 web page http://www.storage.ibm.com/hardsoft/
opticalstor/3995/maine.html
http://www.s390.ibm.com/os390/bkserv/hw/
50_srch.html
4560SLX IBM $6500 Modular Tape Library Base: a
tiny library which can accommodate one
or two LTO or SDLT tape drives and can
support up to 26 SDLT tape cartridges or
up to 30 LTO tape cartridges. This
modular, high-density automated tape
enclosure is available in rack version
only. Each 5U unit contains a power
supply and electronics logic. Two rows
of tape storage cells occupy the left
and right sides of the cabinet, with a
picker mechanism running down the center
aisle, feeding two drives at the far end
of the aisle.
56Kb modem uploads With 56Kb modem technology, 53Kb is the
fastest download speed you can usually
expect, and 33Kb is the highest upload
speed possible. And remember that phone
line quality can reduce that further.
Ref: www.56k.com
64-bit executable in AIX? To discern whether an AIX command or
object module is 64-bit, rather than
32-bit, use the 'file' command on it.
(This command references "signature"
indicators listed in /etc/magic.) If
64-bit, the command will report like:
64-bit XCOFF executable or object
module not stripped
See also: 32-bit executable in AIX?
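The 'file' classification rests on the two-byte XCOFF magic number at the start of the object file. A minimal sketch of the same check, assuming the conventional XCOFF magic values (0x01DF for 32-bit, 0x01F7 for 64-bit; verify against your AIX level's /etc/magic):

```python
import struct

# XCOFF magic numbers (assumed values, per AIX conventions):
# 0x01DF = 32-bit XCOFF, 0x01F7 = 64-bit XCOFF
XCOFF32, XCOFF64 = 0x01DF, 0x01F7

def xcoff_wordsize(path):
    """Return 32, 64, or None based on the file's XCOFF magic number."""
    with open(path, "rb") as f:
        magic_bytes = f.read(2)
    if len(magic_bytes) < 2:
        return None
    # XCOFF headers are big-endian on AIX
    (magic,) = struct.unpack(">H", magic_bytes)
    if magic == XCOFF32:
        return 32
    if magic == XCOFF64:
        return 64
    return None
```

Anything that is not an XCOFF object (a shell script, say) comes back None, mirroring 'file' falling through to its other /etc/magic entries.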
64-bit filesize support Was added in PTF 6 of the version 2
client.
64-bit ready? (Is ADSM?) Per Dave Cannon, ADSM Development,
1998/04/17, the ADSM server has always
used 64-bit values for handling sizes
and capacities.
7206 IBM model number for 4mm tape drive.
Media capacity: 4 GB
Transfer rate: 400 KB/S
7207 IBM model number for QIC tape drive.
Media capacity: 1.2 GB
Transfer rate: 300 KB/S
7208 IBM model number for 8mm tape drive.
Media capacity: 5 GB
Transfer rate: 500 KB/S
7331 IBM model number for a tape library
containing 8mm tapes. It comes with a
driver (Atape on AIX, IBMtape on
Solaris) for the robot to go with the
generic OST driver for the drive. That's
to support non-ADSM applications, but
ADSM has its own driver for these
devices.
Media capacity: 7 GB
Transfer rate: 500 KB/S
7332 IBM model number for 4mm tape drive.
Media capacity: 4 GB
Transfer rate: 400 KB/S
7337 A DLT library. Define in ADSM like:
DEFine LIBRary autoDLTlib LIBType=SCSI
DEVice=/dev/lb0
DEFine DRive autodltlib drive01
DEVice=/dev/mt0 ELEMent=116
DEFine DRive autodltlib drive02
DEVice=/dev/mt1 ELEMent=117
DEFine DEVclass autodlt_class
DEVType=dlt LIBRary=autodltlib
DEFine STGpool autodlt_pool
autodlt_class MAXSCRatch=15
8200 Refers to recording format for 8mm
tapes, for a capacity of about 2.3 GB.
8200C Refers to recording format for 8mm
tapes, for a capacity of about 3.5 GB.
8500 Refers to recording format for 8mm
tapes, for a capacity of about 5.0 GB.
8500C Refers to recording format for 8mm
tapes, for a capacity of about 7.0 GB.
8900 Refers to recording format for 8mm
tapes, for a capacity of about 20.0 GB.
8mm drives All are made by Exabyte.
8mm tape technology Yecch! Horribly unreliable. Tends to be
"write only" - write okay, but tapes
unreadable thereafter.
9710/9714 See: StorageTek
9840 See: STK 9840
9940b drive Devclass:
- If employing the Gresham Advantape
driver: generictape
- If employing the Tivoli driver:
ecartridge
ABC Archive Backup Client for *SM, as on
OpenVMS. The software is written by
SSSI. It uses the TSM API to save and
restore files.
See also: OpenVMS
ABSolute A Copy Group mode value (MODE=ABSolute)
that indicates that an object is
considered for backup even if it has not
changed since the last time it was
backed up; that is, force all files to
be backed up.
See also: MODE
Contrast with: MODified.
See also: SERialization (another Copy
Group parameter)
Accelis (LTO) Designer name for the next generation
(sometimes misspelled "Accellis") 3570 tape, LTO. Cartridge is same as
3570, including dual-hub, half-wound for
rapid initial access to data residing at
either end of the tape (intended to be
10 seconds or less). Physically sturdier
than Ultrium, Accelis was intended for
large-scale automated libraries.
But Accelis never made it to reality:
increasing disk capacity made the
higher-capacity Ultrium more realistic;
and two-hub tape cartridges are wasteful
in containing "50% air" instead of tape.
Accelis would have had:
Cartridge Memory (LTO CM, LTO-CM) chip
is embedded in the cartridge: a
non-contacting RF module, with
non-volatile memory capacity of 4096
bytes, provides for storage and
retrieval of cartridge, data
positioning, and user specified info.
Recording method: Multi-channel linear
serpentine
Capacity: 25 GB native, uncompressed
Transfer rate: 10-20 MB/second.
http://www.Accelis.com/
"What Happened to Accelis?":
http://www.enterprisestorageforum.com/
technology/features/article.php/1461291
See also: 3583; LTO; MAM; Ultrium (LTO)
ACCept Date TSM server command to cause the server
to accept the current date and time as
valid when an invalid date and time are
detected. Syntax:
'ACCept Date'
Note that one should not normally have
to do this, even across Daylight Savings
Time changes, as the conventions under
which application programs are run on
the server system should let the server
automatically have the correct date and
time. In Unix systems, for example, the
TZ (Time Zone) environment variable
specifies the time zone offsets for
Daylight and Standard times. In AIX you
can do 'ps eww <Server_PID>' to inspect
the env vars of the running server.
In a z/OS environment, see IBM site
article swg21153685.
See also: Daylight Savings Time
Access Line-item title from the 'Query Volume
Format=Detailed' report, which says how
the volume may be accessed: Read-Only,
Read/Write, Unavailable, Destroyed,
OFfsite. Use 'UPDate Volume' to change
the access value.
If Access is Read-Only for a storage
pool within a hierarchy of storage
pools, ADSM will skip that level and
attempt to write the data to the next
level.
Access TSM db: Column in Volumes table.
Possible values: DESTROYED, OFFSITE,
READONLY, READWRITE, UNAVAILABLE
Access Control Lists (AIX) Extended permissions which are preserved
in Backup/Restore.
"Access denied" A message which may be seen in some
environments; usually means that some
other program has the file open in a
manner that prevents other applications
from opening it (including ADSM).
Access mode A storage pool and storage volume
attribute recorded in the ADSM database
specifying whether data can be written
to or read from storage pools or storage
volumes. It can be one of:
Read/write Can read or write volume
in the storage pool.
Set with UPDate STGpool or
UPDate Volume.
Read-only Volume can only be read.
Set with UPDate STGpool or
UPDate Volume.
Unavailable Volume is not available
for any kind of access.
Set with UPDate STGpool or
UPDate Volume.
DEStroyed Possible for a primary
storage pool (only), says
that the volume has been
permanently damaged. Do
RESTORE STGpool or RESTORE
Volume.
Set with UPDate Volume.
OFfsite Possible for a copy
storage pool, says that
volume is away and can't
be mounted.
Set with UPDate Volume.
Ref: Admin Guide
See also: DEStroyed
Access time When a file was last read: its "atime"
value (stat struct st_atime).
The Backup operation results in the
file's access timestamp being changed as
each file is backed up, because as a
generalized application it is performing
conventional I/O to read the contents of
the file, and the operating system
records this access. (That is, it is not
Backup itself which modifies the
timestamp: it's merely that its actions
incidentally cause it to change.)
Beginning with the Version 2 Release 1
Level 0.1 PTF, UNIX backup and archive
processes changed the ctime instead of
user access time (atime). This was done
because the HSM feature on AIX uses
atime in assessing a file's eligibility
and priority for migration. However,
since the change of ctime conflicts with
other existing software, with this Level
0.2 PTF, UNIX backup and archive
functions now perform as they did with
Version 1: atime is updated, but not
ctime.
AIX customers might consider getting
around that by the rather painful step
of using the 'cplv' command to make a
copy of the file system logical volumes,
then 'fsck' and 'mount' the copy and run
backup; but that isn't very reliable.
One thinks of maybe getting around the
problem by remounting a mounted file
system read-only; but in AIX that
doesn't work, as lower level mechanisms
know that the singular file has been
touched. (See topic "MOUNTING FILE
SYSTEMS READ-ONLY FOR BACKUP" near the
bottom of this documentation.)
Network Appliance devices can make an
instant snapshot image of a file system
for convenient backup, a la AFS design.
Veritas Netbackup can restore the atime
but at the expense of the ctime
(http://seer.support.veritas.com/docs/
240723.htm)
See also: FlashCopy
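The timestamp behavior described above can be observed directly from the stat structure. A small illustration, purely to demonstrate the Unix semantics the entry relies on: reading a file may update st_atime (mount options permitting), but never touches st_ctime or st_mtime.

```python
import os
import tempfile

def timestamps_after_read(path):
    """Read a file and report whether ctime and mtime survived intact.

    st_atime may or may not advance (noatime/relatime mounts suppress
    it), so only the two timestamps a read can never change are checked.
    """
    before = os.stat(path)
    with open(path, "rb") as f:
        f.read()                  # ordinary I/O, as a backup client does
    after = os.stat(path)
    return (after.st_ctime == before.st_ctime,
            after.st_mtime == before.st_mtime)
```

On any readable file this returns (True, True): this is why a plain read-for-backup leaves the "changed" and "modified" times alone while incidentally touching the access time.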
Accessor On a tape robot (e.g., 3494) is the part
which moves within the library and
carries the arm/hand assembly.
See also: Gripper
Accounting Records client session activities, with
an accounting record written at the end
of each client node session (in which a
server interaction is required). The
information recorded chiefly reflects
volumetrics, and thus would be more
useful for cross-charging purposes than
for more illuminating uses. Note that a
client session which does not require
interaction with the server, such as
'q option', does not result in an
accounting record being written.
A busy system will create VOLUMINOUS
accounting files, so use judiciously.
See also: dsmaccnt.log; SUMMARY
Accounting, query 'Query STatus', seek "Accounting:".
Unfortunately, its output is meager,
revealing only On or Off.
See also: dsmaccnt.log
Accounting, turn off 'Set ACCounting OFf'
Accounting, turn on 'Set ACCounting ON'
See also: dsmaccnt.log
Accounting log Unix: Is file dsmaccnt.log, located in
the server directory were no overriding
environment variables are in effect, or
the directory specified by the
DSMSERV_DIR environment variable, or the
directory specified on the
DSMSERV_ACCOUNTING_DIR environment
variable.
MVS (OS/390): the recording occurs in
SMF records, subtype 14.
Accounting recording begins when
'Set ACCounting ON' is done and client
activity occurs. The server keeps the
file open, and the file will grow
endlessly: there is no expiration
pruning done by TSM; so you should cut
the file off periodically, either when
the server starts/ends, or by turning
accounting off for the duration of the
cut-off.
See also: dsmaccnt.log
Accounting log directory Specified via environment variable
DSMSERV_ACCOUNTING_DIR (q.v.) in Unix
environments, or Windows Registry key.
If that's not specified, then the
directory will be that specified by the
DSMSERV_DIR environment variable; and if
that is not specified, then it will be
the directory wherein the TSM server was
started.
Introduced late in *SMv3.
Accounting record layout/fields See the Admin Guide for a description
of record contents. Field 24, "Amount
of media wait time during the session",
refers to time waiting for tape mounts.
Note that maintenance levels may add
accounting fields.
See layout description in "ACCOUNTING
RECORD FORMAT" near the bottom of this
functional directory.
Accounting records processing There are no formal tools for doing
this. The IBM FTP site's adsm/nosuppt
directory contains an adsmacct.exec REXX
script, but that's it. See
http://people.bu.edu/rbs/TSM_Aids.html
for a Perl program to do this.
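Since the dsmaccnt.log records are plain comma-delimited text, a few lines of script suffice for simple volumetrics. A sketch that totals field 24, the media (tape mount) wait time; the field position is taken from the "Accounting record layout/fields" entry above and should be rechecked against the Admin Guide for your server level, since maintenance levels may add fields:

```python
import csv
from io import StringIO

MEDIA_WAIT_FIELD = 24   # 1-based field number; verify for your level

def total_media_wait(accounting_text):
    """Sum the media-wait seconds across all accounting records."""
    total = 0
    for record in csv.reader(StringIO(accounting_text)):
        # Skip short/truncated records rather than guessing at them
        if len(record) >= MEDIA_WAIT_FIELD:
            total += int(record[MEDIA_WAIT_FIELD - 1])
    return total
```

Feed it the contents of dsmaccnt.log (or the SMF extract, reformatted) to get a rough measure of how long sessions sat waiting for tape mounts.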
ACF 3590 tape drive: Automatic Cartridge
Facility: a magazine which can hold 10
cartridges.
Note that this does not exist as such on
the 3494: it has a 10-cartridge
Convenience I/O Station, which is little
more than a pass-through area.
ACL handling (Access Control Lists) ACL info will be stored in the *SM
database by Archive and Backup, unless
it is too big, in which case the ACL
info will be stored in a storage pool,
which can be controlled by DIRMc.
See also: Archive; Backup; DIRMc;
INCRBYDate
Ref: Using the Unix Backup-Archive
Clients (indexed under Access
Permissions, describing ACLs as
"extended permissions").
ACLs (Access Control Lists) and Changes to Unix ACLs do not change the
mtime affecting backup file mtime, so such a change will not
cause the file to be backed up by date.
ACLS Typically a misspelling of "ACSLS", but
could be Auto Cartridge Loader System.
ACS Automated Cartridge System
ACSACCESSID Server option to specify the id for the
ACS access control. Syntax:
ACSACCESSID name
Code a name 1-64 characters long.
The default id is hostname.
ACSDRVID Device Driver ID for ACSLS.
ACSLOCKDRIVE Server option to specify if the drives
within the ACSLS libraries are to be locked.
Drive locking ensures the exclusive use
of the drive within the ACSLS library in
a shared environment. However, there
are some performance improvements if
locking is not performed. If the ADSM
drives are not shared with other
applications in the configuration then
drive locking is not required.
Syntax: ACSLOCKDRIVE [YES | NO]
Default: NO
ACSLS Refers to the STK Automated Cartridge
System Library Software. Based upon an
RPC client (SSI) - server (CSI) model,
it manages the physical aspects of tape
cartridge storage and retrieval, while
data retrieval is separate, over SCSI or
other method. Whenever TSM has a
command to send to the robot arm, it
changes the command into something that
works rather like an RPC call that goes
over to the ACSLS software, then ACSLS
issues the SCSI commands to the robot
arm. ACSLS is typically needed only
when sharing a library, wherein ACSLS
arbitrates requests; otherwise TSM may
control the library directly.
Performance: As of 2000/06, severely
impaired by being single-threaded,
resulting in long tape mount times as
*SM queries the drive several times
before being sure that a mount is safe.
http://www.stortek.com/StorageTek/
software/acsls/
Debugging: Use 'rpcinfo -p' on the
server to look for the following ACSLS
programs being registered in Portmap:
program vers proto port
536871166 2 tcp 4354
300031 2 tcp 4355
then use 'rpcinfo -t ...' to reflect off
the program instances.
ACSQUICKINIT Server option to specify if the
initialization of the ACSLS library
should be quick or full initialization
during the server startup. The full
initialization matches the ACSLS
inventory with the ADSM inventory and
validate the locking for each ADSM owned
volume. It also validates the drive
locking and dismount all volumes
currently in the ADSM drive. The full
initialization takes about 1-2 seconds
per volume and can take a long time
during the server startup if the library
inventory is large. ACSQUICKINIT
bypasses all the inventory matching,
lock validation and volume dismounting
from the drive. The user must ensure
the integrity of the ADSM inventory and
drive availability; all ADSM volumes or
drives are assumed locked by the same
lock_id and available. This option is
useful for server restart, and should
only be used if all ADSM inventory and
resources remain the same while the
server is down. Syntax:
ACSQUICKINIT [YES | NO]
Default: NO
ACSTIMEOUTX Server option to specify the multiple
for the built-in timeout value for the
ACSLS API. The built-in timeout value
for the ACS audit API is 1800 seconds;
for all other APIs it is 600 seconds.
If the multiple value specified is 5,
the timeout value for the audit API
becomes 9000 seconds and for all other
APIs it becomes 3000 seconds.
Syntax: ACSTIMEOUTX value
Code a number from 1 - 100.
Default: 1
Activate Policy Set See: ACTivate POlicyset; Policy set,
activate
ACTivate POlicyset *SM server command to specify an
existing policy set as the Active
policy set for a policy domain. Syntax:
'ACTivate POlicyset <DomainName>
<PolicySet>'
(Be sure to do 'VALidate POlicyset'
beforehand.)
You need to do an Activate after making
management class changes.
ACTIVE Column name in the ADMIN_SCHEDULES SQL
database table. Possible values: YES,
NO.
SELECT * FROM ADMIN_SCHEDULES
Active Directory See: Windows Active Directory
Active file system A file system for which space management
is activated. HSM can perform all space
management tasks for an active file
system, including automatic migration,
recall, and reconciliation and selective
migration and recall. Contrast with
inactive file system.
Active files, identify in Select Where allowed: STATE='ACTIVE_VERSION'
See also: Inactive files, identify in
Select; STATE
Active files, number and bytes Do 'EXPort Node NodeName \
FILESpace=FileSpaceName \
FILEData=BACKUPActive \
Preview=Yes'
Message ANR0986I will report the number
of files and bytes.
An alternate method, reporting MB only,
follows the definition of Active files,
meaning files remaining in the file
system - as reflected in a Unix 'df'
command and:
SELECT SUM(CAPACITY*PCT_UTIL/100) FROM
FILESPACES WHERE NODE_NAME='____'
This Select is very fast and obviously
depends upon whole file system backups.
(Selective backups and limited backups
can throw it off.)
See also: Inactive files, number and
bytes; Estimate
Active files, report in terms of MB By definition, Active files are those
which are currently present in the
client file system, which a current
backup causes to be reflected in
filespace numbers, so the following
yields reasonable results:
SELECT NODE_NAME, FILESPACE_NAME,
FILESPACE_TYPE, CAPACITY AS "File
System Size in MB", PCT_UTIL,
DECIMAL((CAPACITY * (PCT_UTIL / 100.0)),
10, 2) AS "MB of Active Files"
FROM FILESPACES ORDER BY NODE_NAME,
FILESPACE_NAME
Caveats: The amount of data in a TSM
server filespace will differ somewhat
from the client file system where some
files are excluded from backups, and
more so where client compression is
employed. But in most cases the numbers
will be good.
Active files for a user, identify via SELECT COUNT(*) AS "Active files count"-
Select FROM BACKUPS WHERE -
NODE_NAME='UPPER_CASE_NAME' AND -
FILESPACE_NAME='___' AND OWNER='___' -
AND STATE='ACTIVE_VERSION'
Active policy set The policy set within a policy domain
most recently subjected to an 'activate'
to effectively establish its
specifications as those to be in effect.
This policy set is used by all client
nodes assigned to the current policy
domain. See policy set.
Active Version (Active File) The most recent backup copy of an object
stored in ADSM storage for an object
that currently exists on a file server
or workstation. An active version
remains active and exempt from deletion
until it is replaced by a new backup
version, or ADSM detects during a backup
that the user has deleted the original
object from a file server or
workstation. Note that active and
inactive files may exist on the same
volumes.
See also: ACTIVE_VERSION;
Inactive Version; INACTIVE_VERSION
Active versions, keep in stgpool For faster restoral, you may want to
retain Active files in a higher storage
pool of your storage pool hierarchy.
There has been no operand in the product
to allow you to specify this explicitly;
but you can roughly achieve that end via
the Stgpool MIGDelay value, to keep
recent (Active) files in the higher
storage pool. Of course, if there is
little turnover in the file system
feeding the storage pool, Active files
will get old and will migrate.
ACTIVE_VERSION SQL DB: State value in Backups table for
a current, Active file.
See also: DEACTIVATE_DATE
Activity log Contains all messages normally sent to
the server console during server
operation. This is information stored
in the TSM server database, not in a
separate file.
Do 'Query ACtlog' to get info.
Each time the server starts it begins
logging with message:
ANR2100I Activity log process has
started.
See also: Activity log pruning
Activity log, create an entry As of TSM 3.7.3 you can, from the client
side, cause messages to be added to the
server Activity Log (ANE4771I) by using
the API's dsmLogEvent.
Another means, crude but effective: use
an unrecognized command name, like:
"COMMENT At this time we will be
powering off our tape robot."
It will show up on an ANR2017I message,
followed by "ANR2000E Unknown command -
COMMENT.", which can be ignored.
See also: ISSUE MESSAGE
Activity log, number of entries There is no server command to readily
determine the amount of database space
consumed by the Activity Log. The only
close way is to count the number of log
entries, as via batch command:
'dsmadmc -id=___ -pa=___ q act
BEGINDate=-9999 | grep ANR | wc -l'
or do: SELECT COUNT(*) FROM ACTLOG
See also: Activity log pruning
Activity log, search 'Query ACtlog ... Search='Search string'
Activity log, Select entries more than SELECT SERVERNAME,NODENAME,DATE_TIME -
an hour old FROM ACTLOG WHERE -
(CAST((CURRENT_TIMESTAMP-DATE_TIME) -
HOURS AS INTEGER)>1)
Activity log, seek a message number 'Query ACtlog ... MSGno=____' or
SELECT MESSAGE FROM ACTLOG WHERE -
MSGNO=0988
Seek one less than an hour old:
SELECT MESSAGE FROM ACTLOG WHERE -
MSGNO=0986 AND -
DATE_TIME<(CURRENT_TIMESTAMP-(1 HOUR))
Activity log, seek message text SELECT * FROM ACTLOG WHERE MESSAGE LIKE
'%<process_name>%'
Activity log, seek severity messages 'SELECT * FROM ACTLOG WHERE \
in last 2 days (SEVERITY='W' OR SEVERITY='E' OR \
SEVERITY='D') AND \
DAYS(CURRENT_TIMESTAMP)- \
DAYS(DATE_TIME) <2
Activity log content, query 'Query ACtlog'
Activity log pruning (prune) Occurs just after midnite, driven by
'Set ACTlogretention N_Days' value.
The first messages which always remain
in the Activity Log, related to the
pruning, are ANR2102I and ANR2103I.
Activity log retention period, query 'Query STatus', look for "Activity Log
Retention Period"
Activity log retention period, set 'Set ACTlogretention N_Days'
Activity Summary Table See: SUMMARY table
ACTLOG The *SM database Activity Log table.
Columns: DATE_TIME, MSGNO, SEVERITY,
MESSAGE, ORIGINATOR, NODENAME,
OWNERNAME, SCHEDNAME, DOMAINNAME, SESSID
ACTlogretention See: Set ACTlogretention
AD See: Windows Active Directory
Adaptive Differencing A.k.a "adaptive sub-file backup" and
"mobile backup", to back up only the
changed portions of a file rather than
the whole file. Is employed for files >
1 KB and < 2 GB. (The low-end limit
(1024 bytes) was due to some strange
behavior with really small files, e.g.,
if a file started out at 5 k and then
was truncated to 8 bytes. The solution
was to just send the entire file if the
file fell below the 1 KB threshold - no
problem since these are tiny files.)
Initially introduced for TSM4 Windows
clients, intended for roaming users
needing to back up data on laptop
computers, over a telephone line. Note
that the transfer speed thus varies
greatly according to the phone line. See
"56Kb modem uploads" for insight.
(All 4.1+ servers can store the subfile
data sent by the Windows client -
providing that it is turned on in the
server, via 'Set SUBFILE'.)
Limitations: the differencing subsystem
in use is limited to 32 bits, meaning 2
GB files. The developers chose 2 GB
(instead of 4 GB) as the limit to avoid
any possible boundary problems near the
32-bit addressing limit and also because
this technology was aimed at the mobile
market (read: Who is going to have files
on their laptops > 2 GB?). As of 2003
there are no plans to go to 64 bits.
Ref: TSM 3.7.3 and 4.1 Technical Guide
redbook; Windows client manual;
Whitepaper on TSM Adaptive Sub-file
Differencing at http://www.ibm.com/
software/tivoli/library/whitepapers/
See also: Set SUBFILE; SUBFILE*
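The size window described in this entry (larger than 1 KB, smaller than 2 GB) reduces to a simple eligibility test; the constants below come from the entry itself, not from any published API:

```python
# Size window for adaptive sub-file (differencing) backup, per the
# entry above: > 1 KB and < 2 GB. Files outside the window are sent
# whole rather than differenced.
MIN_SIZE = 1024            # 1 KB lower threshold (tiny-file behavior)
MAX_SIZE = 2 * 1024**3     # 2 GB limit of the 32-bit differencing engine

def subfile_eligible(size_bytes):
    """Rough sketch: would a file of this size qualify for sub-file backup?"""
    return MIN_SIZE < size_bytes < MAX_SIZE
```

So a 10 KB spreadsheet on a laptop qualifies, while a 512-byte note or a 3 GB database dump is transferred in full.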
ADIC Vendor: Advanced Digital Information
Corporation - a leading
device-independent storage solutions
provider to the open systems
marketplace. A reseller.
www.adic.com
ADMIN Name of the default administrator ID,
from the TSM installation.
Admin GUI There is none for ADSMv3: there is a
command line admin client, and a web
admin client instead.
Administration Center The TSM Administration Center, a
(Admin Center) Java-based replacement for the Web Admin
interface, new in TSM 5.3. ISC is its
base and Administration Center is only a
"plug in". Beware that ISC is massive
Java.
IBM site search: TSMADMINCENTER
FAQ: IBM site Technote 1193419
See also: ISC
Administrative client A program that runs on a file server,
workstation, or mainframe. This program
allows an ADSM administrator to control
and monitor an ADSM server using ADSM
administrative commands.
Contrast with backup-archive client.
Administrative command line interface Beginning with the 3.7 client, the
Administrative command line interface is
no longer part of the Typical install,
in order to bring it in line with the
needs of the "typical" TSM user, who is
an end user who does not require this
capability. If you run a Custom install,
you can select the Admin component to be
installed.
Administrative processes which failed Try 'Query EVent * Type=Administrative
EXceptionsonly=Yes'.
Administrative schedule A schedule to control operations
affecting the TSM server.
Note that you can't redirect output from
an administrative schedule. That is, if
you define an administrative schedule,
you cannot code ">" or ">>" in the CMD.
This seems to be related to the
restriction that you can't redirect
output from an Admin command issued from
the ADSM console.
Experience shows that an admin schedule
will not be kicked off if a Server
Script is running (at least in ADSMv3).
The only restricted commands are MACRO
and Query ACtlog, because...
MACRO: Macros are valid only from
administrative clients. Scheduling of
admin commands is contained solely
within the server and the server has
no knowledge of macros.
Query ACtlog: Since all output from
scheduled admin commands is forced to
the actlog, then scheduling a Query
ACtlog would force the resulting
output right back to the actlog, thereby
doubling the size of the actlog.
See: DEFine SCHedule, administrative
Administrative schedule, run one time Define the administrative schedule with
PERUnits=Onetime.
Administrative schedules, disable See: DISABLESCheds
Administrative schedules, prevent See: DISABLESCheds
Administrator A user who is registered with an ADSM
server as an administrator.
Administrators are assigned one or more
privilege classes that determine which
administrative tasks they can perform.
Administrators can use the
administrative client to enter ADSM
server commands and queries according to
their privileges.
Be aware that ADSM associates schedules
and other definitions with the
administrator who created or last
changed them, and that removal or locking
of the admin can cause those objects to
stop operating. In light of this
affiliation, it is best for a shop to
define a general administrator ID (much
like root on a Unix system) which should
be used to manage resources having
sensitivity to the administrator ID.
Administrator, add See: Administrator, register
Administrator, lock out 'LOCK Admin Admin_Name'
See also: Administrators, web, lock out
Administrator, password, change 'UPDate Admin Admin_Name PassWord'
Administrator, register 'REGister Admin ...' (q.v.)
The administrator starts out with
Default privilege class. To get more,
the 'GRant AUTHority' command must be
issued.
Administrator, remove 'REMove Admin Adm_Name'
Administrator, rename 'REName Admin Old_Adm_Name New_Name'
Administrator, revoke authority 'REVoke AUTHority Adm_Name
[CLasses=SYstem|Policy|STorage|
Operator|Analyst]
[DOmains=domain1[,domain2...]]
[STGpools=pool1[,pool2...]]'
Administrator, unlock 'UNLOCK Admin Adm_Name'
Administrator, update info or password 'UPDate Admin ...' (q.v.)
Administrator files Located in /usr/lpp/adsm/bin/
Administrator passwords, reset Shamefully, some sites lose track of all
their administrator passwords, and need
to restore administrator access. The only
way is to bring the server down and then
start it interactively, which is to say
implicitly under the SERVER_CONSOLE
administrator id.
See: HALT; UPDate Admin
Administrator privilege classes From highest level to lowest:
System - Total authority
Policy - Policy domains, sets,
management classes, copy
groups, schedules.
Storage - Manage storage resources.
Operator - Server operation,
availability of storage
media.
Analyst - Reset counters, track server
statistics.
Default - Can do queries.
Right out of a 'REGister Admin' cmd, the
individual gets Default privilege. To
get more, the 'GRant AUTHority' command
must be issued.
Administrators, query 'Query admin * Format=Detailed'
Administrators, web, lock out You can update the server options file
COMMMethod option to eliminate the HTTP
and HTTPS specifications.
See also: "Administrator, lock out" for
locking out a single administrator.
adsm The command used to invoke the standard
ADSM interface (GUI), for access to
Utilities, Server, Administrative
Client, Backup-Archive Client, and HSM
Client management. /usr/bin/adsm ->
/usr/lpp/adsmserv/ezadsm/adsm.
Contrast with the 'dsmadm' command,
which is the GUI for pure server
administration.
ADSM ADSTAR Distributed Storage Manager.
Version 1 Release 1 launched July 29,
1993.
V2.1 1995 V3.1 1997
Consisted of Versions 1, 2, and 3
through Release 1.
See also: IBM Tivoli Storage Manager;
Tivoli Storage Manager; TSM; WDSF
ADSM components installed AIX: 'lslpp -l "adsm*"'
See also: TSM components installed
ADSM monitoring products ADSM Manager (see
http://www.mainstar.com/adsm.htm).
Tivoli Decision Support for Storage
Management Analysis. This agent program
now ships free with TSM V4.1; however
you do need a Tivoli Decision Support
server. See redbook Tivoli Storage
Management Reporting SG24-6109.
See also: TSM monitoring products.
ADSM origins See: WDSF
ADSM server version/release level Revealed in server command Query STatus.
Is not available in any SQL table via
Select.
ADSM usage, restrict by groups Use the "Groups" option in the Client
System Options file (dsm.sys) to name
the Unix groups which may use ADSM
services. See also "Users" option.
ADSM.DISKLOG (MVS) Is created as a result of the ANRINST
job. You can find a sample of the JCL
in the ADSM.SAMPLIB.
ADSM.SYS The C:\adsm.sys directory is the
"Registry Staging Directory", backed up
as part of the system object backup
(systemstate and systemservices
objects), as the Backup client is
traversing the C: DRIVE. ADSM.SYS is
excluded from "traditional" incremental
and selective backups ("exclude
c:\adsm.sys\...\*" is implicit - but
should really be
"exclude.dir c:\adsm.sys", to avoid
timing problems.)
Note that backups may report
adsm.sys\WMI, adsm.sys\IIS and
adsm.sys\EVENTLOG as "skipped": these
are not files, but subdirectories. You
may employ "exclude.dir c:\adsm.sys"
in your include-exclude list to
eliminate the messages. (A future
enhancement may implicitly do
exclude.dir.)
For Windows 2003, ADSM.SYS includes VSS
metadata, which also needs to be backed
up.
See: BACKUPRegistry; NT Registry, back
up; REGREST
ADSM_DD_* These are AIX device errors (circa
1997), as appear in the AIX Error Log.
ADSM logs certain device errors in the
AIX system error log. Accompanying Sense
Data details the error condition.
ADSM_DD_LOG1 (0XAC3AB953)
DEVICE DRIVER SOFTWARE ERROR
Logged by the ADSM device driver when a
problem is suspected in the ADSM device
driver software. For example, if the
ADSM device driver issues a SCSI I/O
command with an illegal operation code
the command fails and the error is
logged with this identifier.
ADSM_DD_LOG2 (0X5680E405)
HARDWARE/COMMAND-ABORTED ERROR
Logged by the ADSM device driver when
the device reports a hardware error or
command-aborted error in response to a
SCSI I/O command.
ADSM_DD_LOG3 (0X461B41DE)
MEDIA ERROR
Logged by the ADSM device driver when a
SCSI I/O command fails because of
corrupted or incompatible media, or
because a drive requires cleaning.
ADSM_DD_LOG4 (0X4225DB66)
TARGET DEVICE GOT UNIT ATTENTION
Logged by the ADSM device driver after
receiving a UNIT ATTENTION notification
from a device. UNIT ATTENTIONs are
informational and usually indicate that
some state of the device has changed.
For example, this error would be logged
if the door of a library device was
opened and then closed again. Logging
this event indicates that the activity
occurred and that the library inventory
may have been changed.
ADSM_DD_LOG5 (0XDAC55CE5)
PERMANENT UNKNOWN ERROR
Logged by the ADSM device driver after
receiving an unknown error from a
device in response to a SCSI I/O cmd.
There is no single cause for this: the
cause is to be determined by examining
the Command, Status Code, and Sense
Data. For example, it could be that a
SCSI command such as Reserve (X'16') or
Release (X'17') was issued with no args
(rest of Command is all zeroes).
adsmfsm /etc/filesystems attribute, set "true",
which is added when 'dsmmigfs' or its
GUI equivalent is run to add ADSM HSM
control to an AIX file system.
Adsmpipe An unsupported Unix utility which uses
the *SM API to provide archive, backup,
retrieve, and restore facilities for any
data that can be piped into it,
including raw logical volumes. (In that
TSM 3.7+ can back up Unix raw logical
volumes, there is no need for Adsmpipe to
serve that purpose. However, it is still
useful for situations where it is
inconvenient or impossible to back up a
regular file, such as capturing the
output of an Oracle Export operation
where there isn't sufficient Unix disk
space to hold it for 'dsmc i'.)
By default, files are stored on the
server under filespace name "/pipe"
(which can be overridden via -s).
Do 'adsmpipe' to see usage.
-f Mandatory option to specify the name
used for the file in the filespace.
-c To backup file to the *SM server.
-f here specifies the arbitrary name
to be assigned to the file as it is
to be stored in the *SM server.
Input comes from Stdin.
Messages go to Stderr.
-x To restore file from the *SM server.
Do not include the filespace name in
the -f spec.
Output goes to Stdout.
Messages go to Stderr.
-t To list previous backup files.
Messages go to Stderr.
-m To choose a management class.
The session will show up as an ordinary
backup, including in accounting data.
There is a surprising amount of
crossover between this API-based
facility and the standard B/A client:
'dsmc q f' will show the backup as type
"API:ADSMPIPE".
'dsmc q ba -su=y /pipe/\*' will show
the files.
'dsmc restore -su=y /pipe/<Filename>'
will restore the file.
To get the software: go to
http://www.redbooks.ibm.com/, search on
the redbook title (or "adsmpipe"), and
then on its page click Additional
Material, whereunder lies the utility.
That leads to:
ftp://www.redbooks.ibm.com/redbooks/
SG244335/
(The file may be labeled "adsmpipe.tar"
but may in fact be a compressed file;
so should actually have been named
"adsmpipe.tar.Z".)
Ref: Redbook "Using ADSM to Back Up
Databases" (SG24-4335)
.adsmrc (Unix client) The ADSMv3 Backup/Archive GUI introduced
an Estimate function. It collects
statistics from the ADSM server, which
the client stores, by *SM server
address, in the .adsmrc file in the
user's Unix home directory, or Windows
dsm.ini file.
Client installation also creates this
file in the client directory.
Ref: Client manual chapter 3 "Estimating
Backup processing Time"; ADSMv3
Technical Guide redbook
See also: dsm.ini; Estimate; TSM GUI
Preferences
adsmrsmd.dll Windows library provided with the TSM
4.1 server for Windows. (Not installed
with 3.7, though.) For Removable
Storage Management (RSM). Should be in
directory:
c:\program files\tivoli\tsm\server\
as both:
adsmrsm.dll and adsmrsmd.dll
Messages: ANR9955W
See also: RSM
adsmscsi Older device driver for Windows (2000
and lower), for each disk drive.
With Windows 2003 you instead use
tsmscsi, installing it on each drive
now, rather than having one device
driver for all the drives. See manuals.
adsmserv.licenses ADSMv2 file in /usr/lpp/adsmserv/bin/,
installed with the base server code
and updated by the 'REGister LICense'
command to contain encoded character
data (which is not the same as the hex
strings you typed into the command).
For later ADSM/TSM releases, see
"nodelock".
If the server processor board is
upgraded such that its serial number
changes, the REGister LICense procedure
must be repeated - but you should first
clear out the
/usr/lpp/adsmserv/bin/adsmserv.licenses
file, else repeating "ANR9616I Invalid
license record" messages will occur.
See: License...; REGister LICense
adsmserv.lock The ADSM server lock file. It both
carries information about the currently
running server, and serves as a lock
point to prevent a second instance from
running. Sample contents:
"dsmserv process ID 19046 started Tue
Sep 1 06:46:25 1998".
See also: dsmserv.lock
ADSTAR An acronym: ADvanced STorage And
Retrieval. In the 1992 time period, IBM
under John Akers tried spinning off
subsidiary companies to handle the
various facets of IBM business. ADSTAR
was the advanced storage company, whose
principal product was hardware, but also
created some software to help utilize
the hardware they made. Thus, ADSM was
originally a software product produced
by a hardware company. Lou Gerstner
subsequently became CEO, thought little
of the disparate sub-companies approach,
and re-reorganized things such that
ADSTAR was reduced to mostly a name,
with its ADSM product now being
developed under the software division.
ADSTAR Distributed Storage Manager A client/server program product that
(ADSM) provides storage management services to
customers in a multivendor computer
environment.
Advanced Device Support license For devices such as a 3494 robotic tape
library.
Advanced Program-to-Program An implementation of the SNA LU6.2
Communications (APPC) protocol that allows interconnected
systems to communicate and share the
processing of programs. See Systems
Network Architecture Logical Unit 6.2
and Common Programming Interface
Communications.
Discontinued as of TSM 4.2.
afmigr.c Archival migration agent.
See also: dfmigr.c
AFS Through TSM 5.1, you can use the
standard dsm and dsmc client commands on
AFS file systems, but they cannot back
up AFS Access Control Lists for
directories or mount points: use dsm.afs
or dsmafs, and dsmc.afs or dsmcafs to
accomplish complete AFS backups by file.
The file backup client is installable
from the adsm.afs.client installation
file, and the DFS fileset backup agent
is installable from adsm.butaafs.client.
In ADSM, use of the AFS/DFS clients
required purchase of the Open Systems
Environment Support license, for the
server to receive the files sent by that
client software.
As of AFS 3.6, AFS itself supports
backups to TSM through XBSA (q.v.),
meaning that buta will no longer be
necessary - and that TSM, as of 5.1, has
discontinued development of the
now-irrelevant backup functionality in
the TSM client. See:
http://www.ibm.com/software/stormgmt/
afs/manuals/Library/unix/en_US/HTML/
RelNotes/aurns004.htm#HDRTSM_NEW
See also: OpenAFS
OpenAFS OpenAFS backup using TSM can be
performed via XBSA. See:
http://www.berningeronline.net/docstacks
/HTML/relnotes/tsmbackup.html
Note that XBSA has been available only
for AIX and Solaris/Sparc.
Further:
https://lists.openafs.org/pipermail/
openafs-info/2003-August/010421.htm
AFS and TSM 5.x There is no AFS support in TSM 5.x, as
there is none specifically in AIX 5.x
(AIX 4.3.3 being the latest). This seems
to derive from the change in the climate
of AFS, where it has gone open-source,
thus no longer a viable IBM/Transarc
product.
AFS backups, delete You can use 'delbuta' to delete from AFS
and TSM.
Or: Use 'deletedump' from the backup
interface to delete the buta dumps from
the AFS backup database. The only extra
step you need to do is run 'delbuta -s'
to synchronize the TSM server. Do this
after each deletedump run, and you
should be all set.
AFS backups, reality Backing up AFS is painful no matter how
you do it... Backup by volume (using the
*SM replacement for butc) is fast, but
can easily consume a LOT of *SM storage
space because it is a full image backup
every time. To do backup by file
properly, you need to keep a list of
mount points and have a backup server
(or set of clients) that has a lot of
memory so that you can use an AFS memory
cache - and using a disk cache takes
"forever".
AFSBackupmntpnt Client System Options file option, valid
only when you use dsmafs and dsmcafs.
(dsmc will emit error message ANS4900S
and ignore the option.)
Specifies whether you want ADSM to see
an AFS mount point as a mount point (Yes)
or as a directory (No):
Yes ADSM considers an AFS mount point to
be just that: ADSM will back up
only the mount point info, and not
enter the directory.
This is the safer of the two
options, but limits what will be
done.
No ADSM regards an AFS mount point as a
directory: ADSM will enter it and
(blindly) back up all that it finds
there.
Note that this can be dangerous, in
that use of the 'fts crmount'
command is open to all users, who
through intent or ignorance can
mount parts or all of the local
file system or a remote one, or
even create "loops".
All of this is to say that file-oriented
backups of AFS file systems are
problematic.
See also: DFSBackupmntpt
Age factor HSM: A value that determines the weight
given to the age of a file when HSM
prioritizes eligible files for
migration. The age of the file in this
case is the number of days since the
file was last accessed. The age factor
is used with the size factor to
determine migration priority for a file.
It is a weighting factor, not an
absolute number of days since last
access.
Defined when adding space management to
a file system, via dsmhsm GUI or
dsmmigfs command.
See also: Size factor
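The weighting described above can be sketched in a few lines (illustrative only; this is NOT the actual HSM formula, and the function name and numbers are invented, simply to show how weighting factors, rather than absolute day counts, combine):

```python
# Illustrative sketch only - NOT the actual HSM formula. It shows how
# weighting factors (not absolute day counts) combine file age and
# size into a single migration priority.

def migration_priority(days_since_access, size_kb, age_factor, size_factor):
    """Higher value = more eligible for migration."""
    return age_factor * days_since_access + size_factor * size_kb

# With age weighted heavily, an old small file can outrank a young
# large one:
old_small = migration_priority(200, 10, age_factor=5, size_factor=1)
young_big = migration_priority(2, 500, age_factor=5, size_factor=1)
assert old_small > young_big   # 1010 vs 510
```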
agent.lic file As in /usr/tivoli/tsm/client/oracle/bin/
Is the TDPO client license file. Lower
level servers don't have server side
licensing. TSM uses that file to verify
on the client side. TDPO will not run
without a valid agent.lic file.
Aggregate See: Aggregates; Reclamation; Stored
Size.
Aggregate data transfer rate Statistic at end of Backup/Archive job,
reflecting transmission over the full
job time, which thus includes all client
"think time", file system traversal, and
even time the process was out of the
operating system dispatch queue. Is
calculated by dividing the total number
of bytes transferred by the elapsed
processing time. Both Tivoli Storage
Manager processing and network time are
included in the aggregate transfer rate.
Therefore, the aggregate transfer rate
is lower than the network transfer rate.
Contrast with Network data transfer
rate, which can be expected to be a much
higher number because of the way it is
calculated.
Ref: B/A Client manual glossary.
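The difference between the two statistics is just the time base used in the division. A minimal sketch (numbers are illustrative, not TSM output):

```python
# Same byte count, two different time bases: the aggregate rate uses
# total elapsed job time (including client "think time" and file
# system traversal); the network rate uses only data-transmission
# time, so it is always at least as high.

def aggregate_rate(total_bytes, elapsed_seconds):
    return total_bytes / elapsed_seconds

def network_rate(total_bytes, transmit_seconds):
    return total_bytes / transmit_seconds

total = 600 * 1024 * 1024          # 600 MB moved by the job
agg = aggregate_rate(total, 300)   # 5-minute elapsed time
net = network_rate(total, 120)     # only 2 minutes on the wire
assert net >= agg
```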
Aggregate function SQL: A function, such as Sum(), Count(),
Avg(), and Var(), that you can use to
calculate totals. In writing expressions
and in programming, you can use SQL
aggregate functions to determine various
statistics on sets of values.
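These functions behave much the same in any SQL engine; a toy illustration via Python's bundled sqlite3 (the table and column names are invented; TSM's Var() has no sqlite3 counterpart and is omitted):

```python
import sqlite3

# Toy table loosely resembling per-node occupancy data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE occupancy (node TEXT, mb REAL)")
con.executemany("INSERT INTO occupancy VALUES (?, ?)",
                [("A", 100.0), ("A", 300.0), ("B", 50.0)])

# Aggregate functions collapse the set of rows into single totals.
total, count, avg = con.execute(
    "SELECT SUM(mb), COUNT(*), AVG(mb) FROM occupancy").fetchone()
print(total, count, avg)   # 450.0 3 150.0
```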
Aggregated? In ADSMv3 'Query CONtent ...
Format=Detailed': Reveals whether or not
the file is stored in the server in an
Aggregate and, if so, the position
within the aggregate, as in "11/23". If
not aggregated, it will report "No".
See also: Segment Number; Stored Size
Aggregates Refers to the Small Files Aggregation
(aka Small File Aggregation) feature in
ADSMv3. During Backup and Archive
operations, small files are
automatically packaged into larger
objects called Aggregates, to be
transferred and managed as a whole, thus
reducing overhead (database and tape
space) and improving performance. An
Aggregate is a single file stored at the
server.
Space-managed (HSM) files are not
aggregated, which lessens HSM
performance.
The TSM API certainly supports
Aggregation; but Aggregation depends
upon the files in a transaction all
being in the same file space. TDPs use
the API, but often work with very large
files, which may each be a separate file
space of their own. Hence, you may not
see Aggregation with TDPs. But the size
of the files means that Aggregation is
not an issue for performance.
The size of the aggregate varies with
the size of the client files and the
number of bytes allowed for a single
transaction, per the TXNGroupmax server
option (transaction size as number of
files) and the TXNBytelimit client
option (transaction size as number of
bytes). Too-small values can conspire to
prevent aggregation - so beware using
TCPNodelay in AIX. As is the case with
files in general, an Aggregate will seek
the storage pool in the hierarchy which
has sufficient free space to accommodate
the Aggregate.
An aggregate that cannot fit entirely
within a volume will span volumes, and
if the break point is in the midst of a
file, the file will span volumes.
Note that in Reclamation the aggregate
will be simply copied with its original
size: no effort will be made to
construct output aggregates of some
nicer size, ostensibly because the data
is being kept in a size known to be a
happy one for the client, to facilitate
restorals. Files which were stored on
the server unaggregated (as for example,
long-retention files stored under
ADSMv2) will remain that way
indefinitely and so consume more server
space than may be realized. (You can
verify with Query CONtent F=D.)
Version 2 clients accessing a v3 server
should use the QUIET option during
Backup and Archive so that files will be
aggregated even if a media mount is
required.
Your Stgpool MAXSize value limits the
size of an Aggregate, not the size of
any one file in the Aggregate.
See also: Aggregated?; NOAGGREGATES;
Segment Number
Ref: Front of Quick Start manual;
Technical Guide redbook; Admin Guide
"How the Server Groups Files before
Storing"
Aggregates and reclamation As expiration deletes files from the
server, vacant space can develop within
aggregates. For data stored on
sequential media, this vacant space is
removed during reclamation processing,
in a method called "reconstruction"
(because it entails rebuilding an
aggregate without the empty space).
Aggregation, see in database SELECT * FROM CONTENTS WHERE
NODE_NAME='UPPER_CASE_NAME' ...
In the report:
FILE_SIZE is the Physical, or Aggregate,
size. The size reflects the TXNBytelimit
in effect on the client at the time of
the Backup or Archive.
AGGREGATED is either "No" (as in the
case of HSM, or files Archived or
Backup'ed before ADSMv3), or the
relative number of the reported file
within the aggregate, like "2/16". The
value reflects the TXNGroupmax server
limit on the number of files in an
Aggregate, plus the client TXNBytelimit
limiting the size of the Aggregate.
Remember that the Aggregate will shrink
as reclamation recovers space from old
files within the Aggregate.
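When post-processing such a report, the AGGREGATED column can be decoded mechanically (a sketch based on the two value forms described above):

```python
# AGGREGATED is either "No" (unaggregated: HSM data, or data stored
# before ADSMv3) or "n/m" (file n of m within its aggregate).

def parse_aggregated(value):
    if value == "No":
        return None
    position, total = value.split("/")
    return int(position), int(total)

print(parse_aggregated("No"))    # None
print(parse_aggregated("2/16"))  # (2, 16)
```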
AIT Advanced Intelligent Tape technology,
developed by Sony and introduced in 1996
to handle the capacity requirements of
large, data-intensive applications. This
is video-style, helical-scan technology,
wherein data is written in diagonal
slashes across the width of the 8mm tape.
Like its 8mm predecessor technology, AIT
is less reliable than linear tape
technologies because AIT tightly wraps
the tape around various heads and guides
at much sharper angles than linear tape,
and its heads are mechanically active,
making for vibration and higher wear on
the tape, lowering reliability.
Data is compressed before being written
on the tape, via Adaptive Lossless Data
Compression (ALDC - an IBM algorithm),
which offers compression averaging 2.6x
across multiple data types.
Memory-in-Cassette (MIC) feature puts a
flash memory chip in with the tape, for
remembering file positions or storing a
limited amount of data: the MIC chip
contains key parameters such as a tape
log, search map, number of times loaded,
and application info that allow flexible
management of the media and its
contents. The memory size was 16 MB in
AIT-1; is 64 MB in AIT-3.
Like DLT, AIT is a proprietary rather
than open technology, in contrast to LTO.
See: //www.aittape.com/mic.html
Cleaning: The technology monitors itself
and invokes a built-in Active Head
Cleaner as needed; a cleaning cartridge
is recommended periodically to remove
dust and build-up.
Tape type: Advanced Metal Evaporated
(AME)
Cassette size: tiny, 3.5 inch, 8mm tape.
Capacity: 35 GB native. Sony claims
their AIT drives of *all* generations
achieve 2.6:1 average compression ratio
using Adaptive Lossless Data Compression
(ALDC), which would yield 90 GB.
Transfer rate: 4 MB/s without
compression, 10 MB/s with compression
(an earlier edition of this QuickFacts cited 3 MB/s).
Head life: 50,000 hours
Media rating: 30,000 passes. Lifetime
estimated at over 30 years.
AIT is not an open architecture
technology - only Sony makes it - a
factor which has caused customers to
gravitate toward LTO instead.
Ref: www.sony.com/ait
www.aittape.com/ait1.html
http://www.mediabysony.com/ctsc/
pdf/spec_ait3.pdf
http://www.tapelibrary.com/aitmic.html
http://www.aittape.com/
ait-tape-backup-comparison.html
http://www.tape-drives-media.co.uk/sony
/about_sony_ait.htm
Technology is similar to Mammoth-2.
See also: MAM; SAIT
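The compressed-capacity figures quoted for the AIT generations follow directly from native capacity times Sony's claimed 2.6:1 average ratio:

```python
# Native capacity (GB) times the claimed 2.6:1 average compression
# ratio gives the vendor's "compressed" capacity figures.

def compressed_capacity_gb(native_gb, ratio=2.6):
    return native_gb * ratio

print(round(compressed_capacity_gb(35)))    # AIT-1: ~91 (the "90 GB" claim)
print(round(compressed_capacity_gb(50)))    # AIT-2: 130
print(round(compressed_capacity_gb(100)))   # AIT-3: 260
```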
AIT-2 (AIT2) Next step in AIT.
Capacity: 50 GB native. Sony claims
their AIT drives of *all* generations
achieve 2.6:1 average compression ratio
using Adaptive Lossless Data Compression
(ALDC), which would yield 130 GB.
Transfer rate: 6 MB/sec max without
compression; 15 MB/s with.
Technology is similar to Mammoth-2.
AIT-3 (AIT3) Next Sony AIT generation - still using
8mm tape and helical-scan technology.
Capacity: 100 GB without compression,
260GB with 2.6:1 compression.
Transfer rate: 12 MB/sec max without
compression; 30 MB/s with.
MIC: 64 MB flash memory
AIT customers have become disgruntled,
finding major reliability problems which
cannot be resolved, even after replacing
drives. Helical scan technology is great
for analog video, but has historically
proven ill-suited to the rigors of
digital data processing, where linear
tracking tape technology is better.
AIX 4.2.0 Per IBMer Andy Raibeck, 1998/10/12,
responding to a question as to whether
the ADSMv3 clients are supported under
AIX 4.2.0: "AIX 4.2.0 is not a supported
ADSM platform. We would have liked to
support it, but the number of problems
we had trying to get ADSM to run on
4.2.0 made it impractical."
AIX 5L, 32-bit client The 32-bit B/A client for both AIX 4.3.3
& AIX 5L is in the package
tivoli.tsm.client.ba.aix43.32bit (API
client in
tivoli.tsm.client.api.aix43.32bit, image
client in
tivoli.tsm.client.image.aix43.32bit,
etc.). Many people seem to be confused
by "aix43"-part of the names looking for
non-existent *.aix51.32bit packages.
AIXASYNCIO and AIXDIRECTIO notes Direct I/O only works for storage pool
volumes. Further, it "works best" with
storage pool files created on a JFS
filesystem that is NOT large file
enabled. Apparently, AIX usually
implicitly disables direct I/O on I/O
transactions on large file enabled JFS
due to TSM's I/O patterns. To ensure use
of direct I/O, you have to use non-large
file enabled JFS, which limits your
volumes to 2 GB each, which is very
restrictive.
IBM recommends: AIXDIRECTIO YES
AIXASYNCIO NO
Asynchronous I/O supposedly has no JFS
or file size limitations, but is only
used for TSM database volumes. Recovery
log and storage pool volumes do not use
async I/O. AIX 5.1 documentation
mentions changes to the async I/O
interfaces to support offsets greater
than 2 GB, however, which implies that
at least some versions (32-bit TSM
server?) do in fact have a 2 GB file
size limitation for async I/O. I was
unable to get clarity on this point in
the PMR I opened.
The AIXDIRECTIO option is obsoleted in
TSM 5.3 because it is always in effect
now. (If present in the file, no error
message will be issued, at least early
in the phase-out.)
ALDC Adaptive Lossless Data Compression
compression algorithm, as used in Sony
AIT-2. IBM's ALDC employs their
proprietary version of the Lempel-Ziv
compression algorithm called IBM LZ1.
Ref: IBM site paper "Design
considerations for the ALDC cores".
See also: ELDC; LZ1; SLDC
ALL-AUTO-LOFS Specification for client DOMain option
to say that all loopback file systems
(lofs) handled by automounter are to be
backed up.
See also: ALL-LOFS
ALL-AUTO-NFS Specification for client DOMain option
to say that all network file systems
(nfs) handled by the automounter are to be
backed up.
See also: ALL-NFS
ALL-LOCAL The Client User Options file (dsm.opt)
DOMain statement default, which may be
coded explicitly, to include all local
hard drives, excluding /tmp in Unix, and
excluding any removable media drives,
such as CD-ROM. Local drives do not
include NFS-mounted file systems.
In 4.1.2, its default is to include the
System Object (includes Registry, event
logs, comp+db, system files, Cert Serv
Db, AD, frs, cluster db - depends if
pro, dc etc on which of these the system
object contains).
If you specify a DOMAIN that is not
ALL-LOCAL, and want the System Object
backed up, then you need to include
SYSTEMOBJECT, as in:
DOMAIN C: E: SYSTEMOBJECT
See also: File systems, local; /tmp
ALL-LOFS Specification for client DOMain option
to say that all loopback file systems
(lofs), except those handled by the
automounter, are to be backed up.
See also: ALL-AUTO-LOFS
ALL-NFS Specification for client DOMain option
to say that all network file systems
(nfs), except those handled by the
automounter, are to be backed up.
See also: ALL-AUTO-NFS
Allow access to files See: dsmc SET Access
Always backup ADSMv3 client GUI backup choice to back
up files regardless of whether they have
changed. Equivalent to command line
'dsmc Selective ...'. You should
normally use "Incremental (complete)"
instead, because "Always" redundantly
sends to the *SM server data that it
already has, thus inflating tape
utilization and *SM server database
space requirements.
Amanda The Advanced Maryland Automatic Network
Disk Archiver. A free backup system that
allows the administrator of a LAN to set
up a single master backup server to back
up multiple hosts to a single large
capacity tape drive. AMANDA uses native
dump and/or GNU tar facilities and can
back up a large number of workstations
running multiple versions of Unix.
Recent versions can also use SAMBA to
back up Microsoft Windows 95/NT hosts.
http://www.amanda.org/
(Don't expect to find a system overview
of Amanda. Documentation on Amanda is
very limited.)
http://sourceforge.net/projects/amanda/
http://www.backupcentral.com/amanda.html
AMENG See also: LANGuage; USEUNICODEFilenames
Amount Migrated As from 'Query STGpool Format=Detailed'.
Specifies the amount of data, in MB,
that has been migrated, if migration is
in progress. If migration is not in
progress, this value indicates the
amount of data migrated during the last
migration. When multiple, parallel
migration processes are used for the
storage pool, this value indicates the
total amount of data migrated by all
processes.
Note that the value can be higher than
reflected in the Pct Migr value if data
was pouring into the storage pool as
migration was occurring.
See also: Pct Migr; Pct Util
ANE Messages prefix for event logging.
See messages manual.
aobpswd Password-setting utility for the TDP for
Oracle. Connects to the server specified
in the dsm.opt file, to establish an
encrypted password in a public file on
your client system. This creates a file
called TDPO.<YourHostname> in the
directory specified via the
DSMO_PSWDPATH environment variable (or
the current directory, if that variable
is not set). Thereafter, this file must
be readable to anyone running TDPO. Use
aobpswd to later update the password.
Note that you need to rerun aobpswd
before the password expires on the
server.
Ref: TDP Oracle manual
APA AutoPort Aggregation
APARs applied to ADSM on AIX system See: PTFs applied to ADSM on AIX system
API Application Programming Interface.
Available for TSM Backup, Archive, and
HSM facilities plus associated queries,
providing a library such that programs
may directly perform common operations.
As of 4.1, available for: AS/400,
Netware, OS/2, Unix, Windows
AIX ADSM dir: /usr/lpp/adsm/api
AIX TSM dir: /usr/tivoli/tsm/client/api
Has historically been provided in both
product-proprietary code (dapi*,
dsmapi*, libApiDS.a) as well as the
X/OPEN interface code (xapi*,
libXApi.a) more commonly known as XBSA.
The API can not be used to access files
backed up or archived with the regular
Backup-Archive clients. Attempting to do
so will yield "ANS4245E (RC122) Format
unknown" (same as ANS1245E). Nor can
files stored via the API be seen by the
conventional clients. Nor can different
APIs see each others' files. The only
general information that you can query
is file spaces and management classes.
In the API manual, Chapter 4
("Interoperability"), briefly indicates
that the regular command line client can
do some things with data sent to the
server via the API - but not vice versa.
This is frustrating, as one would want
to use the API to gain finely controlled
access to data backed up by regular
clients. Interoperability is limited in
the product.
LAN-free support: The TSM API supports
LAN-free, as of TSM 4.2.
Note that there is no administrative
API.
Performance: The APIs typically do not
aggregate files as do standard TSM
clients. Lack of aggregation is usually
not detrimental to performance with
APIs, though, in that they are typically
used in dealing with a small number of
large files.
Encryption: Appeared at the 5.3 level.
Ref: Using the API.
API, Windows Note that the TSM API for Windows
handles objects as case insensitive but
case preserving. This is an anomaly
resulting from the fact that SQL Server
allows case-sensitive database names.
API config file See the info in the "Using the API"
manual about configuration file options
appropriate to the API. Note that the
API config file is specified on the
dsmInit call.
API header files See: dsmapi*.h
API installed? AIX: There will be a /usr/lpp/adsm/api
directory.
APPC Advanced Program-to-Program
Communications.
Discontinued as of TSM 4.2.
Application client A software application that runs on a
workstation or personal computer and
uses the ADSM application programming
interface (API) function calls to back
up, archive, restore, and retrieve
objects.
Contrast with backup-archive client.
Application Programming Interface A set of functions that application
(API) clients can call to store, query, and
retrieve data from ADSM storage.
Arch Archive file type, in Query CONtent
report. Other types: Bkup, SpMg
ARCHDELete A Yes/No parameter on the 'REGister
Node' and 'UPDate Node' commands to
specify whether the client node can
delete its own archived files from the
server. Default: Yes.
See also: BACKDELete
Archive The process of copying files to a
long-term storage device.
V2 Archive only archives files: it does
*not* archive directories, or symbolic
links, or special files!!! Just files.
(Thus, Archive is not strictly suitable
for making file system images. See the
V2archive option in modern clients to
achieve the same operation.)
File permissions are retained,
including Access Control Lists (ACLs).
Symbolic links are followed, to archive
the file pointed to by the symlink.
Directories are not archived in ADSMv2,
but files in subdirectories are recorded
by their full path name, and so during
retrieval any needed subdirectories will
be recreated, with new timestamps. In
contrast, ADSMv3 *does* archive
directories.
Archived data belongs to the user who
performed the archive.
Include/Exclude is not applicable to
archiving: just to backups.
When you archive a file, you can specify
whether to delete the file from your
local file system after it is copied to
ADSM storage or leave the original file
intact. Archive copies may be
accompanied by descriptive information,
may imply data compression software
usage, and may be retrieved by archive
date, object name, or description.
Windows: "System Object" data (including
the Registry) is not archived. Instead,
you could use MS Backup to Backup System
State to local disk, then use TSM to
archive this.
Contrast with Retrieve.
See also: dsmc Archive;
dsmc Delete ARchive; FILESOnly;
V2archive
For a technique on archiving a large
number of individual files, see entry
"Archived files, delete from client".
Archive, compensate for primitiveness TSM Archive files management is a
particular challenge, largely because
such archived files are often no longer
in the client file system, and thus
"invisible" to users until they perform
just the right query; and files expire
without fanfare, which makes for more of
a guessing game. Sadly, IBM has left
Archive a rather primitive affair, being
about as primitive as it was in ADSMv1.
So, if you were administering a client
system with users doing Archive, how
might you improve things? A big,
relatively simple step would be to have
users perform archive and retrieve
through an interface script. If the user
does not supply a Description, the
script supplies one, to clearly identify
the file, like:
<Username>:Archive Date: <date time>
which is a considerable improvement over
the problematic one which TSM supplies
by default. Including the time in the
Description renders each object unique.
Formulating the date in hierarchical
form (YYYYMMDD) facilitates wildcard
searches through an asterisk at the end
of YYYY*, or YYYYMM*. For further value,
add tracking, at least appending an
entry to a flat file in the user's
personal directory recording originating
system, timestamp, filename, management
class (usually, Default), and
Description used, which itself could be
searched or referenced by the user to
see what files had been sent off to the
hinterlands. The script could be of
particular value if it did the recording
asynchronously, in that it could take
the further time to do a Query Archive
to capture the "Expires on" value for
the object, without delaying the
user. Such info might be tracked instead
in a MySQL db, or the like...but that
would be just one more thing to
administer and trouble-shoot.
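The wrapper idea above can be sketched as a Unix shell script. The dsmc invocation mirrors the client command documented elsewhere in this reference; the log-file location, and logging only on success, are assumptions to adapt per site:

```shell
#!/bin/sh
# Sketch of an archive wrapper supplying a default Description and
# appending a record of each archive to a per-user flat file.
# Assumptions: dsmc is in the PATH; $HOME/.archive.log is an
# acceptable tracking-file location.

# Build a default Description: <user>:Archive Date: YYYYMMDD HHMMSS
# The hierarchical date (YYYYMMDD) permits wildcard searches such as
# 2005* or 200503*; the time renders each object unique.
build_desc() {            # $1 = username, $2 = timestamp
    echo "$1:Archive Date: $2"
}

USER_NAME=$(id -un)
STAMP=$(date '+%Y%m%d %H%M%S')
DESC=$(build_desc "$USER_NAME" "$STAMP")

# Archive each named file; on success, record originating system,
# timestamp, filename, management class, and Description used.
for f in "$@"; do
    dsmc archive "$f" -desc="$DESC" &&
      echo "$(hostname) $STAMP $f DEFAULT \"$DESC\"" \
        >> "$HOME/.archive.log"
done
```

A user would then invoke the wrapper (script name is illustrative) instead of calling dsmc archive directly.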
Archive, delete the archived files Use the DELetefiles option.
Archive, exclude files In TSM 4.1: EXCLUDE.Archive
Archive, from Windows, automatic date You can effect this from the DOS
in Description command line, like:
dsmc archive c:\test1\ -su=y
-desc="%date% Test Archive"
Archive, latest Unfortunately, there is no command line
option to return the latest version of
an archived file. However, for a simple
filename (no wildcard characters) you
can do:
'dsmc q archive <Filename>'
which will return a list of all the
archived files, where the latest is at
the bottom, and can readily be
extracted (in Unix, via the 'tail -1'
command).
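As a sketch, that technique can be wrapped in a small shell function; the filtering assumes the 'dsmc q archive' report lists versions one per line, oldest first, as described above:

```shell
#!/bin/sh
# newest(): given a 'dsmc q archive' listing on stdin and a filename
# as $1, print only the last (most recent) line mentioning the file.
# Report header lines that do not name the file are filtered out.
newest() {
    grep -- "$1" | tail -1
}

# Illustrative use (the dsmc command itself is as documented above):
#   dsmc q archive /home/joe/report.txt | newest report.txt
```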
Archive, long term, issues A classic situation that site
technicians have to contend with is site
management mandating the keeping of data
for very long term periods, as in five
to ten years or more. This may be
incited by requirements as made by
Sarbanes-Oxley. In approaching this,
however, site management typically
neglects to consider issues which are
essential to the data's long-term
viability:
- Will you be able to find the media in
ten years? Years are a long time in a
corporate environment, where mergers
and relocations and demand for space
cause a lot of things to be moved
around - and forgotten. Will the site
be able to exercise inventory control
over long-term data?
- Will anyone know what those tapes are
for in the future? The purpose of the
tapes has to be clearly documented and
somehow remain with the tapes - but
not on the tapes. Will that doc even
survive?
- Will you be able to use the media
then? Tapes may survive long periods
(if properly stored), but the drives
which created them and could read them
are transient technology, with
readability over multiple generations
being rare. Likewise, operating
systems and applications greatly
evolve over time. And don't overlook
the need for human knowledge to be
able to make use of the data in the
future.
To fully assure that frozen data and
media kept for years would be usable in
the future, the whole environment in
which they were created would
essentially have to be frozen in time:
computer, OS, appls, peripherals,
support, user procedures. That's hardly
realistic, and so the long-term
viability of frozen data is just as
problematic. To keep long-term data
viable, it has to move with technology.
This means not only copying it across
evolving media technologies, but also
keeping its format viable. For example:
XML today, but tomorrow...what?
That said, if long-term archiving (in
the generic sense) is needed, it is best
to proceed in as "vanilla" a manner as
possible. For example, rather than
create a backup of your commercial
database, instead perform an unload:
this will make the data reloadable into
any contemporary database.
Keep in mind that it is not the TSM
administrator's responsibility to assure
anything other than the safekeeping of
stored data. It is the responsibility of
the data's owners to assure that it is
logically usable in the future.
See "TSM for Data Retention": that
product facilitates long-term retention
in several ways, including moving data
to new recording technology over time.
Archive, prevent client from doing See: Archiving, prohibit
Archive, space used by clients (nodes) 'Query AUDITOccupancy [NodeName(s)]
on all volumes [DOmain=DomainName(s)]
[POoltype=ANY|PRimary|COpy'
Note: It is best to run 'AUDit LICenses'
before doing 'Query AUDITOccupancy' to
assure that the reported information
will be current.
Archive and Migration If a disk Archive storage pool fills,
ADSM will start a Migration to tape to
drain it; but because the pool filled
and there is no more space there, the
active Archive session wants to write
directly to tape; but that tape is in
use for Migration, so the client session
has to wait.
Archive archives nothing A situation wherein you invoke Archive
like 'dsmc arch "/my/directory/*"' and
nothing gets archived. Possible reasons:
- /my/directory/ contains only
subdirectories, no files; and the
subdirectories had been archived in
previous Archive operations.
- You have EXCLUDE.ARCHIVE statements
which specify the files in this
directory.
Archive Attribute In Windows, an advanced attribute of a
file, as seen under file Properties,
Advanced. It is used by lots of other
backup software to define if a file was
already backed up, and if it has to be
backed up the next time.
As of TSM 5.2, the Windows client
provides a RESETARCHIVEATTRibute option
for resetting the Windows archive
attribute for files during a backup
operation.
See also: RESETARCHIVEATTRibute
Archive bit See: Archive Attribute
Archive copy An object or group of objects residing
in an archive storage pool in ADSM
storage.
Archive Copy Group A policy object that contains attributes
that control the generation,
destination, and expiration of archived
copies of files. An archive copy group
is stored in a management class.
Archive Copy Group, define 'DEFine COpygroup DomainName PolicySet
MGmtclass Type=Archive
DESTination=PoolName
[RETVer=N_Days|NOLimit]
[SERialization=SHRSTatic|STatic|
SHRDYnamic|DYnamic]'
Archive descriptions Descriptions are supplementary
identifiers which assist in uniquely
identifying archive files.
Descriptions are stored in secondary
tables, in contrast to the primary
archive table entries which store
archive directory and file data
information.
Archive directory An archive directory is defined to be
unique by: node, filespace,
directory/level, owner and description.
See also: CLEAN ARCHDIRectories
Archive drive contents Windows: dsmc archive d:\* -subdir=yes
Archive fails on single file Andy Raibeck wrote in March 1999:
"In the case of a SELECTIVE backup or an
ARCHIVE, if one or more files can not be
backed up (or archived) then the event
will be failed. The rationale for this
is that if you ask to selectively back
up or archive one or more files, the
assumption is that you want each and
every one of those files to be
processed. If even one file fails, then
the event will have a status of failed.
So the basic difference is that with
incremental we expect that one or more
files might not be able to be processed,
so we do not flag such a case as failed.
In other cases, like SELECTIVE or
ARCHIVE, we expect that each file
specified *must* be processed
successfully, or else we flag the
operation as failed."
Archive files, how to See: dsmc Archive
Archive operation, retry when file in Have the CHAngingretries (q.v.) Client
use System Options file (dsm.sys) option
specify how many retries you want.
Default: 4.
Archive retention grace period The number of days ADSM retains an
archive copy when the server is unable
to rebind the object to an appropriate
management class. Defined via the
ARCHRETention parameter of
'DEFine DOmain'.
Archive retention grace period, query 'Query DOmain Format=Detailed', see
"Archive Retention (Grace Period)".
Archive storage pool, keep separate It is best to keep your Archive storage
pool separate from others (Backup, HSM)
so that restorals can be done more
quickly. If Archive data was in the
same storage pool as Backups, there
would be a lot of unrelated data for the
restoral to have to skip over.
Archive users SELECT DISTINCT OWNER FROM ARCHIVES
[WHERE node_name='UpperCase']
SELECT NODE_NAME,OWNER,TYPE,COUNT(*) AS
"Number of objects" FROM ARCHIVES WHERE
NODE_NAME='____' OR NODE_NAME='____'
GROUP BY NODE_NAME,OWNER,TYPE
Archive users, files count SELECT OWNER,count(*) AS
"Number of files" FROM ARCHIVES
WHERE NODE_NAME='UPPER_CASE_NAME' GROUP
BY OWNER
Archive vs. Backup Archive is intended for the long-term
storage of individual files on tape,
while Backup is for safeguarding the
contents of a file system to facilitate
the later recovery of any part of it.
Returning files to the file system en
mass is thus the forte of Restore,
whereas Retrieve brings back individual
files as needed. Retention policies for
Archive files is rudimentary, whereas
for Backups it is much more
comprehensive.
See also: http://www.storsol.com/cfusion
/template.cfm?page1=wp_whyaisa&page2=
blank_men
Archive vs. Selective Backup, The two are rather similar; but...
differences The owner of a backup file is the user
whose name is attached to the file,
whereas the owner of an archive file is
the person who performed the Archive
operation.
Frequency of archive is unrestricted,
whereas backup can be restricted.
Retention rules are simple for archive,
but more involved for backup.
Archive files are deletable by the end
user; Backup files cannot be selectively
deleted.
ADSMv2 Backup would handle directories,
but Archive would not: in ADSMv3+, both
Backup and Archive handle directories.
Retrieval is rather different for the
two: backup allows selection of old
versions by date; archive distinction is
by date and/or the Description
associated with the files.
ARCHIVE_DATE Column in *SM server database ARCHIVES
table.
Format: YYYY-MM-DD HH:MM:SS.xxxxxx
Example: SELECT * FROM ARCHIVES WHERE
ARCHIVE_DATE>
'1997-01-01 00:00:00.000000' AND
ARCHIVE_DATE<
'1998-12-31 00:00:00.000000'
Archived copy A copy of a file that resides in an ADSM
archive storage pool.
Archived file, change retention? The retention of individual Archive
files cannot be changed: you can only
Retrieve and then re-Archive the file.
*SM is an enterprise software package,
meaning that it operates according to
site policies. It prohibits users from
circumventing site policies, and thus
will not allow users to extend archive
retentions beyond their site-defined
values. The product is also architected
for security and privacy, providing the
server administrator no means of
retrieving, inspecting, deleting, or
altering the contents or attributes of
individual files. In terms of retention,
all that the server administrator can do
is change the retention policy for the
management class, which affects all
files in that class.
See also: Archived files, retention
period, update
Archived files, count SELECT COUNT(*) AS "Count" FROM ARCHIVES
WHERE NODE_NAME='<UpperCaseNodename>'
Archived files: deletable by client Whether the client can delete archived
node? files now stored on the server.
Controlled by the ARCHDELete parameter
on the 'REGister Node' and 'UPDate Node'
commands. Default: Yes.
Query via 'Query Node Format=Detailed'.
Archived files, delete from client Via client command:
'dsmc Delete ARchive FileName(s)' (q.v.)
You could first try it on a 'Query
ARchive' to get comfortable.
Archived files, list from client See: dsmc Query ARchive
Archived files, list from server 'SHow Archives NodeName FileSpace'
Archived files, list from server, 'Query CONtent VolName ...'
by volume
Archived files, rebinding does not From the TSM Admin. manual, chapter on
occur Implementing Policies for Client Data,
topic How Files and Directories Are
Associated with a Management Class:
"Archive copies are never rebound
because each archive operation creates
a different archive copy. Archive copies
remain bound to the management class
name specified when the user archived
them." (Reiterated in the client B/A
manual, under "Binding and Rebinding
Management Classes to Files".)
Beware, however, that changing the
retention setting of a management
class's archive copy group will cause
all archive versions bound to that
management class to conform to the new
retention.
Note that you can use an ARCHmc to
specify an alternate management class
for the archive operation.
Archived files, report by owner As of ADSMv3 there is still no way to do
this from the client. But it can be
done within the server via SQL, like:
SELECT OWNER,FILESPACE_NAME,TYPE,
ARCHIVE_DATE FROM ARCHIVES WHERE
NODE_NAME='UPPER_CASE_NAME' -
AND OWNER='joe'
Archived files, report by year Example: SELECT * FROM ARCHIVES WHERE
YEAR(ARCHIVE_DATE)=1998
Archived files, retention period Is part of the Copy Group definition.
Is defined in DEFine DOmain to provide a
just-in-case default value.
Note that there is one Copy Group in a
Management Class for backup files, and
one for archived files, so the retention
period is essentially part of the
Management Class.
Archived files, retention period, set The retention period for archive files
is set via the "RETVer" parameter of the
'DEFine COpygroup' ADSM command. Can be
set for 0-9999 days, or "NOLimit".
Default: 365 days.
Archived files, retention period, While you cannot change the retention
update for an individual file, you can change
it for all files bound to a given
Management Class:
'UPDate COpygroup DomainName SetName
ClassName Type=Archive
RETVer=N_Days|NOLimit'
where RETVer specifies the retention
period, and can be 0-9999 days, or
"NOLimit".
Default: 365 days.
Effect: Changing RETVer causes any
newly-archived files to pick up the new
retention value, and previously-archived
files also get the new retention value,
because of their binding to the changed
management class. (The TSM database
Archives table contains an Archive_Date
column: there is no "Expiration_Date"
column, and so the archived files
conform to whatever the prevailing
management class retention rules are at
the time. So if you extend your
retention policy, it pertains to all
archive files, old and new.)
Archived files, retention period, See: 'Query COpygroup ... Type=Archive'
query
Archived files, retrieve from client Via client dsmc command:
'RETrieve
[-DEscription="..."]
[-FROMDate=date] [-TODate=date]
[-FROMOwner=owner]
[-FROMNode=node]
[-PIck] [-Quiet]
[-REPlace=value]
[-SErvername=StanzaName]
[-SUbdir=No|Yes]
[-TAPEPrompt=value]
OrigFileName(s)
[NewFileName(s)]'
Archived files don't show up Some users have encountered the unusual
problem of having archived files, and
know they should not yet have expired,
but the archived files do not show up in
a client query, despite being performed
from the owning user, etc. Analysis with
a Select on the Archives table revealed
the cause to be directories missing from
the server storage pools, which
prevented hierarchically finding the
files in a client -subdir query. The fix
was to re-archive the missing
directories. Use ARCHmc (q.v.) to help
avoid problems.
ARCHIVES SQL: *SM server database table
containing basic information about each
archived object (but not its
size). Along with BACKUPS and CONTENTS,
constitutes the bulk of the *SM database
contents. Columns:
NODE_NAME, FILESPACE_NAME, TYPE,
HL_NAME, LL_NAME, OBJECT_ID,
ARCHIVE_DATE, OWNER, DESCRIPTION,
CLASS_NAME.
See also: HL_NAME; LL_NAME
Archiving, prohibit Prohibit archiving by employing one of
the following:
In the *SM server:
- LOCK Node, which prevents all access
from the client - and which may be too
extreme.
- ADSMv2: Do not define an archive Copy
Group in the Management Class used by
that user. This causes the following
message when trying to do an archive:
ANS5007W The policy set does not
contain any archive copy groups.
Unable to continue with archive.
- ADSMv3: Code NOARCHIVE in the
include-exclude file, as in:
"include ?:\...\* NOARCHIVE"
which prevents all archiving.
- 'UPDate Node ... MAXNUMMP=0', to be in
effect during the day, to prevent
Backup and Archive, but allow Restore
and Retrieve.
In the *SM client:
- Employ EXCLUDE.ARCHIVE for the subject
area. For example, you want to prevent
your client system users from
archiving files that are in file
system /fs1:
EXCLUDE.ARCHIVE /fs1/.../*
Attempts to archive will then get:
ANS1115W File '/fs1/abc/xyz'
excluded by Include/Exclude list
Retrieve and Delete Archive continue
to function as usual.
ARCHmc (-ARCHmc) Archive option, to be specified on the
'dsmc archive' command line (only), to
select a Management Class and thus
override the default Management Class
for the client Policy Domain. (ADSM v3.1
allowed it in dsm.opt; but that's not
the intention of the option.)
Default: the Management Class in the
active Policy Set.
See "Archive files, how to" for example.
As of ADSMv3.1 mid-1999 APAR IX89638
(PTF 3.1.0.7), archived directories are
not bound to the management class with
the longest retention.
See also: CLASS_NAME; dsmBindMC
ARCHRETention Parameter of 'DEFine DOmain' to specify
the retention grace period for the
policy domain, to protect old versions
from deletion when the respective
archive copy group is not available.
Specified as the number of days (from
date of archive) to retain archive
copies. Default: 365 (days)
ARCHSYMLinkasfile Archive option as of ADSMv3 PTF 7.
If you specify ARCHSYMLinkasfile=No then
symbolic links will not be followed: the
symlink itself will be archived.
If you specify ARCHSYMLinkasfile=Yes
(the default), then symbolic links will
be followed in order to archive the
target files.
Unrelated: See also FOLlowsymbolic
Ref: Installing the Clients manual
ARCserve Competing product from Computer
Associates, to back up Microsoft
Exchange Server mailboxes.
Advertises the ability to restore
individual mailboxes, but what they
don't tell you is that they do it in a
non-Microsoft supported way: they
totally circumvent the MS Exchange APIs.
The performance is terrible and the
product as a whole has given customers
lots of problems.
See also: Tivoli Storage Manager for
Mail
ARTIC 3494: A Real-Time Interface Coprocessor.
This card in the industrial computer
within the 3494 manages RS-232 and
RS-422 communication, as serial
connections to a host and
command/feedback info with the tape
drives. A patch panel with eight DB-25
slots mounted vertically in the left
hand side of the interior of the first
frame connects to the card.
AS SQL clause for assigning an alias to a
report column header title, rather than
letting the default title be the data
name or the expression applied to the
column's contents. The alias then
becomes the column name in the output,
and can be referred to in GROUP BY,
ORDER BY, and HAVING clauses - but not
in a WHERE clause. The title string
should be in double quotes.
Note that if the column header widths in
combination exceed the width of the
display window, the output will be
forced into "Title: Value" format.
Sample: SELECT VOLUME_NAME AS -
"Scratch Vols" FROM LIBVOLUMES WHERE
STATUS='Scratch'
results in output like:
Scratch Vols
------------------
000049
000084
See also: -DISPLaymode
AS/400 Visit: www.as400.ibm.com
ASC SQL: Ascending order, in conjunction
with ORDER BY, as like:
GROUP BY NODE_NAME ORDER BY NODE_NAME
ASC
ASC/ASCQ codes Additional Sense Codes and Additional
Sense Code Qualifiers involved in I/O
errors. The ASC is byte 12 of the sense
bytes, and the ASCQ is byte 13 (as
numbered from 0). They are reported in
hex, in message ANR8302E.
ASC=29 ASCQ=00 indicates a SCSI bus
reset. Could be a bad adapter, cable,
terminator, drive, etc. The drives
could be causing an adapter problem
which in turn causes a bus reset, or a
problematic adapter could be causing
the bus reset that causes the drive
errors.
ASC=3B ASCQ=0D is "Medium dest element
full", which can mean that the tape
storage slot or drive is already
occupied, as when a library's inventory
is awry. Perform a re-inventory.
ASC=3B ASCQ=0E is "Medium source element
empty", saying that there is no tape
in the storage slot as there should be,
meaning that the library's inventory
is awry. Perform a re-inventory.
See Appendix B of the Messages manual.
See also: ANR8302E
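To illustrate where those bytes sit, a small shell sketch can pull the ASC and ASCQ out of a sense buffer rendered as space-separated hex bytes; the sample sense string below is fabricated:

```shell
#!/bin/sh
# Sketch: given SCSI sense data as space-separated hex bytes
# (numbered from 0, as above), report the ASC (byte 12) and the
# ASCQ (byte 13).
sense_asc_ascq() {        # $1 = sense bytes as a hex string
    # awk fields are 1-based, so byte 12 is field 13, byte 13 field 14
    echo "$1" | awk '{ printf "ASC=%s ASCQ=%s\n", $13, $14 }'
}

# Fabricated example; bytes 12/13 here are 3B and 0D ("Medium dest
# element full" - see above).  Prints: ASC=3B ASCQ=0D
sense_asc_ascq "70 00 04 00 00 00 00 0a 00 00 00 00 3B 0D 00 00"
```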
ASR Automated System Recovery - a restore
feature of Windows XP Professional and
Windows Server 2003 that provides a
framework for saving and recovering the
Windows XP or Windows Server 2003
operating state, in the event of a
catastrophic system or hardware failure.
TSM creates the files required for ASR
recovery and stores them on the TSM
server. In the backup, TSM will generate
the ASR files in the
<Systemdrive>:\adsm.sys\ASR staging
directory on your local machine and
store these files in the ASR file
space on the TSM server.
Ref: Windows B/A Client manual, Appendix
F "ASR supplemental information";
Redbook "TSM BMR for Windows 2003 and
XP"
Msgs: ANS1468E
ASSISTVCRRECovery Server option to specify whether the
ADSM server will assist the 3570/3590
drive in recovering from a lost or
corrupted Vital Cartridge Records (VCR)
condition. If you specify Yes (the
default) and if TSM detects an error
during the mount processing, it locates
to the end-of-data during the dismount
processing to allow the drive to restore
the VCR. During the tape operation,
there may be some small effect on
performance because the drive cannot
perform a fast locate with a lost or
corrupted VCR. However, there is no loss
of data.
See also: VCR
ASSISTVCRRECovery, query 'Query OPTions', see "AssistVCRRecovery"
Association Server-defined schedules are associated
with client nodes so that the client
will be contacted to run them in a
client-server arrangement. See
'DEFine ASSOCiation',
'DELete ASSOCiation'.
ASSOCIATIONS SQL table in the TSM server reflecting
client associations with schedules, as
established with 'DEFine ASSOCiation'.
Columns: DOMAIN_NAME, SCHEDULE_NAME,
NODE_NAME, CHG_TIME, CHG_ADMIN
Note that if there is no association
between an existing schedule and a node,
then there is no entry in the table for
it. In contrast, Query ASSOCiation will
report schedules having no associations,
because that is good to know - and
reveals a distinction between Query
commands and tables.
See also: Query ASSOCiation
Atape Moniker for the Magstar tape driver,
which supports 3590, 3570, and 3575.
Download from ftp.storsys.ibm.com, in
the /devdrvr/ directory.
In AIX, is installed in /usr/lpp/Atape/.
Sometimes, Atape will force you to
re-create the TSM tape devices; and a
reboot may be necessary (as in the Atape
driver rewriting AIX's bosboot area): so
perform such upgrades off hours.
See also: IBMtape
Atape header file, for programming AIX: /usr/include/sys/Atape.h
Solaris: /usr/include/sys/st.h
HP-UX: /usr/include/sys/atdd.h
Windows: <ntddscsi.h>, <ntddtape.h>
Atape level 'lslpp -ql Atape.driver'
atime See: Access time; Backup
ATL Automated Tape Library: a frame
containing tape storage cells and a
robotic mechanism which can respond to
host commands to retrieve tapes from
storage cells and mount them for reading
and writing.
atldd Moniker for the 3494 library device
driver, "AIX LAN/TTY: Automated Tape
Library Device Driver", software which
comes with the 3494 on floppy diskettes.
Is installed in /usr/lpp/atldd/.
Download from:
ftp://service.boulder.ibm.com/storage/
devdrvr/
See also: LMCP
atldd Available? 'lsdev -C -l lmcp0'
atldd level 'lslpp -ql atldd.driver'
ATS IBM Advanced Technical Support.
They host "Lunch and Learn" conference
call seminars.
ATTN messages (3590) Attention (ATTN) messages indicate error
conditions that customer personnel may
be able to resolve. For example, the
operator can correct the ATTN ACF
message with a supplemental message of
Magazine not locked.
Ref: 3590 Operator Guide (GA32-0330-06)
Appendix B especially.
Attribute See: Volume attributes
Attributes of tape drive, list AIX: 'lsattr -EHl rmt1' or
'mt -f /dev/rmt1 status'
AUDit DB Undocumented (and therefore unsupported)
server command in ADSMv3+, ostensibly a
developer service aid, to perform an
audit on-line (without taking the server
down). Syntax (known):
'AUDIT DB [PARTITION=partion-name]
[FIX=Yes]'
e.g. 'AUDIT DB PARTITION=DISKSTORAGE'
as when a volume cannot be deleted.
See also: dsmserv AUDITDB
AUDit LIBRary Creates a background process which
(as in verifying 3494's volumes) checks that *SM's knowledge of the
library's contents is consistent with
the library's inventory. This is a
bidirectional synchronization task,
where the TSM server acquires library
inventory information and may
subsequently instruct the library to
adjust some volume attributes to
correspond with TSM volume status info.
Syntax:
'AUDit LIBRary LibName
[CHECKLabel=Yes|Barcode]'
where the barcode check was added in the
2.1.x.10 level of the server to make
barcode checking an option rather than
the implicit default, due to so many
customers having odd barcodes (as in
those with more than 6-char serials).
Also, using CHECKLabel=Barcode greatly
reduces time by eliminating mounts to
read the header on the tapes - which is
acceptable if you run a tight ship and
are confident of barcodes corresponding
with internal tape labeling.
Sample: 'AUDit LIBRary OURLIB'.
The audit needs to be run when the
library is not in use (no volumes
mounted): if the library is busy, the
Audit will likely hang.
Runtime: Probably not long. One user
with 400 tapes quotes 2-3 minutes.
Tip: With a 3494 or comparable library,
you may employ the 'mtlib' command to
check the category codes of the tapes in
the library for reasonableness, and
possibly use the 'mtlib' command to
adjust errant values without resorting
to the disruption of an AUDit LIBRary.
This audit is performed when the server
is restarted (no known means of
suppressing this).
In a 349X library, AUDit LIBRary will
instruct the library to restore Scratch
and Private category codes to match
TSM's libvolumes information. This is a
particularly valuable capability for
when library category codes have been
wiped out by an inadvertent Teach or
Reinventory operation at the library
(which resets category codes to Insert).
What this does *not* do: This function
is for volume consistency, and does not
delve into volume contents, and thus
cannot help recover inventory info where
the TSM db has been lost.
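For the mtlib reasonableness check suggested above, a sketch: the parsing assumes "volser category" pairs on stdin, and the mtlib flags and inventory output format are assumptions to verify against your own mtlib level:

```shell
#!/bin/sh
# Sketch: sanity-check 3494 category codes before resorting to a
# disruptive 'AUDit LIBRary'.  A flood of volumes in an unexpected
# category (e.g. Insert, FF00) then stands out at a glance.

# count_by_category(): read "volser category" pairs on stdin and
# print a count of volumes per category code, sorted by category.
count_by_category() {
    awk '{ n[$2]++ } END { for (c in n) print c, n[c] }' | sort
}

# Illustrative use (mtlib invocation and column layout are
# assumptions - check your mtlib documentation):
#   mtlib -l /dev/lmcp0 -qI | awk '{print $1, $2}' | count_by_category
```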
AUDit LICenses *SM server command to start a background
process which both audits the data
storage used by each client node and
licensing features in use on the server.
This process then compares the storage
utilization and other licensing factors
to the license terms that have been
defined to the server to determine if
the current server configuration is in
compliance with the license terms.
The AUDITSTorage server option is
available to omit the storage
calculation portion of the operation, to
reduce server overhead.
There is no "Wait" capability, so use
with server scripts is awkward.
Syntax: 'AUDit LICenses'.
Will hopefully complete with messages
ANR2825I License audit process 3
completed successfully - N nodes
audited
ANR2811I Audit License completed -
Server is in compliance with license
terms.
You may instead find: "ANR2841W Server
is NOT IN COMPLIANCE with license
terms." and 'Query LICense' reports:
Server License Compliance: FAILED
Must be done before running
'Query AUDITOccupancy' for its output to
show current values.
Note that the time of the audit shows up
in Query AUDITOccupancy output.
Msgs: ANR2812W; ANR2834W; ANR2841W;
ANR0987I
See also: Auditoccupancy; AUDITSTorage;
License...; Query LICense; REGister
LICense; Set LICenseauditperiod; SHow
LMVARS
AUDIT RECLAIM Command introduced in v3.1.1.5 to fix a
bug introduced by the 3.1.0.0 code.
See also: RECLAIM_ANALYSIS
AUDit Volume TSM server command to audit a volume,
and optionally fix inconsistencies.
If a disk volume, it must be online; if
a tape volume, it will be mounted
(unless TSM realizes that it contains no
data, as when you are trying to fix an
anomaly).
What this does is validate file
information stored in the database
against that stored on the tape. It does
this by reading every byte of every
file on the volume and checks control
information which the server imbeds in
the file when it is stored. The same
code is used for reading and checking
the file as would be used if the file
were to be restored to a client. (In
contrast, MOVe Data simply copies files
from one volume to another. There
are, however, some conditions which
MOVe Data will detect which AUDit
Volume will not.)
If a file on the volume had previously
been marked as Damaged, and Audit Volume
does not detect any errors in it this
time, that file's state is reset.
AUDit Volume is a good way to fix niggly
problems which prevent a volume from
finally reaching a state of Empty when
some residual data won't otherwise
disappear.
Syntax: 'AUDit Volume VolName
[Fix=No|Yes]
[SKIPPartial=No|Yes]
[Quiet=No|Yes]'.
"Fix=Yes" will delete unrecoverable
files from a damaged volume (you will
have to re-backup the files).
Caution: Do not use AUDit Volume on a
problem disk volume without first
determining, from the operating system
level, what the problem with the disk
actually is. Realize that a disk
electronics problem can make intact
files look bad, or inconsistently make
them look bad.
What goes on: The database governs all,
and so location of the files on the tape
is necessarily controlled by the current
db state. That is to say, Audit Volume
positions to each next file according to
db records. At that position, it
expects to find the start of a file it
previously recorded on the medium. If
not (as when the tape had been written
over), then that's a definite
inconsistency, and eligible for db
deletion, depending upon Fix. The Audit
reads each file to verify medium
readability. (The Admin Guide suggests
using it for checking out volumes which
have been out of circulation for some
time.) Medium surface/recording problems
will result in some tape drives (e.g.,
3590) doggedly trying to re-read that
area of the tape, which will entail
considerable time. A hopeless file will
be marked Damaged or otherwise handled
according to the Fix rules. The Audit
cannot repair the medium problem: you
can thereafter do a Restore Volume to
logically fix it. Whether the medium
itself is bad is uncertain: there may
indeed be a bad surface problem or
creasing in the tape; but it might also
be that the drive which wrote it did so
without sufficient magnetic coercivity,
or the coercivity of the medium was
"tough", or tracking was screwy back
then - in which case the tape may well
be reusable. Exercise via tapeutil or
the like is in order.
Audit Volume has additional help these
days: the CRCData Stgpool option now in
TSM 5.1, which writes Cyclic Redundancy
Check data as part of storing the file.
This complements the tape technology's
byte error correction encoding to check
file integrity. Ref: TSM 5.1 Technical
Guide redbook
DR note: Audit Volume cannot rebuild *SM
database entries from storage pool tape
contents: there is no capability in the
product to do that kind of thing.
Msgs: ANR2333W, ANR2334W
See also: dsmserv AUDITDB
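A usage sketch of the above (volume name hypothetical; comments are macro-style): audit read-only first, then let TSM fix what it found:

```
AUDit Volume 000123 Fix=No   /* report inconsistencies only       */
AUDit Volume 000123 Fix=Yes  /* delete unrecoverable file entries */
```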
AUDITDB See: 'DSMSERV AUDITDB'
AUDITOCC SQL: TSM database table housing the data
that Query AUDITOccupancy reports.
Columns:
NODE_NAME, BACKUP_MB, BACKUP_COPY_MB,
ARCHIVE_MB, ARCHIVE_COPY_MB, SPACEMG_MB,
SPACEMG_COPY_MB, TOTAL_MB
(This separately reports primary and
copy storage pool numbers, in contrast
to 'Query AUDITOccupancy', which reports
them combined.)
Be sure to run 'AUDit LICenses' before
reporting from it (as is also required
for 'Query AUDITOccupancy').
See also: AUDITSTorage;
Query AUDITOccupancy
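An illustrative report against this table, using the columns listed above (remember to run 'AUDit LICenses' first so the numbers are current):

```sql
SELECT NODE_NAME, BACKUP_MB, BACKUP_COPY_MB, TOTAL_MB -
  FROM AUDITOCC ORDER BY TOTAL_MB DESC
```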
AUDit Volume performance Will be impacted if CRC recording is in
effect.
AUDITSTorage TSM server option. As part of a license
audit operation, the server calculates,
by node, the amount of server storage
used for backup, archive, and
space-managed files. For servers
managing large amounts of data, this
calculation can take a great deal of CPU
time and can stall other server
activity. You can use the AUDITSTorage
option to specify that storage is not to
be calculated as part of a license
audit. Note: This option was previously
called NOAUDITStorage. Syntax:
"AUDITSTorage Yes|No"
Yes Specifies that storage is to be
calculated as part of a license
audit. This is the default.
No Specifies that storage is not to be
calculated as part of a license
audit. (Expect this to impair the
results from Query AUDITOccupancy)
Authentication The process of checking and authorizing
a user's password before allowing that
user access to the ADSM server.
(Password prompting does not occur if
PASSWORDAccess is set to Generate.)
Authentication can be turned on or off
by an administrator with system
privilege.
See also: Password security
Authentication, query 'Query STatus'
Authentication, turn off 'Set AUthentication OFf'
Authentication, turn on 'Set AUthentication ON'
The password expiration period is
established via 'Set PASSExp NDays'
(Defaults to 90 days).
Authorization Rule A specification that allows another user
to either restore or retrieve a user's
objects from ADSM storage.
Authorized User In the TSM Client for Unix: any user
running with a real user ID of 0 (root)
or who owns the TSM executable with the
owner execution permission bit set to s.
Auto Fill 3494 device state for its tape drives:
pre-loading is enabled, which will keep
the ACL index stack filled with volumes
from a specified category.
See /usr/include/sys/mtlibio.h
Auto Migration, manually perform for 'dsmautomig [FSname]'
file system (HSM)
Auto Migrate on Non-Usage In output of 'dsmmigquery -M -D', an
(HSM) attribute of the management class which
specifies the number of days since a
file was last accessed before it is
eligible for automatic migration.
Defined via AUTOMIGNOnuse in management
class.
See: AUTOMIGNOnuse
Auto-sharing See: 3590 tape drive sharing
AUTOFsrename Macintosh and Windows clients option
controlling the automatic renaming of
pre-Unicode filespaces on the *SM server
when a Unicode-enabled client is first
used. The filespace is renamed by
adding "_OLD" to the end of its name.
Syntax:
AUTOFsrename Prompt | Yes | No
AUTOLabel Parameter of DEFine LIBRary, as of TSM
5.2, to specify whether the server
attempts to automatically label tape
volumes for SCSI libraries.
See: DEFine LIBRary
Autoloader A strictly sequential tape magazine for
3480/3490 tape drives.
Contrast with Library, which is random.
Automatic Cartridge Facility 3590 tape drive: a magazine which can
hold 10 cartridges.
Automatic migration (HSM) The process HSM uses to automatically
move files from a local file system to
ADSM storage based on options and
settings chosen by a root user on your
workstation. This process is controlled
by the space monitor daemon
(dsmmonitord).
Is governed by the
"SPACEMGTECH=AUTOmatic|SELective|NONE"
operand of MGmtclass.
See also: threshold migration; demand
migration; dsmautomig
Automatic reconciliation The process HSM uses to reconcile your
file systems at regular intervals set by
a root user on your workstation. This
process is controlled by the space
monitor daemon (dsmmonitord).
See: Reconciliation; RECOncileinterval
AUTOMIGNOnuse Mgmtclass parameter specifying the
number of days which must elapse since
the file was last accessed before it
is eligible for automatic migration.
Default: 0 meaning that the file is
immediately available for migration.
Query: 'Query MGmtclass' and look for
"Auto-Migrate on Non-Use".
Beware setting this value higher than
one or two days: if all the files are
accessed, the migration threshold may
be exceeded and yet no migration can
occur; hence, a thrashing situation.
See also: Auto Migrate on Non-Usage
AUTOMOUNT (ADSMv2 only) Client System Options file (dsm.sys)
option for Sun systems only. Specifies
a symbolic link to an NFS mount point
monitored by an automount daemon.
There is no support for automounted file
systems under AIX.
Availability Element of 'Query STatus', specifying
whether the server is enabled or
disabled; that is, it will be "Disabled"
if 'DISAble SESSions' had been done,
else will show "Enabled". In the
'Query STatus' output, look for
"Availability".
Average file size: ADSMv2: In the summary statistics from
an Archive or Backup operation, is the
average size of the files processed.
Note that this value is the true
average, and is not the "Total number of
bytes transferred" divided by "Total
number of objects backed up" because the
"transferred" number is often inflated
by retries and the like.
See also: Total number of bytes
transferred
AVG SQL statement to yield the average of
all the rows of a given numeric column.
See also: COUNT; MAX; MIN; SUM
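A hedged example, drawing on the AUDITOCC table described earlier, to get the average storage occupancy per node:

```sql
SELECT AVG(TOTAL_MB) FROM AUDITOCC
```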

B Unit declarator signifying Bytes.
Example: "Page size = 4 KB"
b Unit declarator signifying bits.
Example: "Transmit at 56 Kb/sec"
B/A Abbreviation for Backup/Archive, as when
referring to the B/A Client manual.
BAC Informal acronym for the Backup/Archive
Client.
BAC Binary Arithmetic Compression: algorithm
used in the IBM 3480 and 3490 tape
system's IDRC for hardware compression
the data written to tape.
See also: 3590 compression of data
Back up some files once a week See IBM doc "How to backup only some
files once a week":
http://www.ibm.com/support/docview.wss?
uid=swg21049445
Back up storage pool See: BAckup STGpool
BACKDELete A Yes/No parameter on the 'REGister
Node' and 'UPDate Node' commands to
specify whether the client node can
delete its own backup files from the
server, as part of a dsmc Delete
Filespace. Default: No.
See also: ARCHDELete
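A minimal sketch of enabling and verifying it (node name hypothetical):

```
UPDate Node SOMENODE BACKDELete=Yes
Query Node SOMENODE Format=Detailed
```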
Backed-up files, list from client 'dsmc Query backup "*" -FROMDate=xxx
-NODename=xxx -PASsword=xxx'
Backed-up files, list from server You can do a Select on the Backups or
Contents table for the filespace; but
there's a lot of overhead in the query.
A lower overhead method, assuming that
the client data is Collocated, is to do
a Query CONTent on the volume it was
more recently using (Activity Log, SHow
VOLUMEUSAGE). A negative COUnt value
will report the most recent files first,
from the end of the volume.
Backed-up files count (HSM) In dsmreconcile log.
Backhitch Relatively obscure term used to
describe the start/stop repositioning
that some tape drives have to perform
after writing stops, in order to
recommence writing the next burst of
data adjoining the last burst. This is
time-consuming and prolongs the backup
of small files. Lesser tape
technologies such as DLT are notorious
for this.
This effect is sometimes called
"shoe-shining", referring to the
reciprocating motion.
Redbook "IBM TotalStorage Tape
Selection and Differentiation Guide"
notes that LTO is 5x slower than 3590H
in its backhitch; and "In a non-data
streaming environment, the excellent
tape start/stop and backhitch
properties of the 3590 class provides
much better performance than LTO."
See Tivoli whitepaper "IBM LTO Ultrium
Performance Considerations"
Ref: IBM site Technote 1111444
See also: DLT and start/stop operations;
"shoe-shining"; Start-stop; Streaming
Backint SAP client; uses the TSM API and
performs TSM Archiving rather than
Backup.
Msgs prefix: BKI
See also: TDP for R/3
BACKRETention Parameter of 'DEFine DOmain' to specify
the retention grace period for the
policy domain, to protect old versions
from deletion when the respective
Copy Group is not available. You should,
however, have a Copy Group to formally
establish your retention periods: do
'Query COpygroup' to check.
Specify as the number of days (from date
of deactivation) to retain backup
versions that are no longer on the
client's system.
Backup The process of copying one or more
files, directories, and ACLs to a server
backup type storage pool to protect
against data loss.
During a Backup, the server is
responsible for evaluating
versions-based retention rules, to mark
the oldest Inactive file as expired if
the new incoming version causes the
oldest Inactive version to be "pushed
out" of the set. (See: "Versions-based
file expiration")
ADSMv2 did not back up special files:
character, block, FIFO (named pipes), or
sockets.
ADSMv3 *will* back up some special
files: character, block, FIFO (named
pipes); but ADSMv3 will *not* back up or
restore sockets (see "Sockets and
Backup/Restore").
More trivially, the "." file in the
highest level directory is not backed
up, which is why "objects backed up" is
one less than "objects inspected".
Backups types:
- Incremental: new or changed files;
Can be one of:
- full: all new and changed files
are backed up, and takes care of
deleted files;
- partial: simply looks for files
new or changed since last backup
date, so omits old-dated files new
to client, and deleted files are
not expired. An example of a
partial incremental is
-INCRBYDate.
Via 'dsmc Incremental'.
(Note that the file will be
physically backed up again only if
TSM deems the content of the file to
have been changed: if only the
attributes (e.g., Unix permissions)
have been changed, then TSM will
simply update the attributes of the
object on the server.)
- Selective: you select the files.
Via 'dsmc Selective'.
Priority: Lower than BAckup DB, higher
than Restore.
Full incrementals are the norm, as
started by 'dsmc incremental /FSName'.
Use an Include-Exclude Options File if
you need to limit inclusion.
Use a Virtual Mount Point to start at
other than the top of a file system.
Use the DOMain Client User Options File
option to define default filesystems to
be backed up.
(Incremental backup will back up empty
directories.
Do 'dsmc Query Backup * -dirs -sub=yes'
on the client to find the empties, or
choose Directory Tree under 'dsm'.)
To effect backup, TSM examines the
file's attributes such as size,
modification date and time (Unix mtime),
ownership (Unix UID), group (Unix GID),
(Unix) file permissions, ACL, special
opsys markers such as NTFS file security
descriptors, and compares it to those
attributes of the most recent backup
version of that file. (Unix atime -
access time - is ignored.) Ref: B/A
Client manual, "Backing Up and Restoring
Files" chapter, "Backup: Related
Topics", "What Does TSM Consider a
Changed File"; and under the description
of Copy Mode. This means that for normal
incremental backups, TSM has to query
the database for each file being backed
up in order to determine whether that
file is a candidate for incremental
backup. This adds some overhead to the
backup process.
TSM tries to be generic where it can,
and in Unix does not record the inode
number. Thus, if a 'cp -p' or 'mv' is
done such that the file is replaced (its
inode number changes) but only the ctime
attribute is different, then the file
data will not be backed up in the next
incremental backup: the TSM client will
just send the new ctime value for
updating in the TSM database.
Backup changes the file's access
timestamp (Unix stat struct st_atime):
the time of last "access" or
"reference", as seen via Unix 'ls -alu
...' command. The NT client uses the
FILE_FLAG_BACKUP_SEMANTICS option when a
file is opened, to prevent updating the
Access time.
See also: Directories and Backup;
-INCRBYDate; SLOWINCREMENTAL;
Updating-->
Contrast with Restore.
For a technique on backing up a large
number of individual files, see entry
"Archived files, delete from client".
Backup, batched transaction buffering See: TXNBytelimit
Backup, delete all copies Currently the only way to purge all
copies of a single file on the server
is to setup a new Management Class
which keeps 0 versions of the file.
Run an incremental while the file is
still on the local FS and specify this
new MC on an Include statement for
that file. Next change the
Include/Exclude so the file now is
excluded. The next incremental will
expire the file under the new policy
which will keep 0 inactive versions of
the file.
Backup, delete part of it ADSM doesn't provide a means for server
commands to delete part of a backup; but
you can effect it by emplacing an
Exclude for the object to be deleted:
the next backup will render it obsolete
in the backups.
Backup, exclude files Specify "EXclude" in the Include-exclude
options file entry to exclude a file or
group of files from ADSM backup
services. (Directories are never
excluded from backups.)
Backup, full (force) You can get a full backup of a file
system via one of the following methods
(being careful to weigh the
ramifications of each approach):
- In the server, do 'UPDate COpygroup
... MODE=ABSolute' in the associated
Management Class, which causes files
to be backed up regardless of having
been modified. (You will have to do a
'VALidate POlicyset' and 'ACTivate
POlicyset' to put the change into
effect.) Don't forget to change back
when the backup is done.
- Consider GENerate BACKUPSET (q.v.),
which creates a package of the file
system's current Active backup files.
See: Backup Set;
dsmc REStore BACKUPSET;
Query BACKUPSETContents
- At PC client: relabel the drive and do
a backup.
At Unix client: mount the file system
read-only at a different mount point
and do a backup.
- As server admin, do 'REName FIlespace'
to cause the filespace to be fully
repopulated in the next backup
(hence a full backup): you could then
rename this just-in filespace to some
special name and rename the original
back into place.
- Do a Selective Backup; like
'dsmc s -su=y FSname' in Unix.
(In the NT GUI, next to the Help
button there is a pull down menu:
choose option "always backup".)
- Define a variant node name which would
be associated with a management class
with the desired retention policy,
code an alternate server stanza in the
Client System Options file, and select
it via the -SErvername command line
option.
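The first method above, sketched with the STANDARD policy objects (substitute your own domain, policy set, and management class names):

```
UPDate COpygroup STANDARD STANDARD STANDARD Type=Backup MODE=ABSolute
VALidate POlicyset STANDARD STANDARD
ACTivate POlicyset STANDARD STANDARD
/* ...run the backup, then set MODE=MODified and re-activate */
```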
Backup, full, periodic (weekly, etc.) Some sites have backup requirements
which do not mesh with TSM's
"incremental forever" philosophy. For
example, they want to perform
incrementals daily, and fulls weekly and
monthly. For guidance, see IBM site
Solution 1083039, "Performing Full
Client Backups with TSM".
See also: Split retentions
Backup, last (most recent) Determine the date of last backup via:
Client command:
'dsmc Query Filespace'
Server commands:
'Query FIlespace [NodeName]
[FilespaceName]
Format=Detailed'
SELECT * FROM FILESPACES WHERE -
NODE_NAME='UPPER_CASE_NAME'
and look at BACKUP_START, BACKUP_END
Backup, management class used Shows up in 'query backup', whether via
command line or GUI.
Backup, more data than expected going If you perform a backup and expect like
5 GB of data to go and instead find much
more, it's usually a symptom of retries,
as in files being open and changing
during the backup.
Backup, OS/2 OS/2 files have an archive byte (-a or
+a). Some say that if this changes,
ADSM will back up such files; but others
say that ADSM uses the
filesize-filedate-filetime combination.
Backup, prohibit See: Backups, prevent
Backup, selective A function that allows users to back up
objects from a client domain that are
not excluded in the include-exclude list
and that meet the requirement for
serialization in the backup copy group
of the management class assigned to each
object.
Performed via the 'dsmc Selective' cmd.
See: Selective Backup.
Backup, space used by clients (nodes) 'Query AUDITOccupancy [NodeName(s)]
on all volumes [DOmain=DomainName(s)]
[POoltype=ANY|PRimary|COpy'
Note: You need to run 'AUDit LICenses'
before doing 'Query AUDITOccupancy' for
the reported information to be current.
Backup, subfile See: Adaptive Differencing; Set SUBFILE;
SUBFILE*
Backup, successful? Consider something like the following to
report on errors, to be run via
schedule:
/* FILESERVER BACKUP EXCEPTIONS */
Query EVent DomainName SchedName
BEGINDate=TODAY-1 ENDDate=TODAY-1
EXceptionsonly=YES Format=Detailed
>> /var/log/backup-problems
File will end up with message:
"ANR2034E QUERY EVENT: No match found
for this query."
if no problems (no exceptions found).
Backup, undo There is no way to undo standard client
Incremental or Selective backups.
Backup, which file systems to back up Specify a file system name via the
"DOMain option" (q.v.) or specify a file
system subdirectory via the
"VIRTUALMountpoint" option (q.v.) and
then code it like a file system in the
"DOMain option" (q.v.).
Backup, which files are backed up See the client manual; search the PDF
(Backup criteria) for the word "modified".
In the Windows client manual, see:
- "Understanding which files are backed
up"
- "Copy mode"
- "Resetarchiveattribute"
(TSM does not use the Windows archive
attribute to determine if a file is a
candidate for incremental backup.)
- And, Windows Journal-based backup.
It is also the case that TSM respects
the entries in Windows Registry subkey
HKLM\System\CurrentControlSet\Control\
BackupRestore\FilesNotToBackup
(No, this is not mentioned in the
client manual; is in the 4.2 Technical
Guide redbook. File \Pagefile.sys
should be in this list.)
Always do 'dsmc q inclexcl' in Windows
to see the realities of inclusion.
Note that there is also a list of
Registry keys not to be restored, in
KeysNotToRestore.
Unix: See the criteria listed under the
description of "Copy mode" (p.128 of the
5.2 manual).
See also: MODE
Backup copies, number of Defined in Backup Copy Group.
Backup Copy Group A policy object that contains attributes
which control the generation,
destination, and expiration of backup
versions of files. A backup copy group
belongs to a management class.
Backup Copy Group, define 'DEFine COpygroup DomainName PolicySet
MGmtclass [Type=Backup]
DESTination=Pool_Name
[FREQuency=Ndays]
[VERExists=N_Versions|NOLimit]
[VERDeleted=N_Versions|NOLimit]
[RETExtra=N_Versions|NOLimit]
[RETOnly=N_Versions|NOLimit]
[MODE=MODified|ABSolute]
[SERialization=SHRSTatic|STatic|
SHRDYnamic|DYnamic]'
Backup Copy Group, update 'UPDate COpygroup DomainName PolicySet
MGmtclass [Type=Backup]
[DESTination=Pool_Name]
[FREQuency=Ndays]
[VERExists=N_Versions|NOLimit]
[VERDeleted=N_Versions|NOLimit]
[RETExtra=N_Versions|NOLimit]
[RETOnly=N_Versions|NOLimit]
[MODE=MODified|ABSolute]
[SERialization=SHRSTatic|STatic|
SHRDYnamic|DYnamic]'
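A filled-in example of the syntax above (pool and policy names hypothetical): keep 3 versions while the file exists, 1 after deletion, extra versions for 30 days, and the last version for 60:

```
DEFine COpygroup STANDARD STANDARD STANDARD Type=Backup -
  DESTination=BACKUPPOOL VERExists=3 VERDeleted=1 -
  RETExtra=30 RETOnly=60
```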
BAckup DB TSM server command to back up the TSM
database to tape (backs up only used
pages, not the whole physical space).
This operation is essential when
LOGMode Rollforward is in effect, as
this is the only way that the Recovery
Log is cleared. It's unclear whether
this operation copies the current
dbvolume configuration to the output
volume; but that doesn't matter, in that
the 'dsmserv restore db' operation
requires that a TSM server already be
installed with a formatted db and
recovery log, where that space will be
used as the destination of the restored
data. Syntax:
'BAckup DB DEVclass=DevclassName
[Type=Incremental|
Full|DBSnapshot]
[VOLumenames=VolNames|
FILE:File_Name]
[Scratch=Yes|No]
[Wait=No|Yes]'
The VOLumenames list will be used if
there is at least one volume in it which
is not already occupied; else TSM will
use a scratch tape per the default
Scratch=Yes.
Note that the DevClass can be of DEVType
FILE...which could allow you to have a
large-capacity hard drive inside a
fire-proof enclosure so as to produce a
secure backup for disaster with no extra
effort.
DBSnapshot Specifies that you want to
run a full snapshot database backup, to
make a "point in time" image for
possible later db restoral (in which
the Recovery Log will *not*
participate). The entire contents of a
database are copied and a new snapshot
database backup is created without
interrupting the existing full and
incremental backup series for the
database. If roll-forward db mode is in
effect, and a snapshot is performed,
the recovery log keeps growing. Before
doing one of these, be aware that the
latest snapshot db backup cannot be
deleted!
Priority: Higher than filespace Backup,
so will preempt it if conflict.
The Recovery Log space represented in
the backup will not be reclaimed until
the backup finishes: the Pct Util does
not decrease as the backup proceeds.
The tape used *does* show up in a 'Query
MOunt'. Note that unlike in other ADSM
tape operations, the tape is immediately
unloaded when the backup is complete.
If using scratch volumes, beware that
this function will gradually consume
all your scratch volumes unless you do
periodic pruning ('DELete VOLHistory').
If specifying volsers to use, they must
*not* already be assigned to a DBBackup
or storage pool: if they are, ADSM will
instead try to use a scratch volume,
unless Scratch=No.
Example: 'BAckup DB
DEVclass=LIBR.DEVC_3590
VOL=000050 Type=full
Scratch=No'
You should free old dbbackup volumes:
'DELete VOLHistory TOD=-N T=DBB'
where "-N" should specify a value like
-7, saying to delete any older than 7
days, meaning you keep the latest 7 days
worth for safety. It is best to
schedule this deletion to occur
immediately prior to doing BAckup DB: in
this way you can assure that a tape will
be available, even if the scratch pool
was exhausted.
Messages: ANR1360I when output volume
opened; ANR1361I when the volume is
closed; ANR4554I tracks progress;
ANR4550I at completion (reports number
of pages backed up). If you neglect to
perform a BAckup DB for some time and a
significant amount of database updating
has occurred, you will be reminded of
this by an ANR2121W message in the
Activity Log.
Incremental DB Backup does *not*
automatically write to the last tape
used in a full backup: it will write to
a scratch tape instead. (And each
incremental writes to a new tape.)
Queries: Do either:
'Query VOLHistory Type=DBBackup' or
'Query LIBVolume'
to reveal the database backup volume.
(A 'Query Volume' is no help because it
only reports storage pool volumes, and
by their nature, database backup media
are outside ADSM storage.)
See: Database backup volume, pruning.
By using the ADSMv3 Virtual Volumes
capability, the output may be stored on
another ADSM server (electronic
vaulting).
See also: DELete VOLHistory;
dsmserv RESTORE DB
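Putting the entry's advice together as a nightly macro sketch (device class name from the example above; keeps 7 days of db backups, pruning just before the new backup so a scratch tape is freed):

```
DELete VOLHistory TODate=TODAY-7 Type=DBBackup
BAckup DB DEVclass=LIBR.DEVC_3590 Type=Full Wait=Yes
```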
BAckup DB performance As of mid-2001, BAckup DB is still a
plodding task. Data rates, even with
the best disk, tape, and CPU hardware,
are only 3 - 4 MB/sec, which is well
below hardware speeds. Thus, the TSM
database system itself is the drag on
performance.
BAckup DB to a scratch 3590 tape Perform like the following example:
in the 3494 'BAckup DB DEVclass=LIBR.DEVC_3590
Type=Full'
BAckup DB to a specific 3590 tape Perform like the following example:
in the 3494 'BAckup DB DEVclass=LIBR.DEVC_3590
Type=Full VOLumenames=000050
Scratch=No'
BAckup DEVCONFig ADSM server command to back up the
device configuration information which
ADSM uses in standalone recoveries.
Syntax:
'BAckup DEVCONFig [Filenames=___]'
(No entry is written to the Activity Log
to indicate that this was performed.)
See also DEVCONFig server option.
Backup failure message "ANS4638E Incremental backup of
'FileSystemName' finished with 2
failure"
Backup files See also: File name uniqueness
Backup files: deletable by client Controlled by the BACKDELete parameter
node? on the 'REGister Node' and 'UPDate Node'
commands. Default: No (which thus
prohibits a "DELete FIlespace" operation
from the client).
Query via 'Query Node Format=Detailed'.
Backup files, management class binding By design, you can not have different
backup versions of the same file bound
to different management classes. All
backup versions of a given file are
bound to the same management class.
Backup files, delete *SM provides no inherent method to do
this, but you can achieve it by the
following paradigm:
1. Update Copygroup Verexists to 1,
ACTivate POlicyset, do a fresh
incremental backup. This gets rid of
all but the last (active) version of
a file.
2. Update Copygroup Retainonly and
Retainextra to 0; ACTivate POlicyset;
EXPIre Inventory. This gets ADSM to
forget about inactive files.
3. If the files are "uniquely
identified by the sub-directory
structure above the files" add those
dirs to the exclude list. Do an
Incremental Backup. The files in the
excluded dirs get marked inactive.
The next EXPIre Inventory should then
remove them from the tapes.
See also: Database, delete table entry
Backup files, list from server 'Query CONtent VolName ...'
Backup files, retention period Is part of the Copy Group definition.
Is defined in DEFine DOmain to provide a
just-in-case default value.
Note that there is one Copy Group in a
Management Class for backup files, and
one for archived files, so the retention
period is essentially part of the
Management Class.
Backup files, versions 'SHOW Versions NodeName FileSpace'
Backup files for a node, list from SELECT NODE_NAME, FILESPACE_NAME, -
SERVER HL_NAME, LL_NAME, OWNER, STATE, -
BACKUP_DATE, DEACTIVATE_DATE FROM -
BACKUPS WHERE -
NODE_NAME='UPPER_CASE_NAME'
(Be sure that node name is upper case.)
See also: HL_NAME; LL_NAME
Backup generations See "Backup version"
Backup Image See: dsmc Backup Image
Backup laptop computers One technique: Define a schedule for
laptop users that spans a 24-hour window
and have the scheduler service running,
as SCHEDMODE POLLING, starting at boot.
This will cause the scheduler to try to
contact the server every 20 minutes.
When the laptop connects to the network,
sometime within the next 20 minutes the
scheduler will be able to contact the
server, and if the schedule has not yet
been executed, it will run. (This is
preferable to invoking dsmc at boot
time, as the schedule technique deals
with the situation where users employ
sleep mode a lot, rather than shutting
down.)
There are, of course, competing products
to back up mobile PCs, such as
BrightStor ARCserve Backup for Laptops &
Desktops.
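A sketch of such a 24-hour-window schedule (domain, schedule, and node names hypothetical):

```
DEFine SCHedule STANDARD LAPTOPS ACTion=Incremental -
  STARTTime=00:00 DURation=23 DURUnits=Hours -
  PERiod=1 PERUnits=Days
DEFine ASSOCiation STANDARD LAPTOPS LAPTOP01
```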
Backup objects for day, query at server SELECT * FROM BACKUPS WHERE -
NODE_NAME='UPPER_CASE_NAME' AND -
FILESPACE_NAME='___' AND -
DATE(BACKUP_DATE)='2000-01-14'
Backup of HSM-managed files Use one server for HSM plus the Backup
of that HSM area: this allows ADSM to
effect the backup (of large files) by
copying from one storage pool tape to
another, without recalling the file to
the host file system.
In the typical backup of an HSM-managed
file system, ADSM will back up all the
files too small to be HSM-migrated (4095
bytes or less); and then any files which
were in the disk level of the HSM
storage pool hierarchy, in that they had
not yet migrated down to the tape level;
and then copy across tapes in the
storage pool. If Backup gets hung up on
a code defect while doing cross-tape
backup, you can circumvent by doing a
dsmrecall of the problem file(s). The
backup will then occur from the file
system copy.
Be advised that cross-pool backup can
sometimes require three drives, as files
can span tapes. With only two drives,
you can run into an "Insufficient mount
points available" condition (ANR0535W,
ANR0567).
Backup Operation Element of report from
'Query VOLHistory' or
'DSMSERV DISPlay DBBackupvolumes' to
identify the operation number for this
volume within the backup series. Will
be 0 for a full backup, 1 for first
incremental backup, etc.
See also: Backup Series
Backup operation, retry when file in Have the CHAngingretries (q.v.) Client
use System Options file (dsm.sys) option
specify how many retries you want.
Default: 4.
Backup performance Many factors can affect backup
performance. Here are some things to
look at:
- Client system capability and load at
the time of backup.
- If Expiration is running on the
server, performance is guaranteed to
be impaired, due to the CPU and
database load involved.
- Use client compression judiciously.
Be aware that COMPRESSAlways=No can
cause the whole transaction and all
the files involved within it to be
processed again, without compression.
This will show up in the "Objects
compressed by:" backup statistics
number being negative (like "-29%").
(To see how much compression is
costing, compress a copy of a typical,
large file that is involved in your
backups, outside of TSM, performing
the compression with a utility like
gzip.)
Beware that using client compression
and sending that data to tape drives
which also compress data can result in
prolonged time at the tape drive as
its algorithms struggle to find
patterns in the patternless compressed
data.
- Using the MEMORYEFficientbackup option
considerably reduces performance.
- The client manual advises: "A very
large include-exclude list may
decrease backup performance."
- A file system that does compression
(e.g., NTFS) will prolong the job.
- Backing up a file system which is
networked to this client system rather
than native to it (e.g., NFS, AFS)
will naturally be relatively slow.
- Make sure that if you activated client
tracing in the past that you did not
leave it active, as its overhead will
dramatically slow client performance.
- File system topology: conventional
directories with more than about 1000
files slow down all access, including
ADSM. (You can gauge this by doing a
Unix 'find' command in large file
systems and appreciate just how
painful it is to have too many files
in one directory.)
- Consider using MAXNUMMP to increase
the number of drives you may
simultaneously use.
- Your Copy Group SERialization choice
could be causing the backup of active
files to be attempted multiple times.
- May be waiting for mount points on the
server. Do 'Query SEssion F=D'.
- Examine the Backup log for things like
a lot of retries on active files, and
inspect the timestamp sequence for
indications of problem areas in the
file system.
- If an Incremental backup is slow while
a Selective or Incrbydate is fast, it
can indicate a client with
insufficient real memory or other
processes consuming memory that the
client needs to process an Active
files list expeditiously.
- If the client under-estimates the size
of an object it is sending to the
server, there may be performance
degradation and/or the backup may
fail. See IBM site TechNote 1156827.
- Defragment your hard drive! You can
regain a lot of performance. (This
can also be achieved by performing a
file-oriented copy of the file system
to a fresh disk, which will also
eliminate empty space in directories.)
- If a Windows system, consider running
DISKCLEAN on the filesystem.
- In a PC, routine periodic executions
of a disk analyzer (e.g., CHKDSK, or
more thorough commercial product) are
vital to find drive problems which can
impair performance.
- Do your schedule log, dsmerror log, or
server Activity Log show errors or
contention affecting progress?
- Avoid using the unqualified Exclude
option to exclude a file system or
directory, as Exclude is for *files*:
subdirectories will still be traversed
and examined for candidates. Instead,
use Exclude.FS or Exclude.Dir, as
appropriate.
- TSM Journaling may help a lot.
- The number of versions of files that
you keep, per your Backup Copy Group,
entails overhead: during a Backup, the
server has additional work to do in
checking retention policies for each
new version of a file, which may cause
the oldest one in the storage pool to
be marked for expiration.
See also: DEACTIVATE_DATE
- If AIX, consider using the TCPNodelay
client option to send small
transactions right away, before
filling the TCP/IP buffer.
- If running on a PC, disable anti-virus
and other software which adds overhead
to file access.
- Backups of very large data masses,
such as databases, benefit from going
directly to tape, where streaming can
often be faster than first going to
disk, with its rotational positioning
issues. And speed will be further
increased by hardware data compression
in the drive.
- If backups first go to a disk storage
pool, consider making it RAID type, to
benefit from parallel striping across
multiple, separate channels & disk
drives. But avoid RAID 5, which is
poor at sequential writing.
- Make sure your server BUFPoolsize is
sufficient to cache some 99% of
requests (do 'q db f=d'), else server
performance plummets.
- Maximize your TXNBytelimit and
TXNGroupmax definitions to make the
most efficient use of network
bandwidth.
- Balance access of multiple clients to
one server and carefully schedule
server admin tasks to avoid waiting
for tape mounts, migration,
expirations, and the like. Migration
in particular should be avoided during
backups: see IBM site TechNote
1110026.
- Make sure that LARGECOMmbuffers Yes
is in effect in your client (the
default is No, except for AIX).
- The client RESOURceutilization option
can be used to boost the number of
sessions.
- If server and client are in the same
system, use Shared Memory in Unix and
Named Pipes in Windows.
- If client accesses server across
network, examine TCP/IP tuning values
and see if other unusual activity is
congesting the network.
- See if your client TCPWindowsize is
too small - but don't increase it
beyond a recommended size. (63 is good
for Windows.)
- Is your ethernet card in Autonegotiate
mode? Shame on you!
- Beware the invisible: networking
administrators may have changed the
"quality of service" rating - perhaps
per your predecessor - so that *SM
traffic has reduced priority on that
network link.
- If it is a large file system and the
directories are reasonably balanced,
consider using VIRTUALMountpoint
definitions to allow backing up the
file system in parallel.
- A normal incremental backup on a very
large file system will cause the *SM
client to allocate large amounts of
memory for file tables, which can
cause the client system to page
heavily. Make sure the system has
enough real memory, and that other
work running on that system at the
same time is not causing contention
for memory. Consider doing Incrbydate
backups, which don't use file tables,
or perhaps "Fast Incrementals".
- Consider it time to split that file
system into two or more file systems
which are more manageable.
- Look for misconfigured network
equipment (adapters, switches, etc.).
- Are you using ethernet to transfer
large volumes of data? Consider that
ethernet's standard MTU size is tiny,
fine for messaging but not well suited
to large volumes of data, making for a
lot of processor and transmission
overhead in transferring the data in
numerous tiny packets. Consider the
Jumbo Frame capability in some
incarnations of gigabit ethernet, or a
transmission technology like fibre
channel, which is designed for volume
data transfers. That is, ethernet's
capacity does not scale in proportion
to its speed increase.
- If warranted, put your *SM traffic
onto a private network (like a SAN
does) to avoid competing with other
traffic in getting your data through.
- If you have multiple tape drives on
one SCSI chain, consider dedicating
one host adapter card to each drive in
order to maximize performance.
- If your computer system has only one
bus, it could be constrained. (RS/6000
systems can have multiple, independent
buses, which distribute I/O.)
- Tape drive technologies which don't
handle start-stop well (e.g., DLT)
will prolong backups. See: Backhitch
- Automatic tape drive cleaning and
retries on a dirty drive will slow
down the action.
- Tapes whose media is marginal may be
tough for the tape drive to write, and
the drive may linger on a tape block
for some time, laboring until it
successfully writes it - and may not
give any indication to the operating
system that it had to undertake this
extra effort and time. (As an example,
with a watchable task: Via 'Query
Process' I once observed a Backup
Stgpool taking about four times as
long as it should in writing a 3590
tape, the Files count repeatedly
remaining constant over 20 seconds as
it struggled to write modest-sized
files.)
- If you mix SCSI device types on a
single SCSI chain, you may be limiting
your fastest device to the speed of
the slowest device. For example,
putting a single-ended device on a
SCSI chain with a differential device
will cause the chain speed to drop to
that of the single-ended device.
- In Unix, use the public domain 'lsof'
command to see what the client process
is currently working on.
- In Solaris, consider utilizing
"forcedirectio" (q.v.). To analyze
performance, use the 'truss' command
to see where the client is processing.
- Is cyclic redundancy checking enabled
for the server/client (*SM 5.1)? This
entails considerable overhead.
- Exchange 2000: Consider un-checking
the option "Zero Out Deleted Database
Pages" (required restart of the
Exchange Services). See IBM article
ID# 1144592 titled "Data Protection
for Exchange On-line Backup
Performance is Slow" and Microsoft
KB 815068.
- A Windows TSM server may be I/O
impaired due to its SCSI or Fibre
Channel block size.
See IBM site Technote 1167281.
If none of the above pan out, consider
rerunning the problem backup with client
tracing active. See CLIENT TRACING near
the bottom of this document.
See also: Backup taking too long; Client
performance factors; Server performance
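The compression-cost check suggested above (compressing a copy of a typical large file outside of TSM) can be scripted. A minimal sketch, which synthesizes its own sample file for illustration - substitute a real, representative file from your backups:

```shell
# Estimate what client compression saves (or costs) for a file.
# The sample file below is synthesized purely for illustration;
# point FILE at one of your own large files instead.
FILE=/tmp/qf.sample
yes "some repetitive payload" | head -c 1000000 > "$FILE"
orig=$(wc -c < "$FILE")
gzip -c "$FILE" > "$FILE.gz"
comp=$(wc -c < "$FILE.gz")
# Percent reduction; a negative value would mean expansion, as
# typically happens with already-compressed data.
echo "compressed by: $(( (orig - comp) * 100 / orig ))%"
```

Time the gzip step on a real file to gauge the CPU cost that client compression would add to each backup.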
Backup performance with 3590 tapes Writing directly to 3590 tapes, rather
than have an intermediate disk, is 3X-4X
faster: 3590's stream the data where
disks can't. Ref: ADSM Version 2
Release 1.5 Performance Evaluation
Report.
Backup preview TSM 5.3 introduced the ability to
preview the files which would be sent to
the server in a Backup operation, per
the client Include-Exclude specs.
BACKup REgistry During Incremental backup of a Windows
system, the Registry area is backed up.
However, in cases where you want to back
up the Registry alone, you can do so
with the BACKup REgistry command.
The command backs up Registry hives
listed in Registry key
HKEY_LOCAL_MACHINE\System\
CurrentControlSet\Control\Hivelist
Syntax: BACKup REgistry
Note that in current clients there are
no operands, to guarantee system
consistency. Earlier clients had
modifying parameters:
BACKup REgistry ENTIRE
Backs up both the Machine and User
hives.
BACKup REgistry MACHINE
Backs up the Machine root key hives
(registry subkeys).
BACKup REgistry USER
Backs up User root key hives (registry
subkeys).
See also: BACKUPRegistry
Backup Required Before Migration In output of 'dsmmigquery -M -D', an
(HSM) attribute of the management class which
determines whether it is necessary for a
backup copy (Backup/Restore) of the file
to exist before it can be migrated by
HSM.
Defined via MIGREQUIRESBkup in
management class.
See: MIGREQUIRESBkup
Backup retention grace period The number of days ADSM retains a backup
version when the server is unable to
rebind the object to an appropriate
management class. Defined via the
BACKRETention parameter of
'DEFine DOmain'.
Backup retention grace period, query 'Query DOmain Format=Detailed', see
"Backup Retention (Grace Period)".
Backup Series Element of report from
'Query VOLHistory' or
'DSMSERV DISPlay DBBackupvolumes' to
identify the TSM database backup series
of which the volume is a part. Each
backup series consists of a full backup
and all incremental backups that apply
to that full backup, up to the next full
backup of the TSM database.
Note: After a DSMSERV LOADDB, the Backup
Series number will revert to 1.
When doing DELete VOLHistory, be sure to
delete the whole series at once, to
avoid the ANR8448E problem.
See also: BAckup VOLHistory
Backup sessions, multiple See: RESOURceutilization
Backup Set TSM 3.7+ facility to create a collection
of a client node's current Active backup
files as a single point-in-time amalgam
(snapshot) on sequential media, to be
stored and managed as a single object in
a format tailored to and restorable on
the client system whose data is therein
represented. The GENerate BACKUPSET
server command is used to create the
set, intended to be written to
sequential media, typically of a type
which can be read either on the server
or client such that the client can
perform a 'dsmc REStore BACKUPSET'
either through the TSM server or by
directly reading the media from the
client node. The media is often
something like a CD-ROM, JAZ, or ZIP.
Note that you cannot write more than
one Backup Set to a given volume. If this
is a concern, look into server-to-server
virtual volumes. (See: Virtual Volumes)
Also known by the misleading name
"Instant Archive".
Note that the retention period can be
specified when the backup set is
created: it is not governed by a
management class.
Also termed "LAN-free Restore".
The consolidated, contiguous nature of
the set speeds restoral. ("Speeds" may
be an exaggeration: while Backup Sets
are generated via TSM db lookups, they
are restored via lookups in the
sequential media in which the Backup Set
is contained, which can be slow.)
Backup Sets are frozen, point-in-time
snapshots: they are in no way
incremental, and nothing can be added to
one.
But there are several downsides to this
approach: The first is that it is
expensive to create the Backup Set in
terms of time, media, and mounts.
Second, the set is really "outside" of
the normal TSM paradigm, further
evidenced by the awkwardness of later
trying to determine the contents of the
set, given that its inventory is not
tracked in the TSM database (which would
represent too much overhead). You will
not see a directory structure for a
backupset.
Note that you can create the Backup Set
on the server as devtype File and then
FTP the result to the client, as perhaps
to burn a CD - but be sure to perform
the FTP in binary mode!
Backup Sets are not a DR substitute for
copy storage pools in that Backup Sets
hold only Active files, whereas copy
storage pools hold all files, Active and
Inactive.
There is no support in the TSM API for
the backup set format. Further, Backup
Sets are unsuitable for API-stored
objects (TDP backups, etc.) in that the
client APIs are not programmed to later
deal with Backup Sets, and so cannot
perform client-based restores with them.
Likewise, the standard Backup/Archive
clients do not handle API-generated
data.
See: Backup Set; GENerate BACKUPSET;
dsmc Query BACKUPSET;
dsmc REStore BACKUPSET; Query BACKUPSET;
Query BACKUPSETContents
Ref: TSM 3.7 Technical Guide redbook
Backup Set, amount of data Normal Backup Set queries report the
number of files, but not the amount of
data. You can determine the latter by
realizing that a Backup Set consists of
all the Active files in a file system,
and that is equivalent to the file
system size and percent utilized as
recorded at last backup, reportable via
Query FIlespace.
Backup Set, list contents Client: 'Query BACKUPSET'
Server: 'Query BACKUPSETContents'
See also: dsmc Query BACKUPSET
Backup set, on CD In writing Backup Sets to CDs you need
to account for the amount of data
exceeding the capacity of a CD...
Define a devclass of type FILE and set
the MAXCAPacity to under the size of the
CD capacity. This will cause the data to
span TSM volumes (FILEs), resulting in
each volume being on a separate CD.
Be mindful of the requirement:
The label on the media must meet the
following restrictions:
- No more than 11 characters
- Same name for file name and volume
label.
This might not be a problem for local
backupset restores, but is mandatory
for server backupsets over a devclass
with type REMOVABLEFILE. The creation
utility DirectCD creates a random CD
volume label beginning with the
creation date, which will not match
the TSM volume label.
Ref: Admin Ref; Admin Guide "Generating
Client Backup Sets on the Server" &
"Configuring Removable File Devices"
Backup set, remove from Volhistory A backup set which expires through
normal retention processing may leave
the volume in the volhistory. There is
an undocumented form of DELete
VOLHistory to get it out of there:
'DELete VOLHistory TODate=TODAY
[TOTime=hh:mm:ss] TYPE=BACKUPSET
VOLume=______ [FORCE=YES]'
Note that VOLume may be case-sensitive.
Backup Set and CLI vs. GUI In the beginning (early 2001), only the
CLI could deal with Backup Sets. The
GUI was later given that capability.
However: The GUI can be used only to
restore an entire backup set. The CLI is
more flexible, and can be used to
restore an entire backup set or
individual files within a backup set.
Backup Set and TDP The TDPs do not support backup sets -
because they use the TSM client API,
which does not support Backup Sets.
Backup Set and the client API The TSM client API does not support
Backup Sets.
Backup Set restoral performance Some specific considerations:
- A Backup Set may contain multiple
filespaces, and so getting to the data
you want within the composite may take
time. (Watch out: If you specify a
destination other than the original
location, data from all file spaces is
restored to the location you specify.)
- There is no table of contents for
backup sets: The entire tape or set
has to be read for each restore or
query - which explains why a Query
BACKUPSETContents is about as
time-consuming as an actual restoral.
See also "Restoral performance", as
general considerations apply.
Backup Set volumes not checked in SELECT COUNT(VOLUME_NAME) FROM
VOLHISTORY WHERE TYPE='BACKUPSET' AND
VOLUME_NAME NOT IN (SELECT VOLUME_NAME
FROM LIBVOLUMES)
Backup Sets, report SELECT VOLUME_NAME FROM VOLHISTORY
WHERE TYPE='BACKUPSET'
Backup Sets, report number SELECT COUNT(VOLUME_NAME) FROM
VOLHISTORY WHERE TYPE='BACKUPSET'
Backup skips some PC disks Possible causes:
(skipping) - Options file updated to add disk, but
scheduler process not restarted.
- Drive improperly labeled.
- Drive was relabeled since PC reboot or
since ADSM client was started.
- The permissions on the drive are
wrong.
- Drive attributes differ from those
of drives which *will* back up.
- Give ADSM full control to the root
on each drive (may have been run by
SYSTEM account, lacking root access).
- Msgmode is QUIET instead of VERBOSE,
so you see no messages if nothing goes
wrong.
- ADSM client code may be defective such
that it fails if the disk label is in
mixed case, rather than all upper or
lower.
Backup skips some Unix files An obvious cause for this occurring is
that the file matches an Exclude.
Another cause: The Unix client manual
advises that skipping can occur when the
LANG environment variable is set to C,
POSIX (limiting the valid characters to
those with ASCII codes less than 128),
or other values with limitations for
valid characters, and the file name
contains characters with ASCII codes
higher than 127.
Backup "stalled" Many ADSM customers complain that their
client backup is "stalled". In fact, it
is almost always the case that it is
processing, simply taking longer than
the person thinks. In traditional
incremental backups, the client must get
from the server a list of all files that
it has for the filespace, and then run
through its file system, comparing each
file against that list to see if it
warrants backup. That entails
considerable server database work,
network traffic, client CPU time, and
client I/O...which is aggravated by
overpopulated directories. Summary
advice: give it time.
BAckup STGpool *SM server operation to create a backup
copy of a storage pool in a Copy Storage
Pool (by definition on serial medium,
i.e., tape). Syntax:
'BAckup STGpool PrimaryPoolName
CopyPoolName [MAXPRocess=N]
[Preview=No|Yes|VOLumesonly]
[Wait=No|Yes]'
Note that storage pool backups are
incremental in nature so you only
produce copies of files that have not
already been copied. (It is incremental
in the sense of adding new objects to
the backup storage pool. It is not
exactly like a client incremental backup
operation: BAckup STGpool itself does
not cause objects to be identified as
deletable from the *SM database. It is
Expire Inventory that rids the backup
storage pool of obsolete objects.)
Order of backup: most recent data first,
then work back in time.
BAckup STGpool copies data: it does not
examine the data for issues...you need
to use AUDit Volume for that, optionally
using CRC data.
Only one backup may be started per
storage pool: attempting to start a
second results in error message "Backup
already active for pool ___".
MAXPRocess: Specify only as many as you
will have available mount points or
drives to service them (DEVclass
MOUNTLimit, less any drives already in
use or unavailable (Query DRive)). Each
process will select a node and copy all
the files for that node. Processes that
finish early will quit. The last
surviving process should be expected to
go on to other nodes' data in the
storage pool. If you don't actually get
that many processes, it could be due to
the number of mount points or there
being too few nodes represented in the
stgpool data. Elapsed time cannot be
less than the time to process the
largest client data set. Beware using
all the tape drives: migration is a
lower priority process and thus can be
stuck for hours waiting for BAckup
STGpool to end, which can result in
irate Archive users.
MAXPRocess and preemption: If you
invoked BAckup STGpool to use all drives
and a scheduled Backup DB started, the
Backup DB process would pre-empt one of
the BAckup STGpool processes to gain
access to a drive (msg ANR1440I): the
other BAckup STGpool processes continue
unaffected. (TSM will not reinitiate the
terminated process after the preempting
process has completed.)
Preview: Reveals the number of files and
bytes to be backed up and a list of the
primary storage pool volumes that would
be mounted.
You cannot backup a storage pool on one
computer architecture and restore it on
another: use Export/Import.
If a client is introducing files to a
primary storage pool while that pool is
being backed up to a copy storage pool,
the new files may get copied to the copy
storage pool, depending upon the
progress that the BAckup STGpool has
made.
Preemption: BAckup STGpool will wait
until needed tape drives are available:
it does not preempt Backups or HSM
Recalls or even Reclamation.
By using the ADSMv3 Virtual Volumes
capability, the output may be stored on
another ADSM server (electronic
vaulting - as archive type files).
Msgs: ANR1212I, ANR0986I (reports
process, number of files, and bytes),
ANR1214I (reports storage pool name,
number of files, and bytes), ANR1221E
(if insufficient space in copy storage
pool)
See also IBM site Technote 1155023.
See also: Aggregates
BAckup STGpool, estimate requirements Use the Preview option.
BAckup STGpool, how to stop If you need to stop the backup
prematurely, you can do one of:
- CANcel PRocess on each of its
processes. But: you need to know the
process numbers, and so can't, for
example, make the stop an
administrative schedule.
- UPDate STGpool ... ACCess=READOnly
This will conveniently cause all the
backup processes to stop after they
have finished with the file they are
currently working on. In the Activity
Log you will find message ANR1221E,
saying that the process terminated
because of insufficient space.
(Updating the storage pool back to
READWrite before a process stops will
prevent the process from stopping: it
has to transition to the next file for
it to see the READOnly status.)
BAckup STGpool, minimize time To minimize the time for the operation:
- Perform the operation when nothing
else is going on in ADSM;
- Maximize your TSM database Cache Hit
Pct. (standard tuning);
- Maximize the 'BAckup STGpool'
MAXPRocess number to:
The lesser of the number of tape
drives or nodes available when backing
up disk pools (which needs tape drives
only for the outputs);
The lesser of either half the number
of tape drives or the number of nodes
when backing up tape pools (which
needs tape drives for both input and
output).
- If you have an odd number of tape
drives during a tape pool backup, one
drive will likely end up with a tape
lingering in it after stgpool backup is
done with that tape, and ADSM's
rotational re-use of the drive will
have to wait for a dismount. So for
the duration of the storage pool
backup, consider having your DEVclass
MOUNTRetention value 1 to assure that
the drive is ready for the next mount.
- If you have plenty of tapes, consider
marking previous stgpool backup tapes
read-only such that ADSM will always
perform the backup to an empty tape
and so not have to take time to change
tapes when it fills last night's.
BAckup STGpool, order within hierarchy When performing a Backup Stgpool on a
storage pool hierarchy, it should be
done from the top of the hierarchy to
the bottom: you should not skip around
(as for example doing the third level,
then the first level, then the second).
Remember that files migrate downward in
the hierarchy, not upward. If you do
the Backup Stgpool in the same downward
order, you will guarantee not missing
files which may have migrated in between
storage pool backups.
BAckup STGpool taking too long Can be due to tapes whose media is
marginal, tough for the input tape drive
to read or the output tape drive to
write, causing lingering on a tape block
for some time, laboring until it
successfully completes the I/O - and may
not give any indication to the operating
system that it had to undertake this
extra effort and time.
To analyze: Observe via 'Query
Process', ostensibly seeing the Files
count repeatedly remaining constant as
a file of just modest size is copied.
But is it the input or output volume?
To determine, do 'UPDate Volume ______
ACCess=READOnly' on the output volume:
this will cause the BAckup STGpool to
switch to a new output volume. If
subsequent copying suffers no delay,
then the output tape was the problem;
else it was probably the input volume
that was troublesome. While the
operation proceeds, return the prior
output volume to READWrite state, which
will tend to cause it to be used for
output when the current output volume
fills, at which time a different input
volume is likely. If copying becomes
sluggish again, then certainly that
volume is the problem.
BAckup STGPOOLHierarchy There is no such command - but there
should be: The point of a storage pool
hierarchy is that if a file object is in
any storage pool within the hierarchy,
that is "there". In concert with this
concept, there should be a command which
generally backs up the hierarchy to
backup storage. The existing command,
BAckup STGpool is antithetical, in that
it addresses a physical subset of the
whole, logical hierarchy: it is both a
nuisance to have to invoke against each
primary storage pool in turn, and
problematic in that a file which moves
in the hierarchy might be missed by the
piecemeal backup.
Backup storage pool See also: Copy Storage Pool
Backup storage pool, disk? Beware using a disk as the 1st level of
(disk buffer for Backup) a backup storage pool hierarchy. TSM
storage hierarchy rules specify that if
a given file is too big to fit into the
(remaining) space of a storage pool,
that it should instead go directly down
to the next level (presumably, tape).
What can happen is that the disk storage
pool can get full because migration
cannot occur fast enough, and the backup
will instead try to go directly to tape,
which can result in the client session
getting hung up on a Media Wait (MediaW
status). Mitigation: Use MAXSize on the
disk storage pool, to keep large files
from using it up quickly. However, many
clients back up large files routinely,
so you end up with the old situation of
clients waiting for tape drives.
Another problem with using this kind of
disk buffering for Backups is that the
migration generates locks which
interfere with Backup, worse on a
multiprocessor system. If TSM is able to
migrate at all, it will be thrashing
trying to keep up, continually
re-examining the storage pool contents
to fulfill its migration rules of
largest file sizes and nodes. Lastly,
you have to be concerned that your
backup data may not all be on tape:
being on disk, it represents an
incomplete tape data set, and
jeopardizes recoverability of that
filespace, should the disk go bad.
See also: Backup through disk storage
pool
Backup success message "Successful incremental backup of
'FileSystemName'", which has no
message number.
Backup successful? You can check the 11th field of the
dsmaccnt.log.
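Extracting that field can be done with awk. A sketch, assuming the accounting log's comma-separated record format; the sample record written here is fabricated for illustration, not a real dsmaccnt.log entry:

```shell
# Pull field 11 from each comma-separated record of the TSM
# accounting log. The field's significance is as stated in the
# text; this sample record is purely illustrative.
printf '%s\n' 'f1,f2,f3,f4,f5,f6,f7,f8,f9,f10,f11,f12' \
  > /tmp/dsmaccnt.sample
awk -F, '{ print $11 }' /tmp/dsmaccnt.sample   # prints f11
```

Point awk at your actual dsmaccnt.log path in place of the sample file.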
BACKup SYSTEMObject See: dsmc BACKup SYSTEMObject
Backup table See: BACKUPS
Backup taking too long Sometimes it may seem that the backup
(seems like it "hangs" client is hung, but almost always it is
(hung, freezes, sluggish, slow)) active. To determine why it's taking as
long as it is, you need to take a close
look at the system and see if it or TSM
is really hung, or simply slow or
blocked.
Examination of the evolutionary context
of the client might show that the
number of files on it has been steadily
increasing, and with it the number in
TSM storage, making for an increasingly
burdensome inventory obtained from the
server during a dsmc Incremental.
amount of available CPU power and memory
at the time are principal factors: it
may be that the system's load has
evolved whereas its real memory has not,
and it needs more.
Use your opsys monitoring tools to
determine if the TSM client is actually
busy in terms of CPU time and I/O in
examination of the file system: the
backup may simply still be looking
for new files to send to server storage.
The monitor should show I/O and CPU
activity proceeding.
In the client log, look for the backup
lingering in a particular area of the
file system, which can indicate a bad
file or disk area, where a chkdsk or the
like may uncover a problem. You could
also try a comparative INCRBYDate type
backup and see if that does better,
which would indicate difficulty dealing
with the size of the inventory. TSM
Journaling may also be an option.
In some "cranky" OS environments
(NetWare), a locked file in the file
system may cause the backup to get stuck
at that point, due to poor handling by
the OS.
Consider doing client tracing to
identify where the time is concentrated.
(See "CLIENT TRACING" section at bottom
of this document.)
If not hung, then one or more of the
many performance affectors may be at
play.
See: Backup performance
Backup through disk storage pool It is traditional to back up directly to
(disk buffer) tape, but you can do it through a
storage pool hierarchy with a disk
storage pool ahead of tape.
Advantages:
- Immediacy: no waiting for tape mount.
- No queueing for limited tape drives
when collocation is in effect.
- 'BAckup STGpool' can be faster, to the
extent that the backup data is still
on disk, as opposed to a tape-to-tape
operation.
Disadvantages:
- ADSM server is busier, having to move
the data first to disk, then to tape
(with corresponding database updates).
- There can still be some delays for
tape mounts, as migration works to
drain the disk storage pool.
- Backup data tends to be on disk and
tape, rather than all on tape. (This
can be mitigated by setting migration
levels to 0% low and 0% high to force
all the data to tape.)
- A considerable amount of disk space is
dedicated to a transient operation.
- With some tape drive technology you
may get better throughput by going
directly to tape because the streaming
speed of some tape technology is by
nature faster than disk. With better
tape technology, the tape is always
positioned, ready for writing whereas
the rotating disk has to wait for its
spot to come around again. And, the
compression in tape drive hardware can
result in the effective write speed
exceeding even the streaming rate
spec.
- If the disk pool fills, incoming
clients will go into media wait and
will remain tape-destined even if the
disk pool empties.
- In *SM database restoral, part of that
procedure is to audit any disk storage
pool volumes; so a good-sized backup
storage pool on disk will add to that
time.
See also: Backup storage pool, disk?
Backup version An object, directory, or file space that
a user has backed up that resides in a
backup storage pool in ADSM storage.
The most recent is the "active" version;
older ones are "inactive" versions.
Versions are controlled in the Backup
Copy Group definition (see 'DEFine
COpygroup'). "VERExists" limits the
number of versions, with the excess
being deleted - regardless of the
RETExtra which would otherwise keep
them around. "VERDeleted" limits
versions kept of deleted files.
"RETExtra" is the retention period, in
days, for all but the latest backup
version. "RETOnly" is the retention
period, in days, for the sole remaining
backup version of a file deleted from
the client file system.
Note that individual backups cannot be
deleted from either the client or
server.
See Active Version and Inactive
Version.
Backup version, make unrecoverable First, optionally, move the file on
the client system to another directory.
Second, in the original directory
replace the file with a small stub of
junk. Third, do a selective backup of
the stub as many times as you have
'versions' set in the management class.
This will make any backups of the real
file unrestorable. Fourth, change the
options to stop backing up the real
file.
There is a way to "trick" ADSM into
deleting the backups:
Code an EXCLUDE statement for the
file, then perform an incremental
backup. This will cause existing
backup versions to be flagged for
deletion. Next, run EXPIre Inventory,
and voila! The versions will be
deleted.
Backup via Schedule, on NT Running backups on NT systems through
"NT services" can be problematic: If you
choose Logon As and assign it an
ADMIN ID with all the necessary
privileges you can think of, it still
may not work. Instead, double-click on
the ADSM scheduler and click on the
button to run the service as the local
System Account.
BAckup VOLHistory ADSM server command to back up the
volume history data to an opsys file.
Syntax:
'BAckup VOLHistory [Filenames=___]'
(No entry is written to the Activity Log
to indicate that this was performed.)
Note that you need not explicitly
execute this command if the
VOLumeHistory option is coded in the
server options file, in that the option
causes ADSM to automatically back up the
volume history whenever it does
something like a database backup.
However, ADSM does not automatically
back up the volume history if a
'DELete VOLHistory' is performed, so you
may want to manually invoke the backup
then.
See also: Backup Series; VOLUMEHistory
Backup MB, over last 24 hours SELECT SUM(BYTES)/1000/1000 AS
"MB_per_day" FROM SUMMARY WHERE
ACTIVITY='BACKUP' AND
(CURRENT_TIMESTAMP-END_TIME)HOURS <= 24
Backup vs. Archive, differences See "Archive vs. Selective Backup".
Backup vs. Migration, priorities Backups have priority over migration.
Backup without expiration Use INCRBYDate (q.v).
Backup without rebinding In AIX, accomplish by remounting the
file system on a special mount point
name; or, on a PC, change the volume
name/label of the hard drive. Then back
up with a different, special management
class. This will cause a full backup
and create a new filespace name.
Another approach would be to do the
rename on the other end: rename the ADSM
filespace and then back up with the
usual management class, which will cause
a full backup to occur and regenerate
the former filespace afresh.
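The AIX remount approach might look like the following sketch (the
logical volume name and mount points are hypothetical, and the
special management class is assumed to be bound via INCLUDE rules
for the new path):

```shell
# Mount the same file system on a special mount point name...
mkdir /data.special
mount /dev/datalv /data.special
# ...then back it up: TSM sees a new filespace name and performs
# a full backup without rebinding the original filespace:
dsmc incremental /data.special
```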
Backup won't happen See: Backup skips some PC disks
BACKUP_DIR Part of Tivoli Data Protection for
Oracle. Should be listed in your
tdpo.opt file. It specifies the client
directory which will be used for storing
the files on your server. If you list
the filespaces created for that node on
the server after a successful backup, you
will see one filespace with the same
name as your BACKUP_DIR.
Backup-archive client A program that runs on a file server,
PC, or workstation that provides a means
for ADSM users to back up, archive,
restore, and retrieve objects.
Contrast with application client and
administrative client.
BackupDomainList The title under which DOMain-named file
systems appear in the output of the
client command 'Query Options'.
BackupExec Veritas Backup Exec product. A dubious
aspect is the handling of open files,
per a selectable option: it copies a
'stub' to tape, allowing it to skip
the file. Apparently, most of the time
when you restore the file, it's either a
null file or a partial copy of the
original, either way being useless.
http://www.BackupExec.com/
BACKUPFULL In 'Query VOLHistory' or 'DSMSERV
DISPlay DBBackupvolumes' or VOLHISTORY
database TYPE output, this is the Volume
Type to say that volume was used for a
full backup of the database.
BACKUPINCR In 'Query VOLHistory' or VOLHISTORY
database TYPE output, this is the Volume
Type to say that volume was used for an
incremental backup of the database.
BACKUPRegistry Option for NT systems only, to specify
whether ADSM should back up the NT
Registry during incremental backups.
Specify: Yes or No
Default: Yes
The Registry backup works by using an
NT API function to write the contents of
the Registry into the adsm.sys
directory. (The documentation has
erroneously been suggesting that the
system32\config Registry area should be
Excluded from the backup: it should
not). The files written have the same
layout as the native registry files in
\winnt\system32\config.
You can back up just the Registry with
the BACKup Registry command.
In Windows 2000 and beyond, you can use
the DOMain option to control the backup
of system objects.
Ref: redbook "Windows NT Backup and
Recovery with ADSM" (SG24-2231): topic
4.1.2.1 Registry Backup
BACKUPS SQL: TSM database table containing info
about all active and inactive files
backed up. Along with ARCHIVES and
CONTENTS, constitutes the bulk of the
*SM database contents. Columns:
NODE_NAME, FILESPACE_NAME, STATE
(active, inactive), TYPE, HL_NAME,
LL_NAME, OBJECT_ID, BACKUP_DATE,
DEACTIVATE_DATE, OWNER, CLASS_NAME.
Notes: Does not contain information
about file sizes or the volumes which
the objects are on (see the Contents
table). The OBJECT_ID uniquely
identifies this file among all its
versions. However, there is no
corresponding ID in the CONTENTS table
such that you could get the containing
volume name from it. (There is only the
undocumented SHow BFO command.)
In a Select, you can do
CONCAT(HL_NAME, LL_NAME) to stick those
two components together, to make the
output more familiar; or concatenate the
whole path by doing: SELECT
FILESPACE_NAME || HL_NAME || LL_NAME
FROM BACKUPS.
See: CONTENTS; DEACTIVATE_DATE; HL_NAME;
LL_NAME; OWNER; STATE; TYPE
Backups, count of bytes received Use the Summary table, available in TSM
3.7+, like:
SELECT SUM(BYTES) AS Sum_Bytes -
FROM ADSM.SUMMARY -
WHERE (DATE(END_TIME) = CURRENT DATE \
- 1 DAYS AND TIME(END_TIME) >= \
'20.00.00') OR (DATE(END_TIME) = \
CURRENT DATE) AND ACTIVITY = 'BACKUP'
See also: Summary table
Backups, parallelize Going to a disk pool first is one way;
then the data migrates to tape.
To go directly to tape: You may need to
define your STGpool with
COLlocation=FILespace to achieve such
results; else *SM will try to fill one
tape at a time, making all other
processes wait for access to the tape.
Further subdivision is afforded via
VIRTUALMountpoint. (Subdivide and
conquer.)
That may not be a good solution where
what you are backing up is not a file
system, but a commercial database backup
via agent, or a buta backup, where each
backup creates a separate filespace. In
such situations you can use the approach
of separate management classes, so as to
have separate storage pools, but still
using the same library and tape pool.
If you have COLlocation=Yes (node) and
need to force parallelization during a
backup session, you can momentarily
toggle the single, current output tape
from READWrite to READOnly to incite *SM
to have multiple output tapes.
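That momentary access-mode toggle might be issued as follows (a
sketch; VOL001 and the admin credentials are placeholders):

```shell
# Force TSM to open an additional output tape...
dsmadmc -id=ADMINID -pa=ADMINPW "UPDate Volume VOL001 ACCess=READOnly"
# ...then restore the original access mode afterward:
dsmadmc -id=ADMINID -pa=ADMINPW "UPDate Volume VOL001 ACCess=READWrite"
```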
Backups, prevent There are times when you want to prevent
backups from occurring, as when a
restoral is running and fresh backups of
the same file system would create
version confusion in the restoral
process, or where client nodes tend to
inappropriately use the TSM client
during the day, as in kicking off
Backups at times when drives are needed
for other scheduled tasks. You can
prevent backups in several ways:
In the *SM server:
- LOCK Node, which prevents all access
from the client - and which may be too
extreme.
- 'UPDate Node ... MAXNUMMP=0', to be in
effect during the day, to prevent
Backup and Archive, but allow Restore
and Retrieve.
In the *SM client:
- In the Include-Exclude list, code
EXCLUDE.FS <FileSystemName>
for each file system.
In general:
- If the backups are performed via
client schedule: Unfortunately, client
schedules lack the ACTIVE= keyword
such that we can render them inactive.
Instead, you can do a temporary
DELete ASSOCiation to divorce the node
from the backup schedule.
- If the backups are being performed
independently by the client: Do
DISAble SESSions after the restoral
starts, to allow it to proceed but
prevent further client sessions.
Or you might do UPDate STGpool ...
ACCess=READOnly, which would certainly
prevent backups from proceeding.
See also: "Restorals, prevent" for
another approach
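The server-side options above might be issued like this (a sketch;
NODE1, the domain and schedule names, and the admin credentials are
placeholders):

```shell
# Prevent Backup and Archive but still allow Restore and Retrieve:
dsmadmc -id=ADMINID -pa=ADMINPW "UPDate Node NODE1 MAXNUMMP=0"
# Temporarily divorce the node from its backup schedule:
dsmadmc -id=ADMINID -pa=ADMINPW "DELete ASSOCiation STANDARD NIGHTLY NODE1"
# Stop all further client sessions while a restoral runs:
dsmadmc -id=ADMINID -pa=ADMINPW "DISAble SESSions"
```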
Backups go directly to tape, not disk Some shops have their backups first go
as intended to a disk storage pool, with migration
to tape. But they may find backups going
directly to tape. Possible causes:
- The file exceeds the STGpool MAXSize.
- The file exceeds the physical storage
pool size.
- The backup occurred choosing a
management class which goes to tape.
- Maybe only some of the data is going
directly to tape: the directories.
Remember that *SM by default stores
directories under the Management
Class with the longest retention,
modifiable via DIRMc.
- Your storage pool hierarchy was
changed by someone.
- See also "ANS1329S" discussion about
COMPRESSAlways effects.
- Your client (perhaps DB2 backup) may
be overestimating the size of the
object being backed up.
- Um, the stgpool Access mode is
Read/Write, yes?
A good thing to check: Do a short Select
* From Backups... to examine some of
those files, and see what they are
actually using for a Management Class.
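That check might look like the following sketch (NODE1 and the
filespace name are placeholders; note that unrestricted Selects
against the large BACKUPS table can run long):

```shell
dsmadmc -id=ADMINID -pa=ADMINPW \
  "SELECT CLASS_NAME, COUNT(*) FROM BACKUPS WHERE \
   NODE_NAME='NODE1' AND FILESPACE_NAME='/home' GROUP BY CLASS_NAME"
```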
Backups without expiration Use INCRBYDate (q.v).
Backupset See: Backup Set
baclient Shorthand for Backup-Archive Client.
bak DFS command to start the backup and
restore operations that direct them to
buta.
See also: buta; butc; DFS
bakserver BackUp Server: DFS program to manage
info in its database, serving recording
and query operations.
See also "buserver" of AFS.
Barcode See CHECKLabel
Barcode, examine tape to assure that 'mtlib -l /dev/lmcp0 -a -V VolName'
it is physically in library Causes the robot to move to the tape and
scan its barcode.
'mtlib -l /dev/lmcp0 -a -L FileName'
can be used to examine tapes en masse, by
taking the first volser on each line of
the file.
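Checking a batch of volumes that way might look like this sketch
(the volsers are hypothetical):

```shell
# One volser at the start of each line:
cat > /tmp/volsers <<EOF
A00001
A00002
A00003
EOF
mtlib -l /dev/lmcp0 -a -L /tmp/volsers
```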
Bare Metal Restore (BMR) Grudgingly performed by TSM, if at all:
is basically left to 3rd party providers
such as The Kernel Group (see
www.tkg.com/products.html).
Redbook: "ADSM Client Disaster Recovery:
Bare Metal Restore" (SG24-4880)
Users group: TSM AIX Bare Metal Restore
Special interest group. Subscribe by
sending email to
TSMAIXBMR-subscribe@yahoogroups.com or
via the yahoogroups web interface at
http://www.yahoogroups.com
See also: BMR
Bare Metal Restore, Windows? BMR of Windows is highly problematic,
due to the Registry orientation of the
operating system and hardware
dependencies. I.e., don't expect it to
work. As one customer put it: "Windows
is the least transportable and least
modular OS ever."
On the IBM website is helpful article
"Modified instructions for complete
Restores of Windows Systems".
Batch mode Start an "administrative client session"
to issue a single server command or
macro, via the command:
'dsmadmc -id=YOURID -pa=YOURPW CMDNAME',
as described in the ADSM Administrator's
Reference.
BCV EMC disk: Business Continuance Volumes.
BEGin EVentlogging Server command to begin logging events
to one or more receivers. A receiver for
which event logging has begun is an
active receiver. When the server is
started, event logging automatically
begins for the console and activity log
and for any receivers that are started
automatically based on entries in the
server options file. You can use this
command to begin logging events to
receivers for which event logging is not
automatically started at server
startup. You can also use this command
after you have disabled event logging to
one or more receivers. Syntax:
'BEGin EVentlogging [ALL|CONSOLE|ACTLOG
|EVENTSERVER|FILE|FILETEXT|SNMP
|TIVOLI|USEREXIT]'
See: User exit
Benchmark Surprisingly, many sites simply buy
hardware and start using it, and then
maybe wonder if it is providing its full
performance potential. What should
happen is that the selection of hardware
should be based upon performance
specifications published by the vendor;
then, once it is made operational at the
customer site, the customer should
conduct tests to measure and record its
actual performance, under ideal
conditions. That is a benchmark. Going
through this process gives you a basis
for accepting or rejecting the new
facilities and, if you accept them, you
have a basis for later comparing daily
performance to know when problems or
capacity issues are occurring.
.BFS File name extension created by the
server for FILE devtype scratch volumes
which contain client data.
Ref: Admin Guide, Defining and Updating
FILE Device Classes
See also: .DBB; .DMP; .EXP; FILE
Billing products Chargeback/TSM, an optional plugin to
Servergraph/TSM (www.servergraph.com).
Bindery A database that consists of three system
files for a NetWare 3.11 server. The
files contain user IDs and user
restrictions. The Bindery is the first
thing that ADSM backs up during an
Incremental Backup. ADSM issues a Close
to the Bindery, followed by anOpen
(about 2 seconds later). This causes
the Bindery to be written to disk, so
that it can be backed up.
Binding The process of associating an object
with a management class name, and hence
a set of rules.
See "Files, binding to management class"
Bit Vector Database concept for efficiently storing
sparse data. Database records usually
consist of multiple fields. In some db
applications, only a few of the fields
may have data: if you simply allocate
space for all possible fields in
database records, you will end up with a
lot of empty space inflating your db.
To save space you can instead use a
prefacing sequence of bits in each
database record which, left to right,
correspond to the data fields in the db
record, and in the db record you
allocate space only for the data fields
which contain data for this record. If
the bit's value is zero, it means that
the field had no data and does not
participate in this record. If the bit's
value is one, it means that the field
does participate in the record and its
value can be found in the db record, in
the position relative to the other "one"
values.
Example: A university database is
defined with records consisting of four
fields: Person name, College, Campus
address, Campus phone number. But not
all students or staff members reside on
campus, so allocating space for the last
two fields would be wasteful. In the
case of staff member John Doe, the last
three fields are unnecessary, and so his
database record would have a bit vector
value of 1000, meaning that only his
name appears in the database record.
Bitfile Internal terminology denoting an
Aggregate. Sometimes seen like
"0.29131728", which is notation
specifying an OBJECT_ID HIGH portion (0)
and an OBJECT_ID LOW portion (29131728).
(OBJECT_ID appears in the Archives and
Backups database tables.)
Note that in the BACKUPS table, the
OBJECT_ID is just the low portion.
See also: OBJECT_ID
Bkup Backup file type, in Query CONtent
report. Other types: Arch, SpMg
Blksize See: Block size used for removable media
Block size used for removable media *SM sets the block size of all its
(tape, optical disc) blksize tape/optical devices internally.
Setting it in smit has no effect,
except for tar, dd, and any other
applications that do not set it
themselves.
ADSM uses variable blocking on all
tapes, ie. blocksize is 0. Generally
however, for 3590 it will attempt to
write out a full 256K block, which is
the largest allowed blocksize with
variable blocking. Some blocks,
eg. the last block in a series, will
be shorter.
AIX: use 'lsattr -E -l rmt1' to verify.
DLT: ADSMv3 sets blksize to 256KB.
Ref: IBM site Technote 1167281
Blurred files General backup ramifications term
derived from photography, where
imaging a moving object results in its
image being indistinct. If a file is
being updated as it is being backed up,
that imaging is "blurred".
BMR Bare Metal Restore.
The Kernel Group has a product of that
name. However, as of 2001/02 TKG has not
been committing the resources required
to develop the product, given the lack
of SSA disk, raw volume support, and
Windows 2000. URL:
http://www.tkg.com/products.html
See also: Bare Metal Restore
BOOKS Old ADSM Client User Options file
(dsm.opt) option for making the ADSM
online publications available through
the ADSM GUI's Help menu, View Books
item. The option specifies the command
to invoke, which in Unix would be
'dtext'.
Books, online, installing Follow the instructions contained in the
booklet which accompanies the Online
Product Library CD-ROM.
Books, online, storage location Located in /usr/ebt/adsm/
More specifically: /usr/ebt/adsm/books
Books, online, using If under the ADSM GUI: Click on the Help
menu, View Books item.
From the Unix prompt: 'dtext', which
invokes the DynaText hypertext browser:
/usr/bin/dtext -> /usr/ebt/bin/dtext.
Books component product name "adsmbook.obj"
As in 'lslpp -l adsmbook.obj'.
BOT A Beginning Of Tape tape mark.
See also: EOT
BPX-Tcp/Ip The OpenEdition sockets API is used by
the Tivoli Storage Manager for MVS 3.7
when the server is running under OS/390
R5 or greater. Therefore, "BPX-Tcp/Ip"
is displayed when the server is using
the OpenEdition sockets API (callable
service). "BPX" are the first three
characters of the names of the API
functions that are being used by the
server.
Braces See: {}; File space, explicit
specification
BRMS AS/400 (iSeries) Backup Recovery and
Media Services, a fully automated
backup, recovery, and media management
strategy used with OS/400 on the iSeries
server. The iSeries TSM client is referred
to as the BRMS Application Client to
TSM. The BRMS Application Client
function is based on a unique
implementation of the TSM Application
Programming Interface (API) and does not
provide functions typically available
with TSM Backup/Archive clients. The
solution it integrated into BRMS and has
a native iSeries look and feel. There is
no TSM command line or GUI interfaces.
The BRMS Application client is not a
Tivoli Backup/Archive client nor a
Tivoli Data Protection Client. You can
use BRMS to save low-volume user data on
distributed iSeries systems to any
Tivoli Storage Manager (TSM) server.
You can do this by using a BRMS
component called the BRMS Application
Client, which is provided with the base
BRMS product. The BRMS Application
Client has the look and feel of BRMS and
iSeries. It is not a TSM Backup or
Archive client. There is little
difference in the way BRMS saves objects
to TSM servers and the way it saves
objects to media. A TSM server is just
another device that BRMS uses for your
save and restore operations.
BRMS backups can span volumes.
There is reportedly a well-known
throughput bottleneck with BRMS.
(600Kb/s is actually quite a respectable
figure for BRMS.)
Ref: In IBM webspace you can search
for "TSM frequently asked questions" and
"TSM tips and techniques" which talk of
BRMS in relation to TSM.
BSAInit Initialization function in the X/Open
(XBSA) version of the TSM API.
Common error codes (accompanied by
dsierror.log messages):
96 Option file not found. Either
employ the DSMI_CONFIG environment
variable to point to it, or
establish a link from the API
directory to the prevailing options
file.
BU Seldom used abbreviation for backup.
Buffer pool statistics, reset 'RESet BUFPool'
BUFFPoolsize You mean: BUFPoolsize
BUFPoolsize Definition in the server options file.
Specifies the size of the database
buffer pool in memory, in KBytes
(i.e. 8192 = 8192 KB = 8 MB).
A larger buffer pool can keep more
database pages in the memory cache and
lessen I/O to the database.
As the ADSM (3.1) Performance Tuning
Guide advised: While increasing
BUFPoolsize, care must be taken not to
cause paging in the virtual memory
system. Monitor system memory usage to
check for any increased paging after the
BUFPoolsize change. (Use the
'RESet BUFPool' command to reset the
statistics.)
Note that a TSM server, like servers of
all kinds, benefits from the host system
having abundant real memory. Skimping is
counter-productive.
The minimum value is 256 KB; the maximum
value is limited only by available
virtual memory.
Evaluate performance by looking at
'Query DB F=D' output Cache values. A
"Cache Hit Pct." of 98% is a reasonable
target.
Default: 512 (KB)
To change the value, either directly
edit the server options file and restart
the server, or use SETOPT BUFPoolsize
and perform a RESet BUFPool.
You can have the server tune the value
itself via the SELFTUNEBUFpoolsize
option.
Ref: Installing the Server
See also: SETOPT BUFPoolsize;
LOGPoolsize; RESet BUFPool;
SELFTUNEBUFpoolsize
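The change-and-verify cycle described above might look like this
sketch (262144 KB = 256 MB is just an illustrative value):

```shell
# Raise the buffer pool without editing dsmserv.opt and restarting:
dsmadmc -id=ADMINID -pa=ADMINPW "SETOPT BUFPoolsize 262144"
# Reset the statistics so the hit ratio reflects the new size...
dsmadmc -id=ADMINID -pa=ADMINPW "RESet BUFPool"
# ...then, after a day of activity, check "Cache Hit Pct.":
dsmadmc -id=ADMINID -pa=ADMINPW "Query DB Format=Detailed"
```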
BUFPoolsize server option, query 'Query OPTion'
Bulk Eject category 3494 Library Manager category code FF11
for a tape volume to be deposited in the
High-Capacity Output Facility. After
the volume has been so deposited its
volser is deleted from the inventory.
bus_domination Attribute for tape drives on a SCSI bus.
Should be set "Yes" only if the drive is
the only device on the bus.
buserver BackUp Server: AFS program to manage
info in its database, serving recording
and query operations.
See also "bakserver" of DFS.
Busy file See: Changed
buta (AFS) (Back Up To ADSM) is an ADSM API
application which replaces the AFS butc.
The "buta" programs are the ADSM agent
programs that work with the native AFS
volume backup system and send the data
to ADSM. (The AFS buta and DFS buta are
two similar but independent programs.)
The buta tools only backup/restore at
the volume level, so to get a single
file you have to restore the volume to
another location and then grovel for the
file. This is why ADSM's AFS facilities
are preferred.
The "buta" backup style provides AFS
disaster recovery. All of the necessary
data is stored to restore AFS partitions
to an AFS server, in the event of loss
of a disk or server. It does not allow
AFS users to backup and restore AFS
data, per the ADSM backup model. All
backup and restore operations require
operator intervention. ADSM management
classes do not control file retention
and expiration for the AFS files data.
Locking: The AFS volume is locked in the
buta backup, but you should be backing
up clone volumes, not the actuals.
There is a paper published in the
Decorum 97 Proceedings (from Transarc)
describing the buta approach.
As of AFS 3.6, butc itself supports
backups to TSM, via XBSA (q.v.), meaning
that buta will no longer be necessary.
License: Its name is
"Open Systems Environment", as per
/usr/lpp/adsm/bin/README.
The file backup client is installable
from the adsm.afs.client installation
file, and the DFS fileset backup agent
is installable from adsm.butaafs.client.
Executables: /usr/afs/buta/.
See publication "AFS/DFS Backup
Clients", SH26-4048 and
http://www.storage.ibm.com/software/
adsm/adafsdfs.htm .
There's a white paper available at:
http://www.storage.ibm.com/software/
adsm/adwhdfs.htm
Compare buta with "dsm.afs".
See also: bak; XBSA
buta (DFS) (Back Up To ADSM) is an ADSM API
application which replaces the AFS butc.
The "buta" programs are the ADSM agent
programs that work with the native AFS
fileset backup system and send the data
to ADSM. (The AFS buta and DFS buta are
two similar but independent programs.)
The buta tools only backup/restore at
the fileset level, so to get a single
file you have to restore the fileset to
another location and then grovel for the
file. This is why ADSM's AFS facilities
are preferred.
Each dumped fileset (incremental or
full) is sent to the ADSM server as a
file whose name is the same as that of
the fileset. The fileset dump files
associated with a dump are stored within
a single file space on the ADSM server,
and the name of the file space is the
dump-id string.
The "buta" backup style provides DFS
disaster recovery. All of the necessary
data is stored to restore DFS aggregates
to an DFS server, in the event of loss
of a disk or server. It does not allow
DFS users to backup and restore DFS
data, per the ADSM backup model. All
backup and restore operations require
operator intervention. ADSM management
classes do not control file retention
and expiration for the DFS files data.
Locking: The DFS fileset is locked in
the buta backup, but you should be
backing up clone filesets, not the
actuals.
License: Its name is
"Open Systems Environment", as per
/usr/lpp/adsm/bin/README.
The file backup client is installable
from the adsm.dfs.client installation
file, and the DFS fileset backup agent
is installable from adsm.butadfs.client.
Executables: in /var/dce/dfs/buta/ .
See publication "AFS/DFS Backup
Clients", SH26-4048 and
http://www.storage.ibm.com/software/
adsm/adafsdfs.htm .
There's a white paper available at:
http://www.storage.ibm.com/software/
adsm/adwhdfs.htm
Compare buta with "dsm.dfs".
See also: bak
butc (AFS) Back Up Tape Coordinator: AFS volume
dumps and restores are performed through
this program, which reads and writes an
attached tape device and then interacts
with the buserver to record them.
Butc is replaced by buta to instead
perform the backups to ADSM.
As of AFS 3.6, butc itself supports
backups to TSM through XBSA (q.v.),
meaning that buta will no longer be
necessary.
See also: bak
butc (DFS) Back Up Tape Coordinator: DFS fileset
dumps and restores are performed through
this program, which reads and writes an
attached tape device and then interacts
with the buserver to record them.
Butc is replaced by buta to instead
perform the backups to ADSM.
See also: bak
bydate You mean -INCRBYDate (q.v.).

C: vs C:\* specification C: refers to the entire drive, while
C:\* refers to all files in the root of
C: (and subdirectories as well if
-SUBDIR=YES is specified). A C:\*
backup will not cause the Registry
System Objects to be backed up, whereas
a C: backup will.
Cache (storage pool) When files are migrated from disk
storage pools, duplicate copies of the
files may remain in disk storage
("cached") as long as TSM can afford
the space, thus making for faster
retrieval. As such, this is *not* a
write-through cache: the caching only
begins once the storage pool HIghmig
value is exceeded.
TSM will delete the cached disk files
only when space is needed. This is why
the Pct Util value in a 'Query Volume'
or 'Query STGpool' report can look much
higher than its defined "High Mig%"
threshold value (Pct Util will always
hover around 99% with Cache activated).
Define HIghmig lower to assure the
disk-stored files also being on tape,
but at the expense of more tape action.
When caching is in effect, the best way
to get a sense of "real" storage pool
utilization is via 'Query OCCupancy'.
Note that the storage pool LOwmig value
is effectively overridden to 0 when
CAChe is in effect, because once
migration starts, TSM wants to assure
that everything is cached. You might as
well define LOwmig as 0 to avoid
confusion in this situation.
Performance penalties: Requires
additional database space and updating
thereof. Can also result in disk
fragmentation due to lingering files.
Is best used for the disks which may be
part of Archive and HSM storage pools,
because of the likelihood of retrievals;
but avoid use with disks leading a
backup storage pool hierarchy, because
such disks serve as buffers and so
caching would be a waste of overhead.
With caching, the storage pool Pct Migr
value does not include cached data.
See also the description of message
ANR0534W.
CAChe Disk stgpool parameter to say whether or
not caching is in effect. Note that if
you had operated CAChe=Yes and then turn
it off, turning it off doesn't clear the
cached files from the diskpool - you
need to also do one of the following:
- Fill the diskpool to 100%, which will
cause the cached versions to be
released to make room for the new
files;
or
- Migrate down to 0, then do MOVe Data
commands on all the disk volumes,
which will free the cached images.
Cache Hit Pct. Element of 'Query DB F=D' report,
reflecting server database performance.
(Also revealed by 'SHow BUFStats'.)
The value should be up around 98%.
(You should periodically do
'RESet BUFPool' to reset the statistics
counts to assure valid values,
particularly if the "Total Buffer
Requests" from Query DB is negative
(counter overflow).) If the Cache Hit
Pct. value is significantly less, then
the server is being substantially slowed
in having to perform database disk I/O
to service lookup requests, which will
be most noticeable in degrading backups
being performed by multiple clients
simultaneously.
Your ability to realize a high value in
this cache is affected by the same
factors as any other cache: The more,
new entries in the cache - as from lots
of client backups - the less likely it
may be that any of those resident in the
cache may serve a future reference, and
so the lookup has to go all the way back
to the disk-based database, meaning a
"cache miss". It's all probability, and
the inability to predict the future.
Increase BUFPoolsize in dsmserv.opt .
Note: You can have a high Cache Hit Pct.
and yet performance still suffering if
you skimp on real memory in your server
system, because all modern operating
systems use virtual memory, and in a
shortage of real memory, much of what
had been in real memory will instead be
out on the backing store, necessitating
I/O to get it back in, which entails
substantial delay.
See topic "TSM Tuning Considerations" at
the bottom of this document.
See also: RESet BUFPool
Cache Wait Pct. Element of 'Query DB F=D' report.
Specifies, as a percentage, the number
of requests for a database buffer pool
page that was unavailable (because all
database buffer pool pages are
occupied).
You want the number to be 0.0. If
greater, increase the size of the buffer
pool with the BUFPoolsize option. You
can reset this value with the
'RESet BUFPool' command.
Caching, turn off 'UPDate STGpool PoolName CAChe=No'
If you turn caching off, there's no
reason for ADSM to suddenly remove the
cache images and lose the investment
already made: that stuff is residual,
and will go away as space is needed.
CAD See: Client Acceptor Daemon
Calibration Sensor 3494 robotic tape library sensor:
In addition to the bar code reader, the
3494 accessor contains another, more
primitive vision system, based upon
infrared rather than laser: it is the
Calibration Sensor, located in the top
right side of the picker. This sensor
is used during Teach, bouncing its light
off the white, rectangular reflective
pads (called Fiducials) which are stuck
onto various surfaces inside the 3494.
This gives the robot its first actual
sensing of where things actually are
inside.
CANcel EXPIration TSM server command to cancel an
expiration process if there is one
currently running. This does NOT require
the process ID to be specified, and so
this command can be scheduled using the
server administrative command scheduling
utility to help manage expiration
processing and the time it consumes.
TSM will record the point where it
stopped, in the TSM database: this sets
a restart checkpoint for the next time
expiration is run such that it will
resume from where it left off. As such,
this may be preferable to CANcel
PRocess.
This restartability was introduced by
ADSMv3 APAR IY00629, in response to
issues with long-running Expirations.
Msgs: ANR0813I when stopped by CANcel
PRocess
See also: Expiration, stop
CANcel PRocess TSM server command to cancel a
background process. Syntax:
'CANcel PRocess Process_Number'
Notes: Processes waiting on resources
won't cancel until they can get that
resource - at which point they will go
away. For example, a Backup Stgpool
process which is having trouble reading
or writing a tape, and is consumed with
retrying the I/O, cannot be immediately
cancelled. When a process is canceled,
it often has to wait for lock requests
to clear prior to going away: SHOW LOCKS
may be used to inspect.
CANcel REQuest *SM server command to cancel pending
mount requests. Syntax:
'CANcel REQuest [requestnum|ALl]
[PERManent]'
where PERManent causes the volume status
to be marked Unavailable, which prevents
further mounts of that tape.
CANcel RESTore ADSMv3 server command to cancel a
Restartable Restore operation. Syntax:
'CANcel RESTore Session_Number|ALl'
See also: dsmc CANcel Restore;
Query RESTore
CANcel SEssion To cancel an administrative or client
session. Syntax:
'CANcel SEssion [SessionNum|ALl]'
A client conducting a dsm session will
get an alert box saying "Stopped by
user", though it was actually the server
which stopped it.
A client conducting a dsmc session will
log msg ANS1369E and usually quit.
An administrative session which is
canceled gets regenerated...
adsm> cancel se 4706
ANS5658E TCP/IP failure.
ANS5102I Return code -50.
ANS5787E Communication timeout.
Reissue the command.
ANS5100I Session established...
ANS5102I Return code -50.
SELECT command sessions are a problem:
depending on complexity of the query it
quite possible for the server to hang,
and Tivoli has stated that the Cancel
may not be able to cancel the Select,
such that halting and restarting the
server is the only way out of that
situation.
Ref: Admin Guide, Monitoring the TSM
Server, Using SQL to Query the TSM
Database, Issuing SELECT Commands.
Msgs: ANS1369E, ANS4017E
See also: THROUGHPUTTimethreshold;
THROUGHPUTDatathreshold
Candidates A file in the .SpaceMan directory of an
HSM-managed file system, listing
migration candidates (q.v.). The fields
on each line:
1. Migration Priority number, which
dsmreconcile computes based upon file
size and last access.
2. Size of file, in bytes.
3. Timestamp of last file access
(atime), in seconds since 1970.
4. Rest of pathname in file system.
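The four fields above can be parsed mechanically; the following is an illustrative Python sketch only, assuming whitespace-separated fields in the order listed (the exact Candidates file layout may vary by HSM level):

```python
# Illustrative sketch: parse HSM .SpaceMan/Candidates lines into records.
# Assumed field order, per the description above: migration priority,
# size in bytes, atime in epoch seconds, then the pathname (which may
# itself contain spaces, hence maxsplit=3).

def parse_candidates(lines):
    """Return candidate records as dicts, highest migration priority first."""
    records = []
    for line in lines:
        line = line.rstrip("\n")
        if not line.strip():
            continue
        prio, size, atime, path = line.split(None, 3)
        records.append({
            "priority": int(prio),
            "size": int(size),
            "atime": int(atime),
            "path": path,
        })
    records.sort(key=lambda r: r["priority"], reverse=True)
    return records
```

Such a sketch can be handy for eyeballing which files dsmreconcile has ranked as the most eligible migration candidates.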
Capacity Column in 'Query FIlespace' server
command output, which reflects the size
of the object as it exists on the
client. Note that this does *not*
reflect the space occupied in ADSM.
See also: Pct Util
Cartridge devtype, considerations When using a devclass with
DEVType=Cartridge, 3590 devices can only
read. This is to allow customers who
used 3591's (3590 devices with the A01
controller) to read those tapes with a
3590 (3590 devices with the A00
controller). The 3591 device emulates
a 3490, and uses the Cartridge devtype.
3590's use the 3590 devtype. You can do
a Help Define Devclass, or check the
readme for information on defining a
3590 devclass, but it is basically the
same as Cartridge, with a DEVType=3590.
The 3591 devices exist on MVS and VM
only, so the compatibility mode is only
valid on these platforms. On all other
platforms, you can only use a 3590 with
the 3590 devtype.
Cartridge System Tape (CST) A designation for the base 3490
cartridge technology, which reads and
writes 18 tracks on half-inch tape.
Sometimes referred to as MEDIA1.
Contrast with ECCST and HPCT.
See also: ECCST; HPCT; Media Type
CAST SQL: To alter the data representation in
a query operation:
CAST(Column_Name AS ___)
See: TIMESTAMP
Categories See: Volume Categories
Category code, search for volumes 'mtlib -l /dev/lmcp0 -qC -s ____'
will report only volumes having the
specified category code.
Category code control point Category codes are controlled at the
ADSM LIBRary level.
Category code of one tape in library, Via Unix command:
list 'mtlib -l /dev/lmcp0 -vqV -V VolName'
In TSM: 'Query LIBVolume LibName
VolName'
indirectly shows the Category Code in
the Status value, which you can then see
in numerical terms by doing
'Query LIBRary [LibName]'.
Category code of one tape in library, Via Unix command:
set 'mtlib -l /dev/lmcp0 -vC -V VolName
-t Hexadecimal_New_Category'
(Does not involve a tape mount.)
No ADSM command performs this
function, nor does the 3494 control
panel provide a means for doing it.
By virtue of doing this outside of ADSM,
you should do 'AUDit LIBRary LibName'
afterward for each ADSM-defined library
name affected, so that ADSM sees and
registers the change.
In TSM: 'UPDate LIBVolume LibName
VolName STATus=[PRIvate|SCRatch]'
indirectly changes the Category Code to
the Status value reflected in
'Query LIBRary [LibName]'.
Category Codes Ref: Redbook "IBM Magstar Tape Products
Family: A Practical Guide" (SG24-4632),
Appendix A
Category codes of all tapes in Use AIX command:
library, list 'mtlib -l /dev/lmcp0 -vqI'
for fully-labeled information, or just
'mtlib -l /dev/lmcp0 -qI'
for unlabeled data fields: volser,
category code, volume attribute, volume
class (type of tape drive; equates to
device class), volume type.
(or use options -vqI for verbosity, for
more descriptive output)
The tapes reported do not include CE
tape or cleaning tapes.
In TSM: 'Query LIBVolume [LibName]
[VolName]'
indirectly shows the Category Code in
the Status value, which you can then see
in numerical terms by doing
'Query LIBRary [LibName]'.
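The unlabeled 'mtlib -qI' fields can be post-processed to attach meaning to the category codes. The sketch below is illustrative only: it assumes five whitespace-separated fields in the order described above (volser first, category code second), and uses the well-known 3494 category codes mentioned in nearby entries. Verify the field layout against your mtlib level before relying on it:

```python
# Illustrative sketch: interpret 'mtlib -l /dev/lmcp0 -qI' inventory lines.
# Assumed line format: volser, category code (hex), volume attribute,
# volume class, volume type -- whitespace-separated.

# A few well-known 3494 category codes, from entries in this section.
CATEGORY_NAMES = {
    "FF00": "Insert (unassigned, in ATL)",
    "FFFD": "Cleaner volume (3590)",
    "FFFE": "Cleaner volume",
}

def summarize_inventory(lines):
    """Map each volser to a human-readable category description."""
    summary = {}
    for line in lines:
        fields = line.split()
        if len(fields) < 2:
            continue
        volser, category = fields[0], fields[1].upper()
        summary[volser] = CATEGORY_NAMES.get(category, "category " + category)
    return summary
```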
Category Table (TSM) /usr/tivoli/tsm/etc/category_table
Contains a list of tape library category
codes, like:
FF00=inserted. (unassigned, in ATL)
CC= Completion Code value in I/O operations,
as appears in error messages.
See the back of the Messages manuals for
a list of Completion Codes and suggested
handling.
CCW Continuous Composite WORM, as in a type
of optical WORM drive that can be in the
3995 library.
CD See also: DVD...
CD for Backup Set See: Backup set, on CD
CDRW (CD-RW) support? Tivoli Storage Manager V5.1, V4.2 and
V4.1 for Windows and Windows 2000
supports removable media devices such as
Iomega JAZ, Iomega ZIP, CD-R, CD-RW, and
optical devices provided a file system
is supplied on the media. The devices
are defined using a device class of
device type REMOVABLEFILE. (Ref:
Tivoli Storage Manager web pages for
device support, under "Platform Specific
Notes")
With CD-ROM support for Windows,
administrators can also use CD-ROM media
as an output device class. Using CD-ROM
media as output requires other software
which uses a file system on top of the
CD-ROM media. ADAPTEC Direct CD software
is the most common package for this
application. This media allows other
software to write to a CD by using a
drive letter and file names. The media
can be either CD-R (read) or CD-RW
(read/write). (Ref: Tivoli Storage
Manager for Windows Administrator's
Guide)
CE (C.E.) IBM Customer Engineer.
CE volumes, count of in 3494 Via Unix command:
'mtlib -l /dev/lmcp0 -vqK -s fff6'
Cell (tape library storage slot) For libraries containing their own
supervisor (e.g., 3494), TSM does not
track where volumes are stored in the
library: it merely asks the library to
mount them as needed.
See: Element; HOME_ELEMENT; Library...
SHow LIBINV
Cell 1 See: 3494 Cell 1
Centera Storage device from EMC which provides
retention protection for archiving fixed
content digital data records.
Supported in TSM 5.2.2.
Central Scheduling A function that allows an *SM
administrator to schedule backup,
archive, and space management operations
from a central location. The operations
can be scheduled on a periodic basis or
on an explicit date.
Shows up in server command Query STATus
output as "Central Scheduler: Active".
(It is not documented in the manuals
what controls its Active/Inactive state)
Changed Keyword at end of a line in client
backup log indicating that the file
changed as it was being backed up, as:
Normal File--> 1,544,241,152 /SomeFile
Changed
Backup may be reattempted according to
the CHAngingretries value. In the
dsmerror.log you may see an auxiliary
message for the retry: "<Filename>
truncated while reading in Shared Static
mode."
See also: CHAngingretries; Retry;
SERialization
CHAngingretries (-CHAngingretries=) Client System Options file (dsm.sys)
option to specify how many additional
times you want *SM to attempt to back
up or archive a file that is "in use",
as discovering during the first attempt
to back it up, when the Copy Group
SERialization is SHRSTatic or SHRDYnamic
(but not STatic or DYnamic). Note that
the option controls retries: if you
specify "CHAngingretries 3", then the
backup or archive operation will try a
total of 4 times - the initial attempt
plus the three retries.
Be aware that the retry will be right
after the failed attempt: *SM does not
go on to all other files and then come
back and retry this one.
Option placement: within server stanza.
Spec: CHAngingretries { 0|1|2|3|4 }
Default: 4 retries.
Note: It may be futile to attempt to
retry, in that if the file is large it
will likely be undergoing writing for a
long time.
Note: Does not control number of
retries in presence of read errors.
This option's final effect depends upon
the COpygroup's SERialization "shared"
setting: Static prohibits retries if the
file is busy; Dynamic causes the
operation to proceed on the first try;
Shared Static will cause the attempt to
be abandoned if the file remains busy,
but Shared Dynamic will cause backup or
archiving to occur on the final attempt.
See also: Changed; Fuzzy Backup; Retry;
SERialization
CHAngingretries, query The 'dsmc q o' command will *not* reveal
the value of this option: you have to
examine the dsm.sys options file.
CHAR SQL function to return a string of
optionally limited length, left-aligned.
Syntax: CHAR(Expression[,Len])
See also: LEFT()
CHECKIn LIBVolume TSM server command to check a *labeled*
tape into an automated tape library.
(For 3494 and like libraries, the volume
must be in Insert mode.)
'CHECKIn LIBVolume LibName VolName
STATus=PRIvate|SCRatch|CLEaner
[CHECKLabel=Yes|No|Barcode]
[SWAP=No|Yes] [MOUNTWait=Nmins]
[SEARCH=No|Yes|Bulk]
[CLEANINGS=1..1000]
[VOLList=vol1,vol2,vol3 ...]
[DEVType=3590]'
(Omit VolName if SEARCH=Yes. You can do
CHECKLabel=Barcode only if SEARCH=Yes.)
Note that this command is not relevant
for LIBtype=MANUAL.
Note that SEARCH=Bulk will result in
message ANR8373I, which requires doing
'REPLY <RequestNumber'.
MOUNTWait counts down with msg ANR8308I.
Not present in OS/390 *SM because such
operations are handled by the OS.
This action implicitly binds the volumes
to the category code numbers for Private
and Scratch according to the original
Define Library category code choices for
Private and Scratch.
SWAP=Yes causes *SM to eject the least
frequently mounted volume to make room
for the needed one.
Note that this involves a tape mount,
and *fails* if no drive is currently
available or the tape is outside the
library and the operator fails to
reinsert it within 60 minutes (message
ANR8426E). The checkin process will not
automatically dismount any lingering
tape mounts - you must issue a DISMount.
Example of checking in 1 tape, private:
'CHECKIn LIBV OURLIBR 000040
STATus=PRIvate SEARCH=No DEVType=3590'
Example of checking in 1 tape, scratch:
'CHECKIn LIBV OURLIBR 000040
STATus=SCRatch SEARCH=No DEVType=3590'
Example of checking in all tapes having
category "Insert" (FF00):
'CHECKIn LIBV OURLIBR STATus=SCRatch
SEARCH=Yes DEVType=3590'
Note: As of ADSMv3 you can label a
volume and check it in at the same time
via the LABEl LIBVolume command.
This command is ineligible for
administrative schedules.
Messages: ANR8424I ANR8302E ANR8357I
See also: CHECKLabel; LABEl LIBVolume
CHECKLabel Operand on AUDit LIBRary, CHECKIn,
CHECKOut. Possible values:
No (default) To perform no
verification - no tape
mount, no barcode
inspection. This is
chancey unless you know
your library to be very
reliable.
Yes Verifies the label
written on the tape media
by mounting the tape and
reading it.
Barcode SCSI libs only: Verifies
only the external barcode
on the cartridge; but if
no barcode or unable to
read, the tape will be
mounted and the internal
label will be read.
Specifying only Barcode
is skimping.
It is advised that you use Yes, to avoid
future problems.
CHECKOut LIBVolume TSM server command to logically (and
optionally, physically) check a labeled
tape out of an automated tape library.
The volume would no longer show up in a
'Query LIBVolume' display, but would
usually still be in a storage pool such
that a 'Query Volume' would still show
it, and an attempt to use it would cause
a request to be generated to check it
back into the library. Syntax:
'CHECKOut LIBVolume LibName VolName
[CHECKLabel=Yes|No]
[FORCE=No|Yes]
[REMove=Yes|No|BUlk]
[VOLList=vol1,vol2,vol3 ...]'
Note that this command is not relevant
for LIBtype=MANUAL.
Not present in OS/390 *SM because such
operations are handled by the OS.
The default "REMove=Yes" causes the
volume to be ejected from the library.
The volume being checked out must not be
mounted (or dismounting), else get
error ANR8442E.
The Category Code gets changed to FF00
(Insert category).
BUlk is new with fix level 15, to allow
outputting via the 3494 High Capacity
area, rather than just the more
limited Convenience I/O passthrough.
A checked-out volume's *SM status
remains unchanged. To the 3494, the
volume is no longer in the Library
Manager's database of known volumes.
Attempting to perform a restoral from
a checked-out volume whose Access value
has not been updated causes the client
to hang. When that is cancelled, the
volume Access mode becomes Unavailable
as *SM learns the truth, and further
attempts to restore from the tape get
ANS4314E at the client.
Note: If LABEl LIBVolume is running,
expect the Checkout process to linger,
uncompleted, until the Label finishes.
See also: CHECKLabel; MOVe MEDia
Checkpoint See: CKPT
Checksum You mean, CRC data.
See: AUDit Volume; CRC
CHEckthresholds Client System Options file (dsm.sys)
option for HSM use only, to specify how
often the space monitor daemon
(dsmmonitrd) checks space usage in
HSM-controlled file systems. The
interval granularity is in terms of
minutes. Default: 5 minutes.
CKPT Undocumented server command, available
at least through 1996, originally added
to the product for testing or service
use. It forces a checkpoint
(transaction state information) to be
written to the Recovery Log, and can
result in reducing its size if it is
inflated by a flurry of activity.
This command may be able to relieve the
Recovery Log - but maybe not. Rather
than trying to use an undocumented,
unsupported command you should work
within standard facilities: size your
Recovery Log to accommodate reasonable
workloads and make workloads reasonable
through spread-out scheduling; and then
let task completion and database backups
clear the Rollforward-mode Recovery Log.
See also: Recovery Log pinning/pinned
CLASS As output from 'mtlib' command volume
report, identifies the volume class:
the type of tape drive; equates to
device class:
00 3480
01 3590
Class See: Volume class
CLASS_NAME SQL: Management Class name, as in
Archives, Ar_Copygroups, Backups,
Bu_Copygroups, Mgmtclasses, and
Spacemgfiles tables.
In backup:
For directories, CLASS_NAME is typically
seen to be the actual name of the default
management class, but for files is
typically "DEFAULT" - in that in
backups, directories are bound to the
management class with the longest
retention (RETOnly).
There is no management class named
"DEFAULT". Rather, files bound to the
management class reported as "DEFAULT"
are really bound to the class that is
designated as the default management
class for that policy domain (usually,
"STANDARD"). If the default management
class changes, then files bound to
"DEFAULT" will be managed by the new
designated default management class.
Directories are always bound to a
specific management class (you should
not see "DEFAULT" for directories);
either the one with the longest RETONLY
setting or the one you specified via
DIRMc. If a different management class
becomes the one with the longest RETONLY
setting, then directories will be
rebound to that management class during
the next full incremental backup (except
for the ones bound to the DIRMc
management class).
In archiving:
CLASS_NAME will be "DEFAULT" (the
generic identifier for the default
management class) for archiving done
without using ARCHmc. Interestingly, if
you use ARCHmc specifying the management
class name which is the default
management class, its actual name will
show up under CLASS_NAME; and if you
specify -ARCHmc=default, CLASS_NAME will
be "DEFAULT".
WARNING: Be sure to code the name in
upper case in Select operations!!!
Failing to do so will cause no matches
for the given class name, which may
cause you to falsely believe that there
are no files in the management class,
such that you might believe that you can
safely delete it!
See also: ARCHmc; Default management
class, query
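For example, per the upper-case warning above, a Select against the Backups table should give the class name in capitals (the node and class names here are illustrative):

```text
SELECT COUNT(*) FROM BACKUPS -
  WHERE NODE_NAME='SOMENODE' AND CLASS_NAME='MYCLASS'
```

Coding 'myclass' in lower case would silently match nothing.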
"Classic Restore" See: No Query Restore
*CLEAN 3590 display message for when the drive
wants a cleaning tape. In a 3494 robotic
library, a cleaning tape will
automatically be mounted if one is
available and its barcode label prefix
matches that specified to the Lib Mgr.
CLEAN ARCHDIRectories Fixit command, per IX89638, to remove
duplicate archive directories, for all
nodes or a list of nodes, or resumes a
cleanup job that was canceled. This is
largely to compensate for the use of
EXPIre Inventory SKIPDirs=Yes. Syntax:
CLEAN ARCHDIRectories
[nodeList|JOBid=jobId]
[FIX=No|Yes]
Query ARCHDIRClean [jobId]
[Format=Standard|Detailed]
CANcel ARCHDIRClean jobId
Ref: ADSM 3.1 server README file
See also: EXPIre Inventory
CLEAN DRIVE ADSM server command to clean a drive in
a SCSI library (not a 3494 or other
library which manages its own cleaning).
Code: CLEAN DRIVE libraryname drivename
Cleaner Cartridge In general, a special tape cartridge
containing a material which wipes dirt
from the tape drive read/write heads.
A portion of the medium is used for each
cleaning, eventually resulting in
exhaustion of the cartridge.
TSM is involved with cleaner cartridges
*only* for those SCSI libraries that do
*not* have their own automatic cleaning
in the device hardware. Some libraries
which do their own automatic cleaning:
STK 9710, IBM 3494, 3570 and 3575.
Ref: DEFine/UPDate DRive
... CLEANFREQuency; CHECKIn LIBVolume
STATus=CLEaner
Cleaner Cartridge, 3494 You add special cleaning cartridges to a
3494 to allow the library to
automatically clean the tape drives as
needed, per cabled communication between
the drive and Library Manager computer.
Physical examination shows the cartridge
media to be very much like ordinary tape
- not a cloth-like material you might
expect.
Clean Cartridges *must* have a barcode
volser with a designated mask (prefix)
to identify cleaning cartridges - else
they can be interpreted as ordinary
tapes! The Cleaner Volume Mask defaults
to "CLN***", and can be redefined at the
3494 console under "Commands",
"Cleaner masks".
Make sure you add the right type of
cartridge: a 3490 cleaning cartridge
will be ignored by the LM when all the
drives are 3590s.
Two Cleaner Cartridges are considered
ideal for the 3494. (One, CLN999, is
supplied with the 3494, and with new
drives.)
Cleaner cartridges have no reserved cell
- they may live anywhere in the library.
Cleaning can occur by schedule, specific
request, or tape drive need.
Spent Cleaner Cartridges are
automatically ejected and hosts are
notified. Make sure that operators are
aware of this so that they neither
reinsert the cartridge as though it were
a new cleaner cartridge, nor think that
the cartridge is intended to be sent
offsite for disaster recovery purposes.
Msgs: ANR8914I
Cleaner Cartridge, 3494, how to add Simple: Just insert the cartridge into
the Convenience I/O Station. The robot
will store it in an arbitrary cell and,
by virtue of its barcode matching the
established Cleaner Mask, it will
automatically be assigned the Cleaner
Volume category.
Cleaner Cartridge life, 3494 Adjust via the 3494 Operator Station
Commands menu selection "Schedule
Cleaning, in the "Maximum cleaner usage"
box.
Default life limit count is 200; up to
500 can be specified.
Cleaner Volume category 3494 Library Manager category code FFFE
for a cleaning tape whose volser matches
a mask set up by the operator through
the library manager console.
3590 drives: category code FFFD applies.
The host is kept ignorant of such
volumes, as they are obviously not
eligible for use.
CLEANFREQuency DEFine/UPDate DRive parameter for SCSI
libraries specifying whether *SM should
get involved in drive cleaning. Code:
CLEANFREQuency=None|Asneeded|N_GB.
If Asneeded, when a drive reports a
cleaning-needed indicator to the device
driver, *SM will load the drive with a
checked-in cleaner cartridge.
IBM libraries like the 3494 manage their
own drive cleaning, and so this option
should default to CLEANFREQuency=None.
Cleaning, when last occurred At the 3494 console, go into the
Utilities menu, select View Logs, and
then select a transaction log and search
for "CLN" or "CQ00".
AIX will create a TAPE_DRIVE_CLEANING
entry in the AIX Error Log.
Cleaning cartridge See also: Cleaner cartridge
Cleaning cartridges, count of in 3494 Via Unix command:
'mtlib -l /dev/lmcp0 -vqK -s fffd'
Cleaning cartridges, 3494, max usage Is defined and queried (only) via the
count 3494 Operator Station Commands menu
selection "Schedule Cleaning, in the
"Maximum cleaner usage" box.
Default life limit count is 200; up to
500 can be specified.
Cleaning cartridges, 3494, volsers of Via Unix command:
'mtlib -l /dev/lmcp0 -qC -s fffd'
Cleaning interval, 3590 See: 3590 cleaning interval
Cleaning tape mounts, 3590, by drive See: 3590 cleaning tape mounts, by
drive
CLEANINGS_LEFT TSM DB: Column in LIBVOLUMES table
referring to the number of cleaning
cycles remaining on cleaner tapes in
SCSI libraries, where TSM must
physically control cleaning. (Is null
for libraries such as the 3494, where
the library is controlled by its own
supervisor, which handles cleaning
itself.)
Type: Integer Length: 4
CLEANUP BACKUPGROUPs TSM 4.2.2.x and 5.1.x ad hoc utility
created to clean up orphaned entries
within a TSM server database table, to
be run one time to correct the orphaned
group member problem. Involves Windows
System Objects; and so if you have not
backed up such in TSM v4, this may not
be an issue for you. Unfortunately, the
utility had a number of defects.
The utility is introduced in APAR
IC33977:
"This command will cause the server to
evaluate all the defined groups of
backup files in-use on the server and
remove any extraneous or orphaned
members that should have previously
been deleted. If the server encounters
an orphaned backup group member during
an upgrade from V4.2 to V5.1, the
server will issue the following
message:
ANR9999D iminit.c(xxxx): ThreadId
Orphaned group member encountered -
issue CLEANUP BACKUPGROUP to resolve
this issue."
From Flash 10201:
Problem Description:
SYSTEM OBJECTS were introduced in
version 3.7 and collect potentially
thousands of parts into a single object
to represent the Windows system object.
The increased database usage
(percentage utilization) is addressed
in APAR IC33977. This APAR documents a
deletion problem on the server where it
did not delete all the necessary
database entries corresponding to the
parts representing a single SYSTEM
OBJECT. This incomplete deletion
resulted in increased database
utilization because entries that should
have been deleted were not being
deleted. To resolve this, a utility,
CLEANUP BACKUPGROUP, was provided to
remove the extraneous database entries.
The server processing for expiration
and the "CLEANUP BACKUPGROUP" utility
may run slowly, may seem to hang or
stall, and may impact other operations
on the server. This slow down in
processing or apparent hang of these
processes is caused by the deletion
processing of files backed up for
SYSTEM OBJECT filespaces. Because
SYSTEM OBJECTS actually consist of
potentially thousands of parts, when
deletion occurs, the server is actually
deleting these thousands of files under
a single transaction which is why the
server appears to "hang" or stall.
However, even though it appears to hang
or stall, it is actually deleting these
system objects and should be allowed to
continue. Until CLEANUP BACKUPGROUP and
expiration run to completion, the
database utilization will remain higher
than it should be. Additionally,
because of the backlog of data that
needs to be deleted, expiration will
take longer and have more to do.
Caution: This operation eats Recovery
Log space if in RollForward mode:
consider switching to Normal mode for
the duration of the operation.
CLI Refers to Command Line Interface: the
way that the 'dsmc'; and 'dsmadmc'
commands operate, as contrasted with GUI
or WCI operation.
See also: Backup Set and CLI vs. GUI
Client A program running on a file server or
workstation that requests services of
another program called the Server.
Client, associate with schedule 'DEFine ASSOCiation Domain_Name
Schedule_Name Node_name
[,Node_name___]'
Client, last activity 'Query ACtlog SEARCH=Client-Name'
Client, prevent storing data on server There is no server setting for rendering
a client node's accesses read-only, so
that they can retrieve their data from
the server but not store new data.
However, from the client (or server
cloptset) it can be done, via Excludes.
See: Archiving, prohibit; Backups,
prevent
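The Exclude-based approach can be sketched as a server-side client option set; the set name "READONLY" and node name "SOMENODE" below are hypothetical examples, not product defaults, and client-side include statements may still interact with these per the processing-order notes under "Client Option Set":

```text
(Illustrative server commands: exclude everything from backup and
 archive, effectively blocking new stores from the node.)
DEFine CLOptset READONLY DESCription="Block new client stores"
DEFine CLIENTOpt READONLY INCLEXCL "exclude /.../*"
DEFine CLIENTOpt READONLY INCLEXCL "exclude.archive /.../*"
UPDate Node SOMENODE CLOptset=READONLY
```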
Client, register with server With "Closed registration" (q.v.) you
request the ADSM server administrator to
register your client system.
With "Open registration" the client root
user may register the client via the
'dsm' or 'dsmc' commands. To register
with multiple servers, enter the command
'dsmc -SErvername=StanzaName', where
StanzaName is the stanza in dsm.sys
which points to the server network and
port address.
Ref: Installing the Clients.
Client, space used on all volumes 'Query AUDITOccupancy NodeName(s)
[DOmain=DomainName(s)]
[POoltype=ANY|PRimary|COpy'
Note: It is best to run 'AUDit LICenses'
before doing 'Query AUDITOccupancy' to
assure that the reported information
will be current.
Client Acceptor Daemon (CAD) A.k.a. TSM Client Acceptor and Client
Acceptor Service: A facility to manage
TSM client processes, including the
Client Scheduler and the Web Client.
Module: dsmcad
Facility introduced in TSM 4.2 to deal
with the design behavior of the client
scheduler to retain all the memory it
has acquired for its various process
servicing, so as to reserve the
resources it predictably will need for
the next such scheduled task. (This is
sometimes disparaged as a "memory leak"
when it is merely retention.)
The developers realized that while some
client systems can sustain a process
which reserves that much memory, others
cannot do that and handle their other
workloads as well. The CAD allows only a
low-overhead process to persist in the
client system, to respond to the server
in processing schedules. The CAD will
invoke the appropriate client software
function (e.g., dsmc), and allow that
client software module to go away when
it is done, thus releasing memory which
other system tasks need. CAD also
serves to start the Web Client.
Operation is governed by the new
MANAGEDServices option.
It is important to realize how dsmcad
starts: to run as a daemon, it first
starts as an ordinary process, then
forks a copy of itself into the
background whereupon that dissociated
process becomes the daemon and the
original process ends. Because of this
daemon transition action, if dsmcad is
started from /etc/inittab, the third
(Action) field of the inittab line
should *not* contain "respawn": it
should contain "once". This keeps init
from responding to the daemon transition
by starting a replacement process -
which can result in port contention and
a start-loop. Operation involves a timer
file, through which is passed info about
the next schedule.
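Per the inittab caution above, an AIX /etc/inittab entry for dsmcad might look like the following (the tag "tsmcad" and redirections are illustrative):

```text
(Action field is "once", not "respawn", because dsmcad forks itself
 into the background and the original process exits.)
tsmcad:2:once:/usr/tivoli/tsm/client/ba/bin/dsmcad >/dev/null 2>&1
```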
A side advantage of the CAD approach is
that changes in the client options will
be in effect the next time CAD runs a
dsmc backup - in contrast to normal
client schedule behavior, which looks at
the options only when it starts, never
looking again to see any changes over
the many days that it hangs around.
Port number: Surprise!... The HTTPport
value also controls the Client Acceptor
(dsmcad) port number! Ref: www.ibm.com/
support/entdocview.wss?uid=swg21079454 .
Ref: 4.2 Technical Guide redbook; B/A
Clients manual
See also: MANAGEDServices; Remote Client
Agent; Scheduler
Client access, disable 'DISAble'
Client access, enable 'ENable'
Client activity, report See: Client session activity, report
Client component identifiers AIX prefix = AD1.
See /usr/lpp/adsm/bin/README for full
list.
Client compression See: Compression
Client CPU utilization high Typically due to the use of client
compression.
See "Backup performance" for general
tips in this area.
Client directory, ADSM AIX: /usr/lpp/adsm/bin/
Client directory, TSM AIX: /usr/tivoli/tsm/client/
/usr/tivoli/tsm/client/ba/bin/
Windows:
c:\program files\tivoli\tsm\baclient
Client files, maximum in batch "MOVEBatchsize" definition in the
transaction, define server options file.
Default: 32
Client IP address See: IP addresses of clients
Client level compatibility As client software evolves, it
introduces new features which require
changes in the format of the data as
stored on the server. Obviously, an
older client cannot understand data
formatting which is beyond its
programming.
See also: API; msgs ANS1245E, ANS4245E
Client level vs. server level It is much more difficult to advance the
TSM server software level than it is on
clients, and so the question arises as
to how disparate the client and server
software levels can be. Over the life of
the older ADSM product, it was the case
that basically any client level would
work with any server level. With the TSM
product, though, we learn that they
should not be more than one level
different. For example, if your TSM
client level is 4.1, you can operate
with a TSM 4.1 or 3.7 server, but no
lower server level is supported.
Important: When advancing a client
level, you cannot go back after you
start using it. New client levels
contain new features and change the way
in which they store client data on the
server - which would be unrecognized by
a lower level client.
Specifics can be found in:
- The Backup/Archive Clients manual,
chapter 1, under "Additional Migration
Information".
- The README file that comes with the
client software, in its section
"Migration Information"
Client log, access from server The server administrator may want to
have access to the client log. An
elegant way to avoid needing password
access is to define a client schedule to
copy the dsmsched.log files from the
client to the server:
'def sch DOMAIN GETLOG act=c obj="rcp
dsmsched.log admin@server:client.log"'
Client messages in server Activity This is Event Logging, resulting in ANE*
Log messages in the Activity Log.
Client Name In 'Query SEssion' output, identifies
the name of the client node conducting
the session. Note that if the operation
is being performed across nodes, on a
node of another name via the
-VIRTUALNodename parameter, the name
reflected will be the name specified in
that parameter, not the natural name of
the node performing the action.
Client node A file server or workstation on which
*SM has been installed and that has
been registered with an *SM server.
A node can only belong to one domain.
You can register a second node to a
different domain.
Client node, register from server 'REGister Node ...' (q.v.)
Be sure to specify the DOmain name you
want, because the default is the
STANDARD domain, which is what IBM
supplied rather than what you set up.
There must be a defined and active
Policy Set.
Note that this is how the client node
gets a default policy domain, default
management class, etc.
Client node, reassign Policy Domain 'UPDate Node NodeName DOmain=DomainName'
Node must not be currently conducting a
session with the server, else command
fails with error ANR2150E.
Client node, remove from server 'REMove Node NodeName'
Client node, rename in server 'REName Node NodeName NewName'
Client node, update from server 'UPDate Node ...' (q.v.)
Node must not be currently conducting a
session with the server, else command
fails with error ANR2150E.
Client node name Obtained in this order:
1. The 'gethostname' system call
(this is the default)
The owner of the files is that of
the invoker.
2. The nodename from the dsm.sys file
The owner of the files is that of
the invoker.
3. The nodename from the dsm.opt or
from the command line
(i.e. dsmc -NODename=mexico)
This option is meant for temporary
impersonation - it requires the user to
enter a password even if password generate
is indicated in the dsm.sys file.
This mode does NOT use the login id
for ADSM owner. Instead it gives
access to all of the files backed up
under this nodename - i.e. virtual
root authority. The 'virtual root
authority' is why there is a check
to prevent the nodename entered
being the same as the
'gethostname'.
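The lookup order above can be sketched as a small precedence function. This is illustrative only; the real client also applies the password and file-ownership rules described above:

```python
# Illustrative sketch of the documented node name precedence:
# -NODename on the command line (3) overrides dsm.sys (2), which
# overrides the gethostname() default (1).
import socket

def resolve_node_name(cmdline_nodename=None, dsm_sys_nodename=None):
    if cmdline_nodename:            # 3. dsmc -NODename=...
        return cmdline_nodename
    if dsm_sys_nodename:            # 2. NODename in dsm.sys
        return dsm_sys_nodename
    return socket.gethostname()     # 1. system default
```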
Client node policy domain name, query 'Query Node' shows node name and the
Policy Domain Name associated with it.
Client nodes, query 'Query Node [F=D]'
Reports on all registered nodes.
Client operating system Shows up in 'Query Node' Platform.
Client Option Set ADSMv3 concept, for centralized
administration of client options.
Via DEFine CLIENTOpt, the centralized
client options are defined in the
server, and are associated with a given
node via REGister Node. On the *SM
server, its administrator can use the
Force operand to force the
server-specified options to override
those in the client. This works for
singular options, like COMPRESSAlways,
but not for multiple options like
Include-Exclude and DOMain in that those
are "additive" options: every definition
adds to the collection of such
definitions. In terms of processing
order, server-defined additive options
logically precede those defined in the
client options file: they will always be
seen and processed first, before any
options set in the client options file.
The Client Option Set is associated with
a node via the 'REGister Node' command,
in operand CLOptset=OptionSetName.
Example of server command defining an
IncludeExclude file:
DEFine CLIENTOpt OptionSetName INCLEXCL
"include d:\test1".
Note that the Include or Exclude must be
in quotes. If your path name also has
quotes, then use single quotes for the
outer pair. Include and Exclude
definitions are in the specified file.
The B/A client will recognize the need
to parse for the include or exclude
inside of the quotes.
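The quoting rule above (double quotes normally; single quotes for the outer pair when the statement itself contains quotes) can be sketched as a small helper - hypothetical, not part of any TSM tooling:

```python
def quote_clientopt(statement):
    """Wrap an include/exclude statement for DEFine CLIENTOpt:
    single outer quotes if the statement already contains
    double quotes, else double outer quotes."""
    outer = "'" if '"' in statement else '"'
    return outer + statement + outer

print(quote_clientopt(r"include d:\test1"))
# -> "include d:\test1"
```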
Use 'dsmc show inclexcl' to reveal the
mingling of server-defined
Include/Exclude statements and those
from the client options file.
Note that a client version earlier than
V3 knows nothing of Client Option Sets
and so server-defined options are
ineffective with earlier clients.
Server-based client option set changes
are handed to the client scheduler when
it runs, which is to say that cloptset
changes in the server do not necessitate
restarting the client scheduler process.
(This has been verified by experience.)
Management of a cloptset is facilitated
by the TSMManager package, which
provides a GUI-based copy and update
facility.
Ref: Redbook Getting Started with Tivoli
Storage Manager: Implementation Guide,
SG24-5416
See also: DEFine CLIENTOpt
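The Force/additive behavior described above can be modeled roughly as follows - an illustration of the rules, not of TSM internals:

```python
def merged_inclexcl(server_stmts, client_stmts):
    """Additive options (inclexcl, domain) accumulate rather
    than override: server-defined statements are processed
    first, before any client options file statements."""
    return list(server_stmts) + list(client_stmts)

def singular_option(name, cloptset, client_file):
    """Singular options: a cloptset entry with Force=Yes wins;
    with Force=No a locally set value prevails."""
    entry = cloptset.get(name)        # (value, forced) or None
    if entry and entry[1]:
        return entry[0]               # forced by the server
    if name in client_file:
        return client_file[name]      # client's own setting
    return entry[0] if entry else None

print(singular_option("compressalways",
                      {"compressalways": ("yes", True)},
                      {"compressalways": "no"}))   # -> yes
```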
Client Option Set, associate 'UPDate Node NodeName
CLOptset=Option_Set_Name'
Client Option Set, dissociate 'UPDate Node NodeName CLOptset=""'
Client Option Set, define 'DEFine CLOptset ______'.
Client Option Set, query Do 'Query Node ______ F=D' and look for
"Optionset:" to determine which option
set is in effect for the client, and
then do 'Query CLOptset OptionSetName'
Client options, list 'dsmc Query Options'
'dsmmigquery -o'
Client options, order of precedence Per doc APAR PQ54657:
1. Options entered on a scheduled
command.
2. Options received from the server with
a value of Force=Yes in the DEFine
CLIENTOpt. The client cannot override
the value. (Client Option Sets)
3. Options entered locally on the
command line.
4. Options entered locally in the
options file.
5. Options received from the server with
a value of Force=No. The client can
override the value.
6. Default option values.
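A minimal resolver for the six levels, in the order given - the level names are labels invented for this sketch, not TSM keywords:

```python
PRECEDENCE = [
    "scheduled_command",  # 1. options on a scheduled command
    "server_forced",      # 2. DEFine CLIENTOpt ... Force=Yes
    "command_line",       # 3. local command line
    "options_file",       # 4. local options file
    "server_unforced",    # 5. DEFine CLIENTOpt ... Force=No
    "default",            # 6. default option values
]

def effective_value(name, sources):
    """Scan the six levels in documented order and return the
    first value found, plus the level that supplied it."""
    for level in PRECEDENCE:
        if name in sources.get(level, {}):
            return sources[level][name], level
    return None, None
```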
Client options, settable within server Do 'help define CLIENTOpt' to see.
(ADSMv3)
Client options file See: Client System Options file;
Client User Options file
Client password, change from client 'dsmsetpw' (an HSM command)
Client password, where stored on See: Password, client, where stored on
client
Client performance factors - Make sure that you don't install any
software not needed on the machine.
For example, some AIX and Solaris
customers install everything that
comes in the client package -
including HSM, which results in it
always running without their
knowledge, taking up system resources.
- Turn off any features that are unused
and which may sap processing power.
On a Macintosh, for example, turn off
AppleTalk if it is unused.
See also: Backup taking too long;
Restoral performance; Server performance
Client polling A client/server communication technique
where the client node queries the server
for scheduled work, as defined by the
'SCHEDMODe POlling' option in the Client
System Options file (dsm.sys), and a
frequency as defined via the
"QUERYSCHedperiod" option. It is in
this mode that "Set RANDomize" can
apply.
Contrast with "PRompted" type.
Client schedule See: DEFine CLIENTAction;
DEFine SCHedule, client; NT;
Schedule, Client; SET CLIENTACTDuration;
Weekdays schedule, change the days
Client schedule, contact frequency The ADSM server attempts to contact each
specified client in sequence, giving
each up to 10 seconds to respond
before going on to the next in the list.
Client schedule, one time See: DEFine CLIENTAction
Client schedule associations, query 'Query ASSOCiation
[[DomainName] [ScheduleName]]'
Client scheduler See: Scheduler, client
Client schedules, disable See: DISABLESCheds
See also: DISAble SESSions
Client schedules, results, query 'Query EVent DomainName ScheduleName'
to see all of them. Or use:
'Query EVent * * EXceptionsonly=Yes'
to see just problems, and if none, get
message "ANR2034E QUERY EVENT: No match
found for this query."
Client schedules, see from server? Clients don't have TCP/IP ports open
until the schedule comes due, so one
can't bounce off those ports to
determine existence. The closest thing
might be the SHOW PENDING server
command, unless a comparable SQL query
could be formulated. But even then,
that's an *expected* client presence,
not current actual. This may require
some alternate access to the client to
look for the ADSM client process.
On NT, you can use server manager to
view the status of the scheduler service
on all your NT clients. It can also
used to start, stop, or disable the
service, providing you have the proper
authority and the schedules are running
as services on the clients.
Client/server A communications network architecture in
which one or more programs (clients)
request computing or data services from
another program (the server).
Client session, cancel 'CANcel SEssion NN'
Client session activity, report There are a few choices:
- Activate TSM server accounting and
report from the accounting log
records;
- Perform an SQL Select on the Summary
table.
More trivially, you can report on the
number of bytes received in the last
session with the client...which may be
any kind of session (even querying):
SELECT NODE_NAME,LASTSESS_RECVD AS \
NUM_OF_BYTES,LASTACC_TIME FROM NODES \
ORDER BY 2 DESC
Client sessions, cancel all 'CANcel SEssion all'
Client sessions, limit amount of data There is no feature in the product which
would allow the server administrator to
limit the amount of data that a client
may send in a session or period of time
(one day), as you may want to do to
squelch wasteful Backups or looping
Archive jobs.
See also: "Quotas" on storage used
Client sessions, limit time There is no obvious means in TSM to
limit client session lengths, to cause a
timeout after a certain time.
As a global affector, you could try
playing with the THROUGHPUTTimethreshold
server option, paired with a high
THROUGHPUTDatathreshold value which
your clients could not possibly achieve,
thus causing any client session lasting
more than THROUGHPUTTimethreshold
minutes to be cancelled automatically.
Client sessions, maximum, define "MAXSessions" definition in the server
options file.
Client sessions, maximum, get 'Query STatus'
Client sessions, multiple See: RESOURceutilization
Client System Options File File dsm.sys, used on UNIX clients, that
contains a number of processing options
which identify the ADSM servers to be
contacted for services, communication,
authorization, central scheduling,
backup, archive, and space management
options. The file is maintained by the
root user on the client system. The
philosophy is that for multi-user
systems (e.g., Unix) there should be
a system options file (this one) and
a user options file (dsm.opt) to
supplement. This is in contrast to
single-user systems like Windows, where
only a single options file is needed.
The ADSM 'dsmc Query Options' or TSM
'show options' reveals current values
and validates content.
AIX: /usr/lpp/adsm/bin/dsm.sys
IRIX: /usr/adsm/dsm.sys
The DSM_DIR environment variable can be
used to point to the directory
containing this file.
See also Client User Options File.
APAR IC11651 claims that if
PASSWORDAccess is set to Generate in
dsm.sys, then dsm.opt should *not*
contain a NODE line.
See also: Client User Options file
Client threads See: Multi-session Client
Client upgrade notes Typically, the rule in effect when
upgrading a client is that once you go
to a new client level and use it, you
cannot go back. New TSM clients extend
the table entries stored in the server
database, as associated with the data
they send to server storage pools, and
that prohibits use with earlier clients,
which can't understand the revised
tables. Note that in an upgrade, it is
never normally necessary for the TSM
client to reprocess any of the older
data.
See also: Client-server compatibility
Client User Options file File dsm.opt, used on UNIX clients,
containing options that identify the
ADSM server to contact, specify backup,
archive, restore, retrieve, and space
management options, and set date, time,
and number formats.
Either locate in /usr/lpp/adsm/bin or
have the DSM_CONFIG client environment
variable point to the file, or specify
it via -OPTIFILE on the command line.
See client system options file.
APAR IC11651 claims that if
PASSWORDAccess is set to Generate in
dsm.sys, then dsm.opt should *not*
contain a NODE line.
See also: Client System Options file
Client versions/releases, list SELECT CLIENT_VERSION as "C-Vers", -
CLIENT_RELEASE AS "C-Rel", -
CLIENT_LEVEL AS "C-Lvl", -
CLIENT_SUBLEVEL AS "C-Sublvl", -
PLATFORM_NAME AS "OS" , -
COUNT(*) AS "Nr of Nodes" FROM NODES -
GROUP BY -
CLIENT_VERSION,CLIENT_RELEASE, -
CLIENT_LEVEL,CLIENT_SUBLEVEL, -
PLATFORM_NAME
----------------
SELECT NODE_NAME AS "Node", -
CLIENT_VERSION AS "C-Vers", -
CLIENT_RELEASE AS "C-Rel", -
CLIENT_LEVEL AS "C-Lvl", -
CLIENT_SUBLEVEL AS "C-Slvl", -
PLATFORM_NAME AS "OS" -
FROM NODES
Client-server compatibility See the compatibility list at the front
of the Backup/Archive Client manual,
chapter 1, under "Migrating from Earlier
Versions", "Upgrade Path for Clients and
Servers". Or see web page:
http://www.tivoli.com/support/
storage_mgr/compatibility.html
ClientNodeName Windows Registry value, planted as part
of setting up TSM Client Service
(scheduler), storing the TSM node name -
which may or may not be equal to the
computer name. Corresponds to the TSM
API value clientNodeNameP (q.v.).
clientNodeNameP TSM API: A pointer to the nodename for
the TSM session. All sessions must have
a node name associated with them, and
this sets it for API interactions.
The node name is not case sensitive.
This parameter must be NULL if the
passwordaccess option in the dsm.sys
file is set to generate. The API then
uses the system host name.
Clients, report MB and files count SELECT NODE_NAME, SUM(LOGICAL_MB) AS -
Data_In_MB, SUM(NUM_FILES) AS -
Num_of_files FROM OCCUPANCY GROUP BY -
NODE_NAME ORDER BY NODE_NAME ASC
CLIOS CLient Input Output Sockets: a protocol
like TCPIP that you can use to
communicate between MVS and AIX. In a
nutshell it is a faster protocol than
TCP/IP. You have to specifically set
up ADSM on MVS and your AIX machine to
take advantage of CLIOS - in other
words it does not get set up by default.
Clopset The ADSM 'dsmc Query Options' or TSM
'show options' command will show merged
options.
Closed registration Clients must be registered with the
server by an ADSM administrator.
This is the installation default.
Can be selected via the command:
'Set REGistration Closed'.
Ref: Installing the Clients
Contrast with "Open registration".
"closed sug" IBM APAR notation meaning that the
customer reported what they believe to
be a functionality problem, but which
IBM regards as not a problem: IBM will
take the report under advisement as a
"suggestion" for possible incorporation
into future software changes. Have a
nice day.
Cluster TSM terminology referring to the portion
of a filespace belonging to a single
client that is on a storage pool volume.
Referenced in message ANR1142I as TSM
performs tape reclamation.
See also: Reclamation
Clustering (Windows) See manual "TSM for Windows Quick Start"
Appendix D - "Setting up clustering".
CLUSTERnode, -CLUSTERnode= AIX and Windows client option to specify
whether TSM is responsible for managing
cluster drives in an AIX HACMP or
Microsoft Cluster Server (MSCS)
environment. For info on how to
configure a cluster server, refer to the
appropriate appendix in the client
manual. Specify:
Yes You want to back up cluster
resources.
No You want to back up local disks
This is the default.
The client on which you run the backup
must be the one which owns the cluster
resources, else the backup will not
work.
When CLUSTERnode Yes is in effect, the
cluster name is used to generate the
filespace name. However, it is not
derived from the /clustername:xxxx
option in the service definition.
Instead, the client gets the cluster
name via the Win32 API function
GetClusterInformation(). The reason you
need to specify it when running
dsmcutil.exe is because that utility can
also be used to configure remote
machines. Figuring out the local cluster
name is easy, but figuring out the
cluster name for a remote machine is a
little more difficult: the NT client may
in the future be able to do this.
CM Cartridge Memory, as contained in the
LTO Ultrium and 3592 tape cartridges.
Contained in the CM is an index (table
of contents) to the location of the
files that have been written to the
tape. If this becomes corrupted (as
happened with bad LTO drive firmware in
late 2004), the drive has to find files
by groping its way through the tape,
which severely degrades performance.
See: 3592; LTO
Code 39 Barcode as used on the 3590 tape:
A variable length, bi-directional,
discrete, self-checking, alpha-numeric
bar code. Code 39 encodes 43 characters:
zero through nine, capital "A" through
capital "Z", minus symbol, plus symbol,
forward slash, space, decimal point,
dollar sign and percent symbol. Each
character is encoded by 9 elements, 3 of
which are always wide.
Code 39 was the first alphanumeric
symbology to be developed. It is the
most commonly used bar code symbology
because it allows numbers, letters, and
some punctuation to be bar coded. It is
a discrete and variable length
symbology. Every Code 39 character has
five bars and four spaces. Every
character encodation has three wide
elements and six narrow elements out of
nine total elements, hence the name. Any
lower case letters in the input are
automatically converted to upper case
because Code 39 does not support lower
case letters. The asterisk (*) is
reserved for use as the start and stop
character. Bar code symbols which
contain invalid characters are replaced
with a checked box to visually indicate
the error. The Code 39 mod 43 symbol
structure is the same as Code 39, with
an additional data security check
character appended. The check character
is the modulus 43 sum of all the
character values in a given data
string. The list of valid characters for
the Code 39 bar code includes:
The capital letters A to Z
The numbers 0 to 9
The space character
The symbols - . $ / + %
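The mod 43 check character described above is easy to compute from the standard Code 39 value table (digits 0-9, then A-Z, then the seven symbols, values 0 through 42):

```python
# Code 39 character set in value order: '0' has value 0 ... '%' has 42.
CODE39 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ-. $/+%"

def mod43_check_char(data):
    """Return the Code 39 mod 43 check character: the modulus-43
    sum of the character values of the data string."""
    total = sum(CODE39.index(c) for c in data.upper())
    return CODE39[total % 43]

print(mod43_check_char("CODE39"))   # -> W
```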
Cold Backup Refers to the backup of commercial
database (e.g. Oracle) file system
elements via the standard Backup/Archive
client when the database system is shut
down (cold). This is in contrast to
using a TDP for backup of the database
when the database system is up and
alive.
Collocate a single node (belatedly) Some time after a storage pool has been
used in a non-collocated manner you want
to have one node's existing data all
collocated (as contrasted with simply
having new data or reclaimed volume data
collocated on a go-forward basis).
Possible methods:
1. Export the node's data, get the
original node definitions out of the
way, redefine the stgpool as
collocated, then import.
2. Identify all volumes containing that
(and inevitably, other nodes) data,
create a temp collocated stgpool, do
MOVe Data of the co-mingled volumes
into it (which collocates the data
for all nodes in the data), then MOVe
Data all other node volumes out of
there, resulting in the temp stgpool
containing data only belonging to the
node of interest.
Collocation A STGpool attribute for sequential
access storage pools (not disk).
Historically, *SM would by default store
data from multiple clients at the end of
the serial storage medium most recently
written to, where the co-mingling would
minimize mounts while maximizing volume
usage. This is "no collocation". Economy
at backup time, however, makes for
expense at restoral time in that each
node's data is now more widely scattered
over multiple volumes.
As of TSM 5.3, collocation by group is a
new capability; and if you do not define
a group, the arrangement effectively
defaults to collocation by node. (So,
the old lesson: never take defaults -
always specify what you want.)
Collocation shifts the relative
economics by keeping files together for
a given node, or node filespace, making
restoral quicker at the expense of more
media mounts and more incompletely-used
volumes.
Collocation off: co-mingled data, fewer
tapes, fewer tape mounts, longer time
to restore multiple files of one
client. Via:
'UPDate STGpool PoolName COLlocate=No'
Collocation on: tapes dedicated by
client, more tapes, more tape mounts,
shorter time to restore multiple files
of one client. Via:
'UPDate STGpool PoolName COLlocate=Yes'
By filespace: Keep files for each node,
and filespace within node, together in
separate tape sets.
'UPDate STGpool PoolName
COLlocate=FILespace'
By group: Keep files for a group of
nodes together in separate tape sets.
'UPDate STGpool PoolName
COLlocate=GROUP'
Default: No collocation
Note: If there are fewer tapes available
than clients or filespaces, *SM will be
forced to mingle them: Collocation is a
request, not a mandate.
Note that there is no "Wait=Yes"
provided with this command.
Advisory: Approach Collocation
cautiously where the tape technology has
inferior start-stop performance (i.e.,
traditional DLT): the performance will
likely be unsatisfactory with all the
repositioning, and you may see a lot of
"false" cleans on the drives, with tapes
that cannot be read back, lots of RSR
errors, etc.
With collocation, MOUNTRetention should
be short, to keep the increased number
of mounts from waiting for dismounts.
Related expense: BAckup STGpool will run
longer, as more primary storage pool
volumes are involved. Also, a
collocated primary tape which is updated
over a long period of time will mean
that its data is spread over *many*
tapes in the non-collocated copy storage
pool, which can make for painfully
lengthy Restore Volume operations.
Collocation is usually not very useful
for archive data because users do not
usually retrieve a whole set of files.
Notes: There are a few cases where *SM
will need to start additional
"filling" collocation volumes. One
obvious case is when reclamation and
migration are running at the same time.
If reclamation is writing to an output
volume that migration also wants to
write to, the migration process won't
wait around for it. Instead it will
choose a new tape. Another case
involves reclamation and files that span
tapes, when it may possibly create
another "filling" volume.
Ref: IBM site Technote 1112411
See also: Imperfect collocation
Collocation, changing Changing the STGpool collocation value
does not affect data previously stored.
That is, *SM does not suddenly start
moving data around because you changed
the setting.
Ref: Admin Guide, Turning Collocation On
or Off
Collocation, query 'Query STGpool [STGpoolName]
Format=Detailed',
look for "Collocate?", about halfway
down.
Collocation, transferring from Transferring data from a non-collocated
non-collocated space tape to a collocated tape can be very
slow because the server makes multiple
passes through the database and the
source volume. This allows files to
be transferred by node/filespace without
requiring excessive tape mounts of the
target media.
Collocation and backup For backup of a disk storage pool, a
process backs up all the files for one
node before going on to the next node.
This is done regardless of whether the
target copy pool is collocated.
For backup of sequential-access
primary pools, a backup process works on
one primary tape at a time. If the
target copy pool is collocated, the
backup process copies files on the
source tape by node/filespace. This
means that if you are backing up a
non-collocated primary pool to a
collocated copy pool, it may be
necessary to make multiple passes over
the source tape.
Collocation and database backups Collocation is often pointless - and
even counterproductive - for large
databases. Given the size of a database
backup (as via a Data Protection agent),
it tends to occupy a large portion of a
tape, effectively enforcing a kind of
implicit collocation. It is also the
case that whereas a database backup is
often far more important to business
continuity than other kinds of files, it
is beneficial not to have them clustered
together on the same tape volumes, but
rather dispersed. Another point here is
that the DP may employ Data Striping:
the data will be kept on separate tapes,
and during restoral the tapes will be
mounted and used in parallel.
Collocation and 'MAXSCRatch' ADSM will never allocate more than
'MAXSCRatch' volumes for the storage
pool: collocation becomes defeated when
the scratch pool is exhausted as ADSM
will then mingle clients. When a new
client's data is to be moved to the
storage pool, ADSM will first try to
select a scratch tape, but if the
storage pool already has "MAXSCRatch"
volumes then it will select a volume as
follows:
For collocation by node: select the tape
with the lowest utilization in the
storage pool.
For collocation by filespace: first try
to fall back to the collocation-by-node
scheme by selecting a volume containing
data from the same client node; then
select the tape with the lowest
utilization in the storage pool.
Ref: Admin Guide, chapter on Managing
Storage Pools and Volumes, topic How the
Server Selects Volumes with Collocation
Enabled
See also: MAXSCRatch
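The documented fallback order can be modeled roughly (illustrative pseudologic with invented volume records, not server internals):

```python
def pick_volume(node, volumes, scratch_left, collocate):
    """Model of the selection above: use a scratch volume while
    MAXSCRatch allows; for filespace collocation, fall back to a
    volume already holding the node's data; otherwise take the
    least-utilized volume in the storage pool."""
    if scratch_left > 0:
        return "scratch"
    if collocate == "filespace":
        same_node = [v for v in volumes if node in v["nodes"]]
        if same_node:
            return min(same_node, key=lambda v: v["util"])["name"]
    return min(volumes, key=lambda v: v["util"])["name"]

vols = [{"name": "A", "nodes": {"n1"}, "util": 50},
        {"name": "B", "nodes": {"n2"}, "util": 10}]
print(pick_volume("n1", vols, 0, "filespace"))   # -> A
```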
Collocation and offsite volumes Collocation is usually not used for
offsite volumes. Reasons:
- Means fewer filled tapes.
- Means more tapes going offsite if just
partially filled, and whereas offsite
means trucking, that costs more.
- Offsite is for the rarity of disaster
recovery, and the cost of collocation
for offsite vols can almost never be
justified for that rarity.
See also: ANR1173E
Collocation and RESTORE Volume RESTORE Volume operations are also
affected by collocation: restoring from
a non-collocated copy storage pool into
a collocated primary storage pool can
be pretty slow because the restore
essentially has to sort all the files by
node to collocate them in the target
primary storage pool. This process
could be greatly accelerated by
restoring to a disk storage pool and
then allowing the files to migrate into
your sequential primary storage pool.
The reason for this is that files in a
disk storage pool are "presorted" in
the database to facilitate migration.
Collocation by filespace New in AIX server 2.1.5.8 - allows
collocation by file system rather than
client node. Code "COLlocate=FILespace"
in your stgpool definition. Be aware
that this will fill one tape at a time
for each of the filespaces.
If you are already collocating by node,
switching to filespace will cause that
to take effect for any new tape that is
used in the storage pool. For old tapes,
a MOVe Data or a reclaim should separate
the files belonging to different
filespaces.
Collocation "not working" If you find multiple nodes or filespaces
being collocated onto a single volume in
spite of your specifications, it can be
the natural consequence of one of:
- Running out of volumes such that *SM
has no choice but to combine;
- Your STGpool MAXSCRatch has enforced
a limit on the number of volumes used,
you exceed it, and in a logical sense
run out of volumes.
You can check volume content to see when
the non-collocation occurred and
correlate that with your Activity Log to
see what transpired at the time.
See also: MAXSCRatch
Column In a silo style tape library such as the
3583, refers to a vertical section of
the library which contains storage cells
and/or tape drives which the robotic
actuator may reach when positioned at
that arc position around the
circumference in the library.
Column attributes (TSM db table) To see the attributes for a specific
column in the TSM database, do like:
SELECT * FROM COLUMNS WHERE
COLNAME='MESSAGE' AND TABNAME='ACTLOG'
Column title In SQL reports from the TSM database,
the title above the data reported from
the requested database table columns,
underlined by hyphens. By default, the
title is the name of the column:
override via the AS parameter.
Column width in Select output See: SELECT output, column width
-COMMAdelimited dsmadmc option for reporting with output
being comma-delimited.
Contrast with -TABdelimited.
See also: -DISPLaymode
Command, define your own There is no facility in TSM for defining
your own server command, to be invoked
solely by name. The closest things are
macros and scripts, as described in the
Admin Guide.
Command, generate from SELECT See: SELECT, literal column output
Command format (HSM) Control via the OPTIONFormat option
in the Client User Options file
(dsm.opt): STANDARD for long-form, else
SHORT. Default: STANDARD
Command line, continuing Code either a hyphen (-) or backslash
(\) at the end of the line and continue
coding anywhere on the next line.
Command line, max len of args Command line arguments data cannot be
more than the ARG_MAX value in
/usr/include/sys/limits.h (AIX).
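On POSIX systems the effective limit can also be checked at run time, rather than reading the header - a quick sketch:

```python
import os

# ARG_MAX: the system's limit on the combined size of command
# line arguments plus environment, as exposed via sysconf().
arg_max = os.sysconf("SC_ARG_MAX")
print(arg_max)
```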
Command line client Refers to the command-based client, or
Command Line Interface (CLI), rather
than the window-oriented (GUI) client.
Note that the GUI is a convenience
facility: as such its performance is
inferior to that of the command line
client, and so should not be used for
time-sensitive purposes such as disaster
recovery. (So says the B/A Client
manual, under "Performing Large Restore
Operations".)
Command line editing See: Editor
Command line history See: Editor
Command line length limit, client See: dsmc command line limits
Command line mode for server cmds Start an "administrative client session"
to interact with the server from a
remote workstation, via the command:
'dsmadmc', as described
in the ADSM Administrator's Reference.
Command line recall See: Editor
Command output, capture in a file The best approach to capturing ADSM
server command output is to use the
form: "dsmadmc -OUTfile=SomeFilename___"
Alternately you can selectively redirect
the output of commands by using ' > '
and ' >> ' (redirection).
Command output, suppress Use the Client System Options file
(dsm.sys) option "Quiet".
See also: VERBOSE
Command routing ADSMv3: Command routing allows the
server that originated the command to
route the command to multiple servers
and then to collect the output from
these servers. Format:
Server1[,ServerN]: server cmd
Commands, uncommitted, roll back 'rollback'
COMMIT TSM server command used in a macro to
commit command-induced changes to the
TSM database.
Syntax: COMMIT
See also: Itemcommit
Committing database updates The Recovery Log holds uncommitted
database updates.
See: CKPT; LOGPoolsize
COMMMethod Server Options File operand specifying
one of more communications methods which
clients may use to reach the server.
Should specify at least one of:
HTTP (for Web admin client)
IPXSPX (discontinued in TSM4)
NETBIOS (discontinued in TSM4)
NONE (to block external
access to the server)
SHAREDMEM (shared memory, within a
single computer system)
SNALU6.2 (APPC - discontinued in
TSM4)
SNMP
TCPIP (the default, being TCP,
not UDP)
(Ref: Installing the Server, Chap. 5)
COMMMethod Client System Options file (dsm.sys)
option to specify the one communication
method to use to reach each server.
Should specify one of:
3270 (discontinued in TSM4)
400comm
HTTP (for Web Admin)
IPXspx
NAMEdpipe
NETBios
PWScs
SHAREdmem (shared memory, within a
single computer system)
SHMPORT
SNAlu6.2
TCPip (is TCP, not UDP)
Be sure to code it, once, on each server
stanza.
See also: Shared memory
COMMmethod server option, query 'Query OPTion'. You will see as many
"CommMethod" entries as were defined in
the server options file.
Common Programming Interface A programming interface that allows
Communications (CPIC) program-to-program communication using
SNA LU6.2. See Systems Network
Architecture Logical Unit 6.2.
Discontinued as of TSM 4.2.
COMMOpentimeout Definition in the Server Options File.
Specifies the maximum number of seconds
that the ADSM server waits for a
response from a client when trying to
initiate a conversation.
Default: 20 seconds.
Ref: Installing the Server...
COMMTimeout Definition in the Server Options File.
Specifies the communication timeout
value in seconds: how long the server
waits during a database update for an
expected message from a client.
Default: 60 seconds.
Too small a value can result in ANR0481W
session termination and ANS1005E. A
value of 3600 is much more realistic. A
large value is necessary to give the
client time to rummage around in its
file system, fill a buffer with files'
data, and finally send it - especially
for Incremental backups of large file
systems having few updates, where the
client is out of communication with the
server for large amounts of time.
If client compression is active, be sure
to allow enough time for the client to
decompress large files.
Ref: Installing the Server...
See also: IDLETimeout; SETOPT; Sparse
files, handling of, Windows
COMMTimeout server option, query 'Query OPTion'
Communication method "COMMmethod" definition in the server
options file. The method by which a
client and server exchange
information. The UNIX application client
can use the TCP/IP or SNA LU6.2
method. The Windows application client
can use the 3270, TCP/IP, NETBIOS, or
IPX/SPX method. The OS/2 application
client can use the 3270, TCP/IP, PWSCS,
SNA LU6.2, NETBIOS, IPX/SPX, or Named
Pipe method. The Novell NetWare
application client can use the IPX/SPX,
PWSCS, SNA LU6.2, or TCP/IP methods. See
IPX/SPX, Named Pipe, NETBIOS,
Programmable Workstation Communication
Service, Systems Network Architecture
Logical Unit 6.2, and Transmission
Control Protocol/Internet Protocol.
Communication protocol A set of defined interfaces that allows
computers to communicate with each
other.
Communications timeout value, define "COMMTimeout" definition in the server
options file.
Communications Wait (CommW, commwait) "Sess State" value in 'Query SEssion'
for when the server was waiting to
receive expected data from the client or
waiting for the communication layer to
accept data to be sent to the client.
An excessive value indicates a problem
in the communication layer or in the
client.
Recorded in the 23rd field of the
accounting record, and the
"Pct. Comm. Wait Last Session" field of
the 'Query Node Format=Detailed' server
command.
See also: Idle Wait; Media Wait; RecvW;
Run; SendW; Start
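Since the accounting log is comma-delimited, pulling field 23 out of a record is simple - the sample record below is fabricated purely to exercise the field numbering:

```python
def comm_wait_seconds(record):
    """Return field 23 of a comma-delimited accounting record
    (1-based field numbering, as in the entry above)."""
    return int(record.split(",")[22])

# Fabricated 30-field record: field N simply holds the value N.
sample = ",".join(str(n) for n in range(1, 31))
print(comm_wait_seconds(sample))   # -> 23
```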
CommW See: Communications Wait
commwait See: Communications Wait
Competing products ARCserve; Veritas; www.redisafe.com;
www.graphiumsoftware.com
Compile Time (Compile Time API) Refers to a compiled application, which
may employ a Run Time API (q.v.).
The term "Compile Time API" may be
employed with a TDP, which is a
middleware application which employs
both the TDP subject API (database,
mail, etc.) plus the TSM API.
Compress files sent from client to Can be defined via COMPRESSIon option
server? in the dsm.sys Client System Options
file. Specifying "Yes" causes *SM to
compress files before sending them to
the *SM server. Worth doing if you
have a fast client processor.
COMPRESSAlways Client User Options file (dsm.opt)
option to specify handling of a file
which *grows* during compression.
(COMPRESSIon option must be set for this
option to come into play.)
Default: v2: No, do not send the object
if it grows during compression. v3: Yes,
do send if it grows during compression.
Notes: Specifying No can result in
wasted processing... The TXNGroupmax and
TXNBytelimit options govern transaction
size, and if a file grows in compression
when COMPRESSAlways=No, the whole
transaction and all the files involved
within it must be processed again,
without compression. This will show up
in the "Objects compressed by:" backup
statistics number being negative (like
"-29%").
Messages: ANS1310E; ANS1329S
See also IBM site TechNote 1156827.
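The negative "Objects compressed by:" figure can be reproduced
arithmetically. A minimal sketch (the percentage formula here is an
assumption inferred from the reported numbers, not taken from TSM
documentation):

```python
# Hypothetical illustration of how "Objects compressed by:" can go
# negative: the percentage is assumed to be the relative size
# reduction, so data that grows under compression yields a value < 0.
def compressed_by_pct(original_bytes, sent_bytes):
    """Percent reduction achieved by compression (negative = growth)."""
    return (1 - sent_bytes / original_bytes) * 100

# A file that shrank: positive percentage.
print(round(compressed_by_pct(1000, 600)))   # 40
# A file that grew during compression: negative, e.g. "-29%".
print(round(compressed_by_pct(1000, 1290)))  # -29
```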
Compression Refers to data compression, the primary
objective being to save storage pool
space, and secondarily data transfer
time. TSM compression is governed
according to REGister Node settings,
client option settings (COMPRESSIon),
and Devclass Format. Object attributes
may also specify that the data has
already been compressed such that TSM
will not attempt to compress it further.
Drives: Either client compression or
drive compression should be used, but
not both, as the compression operation
at the drive may actually cause the data
to expand.
EXCLUDE.COMPRESSION can be used to
defeat compression for certain files
during Archive and Backup processing.
Ref: TSM Admin Guide, "Using Data
Compression"
See also: File size
COMPression= Operand of REGister Node to control
client data compression:
No The client may not compress data
sent to the server - regardless
of client options.
Each client session will show:
"Data compression forced off by
the server" in the headings,
just under the Server Version
line of the client log.
Yes The client must always compress
data sent to the server -
regardless of client options.
Each client session will show:
"Data compression forced on by
the server" in the headings,
just under the Server Version
line of the client log.
Client The client may choose whether or
not to compress data sent to the
server, via client options.
Default: COMPression=Client
COMPRESSIon (client compression) Client System Options file (dsm.sys)
option. Code in a server stanza.
Specifying "Yes" causes *SM to compress
files before sending them to the TSM
server, during Backup and Archive
operations, for storage as given - if
the server allows the client to make a
choice about compression, via
"COMPRESSIon=Client" in 'REGister Node'.
Conversely, the client has to uncompress
the files in a restoral or retrieval.
(The need for the client to decompress
the data coming back from the server is
implicit in the data, and thus is
independent of any client option.)
Worth considering if you have a fast
client processor and the storage device
does not do hardware compression (most
tape drives do). Compression increases
data communication throughput and takes
less space if the destination storage
pool is Disk - but less desirable if the
storage pool is tape, in that the tape
drive is better for doing compression,
in hardware.
Beware: if the file expands during
compression then TSM will restart the
entire transaction - which could involve
resending other files, per the
TXNGroupmax / TXNBytelimit values. The
slower your client, the longer it takes
to compress the file, and thus the
longer the exposure to this possibility.
Check at client by doing:
'dsmc Query Option' for ADSM or
'dsmc show options' for TSM.
The dsmc summary will contain the extra
line:
"Compression percent reduction:", which
is not present without compression.
Note that during the operation the
progress dots will be fewer and slower
than if not using compression.
With "COMPRESSIon Yes", the server
COMMTimeout option becomes more
important - particularly with large
files - as the client takes considerable
time doing decompression.
How long does compression take? One way
to get a sense of it is to, outside of
TSM, compress a copy of a typical, large
file that is involved in your backups,
performing the compression with a
utility like gzip.
Where the client options call for both
compression and encryption, compression
is reportedly performed before
encryption - which makes sense, as
encrypted data is effectively binary
data, which would either see little
compression, or even expansion. And,
encryption means data secured by a key,
so it further makes sense to prohibit
any access to the data file if you do
not first have the key.
See also: Sparse files, handling of,
Windows
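Such a timing test outside of TSM might look like the following sketch,
which uses Python's gzip module in place of the gzip utility; the file
name is a throwaway placeholder:

```python
import gzip
import os
import time

def time_gzip(path, level=6):
    """Compress a file in memory; report (seconds, percent reduction)."""
    data = open(path, "rb").read()
    start = time.perf_counter()
    packed = gzip.compress(data, compresslevel=level)
    elapsed = time.perf_counter() - start
    reduction = (1 - len(packed) / len(data)) * 100
    return elapsed, reduction

# Example with a throwaway file of repetitive (highly compressible) data.
with open("sample.dat", "wb") as f:
    f.write(b"backup me " * 100_000)
secs, pct = time_gzip("sample.dat")
print(f"{secs:.3f}s, {pct:.1f}% reduction")
os.remove("sample.dat")
```

Run it against a copy of a representative large file from your backups
to get a feel for the client-side cost of compression.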
Compression, by tape drive Once the writing of a tape has begun
with or without compression, that method
will persist for the remainder of the
tape. Changing Devclass FORMAT
will affect only newly used tapes.
Compression, client, control methods Client compression may be controlled by
several means:
- Client option file spec.
- Client Option Set in the server.
(Do 'dsmc query options' to see what's
in effect, per options file and
server side Option Set.)
- Mandated in the server definition of
that client node.
If compression is in effect by any of
the above methods, it will be reflected
in the statistics at the end of a Backup
session ("Objects compressed by:").
Compression algorithm, client           Is Lempel-Ziv (LZ1), the same as that
                                        used in pkzip, MVS HAC, and most likely
                                        unix as well. And, yes, the data will
                                        normally grow when compressed for a
                                        second time, as when a client defined
                                        with COMPRESSAlways=Yes backs up an
                                        already compressed file.
Per the 3590 Intro and Planning Guide:
"Data Compression is not recommended for
encrypted data. Compressing encrypted
data may reduce the effective tape
capacity." This would seem to say that
any tough binary data, like
pre-compressed data from a *SM client,
would expand rather than compress, due
to the expectations and limitations of
the algorithm.
Compression being done by client node? Controlled by the COMPression parameter
(before it sends files to server for on the 'REGister Node' and 'UPDate Node'
backup and archive) commands.
Default: Client (it determines whether
to compress files).
Query from ADSM server:
'Query Node Format=Detailed'.
"Yes" means that it will always compress
files sent to server; "No" means that it
won't.
Query from client:
'dsmc Query Option' for ADSM, or
'dsmc show options' for TSM
look for "Compression".
Is also seen in result from client
backup and archive, in "Objects
compressed by:" line at end of job.
Compression being done by *SM server Controlled via the DEVclass "FORMAT"
on 3590 tape drives? operand.
Compression being done by tape drive? Most tape drives can perform hardware
compression of data. (The 3590 can.)
Find out via the AIX command:
'/usr/sbin/lsattr -E -l rmt1'
where "rmt1" is a sample tape drive
name. TSM will set compression according
to your DEVclass FORMAT=____ value.
You can use SMIT to permanently
change this, or do explicit:
'chdev -l rmt1 compress=yes|no'.
You can also use the "compress" and
"nocompress" keywords in the 'tapeutil'
or 'ntutil' command to turn compression
on and off for subsequent *util
operations (only).
Configuration file An optional file pointed to by your
application that can contain the same
options that are found in the client
options file (for non-UNIX platforms) or
in the client user options file and
client system options file (for UNIX
platforms). If your application points
to a configuration file and values are
defined for options, then the values
specified in the configuration file
override any value set in the client
options files.
Configuration Manager See: Enterprise Configuration and Policy
Management
Connect Agents Commercial implementations of the ADSM
API to provide high-performance,
integrated, online backups and restores
of industry-leading databases.
TSM renamed them to "Data Protection"
(agents) (q.v.).
See http://www.storage.ibm.com/
software/adsm/addbase.htm
Console mode See: -CONsolemode; Remote console
-CONsolemode Command-line option for ADSM
administrative client commands
('dsmadmc', etc.) to see all unsolicited
server console output. Sometimes
referred to as "remote console".
Results in a display-only session (no
input prompt - you cannot enter
commands). And unlike the Activity Log,
no date-timestamps lead each line.
Start an "administrative client session"
via the command: 'dsmadmc -CONsolemode'.
To have Operations monitor ADSM,
consider setting up a "monitor" admin ID
and a shell script which would invoke
something to the effect of:
'dsmadmc -ID=monitor -CONsolemode
-OUTfile=/var/log/ADSMmonitor.YYYYMMDD'
and thus see and log events.
Note that ADSM administrator commands
cannot be issued in Console Mode.
See also: dsmadmc; -MOUNTmode
Ref: Administrator's Reference
Consumer session In Backup, the session which actually
performs the data backup. (To use an FTP
analogy, this is the "data channel".)
Sometimes called the "data thread".
In accounting records, Consumer sessions
may be distinguished from their related
Producer sessions only by virtue of
fields 16 and 17 being zero in Producer
sessions.
Contrast with: Producer session
See also: RESOURceutilization
Contemporary Cybernetics 8mm drives 8510 is dual density (2.2gig and 5gig).
(That brand was subsumed by Exabyte: see
http://www.exabyte.com/home/
products.html for models.)
Content Manager CommonStore CommonStore seamlessly integrates SAP
R/3 and Lotus Domino with leading IBM
archive systems such as IBM Content
Manager, IBM Content Manager OnDemand,
or TSM. The solution supports the
archiving of virtually any kind of
business information, including old,
inactive data, e-mail documents, scanned
images, faxes, computer printed output
and business files. You can offload,
archive, and e-mail documents from your
existing Lotus Notes databases onto
long-term archive systems. You can also
accomplish a fully auditable document
management system with your Lotus Notes
client.
http://www.ibm.com/software/data/
commonstore/
CONTENTS (SQL) The *SM database table which is the
entirety of all filespace data. (As
such, Select queries against this table
are quite expensive.) Along with
Archives and Backups tables, constitutes
the bulk of the *SM database contents.
Columns:
VOLUME_NAME, NODE_NAME (upper case),
TYPE (Bkup, Arch, SpMg), FILESPACE_NAME
(/fs), FILE_NAME (/subdir/ name),
AGGREGATED (n/N), FILE_SIZE,
SEGMENT (n/N), CACHED (Yes/No)
Whereas the Backups table records a
single instance of the backed up file,
the Contents table records the primary
storage pool instance plus all copy
storage pool instances.
Note that no timestamp is available for
the file objects: that info can be
obtained from the Backups table. But a
major problem with the Contents is the
absence of anything to uniquely identify
the instance of its FILE_NAME, to be
able to correlate with the corresponding
entry in the Backups table, as would be
possible if the Contents table carried
the OBJECT_ID. The best you can do is
try to bracket the files by creation
timestamp as compares with the volume
DATE_TIME column from the Volhistory
table and the LAST_WRITE_DATE from the
Volumes table.
See also: BACKUPS; Query CONtent
Continuation and quoting Specifying things in quotes can always
get confusing...
When you need to convey an object name
which contains blanks, you must enclose
it in quotes. Further, you must nest
quotes in cases where you need to use
quotes not just to convey the object to
*SM, but to have an enclosing set of
quotes stored along with the name. This
is particularly true with the OBJECTS
parameter of the DEFine SCHedule command
for client schedules. In its case,
quoted names need to have enclosing
double-quotes stored with them; and you
convey that composite to *SM with single
quotes. Doing this correctly is simple
if you just consider how the composite
has to end up...
Wrong: OBJECTS='"Object 1"'-
'"Object 2"'
Right: OBJECTS='"Object 1" '-
'"Object 2"'
That is, the composite must end up being
stored as: "Object 1" "Object 2"
for feeding to and proper processing by
the client command. The Wrong form
would result in: "Object 1""Object 2"
mooshing, which when illustrated this
way is obviously wrong. The Wrong form
can result in a ANS1102E error.
Ref: "Using Continuation Characters" in
the Admin Ref.
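The mooshing effect can be seen with any shell-style tokenizer. A
sketch using Python's shlex module (a stand-in for the client's own
parsing, not the actual *SM code path):

```python
import shlex

# The "Right" composite: a space between the quoted names keeps
# the two objects distinct when the string is tokenized.
right = '"Object 1" "Object 2"'
print(shlex.split(right))   # ['Object 1', 'Object 2']

# The "Wrong" composite: with no intervening space, the adjacent
# quoted strings fuse into a single (nonexistent) object name.
wrong = '"Object 1""Object 2"'
print(shlex.split(wrong))   # ['Object 1Object 2']
```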
Continuing server command lines Code either a hyphen (-) or backslash
(continuation)                          (\) at the end of the line and continue
coding anywhere on the next line.
Continuing client options Lines in the Client System Options File
(continuation) and Client User Options File are not
continued per se: instead, you re-code
the option on successive lines. For
example, the DOMain option usually
entails a lot of file system names; so
code a comfortable number of file system
names on each line, as in:
DOMain /FileSystemName1, ...
DOMain /FileSystemName7, ...
Control Session See: Producer session
Count() SQL function to calculate the number of
records returned by a query.
Note that this differs from Sum(), which
computes a sum from the contents of a
column.
Convenience Eject category 3494 Library Manager category code FF10
for a tape volume to be ejected via the
Convenience I/O Station. After the
volume has been so ejected its volser
is deleted from the inventory.
Convenience Input-Output Station 3494 hardware feature which provides 10
(Convenience I/O) access slots in the door for inputting
cartridges to the 3494 or receiving
cartridges from it. May also be used
for the transient mounting of tapes for
immediate processing, not to become part
of the repository.
The Convenience I/O Station is just a
basic pass-through area, and should not
be confused with the more sophisticated
Automatic Cartridge Facility magazine
available for the 3590 tape drive.
We find that it takes some 2 minutes, 40
seconds for the robot to take 10 tapes
from the I/O station and store them into
cells.
When cartridges have been inserted from
the outside by an operator, the Operator
Panel light "Input Mode" is lit. It
changes to unlit as soon as the robot
takes the last cartridge from the
station.
When cartridges have been inserted from
the inside by the robot, the Operator
Panel light "Output Mode" is lit.
The Operator Station System Summary
display shows "Convenience I/O: Volumes
present" for as long as there are
cartridges in the station.
See also the related High Capacity
Output Facility.
Convenience I/O Station, count of See: 3494, count of cartridges in
cartridges in Convenience I/O Station
CONVert Archive TSM4.2 server command to be run once on
each node to improve the efficiency of a
command line or API client query of
archive files and directories using the
Description option, where many files may
have the same description. Previously,
an API client could not perform an
efficient query at all and a Version 3.1
or later command line client could
perform such a query only if the node
had signed onto the server from a GUI at
least once.
Syntax:
CONVert Archive NodeName Wait=No|Yes
Msgs: ANR0911I
COPied COPied=ANY|Yes|No
Operand of 'Query CONtent' command, to
specify whether to restrict query output
either to files that are backed up to a
copy storage pool (Yes) or to files that
are not backed up to a copy storage
pool (No).
Copy Group A policy object assigned to a
Management Class specifying attributes
which control the generation,
destination, and expiration of backup
versions of files and archived copies of
files. It is the Copy Group which
defines the destination Storage Pools to
use for Backup and Archive.
ADSM Copygroup names are always
"STANDARD": you cannot assign names,
which is conceptually pointless anyway
in that there can only be one copygroup
of a given type assigned to a management
class. 'Query Mgm' does not reveal the
Copygroups within the management class,
unfortunately: you have to do
'Query COpygroup'.
Note that Copy Groups are used only with
Backup and Archive. HSM does not use
them: instead, its Storage Pool is
defined via the MGmtclass attribute
"MIGDESTination".
See "Archive Copy Group" and "Backup
Copy Group".
Copy group, Archive type, define See: DEFine COpygroup, archive type
Copy group, Backup type, define See: DEFine COpygroup, backup type
Copy group, Archive, query 'Query COpygroup [CopyGroupName]
(defaults to Backup type copy group) Type=Archive'
Copy group, Backup, query 'Query COpygroup [CopyGroupName]
(defaults to Backup type copy group) [Type=Backup]'
Copy group, delete 'DELete COpygroup DomainName PolicySet
MgmtClass [Type=Backup|Archive]'
Copy group, query 'Query COpygroup [CopyGroupName]'
(defaults to Backup type copy group)
COPy MGmtclass Server command to copy a management
class within a policy set. (But a
management class cannot be copied across
policy domains or policy sets.) Syntax:
'COPy MGmtclass DomainName SetName
FromClass ToClass'
Then use 'UPDate MGmtclass' and other
UPDate commands to tailor the copy.
Note that the new name does not make
it into the Active policy set until
you do an ACTivate POlicyset.
Copy Mode See: Backup Copy Group
Copy Storage Pool A special storage pool, consisting of
serial volumes (tapes) whose purpose is
to provide space to have a surety backup
of one or more levels in a standard
Storage Pool hierarchy. The Copy
Storage Pool is employed via the
'BAckup STGpool' command (q.v.).
There cannot be a hierarchy of Copy
Storage Pools, as can be the case with
Primary Storage Pools.
Be aware that making such a Copy results
in that much more file information being
tracked in the database...about 200
bytes for each file copy in a Copy
Storage Pool, which is added to the
file's existing database entry rather
than creating a separate entry.
Copy Storage Pools are typically not
collocated because it would mean a mount
for every collocated node or file
system, which could be a lot.
Note that there is no way to readily
migrate copy storage pool data, as for
example when you want to move to a new
tape technology and want to
transparently move (rather than copy)
the current data.
Ref: Admin Guide topic Estimating and
Monitoring Database and Recovery Log
Space Requirements
Copy Storage Pool, define See: DEFine STGpool (copy)
Copy Storage Pool, delete node data You cannot directly delete a node's data
from a copy storage pool; but you can
circuitously effect it by using MOVe
NODEdata to shift the node's data to
separate tapes in the copy stgpool
(temporarily changing the stgpool to
COLlocate=Yes), and then doing DELete
Volume on the newly written volumes.
Copy storage pool, files not in Invoke 'Query CONtent' command with
COPied=No to detect files which are not
yet in a copy storage pool.
Copy Storage Pool, moving data You don't: if you move the primary
storage pool data to another location
you should have done a 'BAckup STGpool'
which will create a content-equivalent
area, whereafter you can delete the
volumes in the old Copy Storage Pool and
then delete the old Copy Storage Pool.
Note that neither the 'MOVe Data'
command nor the 'MOVe NODEdata' command
will move data from one Copy Storage
Pool to another.
Copy Storage Pool, restore files Yes, if the primary storage pool is
directly from unavailable or one of its volumes is
destroyed, data can be obtained directly
from the copy storage pool
Ref: TSM Admin Guide chapter 8,
introducing the Copy Storage Pool:
...when a client attempts to retrieve a
file and the server detects an error in
the file copy in the primary storage
pool, the server marks the file as
damaged. At the next attempt to access
the file, the server obtains the file
from a copy storage pool.
Ref: TSM Admin Guide, chapter
Protecting and Recovering Your Server,
Storage Pool Protection: An Overview...
"If data is lost or damaged, you can
restore individual volumes or entire
storage pools from the copy storage
pools. TSM tries to access the file from
a copy storage pool if the primary copy
of the file cannot be obtained for one
of the following reasons:
- The primary file copy has been
previously marked damaged.
- The primary file is stored on a
volume that is UNAVailable or
DEStroyed.
- The primary file is stored on an
offline volume.
- The primary file is located in a
storage pool that is UNAVailable, and
the operation is for restore,
retrieve, or recall of files to a
user, or export of file data."
Copy Storage Pool, restore volume from 'RESTORE Volume ...'
Copy Storage Pool & disaster recovery The Copy Storage Pool is a secondary
recovery vehicle after the Primary
Storage Pool, and so the Copy Storage
Pool is rarely collocated for optimal
recovery as the Primary pool often is.
This makes for a big contention problem
in disaster recovery, as each volume may
be in demand by multiple restoral
processes due to client data
intermingling. A somewhat devious
approach to this problem is to define
the Devclass for the Copy Storage Pool
with a FORMAT which disables data
compression by the tape drive, thus
using more tapes, and hence reducing the
possibility of collision. Consider
employing multiple management classes
and primary storage pools with their own
backup storage pools to distribute data
and prevent contention at restoral time.
If you have both high and low density
drives in your library, use the lows for
the Copy Storage Pool. Or maybe you
could use a Virtual Tape Server, which
implicitly stages tape data to disk.
Copy Storage Pool volume damaged If a volume in a Copy Storage Pool has
been damaged - but is not fully
destroyed - try doing a Move Data first
in rebuilding the data, rather than just
deleting the volume and doing a fresh
BAckup STGpool. Why? If you did the
above and then found the primary storage
pool volume also bad, you would have
unwittingly deleted your only copies of
the data, which could have been
retrieved from that partially readable
copy storage pool volume. So it is most
prudent to preserve as much as possible
first, before proceeding to try to
recreate the remainder.
Copy Storage Pool volume destroyed If a volume in a Copy Storage Pool has
been destroyed, the only reasonable
action is to make this known to ADSM by
doing 'DELete Volume' and then do a
fresh 'BAckup STGpool' to effectively
recreate its contents on another volume.
(Note that Copy Storage Pool volumes
cannot be marked DEStroyed.)
Copy Storage Pools current? The Auditocc SQL table allows you to
quickly determine if your Copy Storage
Pools have all the data in the Primary
Storage Pools, by comparing:
BACKUP_MB to BACKUP_COPY_MB
ARCHIVE_MB to ARCHIVE_COPY_MB
SPACEMG_MB to SPACEMG_COPY_MB
If the COPY value is higher, it
indicates that you have the same data in
multiple Copy Storage Pools, as in an
offsite pool.
COPY_TYPE Column in VOLUMEUSAGE SQL table denoting
the types of files: BACKUP, ARCHIVE,
etc.
COPYContinue DEFine/UPDate STGpool operand for how
the server should react when
COPYSTGpools is in effect and an error
is encountered in generating the copy
storage pool image. The default is Yes,
to continue copying, but not to the
problem copy storage pool, for the
duration of that client backup session.
A new session will begin with no prior
state information about previous
problems.
Note that this option may be useless
with TDPs, which don't retry
transactions.
Msgs: ANR4737E
Copygroup See: Copy Group
COPYSTGpools TSM 5.1+ feature providing the
possibility to simultaneously store a
client's files into each copy storage
pool specified for the primary storage
pool where the clients files are
written. The simultaneous write to the
copy pools only takes place during
backup or archive from the client. In
other words, when the data enters the
storage pool hierarchy. It does not take
place during data migration from an HSM
client nor on a LAN free backup from a
Storage Agent. Naturally, if your
storage pools are on tape, you will need
a tape drive for the primary storage
pool action and the copy storage pool
action: 2 drives. Your mount point usage
values must accommodate this.
Maximum length of the copy pool name:
30 chars
Maximum number of copy pool names:
10, separated by commas (no intervening
spaces)
This option is restricted to only
primary storage pools using NATIVE or
NONBLOCK data format.
The COPYContinue parameter may also be
specified to further govern operation.
Note: The function provided by
COPYSTGpools is not intended to replace
the BACKUP STGPOOL command. If you use
the COPYSTGpools parameter, continue to
use BACKUP STGPOOL to ensure that the
copy storage pools are complete copies
of the primary storage pool. There are
cases when a copy may not be created.
COUNT(*) SQL statement to yield the number of
rows satisfying a given condition: the
number of occurrences. There should
be as many elements to the left of the
count specification as there are
specified after the GROUP BY, else you
will encounter a logical specification
error. Example:
SELECT OWNER,COUNT(*) AS
"Number of files" FROM ARCHIVES
GROUP BY OWNER
SELECT NODE_NAME,OWNER,COUNT(*) AS
"Number of files" FROM ARCHIVES
GROUP BY NODE_NAME,OWNER
See also: AVG; MAX; MIN; SUM
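The same shape of query can be tried outside the *SM server. A sketch
against SQLite with made-up rows (a stand-in for the real ARCHIVES
table, illustrating the GROUP BY rule only):

```python
import sqlite3

# Stand-in for the ARCHIVES table, populated with made-up rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE archives (node_name TEXT, owner TEXT)")
con.executemany(
    "INSERT INTO archives VALUES (?, ?)",
    [("ALPHA", "root"), ("ALPHA", "root"), ("ALPHA", "rbs"),
     ("BETA", "root")],
)

# One result row per GROUP BY combination, as in the *SM examples:
# every selected non-aggregate column must appear in the GROUP BY.
for row in con.execute(
    "SELECT node_name, owner, COUNT(*) FROM archives "
    "GROUP BY node_name, owner ORDER BY node_name, owner"
):
    print(row)
# ('ALPHA', 'rbs', 1)
# ('ALPHA', 'root', 2)
# ('BETA', 'root', 1)
```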
COUrier DRM media state for volumes containing
valid data and which are in the hands of
a courier, going offsite. Their next
state should be VAULT.
See also: COURIERRetrieve; MOuntable;
NOTMOuntable; VAult; VAULTRetrieve
COURIERRetrieve DRM media state for volumes empty of
data, which are being retrieved by a
courier. Their next state should be
ONSITERetrieve.
See also: COUrier; MOuntable;
NOTMOuntable; VAult; VAULTRetrieve
CPIC Common Programming Interface
Communications.
.cpp Name suffix seen in some messages.
Refers to a C++ programming language
source module.
CRC Cyclic Redundancy Checking. Available as
of TSM 5.1: provides the option of
specifying whether a cyclic redundancy
check (CRC) is performed during a client
session with the server, or for storage
pools. The server validates the data by
using a cyclic redundancy check which
can help identify data corruption.
The CRC values are validated when AUDit
Volume is performed and during
restore/retrieve processing, but not
during other types of data movement
(e.g., migration, reclamation, BAckup
STGpool, MOVe Data).
It is important to realize that the CRC
values are stored when the data first
enters TSM, via Backup or Archive, to be
stored in a storage pool which has
CRCdata specified. The CRC info is
thereby stored with the data and is
associated with it for the life of that
data in the TSM server, and will move
with the data even if the data is moved
to a storage pool where CRC recording is
not in effect. Likewise, if data was
not originally stored with CRC, it will
not attain CRC if moved into a CRCed
storage pool.
The Unix 'sum' command performs similar
CRC processing.
Activated: VALIdateprotocol of DEFine
SERver; CRCData operand of DEFine
STGpool; REGister Node VALIdateprotocol
operand;
Verified: "Validate Protocol" value in
Query SERver; "Validate Data?" value in
Query STGpool
Ref: IBM site TechNote 1143615
See: VALIdateprotocol
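The principle behind storage-pool CRC checking can be sketched with a
generic CRC function. Python's zlib.crc32 is used here; it is not
necessarily the polynomial TSM itself uses:

```python
import zlib

# Record a checksum when the data "enters" storage...
data = bytearray(b"client file content destined for a storage pool")
stored_crc = zlib.crc32(data)

# ...later, validation recomputes and compares, much as AUDit Volume
# or restore/retrieve processing does against the stored value.
assert zlib.crc32(data) == stored_crc          # intact data passes

data[5] ^= 0x01                                # flip a single bit
assert zlib.crc32(data) != stored_crc          # corruption is detected
print("single-bit corruption detected")
```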
Cristie Bare Machine Recovery IBM-sponsored complementary product for
TSM: A complete system recovery solution
that allows a machine complete recovery
from normal TSM backups.
http://www.ibm.com/software/tivoli/
products/storage-mgr/cristie-bmr.html
Cross-client restoral See: Restore across clients
Cross-node restoral See: Restore across clients
CSQryPending Verb type as seen in ANR0444W message.
Reflects client-server query for
pending scheduled tasks.
CST See: Cartridge System Tape
See also: ECCST; HPCT; Media Type
CST-2 Designation for 3490E (q.v.).
Ctime and backups The "inode change time" value (ctime)
reflects when some administrative action
was performed on a file, as in chown,
chgrp, and like operations. When ADSM
Backup sees that the ctime value has
changed, it will back up the file again.
This can be problematic for HSM-managed
files, in that such backup requires
copying from tape to tape, and there may
be too few drives available during the
height of nightly backups, which could
cause the backup to fail then. So try
to avoid mass chgrp and like operations
on HSM-managed files.
CURRENT_DATE SQL: Should be the current date, like
"2001-09-01". But in ADSM 3.1.2.50,
the month number was one more than it
should be.
Examples:
SELECT CURRENT_DATE FROM LOG
SELECT * FROM ACTLOG WHERE
DATE(DATE_TIME)=CURRENT_DATE
See also: Set SQLDATETIMEformat
CURRENT_TIME SQL: The current time, like HH:MM:SS
format.
See also: Set SQLDATETIMEformat
CURRENT_TIMESTAMP SQL: The current date and time, like
YYYY-MM-DD HH:MM:SS or YYYYMMDDHHMMSS.
See also: Set SQLDATETIMEformat
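These special registers are part of standard SQL, so their general
shapes can be inspected with any SQL engine. A sketch using SQLite
(the exact punctuation at a given *SM level may differ, per Set
SQLDATETIMEformat):

```python
import re
import sqlite3

con = sqlite3.connect(":memory:")
(date, time_, stamp), = con.execute(
    "SELECT CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP"
)
print(date, time_, stamp)

# SQLite renders them in the same general shapes described here:
# YYYY-MM-DD, HH:MM:SS, and YYYY-MM-DD HH:MM:SS respectively.
assert re.fullmatch(r"\d{4}-\d{2}-\d{2}", date)
assert re.fullmatch(r"\d{2}:\d{2}:\d{2}", time_)
assert re.fullmatch(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", stamp)
```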
CURRENT_USER SQL: Your administrator userid, in upper
case.
D2D                                     Colloquialism for Disk-to-Disk, as in a
                                        disk backup scheme where the back store
                                        is disk rather than tape. See: DISK
D2D backup Really an ordinary backup, where the
TSM server primary storage pool is of
random access devtype DISK rather than
serial access FILE or one of the
various tape drive types.
See also: DISK
D2T Colloquialism for Disk-to-Tape, as in a
disk backup scheme where the back store
is tape - the traditional backup medium.
Damaged files These are files in which the server
found errors when a user attempted to
restore, retrieve, or recall the file;
or when an 'AUDit Volume' is run, with
resulting Activity Log message like:
"ANR2314I Audit volume process ended for
volume 000185; 1 files inspected, 0
damaged files deleted, 1 damaged files
marked as damaged."
TSM knows when there is a copy of the
file in the Backup Storage Pool, from
which you may recover the file via
'RESTORE Volume', if not
'RESTORE STGpool'.
If the client attempts to retrieve a
damaged file, the TSM server knows that
the file may instead be obtained from
the copy stgpool and so goes there.
The marking of a file as Damaged will
not cause the next client backup to
again back up the file, given that the
supposed damage may simply be a dirty
tape drive. Doing an AUDit Volume
Fix=Yes on a primary storage pool volume
may cause the file to be deleted
therefrom, and the next backup to store
a fresh copy of the file into that
storage pool.
Msgs: ANR0548W; ANR1167E
See also: Destroyed
Damaged files, list from server 'Query CONtent VolName ...
DAmaged=Yes'
(Interestingly, there is no "Damaged"
column available to customers in the
Contents table in the TSM SQL database.)
DAT Digital Audio Tape, a 4mm format which,
like 8mm, has been exploited for data
backup use. It is a relatively fragile
medium, intended more for convenience
than continuous use.
Note that *SM Devclass refers to this
device type as "4MM" rather than "DAT".
A DDS cartridge should be retired after
2000 passes, or 100 full backups. A DDS
drive should be cleaned every 24 hours
of use, with a DDS cleaning cartridge.
Head clogging is relatively common.
Recording formats:
DDS2 and DDS3 (Digital Data Storage).
DDS2 - for DDS2 format without
compression
DDS2C - for DDS2 with hardware
compression
DDS3 - for DDS3 format without
compression
DDS3C - for DDS3 format with hardware
compression
Data access control mode One of four execution modes provided by
the 'dsmmode' command. Execution modes
allow you to change the space management
related behavior of commands that run
under dsmmode. The data access control
mode controls whether a command can
access a migrated file, sees a migrated
file as zero-length, or receives an
input/output error if it attempts to
access a migrated file. See also
execution mode.
Data channel In a client Backup session, the part of
the session which actually performs the
data backup.
Contrast with: Producer session
See: Consumer session
Data mover A named device that accepts a request
from TSM to transfer data and can be
used to perform outboard copy
operations. As used with Network
Addressable Storage (NAS) file server.
Related: REGISTER NODE TYPE=NAS
Data ONTAP Microkernel operating system in NetApp
systems.
Data Protection Agents Tivoli name for the Connect Agents that
were part of ADSM. More common name:
TDP (Tivoli Data Protection). The TDPs
are specialized programs based upon the
TSM API to back up a specialized object,
such as a commercial database, like
Oracle. As such, the TDPs typically also
employ an application API so as to
mingle within an active database, for
example.
You can download the TDP software from
the TSM web site, but you additionally
need a license and license file for the
software to work.
See also: TDP
Data session See: Consumer session
Data thread In a client Backup session, the part of
the session which actually performs the
data backup.
Contrast with: Producer session
See: Consumer session
Data transfer time Statistic in a Backup report: the total
time TSM requires to transfer data
across the network. Transfer statistics
may not match the file statistics if the
operation was retried due to a
communications failure or session loss.
The transfer statistics display the
bytes attempted to be transferred across
all command attempts.
Beware that if this value is too small
(as when sending a small amount of data)
then the resulting Network Data Transfer
Rate will be skewed, reporting a higher
number than the theoretical maximum.
Look instead to the Elapsed time, to
compute sustained throughput.
Ref: Backup/Archive Client manual,
"Displaying Backup Processing Status".
Database The TSM Database is a proprietary
database, governing all server
operations and containing a catalog of
all stored file system objects. All data
storage operations effectively go
through the database.
The TSM Database contains:
- All the administrative definitions and
client passwords;
- The Activity Log;
- The catalog of all the file system
objects stored in storage pools on
behalf of the clients;
- The names of storage pool volumes;
- In a No Query Restore, the list of
files to participate in the restoral;
- Digital signatures as used in subfile
backups.
Named in dsmserv.dsk, as used when the
server starts. (See "dsmserv.dsk".)
Customers may perform database queries
via the SELECT command (q.v.) and via
the ODBC interface. There is indexing.
The TSM database is dedicated to the
purposes of TSM operation. It is not a
general purpose database for arbitrary
use, and there is no provided means for
adding or thereafter updating arbitrary
data.
Why a proprietary db, and not something
like DB2? Well, in the early days of
ADSM, DB2's platform support was
limited, so this product-specific,
universal database was developed. It is
also the case that this db is optimized
for storage management operations in
terms of schema and locking. But the
problem with the old ADSM db is that is
is very limited in features, and so a
DB2 approach is being re-examined.
See also: Database, space taken for
files; DEFine SPACETrigger; ODBC; Select
Database, back up Perform via ADSM server command
'BAckup DB' (q.v.).
To back up to a 3590 tape in the 3494,
choose a tape which is not already
defined to a storage pool.
Note that there is no query command to
later directly reveal which tape a
database backup was written to: you have
to do 'Query VOLHistory Type=DBBackup'.
Database, back up unconventionally An unorthodox approach for supporting
point-in-time restorals of the ADSM
database that came to mind would be to
employ standard *SM database mirroring
and at an appointed time do a Vary Off
of the database volume(s), which can
then be image-copied to tape, or even be
left as-is, with a replacement disk area
put into place (Vary On) rotationally.
In this way you would never have to do a
Backup DB again.
Database, back up to a scratch 3590 Perform like the following example:
tape in the 3494 'BAckup DB DEVclass=OURLIBR.DEVC_3590
Type=Full'
Database, back up to a specific 3590 Perform like the following example:
tape in the 3494 'BAckup DB DEVclass=OURLIBR.DEVC_3590
Type=Full VOLumenames=000049
Scratch=No'
Database, "compress" See: dsmserv UNLOADDB (TSM 3.7)
Database, content and compression The TSM Server database has a b-tree
organization with internal references to
index nodes and siblings. The database
grows sequentially from the beginning to
end, and pages that are deleted
internally are re-used later when new
information is added. The only utility
that can compress the database so that
"gaps" of deleted pages are not present
is the database dump/load utility.
After extensive database deletions,
due to expiration processing or
filespace/volume delete processing,
pages in the midst of the database space
may become free, but pages closer to the
beginning or end of the database still
allocated. To reduce the size of your
database, sufficient free pages must
exist at the end of the linear database
space that is allocated over your
database volumes. A database dump
followed by a load will remove free
pages from the beginning of the
database space to minimize free space
fragmentation and may allow the
database size to be reduced.
Database, convert second primary 'REDuce DB Nmegabytes'
volume to volume copy (mirror) 'DELete DBVolume 2ndVolName'
'DEFine DBCopy 1stVolName 2ndVolName'
Database, create 'dsmfmt -db /adsm/DB_Name Num_MB'
where the final number is the desired
size for the database, in megabytes, and
is best defined in 4MB units, in that
1 MB more (the LVM Fixed Area, as seen
with SHow LVMFA) will be added for
overhead if a multiple of 4MB, else more
overhead will be added. For example: to
allocate a database of 1GB, code "1024":
ADSM will make it 1025.
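The 4MB-unit sizing note above can be sketched as a quick shell calculation. This is just an estimating aid, not a replacement for dsmfmt; the rounding rule and the 1 MB LVM Fixed Area overhead are as described above:

```shell
# Estimate the space a database volume will actually occupy:
# round the requested size up to a 4MB multiple, then add the
# 1MB LVM Fixed Area overhead described above.
req_mb=1024                             # requested size in MB
alloc_mb=$(( (req_mb + 3) / 4 * 4 ))    # round up to a 4MB multiple
total_mb=$(( alloc_mb + 1 ))            # + 1MB overhead
echo "allocate ${alloc_mb} MB, occupies ${total_mb} MB"
```

So a request for 1024 MB ends up occupying 1025 MB, matching the example above.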
Database, defragment See: dsmserv UNLOADDB (TSM 3.7)
Database, defragment? You can gauge how much your TSM database
is fragmented by doing Query DB and
compare the Pct Util against the Maximum
Reduction: a "compacted" database with a
modest utilization will allow a large
reduction, but a "fragmented" one will
be much less reducible.
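The Pct Util versus Maximum Reduction comparison above can be worked into a rough percentage offline, using the same arithmetic as the Select shown under "Database fragmentation, gauge" (4 KB pages, so 1 MB = 256 pages). The input numbers below are made-up samples; plug in your own 'Query DB F=D' values:

```shell
# Rough fragmentation gauge from Query DB F=D numbers: what share
# of the database's free pages is NOT reducible, i.e. trapped
# below the high-water mark. 4KB pages => MB * 256 = pages.
usable_pages=100000      # sample "Usable Pages" value
used_pages=36000         # sample "Used Pages" value
max_reduction_mb=100     # sample "Maximum Reduction" value, in MB
awk -v u="$usable_pages" -v s="$used_pages" -v m="$max_reduction_mb" \
    'BEGIN { free = u - s;
             printf "%.2f%% fragmented\n", 100 - (m * 256) / free * 100 }'
```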
Database, delete table entry See: Backup files, delete; DELRECORD;
File, selectively delete from *SM
storage
Database, designed for integrity The design of the database updating
for ADSM uses 2-phase commit, allowing
recovery from hardware and power
failures with a consistent database.
The ADSM Database is composed of 2 types
of files, the DB and the LOG, which
should be located on separate volumes.
Updates to the DB are grouped into
transactions (a set of updates). A
2-phase commit scheme works the
following way; for the discussion,
assume we modify DB pages 22, 23:
1) start transaction
2) read 22 from DB and write to LOG
3) update 22' in DB and write 22' to log
4) same as 2), 3) for page 23
5) commit
6) free LOG space
Database, empty If you just formatted the database and
want to start fresh with ADSM, you need
to access ADSM from its console, via
SERVER_CONSOLE mode (q.v.). From there
you can register administrators, etc.,
and get started.
Database, enlarge You can extend the space which may be
used within database "volumes"
(actually, files) by using the 'EXTend
DB' command. If your existing files are
full, you *cannot* extend the files
themselves: they are fixed in size.
Instead, you have to add a volume
(file), as follows:
- Create and format the physical file
by doing this from AIX:
'dsmfmt -db /adsm/dbext1 100'
which will create a 101 MB file,
with 1 MB added for overhead.
- Define the volume (file) to ADSM:
'DEFine DBVolume /adsm/dbext1'
The space will now show up in 'Query
DBVolume' and 'Query DB', but will
not yet be available for use.
- Make the space available:
'EXTend DB 100'
Note that doing this may automatically
trigger a database backup, with message
ANR4552I, depending.
Database, extend usable space 'EXTend DB N_Megabytes'
The extension is a physical operation,
so shell "filesize" limit could disrupt
the operation.
Note that doing this may automatically
trigger a database backup, with message
ANR4552I, depending.
Database, maximum size Per APAR IC15376, the ADSM database
should not exceed 500 GB.
Per the TSM 5.1 Admin Guide: 530 GB.
Ref: Server Admin Guide, topic
Increasing the Size of the Database or
Recovery Log topic, in Notes.
See: SHow LVMFA, which reveals that the
max is actually 531.2 GB. (See the
reported "Maximum possible DB LP Table
size".)
See also: Volume, maximum size
Database, mirror See: MIRRORRead LOG
Database, mirror, create Format the copy volume via dsmfmt, then
define the mirror via:
'DEFine DBCopy Db_VolName Copy_VolName'
Then you can do an 'EXTend DB
N_Megabytes' (which will automatically
kick off a full database backup).
Database, mirror, delete 'DELete DBVolume Db_VolName'
(It will be almost instantaneous)
Message: ANR2243I
Database, number of filespace objects See: Objects in database
Database, query 'Query DB [Format=Detailed]'
Database, rebuild from storage pool No: in a disaster situation, the ADSM
tapes? server database *cannot* be rebuilt from
the data on the storage pool tapes,
because the tape files have meaning only
per the database contents.
Database, reduce by duress Sometimes you have to minimize the size
of your database in order to relocate it
or the like, but can't Reduce DB
sufficiently as it sits. If so, try:
- Prune all but the most recent
Activity Log entries.
- Delete any abandoned or useless
filespaces to make room. (Q FI F=D
will help you find those which have
not seen a backup in many a day, but
watch out for those that are just
Archive type.)
- Delete antique Libvol entries.
- If still not enough space, an
approach you could possibly use would
be to Export and delete any dormant
node data, to Import after you have
moved the db, to bring that data
back.
Database, reduce space utilized You can end up with a lot of empty space
in your database volumes. If you need to
reclaim, you can employ the technique of
successively adding a volume to the
database and then deleting the oldest
volume, until all the original volumes
have been treated. This will consolidate
the data, and can be done while *SM is
up. Note that free space within the
database is a good thing, for record
expansion.
Database, remove volume 'DELete DBVolume Db_VolName'
That starts a process to migrate data
from the volume being deleted to the
remaining volumes. You can monitor the
progress of that migration by doing
'q dbv f=d'.
Database, reorganize See: dsmserv UNLOADDB (TSM 3.7)
Database, space taken per client node This is difficult to determine (and no
one really cares, anyway), but here's an
approach: The Occupancy info, which
provides the number of filespace
objects), by type, in primary and copy
storage pools. The Admin Guide topic
"Estimating and Monitoring Database and
Recovery Log Space Requirements"
provides numbers for space utilized.
The product of the two would yield an
approximate number.
Database, space taken for files From Admin Guide chapter Managing the
Database and Recovery Log, topic
Estimating and Monitoring Database and
Recovery Log Space Requirements:
- Each version of a file that ADSM
stores requires about 400 to 600
bytes of database space. (This is an
approximation which anticipates
average usage. Consider that for
Archive files, the Description itself
can consume up to 255 chars, or
contribute less if not used.)
- Each cached or copy storage pool copy
of a file requires about 100 to 200
bytes of database space.
- Overhead could increase the required
space up to an additional 25%.
These are worst-case estimations: the
aggregation of small files will
substantially reduce database
requirements.
Note that space in the database is used
from the bottom, up.
Ref: Admin Guide: Estimating and
Monitoring Database and Recovery Log
Space Requirements.
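The Admin Guide estimates quoted above lend themselves to a back-of-envelope sizing. The midpoint figures (~500 bytes per version, ~150 per copy) and the worst-case 25% overhead are taken from the ranges above; the input counts are examples only:

```shell
# Back-of-envelope DB sizing from the per-file estimates above:
# ~500 bytes per stored file version, ~150 bytes per cached or
# copy storage pool copy, plus up to 25% overhead (worst case).
versions=10000000        # file versions expected in the database
copies=10000000          # copy storage pool / cached copies
awk -v v="$versions" -v c="$copies" \
    'BEGIN { bytes = (v * 500 + c * 150) * 1.25;
             printf "~%.1f GB database space\n", bytes / (1024^3) }'
```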
Database, "split" There is no utility for splitting the
TSM database, per se; and, certainly, a
given TSM server instance can employ
only one database. Sites with an
unweildy database size (defined as
taking too much of the day to back up)
may want to create a second TSM server
instance and have that one take some of
the load. This is most commonly
accomplished simply by having clients
start using the second server for data
storage, pointing back to the old server
only for the restoral of older data,
until that all ultimately expires on
the older server. A more cumbersome
approach is to employ Export to move
nodes to the new server, but few shops
go through that Herculean effort.
Database, verify and fix errors See: 'DSMSERV AUDITDB'
Database allocation on a disk For optimal performance and minimal seek
times:
- Use the center of a disk for TSM
space. This means that the disk arm
is never more than half a disk away
from the spot it needs to reach to
service TSM.
- You could then allocate one biggish
space straddling the center of the
disk; but if you instead make it two
spaces which touch at the center of
the disk, you gain benefit from TSM's
practice of creating one thread per
TSM volume, so this way you can have
two and thus some parallelism.
Database Backup To capture a backup copy of the ADSM
database on serial media, via the
'BAckup DB' command.
Database backups are not portable across
platforms - they were not designed to be
so - and include a lot of information
that is platform specific: use
Export/Import to migrate across
platforms.
By using the ADSMv3 Virtual Volumes
capability, the output may be stored on
another ADSM server (electronic
vaulting).
See also: dsmserv RESTORE DB
Database backup, latest SELECT DATE_TIME AS -
"DATE TIME ",TYPE, -
MAX(BACKUP_SERIES),VOLUME_NAME FROM -
VOLHISTORY WHERE TYPE='BACKUPFULL' OR -
TYPE='BACKUPINCR'
Database backup, query volumes 'Query VOLHistory Type=DBBackup'.
The timestamp displayed is when the
database backup started, rather than
finished.
Another method:
'Query DRMedia DBBackup=Yes
COPYstgpool=NONE'
Note that using Query DRMedia affords
you the ability to very selectively
retrieve info, and send it to a file,
even from a server script.
Database backup, delete all 'DELete VOLHistory TODate=TODAY
TOTime=NOW Type=DBBackup'
(Note that TSM will not allow you to
delete your last database backup, for
safety reasons. You can circumvent this,
and free a "trapped" tape, by doing a
placebo db backup to devclass type
File.)
Database backup in progress? Do 'Query DB Format=Detailed' and look
at "Backup in Progress?".
Database backup trigger, define See: DEFine DBBackuptrigger
Database backup trigger, query 'Query DBBackuptrigger
[Format=Detailed]'
Database backup volume Do 'Query VOLHistory Type=DBBackup',
if the ADSM server is up, or
'Query OPTions' and look for
"VolumeHistory".
If *SM is down, you can find that
information in the file specified on the
"VOLUMEHistory" definition in the server
options file (dsmserv.opt).
See "DSMSERV DISPlay DBBackupvolumes"
for displaying information about
specific volumes when the volume
history file is unavailable.
See "DSMSERV RESTORE DB Preview=Yes" for
displaying a list of the volumes needed
to restore the database to its most
current state.
Database backup volume, pruning If you do not have DRM:
Use 'DELete VOLHistory TODate=SomeDate
TOTime=SomeTime Type=DBBackup'
to manage the number of database
backups to keep.
If you have DRM:
'Set DRMDBBackupexpiredays __'
Database backup volumes, identifying Seek "BACKUPFULL" or "BACKUPINCR" in the
current volume history backup file - a handy way
to find them, without having to go into
ADSM. Or perform server query:
select volume_name from volhistory -
where (upper(type)='BACKUPFULL' or -
upper(type)='BACKUPINCR')
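The "handy way" above, from the shell, might look like the following. The volume history file path is an example (the doc's AIX examples use /var/adsmserv/volumehistory.backup); check your VOLUMEHistory server option for the actual name:

```shell
# List DB backup volumes straight from the volume history backup
# file, without needing a server session. Path is an example only;
# see the VOLUMEHistory option in dsmserv.opt for yours.
volhist=${volhist:-/var/adsmserv/volumehistory.backup}
if [ -f "$volhist" ]; then
    grep -E 'BACKUPFULL|BACKUPINCR' "$volhist"
fi
```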
Database backup volumes, identifying Unfortunately, when a 'DELete
historical VOLHistory' is performed the volsers of
the deleted volumes are not noted. But
you can get them two other ways:
1. Have an operating system job capture
the volsers of the BACKUPFULL,
BACKUPINCR volumes contained in the
volume history backup file (named in
the server VOLUMEHistory option)
before and after the db backup, then
compare.
2. Do 'Query ACtlog BEGINDate=-N
MSGno=1361' to pick up the historical
volsers of the db backup volumes at
backup completion to check against
those no longer in the volume
history.
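Method 1 above might be scripted along these lines. The file names, and the assumption that the volser is the last field of each BACKUP* record, are illustrative only; adjust for your volume history file layout:

```shell
# Report DB backup volsers present in a "before" volume history
# snapshot but gone from the "after" one (method 1 above).
# ASSUMES the volser is the last field of each BACKUP* record.
gone_dbb_vols() {   # args: before_file after_file
    before_tmp=$(mktemp); after_tmp=$(mktemp)
    grep -E 'BACKUPFULL|BACKUPINCR' "$1" | awk '{print $NF}' \
        | sort > "$before_tmp"
    grep -E 'BACKUPFULL|BACKUPINCR' "$2" | awk '{print $NF}' \
        | sort > "$after_tmp"
    comm -23 "$before_tmp" "$after_tmp"   # in before, not in after
    rm -f "$before_tmp" "$after_tmp"
}
```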
Database backups (Oracle, etc.) Done with TSM via the Tivoli Data
Protection (TDP) products.
See: TDP
See also: Adsmpipe
Database buffer pool size, define "BUFPoolsize" definition in the server
options file.
Database buffer pool statistics, reset 'RESet BUFPool'
Database change statistics since last 'Query DB Format=Detailed'
backup
Database consumption factors - All the administrative definitions are
here; eliminate what is no longer
needed.
- The Activity Log is contained in the
database: control amount retained via
'Set ACTlogretention N_Days'. The
Activity Log also logs administrator
commands, Events, client session
summary statistics, etc., which you
may want to limit.
- The database is at the mercy of client
nodes or their filespaces being
abandoned, and client file systems
and disks being renamed such that
obsolete filespaces consume space.
- Volume history entries consume some
space: eliminate what's obsolete via
'DELete VOLHistory'.
- More than anything, the number of
files cataloged in the database
consume the most space, and your
Copy Group retention policies govern
the amount kept. Nodes which have a
sudden growth in file system files
will inflate the db via Backup.
See: "Many small files" problem
- Restartable Restores consume space in
that the server is maintaining state
information in the database (the SQL
RESTORE table). Generally control via
server option RESTOREINTERVAL, and
reclaim space from specific
restartable restores via the server
command CANCEL RESTORE. Also, during
such a restore the server will need
extra database space to sort
filenames in its goal to minimize tape
mounts during the restoral, and so
there will be that surge in usage.
- Complex SELECT operations will require
extra database space to work the
operation.
- When you Archive a file, the directory
containing it is also archived. When
the -DEscription="..." option is used,
to render the archived file unique, it
also causes the archived directory to
be rendered unique, and so you end up
with an unexpectedly large number of
directories in the *SM database, even
though they are all effectively
duplicates in terms of path.
- The size of the Aggregate in
Small Files Aggregation is also a
factor: the more small files in an
aggregate, the lower the overhead in
database cataloging. As the 3.1
Technical Guide puts it, "The database
entries for a logical file within an
aggregate are less than entries for a
single physical file." See: Aggregate
- Make sure that clients are not running
Selective backups or Archives on their
file systems (i.e., full backups)
routinely instead of Incremental
backups, as that will rapidly inflate
the database. Likewise, be very
careful of coding MODE=ABSolute in
your Copy Group definitions.
- Talk to client administrators about
excluding useless files from backup,
like temp directories and web browser
cache files.
- Make sure that 'EXPIre Inventory' is
being run regularly - and that it gets
to run to completion. Note that
API-based clients, such as the TDP
series and HSM, require their own,
separate expiration handling: failing
to do that will result in data
endlessly piling up in the storage
pools and database.
- Not using the DIRMc option can result
in directories being needlessly
retained after their files have
expired, in that the default is for
directories to bind to the management
class with the longest retention
period (RETOnly).
- Realize that long-lived data that was
stored in the server without
aggregation will be output from
reclamation likewise unaggregated,
thus using more database space than if
it were aggregated.
(See: Reclamation)
- With the Lotus Notes Agent, *SM is
cataloging every document in the Notes
database (.NSF file).
- Beware the debris left around from the
use of DEFine CLIENTAction (q.v.).
- Windows System Objects are large and
consist of thousands of files.
- Wholesale changes of ACLs (Access
Control Lists) in a file system may
cause all the files to be backed up
afresh.
- Daylight Savings Time transitions can
cause defective TSM software to back
up every file.
- Use of DISK devclass volumes can use
more db space. (See Admin Guide table
"Comparing Random Access and
Sequential Access Disk Devices".)
In that the common cause of db growth is
file deluge from a client node, simple
ways to inspect are: produce a summary
of recent *SM accounting records;
harvest session-end ANE* records from
the Activity Log; and to do a Query
Content with a negative count value on
recently written storage pool tapes.
(Ideally, you should be running
accounting record summaries on a regular
basis as a part of system management.)
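The accounting-record summary suggested above might be sketched as follows. dsmaccnt.log is comma-separated, but the field positions used here ($4 for node name, $18 for bytes) are assumptions; verify them against the accounting record layout in the Admin Guide for your server level:

```shell
# Summarize TSM accounting records by node, to spot a client
# deluging the server with data. FIELD POSITIONS ARE ASSUMED:
# $4 = node name, $18 = a bytes count - verify against the
# accounting record layout for your server level.
summarize_acct() {   # arg: accounting log file
    awk -F, '{ bytes[$4] += $18 }
             END { for (n in bytes) print n, bytes[n] }' "$1" | sort
}
```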
Database file It is named within file:
/usr/lpp/adsmserv/bin/dsmserv.dsk
(See "dsmserv.dsk".)
Database file name (location) Is defined within file:
/usr/lpp/adsmserv/bin/dsmserv.dsk
(See "dsmserv.dsk".)
The name gets into that file via
'DEFine DBVolume' (not by dsmfmt).
ADSM seems to store the database file
name in the ODM, in that if you restart
the server with the name strings within
dsmserv.dsk changed, it will still look
for the old file names.
Database file name, determine 'Query DBVolume [Format=Detailed]'
Database filling indication Activity log will contain message
ANR0362W when utilization exceeds 80%.
Database fragmentation, gauge Try the following to report:
SELECT CAST((100 - (
CAST(MAX_REDUCTION_MB AS FLOAT) * 256 )
/ (CAST(USABLE_PAGES AS FLOAT) -
CAST(USED_PAGES AS FLOAT) ) * 100) AS
DECIMAL(4,2)) AS PERCENT_FRAG FROM DB
Database full indication ANR0131E diagnosticid: Server DB space
exhausted.
Database growth See: Database consumption factors
Database location See "Database file name"
Database log pages, mode for reading, "MIRRORRead DB" definition in the
define server options file.
Database log pages, mode for writing, "MIRRORWrite DB" definition in the
define server options file.
Database max utilization stats, reset 'RESet DBMaxutilization'
Resets the Max. Pct Util number, which
is seen in a 'Query DB', to be the same
as the current Pct Util value.
Database page size 'Query DB Format=Detailed',
"Page Size (bytes):"
Currently: 4096
Database performance - Locate the database on disks which are
separate from other operating system
services, and choose fast disks and
connection methods (like Ultra SCSI).
- Spread over multiple physical volumes
(disks) rather than consolidating on a
single large volume: TSM gives a
process thread to each volume, so
performance can improve through
parallelism. And, of course, you
always benefit by having more disk
arms to access data.
- Avoid RAID striping, as this will slow
performance. (Striping is for
distributing I/O across multiple
disks. This slows down db operations
because striping involves a relatively
costly set-up overhead to get multiple
disk working together to handle the
streaming type writing of a lot of
data. DB operations constitute many
operations involving small amounts of
data, and thus the overhead of
striping is detrimental.)
- Do 'Query DB F=D' and look at the
Cache Hit Pct. The value should be up
around 98%. If less, consider boosting
the server BUFPoolsize option.
- Assure that the server system has
plenty of real memory so as to avoid
paging in serving database needs.
See also: Server performance
Database robustness The *SM database is private to the
product. Unfortunately, it is not a
robust database, and as long as it
remains proprietary it will likely be
the product's Achilles heel. Running
multiple, simultaneous, intense
database-updating operations (Delete
Filespace, Delete Volume) has
historically caused problems, including
database deadlocks, server crashes, and
even database damage. AVOID DOING SO!!
Database size issues See: Database consumption factors
Database space utilization issues So your database seems bloated. Is
there something you can do? The ADSM
database will inevitably grow with the
number of files being backed up and
the number of backup versions retained
and their retention periods. Beyond the
usual, the following are pertinent to
database space utilization:
- Make sure you are running expiration
regularly.
- The Activity Log is in the database.
Examine your 'Set ACTlogretention'
value and look for runaway errors
that may have consumed much space.
- Look for abandoned File Spaces, the
result of PC users renaming their
disks or file systems and then doing
backups under the new name.
- Volume History information tends to
be kept forever: you need to
periodically run 'DELete VOLHistory'.
And with that command you should also
be deleting old DBBackup volumes to
reclaim tapes.
- Using verbose descriptions for
Archive files will eat space. (Each
can be up to 255 chars.)
- Consider coercing client systems to
exclude rather useless files from
backups, such as temp files and web
browser cache files.
Database space required for HSM files Figure 143 bytes + filename length.
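That rule of thumb scales like this (the file count and average name length below are made-up example inputs):

```shell
# Estimate DB space for HSM-managed files: 143 bytes + filename
# length per file, per the figure above. Inputs are examples.
files=2000000
avg_name_len=40
awk -v f="$files" -v l="$avg_name_len" \
    'BEGIN { printf "~%.1f MB\n", f * (143 + l) / 1048576 }'
```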
Database Space Trigger ADSM V3.1.2 feature which allows setting
a trigger (%) and when reached, will
dynamically create a new volume, define
it to the database and extend the db.
Database too big? A perennial question is when the TSM
database should be considered "too big"
such that a single, large server should
be split into multiple servers, and thus
multiple, smaller databases. The general
answer is: when the slowest db management
task takes too long to complete.
Currently, the slowest db management
task is Expiration. When you find that
Expiration will not complete a full,
unfettered run within an extended period
dedicated to it each day (e.g., 12
hours), then it may be time to split.
As always, consider whether a TSM
release level boost or hardware
improvements may bring the needed
relief. Database query times should be
another factor - not just Select or
Query commands, but the kind of lookups
involved in large restorals.
Database usage As of ADSM 3.1, the product database
will be used both for its usual
inventory tracking (permanent space), as
well as temporary space for SQL
operations and BAckup Node, RESTORE Node
processing. The two space utilizations
must not mix. The space for permanent
tables grows from low-numbered pages
upward, while space for temporary tables
expands from the highest numbered pages
downward. There may not be sufficient
room between the highest permanent page
and the top of the database: Query DB
reports database utilization but cannot
illuminate such a shortage. You would
need to expand the database or
reorganize it with UNLOADDB,LOADDB to
make space.
Ref: IBM site Solution 1116019
Database volume (file) Each database volume (file) contains
info about all the other db and log
files.
See also: dsmserv.dsk
Database volume, add 'DEFine DBVolume Vol_Ser'
Database volume, delete 'DELete DBVolume Vol_Ser'
Database volume, query 'Query DBVolume [VolName]
[Format=Detailed]'
Database volume, vary back on 'VARy ONline VolName'
after message ANR0202W, ANR0203W,
ANR0204W, ANR0205W. Always look into
the cause before attempting to bring the
possibly defective volume back.
Database volume usage, verify If your *SM db volumes are implemented
as OS files (rather than rlv's) you can
readily inspect *SM's usage of them by
looking at the file timestamps, as the
time of last read and write will be
thereby recorded.
Databases, backing up Is performed via ADSM Connect Agents and
TSM Data Protection (agents).
For supported list, see the Clients
software list (URL available at the
bottom of this document). For others
you'll have to seek another source.
General note: Backing up active
databases using simple incremental
backup, from outside the database, is
problematic because part of the database
is on disk and part is in memory, and
perhaps elsewhere (e.g., recovery log).
Unlike a sequential file, which is
updated either by appending to it or by
replacing it, a database gets updated in
random locations inside of it - often
"behind" the backup utility, which is
reading the database as a sequential
file. Furthermore, many databases
consist of multiple, interrelated files,
and so it is impossible for an external
backup utility to capture a consistent
image of the data. Thus, it's advisable
to back up databases using an API-based
utility which participates in the
database environment to back it up from
the inside, and thus get a consistent
and restorable image. Alternately, some
database applications can themselves
make a backup copy of the database,
which can then be backed up via TSM
incremental backup.
Ref: redbook Using ADSM to Back Up
Databases (SG24-4335)
DATE SQL: The month-day-year portion of the
TIMESTAMP value, of form MM/DD/YYYY.
Sample usage:
SELECT NODE_NAME, PLATFORM_NAME, -
DATE(LASTACC_TIME) FROM NODES
SELECT DATE(DATE_TIME) FROM VOLHISTORY -
WHERE TYPE='BACKUPFULL'
See also: TIMESTAMP
Date, per server ADSM server command 'SHow TIME' (q.v.).
See also: ACCept Date
DATE_TIME SQL database column, as in VOLHISTORY,
being a timestamp (date and time), like:
2001-07-30 09:30:07.000000
See also: CURRENT_DATE; DATE
DATEformat, client option, query Do ADSM 'dsmc Query Option' or TSM
'dsmc show options' and look at the "Date Format"
value. A value of 0 indicates that your
opsys dictates the format.
See also: TIMEformat
DATEformat, client option, set Definition in the client user options
file. Specifies the format by which
dates are displayed by the *SM client.
NOTE: Not usable with AIX or Solaris, in
that they use NLS locale settings (see
/usr/lib/nls/loc in AIX, and
/usr/lib/localedef/src in Solaris). Do
'locale' in AIX to see its settings.
"1" - format is MM/DD/YYYY (default)
"2" - format is DD-MM-YYYY
"3" - format is YYYY-MM-DD
"4" - format is DD.MM.YYYY
"5" - format is YYYY.MM.DD
Default: 1
Query: ADSM 'dsmc Query Options' or TSM
'dsmc show options' and look at the
"Date Format" value. A value of 0
indicates that your opsys dictates the
format.
Advisory: Use 4-digit year values.
Various problems have been encountered
when using 2-digit year values, such as
Retrieve not finding files which were
Archived using a RETV=NOLIMIT (so date
past 12/31/99).
DATEformat, server option, query 'Query OPTion' and look at the
"DateFormat" value.
DATEformat, server option, set Definition in the server options file
for ADSM and old TSM.
Specifies the format by which dates are
displayed by the *SM server (except for
'Query ACtlog' output, which is always
in MM/DD/YY format).
"1" - format is MM/DD/YYYY (default)
"2" - format is DD-MM-YYYY
"3" - format is YYYY-MM-DD
"4" - format is DD.MM.YYYY
"5" - format is YYYY.MM.DD
Default: 1
Note that this does not affect the
format of dates in the dsmaccnt.log.
This option is obsolete since TSM 3.7:
the date format is now governed by the
locale in which the server is running,
where the LANGuage server option is the
surviving control over this.
Ref: Installing the Server...
See also: LANGuage; TIMEformat
DAY(timestamp) SQL function to return the day of the
month from a timestamp.
See also: HOUR(); MINUTE(); SECOND()
Day of week in Select See: DAYNAME
Daylight Savings Time You should not have to do anything in
TSM during a Daylight Savings Time
transition: that should be handled by
your computer operating system, and all
applications running in the system will
pick up the adjusted time.
In a z/OS environment, see IBM site
article swg21153685.
See also: ACCept Date; NTFS and Daylight
Savings Time
DAYNAME(timestamp) SQL function to return the day of the
week from a timestamp. Example:
SELECT ... FROM ... WHERE
DAYNAME(current_date)='Sunday'
See also: HOUR(); MINUTE(); SECOND()
DAYS SQL "labeled duration": a specific unit
of time as expressed by a number (which
can be the result of an expression)
followed by one of the seven duration
keywords: YEARS, MONTHS, DAYS, HOURS,
MINUTES, SECONDS, or MICROSECONDS
(q.v.).
The number specified is converted as if
it were assigned to a DECIMAL(15,0)
number. A labeled duration can only be
used as an operand of an arithmetic
operator in which the other operand is a
value of data type DATE, TIME, or
TIMESTAMP. Thus, the expression HIREDATE
+ 2 MONTHS + 14 DAYS is valid, whereas
the expression HIREDATE + (2 MONTHS + 14
DAYS) is not. In both of these
expressions, the labeled durations are 2
MONTHS and 14 DAYS.
DAYS(timestamp) SQL function to get the number of days
from a timestamp (since January 1, Year
1).
DB2 backups Is not a TDP, but like them it utilizes
the TSM client API to store the data on
the TSM server.
It is best to invoke the client while
sitting within the client directory.
Instead of, or addition to that, you may
want to set the following environment
variables:
Basic client:
DSM_CONFIG=<Drive>:<PathToOptionsFile>
DSM_DIR=<Drive>:<NameOfTSMdirectory>
DSM_LOG=<Drive>:<NameOfErrorLogDir>
API client:
DSMI_CONFIG=<Drive>:<PathToOptionsFile>
DSMI_DIR=<Drive>:<NameOfTSMdirectory>
DSMI_LOG=<Drive>:<NameOfErrorLogDir>
Each backup is its own filespace, whose
name is that of the DB2 database plus a
timestamp.
See redbook: "Using ADSM to Back Up
Databases", SG24-4335-03 and
"Managing VLDB Using DB2 UDB EEE",
SG24-5105-00.
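For scripted invocations, the API variables above can be set in the environment before the backup runs; a minimal sketch in Python, with purely hypothetical paths - substitute your actual install locations:

```python
import os

# Hypothetical locations - adjust to where the TSM API client
# is actually installed on your system.
os.environ["DSMI_CONFIG"] = "/opt/tivoli/tsm/client/api/bin64/dsm.opt"
os.environ["DSMI_DIR"] = "/opt/tivoli/tsm/client/api/bin64"
os.environ["DSMI_LOG"] = "/var/log/tsm"
# A subsequently spawned db2 backup would inherit these, e.g.:
#   subprocess.run(["db2", "backup", "db", "SAMPLE", "use", "tsm"])
```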
DB2 backups, delete You have to manually inactivate the
backups using the db2adutl delete
command.
Sample tasks:
'db2adutl query full'
will list your db2 backups;
'db2adutl delete full older than N days'
will delete.
DB2 backups, query Like: db2adutl query full
(You cannot use 'dsmc query backup'
because the backups were stored via the
TSM client API.)
DB2 log handling The DB2 database backup does not pick up
the DB2 logs: use the user exit program
provided by DB2 to archive (not backup)
the inactive log files.
DB2 restore command Like: db2 restore db db0107 use tsm
.DBB File name extension created by the
server for FILE devtype scratch volumes
which contain TSM database backup data.
Ref: Admin Guide, Defining and Updating
FILE Device Classes
See also: .BFS; .DMP; .EXP; FILE
DBBACKUP In 'Query VOLHistory', volume type for
sequential access storage volumes used
for database backups.
Also under 'Volume Type' in
/var/adsmserv/volumehistory.backup .
DBBackup tapes vanishing with DRM Watch out that you don't delete database
volume history with the same number of
days as the DRM
"Set DRMDBBackupexpiredays" command:
just when ADSM DRM is changing the
status of the db tapes to "vault
retrieve" you are also deleting them
from the volume history and therefore
never see them as "vault retrieve".
DBBackuptrigger The Database Backup Trigger: to define
when TSM is to automatically run a full
or incremental backup of the TSM
database, based upon the Recovery Log
filling, when running in Rollforward
mode. (As opposed to getting message
ANR0314W in Normal mode.)
At triggering time, TSM also
automatically deletes any unnecessary
recovery log records - which may take
valuable time.
Msgs: ANR4553I
See: DEFine DBBackuptrigger; Set LOGMode
DBDUMP In 'Query VOLHistory', Volume Type to
say that volume was used for an online
dump of the database (pre ADSM V2R1).
Also under 'Volume Type' in
/var/adsmserv/volumehistory.backup .
.dbf See: Oracle database factoids
DBPAGESHADOW TSM 4.1 server option. Provides a means
of mirroring the last batch of
information written to the server
database. If enabled, the server will
mirror the pages to the file specified
by DBPAGESHADOWFILE option. On restart,
the server will use the contents of this
file to validate the information in the
server database and if needed take
corrective action if the information in
the actual server database volumes is
not correct as verified by the
information in the page shadow file.
In this way, if an outage occurs that
affects both mirrored volumes, the
server can recover pages that have been
partially written.
See the dsmserv.opt.smp file for an
explanation of the DBPAGESHADOW and
DBPAGESHADOWFILE options. Note that the
DBPAGESHADOWFILE description differs
from what is documented in the TSM
publications. This option does NOT
prepend the server name to the file
name: the file name used is simply the
name specified on the option.
DBPAGESHADOWFILE TSM 4.1 server option.
Specifies the name of the database page
shadowing file. See: DBPAGESHADOW
DBSnapshot See: BAckup DB; DELete VOLHistory;
"Out of band"; Query VOLHistory
DBSnapshot, delete This is performed with the command
'DELete VOLHistory ... Type=DBSnapshot'.
However, TSM insists that the latest
snapshot database backup cannot be
deleted! A way to get around this would
be to perform another DBSnapshot, this
time directed at a File type of output
devclass. This would allow you to delete
the tape volume from TSM and re-use it,
and you could then delete the file at
the operating system level. This
presumes that you have enough disk space
for the file. You might be able to get
away with making the file /dev/null if
you are on Unix.
D/CAS Circa 1990 Data CASsette tape
technology using a specially notched
Philips audio cassette cartridge and 1/8"
tape, full width. Variations:
D/CAS-43 50 MB
Tape vendors: Maxell 184720
D/CAS-86 100 MB
600 feet length, 16,000 ftpi
Tape vendors: Maxell CS-600XD
DCR Design Change Request
DDS* Digital Data Storage: the data recording
format for 4mm (DAT) tapes, as in DDS1,
DDS2, DDS3.
See: DAT
DDS2 tapes Can be read by DDS2 and DDS3 drives.
DEACTIVATE_DATE *SM SQL: Column in the BACKUPS table,
being the date and time that the object
was deactivated; that is, when it went
from being an Active file to Inactive.
Example: 2000-08-16 02:53:27.000000
The value is naturally null for Active
files (those whose STATE is
ACTIVE_VERSION). It may also be null for
Inactive files (INACTIVE_VERSION): this
is the case for old files marked for
expiration based on number of versions
(rather than retention periods), so
marked during client Backup processing
(Incremental or Selective). Note that
such marked files can be seen in a
server Select, but cannot be seen from
client queries.
During expiration if the TSM server
encounters an inactive version without a
deactivation date, then TSM expires this
object. Looked at another way, if client
backup processing does not occur,
version-oriented expiration cannot
occur.
See also: dsmc Query Backup
Deadlocks in server? 'SHow DEADLocks' (q.v.)
Msgs: ANR0390W
Debugging See "CLIENT TRACING" and
"SERVER TRACING" at bottom of this
document.
DEC SQL function to convert a string to a
decimal number. Syntax:
DEC(String,Precision,Scale)
String Is the string to be converted
Precision Is the total number of digits
(before and after the decimal point).
Scale Is the length for the portion
after the decimal point.
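The conversion can be approximated with Python's decimal module, assuming standard DECIMAL semantics (precision = total digits, scale = digits after the point); this is my own illustration, not TSM code:

```python
from decimal import Decimal, ROUND_HALF_UP

def dec(string, precision, scale):
    # Approximate SQL DEC(String, Precision, Scale): keep 'scale'
    # digits after the point, at most 'precision' digits in total.
    value = Decimal(string).quantize(Decimal(1).scaleb(-scale),
                                     rounding=ROUND_HALF_UP)
    if len(value.as_tuple().digits) > precision:
        raise ValueError("value does not fit DECIMAL(%d,%d)"
                         % (precision, scale))
    return value

print(dec("123.456", 5, 2))   # → 123.46
```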
DEC Alpha client Storage Solutions Specialists provides
an ADSM API called ABC. See
HTTP://WWW.STORSOL.COM.
DEFAULT The generic identifier for the default
management class, as shows up in the
CLASS_NAME column in the Archives and
Backups SQL tables. Note that "DEFAULT"
is a reserved word: you cannot define a
management class with that name.
See also: CLASS_NAME; Default management
class
Default management class The management class *SM assigns to a
storage pool file if there is no INCLUDE
option in effect which explicitly
assigns a management class to specified
file system object names.
Hard links are bound to the default
management class in that they are not
directories or files.
Note that automatic migration occurs
*only* for the default management class;
for the incl-excl named management class
you have to manually incite migration.
Default management class, establish 'ASsign DEFMGmtclass DomainName SetName
ClassName'
Default management class, query 'Query POlicyset' and look in the
Default Mgmt Class Name column
or 'Query MGmtclass' and look in the
Default Mgmt Class column
DEFAULTServer Client System Options file (dsm.sys)
option to specify the default server.
This is a reference to the SErvername
stanza which is coded later in the file:
it is *not* the actual server name,
which is set via SET SERVERNAME.
The stanza name is restricted to 8
characters (not 64, as the manual says).
HSM migration will use this value unless
MIgrateserver is specified.
DEFine Administrator You mean: REGister Admin
DEFine ASSOCiation Server command to associate one or more
client nodes with a client schedule
which was established via
'DEFine SCHedule'.
Syntax:
'DEFine ASSOCiation Domain_Name
Schedule_Name Node_name [,...]'
Note that defining a new schedule to a
client does not result in it promptly
"seeing" the new schedule, when
SCHEDMODe PRompted is in effect: you
need to restart the scheduler so that it
talks to the server and gets scheduled
for the new task.
Related: 'DELete ASSOCiation'
DEFine BACKUPSET Server command to define a client backup
set that was previously generated on one
server and make it available to the
server running this command. The client
node has the option of restoring the
backup set from the server running this
command rather than the one on which the
backup set was generated. Any backup set
generated on one server can be defined
to another server as long as the servers
share a common device type. The level of
the server to which the backup set is
being defined must be equal to or
greater than the level of the server
that generated the backup set. You can
also use the DEFINE BACKUPSET command to
redefine a backup set that was deleted
on a server. Syntax:
'DEFine BACKUPSET Client_NodeName
BackupSetName
DEVclass=DevclassName
VOLumes=VolName[,VolName...]
[RETention=Ndays|NOLimit]
[DESCription=____]'
See also: GENerate BACKUPSET
DEFine CLIENTAction TSM server command to schedule one or
more clients to perform a command, once.
This results in the definition of a
client schedule with a name like "@1",
PRIority=1, PERUnits=Onetime, and
DURunits to the number of days set by
the duration period of the client
action. It also does DEFine ASSOCiation
to have the operation handled by the
specified nodenames.
'DEFine CLIENTAction
[NodeName[,Nodename]]
[DOmain=DomainName]
ACTion=ActionToPerform
[OPTions=AssociatedOptions]
[OBJects=ActionObjects]
[Wait=No|Yes]'
where ACTion is one of:
Incremental
Selective
Archive
REStore
RETrieve
IMAGEBACkup
IMAGEREStore
Command
Macro
For OBJects: Normally code within double
quotes; but if you need to code quotes
within quotes, enclose the whole in
single quotes and the internals as
double quotes. Example:
DEFine CLIENTAction NODEA -
ACTion=Command -
OBJects='mail -s "Subject line, body
empty" joe </dev/null >/dev/null'
Where ACTion=Command, you can code
OBJects with multiple operating system
commands, separated by the conventional
command separator for that environment.
For example, in Unix, you can cause a
delayed execution by coding a 'sleep'
ahead of the command, as in:
OBJects='sleep 20; date'.
If there is any question about the
invoked commands being in the Path which
the scheduler process may have been
started with, by all means code the
commands with full path specs, which
will avoid 127 return code issues.
The Wait option became available in TSM
4.1.
Note that a Command is run under the
account under which the TSM server was
started (in Unix, usually root).
Timing: How soon the action is performed
is at the mercy of your client SCHEDMODe
spec: POlling is at the client's whim,
and will result in major delay compared
to PRompted, where the server initiates
contact with the client (when it gets
around to it - *not* necessarily
immediately). When using PRompted,
watch out for PRESchedulecmd and
POSTSchedulecmd, which would thus get
invoked every time.
Housekeeping: Because of the schedule
clutter left behind, you should
periodically run 'DELete SCHedule
Domain_Name @*', which gets rid of the
temporary schedule and association.
Msgs: ANR2510I, ANR2561I
See also: DEFine SCHedule, client;
SET CLIENTACTDuration
DEFine CLIENTOpt Server command to add a client option to
an option set. Syntax:
DEFine CLIENTOpt OptionSetName
OptionName 'OptionValue'
[Force=No|Yes]
[SEQnumber=number]
Force will cause the server-defined
option to override that in the client
option file - for singular options
only...not additive options like
Include-Exclude and DOMain. Additive
options will always be seen by the
client (as long it is at least V3), and
will be logically processed ahead of the
client options.
Code the OptionValue in single quotes to
handle multi-word values, and use
double-quotes within the single quotes
to further contain sub-values. Example:
DEFine CLIENTOpt SETNAME INCLEXCL
'Exclude "*:\...\Temporary Internet
Files\...\"' SEQ=0
See also: Client Option Set
DEFine CLOptset Examples:
DEFine cloptset ts1 desc='Test option
sets'
COMMIT
DEFine CLIENTOpt ts1 CHAngingretries 1
seq=10
DEFine CLIENTOpt ts1 COMPRESSAlways=Yes
Force=Yes SEQnumber=20
DEFine CLIENTOpt ts1 INCLEXCL
"exclude /tmp/.../*"
DEFine CLIENTOpt ts1 INCLEXCL
"include ""*:\My Docs\...\*"""
COMMIT
See also: Client Option Set
DEFine COpygroup Server command to define a Backup or
Archive copy group within a policy
domain, policy set, and management
class. Does not take effect until you
have performed 'VALidate POlicyset' and
'ACTivate POlicyset'.
DEFine COpygroup, archive type 'DEFine COpygroup DomainName PolicySet
MgmtClass Type=Archive
DESTination=PoolName
[RETVer=N_Days|NOLimit]
[SERialization=SHRSTatic|STatic|
SHRDYnamic|DYnamic]
DEFine COpygroup, backup type 'DEFine COpygroup DomainName PolicySet
MgmtClass [Type=Backup]
DESTination=Pool_Name
[FREQuency=Ndays]
[VERExists=N_Versions|NOLimit]
[VERDeleted=N_Versions|NOLimit]
[RETExtra=N_Days|NOLimit]
[RETOnly=N_Days|NOLimit]
[MODE=MODified|ABSolute]
[SERialization=SHRSTatic|STatic|
SHRDYnamic|DYnamic]'
DEFine DBBackuptrigger Server command to define settings for
the database backup trigger. Syntax:
'DEFine DBBackuptrigger
DEVclass=DevclassName
[LOGFullpct=N]
[INCRDEVclass=DevclassName]
[NUMINCremental=???]'
where:
LOGFullpct Specifies the Recovery Log
percent fullness threshold at which an
automatic backup is triggered, 1 - 99.
Default: 50 (%). Choose a value which
gives the backup a chance to complete
before the Log fills.
NUMINCremental Specifies the maximum
number of Incrementals that will be
performed before a Full is done. Code
0 - 32, where 0 says to only do Fulls.
Default = 6.
See also: DBBackuptrigger
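The LOGFullpct threshold amounts to a simple percentage comparison; an illustrative sketch of the triggering decision (function name and sample numbers are mine, not TSM internals):

```python
def db_backup_triggered(log_used_mb, log_capacity_mb, logfullpct=50):
    # Trigger an automatic database backup once recovery-log
    # utilization reaches the LOGFullpct threshold (default 50%).
    return 100.0 * log_used_mb / log_capacity_mb >= logfullpct

print(db_backup_triggered(600, 1000))  # → True (60% full)
```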
DEFine DBCopy Server command to define a volume copy
(mirror) of a database volume. Syntax:
'DEFine DBCopy Db_VolName Copy_VolName'
DEFine DBVolume Server command to define an additional
volume for the database. Syntax:
'DEFine DBVolume Vol_Ser
Formatsize=#MB Wait=No|Yes'
Messages: ANR2429E DEFINE DBVolume:
Maximum database capacity exceeded.
Note that you benefit from having more
DB volumes. See: Database performance
DEFine DEVclass Server command to define a device class
for storage pools, and associating it
with a previously defined library, if
applicable.
Note that the device class DISK is
pre-defined in TSM, as used in DEFine
STGpool for random access devices.
See also: Devclass
DEFine DEVclass (3590) 'DEFine DEVclass DevclassName
DEVType=3590 LIBRary=LibName
[FORMAT=DRIVE|3590B|3590C|
3590E-B|3590E-C]
[MOUNTRetention=Nmins]
[PREFIX=ADSM|TapeVolserPrefix]
[ESTCAPacity=X]
[MOUNTWait=Nmins]
[MOUNTLimit=DRIVES|Ndrives|0]'
DEFine DEVclass (File) 'DEFine DEVclass DevclassName
DEVType=FILE
[MOUNTLimit=1|Ndrives|DRIVES]
[MAXCAPacity=4M|maxcapacity]
[DIRectory=currentdir|dirname]'
Note that "3590" is a special, reserved
DEVType.
Specifying MOUNTLimit=DRIVES allows *SM
to adapt to the number of drives
actually available. (Do not use for
External LIbraries (q.v.).)
See also: .DBB; .DMP; .EXP; FILE
DEFine DOmain Server command to define a policy
domain. Syntax:
'DEFine DOmain DomainName
[DESCription="___"]
[BACKRETention=NN]
[ARCHRETention=NN]'
Since a client node is assigned to one
domain name, it makes sense for the
domain name to be the same as the client
node name (i.e., the host name).
See: ARCHRETention; BACKRETention
DEFine DRive Server command to define a drive to be
used in a previously-defined library.
Syntax:
'DEFine DRive LibName DriveName
DEVIce=/dev/???
[ONLine=Yes|No]
[CLEANFREQuency=None|Asneeded|N]
[ELEMent=SCSI_Lib_Element_Addr]'
where ONLine says whether a drive should
be considered available to *SM.
The TSM Admin Ref manual specifically
advises: "Each drive is assigned to a
single library." DO NOT attempt to
define a physical drive to more than one
library! Doing so will result in
conflicts which will render drives
offline. Thus, with a single library,
you cannot use the same drives for
multiple scratch pools, for example. To
get around this: say you have both 3590J
tapes and 3590Ks, but want the lesser
tapes used for offsite volumes. What you
can do is use DEFine Volume to assign
the 3590Js to the offsite pool - which
will go on to use the general scratch
pool only when its assigned volumes are
used up.
Example: 'DEFine DRive OURLIBR
OURLIBR.3590_300
DEVIce=/dev/rmt1'
TSM will get the device type from the
library's Devclass, which will
subsequently turn up in 'Query DRive'.
It is not necessary to perform an
ACTivate POlicyset after the Define.
In a 3494, how does TSM communicate with
the Library Manager to perform a mount
on a specific drive if the LM knows
nothing about the opsys device spec? In
a preliminary operation, TSM issues an
ioctl() MTDEVICE request, after having
performed an open() on the /dev/rmt_
name to obtain a file descriptor, to
first obtain that Device Number from the
Library Manager, and thereafter uses
that physical address for subsequent
mount requests. For an example, see
/usr/lpp/Atape/samples/tapeutil.c .
DEFine LIBRary Server command to define a Library.
Syntax for 3494:
'DEFine LIBRary LibName LIBType=349x -
DEVIce=/dev/lmcp0
PRIVATECATegory=Np_decimal
SCRATCHCATegory=Ns_decimal'
The default Private category code: 300
(= X'12C').
The default Scratch category code: 301
(= X'12D').
With 3494 libraries and 3590 tapes, the
defined Scratch category code is for
3490 type tapes, and that value + 1 will
be used for your 3590 tapes. Server
option ENABLE3590LIBRARY must also be
defined for 3590 use.
In choosing category code numbers, be
aware that the 'mtlib' command
associated with 3494s reports category
code numbers in hexadecimal: you may
want to choose values which come out to
nice, round numbers in hex, and code
their decimal equivalents in the DEFine
LIBRary.
Realize also that choosing category
codes is a major commitment: you can't
change them in UPDate LIBRary.
AUTOLabel is new in TSM 5.2, for SCSI
libraries, to specify whether the server
attempts to automatically label tape
volumes. Requires checking in the tapes
with CHECKLabel=Barcode on the CHECKIn
LIBVolume command. "No" Specifies that
the server does not attempt to label any
volumes. "Yes" says to label only
unlabeled volumes. OVERWRITE is to
attempt to overwrite an existing label -
only if both the existing label and the
bar code label are not already defined
in any server storage pool or volume
history list.
DO NOT attempt to define multiple
libraries to simultaneously use the same
drives. See comments under DEFine DRive.
See also: ENABLE3590LIBRARY;
Query LIBRary; SCRATCHCATegory; UPDate
LIBRary
DEFine LOGCopy Server command to define a volume copy
(mirror) of a recovery log volume.
Syntax:
'DEFine LOGCopy RecLog_VolName
Mirror_Vol'
DEFine LOGVolume Server command to define an additional
recovery log volume. Syntax:
'DEFine LOGVolume RecLog_VolName'
Messages: ANR2452E
DEFine MGmtclass Server command to define a management
class within a policy set. Syntax:
'DEFine MGmtclass DomainName SetName
ClassName
[SPACEMGTECH=AUTOmatic|
SELective|NONE]
[AUTOMIGNOnuse=Ndays]
[MIGREQUIRESBkup=Yes|No]
[MIGDESTination=poolname]
[DESCription="___"]'
Note that except for DESCription, all of
the optional parameters are Space
Management Attributes for HSM.
DEFine PATH TSM server command to define a path, and
thus access, from a source to a
destination - a new requirement as of
TSM 5.1, to support server-free backups.
DEFine PATH Source_Name Destination_Name
SRCType=DATAMover|SERVer
[AUTODetect=No|Yes]
DESTType=DRive LIBRary=Library_Name
DEVIce=Device_Name|FILE [ONLine=Yes|No]
[DIRectory=Current_Directory|<Other>]
The source and destination must be
defined before the path.
With SRCType=SERVer, the Source_Name is
the server name which was defined using
'Set SERVername _____'.
Additional info:
http://www.ibm.com/support/
docview.wss?uid=swg21083662
Note that the server Device Configuration
file will now contain this command (msg
ANR0901W if erroneous).
See also: DEFine DRive; Paths
DEFine POlicyset Server command to define a policy set
within a policy Domain. Syntax:
'DEFine POlicyset Domain_Name SetName
[DESCription="___"]'
DEFine SCHedule, administrative Server command to define an
administrative schedule.
Syntax:
'DEFine SCHedule SchedName
Type=Administrative
CMD=CommandString
[ACTIVE=No|Yes]
[DESCription="___"]
[PRIority=5|N]
[STARTDate=MM/DD/YYYY|TODAY]
[STARTTime=NNN]
[DURation=N]
[DURunits=Minutes|Hours|Days|
INDefinite]
[PERiod=N]
[PERUnits=Hours|Days|Weeks|
Months|Years|Onetime]
[DAYofweek=ANY|WEEKDay|WEEKEnd|
SUnday|Monday|TUesday|
Wednesday|THursday|
Friday|SAturday]
[EXPiration=Never|some_date]'
The schedule name can be up to 30 chars.
In CMD=CommandString: string length is
limited to 512 chars; you cannot specify
redirection (> or >>).
Macros cannot be scheduled (as they
reside on the client, not the server),
but you can schedule (server) Scripts.
DEFine SCHedule, client Server command to define a schedule
which a client may use via server
command 'DEFine ASSOCiation'.
Syntax:
'DEFine SCHedule DomainName SchedName
[DESCription="___"]
[ACTion=Incremental|Selective|
Archive|REStore|
RETrieve|Command|Macro]
[OPTions="___"] [OBJects="___"]
[PRIority=N] [STARTDate=NNN]
[STARTTime=HH:MM:SS|NOW]
[DURation=N]
[DURunits=Hours|Minutes|Days|
INDefinite]
[PERiod=N]
[PERUnits=Days|Hours|Weeks|
Months|Years|Onetime]
[DAYofweek=ANY|WEEKDay|WEEKEnd|
SUnday|Monday|TUesday|
Wednesday|THursday|
Friday|SAturday]
[EXPiration=Never|some_date]'
The schedule name can be up to 30 chars.
Use PERUnits=Onetime to perform the
schedule once.
ACTion=Command allows specifying that
the schedule processes a client
operating system command or script whose
name is specified via the OBJects
parameter. Be careful not to specify
too many objects, or use wildcards, else
msg ANS1102E can result. See also
"Continuation and quoting". Note that
because TSM has no knowledge of the
workings of the invoked command, it can
only interpret rc 0 from the invoked
command as success and any other value
as failure, so plan accordingly.
OBJects specifies the objects (file
spaces or directories) for which the
specified action is performed. If
ACTion=Incremental, you may change the
OBJects and the change will be seen in
the next scheduled backup. This is a
distinct advantage over relying upon the
client Domain statement to list the file
systems to be backed up, in that a
change to the client options file is not
seen until the scheduler is restarted.
OPTions specify options to the dsmc
command, just as you would when manually
invoking dsmc on that client platform,
including leading hyphen as appropriate
(e.g., -subdir=yes).
Once the schedule is defined, you need
to bind it to the client node name:
see 'DEFine ASSOCiation'. Then you can
start the scheduler process on the
client node.
See also: DEFine CLIENTAction; DURation;
SET CLIENTACTDuration; SHow PENDing
DEFine SCRipt ADSMv3 server command to define a Server
Script. Syntax:
'DEFine SCRipt Script_Name
["Command_Line..." [Line=NNN]
| File=File_Name]
[DESCription=_____]'
Command lines are best given in quotes,
and can be up to 1200 characters long.
The description length can be up to 255.
The DEFine will fail if there is a
syntax error in the script, such as a
goto target lacking a trailing colon or
target label longer than 30 chars, with
msg ANR1469E.
It is probably best to create and
maintain scripts in files in the server
system file system, as the line-oriented
revision method is quite awkward.
See also: Server Scripts; UPDate SCRipt
DEFine SERver To define a Server for Server-to-Server
Communications, or to define a Tivoli
Storage Manager storage agent as if it
were a server. Syntax:
For Enterprise Configuration, Enterprise
Event Logging, Command Routing, and
Storage Agent:
'DEFine SERver ServerName
SERVERPAssword=____
HLAddress=ip_address
LLAddress=tcp_port
[COMMmethod=TCPIP]
[URL=url] [DESCription=____]
[CROSSDEFine=No|Yes]'
For Virtual Volumes:
'DEFine SERver ServerName PAssword=____
HLAddress=ip_address
LLAddress=tcp_port
[COMMmethod=TCPIP] [URL=____]
[DELgraceperiod=NDays]
[NODEName=NodeName]
[DESCription=____]'
See also: Query SERver;
Set SERVERHladdress;
Set SERVERLladdress
DEFine SPACETrigger ADSMv3 server command to define settings
for triggers that determine when and how
the server resolves space shortages in
the database and recovery log. It can
then allocate more space for the
database and recovery log when space
utilization reaches a specified value.
After allocating more space, it
automatically extends the database or
recovery log to make use of the new
space.
Note: Setting a space trigger does not
mean that the percentage used in the
database and recovery log will always be
less than the value specified with the
FULLPCT parameter. TSM checks usage when
database and recovery log activity
results in a commit. Deleting database
volumes and reducing the database does
not cause the trigger to
activate. Therefore, the utilization
percentage can exceed the set value
before new volumes are online.
Mirroring: If the server is defined with
mirrored copies for the database or
recovery log volumes, TSM tries to
create new mirrored copies when the
utilization percentage is reached. The
number of mirrored copies will be the
same as the maximum number of mirrors
defined for any existing volumes. If
sufficient disk space is not available,
TSM creates a database or recovery log
volume without a mirrored copy. Syntax:
DEFine SPACETrigger DB|LOG Fullpct=__
[SPACEexpansion=N_Pct]
[EXPansionprefix=______]
[MAXimumsize=N_MB]
Msgs: ANR4410I; ANR4411I; ANR4412I;
ANR4414I; ANR4415I; ANR4430W; ANR7860W
See also: Query SPACETrigger
DEFine STGpool (copy) DEFine STGpool PoolName DevclassName
POoltype=COpy
[DESCription="___"]
[ACCess=READWrite|READOnly|
UNAVailable]
[COLlocate=No|Yes|FIlespace]
[REClaim=PctOfReclaimableSpace]
[MAXSCRatch=N] [REUsedelay=N]
PoolName can be up to 30 characters.
See also: MAXSCRatch
DEFine STGpool (disk) Server command to define a storage pool.
Syntax for a random access storage pool:
'DEFine STGpool PoolName DISK
[DESCription="___"]
[ACCess=READWrite|READOnly|
UNAVailable]
[MAXSize=MaxFileSize]
[NEXTstgpool=PoolName]
[MIGDelay=Ndays]
[MIGContinue=Yes|No]
[HIghmig=PctVal] [LOwmig=PctVal]
[CAChe=Yes|No] [MIGPRocess=N]'
PoolName can be up to 30 characters.
Note that MIGPRocess pertains only to
disk storage pools.
See also: DISK; MIGContinue
DEFine STGpool (tape) Server command to define a storage pool.
Syntax for a tape storage pool:
'DEFine STGpool PoolName DevclassName
[DESCription="___"]
[ACCess=READWrite|READOnly|
UNAVailable]
[MAXSize=NOLimit|MaxFileSize]
[NEXTstgpool=PoolName]
[MIGDelay=Ndays]
[MIGContinue=Yes|No]
[HIghmig=PctVal] [LOwmig=PctVal]
[COLlocate=No|Yes|FIlespace]
[REClaim=N]
[MAXSCRatch=N] [REUsedelay=N]
[OVFLOcation=______]'
PoolName can be up to 30 characters.
Note that once a storage pool is
defined, it is thereafter stuck with the
specified devclass: you cannot change it
with UPDate STGpool. (You are left with
doing REName STGpool, and then redefine
the original name to be as you want it,
whereafter you can do Move Data to
transfer contents from old to new.)
The OVFLOcation value will appear in
message ANR8766I telling of the place
for the ejected volume, so use
capitalization and wording which makes
it stand out in that context.
See also: MAXSCRatch; MIGContinue
DEFine Volume Server command to define a volume in a
storage pool (define to a storage pool).
Syntax:
'DEFine Volume PoolName VolName
[ACCess=READWrite|READOnly|
UNAVailable|OFfsite]
[LOcation="___"]'
Resulting msg: ANR2206I
Note that a volume can belong to only
one storage pool.
A storage pool which normally uses
scratch volumes may also have specific
volumes defined to it: the server will
use the defined volume first. (Ref:
Admin Guide, "How the Server Selects
Volumes with Collocation Enabled")
If a 3590 tape, do 'CHECKIn' after.
Defined Volume A volume which is permanently assigned
to a storage pool via DEFine Volume.
Contrast with Scratch Volumes, which are
dynamically taken for use in storage
pools, whereafter they leave the storage
pool to return to Scratch state.
Ref: Admin Guide, "Scratch Volumes
Versus Defined Volumes".
See also: Scratch Volume
Degraded Operation 3494 state wherein the library is
basically operational, but an auxiliary
aspect of it is inoperative, such as the
Convenience I/O Station.
delbuta DFS: ADSM-provided command (Ksh script)
to delete a fileset backup (dump) from
both ADSM storage (via 'dsmadmc
... DELete FIlespace') and the DFS
backup database (via 'bak deletedump').
'delbuta {-a Age|-d Date|-i DumpID|-s}
[-t Type] [-f FileName] [-n] [-p] [-h]'
where you can specify removal by age,
creation date, or individual Dump ID.
You can further qualify by type ('f' for
full backups, 'i' for incrementals, 'a'
for incrementals based upon a parent
full or incremental); or by a list
contained within a file. Use -n to see
a preview of what would be done, -p to
prompt before each deletion, -h to show
command usage.
Where: /var/dce/dfs/buta/delbuta
Ref: AFS/DFS Backup Clients manual,
chapter 7.
Delete ACcess See: dsmc Delete ACcess
DELETE ARCHCONVERSION Process seen in the server the first
time a node goes into the Archive GUI
when the archive data needs to be
converted, as when upgrading clients
between certain (unknown) levels. The
conversion operation can be very
time-consuming, depending upon the
amount of archive data in server storage
which needs to be converted.
Msgs: ANS5148W
Delete ARchive See: dsmc Delete ARchive
DELete ASSOCiation ADSM Server command to remove the
association between one or more clients
with a schedule. Syntax:
'DELete ASSOCiation Domain_Name
Schedule_Name Node_name [,...]'
Related: 'DEFine ASSOCiation',
'Query ASSOCiation'.
DELete BACKUPSET Server command to delete a backup set
prior to its natural expiration. A
Backup Set's retention period is
established when the set is created, and
it will automatically be deleted
thereafter. Syntax:
'DELete BACKUPSET Node_Name
Backup_Set_Name [BEGINDate=____]
[BEGINTime=____] [ENDDate=____]
[ENDTime=____]
[WHERERETention=N_Days|NOLimit]
[WHEREDESCription=____]
[Preview=No|Yes]'
Note that the node name and backup set
name are required parameters: you may
use wildcard characters such as "* *"
in those positions. And in using
wildcards in these positions you may be
able to get around the restriction of
not being able to delete the last
backupset.
See also: DELete VOLHistory
DELete DBVolume *SM server command to delete a database
volume, which is performed
asynchronously, by a process. *SM will
automatically move any (MB blocks) of
data on the volume to remaining database
space, thus consolidating it. Volume
deletion is only logical: the physical
database volume/file remains intact
within the operating system, but no
longer part of the TSM environment. (You
separately dispose of it, as desired,
using OS commands.)
Before proceeding, issue the Query DB
command: the difference between the
Available Space and Assigned Capacity
must be at least as large as the volume
that will be deleted - which only makes
sense, as you intend to take away that
much space. You would have such a
margin in one of the following ways:
- In the beginning, you allocated a
substantial database but had not done
EXTend DB operations to utilize all
of it.
- You perform DEFine DBVolume to add
that much space to the database,
without doing an EXTend DB.
- You perform a REDuce DB to release
that much assigned space in the
database.
The best approach is to delete volumes
in the reverse order that you added
them so as to minimize the possibility
of data being moved more than once in
the case of multiple volume deletions.
The best approach to removing a DB
volume is to first Reduce the database
and then delete a volume.
Syntax: 'DELete DBVolume VolName'
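The Query DB check described above reduces to a simple margin test; an illustrative sketch (function name and sample values in MB are mine):

```python
def db_volume_deletable(available_mb, assigned_mb, volume_mb):
    # Available Space minus Assigned Capacity must be at least as
    # large as the volume being deleted, since its data has to move
    # into the remaining unassigned database space.
    return (available_mb - assigned_mb) >= volume_mb

print(db_volume_deletable(10000, 8000, 2000))  # → True
```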
DELete DEVclass ADSM server command to delete a device
class. Syntax:
'DELete DEVclass DevclassName'
DELete DRive TSM server command to delete a drive
from a library. Syntax:
'DELete DRive LibName Drive_Name'
Example: 'DELete DRive OURLIBR
OURLIBR.3590_300'
Notes: A drive that is in use - busy -
cannot be deleted (you will get error
ANR8413E or the like). All paths
related to a drive must be deleted
before the drive itself can be deleted.
Use SHOW LIBrary to verify status.
Msgs: ANR8412I
DELete FIlespace (from server) TSM server command to delete a client
file space. The deletion of objects is
immediate: no later Expire Inventory is
required. The deletion of the filespace
takes place file by file, and can run
for days for large filespaces. Syntax:
'DELete FIlespace NodeName
FilespaceName [Type=ANY|Backup|
Archive|SPacemanaged]
[Wait=No|Yes] [OWNer=OwnerName]
[NAMETYPE=SERVER|UNIcode|FSID]
[CODEType=BOTH|UNIcode|
NONUNIcode]'
By default, results in an asynchronous
process being run in the server to
effect the database deletions, which you
can monitor via Query PRocess. You need
to wait for this to finish before, say,
doing a fresh incremental backup on this
filespace name. Use Wait to make the
deletion synchronous.
For Windows filespaces, you may have to
add NAMETYPE=UNICODE to get it to work.
WARNING: DO NOT RUN MORE THAN ONE DELETE
FILESPACE AT A TIME!!! Doing so could
jeopardize your *SM database. See entry
on "Database robustness". Also, do not
run a DELete FIlespace when clients are
active, as the entirety of the Delete
could end up in your Recovery Log as
client updates prevent the
administrative updates from being
committed.
Note that "Type=ANY" removes only Backup
and Archive copies, not HSM file copies:
you have to specify "SPacemanaged" to
effect the more extreme measure of
deleting HSM filespaces. Note also that
the deletion will be an intense database
operation, which can result in commands
stalling. Moreover, competing processes
- especially for the same node - will
likely need access to the same database
blocks, and collide with the message
"ANR0390W A server database deadlock
situation...". For this reason it is
best to run only one DELete FIlespace at
a time.
If interrupted: Files up to that point
are gone.
If a pending Restore is in effect, this
operation should not work.
Speed: rather time-consuming - we've
seen about 50 files/second.
See also: Delete Filespace (from client)
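Given the ~50 files/second figure above, a rough duration estimate is a one-liner. A sketch under that assumed rate; real rates vary widely with server hardware and database size:

```python
# Rough DELete FIlespace duration estimate, using the ~50 files/second
# rate observed above; actual rates vary by server and database.

def delete_filespace_hours(num_files, files_per_sec=50):
    return num_files / files_per_sec / 3600

print(round(delete_filespace_hours(10_000_000), 1))   # 55.6 hours
```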
Delete Filespace (from client) ADSM client command:
'dsmc Delete Filespace', which will
present a selection menu of file spaces
(though this requires "BACKDELete=Yes"
on 'REGister Node', which is contrary to
the default, so that you may need to do
it from the server).
Results in an *asynchronous* process
being run in the server to effect the
database deletions and inventory
expiration: you must wait for this to
finish before, say, doing a fresh
incremental backup on this filespace
name.
Speed: rather time-consuming - we've
seen about 50 files/second.
If a pending Restore is in effect, this
operation should not work.
See also: DELete FIlespace (from server)
Delete Filespace fails to delete it You may be intending to delete a node,
and are pursuing the preliminary steps of
deleting its filespaces. The Delete
Filespace may seem happy, but doing a
Query Filespace thereafter shows that
the filespace has not gone away. This is
likely a server software defect: a
server level upgrade may correct it.
Beyond that, you might try doing Delete
Filespace from the client, selecting the
filespace by relative number, and see if
that makes it go away. (From the server
side, 'DELete FIlespace <NodeName> *'
may work - but you may not want all that
node's filespaces deleted!) If not, do
SELECT * FROM VOLUMEUSAGE WHERE
NODE_NAME='__' and see if any volumes
show up, where the volumes may be in a
wacky state you may be able to correct;
or you may be able to delete the
volumes, assuming collocation by node
such that no other nodes' data are on
the volume, or where you can first
perform a Move to separate out the
node's data on that volume.
Your only other choice would be an
appropriate audit operation - which is
dicey stuff: you should contact TSM
Support.
DELete LIBRary ADSM server command to delete a library.
Prior to doing this, all the library's
assigned drives must be deleted.
WARNING!! Deleting a library causes all
of its volumes to be checked out! If you
unfortunately do this, you will need to
use the 'mtlib' AIX command to fix the
Category codes, and then use 'AUDit
LIBRary' to reconcile ADSM with the
library reality.
DELete LOGVolume ADSM server command to delete a Recovery
Log volume. ADSM will automatically
start a process to move any data on the
volume to remaining Recovery Log space,
thus consolidating it.
To delete a log volume, Query LOG needs
to show a Maximum Extension value at
least as large as the volume being
deleted.
Deletion is only logical: the physical
recovery volume/file remains intact.
The best approach is to delete volumes
in the reverse order that you added
them so as to minimize the possibility
of data being moved more than once in
the case of multiple volume deletions.
Syntax: 'DELete LOGVolume VolName'.
Delete Node You mean 'REMove Node'.
DELETE OBJECT See: File, selectively delete from *SM
storage; File Space, delete selected
files
DELete SCHedule, administrative Server command to delete an
administrative schedule.
Syntax:
'DELete SCHedule SchedName
Type=Administrative'
See also: DEFine SCHedule
DELete SCHedule, client Server command to delete a client
schedule.
Syntax:
'DELete SCHedule DomainName SchedName
[Type=Client]'
See also: DEFine SCHedule
DELete SCRipt Server command to delete a server script
or one line from it. Syntax:
'DELete SCRipt Script_Name
[Line=Line_Number]'
Deleting a whole script causes the
following prompt to appear:
Do you wish to proceed? (Yes/No)
(There is no prompt when simply deleting
a line.)
Deleting a line does not cause lines
below it to "slide up" to take the old
line number: all lines retain their
prior numbers.
Msgs: ANR1457I
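The no-renumbering behavior can be pictured as the script being a map from fixed line numbers to text, rather than an array. A hypothetical sketch (the script contents here are made up):

```python
# Sketch of DELete SCRipt line handling: deleting a line leaves a gap;
# the remaining lines keep their original numbers (no "slide up").
# The script contents are made up for illustration.
script = {5: "query db",
          10: "backup db devclass=dbback type=full",
          15: "delete volhistory type=dbbackup todate=today-7"}

del script[10]          # DELete SCRipt MYSCRIPT Line=10
print(sorted(script))   # [5, 15] -- line 15 is still line 15
```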
Delete selected files from ADSM See: Filespace, delete selected files
storage
DELete VOLHistory TSM server command to delete
non-storage pool volumes, such as those
used for database backups and Exports.
Syntax:
'DELete VOLHistory
TODate=MM/DD/YYYY|TODAY
|TODAY-Ndays
TOTime=HH:MM:SS|NOW
|NOW+hrs:mins|NOW-hrs:mins
Type=All|DBBackup [DEVclass=___]
|DBSnapshot [DEV=___]
|DBDump|DBRpf|EXPort
|RPFile
[DELETELatest=No|Yes]
|RPFSnapshot
[DELETELatest=No|Yes]
|STGNew
|STGReuse|STGDelete'
There is no provision for deleting a
single volume, sadly.
As of ADSMv3, you will get an error if
you try to delete all DBBackup copies:
you must keep at least 1, per
APARs IX86694 and IX86661. This is also
the case for DBSnapshot volumes: the
latest cannot be deleted.
Do not use this command to delete DBB
volumes that are under the control of
DRM: DRM itself handles that per
Set DRMDBBackupexpiredays. (If you are
paying for and using DRM, let it do what
it is supposed to: meddling jeopardizes
site recoverability.)
Do not expect *SM to delete old DBBackup
entries reflecting Incremental type
'BAckup DB' operations until the next
full backup is performed. That is, the
full and incrementals constitute a set,
and you should not expect to be able to
delete critical data within the set: the
whole set must be of sufficient age that
it can entirely go (msg ANR8448E).
"Type=BACKUPSET" is not documented but
may work, being a holdover from version
4.1 days. Also, there was a bug in the
4.2 days that prevented some backupsets
from being deleted with the DELete
BACKUPSET command; you could delete them
with 'DELete VOLHistory Type=BACKUPSET
Volume=<VolName> TODate=<date>'
Msgs: ANR2467I (reports number of
volumes deleted, but not volnames)
See also: Backup Series; Backup set,
remove from Volhistory
DELete Volume TSM server command to delete a volume
from a storage pool and, optionally, the
files within the volume, if the volume
is not empty. Syntax:
'DELete Volume VolName
[DISCARDdata=No|Yes]'
Specifying DISCARDdata=Yes will cause
the removal of all database information
about the files that were backed up to
that tape, and so the next incremental
backup will take all such files afresh.
(This is logical deletion: The volume is
not mounted. The physical data remains
on the tape, though logically
inaccessible. If you have security
and/or privacy concerns for such tapes
that had been used by TSM and are being
decommissioned from the library,
consider using a utility like the
tapeutil command's "erase" function to
physically eradicate the data.)
Note that the volume may not immediately
return to the scratch pool if REUsedelay
is in effect. Also, if the volume is
offsite, you should recall to onsite.
Multiple simultaneous: V3 experience
reveals no problems running more than
one data-discarding Delete Volume at a
time. I've run 5 at a time without
incident.
Deleting a primary storage pool copy of
a file also causes any copy storage pool
copies to be deleted (a form of instant
expiration of data, in that the primary
copy constitutes the stem of the
database entry). Ref: Admin Guide,
"Deleting Storage Pool Volumes".
Notes: No Activity Log or dsmerror.log
entry will be written as a result of
this action. Volumes whose Access is
Unavailable cannot be deleted.
If a pending Restore is in effect, this
operation should not work.
"ANS8001I Return code 13" indicates that
the command was invoked without
"DISCARDdata=Yes" and the volume still
contains data.
Messages: ANR1341I
See also: DELete VOLHistory
"deleted" In backup summary statistics, as in
"Total number of objects deleted:".
Refers to the number of files expired
because not found (or excluded) in the
backup operation. Those files will be
flagged in the body of the report with
"Expiring-->".
Deleted files, rebind See: Inactive files, rebind
Deleted from storage pool, messages ANR1341I, ANR2208I, ANR2223I
DELetefiles (-DELetefiles) Client option to delete files from the
client file system after Archive has
stored them on the server. Can also be
used with the restore image command and
the incremental option to delete files
from the restored image if they were
deleted from the file space after the
image was created.
Note particularly the statement that the
operation will not delete the file until
it is stored on the server. This affects
when in the sequence that the file will
actually be deleted. Remember that *SM
batches Archive data into Aggregates,
as defined by transaction sizings (TXN*
options) and so the file(s) will not be
deleted until the transaction is
completed.
DANGER!!: If your server runs with
Logmode Normal, you may lose files if
the server has to be restored, because
all transactions since the last server
database backup will be lost! Before
using DELetefiles in a site, carefully
consider all factors.
What about directories? The Archive
operation has no capability for deleting
directories, for several reasons...
First, directories may be the home of
objects other than the files being
deleted (e.g., symbolic links, special
files, unrelated files), and because in
the time it takes to archive files from
any given directory, new files may have
been introduced into it. If you want
directories deleted, you need to do so
thereafter, with an operating system
function.
See also: Total number of objects
deleted
Dell firmware advisory Customers report serious quality
problems with Dell firmware, as for the
Dell Powervault 136T. Beware.
DELRECORD Undocumented, unsupported command noted
in some APARs for deleting TSM db table
entries. Usage undefined.
See also: Database, delete table entry
Delta file As used in subfile backups.
Msgs: ANS1328E
Demand Migration The process HSM uses to respond to an
out-of-space condition on a file
system. HSM migrates files to ADSM
storage until space usage drops to the
low threshold set for the file
system. If the high threshold and low
threshold are the same, HSM attempts to
migrate one file.
DEMO_EXPIRED Identifier from TSM server msg, like:
ANR9999D mmsext.c(2195): ThreadId<55>
Invalid response from exit
(DEMO_EXPIRED).
which strongly suggests that you were
employing a demonstration copy of the
software, and that expired (e.g., for
edt and/or acsls software).
Density See: Tape density
DES See: ENCryptkey; PASSWORDDIR
-DEScription="..." Used on 'dsmc Archive' or 'dsmc Query
ARchive' or 'dsmc Retrieve' to specify a
text string describing the archived
file, which can be used to render it
unique among archived files of the same
name. Descriptions may be up to 254
characters long. Wildcard characters
may be used in query and retrieve
operations - but obviously not in
establishment of the archive entry.
Be aware that rendering the file unique
by employing a Description also
implicitly renders the path directory
unique such that it will also be
archived again if there isn't one of the
same description already stored in the
server. This is to say that the given
description is also applied to the path
directory.
If you do not specify a description with
the archive command, the default is to
provide a tagged date, in the form
"Archive Date: __________", where the
date value inserted is the system date,
always 10 characters long. (If your date
format uses a two-digit year, there will
be two blank spaces at the end of the
date.) Note that only the date is
provided - not the time of day. As an
extension of the above description, this
automatic attachment of a date-specific
description renders archived files
non-unique within a day, but unique
across days.
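The default-description behavior described above amounts to appending the system date padded to a fixed 10 characters. A hedged sketch; the exact internal formatting is an assumption inferred from the observed trailing blanks:

```python
# Sketch of the default Archive description: "Archive Date: " plus the
# system date padded to a fixed 10 characters. (The padding mechanism is
# an assumption inferred from the observed trailing blanks.)
def default_description(date_str):
    return "Archive Date: " + date_str.ljust(10)

print(repr(default_description("01/31/2005")))  # four-digit year fills all 10
print(repr(default_description("01/31/05")))    # two-digit year: 2 blanks left
```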
Description, on an Archive file Is set via -DEscription="..." in the
'dsmc archive' operation.
Note that you cannot change the archive
file Description after archiving.
DESTination A Copy Group attribute that specifies
the storage pool to which a file is
backed up, archived, or migrated. At
installation, ADSM provides three
storage destinations named BACKUPPOOL,
ARCHIVEPOOL, and SPACEMGTPOOL.
Destination for Migrated Files In output of 'dsmmigquery -M -D', an
(HSM) attribute of the management class which
specifies the name of the ADSM storage
pool in which the file is stored when it
is migrated.
Defined via MIGDESTination in management
class.
See: MIGDESTination
DEStroyed Access Mode for a primary storage pool
volume saying that it has been
permanently damaged, and needs a
'RESTORE STGpool' or 'RESTORE Volume'
(which itself will mark the volume
DEStroyed, msg ANR2114I).
Set: 'UPDate Volume ...
ACCess=DEStroyed'.
(Note that Copy Storage Pool volumes
cannot be marked DEStroyed.)
If there is a storage pool backup for
the volume, access to files that were on
the volume causes *SM to automatically
obtain them instead from the copy
storage pool.
Note that marking volumes as "Destroyed"
does not affect the status of the files
on the volumes: the next Incremental
Backup job will not back up those files
afresh. All that the Destroyed mode does
is render the volume unmountable.
See: Copy Storage Pool, restore files
directly from
But the volume or storage pool RESTORE
operation should still be performed, to
repopulate the primary storage pool with
the files.
See also: RESTORE Volume
-DETail Option for dsmc invocation, to supply
more information with subcommands...
In Query Backup, the last Modified and
Accessed timestamps are additionally
shown. (But no further file details you
might want to see, such as file owner.)
/dev/fsm Character special file, being the HSM
File Space Manager pseudo device,
apparently created when HSM comes up.
Should look like:
crw-rw-rwT 1 root sys 255,
0 Dec 5 12:28 /dev/fsm
If need to re-create, do:
'mknod /dev/fsm c 255 0'
'chmod 1666 /dev/fsm'
/dev/lb_ SCSI library supported by *SM device
driver, such as the 9710.
/dev/lmcp0 3494 Library Manager Control Point
special device, established by
configuring and making this "tape"
device Available via SMIT, as part of
installing the atldd (automated tape
library device driver). (Specifically,
'mkdev -l lmcp0" creates the dev in
AIX.)
/dev/mt_ In Unix systems, tape drives that are
used by *SM, but not supported by *SM
device drivers.
AIX usage note: When alternating use of
the drive between AIX and *SM, make one
available and the other unavailable,
else you will have usage problems. For
example, if the drive was most recently
used with *SM, do:
rmdev -l mt0; mkdev -l rmt0;
and then the inverse when done.
/dev/rmt_ Magnetic tape drive supported as a
GENERICTAPE device.
/dev/rmt_.smc For controlling the SCSI Medium Changer
(SMC), as on 3570, 3575, 3590-B11
Automatic Cartridge Facility.
/dev/rmt_.smc, creation When running 'cfgmgr -v' to define a
3590 library, the 3590's mode has to be
in "RANDOM" for the rmt_.smc file to be
created.
/dev/rop_ Optical drives supported by ADSM.
/dev/vscsiN See "vscsi".
Devclass The device class for storage pools: a
storage pool is assigned to a device
class. The device class also allows you
to specify a device type and the maximum
number of tape drives that it can ask
for.
For random access (disk), the Devclass
must be the reserved name "DISK".
For tape, the Devclass is whatever you
choose, via 'DEFine DEVclass'.
Used in: 'DEFine DBBackuptrigger',
'DEFine STGpool', 'Query Volume'
See also: Query DEVclass; SHow DEVCLass
Devclass, 3590, define See "DEFine DEVclass (3590)".
Devclass, rename There is no command to do this: you have
to define a new devclass, reassign to
it, then delete the old name.
Devclass, verify all volumes in See: SHow FORMATDEVCLASS _DevClass_
DEVCLASSES SQL table for devclass definitions.
Columns: DEVCLASS_NAME,
ACCESS_STRATEGY (Random, Sequential),
STGPOOL_COUNT, DEVTYPE, FORMAT,
CAPACITY, MOUNTLIMIT, MOUNTWAIT,
MOUNTRETENTION, PREFIX, LIBRARY_NAME,
DIRECTORY, SERVERNAME, RETRYPERIOD,
RETRYINTERVAL, LAST_UPDATE_BY,
LAST_UPDATE (YYYY-MM-DD HH:MM:SS.000000)
DEVCONFig Definition in the server options file,
dsmserv.opt
(/usr/lpp/adsmserv/bin/dsmserv.opt).
Specifies the name of the file(s) that
should receive device configuration
information and thus become backups when
such information is changed by the
server. Use 'BAckup DEVCONFig' to force
updating of the file(s).
Default: none
Ref: Installing the Server...
See also: Device config...
DEVCONFig server option, query 'Query OPTion'
devconfig.out In TSM v5 and higher the first line of
file must be: SET SERVERNAME ADSM
Device Specified via "DEVIce=DeviceName" in
'DEFine DRive ...'
device category As seen in 'mtlib -l /dev/lmcp0 -f
/dev/rmt2 -qD' on a 3494.
See: Category Codes
Device class See: Devclass
Device config file considerations During a *SM DB restore, if your libtype
is set to manual in your devconfig file,
check that SHARED=NO is not part of the
DEFINE LIBR statement.
See also: DEVCONFig
Device config file, determine name 'Query OPTions', look for "Devconfig"
Device config info, file(s) to "DEVCONFig" definition in the
receive as backup, define server options file, dsmserv.opt
(/usr/lpp/adsmserv/bin/dsmserv.opt).
The files will end up containing all
device configuration info that
administrators set up, in ADSM command
format, such as "DEFine DEVclass..." and
"DEFINE LIBRARY" command lines.
Device configuration, backup manually 'BAckup devconfig'
causes the info to be captured in
command line format in files defined on
DEVCONFIG statements in the server
options file, dsmserv.opt
(/usr/lpp/adsmserv/bin/dsmserv.opt).
Device configuration, restore Occurs as part of the process involved
in the following commands (run from the
AIX command line):
'dsmserv restore db'
'dsmserv loaddb'
'DSMSERV DISPlay DBBackupvolumes'
Device drivers, tape drives Under Unix:
Drives which are used with a name of the
form "/dev/rmtX" employ tape device
drivers supplied with the operating
system, which in AIX are stored in
/usr/lib/drivers. These are defined in
SMIT under DEVICES then TAPE DRIVES.
For example, IBM "high tape device"
drives such as 3590 have their driver
software shipped with the tape hardware.
Drives used with a name of the form
"/dev/mtX" employ tape device drivers
supplied by ADSM itself. These are
defined in SMIT under ADSM DEVICES. And
their library will be /dev/lb0.
DEVNOREADCHECK Undocumented VM opsys option: allows the
server to ignore the RING IN/NO RING
status of the input tape.
DEVType Operand of 'DEFine DEVclass', for
specifying device class. Recognized:
FILE, 4MM, 8MM, QIC, 3590, CARTridge,
OPTical.
Note: Devtypes can change from one TSM
version to another such that they cannot
be carried across in an upgrade. The
upgrade may nullify such DEVTypes. Thus,
in performing an upgrade it is wise to
check your DEVclasses.
df of HSM file system (AIX) Performing a 'df' command on the HSM
server system with the basic HSM-managed
file system name will cause the return
of a hdr line plus two data lines, the
first being the JFS file system and the
second being the FSM mounted over the
JFS. However, if you enter the file
system name with a slash at the end of
it, you will get one data line, being
just the FSM mounted over the JFS.
dfmigr.c Disk file migration agent.
See also: afmigr.c
DFS The file backup client is installable
from the adsm.dfs.client installation
file, and the DFS fileset backup agent
is installable from adsm.butadfs.client.
You need to purchase the Open Systems
Environment Support license for AFS/DFS
clients.
The DCE backup utilities are located in
/opt/dcelocal/bin.
See 'buta', 'delbuta'.
DFS backup to Solaris IBM reportedly has no plans to support
this type of client.
DFSBackupmntpnt Client System Options file option, valid
only when you use dsmdfs and dsmcdfs.
(dsmc will emit error message ANS4900S
and ignore the option.)
Specifies whether you want ADSM to see
a DFS mount point as a mount point (Yes,
which is the default) or as a directory
(No):
Yes ADSM considers a DFS mount point to
be just that: ADSM will back up
only the mount point info, and not
enter the directory.
This is the safer of the two
options, but limits what will be
done.
No ADSM regards a DFS mount point as a
directory: ADSM will enter it and
(blindly) back up all that it finds
there.
Note that this can be dangerous, in
that use of the 'fts crmount'
command is open to all users, who
through intent or ignorance can
mount parts or all of the local
file system or a remote one, or
even create "loops".
Default: Yes
By default, when doing an incremental
backup on any DFS mount point or DFS
virtual mount point, TSM does not
traverse the mount points: it will only
back up the mount point metadata. To
back up a mount point as a regular
directory and traverse the mount point,
set DFSBackupmntpnt No before doing the
backup. If you want to back up a mount
point as a mount point and also back up
the data below it, first back up the
parent directory of the mount point and
then back up the mount point separately
as a virtual mount point.
See also: AFSBackupmntpnt
DFSInclexcl Client System Options file option, valid
only when you use dsmdfs and dsmcdfs.
(dsmc will emit error message ANS4900S
and ignore the option.)
Specifies the path and file name of
your DFS include-exclude options file.
DHCP database, back up Do not attempt to back this up directly:
it can be made to produce a backup copy
of its database periodically
(system32/dhcp/backup), and then that
copy can be backed up with TSM
incremental backup. You also can make a
copy of the DHCP registry setup info in
a REG file for backup. The key is
located in
HKEY_LOCAL_MACHINE\System\
CurrentControlSet\Services\DHCPServer\
Configuration.
Ref: http://support.microsoft.com/
support/kb/articles/Q130/6/42.asp
Diamond icon in v3 GUI Restore A four-sided diamond icon to the left of
a file in the v3 GUI shown in a Restore
selection tree display indicates that
the file is Inactive. Shown to the left
of a directory, indicates that the
directory contains inactive files.
DIFFESTIMATE Option in the TDPSQL.CFG file. Prior to
performing a database backup, the TDP
for SQL client must 'reserve' the
required space in the storage pool. It
*should* get the estimate right for full
backups and transaction log backups
because the space used in the database
and transaction logs is available from
SQL Server. But: For differential
backups, there is no way of knowing how
much data is to be backed up until the
backup is complete. The TDP for SQL
client therefore uses the percentage
specified in the DIFFESTIMATE option
to calculate a figure based on the total
space used. E.g., for a database of 50GB
with a DIFFESTIMATE value of 20, TDP
will reserve 10GB (20% of 50GB). A
"Server out of data storage space" error
will arise if the actual backup exceeds
the calculated estimate. If the storage
pool is not big enough to accommodate the
larger backup, or if other backup data
prevents further space being reserved,
this error will occur. Setting
DIFFESTIMATE to 100 will ensure that
there is always sufficient space
available, but will prevent space in
your primary storage pool being utilised
by other clients and may force the
backup to occur to the next storage pool
in the hierarchy unnecessarily. It is
worth setting DIFFESTIMATE to the
maximum proportion of the data you can
envisage ever being backed up during a
differential backup.
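The reservation arithmetic above is straightforward. A sketch reproducing the 50GB/20% example (illustrative only):

```python
# Sketch of the DIFFESTIMATE reservation arithmetic: TDP for SQL reserves
# DIFFESTIMATE percent of the database's total used space.
def reserved_gb(db_used_gb, diffestimate_pct):
    return db_used_gb * diffestimate_pct / 100

print(reserved_gb(50, 20))    # 10.0 -- the 50GB / 20% example above
print(reserved_gb(50, 100))   # 50.0 -- always sufficient, but wasteful
```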
Directories, empty, and Selective Selective Backup does not back up empty
Backup directories.
Directories, empty, restoring See: Restore empty directories
Directories and Archive ADSM Archive does not save directory
structure: the only ADSM facility which
does is Incremental Backup (Selective
Backup does not, either).
See also: DIRMc
Directories and Backup A normal Incremental Backup will *not*
back up directories whose timestamp has
changed since the last backup. This is
because it would be pointless to do so:
*SM already has the information it
needs about the directory itself in
order to recreate it, and restoral of a
directory reconstructs it, with
contemporary datestamps. An -INCRBYDate
Backup, in contrast, *will* back up
pre-existing directories whose
timestamps it sees as newer, because it
knows nothing about them having been
previously backed up, by virtue of
simple date comparison.
See also: Directory performance; DIRMc
Directories and binding to management
class The reason that directories are bound to
the management class with the longest
retention is that there is no guarantee
that the files within the directory will
all be bound to the same management
class. A simple example: suppose I have
a directory called C:\ANDY with two
files in it, like this:
C:\
ANDY\
PRODFILE.TXT
TESTFILE.TXT
and that the include/exclude list
specifies two different management
classes:
INCLUDE C:\ANDY\PRODFILE.TXT MC90DAYS
INCLUDE C:\ANDY\TESTFILE.TXT MC15DAYS
So which management class should C:\ANDY
be bound to? The question becomes even
more interesting if a new file is
introduced to the C:\ANDY directory and
an include statement binds it to, say,
the MC180DAYS management class.
Binding directories to the management
class with the longest retention
(RETOnly) is how TSM can assure that the
directory is restorable no matter which
management class the files under that
directory are bound to.
If all management classes have the same
retention, TSM will choose the one first
in alphabetical order. (APAR IY11805
talked about first choosing by most
recently updated mgmtclass definition,
but that appears false.)
Ordinary directory entries - those with
only basic info - will be stored in the
database, but entries with more info may
end up in a storage pool.
The way around this is to use DIRMc to
bind the directories to a management
class that resides on disk.
Alternatively one could create the disk
management class such that it has the
longest retention, and thus negate the
need to code DIRMc.
One "gotcha": be careful when creating
new management classes or updating
existing management classes. You
will always want to ensure that the
*disk* management class has the longest
retention.
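The selection rule described above - longest RETOnly wins, with ties broken by alphabetical order - can be sketched as follows. This is a simplification for illustration; TSM's actual internals are not public:

```python
# Sketch of the directory management-class choice: longest RETOnly wins;
# on a tie, the name first in alphabetical order is chosen.
def directory_mgmtclass(classes):
    # classes maps management class name -> RETOnly days
    return min(classes, key=lambda name: (-classes[name], name))

print(directory_mgmtclass({"MC90DAYS": 90, "MC15DAYS": 15}))  # MC90DAYS
print(directory_mgmtclass({"ALPHA": 30, "BETA": 30}))         # ALPHA
```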
Directories and Restore Whereas ordinary restore operations
reinstate the original file permissions,
directory permissions are only
restored when using the SUbdir=Y option
of 'dsmc' or the Restore Subdirectory
Branch function of dsm GUI.
Directories may be in the *SM db When a file system is restored, you may
see *SM rebuild the directory structure
long before any tapes are mounted. It
can do this when the directory structure
is basic such that it can be stored as a
database object (much like many empty
files can be). In such cases, there is
no storage pool space associated with
directories, and no tape use. With more
complex directory structures (Unix
directories with Access Control Lists,
Windows directories, and the like), the
extended information associated with
directories exceeds the basic database
attributes data structure, and so the
directory information needs to be stored
in a storage pool. That is where the
DIRMc option comes in: it allows you to
control the management class that will
get associated with the directory
information that needs to get stored
in a storage pool.
See also: DIRMc
Directories missing in restore Perhaps you backed them up with a DIRMc
which resolved to a shorter retention
than the files in the directories.
(Later ADSM software should prevent
this.) This is why in the absence of
DIRMc, directories are bound to the
copygroup with the longest retention
period - to prevent such loss.
Directories visible in restore, but Simplest cause: In a GUI display, you
files not shown need to click on the folder/directory
to open it, to see what's inside.
This could otherwise be a permissions
thing: you are attempting to access
files that were backed up by someone
other than you, and which do not belong
to you.
Directory--> Leading identifier on a line out of
incremental Backup, reflecting the
backup of a directory entry. Note that
with basic directory structures, as on
Unix systems, *SM is able to store
directory info in the server database
itself because the info involves only
name and basic attributes: the contents
of a directory are the files themselves,
which are handled separately. Thus,
directory backups usually do not have to
be in a storage pool. Note that the
number of bytes reflected in this report
line is the size of the directory as it
is in the file system. Because *SM is
storing just name and attributes, it is
the actual amount that *SM stores rather
than the file system number that will
contribute to the "Total number of bytes
transferred:" value in the summary
statistics from an Archive or Backup
operation.
Note that the number will probably be
less than the sum reflected by including
the numbers shown on "Directory-->"
lines of the report, in that *SM stores
only the name and attributes of
directories.
See also: Rebinding-->
Directory performance Conventional directories are simply
flat, sequential files which contain a
list of file names which cross-reference
to the physical data on the disk. As
primitive data structures, directories
impede performance, as lookups are
serial, take time, and involve lockouts
as the directory may be updated. As
everyone finds, on multiple operating
systems, the more files you have in a
directory, the worse the performance for
anything in your operating system going
after files in that directory.
The gross rule of thumb is that about
1000 files is about all that is
realistic in a directory. Use
subdirectories to create a topology
which is akin to an equilateral triangle
for best performance.
Also, from a 2.1 README:
"Tens of thousands of files in a single
random-ordered directory can cause
performance slowdowns and server
session timeouts for the Backup/Archive
client, because the list of files must
be sorted before *SM can operate on
them. Try to limit the number of files
in a single random-ordered directory,
or increase the server timeout period."
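Assuming the ~1000-entries-per-directory rule of thumb above, the number of subdirectory levels needed for a balanced layout can be estimated. A hypothetical sketch; the 1000 figure is only a gross guideline:

```python
# Sketch of the rule-of-thumb capacity of a balanced directory tree:
# with at most per_dir entries in any directory, how many levels of
# subdirectories does a given file population need?
def levels_needed(total_files, per_dir=1000):
    levels, capacity = 1, per_dir
    while capacity < total_files:
        levels += 1
        capacity *= per_dir
    return levels

print(levels_needed(800))         # 1 -- fits in a single directory
print(levels_needed(1_000_000))   # 2
print(levels_needed(50_000_000))  # 3
```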
Directory permissions restored Occurred in some V2 levels. Per ADSM,
incorrectly "it is working as designed and was
documented in IC07282". Circumvent by
using dsmc restore with -SUbdir=Yes on
the command line or dsm Restore by
Subdirectory Branch in the GUI to
restore the directory with the correct
permissions.
Directory separator character '/' for Unix, DOS, OS/2, and Novell.
See also ":" volume/folder separator for
Macintosh.
Directory timestamp preservation, *SM easily preserves the timestamp of
Windows restored directories through use of the
Windows API function SetFileTime().
DIRMc Client System Options file (dsm.sys)
backup option to specify the Management
Class to use for directories - and only
directories, not the things inside them:
use Include to specify the management
class for non-directory objects. DIRMc
is for Backup only; not for Archive. See
ARCHMc for Archive.)
Syntax: DIRMc ManagementClassName
Placement: Must be within server stanza
With some client types (e.g., Unix), the
directory structure is simple enough
that directory information can be stored
in the ADSM database such that storage
pool space is not required for it: the
use of DIRMc does not change this.
However, where a client uses richer
directories or when an ACL (Access
Control List) is associated with the
directory, there is too much information
and so it *does* need to be stored in a
storage pool. (Note that this same
principle pertains to all simple
objects, and thus empty files as well.)
The DIRMc option was originated because,
without it, the directories would be
bound to the management class that has a
backup copygroup with the longest
retention period (see below). In many
sites that was causing directories to go
directly to tape resulting in excessive
tape mounts and prolonged retrievals.
(Additional note: Beyond being bound to
the management class with the longest
backup retention, if multiple management
classes have the same creation date,
directories will be bound to the
management class earliest in
alphabetical order, per APAR IY11805.)
Performance: You could use DIRMc to put
directory data into a separate
management class such that it could be
on a volume separate from the file data
and thus speed restorals, particularly
if the volume is disk. (In a file system
restoral, the directory structure is
restored first.)
Systems known to have data-rich
directory information which must go to a
storage pool: DFS (with its ACLs),
Novell, Windows NTFS.
Default: the Management Class in the
active Policy Set which has the longest
retention period (RETOnly); and in the
case of there being multiple management
classes with the same RETOnly, the
management class whose name is highest
in collating sequence gets picked. (The
number of versions kept is not a
factor.) Thus, in the absence of DIRMc,
database and storage pool consumption
can be aggravated by retaining
directories after their files have
expired.
If used, be sure to choose a management
class which retains directories as long
as the files in them.
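The default binding rule described above can be pictured with a small sketch: given hypothetical management class names and RETOnly values (invented sample data, and note a real RETOnly can also be NOLimit), pick the longest retention, breaking ties by the name highest in collating sequence:

```shell
# Sort by retention descending, then name descending; the first row
# is the class directories would be bound to under this rule.
pick_dir_mc() {
  sort -k2,2nr -k1,1r | awk 'NR==1 {print $1}'
}
printf '%s\n' 'MCA 365' 'MCZ 365' 'MCB 30' | pick_dir_mc   # → MCZ
```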
NOTE: As of ADSMv3, DIRMc is not as
relevant as it once was, because of
Restore Order processing (q.v.), which
creates an interim, surrogate directory
structure and restore/retrieves the
actual directory information whenever it
is encountered within the restore order
(the order in which data appears on the
backup media). However, the restoral
ultimately has to retouch those
surrogate directories, and you don't
want that to happen by wading through a
set of data tapes unrelated to the
restored data (where the dirs ended up
by virtue of longest retention). So use
of DIRMc is still desirable for file
systems whose directories end up in
storage pools.
See also: Directories may be in the *SM
db; Restore Order
DIRMc, query In ADSM do 'dsmc Query Options': under
GENERAL OPTIONS see "dirmc".
In TSM do 'dsmc show options' and
inspect the "Directory MC:" line.
If your client options do not specify an
override, the value will say 'DEFAULT'.
-DIrsonly Client option, as used with Retrieve, to
process directories only - not files.
DISAble Through ADSMv2, the command to disable
client sessions. Now DISAble SESSions.
DISAble EVents ADSMv3+ server command to disable the
processing of one or more events to one
or more receivers (destinations).
Syntax:
'DISAble EVents ALL[,CONSOLE][,ACTLOG]
[,EVENTSERVER][,FILE]
[,SNMP][,TIVOLI][,USEREXIT]
EventName[,ALL][,INFO]
[,WARNING][,ERROR][,SEVERE]
NODEname=NodeName[,NodeName...]
SERVername=ServerName
[,ServerName]'
where:
TIVOLI Is the Tivoli Management
Environment (TME) as a receiver.
Example: 'DISAble EV ACTLOG ANE4991 *'
DISAble SESSions Server command to prevent client nodes
from starting any new Backup/Archive
sessions.
Current client node sessions are allowed
to complete. Administrators can
continue to access the server, and
server-to-server operations are not
affected.
Duration: Does not survive across a TSM
server restart: the status is reset to
Enable.
Determine status via 'Query STatus' and
look for "Availability".
Msgs: ANR2097I
See also: DISAble; DISABLESCheds;
ENable SESSions; Server, prevent client
access
DISABLENQR See: No Query Restore, disable
DISABLESCheds Server option to specify whether
administrative and client schedules are
disabled during a TSM server recovery
scenario. Syntax:
DISABLESCheds Yes | No
Default: No
Query: Query OPTion, "DisableScheds"
Disaster recovery See: Copy Storage Pool and disaster
recovery
Disaster Recovery Manager See: DRM
Disaster recovery, short scenario, - Restore the server node from a
AIX system mksysb image;
- Restore the other volume groups
(including the ones used for the adsm
database, log, storage pool, etc.)
from a savevg;
- Follow the instructions & run the
scripts so wonderfully prepared by
DRM. (The DRM script knows everything
about the database size, volhist,
which volumes were considered offsite,
etc.)
DISK Predefined Devclass name for random
access storage pools, as used in
'DEFine STGpool DISK ...'.
With DISK TSM keeps track of each (4 KB)
block in the DISK volumes, which means
maintaining a map of all the blocks,
searching and updating that map in each
storage pool reference.
Population change over time will result
in fragmentation, which increases disk
access overhead. (Why Sequential media
is better.)
Realize that Reclamation occurs on
serial media, and not random DISK,
meaning that the space formerly occupied
by small files in a multi-file Aggregate
cannot be reclaimed, wasting space.
REUsedelay is not applicable to DISK
volumes: your data will probably not be
recoverable because the space vacated by
expired files, where whole Aggregates
expired, is reused on disk, whereas such
space remains untouched on tape.
Restoral performance may be impaired if
using random-access DISK rather than
sequential-access FILE or tape: you may
see only one restore session instead of
multiple. That is, with DISK there is no
Multi-session Restore. See:
http://www-1.ibm.com/support/
docview.wss?uid=swg21144301
DISK is a liability in a situation where
you have to restore your TSM database,
for lack of REUsedelay, as you are then
forced to audit all of your (random)
disk volumes, which adds a lot of time
and uncertainty. With FILE, this is
avoided.
DISK storage pools are best used for
only first point of arrival on a TSM
system: the data must migrate to
sequential access storage (FILE, tape)
to be safe and optimal.
Ref: Admin Guide table "Comparing Random
Access and Sequential Access Disk
Devices"
See also: D2D; FILE; Multi-session
restore
Disk Pacing Term to describe AIX's control of Unix's
traditional inclination to buffer any
amount of file data, no matter how
large. This AIX limit thus prevents
memory overloading.
Disk stgpool not being used See: Backups go directly to tape, not
disk
Disk storage pool See: Storage pool, disk
See also: Backup storage pool, disk?;
Backup through disk storage pool
Disk Table The TSM database and recovery log
volumes, as can be reported via
'SHow LVMDISKTABLE' (q.v.).
DiskXtender A hierarchical storage product by
Legato. For it to work with TSM, you
need to have file dsm.opt in the DX
home directory.
DISKMAP ADSM server option for Sun Solaris.
Specifies how ADSM performs I/O to a
disk storage pool. Either:
Yes To map client data to memory
(default);
No Write client data directly to disk.
The more effective method for your
current system needs to be determined by
experimentation.
Disks supported ADSM supports any disk storage device
which is supported by the operating
system.
Dismount tape, whether mounted by Via Unix command:
ADSM or other 'mtlib -l /dev/lmcp0 -d -f /dev/rmt?'
'mtlib -l /dev/lmcp0 -d -x Rel_Drive#'
(but note that the relative drive
method is unreliable).
Msgs: "Demount operation Cancelled -
Order sequence." probably means that the
drive is actively in use by TSM, despite
your impression.
See also: Mount tape
Dismount tape which was mounted by 'DISMount Volume VolName'
*SM (The volume must be idle, as revealed in
'Query MOunt'.)
DISMount Volume *SM server command to dismount an idle,
mounted volume. Syntax:
'DISMount Volume VolName'.
If volume is in use, ADSM gives message
ANR8348E DISMOUNT VOLUME: Volume ______
is not "Idle".
The dismount will not complete if the
*SM server is brought down right after
the command is issued (at least on 3590E
drivers): it appears that the drive
wants to exchange status with the app
before it actually does the deed.
See also: Query MOunt
DISPLAYLFINFO See: Storage Agent and
logging/accounting
-DISPLaymode ADSMv3 dsmadmc option for report
formatting, with output being in either
"list" or "table" form. Prior to this,
the output from Administrative Query
commands was displayed in a tabular
format or a list format, depending on
the column width of the operating
system's command line window, which made
it difficult to write scripts that
parsed the output from the Query
commands as the output format was not
predictable. Choices:
LISt The output is in list format,
with each line consisting of a
row title and one data item,
like...
Description: Blah-blah
TABle The output is in tabular format,
with column headings.
See also: -COMMAdelimited; SELECT
output, columnar instead of keyword
list; -TABdelimited
DISTINCT SQL keyword, as used with SELECT, to
yield only distinct, unique, entries, to
eliminate multiple column entries of the
same content. This is most useful when
doing a Join. Form:
SELECT DISTINCT <ColumnName(s)>
FROM <TableName>
Note that DISTINCT has the effect of
taking the first occurrence of each row,
so is no good for use with SUM().
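A hedged example (the administrator ID/password are placeholders; verify the table and column names against your server level). The local sort -u run merely demonstrates that DISTINCT's effect is ordinary row de-duplication:

```shell
# Typical use, run against a real server:
#   dsmadmc -id=admin -password=secret \
#     "SELECT DISTINCT node_name FROM filespaces"
# The de-duplication semantics, shown locally:
printf 'NODE1\nNODE1\nNODE2\n' | sort -u   # → NODE1, NODE2
```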
DLT Digital Linear Tape, whose name derives
from its development by Digital
Equipment Corporation in the 1980s.
Employs a single-hub cartridge with 1/2"
tape where the external end is equipped
with a plastic leader loop, (which has
been the single largest source of DLT
failures). Data is recorded on DLTtape
in a serpentine linear format. DLT
technology has lacked servo tracks on
the tape as Magstar and LTO have, making
for poor DLT start-stop performance as
it has to fumble around in
repositioning, which can greatly prolong
backups, etc. DLT is thus intended to
be a streaming medium, not
start-stop. Super DLTtape finally
provides servo tracking, in the form of
Laser Guided Magnetic Recording (LGMR),
which puts optical targets on the
backside of the tape.
DLT is not an open architecture
technology - only Quantum makes it - a
factor which has caused customers to
gravitate toward LTO instead.
http://www.dlttape.com/
http://www.overlanddata.com/PDFs/
104278-102_A.pdf
http://www.cartagena.com/naspa/LTO1.pdf
See also: SuperDLT
DLT and repositioning DLT (prior to SuperDLT) lacks absolute
positioning capability, and so when you
need to perform an operation (Audit
Volume) which is to skip a bad block or
file, it must rewind the tape and then
do a Locate/Seek.
DLT and start/stop operations *SM does a lot of start/stop operations
on a tape, and DLT has not been designed
for this (until SuperDLT). Whenever the
DLT stops, it has to back up the tape a
bit ("backhitch") before moving forward
to get the tracking right. Sometimes, it
seems, it doesn't get it right anyway,
resulting in I/O errors. A lot of
repositioning "beats up" the drive, and
can result in premature failure.
See: Backhitch
DLT barcode label specs Can be found in various vendor manuals,
such as the Qualstar TLS-6000 Technical
Services Manual, section 2.3.1, at
www.qualstar.com/146035.htm#pubpdf
DLT cartridge inspection/leader repair See Product Information Note at
www.qualstar.com/146035.htm#pubpdf
DLT cleaner tape When a DLT clean tape is used, it writes
a tape mark 1/20th down the tape. The
next clean uses up 1/20 more tape. When
you have used it 20 times, putting it
back in the drive doesn't clean
anything. You can degauss it to erase
the tape marks and then reuse it up to 3
times, though that can result in the
tape head being dirtied rather than
cleaned.
DLT drives All are made by Quantum. Quantum
bought the technology from DEC, which at
the time called them TKxx tape drives.
DLT Forum Is on the Quantum Web Site:
http://www.dlttape.com/index_wrapper.asp
DLT IV media specs 1/2 inch data cartridge
Metal particle formulation for high
durability.
1,828 feet length
30 year archival storage life
1,000,000 passes MTBF
35 GB native capacity on DLT 7000, 20GB
on DLT 4000
40 GB native capacity on DLT 8000
DLT Library sources http://www.adic.com
DLT media life DLT tapes are spec'd at 500,000 passes.
In general, the problem that usually
occurs with DLT is not tape wear, but
contamination. The cleaner the
environment, the better chance the tapes
will have of achieving their full wear
life...some 38 years. Streaming will
prematurely wear the tapes.
DLT tapes density DLT 4000 are 20GB native, 40GB
"typical compression".
Manually load a tape and look very
carefully at the density lights on the
DLT drive. DLT tapes can do 35GB, but
for backwards compatibility they can do
lower densities. The drive decides on
the density when the tape is first
written to and that density is used
forever more. It is possible to
"reformat" the media to a higher
density:
0. Make sure there is no ADSM data on
the tape and the volume has been
deleted from the library and ADSM
volume list. Mark the drive as
"offline" in ADSM.
1. Mount the tape manually in the drive
2. Use the "density select" button to
choose 35GB.
3. At the UNIX system: 'dd if=/dev/zero
of=/dev/rmt/X count=100'
(/dev/rmt/X is the real OS device
driver for the drive)
4. Dismount the tape.
5. Mark the drive as online.
6. Get ADSM to relabel the tape.
This works because the DLT drive will
change the media density IF it is
writing at the beginning of the tape.
This should result in getting > 35GB on
DLT tapes.
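The erase in step 3 can be wrapped so the target drive is a parameter. This only builds the command; /dev/rmt/1 is a placeholder, so double-check the raw device name before actually running anything like this:

```shell
# Build (but do not run) the step-3 erase command for a given device.
erase_cmd() {
  printf 'dd if=/dev/zero of=%s count=100\n' "$1"
}
erase_cmd /dev/rmt/1   # → dd if=/dev/zero of=/dev/rmt/1 count=100
```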
DLT vs. Magstar (3590, 3570) drives DLT tapes are clumsy and fragile;
With a DLT the queue-up time is much
longer than any of the magstars, and the
search time is even worse;
DLT drive heads wear faster.
DLT also writes data to the very edges
of a tape causing the tape edges to
wear.
Both have cartridges consisting of a
single spool, with the tape pulled out
via a leader. DLTs are prone to load
problems, especially as the drive and
tape wear: there is a little hook in the
drive that must engage a plastic loop in
the tape leader, and when the hook comes
loose from its catch, a service call is
required to get it repaired. And, of
course, the plastic leader loop breaks.
Customers report Magstar throughput much
faster than DLT, helped by the servo
tracks on tape that DLT lacks.
Magstar-MP's are optimized for
start-stop actions, and that is much of
what ADSM will do to a drive. DLT is
optimized for data streaming.
If a MP tape head gets off alignment
during a write operation, the servo
track reader on the drive stops writing
and adjusts. DLT aligns itself during
the load of the tape. If it gets off
track during a write it has no way to
correct and could overwrite data.
New technology DLT drives can read older
DLT tapes, whereas Magstar typically
does not support backward compatibility.
DLT4000 Capacity: 20GB native, 40GB "typical
compression".
Transfer rate: 1.5 MB/sec
DLT7000 Digital Linear Tape drives, often found
in the STK 9370. Can read DLT4000 tapes.
Tape capacity: 35 GB.
Transfer rate: 5 MB/sec
Beware that they have had power supply
problems (there are 2 inside each
drive): Low voltage on those power
supplies will cause drives to fail to
unload. And always make sure to be at
the latest stable microcode level.
See also: SuperDLT
DLT7000 cleaning There is a cleaning light, and it comes
on for two different things: "clean
requested", and "clean required". There
is a tiny cable that goes from the
drives back to the robot. With hardware
cleaning on, that is how the "clean
required" gets back to the robot and
causes it to mount the cleaning tape. A
"clean request" doesn't. That is, the
light coming on does not always result
in cleaning being done.
DLT7000 compression DLT7000 reportedly come configured to
maximize data thruput, and will
automatically fall out of compression to
do this. If you want to maximize data
storage, then you need to modify the
drive behavior. See the hardware
manual.
DLT7000 tape labels Reportedly must be a 1703 style label
and have the letter 'd' in the lower
left corner.
DLT8000 Digital Linear Tape drives. DLT type IV
or better cartridges must be used.
Can read DLT4000 tapes.
Tape capacity: 40 GB native.
Transfer rate: 6 MB/s native.
DM services Unexplained Tivoli internal name for HSM
under TSM, as seen in numerous
references in the Messages manual series
9000 messages, apparently because it
would be too confusing for its Tivoli
Space Manager to have the acronym "TSM".
"DM" probably stands for Data Migrator.
.DMP File name extension created by the
server for FILE devtype scratch volumes
which contain Database dump and unload
data.
Ref: Admin Guide, Defining and Updating
FILE Device Classes
See also: .BFS; .DBB; .EXP; FILE
DNSLOOKUP TSM 5.2+ compensatory server option for
improving the performance of Web Admin
and possibly other client access by
specifying: DNSLOOKUP NO
Background: DNS lookup control is
provided in web (HTTPD) servers in
general. (In IBM software, the control
name is DNSLOOKUP; in the popular Apache
web server, the control is
HostnameLookups.) Web servers by
default perform a reverse-DNS query on
the requesting IP address before
servicing the web request. This
reverse-DNS query (C gethostbyaddr call)
is used to retrieve the host and domain
name of the client, which is logged in
the access log and may be used in
various ways. The problem comes when DNS
service is impaired. It may be the case
that your OS specifies multiple DNS
servers, and one or more of them may not
actually be DNS servers, or may be down,
or unresponsive. This can result in a
delay of up to four seconds before
rotating to the next DNS server. Other
causes of delay involve use of a
firewall or DHCP with no DNS server
(list) specified. You can gauge if you
have such a DNS problem through the use
of the 'nslookup' or 'host' commands.
Note that DNS lookup problems affect the
performance of all applications in your
system, and should be investigated, as
the use of gethostbyaddr is common.
With DNSLOOKUP No specified, only the
IP address is recorded.
Msgs: ANR2212W
See also: Web Admin performance issues
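One rough way to gauge lookup delay from a shell (the 'host' utility and the target address are assumptions; substitute an IP that actually appears in your sessions):

```shell
# Time a reverse-DNS query; a healthy resolver answers in well under
# a second, while a dead DNS server in the list shows up as a
# multi-second stall.
t0=$(date +%s)
host 192.0.2.1 >/dev/null 2>&1 || true
t1=$(date +%s)
echo "reverse lookup took $((t1 - t0)) second(s)"
```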
Documentation, feed back to IBM Send comments on manuals, printed and
online, to:
starpubs@sjsvm28.vnet.ibm.com
Domain See: Policy Domain
DOMain Client User Options file (dsm.opt)
option to specify the default file
systems in your client domain which are
to be eligible for incremental backup,
as when you do 'dsmc Incremental' and do
not specify a file system.
DOMain is ignored in Archive and
Selective Backup.
The DOMain statement can be coded
repeatedly: the effect is additive. That
is, options file line "DOMain a:"
followed by line "DOMain b:" is the same
as coding "DOMain a: b:". Note that
Domains may also be specified in the
client options set (cloptset) defined on
the server, which are also additive,
preceding what is coded in the client's
options file.
When a file system is named via DOMain,
all of its directories are always backed
up, regardless of Include/Exclude
definitions: the Include/Exclude specs
affect only eligibility of *files*
within directories.
AIX: You cannot code a name which is not
one coded in /etc/filesystems (as you
might try to do in alternately mounting
a file system R/O): you will get an
ANS4071E error message.
Default: all local filesystems, except
/tmp.
(Default is same as coding "ALL-LOCAL",
which includes all local hard drives,
excluding /tmp, and excludes any
removable media drives, such as
CD-ROM, and excludes loopback file
systems and those mounted by
Automounter. Local drives do not
include NFS-mounted file systems.)
Verify: 'dsmc q fi' or 'dsmc q op'.
Override by specifying file systems on
the 'incremental' command, as in:
'dsmc Incremental /fs3'
Note that instead of a file system you
can code a file system subdirectory,
defined previously via the
VIRTUALMountpoint option.
Do not confuse DOMain with Policy
Domain: they are entirely different!
If employing Client Schedules, you
should consider coding the file systems
on the schedule's OBJects parameter
rather then on client DOMain statements:
this will permit changing the list at
will (centrally), and not have to
restart the client scheduler, which gets
the list afresh each time it triggers.
See also: File systems, local;
SYSTEMObject
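A sketch of that approach (domain name, schedule name, file system list, and start time are all invented):

```
DEFine SCHedule STANDARD NIGHTLY -
  ACTion=Incremental -
  OBJects="/fs1 /fs2 /fs3" -
  STARTTime=21:00
```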
Domain list, in GUI From the GUI menu, choose "edit" ->
preferences; there you'll find a
"backup" tab which will give you access
to your domain options, and a
self-explanatory "include-exclude" tab.
-DOMain=____ Client command line option to specify
file system name(s) which augment those
specified on the Client User Options
file DOMain statement(s).
For example: If your options file
contains "DOMain /fs1 /fs2" and you
invoke a backup with -DOMain="/fs3 /fs4"
then the backup will operate on /fs1,
/fs2, /fs3, and /fs4.
Note that both DOMain and -DOMain are
ignored if you explicitly list file
systems to be backed up, as with
'dsmc i /fs7 /fs8'.
DOMAIN.Image Client Options File (dsm.opt) option for
those clients supporting Image Backups.
Specifies the mounted file systems and
raw logical volumes to be included by
default when Backup Image is performed
without file system or raw logical
volume arguments. Syntax:
DOMAIN.Image Name1 [Name2 ...]
See also: dsmc Backup Image; MODE
domdsm.cfg The default name for the TDP For Domino
configuration file. Values in that file
are established via the 'domdsmc set'
command. Note that if the file contains
invalid values, TDP will use default
values.
"Preference" info, by default, comes
from this cfg - not domdsm.opt .
Remember that dsm.opt is the TSM API
config file.
You can point to an alternate
configuration file using the DOMI_CONFIG
environment variable.
domdsmc TDP Domino utility command.
domdsmc archivelog TDP Domino command to back up all the
logs that have not yet been archived,
regardless of their date. The Domino
server keeps track of which logs have
been archived and passes that
information to DP for Domino.
domdsmc query dbbackup TDP Domino command to report on
previously backed up Domino database
instances. If it fails to find any, it
may be that the domdsmc executable does
not have the set-user-id bit on: perform
Unix command 'chmod 6771 domdsmc' to
turn it on.
See IBM KB article 1109089.
Domino See: domdsm.cfg; Lotus Domino; Tivoli
Storage Manager for Mail
DOS/Win31 client Available in ADSM v.2, but not v.3.
dpid2 daemon Serves as a translator between SMUX and
DPI (SNMP Multiplexor Protocol and
Distributed Protocol Interface) traffic.
Make sure that it is known to the snmp
agent, as by adding a 'smux' line to
/etc/snmpd.conf for the dpid2 daemon;
else /var/log could fill with msgs:
dpid2 lost connection to agent
dpid2 smux_wait: youLoseBig
[ps2pe: Error 0]
Dr. Watson errors (Windows) May be caused by having old options in
your options file, which are no longer
supported by the newer client.
DRIVE FORMAT value in DEFine DEVclass to
indicate that the maximum capabilities
of the tape drive should be used.
Note that this is not as reliable or as
definitive as more specific values.
See also: 3590B; 3590C; FORMAT
Drive A drive is defined to belong to a
previously-defined Library.
Drive, define to Library See: 'DEFine DRive'
Drive, update 'UPDate DRive ...' (q.v.)
Drive, vary online/offline 'UPDate DRive ...' (q.v.)
Drive cleaning, excessive Can be caused by bad drive microcode, as
seen with DLT7000. The microcode does
not record the calibration track onto
tapes correctly. So the drives detect a
weak signal and think that cleaning is
needed.
Drive mounts count See: 3590 tape mounts, by drive
Drive status, from host 'mtlib -l /dev/lmcp0 -f /dev/rmt1 -qD'
DRIVEACQUIRERETRY TSM4.1 server option for 3494 sharing.
Allows an administrator to set the
number of times the server will retry to
acquire a drive. Possible values:
0 To retry forever. This is
the default.
-1 To never retry.
1 to 9999 The number of times the
server will retry.
See also: 3494SHARED; MPTIMEOUT
Driver not working - can't see tape Has occurred in the case of an operating
drives system like Solaris 2.7 booted in 64-bit
mode, but the driver being 32-bit.
DRIVES SQL table for info about sequential
media drives. Elements, as of TSM5.2:
LIBRARY_NAME:
DRIVE_NAME:
DEVICE_TYPE:
ONLINE: YES/NO
READ_FORMATS: Like: 3590E-C,3590E-B
WRITE_FORMATS: Like: 3590E-C,3590E-B
ELEMENT:
ACS_DRIVE_ID:
DRIVE_STATE Possible values:
EMPTY Not in current use or pending
utilization.
LOADED A tape is loaded in the
drive.
RESERVED A tape mount is pending.
UNKNOWN Probably due to drive hardware
problems, as seen in your OS
error log.
ALLOCATED_TO:
LAST_UPDATE_BY: <AdminName>
LAST_UPDATE: <Date> <Time>
CLEAN_FREQ:
DRIVE_SERIAL Oddly empty, despite
Query DRive F=D showing
the serial.
Note: Does not reveal the media mounted
on a drive.
See also: PATHS; Unknown
Drives, maximum to use at once See: MOUNTLimit
Drives, not all in library being used As in you find processes waiting for
(Insufficient mount points drives (do 'Query SEssion F=D' and find
ANR0535W, ANR0567W) some sessions waiting for mount points),
though you believe you have enough
drives in the library to handle the
requests...
- Do Query SEssion, Query PRocess, and
Query MOunt to see if resources are
simply busy.
- Most obviously, do 'Query DRive' and
make sure all are online.
- As of TSM5, also do Query PATH to
check for good definitions and
On-Line: Yes.
- In the server, do 'SHow LIBrary' and
see if it thinks all the drives are
available. Inspect the "mod=" value:
if you have a mixture of model
numbers, some of your drives might not
get used. A further consideration is
that using new drives with old server
software (as with inappropriate
definitions such that TSM thinks they
are older drives) could result in
erratic behavior, as in perhaps balky
dismounting, etc. Review TSM
documentation on how to best define
such devices for use in your library,
and appropriate levels of software and
device drivers.
- If all your drives get rotationally
used, but all cannot be used
simultaneously, then it's a DEVclass
MOUNTLimit problem (and be aware that
MOUNTLimit=DRIVES is not always
reliable, so may be better to
explicitly specify the number).
- If not all drives get rotationally
used, some have a problem: Attempt to
use 'mtlib' and 'tapeutil'/'ntutil'
commands on those.
- Check your client MAXNUMMP value.
- Watch out for the devclass for your
drives somehow having changed and thus
being incompatible with your storage
pools.
- If just certain drives never get used,
then there is a problem specific to
those drives...
- If a 3494 or like library, look for
an Intervention Required condition,
caused by a load/unload failure or
similar, which takes the drive out
of service.
- At the library manager station,
check the availability status of the
drives. (They can be logically made
unavailable there.)
- Check the front panel of the drives,
looking for "ONLINE=0" or like
anomaly.
- In AIX, do
'lsdev -C -c tape -H -t 3590' and
see if all drives have status of
Available.
- Are you trying to use a new tape
technology with a server level which
doesn't support it such that the drive
devclass is GENERICTAPE rather than
the actual type, needed to mount and
use the tapes that go with that drive
technology?
- In a more obscure case, a 3494/3590
customer reports this being caused by
the cleaning brush on the drive not
functioning correctly: replaced,
cleaned, no more problem.
- Assure that your MAXscratch value is
appropriate.
Keep in mind that various TSM tasks
simply cannot be done in parallel.
Drives, number of in 3494 Via Unix command:
'mtlib -l /dev/lmcp0 -qS'
Drives, query 'Query DRive [LibName] [DriveName]
[Format=Detailed]'
DRM TSM Disaster Recovery Manager.
In AIX environment, does 2 major things:
1. Automates (mostly) the vaulting
process for moving/tracking copy storage
pool tapes and DB backup tapes offsite
and onsite. If you have a tape robot and
do a lot of tape vaulting you can
either:
a) Have a very expensive ADSM
administrator do all the checking and
status updates daily for vaulting
tapes;
b) Have a very expensive UNIX dude
write scripts to automate the process
(and of course maintain them); or
c) Pay for DRM and get the function
ready to go out of the box.
2. Generates the "recovery plan" file
that is a concatenated series of
scripts and instructions that tell you
how to rebuild your *SM server in an
offsite, DR environment (which is the
first thing you have to do in a disaster
situation - you have to get your *SM
server back up at your recovery site
before you can start using *SM to
recover your applications.)
Ref: Admin Guide manual; Tivoli Storage
Management Concepts redbook
Competing product: AutoVault, at
CodeRelief.com - a very inexpensive
alternative, no TSM hooks.
See also: ORMSTate
DRM, add primary, copy stgpools SET DRMPRIMSTGPOOL
SET DRMCOPYSTGPOOL
DRM, prevent from checking tape label To keep DRM from checking the tape label
before ejecting a tape:
Set DRMCHECKLabel No
DRM and ACS libraries DRM won't do checkouts from ACS
libraries. (You can write scripts to
work around it.)
DRM considerations Numerous customers report encountering
inconsistencies with DRM, as in doing
Query DRMedia and finding 18 of 50
offsite volumes not listed. This may
have to do with changing status of vault
retrieve volumes which somehow are not
checked-in in time. When the volume
history is truncated to the point where
this state change was made, the volume
is 'lost'.
- Make sure that you use DRM to expire
*SM database backup volumes.
- Watch out for human error: In using
MOVe DRMedia to return tapes, if a
volser is mistyped for a volume that
is still physically offsite but has
just gone to vault retrieve state, the
volume will be deleted and left at the
vault: it's not in a DRM state anymore
and you have to do manual inventory to
find it.
- The offsite vendor can mistakenly omit
a tape to be returned and ops runs
MOVe DRMedia anyway and the tape is
"lost".
- A volume inadvertently left in the
tape library and not sent offsite
cannot be returned.
- A MOVe DRMedia done by mistake, or an
automated script which is not in tune
with retention policies can result in
inconsistencies.
As always, keeping good records will
help uncover and rectify problems.
If an automated library, after you
explode the DRM files, you may have to
edit DEVICE.CONFIGURATION.FILE to put
actual location and volser of your DB
backup tape. That's so the DR script
(and the server) can find it.
DRMDBBackupexpiredays See: Set DRMDBBackupexpiredays
DRMEDIA SQL: TSM database table recording
disaster recovery media, which is to say
database backup volumes and copy storage
pool volumes. Columns, with samples:
VOLUME_NAME: 000004
STATE: MOUNTABLE (always this unless
MOVe DRMedia is done)
UPD_DATE: 2000-11-12 15:11:29.000000
LOCATION:
STGPOOL_NAME: OUR.STGP_COPY
LIB_NAME: OUR.LIB
VOLTYPE: CopyStgPool DBBackup
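Given the columns above, a hedged query sketch against this table:

```
select volume_name, state, stgpool_name from drmedia
```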
dscameng.txt American English message text file.
The DSM_DIR client environment variable
should point to the directory where the
file should reside.
dsierror.log *SM API error log (like dsmerror.log)
where information about processing
errors is written. Because buta is
built upon the API, use of buta also
causes this log to be created.
The DSMI_LOG client environment variable
should point to the directory where you
want the dsierror.log to reside.
If unspecified, the error log will be
written to the current directory.
The error log for client root activity
(HSM migration, etc.) will be
/dsierror.log.
See also: DSMI_LOG; "ERRORLOGRetention";
tdpoerror.log
____.dsk VMware virtual disk files, such as
win98.dsk, linux.dsk, etc. Backing up
such files per se is not the best idea,
and is worse if the .dsk area is active.
The best course is to run the backup
from within the guest operating system.
dsm The GUI client for backup/archive,
restore/retrieve.
Contrast with 'dsmc' command, for
command line interface.
AIX: /usr/lpp/adsm/bin/dsm
IRIX: /usr/adsm/dsm
Solaris: /opt/IBMDSMba5/solaris/dsm
and symlink from /usr/sbin/dsm
Beware: ADSM install renders this cmd
setGID bin, which thwarts superuser
uses. Assure setGID chmod'ed off.
As of TSM 5.3, the GUI is Java-based,
utilizing new command 'dsmj'.
Ref: Using the UNIX Backup-Archive
Client, chapter 1.
DSM_CONFIG Client environment variable to point to
the Client User Options file (dsm.opt)
for users who create their own rather
than depend upon the default file
/usr/lpp/adsm/bin/dsm.opt.
Ref: "Installing the Clients" manual.
See also: -optfile
DSM_DIR Officially, the client environment
variable to point to the directory
containing dscameng.txt, dsm.sys,
dsmtca, and dsmstat. But is also
observed by /etc/rc.adsmhsm as the
directory from which HSM should run
installfsm, dsmrecalld, and dsmmonitord.
Ref: "Installing the Clients" manual.
DSM_LOG Client environment variable to point to
the *directory* where you want the
dsmerror.log to reside. (Remember to
code the directory name, not the file
name.)
If undefined, the error log will be
written to the current directory.
Beware symbolic links in the path, else
suffer ANS1192E.
Advice: Avoid using this if possible,
because it forces use of a single error
log file, which can make for permissions
usage problems across multiple users,
and muddy later debugging in having the
errors from all manner of sessions
intermixed in the file.
Ref: "Installing the Clients" manual.
See also: ERRORLOGName option
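Where DSM_LOG is used anyway, it can be
set up in a login script; a minimal
sketch (the directory path here is an
example, not a TSM convention):

```shell
# Point dsmerror.log at a per-user directory; DSM_LOG must name
# a *directory*, not a file (see the entry above).
DSM_LOG="$HOME/tsm/logs"; export DSM_LOG
mkdir -p "$DSM_LOG"
ls -d "$DSM_LOG"
```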
dsm.afs The dsm.afs backup style provides the
standard ADSM user interface and
backup/restore model to AFS users, which
unlike plain dsm will back up AFS Access
Control Lists for directories. Users
can have control over the backup of
their data, and can restore individual
files without requiring operator
intervention. Individual AFS files
are maintained by the ADSM system, and
the ADSM management classes control file
retention and expiration. Additional
information is needed in order to
restore an AFS server disk.
Contrast with buta, which operates on
entire AFS volumes.
dsm.ini (Windows client) The ADSMv3 Backup/Archive GUI introduced
an Estimate function. It collects
statistics from the ADSM server, which
the client stores, by server, in the
dsm.ini file in the backup-archive
client directory. (Comparable file in
the Unix environment is .adsmrc.)
Client installation also creates this
file in the client directory.
Ref: Client manual chapter 3 "Estimating
Backup processing Time"; ADSMv3
Technical Guide redbook
This file is also being used, in at
least a provisional manner, to make the
GUI configurable, as in limiting what an
end user can do. See: GUI, control
functionality
See also: .adsmrc; Estimate; TSM GUI
Preferences
dsm.opt file See Client User Options file.
AIX: /usr/lpp/adsm/bin/dsm.opt.
IRIX: /usr/adsm/dsm.opt.
Solaris: /usr/bin (so located due to the
Solaris packaging mechanism wherein an
install will delete old files, and
/usr/bin was deemed "safe" - but not
really the best choice)
The DSM_CONFIG client environment
variable may point to the options file
to use, instead of using the options
file in the default location.
dsm.opt.smp file Sample Client User Options file.
Use this to create your first dsm.opt
file.
dsm.sys file See: Client System Options File.
AIX: /usr/lpp/adsm/bin/dsm.sys
IRIX: /usr/adsm/dsm.sys
Solaris: /usr/bin (so located due to the
Solaris packaging mechanism wherein an
install will delete old files, and
/usr/bin was deemed "safe" - but not
really the best choice)
The DSM_DIR client environment variable
may be used to point to the directory
where the file to be used resides.
Beware there being multiple dsm.sys
files, as in AIX maybe having:
/usr/tivoli/tsm/client/api/bin/dsm.sys
/usr/tivoli/tsm/client/api/bin64/dsm.sys
/usr/tivoli/tsm/client/ba/bin/dsm.sys
dsm.sys.smp file Sample Client System Options file.
Use this to create your first dsm.sys
file. In /usr/lpp/adsm/bin
dsmaccnt.log This is the *SM server accounting file
on an AIX system, which is written to
after 'Set ACCounting ON' is done.
The file is located in the directory
specified via environment variable
DSMSERV_ACCOUNTING_DIR (q.v.) in Unix
environments, or Windows Registry key.
If that's not specified, then the
directory will be that specified by the
DSMSERV_DIR environment variable; and if
that is not specified, then it will be
the directory wherein the TSM server was
started.
The accounting log file remains in an
open state while the TSM server is
running and accounting is turned on.
Separate section "ACCOUNTING RECORD
FORMAT" near the bottom of this document
describes accounting record fields.
(Odd note: It wasn't until TSM 5.1 that
code was added to the server to
serialize access to the accounting log
file...when it was discovered that data
in it was being corrupted by
simultaneous writes from multiple server
process threads.)
See also: Accounting...
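The directory search order described
above can be sketched in shell (an
illustration of the stated rules, not
TSM's actual code; paths are examples):

```shell
# Resolve the dsmaccnt.log directory per the precedence above:
# DSMSERV_ACCOUNTING_DIR, else DSMSERV_DIR, else startup directory.
accounting_dir() {
  if [ -n "$DSMSERV_ACCOUNTING_DIR" ]; then
    echo "$DSMSERV_ACCOUNTING_DIR"
  elif [ -n "$DSMSERV_DIR" ]; then
    echo "$DSMSERV_DIR"
  else
    pwd
  fi
}
DSMSERV_ACCOUNTING_DIR=/var/adsm
echo "accounting log: $(accounting_dir)/dsmaccnt.log"
```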
dsmadm The GUI command for server
administration of Administrators,
Central Scheduler, Database, Recovery
Log, File Spaces, Nodes, Policy Domains,
Server, and Storage Pools.
Contrast with the 'adsm' command, which
is principally for client management.
dsmadmc *SM administrative client command line
mode for server cmds, available as a
client on all *SM systems where the
*SM client software has been installed.
(On Windows clients, dsmadmc is not
installed by default: you have to
perform a Custom install, marking the
admin command line client for
installation. After a basic install, you
can go back and install dsmadmc by
reinvoking the install, choosing Modify
type, there marking just the admin
command line client for installation.
See IBM doc item 1083434.)
The dsmadmc command starts an
"administrative client session" to
interact with the server from a remote
workstation, as described in the *SM
Administrator's Reference. In Unix, the
version level preface and command output
all go to Stdout.
Note that the dsmadmc command is
neutral: you can use it on any platform
type to communicate to a TSM server on
any platform type. The dsmadmc invoker
does not have to be a superuser.
To enter console mode (display only):
'dsmadmc -CONsolemode'
To enter mount mode (monitor mounts):
'dsmadmc -MOUNTmode'
To enter batch mode (single command):
'dsmadmc -id=____ -pa=____ Command...'
'dsmadmc -id=____ -pa=____ macro Name'
To enter interactive mode:
'dsmadmc -id=YourID -pa=YourPW'
Options:
-CONsolemode Run in Console mode, to
display TSM server msgs
but allow no input.
-DATAOnly=[No|Yes] (TSM 5.2+) To
suppress the display of headers
(product version, copyright,
ANS8000I command echo, column
headers) and ANS8002I trailer.
Error messages and some info
type messages (e.g. ANR1462I)
are not suppressed.
Ref: IBM site Technote 1143748
-DISPLaymode=[LISt|TABle]
The interface is normally
adaptive, displaying output in
tabular form if the window is
wide enough, otherwise reverting
to Identifier:Value form. This
option allows you to force query
output to one or the other,
regardless of the window width.
Note that, regardless of window
width, query commands may be
programmed with a fixed column
width (example: Query STatus).
-ID=____ Specify administrator ID.
-Itemcommit Say that you want to commit
commands inside a macro as
each command is executed.
This prevents the macro
from failing if any command
in it encounters "No match
found" (RC 11) or the like.
See also: COMMIT
-MOUNTmode Run in Mount mode, to
display all mount messages,
such as ANR8319I, ANR8337I,
ANR8765I.
No input allowed.
-NOConfirm Say you don't want TSM to
request confirmation before
executing vital commands.
Example: Select, "This SQL
query might generate a big
table, or take a long time.
Do you wish to continue ?
Y/N"
-OUTfile=____ All terminal commands and
responses are to be
captured in the named
file, as well as be
displayed on the screen.
The file will not reflect
command input prompting
but will record the cmd.
Use this rather than Unix
'dsmadmc | tee <File>',
which doesn't work.
-PASsword=____ Specify admin password.
-Quiet Don't display Stdout msgs on
the screen; Stderr msgs still
appear.
-SERVER=____ Select a server other than
the one in this system's
client options file.
(Not avail. in Windows:
use -TCPServeraddress
instead.)
-COMMAdelimited
Specifies that any tabular
output from a server query is to
be formatted as comma-separated
strings rather than in readable
format. This option is intended
to be used primarily when
redirecting the output of an SQL
query (SELECT command). The
comma-separated value format is
a standard data format which can
be processed by many common
programs, including
spreadsheets, databases, and
report generators. Note that
where values themselves contain
commas, TSM will enclose the
value in quotes, e.g. "123,456".
-TABdelimited
Specifies that any tabular
output from a server query is to
be formatted as tab-separated
strings rather than in readable
format. This option is intended
to be used primarily when
redirecting the output of an SQL
query (SELECT command). The
tab-separated value format is a
standard data format which can
be processed by many common
programs, including
spreadsheets, databases, and
report generators. Tabs make
parsing easier compared to
commas, in that it is not
uncommon for values to contain
commas.
You can also specify any option allowed
in the client options file. Alas, there
is no option to specify a file
containing a list of commands to be
invoked.
The dsmadmc client command is obviously
useless if the server is not up. See my
description of the ANS8023E message.
Notes: Prior to TSM 5.2 and the
-DATAOnly option, there is no way to
suppress headers or ANS800x messages
that appear in the output - you are left
to remove them after the fact. You might
use ODBC, but that accesses just the TSM
db, not any TSM commands.
You can suppress the "more..." scrolling
prompt only by running a command in
batch mode (adding the command to the
end of the line) and piping the output
to cat: 'dsmadmc SomeCmd | cat'.
Install note: dsmadmc may not install by
default on Windows; see the Custom
install note above.
Ref: Admin Ref chapter 3: "Using
Administrative Client Options".
Ref: IBM site Technote 1111773
See also: -Itemcommit
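The parsing point behind -TABdelimited
vs. -COMMAdelimited can be demonstrated
without a TSM server (the sample field
values below are made up):

```shell
# TSM quotes any value containing a comma (e.g. "123,456"), which a
# naive split on commas then breaks apart; tabs have no such problem.
printf '"123,456",FULL\n' | awk -F','  '{ print NF }'   # 3 fields - wrong
printf '123,456\tFULL\n'  | awk -F'\t' '{ print NF }'   # 2 fields - right
```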
dsmapi*.h *SM API header files, for compiling your
own API-based application:
dsmapifp.h
dsmapips.h
dsmapitd.h
In TSM 3.7, lives in
/usr/tivoli/tsm/client/api/bin/sample/
They are best included in C source
modules in the following order:
#include "dsmapitd.h"
#include "dsmapifp.h"
#include "dapitype.h"
#include "dapiutil.h"
#include "dsmrc.h"
See also: libApiDS.a
dsmapitca The ADSM API Trusted Communication Agent
For non-root users, the ADSM client uses
a trusted client (dsmtca) process to
communicate with the ADSM server via a
TCP session. This dsmtca process runs
setuid root, and communicates with the
user process (API) via shared memory,
which requires the use of semaphores.
The DSM_DIR client environment variable
should point to the directory where the
file should reside.
dsmattr HSM: Command to set or display the
recall mode for a migrated file. Syntax:
'dsmattr
[-RECAllmode=Normal|Migonclose|
Readwithoutrecall]
[-RECUrsive] FileName(s)|Dir(s)'
See "Readwithoutrecall".
dsmautomig (HSM) Command to start threshold migration for
a file system. dsmmonitord checks the
need for migration every 5 minutes (or
as specified on the CHEckthresholds
Client System Options file (dsm.sys))
and if needed will automatically invoke
dsmautomig to do threshold migrations.
Query: ADSM 'dsmc Query Options' or TSM
'dsmc show options', look for
"checkThresholds".
As such an automigration runs, it will
result in many sessions with the TSM
server.
Note that persistent dsmautomig
invocations are an indication that HSM
thinks the file system is running out
of space, despite what a 'df' may show.
Deleting files or extending the file
system has been shown to stop these
"dry heaves" dsmautomig invocations.
See "dsmmonitord", "automatic
migration", "demand migration".
dsmBeginQuery API function.
dsmBindMC API call to bind the file object to a
management class. It does so by
scanning the Include/Exclude list for a
spec matching the object, wherein you
may have previously coded a management
class for a filespec. What the call
returns reflects what it has found -
which is to say that the dsmBindMC call
does not itself specify the Management
Class.
You'll end up with the default
management class if the dsmBindMC
processing did not find a spec for the
object in the Include/Exclude list.
It would be nice if there were a call
which were as definitive as the -ARCHMc
spec for the command line client, but
such is not the case.
dsmc Command-line version of the client for
backup-restore, archive-retrieve.
Invoking simply 'dsmc' puts you into the
command line client, in interactive mode
(aka "loop mode"). (Simply invoking dsmc
does not - yet - result in a session
with the TSM server: it is the issuance
of a subcommand which causes the session
to be initiated.)
Contrast with 'dsm' command, for
graphical interface (GUI).
To direct to another server, invoke like
this: 'dsmc q fi -server=Srvr', or
'dsmc i -server=Srvr /home'. (Note
that the options *must* be coded AFTER
the operation.)
AIX: /usr/lpp/adsm/bin/dsmc
IRIX: /usr/adsm/dsmc
NT: Reference the B/A Client manual for
Windows manual, section "Starting a
Command Line Session", where you can
Start->Programs->TSM folder->Command
Line icon; or use the Windows command
line to shuffle over to the TSM
directory and issue the 'dsmc' command.
Solaris: /opt/IBMDSMba5/solaris/dsmc,
and symlink from /usr/sbin/dsmc
Note that you can run a macro file with
dsmc: put various commands like
Incremental into a file, then run as
'dsmc macro MacroFilename'.
Beware: ADSM install renders this cmd
setGID bin, which thwarts superuser
uses. Assure setGID chmod'ed off.
Ref: Using the UNIX Backup-Archive
Client, chapter 7.
See also: dsmc LOOP
dsmc and wildcards (asterisk) New TSM users in at least a Unix
environment may not realize that how you
utilize a wildcard may cause results to
be wholly different than they expect.
For example: A novice user goes into a
directory and wants to see all the files
that are in the backup storage pool for
that directory, so they enter:
dsmc query backup *
But what does that really do? The
asterisk is exposed to the Unix shell
that is controlling the user session,
and it expands the asterisk into a list
of all the files in the directory. So
the query will end up trying to ask the
TSM server for information on the files
currently in the directory - which may
have no correlation with what is in the
backup storage pool. (This theoretical
example sidesteps the TSM complication
that it may disallow such wildcarding,
with error message ANS1102E; but we're
trying to explore a point here.)
So how do you then pose the request to
the TSM server that it show all backed
up files from the directory? By one of
the following constructs (where this is
a Unix example):
dsmc query backup '*'
dsmc query backup \*
dsmc query backup "*"
By quoting or escaping the asterisk, the
shell passes it, intact, to the dsmc
command, which responds by formulating
an API request to the TSM server for all
files contained within the stored
filespace for this directory. And this
yields the expected results.
The rule here may be expressed as:
* refers to the file system
'*' refers to the filespace
Note that the above does *not* apply to
the Windows environment: the Windows
command processor does not expand
wildcards, but rather just passes them
on to the invoked program as-is.
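The shell-expansion behavior described
above is easy to see with plain shell
commands, no TSM involved:

```shell
# Unquoted * is expanded by the shell before the command ever runs;
# a quoted * reaches the command literally.
d=$(mktemp -d) && cd "$d"
touch a.txt b.txt
echo *      # the shell expands this to: a.txt b.txt
echo '*'    # the command receives a literal asterisk: *
```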
dsmc Archive To archive named files. Syntax:
'Archive [-ARCHMc=managementclass]
[-DELetefiles]
[-DEscription="..."]
[-SErvername=StanzaName]
[-SUbdir=No|Yes]
[-TAPEPrompt=value]
FileSpec(s)'
The number of FileSpecs is limited to
20; see "dsmc command line limits".
Wildcard characters in the FileSpec(s)
can be passed to the Archive command for
it to expand them: this avoids the shell
implicitly expanding the names, which
can result in the command line arguments
limit being exceeded. For example:
instead of coding:
dsmc Archive myfiles.*
code:
dsmc Archive 'myfiles.*' or...
dsmc Archive myfiles.\*
Note that the archive operation will
succeed even if you don't have Unix
permissions to delete the file after
archiving.
It is important to understand that an
Archive operation is deemed "explicit":
that you definitely want all the
specified files sent...WITHOUT
EXCEPTION. Because of this, message
ANS1115W and a return code 4 will be
produced if you have an Exclude in play
for an included object. (Due to the
preservational nature of Archive, you
very much want to know if some file was
not preserved.)
It is advisable to make use of the
DEscription, as it renders the archived
object unique - but be aware that doing
so also forces the path directories to
be archived once more, if the
description is unique.
Archiving a file automatically archives
the directories in the path to it.
As of ADSMv3.1 mid-1999 APAR IX89638
(PTF 3.1.0.7), archived directories are
not bound to the management class with
the longest retention.
Note that you cannot change the archive
file Description after archiving.
See also: DELetefiles; dsmc Archive
dsmc Backup Image TSM3.7+ client command to create an
image backup of one or more file spaces
that you specify. Available for major
Unix systems (AIX, Sun, HP). This is a
raw logical volume backup, which backs
up a physical image of a volume rather
than individually backing up the files
contained within it. This is achieved
with the TSM API (which must be
installed). This backup is totally
independent of ordinary Backup/Restore,
and the two cannot mingle.
Image backups need to be run as "root".
Syntax:
'dsmc Backup Image <options> File_Spec'
where File_Spec identifies either the
name of the file system that occupies
the logical volume (more specifically,
the mount point directory name), when
that file system is mounted; or the name
of the logical volume itself, when it
has no mounted file system. If the
volume contains a file system, you must
specify by file system name: that allows
you to supplement the image backup with
Incremental or Selective backups via the
MODE option. It also assures that the
mounted file system, if any, is
dismounted before the image backup is
performed.
The client and server both must be at
least 3.7.
Advisory: When a file system is
specified, the operation will try to
unmount the file system volume, remount
it read-only, perform the backup, and
then remount it as it was. This can be
disruptive, and is problematic if the
backup is interrupted.
Use the Include.Image option to include
an image for backup, or to assign a
specific management class to an image
object. Syntax:
'dsmc Backup Image [Opts] Filespec(s)'
Ref: Redbook "Tivoli Storage Manager
Version 3.7 Technical Guide";
IBM online info item swg21153898
Msgs: ANS1063E; ANS1068E
See also: MODE
dsmc Backup NAS Contacts the TSM EE server for it to
initiate an image backup of one or more
file systems belonging to a Network
Attached Storage (NAS) file server. The
NAS file server performs the outboard
data movement. A server process starts
in order to perform the backup.
See also: NDMP; NetApp
dsmc BACKup SYSTEMObject Windows client command to back up all
valid system objects, allowing you to
perform a backup of System Objects
separate from ordinary files. Note that
an Incremental Backup will ordinarily
also back up System Objects.
Verification: The backup log will show
messages like "Backup System Object:
Event log", "Backup System Object:
Registry".
Note that this command cannot be
scheduled.
dsmc CANcel Restore ADSMv3 client command to cancel a
Restore operation.
See also: CANcel RESTore
dsmc command line limits By default, the number of FileNames
which can be specified on the dsmc
command line is limited to 20 (message
ANS1102E);
and the TSM backup-archive client's
command-line parsing is limited to 2048
total bytes (message ANS1209E The input
argument list exceeds the maximum length
of 2048 characters.). The intent is to
protect hapless customers from
themselves - but that of course
penalizes everyone, deprives the product
of the flexibility that its Enterprise
status warrants, and prevents it from
scaling to the capabilities of the
operating system environment which the
customer chose for large-scale
processing.
(In AIX, at least, the command line
length limit is defined by the ARG_MAX
value in /usr/include/sys/limits.h:
exceeding that results in the typical
shell error "arg list too long".)
As of the TSM 5.2.2 Unix client, this
limitation is relieved in the form of
the -REMOVEOPerandlimit command line
option.
In other environments, there are some
circumventions you can employ:
- Use the -FILEList option.
- In the Unix environment, use the
'xargs' command to efficiently invoke
the command with up to 20 filespecs
per invocation, via the -n20 option.
Within an interactive session (which you
invoked by entering 'dsmc' with no
operands):
A physical line may not contain more
than 256 characters, and may be
continued to a maximum of 1500
characters.
Ref: B/A Clients manual, "Entering
client commands"
See also: -FILEList; -REMOVEOPerandlimit
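The xargs circumvention mentioned above
can be sketched as follows, with 'echo'
standing in for the dsmc invocation so
the example runs anywhere:

```shell
# Split 45 file names into batches of at most 20 operands each;
# each echo invocation below represents one dsmc invocation.
d=$(mktemp -d)
for i in $(seq 1 45); do touch "$d/f$i"; done
ls "$d" | xargs -n20 echo | wc -l   # 3 batches: 20 + 20 + 5
```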
dsmc Delete ACcess TSM client command to revoke access to
files that you previously allowed others
to access via 'dsmc SET Access'.
Syntax: 'dsmc Delete ACcess [options]'
You will be presented with a list from
which to choose. (As such, this is a
quick, convenient way to display all
access permissions.)
dsmc Delete ARchive TSM client command to delete Archived
files from TSM server storage. Syntax:
'dsmc Delete ARchive [options] FileSpec'
In more detail:
'dsmc Delete ARchive
[-NOPRompt]
[-DEscription="..."]
[-PIck]
[-SErvername=StanzaName]
[-SUbdir=No|Yes]
FileSpec(s)'
If you do not qualify the deletion with
a unique Archive file description, all
archived files of that name will be
deleted.
The number of FileSpecs is limited to
20; see "dsmc command line limits".
The delete actually only marks the
entries for deletion: it is
Expire Inventory which actually removes
the entries and reclaims space. But the
marking is irreversible: there is no
customer-provided means for un-marking
the files; and the marking does not show
up in the Archives table. Thus, a Select
on the Archives table continues to show
the files exactly as before the Delete
Archive.
dsmc Delete Filespace ADSM client command to delete filespaces
from *SM server storage. Syntax:
'dsmc Delete Filespace [options]'
You will be presented with a list of
filespaces to choose from.
dsmc EXPire TSM client command to inactivate the
backup objects you specify in the file
specification or with the FILEList
option. That is, the Active version of
the file on the TSM server is rendered
Inactive, making it subject to all the
site policies regarding Inactive files.
The command does not remove workstation
files: if you expire a file or directory
that still exists on your workstation,
the file or directory remains on the
client, and will be backed up again in
the next incremental backup (unless, of
course, you exclude the object from
backup processing). If you expire a
directory that contains active files,
those files will not appear in a
subsequent query from the GUI. However,
these files will display on the command
line if you specify the proper query
with a wildcard character for the
directory.
dsmc Help Client command line interface command to
see help topics on the use of dsmc
commands and option, plus message
numbers. (Note that you have to scroll
down to see everything.)
When you invoke 'dsmc Help', there is no
interaction with the TSM server.
dsmc Incremental The basic command line client command to
perform an incremental backup. Syntax:
'Incremental [<Options...>]
FileSpec(s)'
FileSpec(s): Most commonly will be file
system name(s). If you want to back up
just a directory, how you specify the
directory will make a difference... In
specifying a file system name, you enter
just the name, like "/home", and TSM
will pursue backing up the full file
system. But if you specify a directory
name like /home/user1, only the
directory entry itself will be backed
up: you need to specify /home/user1/ to
explicitly tell TSM that rather than
just back up that object, that you are
telling it to back up a directory *and*
what is contained in it.
The number of FileSpecs may be limited;
see "dsmc command line limits".
Important: An incremental backup which
operates on a whole file system is
termed a "full incremental", and it
causes the timestamp for the last
incremental backup to be updated on the
TSM server (as seen in Query
Filespace). An incremental backup which
specifies files(s) and/or directories
is termed a "partial incremental", and
the timestamp for the last incremental
backup is *not* updated on the server.
The lack of timestamp recording will not
cause the same, unchanged files to be
backed up again in an Incremental
backup, but no timestamp will cause an
-INCRBYDate backup to back them up again
and again.
Note that whereas scheduled backups
result in each line being timestamped,
this does not happen with command line
incremental backups. (Neither running
the command as a background process, nor
redirecting the output will result in
timestamping the lines.)
The number of filespec operands may be
limited: see "dsmc command line limits".
See also: dsmc Selective
dsmc LOOP To start a loop-mode (interactive)
client session.
Same as entering just 'dsmc'.
dsmc Query ACcess TSM client command to display a list of
users whom you have given access rights
to your Backup and/or Archive files, via
dsmc SET ACcess, so that they can
subsequently perform Restore or Retrieve
using -FROMNode, -FROMOwner, etc.
'dsmc Query ACcess [-scrolllines]
[-scrollprompt]'
See also: dsmc SET Access
dsmc Query ACTIVEDIRECTORY Windows TSM 4.1 client command to
provide information about backed up
Active Directory.
Ref: Redpiece "Deploying the Tivoli
Storage Manager Client in a Windows 2000
Environment"
dsmc Query ARchive *SM client command to list specified
Archive files. Syntax:
'dsmc Query ARchive
[-DEscription="___"]
[-FROMDate=date] [-TODate=date]
[-FROMNode=nodename]
[-FROMOwner=ownername]
[-SCROLLPrompt=value]
[-SCROLLLines=number]
[-SErvername=StanzaName]
[-SUbdir=No|Yes]
FileSpec(s)'
Data returned, in columns:
Size Archive Date - Time File[name]
Expires on Description
The number of FileSpecs is limited to
20; see "dsmc command line limits".
Wildcard characters in the filename(s)
can be passed to Query ARchive for
it to expand them: this avoids the shell
implicitly expanding the names, which
can result in the command line arguments
limit being exceeded. For example:
instead of coding:
dsmc Query ARchive myfiles.*
code:
dsmc Query Archive 'myfiles.*' or...
dsmc Query Archive myfiles.\*
Displays: File size, archive date and
time, file name, expiration date, and
file description (but not file owner).
Performing a wide search for your
archive files is a challenge. You'd like
to say "look for all my archive files,
beginning at the root of the mounted
file systems". But it doesn't want to
comply. What you have to do is restrict
the search to a file system. For
example, if your file activity is in
/home, you can do:
dsmc q archive /home/ -subdir=yes
-desc="whatever"
Note the foolishness of these client
commands: unless you code a slash (/) or
slash-asterisk (/*) at the end of the
directory name, the commands assume that
you are looking for an individual *file*
of that name, and turns up nothing!
Note: Root can see the archive files
owned by others, but the query does not
reveal file owners.
Note that you can query across nodes,
but only if the file system
architectures are compatible.
See also: dsmc Query Backup across
architectural platforms
dsmc Query Backup *SM client command to list specified
backup files, issued as:
'dsmc Query Backup [options] <filespec>'
Options:
-DIrsonly: Display only directory names
for backup versions of your files, as
in: 'dsmc Query Backup -dirs
-sub=yes <FileSpec(s)>'.
-FROMDate=date
-FROMTime=time
-INActive To include Inactive files in
the operation. All Active files will
be displayed first, and then the
Inactive ones. Note that files marked
for expiration cannot be seen from the
client, but can be seen in a server
Select on the BACKUPS table.
-SCROLLPrompt=Yes
-SCROLLLines=number
-SErvername=StanzaName
-SUbdir=Yes
-TODate=date
-TOTime=time
-DATEFORMAT, -FROMNode, -FROMOWNER,
-NODename, -NUMBERFORMAT, -PASsword,
-QUIET, -TIMEFORMAT, -VERBOSE
The number of FileSpecs is limited to
20; see "dsmc command line limits".
Note that it is not possible to use a
filespec which is the top of your file
system (e.g., "/" in Unix) and have dsmc
report all files, regardless of
filespace. It can't do that: you have to
base the query on filespaces.
Wildcards: Use only opsys (shell)
wildcard characters, which can only be
used in the file name or extension. They
cannot be used to specify destination
files, file systems, or directories. In
light of this, you would best do
'Query Filespace' first to see what file
systems were being backed up, rather
than frustrate yourself trying to use
wildcards which get you nowhere.
What is reported: file size, backup
timestamp, management class, A or I for
Active/Inactive, and file path.
What is not reported: file details such
as username, group info, file
timestamps, or even the type of file
system object (to allow distinguishing
between directories and files, for
example): neither the -verbose nor
-description CLI options help get more
info. In contrast to the CLI, the GUI
will provide such further info, via its
View menu, "File details" selection -
but this operates on one file at a
time.
Note that the speed of this query
command in returning results bears no
relationship to the speed of a restoral
of the same files, both because of
further *SM database lookup requirements
and media handling.
See also: dsmc and wildcards;
DEACTIVATE_DATE
dsmc Query Backup across architectural Cross-platform querying of files only
platforms works on those platforms that understand
the other's file systems, such as among
Windows, DOS, NT, and OS/2; or among
AIX, IRIX, and Solaris - and even there
incompatibilities may exist. Mac's
can't be either the source or the target
in moves from another platform.
A succinct way to express the schism is
to say that there are the "slash" and
"backslash" camps, and that their files
cannot mingle.
See also: Restore across architectural
platforms
dsmc Query BACKUPSET *SM client command to query a backup
set from a local file or the server, to
see metadata about the Backup Set: its
name, generation date, retention, and
description. You must be superuser to
query a backupset from the server.
Syntax:
'Query BACKUPSET [Options]
BackupsetName|LocalFileName'
Note that there is no way from the
client to query the contents of a backup
set.
See also: Backup Set;
Query BACKUPSETContents
dsmc Query CERTSERVDB Windows TSM 4.1 client command.
Ref: Redpiece "Deploying the Tivoli
Storage Manager Client in a Windows 2000
Environment"
dsmc Query CLUSTERDB Windows TSM 4.1 client command.
Ref: Redpiece "Deploying the Tivoli
Storage Manager Client in a Windows 2000
Environment"
dsmc Query COMPLUSDB Windows TSM 4.1 client command.
Ref: Redpiece "Deploying the Tivoli
Storage Manager Client in a Windows 2000
Environment"
dsmc Query Filespace TSM client command to report filespaces
known to the server for this client.
The "Last Incr Date" column reflects the
completion timestamp of the last
successful, full Incremental backup. If
its value is null, it could be the
result of:
- The filespace having been created by
Archive activity only.
- Doing backups other than complete
Incremental type (e.g., Selective,
or Incremental on a subdirectory in
the file system).
- The Incremental backup having been
interrupted.
- The Incremental backup suffering from
files changing during backup and you
don't have Shared Dynamic copy
serialization active, or files
selected for backup disappear from
the client before the backup can be
done.
- It's a filespace for odd backup types
such as buta.
This command does not report the start
timestamp for that backup: you have to
either perform the corresponding
server-side query or get that info via
API programming.
Syntax:
'dsmc Query Filespace [-FROMNode=____]'
See also: Query FIlespace
dsmc Query Image To query image backups.
Sample output:
Image Size FSType Backup Date
----- ------- ------ -------------------
1 3.45 GB RAW 12/02/04 13:12:11
Mgmt Class A/I Image Name
---------- --- ----------
DEFAULT A /dev/howie
dsmc Query INCLEXCL TSM 4.1+: Formalized client command to
display the list of Include-Exclude
statements that are in effect for the
client, in the order in which they are
processed during Backup and Archive
operations. This is the best way to
interpret your include-exclude
statements, as it reports your
client-based and server-based (Cloptset)
specifications together.
Report columns:
Mode Incl or Excl
Function Archive or All
Pattern '#' appears at the front
where '*' was coded for
"all drives".
Source File Where the include or
exclude is:
dsm.opt = Your client.
Server = Cloptset.
Operating System =
Windows Registry value.
This command is valid for all UNIX, all
Windows, and NetWare clients.
Historical notes: Was introduced in
ADSMv3.PTF6 as an undocumented client
command, like 'dsmc Query OPTION'. In
TSM 3.7, Tivoli management decided that,
because it was unsupported, it should
not be a Query, but rather a Show
command, being consistent with
undocumented and unsupported SHow
commands in the server. That command
persisted into TSM 4.1.2, where the
capability was formalized as the
'dsmc Query INCLEXCL' command.
Customers still using it in older client
levels need to realize that because it
was "unsupported", it would not
necessarily be capable of recognizing
newer Exclude options, like EXCLUDE.FS
(as was discovered). For example, if you
have no EXCLUDE.FS statements coded and
don't get the message "No exclude
filespace statements defined.", then the
Query code is behind the times.
See IBM site Technote 1164101 for how to
traverse a GUI display to verify whether
file system objects will be backed up.
See also: dsmc SHow INCLEXCL
dsmc Query Mgmtclass ADSM client command to display info
about the management classes available
in the active policy set available to
the client.
'dsmc Query Mgmtclass [-detail]
[-FROMNode=____]'
where -detail reveals Copy Group info,
which includes retention periods.
dsmc Query Options Undocumented ADSM client command,
contributed by developers, to report
combined settings from the Client System
Options file and Client User Options
file.
In ADSMv3, also shows the merged options
in effect (those from dsm.opt and the
cloptset).
TSM: Replaced by 'show options'.
dsmc Query RESTore ADSM client command to display a list of
your restartable restore sessions, as
maintained in the server database.
Reports: owner, replace, subdir,
preservepath, source, destination.
Restartable sessions are indicated by
negative numbers, and their Restore
State is reported as "restartable".
See also: RESTOREINTERVAL
dsmc Query SChedule ADSM client command to display the
events scheduled for your node.
dsmc Query SEssion ADSM client command to display info
about your ADSM session: current node
name, when the session was established,
server info, and server connection.
dsmc Query SYSTEMInfo TSM 5.x Windows client meta command to
provide a comprehensive report on the
TSM Windows environment - options files,
environment variables, files implicitly
and explicitly excluded, etc.
Creates a dsminfo.txt file.
dsmc Query SYSTEMObject TSM 4.1 Windows client command to
provide information about backed up
System Objects.
Ref: Redpiece "Deploying the Tivoli
Storage Manager Client in a Windows 2000
Environment"
dsmc Query Tracestatus ADSM client command to display a list of
available client trace flags and their
current settings.
Ref: Trace Facility Guide
dsmc REStore Client command to restore file system
objects.
'dsmc REStore [FILE] [<options...>]
<SourceFilespec>
[<DestinationFilespec>]'
Allowable options:
-DIrsonly, -FILESOnly, -FROMDate,
-FROMNode, -FROMOwner, -FROMTime,
-IFNewer, -INActive, -LAtest, -PIck,
-PITDate, -PITTime, -PRESERvepath,
-REPlace, -RESToremigstate, -SUbdir,
-TAPEPrompt, -TODate, -TOTime.
The number of SourceFilespecs is limited
to 20; see "dsmc command line limits".
If you are restoring a directory, it is
important that you specify the
SourceFilespec with a directory
indicator (slash (/) in Unix, backslash
(\) in Windows), else the restore will
conduct a prolonged search for what it
presumes to be a file rather than a
directory. This is particularly
important for point-in-time restorals,
where the client does a lot of
filtering.
See also: dsmc and wildcards;
Restore...
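As a sketch of the directory-indicator point above (paths are
hypothetical), the trailing slash on the source tells the client
it is restoring a directory, not searching for a file:

```shell
# Restore a directory tree to an alternate location.  Note the
# trailing slash on the source (backslash form on Windows).
src="/home/alice/projects/"        # hypothetical directory
dsmc restore -subdir=yes "$src" "/tmp/restored/"
```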
dsmc REStore BACKUPSET Client command to restore a Backup Set
from the server, a local file, or a
local tape device.
The location of the Backup Set may be
specified via -LOCation. The default
location is server.
Use client cmd 'dsmc Query BACKUPSET' to
get metadata about the backup set.
Use server cmd 'Query BACKUPSETContents'
to either check the contents of the
Backup Set or gauge access performance
(which excludes the destination disk
performance factors involved in a client
dsmc REStore BACKUPSET).
dsmc REStore REgistry TSM command to restore a Windows
Registry. But it will restore only the
most recent one, rather than an inactive
version. You can manually restore an
older version by using the GUI to
restore the files to their original
location, the adsm.sys directory. Start
the Registry restore within the GUI with
the command Restore Registry in the menu
Utilities or within the ADSM CLI with
REGBACK ENTIRE. Be sure that you check
the Activate Key after Restore box in
the dialog window. The ADSM client
tries to restore the latest version of
the files into the adsm.sys directory,
but this time, you do not allow it to
replace the files on your disk. This
will guarantee that the 'older' files
will remain on the disk. The last dialog
window which appears is a confirmation
that the registry restore is completed
and activated as the current
registry. The machine must be rebooted
for the changes to take effect.
See also: REGREST
dsmc RETrieve *SM client command to retrieve a
previously Archived file. Syntax:
'dsmc RETrieve [options]
SourceFilespec [DestFilespec]'
where you may specify files or
directories. Allowable options:
-DEScription, -DIrsonly, -FILESOnly,
-FROMDate, -FROMNode, -FROMOwner,
-FROMTime, -IFNewer, -PIck,
-PRESERvepath, -REPlace,
-RESToremigstate, -SUbdir, -TAPEPrompt,
-TODate, -TOTime.
The number of SourceFilespecs is limited
to 20; see "dsmc command line limits".
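A hedged sketch (filespec and description are hypothetical),
retrieving an archived directory tree into a different location:

```shell
# Retrieve archived files matching a description into /tmp/reports/;
# -subdir=yes pursues subdirectories of the source filespec.
src="/data/reports/"               # hypothetical; trailing slash = directory
dsmc retrieve -description="year-end 2004" -subdir=yes \
    "$src" "/tmp/reports/"
```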
dsmc SCHedule See: Scheduler, client, start manually
dsmc Selective TSM client command to selectively back
up files and/or directories that you
specify. Syntax:
'dsmc Selective [-Options...]
FileSpec(s)'
Allowable options:
-DIrsonly, -FILESOnly, -VOLinformation,
-CHAngingretries, -Quiet, -SUbdir,
-TAPEPrompt
When files are named, the directories
that contain them are also backed up,
unless the -FILESOnly option is present.
The number of FileSpecs is limited to
20; see "dsmc command line limits".
To specify a whole Unix file system,
enter its name with a trailing slash.
You must be the owner of a file in order
to back it up: having read access is not
enough. (You get "ANS1136E Not file
owner" if you try.)
Your include-exclude specs apply to
Selective backups.
It is important to understand that a
Selective backup is deemed "explicit":
that you definitely want all the
specified files backed up...WITHOUT
EXCEPTION. Because of this, message
ANS1115W and a return code 4 will be
produced if you have an Exclude in play
for an included object.
Relative to Incremental backups,
Selective backups are "out of band":
they do not participate in the
Incremental continuum, in several ways:
- In a selective backup, copies of the
files are sent to the server even if
they have not changed since the last
backup. This might result in having
more than one copy of the same file on
the server, and can result in old
Inactive versions of the file being
pushed out of existence, per version
retention policies.
- The backup date will not be reflected
in 'Query Filespace F=D', or in
'dsmc Query Filespace'.
If you change the management class on an
Include, Selective backup will cause
rebinding of only the current, Active
file being backed up: it will not rebind
previously backed up files, as an
unqualified Incremental will.
See also: Selective Backup
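For example (names hypothetical), a whole Unix file system is named
with a trailing slash, and a single file explicitly:

```shell
# Back up the whole /work file system selectively, then one named file.
# You must own the files; read access alone is not enough (ANS1136E).
fs="/work/"                        # trailing slash = whole file system
dsmc selective -subdir=yes "$fs"
dsmc selective "/home/alice/notes.txt"
```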
dsmc SET Access *SM client command to grant another
user, at the same or different node,
access to Backup or Archive copies of
your files, which they would do using
-FROMNode and -FROMOwner. Syntax:
'dsmc SET Access {Archive|Backup}
{filespec...}
NodeName [User_at_NodeName]
[Options...]'
The filespec should identify files, and
not just name a directory.
The access permissions are stored in the
TSM database. Thus, the original
granting client system may vanish and
the grantee can still access the files.
There is no check for either the node or
user being known to the *SM server
- though the node needs to be registered
with the *SM server for that node and
its user to subsequently access the data
that you are authorizing access to, else
error ANS1353E will be encountered.
Note that this applies only to *your*
specific files, even if you are root.
That is, if you are root and attempt to
grant file system access to root at
another node, you will *not* be able to
see files created by other users as you
would as root on the native system.
There is no way for the TSM server
administrator to query user Access
settings: that database structure is not
exposed to Select queries.
Inverse: 'dsmc Delete ACcess'.
See also: dsmc Query ACcess; -FROMNode;
-FROMOwner; -NODename
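A hypothetical grant-and-use pair (node and user names are made up);
note the filespec identifies files, not just a directory:

```shell
# On node NODEA, as user alice: let bob at NODEB see backups of her files.
filespec="/home/alice/*"
dsmc set access backup "$filespec" NODEB bob

# Later, on NODEB, as bob: browse (or restore) those backups.
dsmc query backup -fromnode=NODEA -fromowner=alice "$filespec"
```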
dsmc SET Password *SM client command to change the ADSM
password for your workstation. If you do
not specify the old and new password
parameters, you are prompted once for
your old password and twice for your new
password. Syntax:
'dsmc SET Password OldOne NewOne'
dsmc SHow INCLEXCL TSM: Undocumented client command,
contributed by developers, to evaluate
your Include-Exclude options as TSM
thinks of them.
This command is invaluable in revealing
the mingling of server-defined
Include/Exclude statements and those
from the client options file.
Beware: In that this operation is
unsupported, it may not be capable of
recognizing newer Exclude options. For
example, if you have no EXCLUDE.FS
statements coded and don't get the
message "No exclude filespace statements
defined.", then the SHow code is behind
the times.
Shortcoming: Does not reveal the
management class which may be coded on
Include lines...you have to browse your
options file.
Read the report from the top down.
Remember that Include/Exclude's defined
in the server Client Option Set in
effect for this node will precede those
defined on the client (additive).
Report elements:
No exclude filespace statements defined
Means that there are no "EXCLUDE.FS"
options defined in the client options
file.
No exclude directory statements defined
Means that there are no "EXCLUDE.DIR"
options defined in the client options
file.
No include/exclude statements defined
Means that there are no "INCLExcl"
options defined in the client options
file. (Message shows up even in client
platforms where INCLExcl is not a
defined client option.)
ADSM: 'dsmc Query INCLEXCL'.
dsmc SHOW Options TSM client command to reveal all options
in effect for this client. Note that
output is more comprehensive than what
is returned from the dsm GUI's
Display Options selection. For example,
this command will report InclExcl status
whereas the GUI won't.
ADSM: 'dsmc query options'
(The ADSM query option command was an
undocumented command developed for
internal use. In support of this the
command was changed in TSM to a show
option command so that it fell in line
with the standard ADSM/TSM conventions
for non-supported commands.)
dsmc status values (AIX) Do not depend upon 'dsmc' to yield
meaningful return codes (see advisory
under "Return codes"). However,
observation shows that the dsmc command
typically returns the following shell
status values.
0 The command worked. In the case
of a server query (Query Filespace)
there were objects to be reported.
2 The command failed. In the case
of a server query (Query Filespace)
there were no objects to be
reported.
168 The command failed for lack of
server access due to no password
established for "password=generate"
type access and invoked by non-root
user such that no password prompt
was issued. Accompanied by message
ANS4503E.
(Don't confuse these Unix status values
with TSM return codes.)
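The observed values above could be branched on in a wrapper script;
a sketch, assuming sh/ksh, and remembering these are observed
behavior on AIX, not a documented contract:

```shell
# Run a server query and branch on the shell status dsmc returned.
dsmc query filespace >/dev/null 2>&1
rc=$?
case "$rc" in
  0)   echo "query succeeded: filespaces were reported" ;;
  2)   echo "query failed: no filespaces to report" ;;
  168) echo "no server access: password not established (ANS4503E)" ;;
  *)   echo "unexpected dsmc status: $rc" ;;
esac
```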
dsmc.afs Command-line dsm.afs
dsmc.nlm won't unload (Novell Netware) Have option "VERBOSE" in the options
file, not "QUIET". Then, rather than
unload the nlm at the Netware console,
go into the dsmc.nlm session and press
'Q' to quit.
dsmcad See: Client Acceptor Daemon (CAD)
DSMCDEFAULTCOMMAND Undocumented ADSM/TSM client option for
the default subcommand to be executed
when 'dsmc' is invoked with no operands.
Normally, the value defaults to "LOOP",
which is what you are accustomed to in
invoking 'dsmc', that being the same as
invoking 'dsmc LOOP'. Conceivably, you
might change it to something like HELP
rather than LOOP; but probably nothing
else.
Placement: in dsm.opt file (not dsm.sys)
dsmcdfs Command-line interface for backing up
and restoring DFS fileset data, which
this command understands as such, and so
will properly back up and restore DFS
ACLs and mount points, as well as
directories and files.
See also: dsmdfs
dsmccnm.h ADSM 3.1.0.7 introduced a new
performance monitoring function which
includes this file. See APAR IC24370
See also: dsmcperf.dll; perfctr.ini
dsmcperf.dll ADSM 3.1.0.7 introduced a new
performance monitoring function which
includes this file. See APAR IC24370
See also: dsmccnm.h; perfctr.ini
dsmcrash.log, dsmcrash.dmp TSM 5.2+ failure analysis data capture
files. The object is to provide for
"first failure data capture" of crashes
by capturing the info by IBM facilities
the first time the crash occurs.
Dr. Watson itself does a nice job of
this, but TSM should not depend upon
Dr. Watson being installed or configured
to capture the needed info.
dsmcsvc.exe This is the NT scheduler service. It
has nothing to do with the Web client or
the old Web shell client.
Use 'DSMCUTIL LIST' to get a list of
installed services.
dsmcutil.exe Scheduler Service Configuration Utility
in Windows. Allows *SM Scheduler
Services installation and configuration
on local and remote Windows machines.
The Scheduler Service Configuration
Utility runs on Windows only and must
be run from an account that belongs to
the Administrator/Domain Administrator
group. Syntax:
'dsmcutil Command Options'
Example: update the node name and
password to new node:
'dsmcutil update
/name:"your service name"
/node:newnodename /password:password'
ADSMv2 name (dsmcsvci.exe in ADSMv3).
Use 'DSMCUTIL LIST' to get a list of
installed NT services.
The /COMMSERVER and /COMMPORT options
are used to override values in the
client options file used by the service.
They correspond to different client
options depending on the communications
method being used (and yes, there is
/CommMethod dsmcutil option). For
TCP/IP, they correspond to
-TCPServername and -tcpPort,
respectively.
Written by Pete Tanenhaus
<tanenhau@US.IBM.COM>.
Ref: Installing the Clients;
dsmcutil.hlp file in the BAclient dir.
dsmcsvci.exe ADSMv3 name (dsmcutil.exe in ADSMv2).
dsmdf HSM command to display all file systems
which are under the control of HSM.
Does not display any which are not.
Note that running the AIX 'df' command
will show the file system twice - first
as a device-and-filesystem and then as
filesystem-and-filesystem, where the
latter reflects the FSM overlay. Much
the same comes out of an AIX 'mount'
command.
Invoke 'dsmmighelp' for assistance with
all the HSM commands.
dsmdfs GUI interface for backing up
and restoring DFS fileset data, which
this command understands as such, and so
will properly back up and restore DFS
ACLs and mount points, as well as
directories and files. Its look and
usage is exactly the same as 'dsm'.
Notes: Do not try to select the type
"AGFS" for backup - that is the
aggregate. Instead, go into the type
"DFS" file system. You should also
define some VIRTUALMountpoints to be
able to directly select within the
"/..." file system.
See also: dsmcdfs
dsmdu HSM command to display *SM space usage
for files and directories under the
control of HSM, in terms of 1 KB
blocks; that is, the true size of all
files in a directory, whether resident
or migrated. Syntax:
'dsmdu [-a] [-s] [Dir_Name(s)]'
where -a shows each file
-s reports just a sum total
Dir_Name(s) One or more
directories to report on. If
omitted, defaults to the current
dir.
This command can take considerable time,
in having to perform a server lookup on
each file encountered. (It processes all
files in a single, long-running session,
not many little sessions.)
Contrast with the Unix 'du -sk' command,
which obviously cannot report on the
size of migrated files.
Invoke 'dsmmighelp' for assistance with
all the HSM commands.
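A side-by-side sketch of that contrast (/migfs is a hypothetical
HSM-managed file system):

```shell
dsmdu -s /migfs   # true KB usage, migrated files included (server lookup; slow)
du -sk /migfs     # local blocks only: migrated stub files undercount badly
```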
dsmerror.log Where information about processing
errors is written.
The DSM_LOG client environment variable
may be used to specify a directory where
you want the dsmerror.log to reside.
If unspecified, the error log for a dsm
or dsmc client session will be written
to the current directory.
ADSM doesn't want you to have
dsmerror.log be a symlink to /dev/null:
if it finds that case, it will actually
remove the symbolic link and replace it
with a real dsmerror.log file! (See
messages ANS1192E and ANS1190E.)
The error log for client root activity
(HSM migration, etc.) will be
/dsmerror.log.
In Macintosh OS X, the default error log
name is instead "TSM Error Log".
Don't try to use a single dsmerror.log
for all sessions in the system: It's
unusual and unhealthy, from both logical
and physical standpoints, to mingle the
error logging from all sessions - which
may involve simultaneous sessions. In
such an error log, you want a clear-cut
sequence of operations and consequences
reflected. If you want all error logs to
go to a single directory, consider
creating a wrapper script for dsmc,
named the same or differently, which
will put all error logs into a single,
all-writable directory, with an error
log path spec which appends the
username, for uniqueness and
singularity. The wrapper script would
invoke dsmc with the -ERRORLOGname=
option spec.
Advisory: Exclude dsmerror.log from
backups, to prevent wasted time and
possible problems.
See: DSM_LOG; ERRORLOGName;
ERRORLOGRetention; dsierror.log
dsmerror.log ownership The error log file will be owned by the
user that initiated the client session.
However, if another user subsequently
invokes the client, it can try and fail
to gain access to that file because of
permissions problems. You could make
the file "public writable", but that is
problematic in mixing error logging,
making for later confusion in inspection
of that log. Each user should end up
with a separate error log, per
invocation from separate "current
directory" locations. Try to avoid
using the DSM_LOG client environment
variable, which would force use of a
single error log file for the
environment.
dsmfmt TSM server-provided command for AIX, to
format file system "volumes", which can
be spaces to contain the TSM database,
recovery log, storage pool, or a file
which serves as a random access storage
pool. Not for AIX raw logical volumes
or Solaris raw partitions: they do not
need to be formatted by TSM, and the
dsmfmt command has no provision for
them (it only accepts file names). But
note that Solaris raw partitions need to
be formatted in OS terms.
Note that dsmfmt does *not* update the
dsmserv.dsk file to add the new server
component: that happens under a dsmserv
invocation DEFine command.
Note that there is no means of
distinguishing DB or Recovery Log mirror
volumes during formatting - which has a
significant implication during DB
restorals onto new hardware.
Located in /usr/lpp/adsmserv/bin/.
The command *creates* the designated
file, so the file must not already
exist.
Unix note: There is no man page!
Ref: Administrator's Reference manual,
Appendix A.
The size to be specified is the desired
size, in MB, not counting the 1 MB
overhead that dsmfmt will add (so if
you say 4MB, you will get a 5MB
resultant file). So the size should
always be an odd number.
To format a database volume:
'dsmfmt -db DBNAME SizeInMB-1MB'
To format a recovery log volume:
'dsmfmt -log LOGNAME SizeInMB-1MB'
To format a file as a storage pool:
'dsmfmt -data NAME SizeInMB-1MB'
The name given the file is the name to
be used for the storage volume when it
is later defined to the server.
What the utility does is not exciting:
it writes the chars "Eric" repeatedly to
fill the space.
Beware the Unix shell "filesize" limit
preventing formatting of a large file.
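A sketch with hypothetical paths, illustrating the 1 MB overhead
noted above:

```shell
# Specify the size you want usable; dsmfmt adds 1 MB of overhead.
want_mb=513                    # desired resultant file size
spec_mb=$((want_mb - 1))       # what you actually pass to dsmfmt
dsmfmt -db   /tsm/db/dbvol1    "$spec_mb"
dsmfmt -log  /tsm/log/logvol1  128    # yields a 129 MB file
dsmfmt -data /tsm/stg/filevol1 1024   # yields a 1025 MB file
```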
dsmfmt errno 27 (EFBIG - File too It may be that your Unix "filesize"
large) (errno = 27) limit prohibits writing a file that
large. Do 'limit filesize' to check.
If that value is too small, try
'unlimit filesize'. If that doesn't
boost the value, you need to change the
limit value that the operating system
imposes upon you (in AIX, change
/etc/security/limits).
Another cause: the JFS file system not
configured to allow "large files"
(greater than 2 GB), per Large File
Enabled. Do 'lsfs -q' and look for the
"bf" value: if "false", not in effect.
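The checks above, sketched for AIX sh/ksh ('limit filesize' is the
csh form; /tsmdata is a hypothetical file system):

```shell
ulimit -f                    # current per-process file size limit
ulimit -f unlimited          # raise it, if /etc/security/limits permits
lsfs -q /tsmdata | grep bf   # "bf: true" = large-file-enabled JFS
```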
dsmfmt errno 28 (ENOSPC - No space No more disk blocks are left in the file
left on device) (errno = 28) system. Most commonly, this occurs
because you simply did not plan ahead
for sufficient space. In an AIX JFS
enabled for Large Files, free space
fragmentation may be the problem: there
are not 32 contiguous 4 KB blocks
available.
dsmfmt "File size..." error With a very large format (e.g., 80 GB),
the following error message appears:
"File size for /directory/filename must
be less than 68,589,453,312 bytes."
You may be exceeding file size limits
for your operating system, or in Unix
may be exceeding the filesize resource
limit for your process.
dsmfmt performance Dsmfmt is I/O intensive. Beware doing
it on a volume or RAID or path which is
also being used for other I/O intensive
tasks such as OS paging.
dsmfmt.42 Version of dsmfmt for AIX 4.2, so as to
support volumes > 2GB in size. In such
a system, dsmfmt should be a symlink to
dsmfmt.42 . Be sure to define the
filesystem as "large file enabled".
dsmhsm ADSM HSM client command to invoke the
Xwindows interface.
Note that there is no 'dsmhsmc' command
for line-mode HSM commands. There are
instead individual commands such as
'dsmdf', 'dsmdu', 'dsmrm', etc.
Invoke 'dsmmighelp' for assistance with
all the HSM commands.
DSMI_CONFIG ADSM API: Environment variable pointing
to the Client User Options file
(dsm.opt). Note that it should point at
the options file itself, not the
directory that it resides in.
Ref: "AFS/DFS Backup Clients" manual.
DSMI_DIR ADSM API: The client environment
variable to point to the directory
containing dscameng.txt, dsm.sys, and
dsmtca.
Ref: "AFS/DFS Backup Clients" manual.
DSMI_LOG ADSM API: Client environment variable
to point to the *directory* where you
want the dsierror.log to reside.
(Remember to code the directory name,
not the file name.)
If undefined, the error log will be
written to the current directory.
Ref: "Installing the Clients" manual.
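Setting the three API variables together, with hypothetical install
paths (note that DSMI_CONFIG names the options file itself, while
DSMI_DIR and DSMI_LOG name directories):

```shell
export DSMI_CONFIG=/usr/tivoli/tsm/client/api/bin/dsm.opt  # the file itself
export DSMI_DIR=/usr/tivoli/tsm/client/api/bin             # a directory
export DSMI_LOG=/var/log/tsm                               # a directory
```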
DSMI_ORC_CONFIG TDP for Oracle environment variable, to
point to the client user options file
(dsm.opt).
dsmInit() TSM API function to start a session from
the TSM client to the TSM server. There
can only be one active session open at a
time within one client process.
dsmj Java-based client interface, new in TSM
5.3, intended to replace dsm.
dsmlabel To label a tape, or optical disk, for
use in a storage pool. (Tapes must be
labeled to prevent overwriting tapes
which don't belong to *SM and to
control tapes once *SM has used them
(and re-use when they become empty).
The command must be issued from the
server directory, or you must set the
DSMSERV_DIR and DSMSERV_CONFIG
environment variables, because the
command needs to read the server options
file to get the LANGuage option value
for proper processing.
Syntax:
'dsmlabel -drive=/dev/XXXX [-drive...]
-library=/dev/lmcp0
[-search] [-keep] [-overwrite]
[-format] [-help] [-barcode] [-trace]'.
where the drive must be one which was
specifically *SM-defined, via SMIT.
You can specify up to 8 drives, to more
quickly perform the labeling.
It will iteratively prompt for a label
volsers so you can do lots of tapes.
Type just 'dsmlabel' for full help.
-format Is effective only on optical
cartridges.
-barcode Use the barcode reader to
select volumes: will cause the first
six characters of the barcode to be
used as the volume label.
Dsmlabel does not change Category Codes.
If you Ctrl-C the job, it will end after
the current tape is done.
Tapes new to a 3494 tape library will
have a category code of Insert both
before and after the dsmlabel operation.
Ref: Administrator's Reference manual
See also: 'LABEl LIBVolume'; "Tape,
initialize for use with a storage pool".
Newly purchased tapes should have been
internally labeled by the vendor, so
there should be no need to run the
'dsmlabel' utility.
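A sketch, with hypothetical *SM-defined device names, labeling with
two drives in parallel and volsers taken from barcodes:

```shell
cd /usr/lpp/adsmserv/bin    # or set DSMSERV_DIR and DSMSERV_CONFIG instead
./dsmlabel -drive=/dev/mt0 -drive=/dev/mt1 \
           -library=/dev/lmcp0 -search -barcode
```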
dsmlicense The license module in the TSM server
directory. In AIX(5), is in fileset:
tivoli.tsm.license.rte (32-bit)
tivoli.tsm.license.aix5.rte64 (64-bit).
Associated fileset
tivoli.tsm.license.cert installs the
.lic files (e.g., mgsyslan.lic).
Important: The dsmlicense module name is
singular. Whichever license fileset you
install will plant its version -
replacing any prior version! So DO NOT
install both the 32-bit and 64-bit
filesets, thinking you are being
complete, as you will be creating a
chaotic situation, which could result in
problems such as ANR9613W. Note that
there is no text embedded in the module
which allows you to determine whether it
is the 32-bit or 64-bit version.
dsmls HSM command to list files in a directory
and show file states. Syntax:
'dsmls [-n] [-R] [Filespec...]'
where:
-n Omits column headings from report.
-R Traverses subdirectories.
Note that it does not expand wildcard
specifications itself, so you CANNOT
code something like:
dsmls /filesys/files.199803\*
In report:
Resident Size: Shows up as '?' if the
path used is a symlink, because HSM is
uncertain as to the actual filespace
name.
File State: m = migrated
m (r) = migrated, with recallmode set
to Readwithoutrecall
'?' if the path used is a symlink.
Note that the premigrated files are
reported from the premigrdb database
located in the .SpaceMan directory.
Note that the command does not report
when the file was migrated.
dsmmigfs Add, dsmmigfs Update HSM: Command to add or remove space
management, or to query it.
'dsmmigfs Add [-OPTIONS] FSname' causes:
1. Creates .SpaceMan dir in the filesys
2. Updates
/etc/adsm/SpaceMan/config/dsmmigfstab
to add the filesys definition to HSM,
with selected options
3. Updates the /etc/filesystems stanza
for the filesys: a "nodename" entry
is added, "mount" is changed to
"false", and "adsmfsm=true" is added.
4. Mounts FSM over the AIX filesys.
5. Activates HSM management of it.
But it does not result in that Filespace
becoming known in the ADSM server: the
first migration or backup will do that.
Add/Update options:
-HThreshold=N Specifies high threshold
for migration from the HSM-managed
file system to the HSM storage pool.
-Lthreshold=N Specifies low threshold
for migration from the HSM-managed
file system to the HSM storage pool.
(A low value is good for loading a
file system, but not for keeping many
files recalled.)
-Pmpercentage=N The percentage of space
in the file system that you want to
contain premigrated files that are
listed next in the migration
candidates list for the file system.
-Agefactor=N The age factor to assign
to all files in the file system.
-Sizefactor The size factor to assign
to all files in the file system.
-Quota=N The max number of megabytes
(MB) of data that can be migrated and
premigrated from the file system to
ADSM storage pools.
Default: the same number of MB as
allocated for the file system itself.
-STubsize=N The size of stub files
left on the file system when HSM
migrates files to ADSM storage.
Hints: Specifying a low Lthreshold value
helps in file system loading by keeping
migration active, to prevent message
ANS4103E condition.
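A sketch (file system and values hypothetical), using a low
Lthreshold while loading the file system, per the hint above:

```shell
# Put /bigdata under HSM: migrate at 90% full, down to 30%,
# with a 200,000 MB quota; then confirm the settings.
dsmmigfs add -hthreshold=90 -lthreshold=30 -quota=200000 /bigdata
dsmmigfs query /bigdata
```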
dsmmigfs Deactivate/REActivate/REMove HSM: Command to deactivate, reactivate,
or remove space management for a file
system. Command processing occurs wholly
within the HSM client regime: there is
no interaction with the TSM server.
Commands:
'dsmmigfs Deactivate <filsysname(s)>'
This prevents migration, recall, or
reconciliation processes from occurring
for the file system. Any migration,
recall, or reconciliation process that
currently is in progress is allowed to
complete first. Thereafter, migrated
files cannot be accessed - but those
which are wholly in the file system can
be accessed.
Repeating the Deactivate
This does not unmount the FSM from over
the JFS file system.
There is no file change in
/etc/adsm/SpaceMan/ to denote the new
status of the HSM file system.
There is no file change in the
.SpaceMan file system subdirectory to
denote the change.
Note that 'dsmmigfs query' does not
reflect the status of the file system.
'dsmmigfs REActivate <filsysname(s)>'
'dsmmigfs REMove <filsysname(s)>'
dsmmigfs GLOBALDeactivate HSM: Command to deactivate or reactivate
/GLOBALREActivate space management for all file systems on
the client system. Syntax:
dsmmigfs GLOBALDeactivate
dsmmigfs GLOBALREActivate
Reactivation will recreate the global
state file /etc/adsm/SpaceMan/config/
dmiFSGlobalState.
dsmmigfs Query HSM: Command to query space management
settings for named or all HSM-controlled
file systems. Its processing inspects
HSM management files, and does not
contact the TSM server. Syntax:
'dsmmigfs Query [ <filsysname(s)> ]'
Report columns (mostly self-explanatory):
File System Name
High Thrshld
Low Thrshld
Premig Percent
Age Factor
Size Factor
Quota
Stub File Size
Server Name
The name of the TSM server which
back-ends this file system. This column
is relatively new - ADSMv2 lacked it.
The dsmmigfstab file has this column,
which will usually contain "-",
indicating that the default migration
server is in effect: command processing
dynamically fills in the name from the
client options file.
Note that this command
dsmmigfs REMove HSM: Command to remove space management
from a file system. Syntax:
'dsmmigfs REMove [FileSysName(s)>]'
or use the GUI cmd 'dsmhsm'.
This will perform a Reconcile, Expire,
and then unmount of the FSM, also
involving an update of /etc/filesystems
in AIX. Make sure you are not sitting
in that directory at the time, or the
unmount will fail with messages ANS9230E
and ANS9078W.
It is best to do this *before* doing a
Delete Filespace: if you do it after,
you will have to do the Del Filespace
twice to finally get rid of the file
space.
dsmmigfstab HSM: file system table naming the AIX
file systems which are to be managed by
HSM. Located in
/etc/adsm/SpaceMan/config/.
Add file systems to the list via the
dsmhsm GUI, or the
'dsmmigfs add FileSystemName' command.
Query via:
'dsmmigfs query [FileSystemName...]'
This file is the basis of what you see
when you do 'dsmmigfs query'. The Server
Name column will usually contain "-",
indicating that the default migration
server is in effect.
Note that the order of file systems
within this file governs the order in
which the FSM is mounted over the JFS
mount. You should be careful about this
where a file system is a child of a
previously mounted file system, which is
to say when a mount point is within a
mounted file system.
It is possible to edit this file -
indeed, there are times when that is
necessary, such as restoring an
obliterated HSM file system; but
normally you should update it via the
'dsmmigfs' command, which will validate
settings and take care of adjunct tasks.
dsmmonitord checks to see if this file
has been changed: if it has, then
dsmmonitord re-reads the file.
Nothing recreates this fundamental file
if it is destroyed: you would need to
manually reconstruct it - using your
knowledge of your HSM file systems, and
referencing file samples in the manual.
dsmmighelp HSM: Command to display usage
information on its command repertoire.
dsmmigquery HSM: Command to display space management
information, such as
migration candidates, recall list.
'dsmmigquery [-Candidatelist]
[-SORTEDMigrated]
[-SORTEDAll] [-Help]
[file systems]'
'dsmmigquery [-Mgmtclass] [-Detail]
[-Options]'
Caution: defaults to current directory,
so be sure to specify file system name.
dsmmigrate HSM: Command to migrate selected files
from a local file system to an ADSM
storage pool. Syntax:
'dsmmigrate [-R] [-v] FileSpec(s)'
where...
-R Specifies recursive pursuit of
subdirectories.
-v Displays the name and size of each
file migrated.
If using a wildcard, it is faster to
allow dsmmigrate to expand it per its
own processing order, as in invoking
like: 'dsmmigrate \*.gz'
with the asterisk quoted so that ADSM
expands it rather than the shell.
To migrate all files in a file system:
'dsmmigrate /file/system/\*'
To perform a dsmmigrate on a file, you
must be the file's owner, else suffer
ANS9096E.
The operation depends upon files in
/etc/adsm/SpaceMan/status/, which have
names like "34cc91490e4071" and are
linked to from the .SpaceMan
subdirectories in each of the
HSM-controlled file systems.
Note: For a large file system this may
take some time, and depending upon the
ADSM server configuration you might get
message ANS4017E on the client, which
would mean that the server waited
up to its COMMTimeout value for the
client to come back with something for
the server to do, but nada, so the
server dismissed the session. (Issue
the server command 'Query OPTion' to see
the prevailing CommTimeOut value, in
seconds.)
Dsmmigrate will typically generate
dsmerror.log data in the current
directory when given a wildcard and some
of the files need not be migrated.
dsmmigundelete HSM: Command to recreate deleted stub
files, to reinstate file instances which
were inadvertently deleted from the
HSM-managed file system. (This command
operates on whole file systems: you
cannot specify single files.)
This operation depends upon the original
directory structure being intact: it
will not recreate a stub file where the
file's directory is missing. Thus, this
command cannot be used as a generalized
restoral method. (You might do a dsmc
restore with -dirsonly to recreate the
directory structure first.)
The stub contains information *SM needs
to recall the file, plus some amount of
user data. *SM needs 511 bytes, so the
amount of data which can also reside in
the stub is the defined stub size minus
the 511 bytes. When you do a
dsmmigundelete, *SM simply puts back
enough data to recreate the stubs, with
0 bytes of user data (since you don't
want us going out to tapes to recover
the rest of the stub). When the file
gets recalled, then migrated again, we
once again have user data that we can
leave in the stub, so the stub size goes
back to its original value. This goes to
show that the leading file data in the
stub file is a copy of what's in the
full, migrated file.
Techniques: This command may be used in
migrating an HSM file system from one
computer to another, where a REName Node
can be performed to reassign ownership
of filespaces. First perform a -dirsonly
restore on the new system, then run
dsmmigundelete on the file system. This
is described in the HSM manual under
"Back Up and Migrate Files to a
Different Server".
See also: Leader data
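The cross-system technique above might be sketched as follows (file system name is illustrative; RUN=echo keeps it a dry run):

```shell
# After REName Node has reassigned the filespaces to the new client:
RUN="echo"    # dry run; set RUN="" on the real new HSM client
# 1. Recreate the directory structure from backup:
$RUN dsmc restore -dirsonly -subdir=yes '/work/*'
# 2. Recreate stub files for every migrated file in the file system:
$RUN dsmmigundelete /work
```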
dsmmode HSM: Command to set one or more
execution modes which affect the
HSM-related behavior of commands:
-dataaccess controls whether a
migrated file can be retrieved.
-timestamp controls whether the file's
atime value is set to the
current time when accessed.
-outofspace controls whether HSM
returns an error code rather
than trying to recover from
out-of-space conditions.
-recall controls how a migrated file
is recalled: Normal or
Migonclose.
Note, however, that the outofspace
parameter will *not* prevent commands
like 'cp' from encountering "No space
left on device" conditions.
dsmmonitord HSM monitoring daemon, started by
/etc/inittab's "adsmsmext" entry
invoking /etc/rc.adsmhsm .
It is busy: every 2 seconds it looks
for file-system-full conditions so as to
start migration; and every 5 minutes (or
at the interval specified via the
CHEckthresholds option in the Client
System Options file (dsm.sys)) it
performs threshold migration.
This daemon also runs dsmreconcile (from
either the directory specified via
DSM_DIR or the directory whence
dsmmonitord was invoked) according to
the interval defined via the
RECOncileinterval Client System Options
file (dsm.sys) option, and automatically
before performing threshold migration if
the migration candidates list for a file
system is empty.
Be aware that this daemon does not help
if the user attempts to recall a file of
a size which causes the local file
system to be exhausted: what happens is
that the user gets a "ANS9285K Cannot
complete remote file access" error
message - which says nothing about this.
Full usage (as found in the binary):
'dsmmonitord [-s seconds] [-t directory]
[-v]'
dsmmonitord PID Is remembered in file:
/etc/adsm/SpaceMan/dsmmonitord.pid
dsmnotes The backup client command for the Lotus
ConnectAgent. Sample usage:
'dsmnotes incr
d:\notes\data\mail\johndoe.nsf'
DSMO_PSWDPATH See: aobpswd
dsmperf.dll You mean: dsmcperf.dll (q.v.)
dsmq HSM: Command to display all information,
for all files currently queued for
recall. Columns:
ID Recall ID
DPID The PID of the dsmrecall
daemon.
Start Time When it started
INODE Inode number of the file
being recalled.
Filesystem File system involved.
Original Name Name of file that was
migrated.
dsmrecall HSM: Command to explicitly demigrate
(recall) files which were previously
migrated. Syntax:
'dsmrecall [-recursive] [-detail]
Name(s)'
The -detail option alas shows details
only upon completion of the full
operation: it does not reveal progress.
If using a wildcard, it is *much* faster
to allow dsmrecall to expand it per its
own processing order: having the shell
expand it forces dsmrecall to get the
files off tape in collating order,
rather than the order it knows them to
be on the tape(s) - so invoke like:
'dsmrecall somefiles.199807\*'
with the asterisk quoted so *SM expands
it rather than the shell.
Note that during a recall, as the
recalled file is being written back to
disk, its timestamp will be "now",
and thereafter will be set to the file's
original timestamp.
Dsmrecall will typically not generate
dsmerror.log data in the current
directory when given a wildcard and some
of the files need not be recalled.
In the presence of msg "ANR8776W Media
in drive DRIVE1 (/dev/rmt1) contains
lost VCR data; performance may be
degraded.", it may be faster to do a
Restore of the files to a temp area, if
you simply want to reference the data.
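The quoting point can be demonstrated with plain shell globbing (scratch files stand in for HSM stubs; no TSM software involved):

```shell
# Unquoted, the shell expands the wildcard into alphabetically sorted
# arguments; quoted, dsmrecall would receive the single pattern and
# could expand it in on-tape order instead.
demo=$(mktemp -d)
touch "$demo/somefiles.19980701" "$demo/somefiles.19980702"
cd "$demo"
set -- somefiles.199807*        # unquoted: shell expands the glob
unquoted_argc=$#                # 2 separate file arguments
set -- 'somefiles.199807*'      # quoted: pattern passed through intact
quoted_argc=$#                  # 1 argument, expanded by the command
echo "$unquoted_argc vs $quoted_argc"
```

This prints "2 vs 1".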
dsmrecalld HSM daemon to perform the recall of
migrated files. It is started by
/etc/inittab's "adsmsmext" entry
invoking /etc/rc.adsmhsm .
Control via the MINRecalldaemons and
MAXRecalldaemons options in the Client
System Options file (dsm.sys).
Default: 20
Full usage (as found in the binary):
dsmrecalld [-t timeout] [-r retries]
[{-s | -h}] [{-i | -n}] [-v]
-t timeout in seconds; only valid
with -s
-r number of times to retry recall;
only valid with -s
-s soft recall, will time out;
default
-h hard recall, will not time out
-i interruptable, can be cancelled;
default
-n non-interruptable, cannot be
cancelled
dsmrecalld PID Is remembered in file:
/etc/adsm/SpaceMan/dsmrecalld.pid
dsmreconcile HSM: Client root user command to
synchronize client and server and build
a new migration candidates list for a
file system. Is usually run
automatically by dsmmonitord, invoking
dsmreconcile once for each controlled
file system, at a frequency (mostly)
controlled by the RECOncileinterval
Client System Options file (dsm.sys)
option. Can also be run manually as
needed. Syntax:
'dsmreconcile [-Candidatelist]
[-Fileinfo] [FileSystemName(s)]'
Note that HSM will also run
reconciliation automatically before
performing threshold migration if the
migration candidates list for a file
system is empty.
Msgs: "Note: unable to find any
candidates in the file system." can
indicate that all files have been
migrated.
See also: Expiration (HSM);
MIGFILEEXPiration; Migration candidates
list (HSM); RECOncileinterval; .SpaceMan
dsmreg.lic ADSMv2 /usr/lpp/adsmserv/bin executable
module for converting given license
codes into encoded hex strings which are
then written to the adsmserv.licenses
file.
See: adsmserv.licenses; License...;
REGister LICense
dsmrm HSM: Command to remove a recall process
from the recall queue, by its recall ID,
as revealed by doing 'dsmq', as reported
in its "ID" column.
(Don't think of this command as an HSM
variant of the Unix 'rm' command:
contrary to appearance, it is not such a
variant, as the existence of 'dsmdu' and
'dsmls' might make you think.)
Do 'dsmmighelp' for usage info.
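Typical pairing of dsmq and dsmrm (the recall ID 42 is illustrative; RUN=echo keeps both commands a dry run):

```shell
RUN="echo"    # dry run; set RUN="" on a real HSM client
$RUN dsmq     # list queued recalls; note the wanted entry's ID column
$RUN dsmrm 42 # remove the recall with ID 42 from the queue
```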
dsmsched.log The schedule log's default name, as it
resides in the standard ADSM directory.
Can be changed via the SCHEDLOGname
Client System Options file (dsm.sys)
option. To verify the name: in ADSM, do
'dsmc q o' and look for SchedLogName; in
TSM, do 'dsmc show opt'. Obviously, you
need write access to the directory in
which the log is to be produced in order
to have a log.
See: SCHEDLOGname
dsmscoutd HSM 5+ Scout Daemon, which seeks
migration candidates. Its operation is
governed by the Maxcandidates value.
Ref: IBM site Technote 1176491
dsmserv Command in /usr/lpp/adsmserv/bin/ to
start the *SM server. This is
something which would be done by the
/usr/lpp/adsmserv/bin/rc.adsmserv shell
being executed by the "autosrvr" line
which ADSM installation added to the
/etc/inittab file.
Command-line options:
-F To overwrite shared memory
when restarting the server
after a server crash.
Code before other options.
noexpire Suppress inventory
expiration, otherwise
specified via EXPINterval.
-o FileName Specifies the server
options file to be used,
as when running more than
one server.
quiet Start the server as a
daemon program. The server
runs as a background
process, and does not read
commands from the server
console. Output messages
are directed to the
SERVER_CONSOLE.
Note that there is no option for
preventing client sessions from
starting, which can be inconvenient in
some circumstances, like restarting
after a hinkey problem.
Installed via: In AIX(5), fileset:
tivoli.tsm.server.rte (32-bit)
tivoli.tsm.server.aix5.rte64 (64-bit).
Important: The dsmserv module name is
singular. Whichever dsmserv fileset you
install will plant its version -
replacing any prior version! So DO NOT
install both the 32-bit and 64-bit
filesets, thinking you are being
complete, as you will be creating a
chaotic situation. Note further that
there is no text embedded in the module
which allows you to determine whether it
is the 32-bit or 64-bit version.
Performance: dsmserv performs regular
fsync() calls. When used for stand-alone
operations like database restorals, the
run time can be 6 hours with the syncing
and 15 minutes without. Since dsmserv is
an unstripped module, there is the
opportunity to CSECT-replace the fsync
by statically linking in a dummy fsync
function which simply returns (keeping
dsmserv from getting fsync from the
shared library).
See also: Processes, server; dsmserv.42
Ref: ADSM Installing the Server...
TSM Admin Guide chapter on Managing
Server Operations; Starting, Halting,
and Restarting the Server
dsmserv AUDITDB A salvage command for when *SM is down
with a bad database or disk storage pool
volume, to look for structural problems
and logical inconsistencies. Run this
command *before* starting the server,
typically after having reloaded the
database. Syntax:
'DSMSERV AUDITDB
[ADMIN|ARCHSTORAGE|DISKSTORAGE|
INVENTORY|STORAGE]
[FIX=No|Yes]
[Detail=No|Yes]
[LOGMODE=NORMAL|ROLLFORWARD]
[FILE=ReportOutputFile]'
The various qualifiers represent partial
database treatments. Reportedly, running
with no qualifiers does everything
represented in the partial qualifiers.
ARCHDESCRIPTIONS <nodename> [FIX=Yes]
To fix corrupted database as evidenced
in message 'Error 1246208 deleting row
from table "Archive.Descriptions"'.
DISKSTORAGE: Causes disk storage pool
volumes to be audited.
FIX=No: Report, but not fix, any logical
inconsistencies found. If the audit
finds inconsistencies, re-issue the
command specifying FIX=Yes before
making the server available for
production work. Because AUDITDB must
be run with FIX=Yes to recover the
database, the recommended usage in a
recovery situation is FIX=Yes the first
time.
FIX=Yes: Fix any inconsistencies and
issues messages indicating the actions
taken.
Detail=No: Test only the referential
integrity of the database, to just
reveal any problems. This is the
default.
Detail=Yes: Test the referential
integrity of the database and the
integrity of each database entry.
LOGMODE=NORMAL: Allows you to override
your server's Rollforward logmode, to
avoid running out of recovery log
space. (Note that Logmode is controlled
via the Set command, which you
obviously cannot perform when you
cannot bring your server up because it
has the problem you are addressing.)
Tivoli recommends opening a problem
report with them before running this
audit - under their guidance. Per their
advisory: "If errors are encountered
during normal production use of the
server that suggest that the database is
damaged, the root cause of the errors
must be determined with the assistance
of IBM Support. Performing DSMSERV
AUDITDB on a server database that has
structural damage to the database tables
may result in the loss of more data or
additional damage to the database." Be
aware that such an audit cannot correct
all problems: it will fail on an
inconsistency in the database, as one
example.
If your database is TSM-mirrored, you
should first set the MIRRORREAD DB
server option to VERIFY: this will force
the server to compare database pages
across the mirrored volumes, and if an
inconsistency is found on a given mirror
volume, that volume will be marked as
stale and it will be forced to
resynchronize with a remaining valid
volume.
Runtime: Beware that this command is not
optimized, and can take a very long time
to run, proportional to the amount of
data to be audited. Some customers
report it running over 4 days for an 8
GB database! (Processing time has been
observed to be non-linear, as in one
customer finding it taking over 3 days
to get halfway through the database,
then finishing less than a day later.)
If coming from a TSM v4 system, you may
see dramatically lesser runtimes if you
first run CLEANUP BACKUPGROUP. Consult
the Readme and Support if unsure.
Msgs: ANR0104E; ANR4142I; ANR4206I;
ANR4306I
Ref: Admin Ref, Appendix
See also: AUDit DB (online cmd)
See also separate TSM DATABASE AUDITING
samples towards the bottom of this doc.
dsmserv AUDitdb, interrupt? There's no vendor documentation saying
whether an AUDitdb can be safely stopped
(as in killing its process). The
process reportedly disregards Ctrl-C
(SIGINT) and simple 'kill' command
(SIGTERM): only a 'kill -9' (SIGKILL)
terminates the process. Customer
reports of having stopped the process
tell of no (known) ill effects; but that
is non-deterministic: hold onto that
backup tape!
dsmserv AUDitdb archd fix=yes Undocumented ADSM initial command to
correct a corrupted database as
evidenced in message 'Error 1246208
deleting row from table
"Archive.Descriptions"'.
dsmserv DISPlay DBBackupvolumes Stand-alone command to display database
backup volume information when the
volume history file (e.g.,
/var/adsmserv/volumehistory.backup) is
not available. Full syntax:
'DSMSERV DISPlay DBBackupvolumes
DEVclass=DevclassName
VOLumenames=VolName[,VolName...]'
Example:
'DSMSERV DISPlay DBBackupvolumes
DEVclass=OURLIBR.DEVC_3590
VOLumenames=VolName[,VolName...]'
Note that this command will want to
use a tape drive - one specified in the
file named by the DEVCONFig dsmserv.opt
parameter - to mount the tape R/O.
(Drive must be free, else get ANR8420E
I/O error.)
You can use this command form to try
identify the database backup tapes when
the volume history file is absent, not
up to date, or lacking DBBACKUP entries.
The command requires the devconfig file
- which may also have been lost - and
entails going hunting through a possibly
large number of tapes until you finally
find the latest dbbackup tape.
See also: dsmserv RESTORE DB, volser
unknown
dsmserv DUMPDB *SM database salvage function, to be
used in conjunction with DSMSERV LOADDB
(q.v.).
The output tape from this operation must
have been labeled by the product.
See also: STAtusmsgcnt
dsmserv DUMPDB and LOADDB These are part of a salvage utility
that was a stopgap solution for ADSM
version 1 until the database backup
and recovery functions could be added
in ADSM version 2. Unless you are on
ADSM version 1 (which is unsupported
except for the VSE server), you should
be using BAckup DB and DSMSERV
RESTORE DB functions to backup/recover
your database (and also for migrating
ADSM server to a different hardware
server of the same operating system
type). The circumstances under which
you might use DUMPDB and LOADDB today
are very rare and probably would
involve the absence of regular ADSM
database backups (regular database
backups using BAckup DB are obviously
recommended) and are probably
recommended only under the direction
of IBM ADSM service support.
See also: dsmserv LOADDB; LOADDB
dsmserv EXTEND LOG FileName N_MB Stand-alone command to extend the
Recovery Log to a new volume when its
size is insufficient for ADSM start-up.
(Note that you are to add a new volume,
*not* extend the existing one.)
The new volume should have been
separately prepared by running
'dsmfmt -log ...'.
The extend operation will run dsmserv
for the short time that it takes to
extend the log and format the new
volume, plus add the new volume name to
the dsmserv.dsk file, whereafter the
stand-alone server process shuts down.
Thereafter you may bring up the server
normally.
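As a sketch of the extension steps above (volume path and sizes are illustrative; the extra 1 MB in the dsmfmt allocation allows for the volume's own control structure; RUN=echo keeps it a dry run):

```shell
RUN="echo"                # dry run; set RUN="" on the real server host
NEWLOG=/adsm/log2.dsm     # hypothetical new recovery log volume
$RUN dsmfmt -m -log "$NEWLOG" 501       # prepare the volume first
$RUN dsmserv extend log "$NEWLOG" 500   # then extend by 500 MB
```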
dsmserv FORMAT Ref: Administrator's Reference,
TSM Utilities appendix.
dsmserv INSTALL Changed to DSMSERV FORMAT in ADSMv3.
Ref: Administrator's Reference,
Appendix D.
dsmserv LOADDB Stand-alone command to reload the ADSM
database after having done
'DSMSERV DUMPDB' and 'DSMSERV INSTALL'.
After a DUMPDB, it is best to perform
the LOADDB to a database having twice
the capacity of the amount that was
dumped... As the Admin Guide says: "The
DSMSERV LOADDB utility may increase the
size of the database. The server packs
data in pages in the order in which they
are inserted. The DSMSERV DUMPDB utility
does not preserve that order. Therefore,
page packing is not optimized, and the
database may require additional space."
See topic "ADSM DATABASE STRUCTURE AND
DUMPDB/LOADDB" at the bottom of this
file for further information.
This operation takes a looooooong time:
it slows as it gets further along, with
tremendous disk activity.
Example:
'DSMSERV LOADDB
DEVclass=OURLIBR.DEVC_3590
VOLumenames=VolName[,VolName...]'
Note: After the reload, the next
BAckup DB will restart your
Backup Series number as 1.
See also: Backup Series; STAtusmsgcnt
dsmserv LOADFORMAT Stand-alone command to format the
database and recovery log for the
dsmserv LOADDB utility. (Use this rather
than dsmserv FORMAT when a LOADDB is
planned.) Formatting reinitializes those
disk areas, obliterating any data which
had been in them.
Syntax:
'dsmserv LOADFORMAT
<number_of_log_files> <LogFileName(s)>
<number_of_db_files> <DbFileName(s)>'
Can be used to reset the order of your
database volumes.
Use of this command causes Logmode to be
reset to Normal.
See also: Set LOGMode
dsmserv RESTORE DB A set of commands for restoring the *SM
server database, under varying
conditions.
The database backup volumes to be used
can be in a library type of MANUAL,
SCSI, 349x, ACSLS, or External type
library. (The manuals had said that
MANUAL or SCSI was required, but Flash
1121477 clarified the larger set of
types, and APAR IC36835 fixed the doc.)
If the database and/or recovery log
volumes are destroyed, use dsmfmt to
prepare replacements AT LEAST EQUAL IN
CAPACITY to the originals. (Failure to
make them equal in capacity can result
in server failure.) DO NOT reformat the
recovery log volume if doing a
rollforward recovery: you need its data
for the recovery.
Run time: Expect a db restoral to take
much longer than your daily full db
backup - perhaps 3 to 4 times longer.
Note that the database/recovery log
volumes present at restoral time do not
have to match those from which the
backup was taken. For example, computer
system ServerA, upon which the TSM
server database was backed up, is
destroyed. Another computer system,
ServerB, of the same type is to be taken
over to replace ServerA. Database and
recovery log volumes are set up in
ServerB - which has a very different
disk system than ServerA had. As long
as the new db space is at least as large
as the utilized space in the former db,
then the restoral should work and the
new TSM server instance should come up
and operate fine.
You would be wise to set server config
file options "DISABLESCheds Yes" and
NOMIGRRECL before proceeding. (After
restoral, Halt and undo them; restart.)
With most forms of Restore DB, you will
also need a copy of the volume history
file and your server options file with
its pointer to the vol history. This
makes the RESTORE DB process simpler as
you can just specify a date rather than
having to work out which backup is on
what volser.
The -todate=xx/xx/xxxx -totime=xx:xx
options allow you to select which
database backup(s) to restore from; NOT
a point at which the recovery log
should be rolled forward to.
==> Do NOT restart the server between
the install and the restore db command:
doing this would delete all the entries
in the volume history file!
Do's and Dont's:
Realize that Restore DB was designed to
restore back onto the same machine where
the image was taken: that is, Restore DB
is not intended to serve as a
cross-platform migration mechanism.
You can do 'DSMSERV RESTORE DB' across
systems of the same architecture: see
the Admin Guide, Managing Server
Operations, Moving the Tivoli Storage
Manager Server, for the rules.
It is illegal, risky, and in some cases
logically impossible to employ Restore
DB to migrate the *SM database across
platforms, which is to say different
operating systems and hardware
architectures. (See IBM site TechNote
1137678.) The same considerations apply
in this issue as in moving any other
kind of data files across systems and
platforms:
- Character set encodings may differ:
ASCII vs. EBCDIC; single-byte
vs. double-byte.
- Binary byte order may differ:
"big-endian" vs. "little-endian", as
in the classic Intel architecture
conventions v. the rest of the world.
- Binary unit lengths may differ: as in
32-bit word lengths vs 64-bit.
- The data may contain other
environmental dependencies.
Simply put, the architectures and
software levels of the giving and taking
systems must be equivalent. In general,
use Export/Import to migrate across
systems. (One customer reported
successfully migrating from AIX to
Solaris via Restore DB; but the totality
of success is unknown, and it might
succeed only with very specific levels
of the two operating systems and *SM
servers.)
Msgs: ANR0301I; ANR4639I
See also IBM site TechNote 1111554
("Post Database Restore Steps").
See also: BAckup DB; Export
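A typical point-in-time invocation, assuming an intact volume history and options file, might look like this (date and time are illustrative; RUN=echo keeps it a dry run):

```shell
RUN="echo"    # dry run; set RUN="" on the real server host
# cd to the directory holding dsmserv.opt before running for real.
$RUN dsmserv restore db todate=08/15/2004 totime=03:00
```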
dsmserv RESTORE DB, volser unknown TSM provides a command to assist with
the situation where you need to perform
a TSM database restoral and the volume
history information has been lost, as in
a disk failure. See:
dsmserv DISPlay DBBackupvolumes
The command requires the devconfig file
- which may also have been lost - and
entails going hunting through a possibly
large number of tapes until you finally
find the latest dbbackup tape.
What you really need in such
circumstances is something to
dramatically reduce the number of
volumes to search through...
One 3494 user reported combined loss of
the *SM database and volume history
backup file, leaving no evidence of what
volume to use in restoring the database.
That's a desperate situation, calling
for desperate measures...
If you know the approximate time period
of when your dbbackup was taken, you can
narrow it down to a few tape volumes and
then try each in a db restore: only one
tape in a given time period can be a
dbbackup, and the others ordinary data,
which db restore should spit out...
Go to your 3494 operator panel. Activate
Service Mode. In the Utilities menu,
choose View Logs. Go into the candidate
TRN (transactions) log. Look for
MOUNT_COMPLETE, DEMOUNT_COMPLETE entries
in your time period. The volser is in
angle brackets, like <001646001646>,
wherein the volser is 001646. (Watch out
for the 3494 PC clock being mis-set.)
dsmserv RESTORE DB performance If your TSM database is in a dedicated
JFS file system, consider eliminating
use of the jfslog during the restoral,
to gain speed. This can be accomplished
by unmounting the filesystem and the
remounting it with the options "rw" and
"nointegrity". Thereafter, unmount and
remount normally.
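The remount trick above, sketched for AIX (mount point illustrative; RUN=echo keeps it a dry run):

```shell
RUN="echo"    # dry run; set RUN="" on the real AIX server host
$RUN umount /tsmdb
$RUN mount -o rw,nointegrity /tsmdb   # no jfslog during the restoral
# ... perform 'dsmserv restore db ...' here ...
$RUN umount /tsmdb
$RUN mount /tsmdb                     # back to normal journaling
```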
dsmserv RESTORE DB Preview=Yes Stand-alone command to display a list
of the volumes needed to restore the
database to its most current state,
without performing the restoral
operation. You must be in the
directory with the dsmserv.opt file,
else will get ANR0000E message; so do:
'cd /usr/lpp/adsmserv/bin'
'DSMSERV RESTORE DB Preview=Yes'
dsmserv runfile Command for the *SM server to run a
single procedure encoded into a file,
and halt upon completing that task.
Syntax: dsmserv runfile <FileName>
where the file contains one or more TSM
server commands, one per line (akin to a
TSM macro).
This command is most commonly run to
load the provided q_* sample scripts:
dsmserv runfile scripts.smp
and to initialize web admin definitions:
dsmserv runfile dsmserv.idl
Ref: Admin Ref manual; Quick Start
manual
See also: Web Admin; SQL samples
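For example, a small runfile can be built and handed to the server like this (the two server commands in it are illustrative; the dsmserv invocation is kept a dry run via RUN=echo):

```shell
runfile=$(mktemp)
cat > "$runfile" <<'EOF'
register admin reporter reporterpw
grant authority reporter classes=analyst
EOF
RUN="echo"    # dry run; set RUN="" on the real (halted) server host
$RUN dsmserv runfile "$runfile"
```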
dsmserv UNLOADDB TSM 3.7+ Stand-alone command to
facilitate defragmentation
(reorganization) of the TSM database,
via unload-reload, unloading the
database in key order so that a later
reload preserves that order. (The
operation does not
"compress" the db, as an early edition
of the TSM Admin Guide stated, but
rather reclaims empty space by
compacting database records - putting
them closer together.)
This operation maximizes the spread
between permanent data and the top of
the database, as needed for temporary
work space such as SQL queries (see
"Database usage").
The output tape from this operation must
have been labeled by the product.
Syntax:
DSMSERV UNLOADDB DEVclass=DevclassName
[VOLumenames=Volnameslist]
[Scratch=Yes|No]
[CONSISTENT=Yes|No]
where:
CONSISTENT Specifies whether server
transaction processing should be
suspended so that the unloaded database
is a transactionally-consistent image.
Default: Yes
The procedure:
- Shut down the server.
- dsmserv unloaddb devclass=tapeclass
scratch=yes
- Halt that server instance.
- Reinitialize the db and recovery log
as needed, as in:
dsmserv format 1 log1 2 db1 db2
- Reload the database:
dsmserv loaddb devclass=tapeclass
volumenames=db001,db002,db003
(The reload will take less time than
the unload - maybe 2/3 the time.)
- Consider doing a DSMSERV AUDITDB to
fix any inconsistencies before putting
the database back into production.
Ref: Admin Guide topic "Optimizing the
Performance of the Database and Recovery
Log"; Admin Ref appendix A
The Tivoli documentation is superficial,
failing to provide information as to how
long you can expect your database to be
out of commission, the risks involved,
the actual benefits, or how long you can
expect them to last. For execution,
there is no documentation saying what
constitutes success or failure, what
messages may appear, or what to do if
the operation fails.
Is it worth it? Customers who have tried
the operation report improvements of
about 10% immediately after the reload,
and very long runtimes (maybe days). It
is probably not worth it.
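The procedure above condensed into a dry-run sketch (device class, volume and file names are illustrative; LOADFORMAT is used per its own entry above, since a LOADDB is planned):

```shell
RUN="echo"    # dry run; set RUN="" on the real, halted server
$RUN dsmserv unloaddb devclass=tapeclass scratch=yes
$RUN dsmserv loadformat 1 log1 2 db1 db2   # reinitialize log and db
$RUN dsmserv loaddb devclass=tapeclass volumenames=db001,db002,db003
$RUN dsmserv auditdb fix=yes               # optional consistency check
```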
dsmserv UPGRADEDB To start the TSM server and, in the
(dsmserv -UPGRADEDB) process, update some of the database
meta-data. Conventionally, a product
upgrade from one release to the next
will require an UPGRADEDB; but when
going between PTFs and patches of the
same release an UPGRADEDB should not be
required.
It does not have to convert any database
data - and thus the operation is
insensitive to the size of the actual
database and should take seconds to
execute regardless of the database
size. All your policies, devices,
etc. will be preserved.
Note that upgrades which do not involve
any change in data formats will not
utilize an Upgradedb. Upgrades that do
involve data format changes will usually
perform the Upgradedb automatically - or
in some cases tell the customer that it
needs to be done. So, usually you do
not have to manually invoke an
Upgradedb when upgrading your TSM server
software in the presence of an existing
TSM database. Naturally, server upgrades
are performed when the server is down.
Do you have to manually invoke this
command? Not in a "migrate install"
(installing a new TSM over old), per the
Quick Start manual: the UPGRADEDB is
performed automatically.
An Upgradedb will *not* update some
things. For example, upgrading a pre-v5
server to v5 will not cause tape drive
definitions to be converted to new
form: if you do Query PATH thereafter,
you will not see any paths for the
drives. In fact, the drives won't work,
and a DEFine PATH won't work: you have
to do DELete DRive and DEFine DRive and
DEFine PATH to fix this.
DSMSERV_ACCOUNTING_DIR Server environment variable to specify
the directory in which the dsmaccnt.log
accounting file will be written.
If directory doesn't exist, or the
environment variable is not set, the
current directory is used for the
accounting file.
NT note: a Registry key instead
specifies this location.
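Setting the variable before server start-up is all that is needed (directory path illustrative):

```shell
# Direct dsmaccnt.log to a dedicated directory; the directory must
# exist, else the current directory is silently used instead.
export DSMSERV_ACCOUNTING_DIR=/tmp/adsmserv-accounting
mkdir -p "$DSMSERV_ACCOUNTING_DIR"
# dsmserv quiet &   # the server would now write dsmaccnt.log there
```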
DSMSERV_CONFIG Server environment variable to point
to the Server Options file.
DSMSERV_DIR Server environment variable to point
to the directory containing the server
executables.
DSMSERV_OPT Server environment variable to point
to the server options file.
dsmserv.42 Version of dsmserv for AIX 4.2, so as to
support ADSM file system volumes > 2GB
in size. In such a system, dsmserv
should be a symlink to dsmserv.42 . Be
sure to define the filesystem as "large
file enabled".
dsmserv.cat ADSM V.3 message catalog installed in
/usr/lib/nls/msg/en_US.
dsmserv.dsk Server directory file which names the
Database and Recovery Log files/volumes,
each on its own line, as referenced by
the server when it starts. (This file is
always read from the directory in which
the server is started.)
Created: 'DEFine DBVolume' and 'DEFine
LOGVolume', and 'dsmserv format', as
specified in the Quick Start manual.
Updated: Each time you define or delete
server volumes. (Humans should never
have to touch this file.)
Note that storage pool disk volumes are
*not* recorded in this file: they are
recorded only in the *sm database.
If this file is absent during an
install, the install will create small
db.adm, log.dsm, backup.dsm,
archive.dsm, and dsmserv.opt files.
At start-up, dsmserv.dsk is used to find
ONE data base or recovery log volume:
the rest of the volumes are located
through a structure in the first 1 MB
that is added to each of the data base
and recovery log volumes during that
volume's creation. That is, each db and
log file contains info about all the
other db and log files, so in a pinch
you could start the server by creating a
minimal dsmserv.dsk file containing just
one db and log file name: the server
will thereafter update dsmserv.dsk with
all the log and db file names. This is
why there is no primary vs. mirror
distinction of volumes listed in this
file. (If minimally populating the file,
it is thus probably best to enter
primary rather than mirror volumes, so
that primary will show up as the first
copy, as you would prefer.)
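                                        For example, minimally repopulating a
                                        lost dsmserv.dsk might look like the
                                        following sketch (the volume paths are
                                        invented; the server fills in the rest
                                        of the volume list at start-up):

```shell
# Seed dsmserv.dsk with one primary db volume and one primary log volume.
# The server reads this at start-up, locates the remaining volumes from
# the control information inside these volumes, and rewrites the file.
cat > dsmserv.dsk <<'EOF'
/tsm/db/db01.dsm
/tsm/log/log01.dsm
EOF
wc -l < dsmserv.dsk
```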
dsmserv.err Server error log, in the server
directory, written when the server
crashes, ostensibly when the server is
being run in the foreground.
Seen to contain messages:
ANR7833S, ANR7834S, ANR7837S, ANR7838S
See also: dsmsvc.err
DSMSERV.IDL See: Web Admin (webadmin)
dsmserv.lock The TSM server lock file. It both
carries information about the currently
running server, and serves as a lock
point to prevent a second instance from
running. Sample contents:
"dsmserv process ID 19046 started Tue
Sep 1 06:46:25 1998".
Msgs: ANR7804I
See also: adsmserv.lock
dsmserv.opt Server Options File, normally residing
in the server directory. Specifies a
variety of server options, one of the
most important being the TCP port number
through which clients reach the server,
as coded in their Client System Options
File.
Note that the server reads the file from
top to bottom during restart. Some
options, like COMMmethod, are additive,
while others are unique specifications.
                                        For unique options, the last one
                                        specified in the file is the one used.
                                        Updating: Because the server reads its
                                        options file only at start time, changes
made to the file via a text editor will
not go into effect until the next server
restart. Use the SETOPT command (q.v.)
to both update the file and put some
options into effect. (Beware, however,
that the command appends to the file,
which can result in there being
multiple, redundant options in the file
which you will want to clean up.)
The DSMSERV_CONFIG environment variable,
or the -o option of 'dsmserv' command,
can be used to specify an alternate
location for the file.
Ref: Admin Ref manual, appendix "Server
Options Reference"
See also: Query OPTion
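                                        Since SETOPT appends rather than
                                        replaces, a quick way to spot the
                                        resulting duplicates is a sketch like
                                        this (the sample file contents are
                                        invented):

```shell
# Build a sample options file with a duplicate left behind by SETOPT:
cat > dsmserv.opt <<'EOF'
* TSM server options
COMMmethod  TCPIP
TCPPort     1500
EXPInterval 24
EXPInterval 12
EOF
# List option keywords appearing more than once.  For unique options the
# last occurrence wins; additive ones (e.g. COMMmethod) may repeat validly.
awk '!/^[ \t]*\*/ && NF { count[toupper($1)]++ }
     END { for (o in count) if (count[o] > 1) print count[o], o }' dsmserv.opt
```

Here the awk step reports `2 EXPINTERVAL`, flagging the redundant pair for manual cleanup.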
dsmserv's, number of See: Processes, server
dsmsetpw HSM: Command to change the ADSM password
for your client node.
dsmsm HSM: Space monitor daemon process which
runs when there are space-managed file
systems defined in
/etc/adsm/SpaceMan/config/dsmmigfstab
dsmsm PID HSM: Is remembered in file:
/etc/adsm/SpaceMan/config/
dsmmigfstab.pid
dsmsnmp ADSMv3: SNMP component.
Must be started before the ADSM server.
dsmsta Storage Agent.
dsmstat Monitors NFS mounted filesystems to be
potentially backed up. Looks for NFS
file system status timeout.
Is not needed if you do not have any NFS
mounted filesystems or you do not want
to use the nfstimeout option.
DSM_DIR also points to this.
See: NFSTIMEout
dsmsvc.err Server error log, in the server
directory, written when the server
crashes, ostensibly when the server is
being run in the background.
See also: dsmserv.err
DSMSVC.EXE Service name of the web server bound to
TCP port 1580.
dsmtca Trusted Communication Agent, aka
Trusted Client Agent program.
Employing the client option
PASSWORDAccess Generate causes dsmtca
to run as root.
For non-root users, the ADSM client uses
a trusted client (dsmtca) process to
communicate with the ADSM server via a
TCP session. This dsmtca process runs
setuid root, and communicates with the
user process (dsmc) via shared memory,
which requires the use of semaphores.
So for non-root users, when you start a
dsmc session, it hands data to dsmtca as
an intermediary to send to the server.
The DSM_DIR client environment variable
should point to the directory where the
file should reside.
dsmulog You can capture *SM server console
messages to a user log file with the
*SM dsmulog utility. You can invoke the
utility with the ADSMSTART shell script
which is provided as part of the ADSM
AIX server package. You can have the
server messages written to one or more
user log files. When the dsmulog utility
detects that the server it is capturing
messages from is stopped or halted, it
closes the current log file and ends its
processing.
(/usr/lpp/adsmserv/bin/)
Ref: Admin Guide; Admin Ref;
/usr/lpp/adsmserv/bin/adsmstart.smp
dsmwebcl.log                            The Web Client log, where all Web
Client messages are written. (Error
messages are written to the error log
file.)
Location: Either the current working
directory or the directory you specify
with the DSM_LOG environment variable.
See also: Web client
Dual Gripper 3494 feature to add a second gripper to
the cartridge picker ("hand") so that it
can hold one cartridge to be stored and
grab one for retrieval. This feature
makes possible "Floating-home Cell" so
that cartridges need not be assigned
fixed cells. "Reach" factors result in
the loss of the top and bottom two rows
of your storage cells, so consider
carefully if you really need a dual
gripper. (Except in a very active
environment with frequent tape
transitions, storage cells are preferred
over having a dual gripper.)
The gripper is not controlled by host
                                        software: it is a 3494 Library Manager
                                        optimizer function (i.e., microcode).
The dual gripper is only used during
                                        periods of high activity (as determined
                                        by the LM).
Dual Gripper usage statistics Gripper usage info is available from the
3494's Service Mode... Go to the Service
menu thereunder, and select View Usage
Info.
DUMPDB See: DSMSERV DUMPDB
dumpel.exe Windows: Dump Event Log, a Windows
command-line utility that dumps an event
log for a local or remote system into a
tab-separated text file. This utility
can also be used as a filter.
DURation In schedules: The DURation setting
specifies the size of the window within
which the scheduled event can begin - or
resume. For example, if the scheduled
event starts at 6 PM and has a DURation
of 5 hours, then the event can start
anywhere from 6 PM to 11 PM. Perhaps
more importantly, if the scheduled event
is preempted (msg ANR0487W), ADSM will
know enough to restart the event if
resources (i.e., tape drives) become
available within the window.
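                                        As an illustrative server command (the
                                        domain and schedule names are invented),
                                        the 6 PM window described above could be
                                        defined as:

```
DEFine SCHedule STANDARD NIGHTLY ACTion=Incremental -
   STARTTime=18:00 DURation=5 DURUnits=Hours
```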
DVD as server serial media Backups can be performed to DVD, in
place of tape. The Admin Guide manual
provides some guidance in configuring
for this. One Windows customer reports
success in a somewhat different way:
Use the Windows program called DLA
                                        (Drive Letter Assignment) from Veritas,
often included in the burner software;
or use a package like IN-CD from Nero.
You can then format the DVD (or CD) like
a diskette. Then define a device-class
of removable file and a manual library.
Now you can write directly on the CD or
DVD.
See also: CD...
DYnamic An ADSM Copy Group serialization
mode, as specified by the
'DEFine COpygroup' command
SERialization=DYnamic operand spec.
This mode specifies that ADSM accepts
the first attempt to back up or archive
an object, regardless of any changes
made during backup or archive
processing.
See: Serialization.
Contrast with Shared Dynamic, Shared
Static, and Static.
See also CHAngingretries option.
DynaText The hypertext utility in ADSMv2 to read
the online Books on most platforms
supporting ADSM: all Unixes, Macintosh,
Microsoft Windows. Obsolete, with the
advent of HTML and PDF.

'E' See: 3490 tape cartridge; Media Type


E-fix IBM term for an emergency software patch
created for a single customer's
situation. As such, e-fixes should not
be adopted by other customers.
See also: Patch levels
E-Lic Electronic Licensing - A key file that
is on the CD, but not located on any
download sites. Thus you must have the
CD loaded in most cases before being
able to use the downloaded filesets.
EBU                                     Enterprise Backup Utility, used with
Oracle 7 databases. Involves a Backup
Catalog. See "RMAN" for Oracle 8
databases.
ECCST Enhanced Capacity Cartridge System Tape;
a designation for the 3490E cartridge
technology, which reads and writes 36
tracks on half-inch tape. Sometimes
referred to as MEDIA2.
Contrast with CST and HPCT.
See also: CST; HPCT; Media Type
.edb Filename suffix for MS Exchange
Database.
Related: .pst
Editor ADSMv3+ client option (dsm.opt or
dsm.sys) option controlling the command
line interface editor, which allows you
to recall a limited number of
previously-issued commands (up to 20)
via the keyboard (up-arrow, down-arrow),
and edit them (up-arrow, Delete, Insert
keys).
AKA "Previous Command Recall".
Specify: Yes or No
Default: Yes in Unix; No in Windows
(Windows does not use this facility, in
deference to the Windows command line
console history capabilities.)
Ref: Unix B/A Client manual, Using
Commands, Remembering Previous Commands
EHPCT 3590 Extended High Performance Cartridge
                                        Tape, as typically used in 3590E drives.
See: 3590 'K'
See also: CST; HPCT
Eject tape from 3494 library Via TSM server command:
'CHECKOut LIBVolume LibName VolName
[CHECKLabel=no] [FORCE=yes]
[REMove=Yes]'
where the default REMove=Yes causes
the ejection.
Via Unix command you can effect this by
changing the category code to EJECT
(X'FF10'):
'mtlib -l /dev/lmcp0 -vC -V VolName
-t ff10'
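                                        A dry-run wrapper can document both
                                        forms side by side; nothing here talks
                                        to a real library, and the library
                                        name, device name, and volser are
                                        hypothetical:

```shell
# Print -- do not execute -- the two equivalent eject operations.
eject_3494() {    # usage: eject_3494 VOLSER
    echo "server:  CHECKOut LIBVolume LIB3494 $1 REMove=Yes"
    echo "unix:    mtlib -l /dev/lmcp0 -vC -V $1 -t ff10"
}
eject_3494 A00123
```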
Ejections, "phantom" Tapes get ejected from the tape library
without TSM having done it. Customers
report the following causes:
- Drive incorrectly configured by
installation personnel. Reads fail,
and the drive (erroneously) signals
the library manager that the tape is
so bad that it should be spit out.
- Excessive SCSI chain length. Caused
severe errors such that the tape was
rejected.
Ejects, pending Via Unix command:
'mtlib -l /dev/lmcp0 -qS'
Elapsed processing time Statistic at end of Backup/Archive job,
recording how long the job took, in
hours, minutes, and seconds, in HH:MM:SS
format, like: 00:01:36. This is
calculated by subtracting the starting
time of a command process from the
ending time of the completed command
process.
Shows up in server Activity Log on
message ANE4964I.
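                                        The arithmetic behind that HH:MM:SS
                                        figure can be reproduced with a small
                                        helper (a sketch only; the client
                                        computes this internally):

```shell
# Format the difference of two epoch-second timestamps as HH:MM:SS.
elapsed_hms() {    # usage: elapsed_hms START_SECS END_SECS
    awk -v s="$1" -v e="$2" 'BEGIN {
        d = e - s
        printf "%02d:%02d:%02d\n", d / 3600, (d % 3600) / 60, d % 60
    }'
}
elapsed_hms 0 96    # a 96-second job prints 00:01:36, as in the example
```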
ELDC Embedded Lossless Data Compression
compression algorithm, as used in the
3592. See also: ALDC; LZ1; SLDC
Element Term used to describe some part of a
SCSI Library, such as the 3575. The
element number allows addressing of the
hardware item as a subset of the SCSI
address. An element number may be used
to address a tape drive, a tape storage
slot, or the robotics of the library. In
such libraries, the host program (TSM)
is physically controlling actions and
hence specific addressing is necessary.
In libraries where there is a supervisor
program (e.g., 3494), actions are
controlled by logical host requests to
the library, rather than physical
directives, and so element addressing is
not in effect.
In TSM, an element is described in the
'DEFine DRive' command ELEMent
parameter.
Note that element numbers do not
necessarily start with 1.
See also: HOME_ELEMENT
Element address SCSI designation of the internal
elements of a SCSI device, such as a
small SCSI library, where each slot,
drive, and door has its own element
address as a subset of the library's
SCSI address. Element addresses have
fixed assignments, per the device
manufacturer: your definitions must
conform to them.
If a SCSI library drive cannot be used
within TSM but can be used successfully
via external means (e.g., the Unix 'tar'
command) that could indicate incorrect
Element addresses. Another symptom of
an element mismatch is if TSM will mount
a tape but be unable to use it and/or
dismount it.
Element addresses, existing You can probably use the 'tapeutil' or
'ntutil' command: open first device and
then do Element Inventory (14).
Or use 'lbtest' (q.v.):
Select 6 to open the library, 8 to get
the element count and 9 to get the
inventory. Scroll back to the top of the
9 listing to find the drives and element
addresses associated with SCSI IDs.
In AIX, note that the 'lsdev' command is
typically of no help in identifying the
element address from the SCSI ID and
drive - there is no direct correlation.
Example of using lbtest: Library with
three drives mt1, mt2 and mt3 (drives
can be either rmtX or mtX devices). The
                                        slot addresses are 5, 6, and 7. It is
believed that mt1 goes with element 5.
To test this theory a tape needs to
loaded in the drive located at slot 5
either manually or using lbtest. To use
lbtest do the following:
- Invoke lbtest
- Select 1: Manual test
- Select 1: Set device special file
(e.g., /dev/lb0)
- Prompt: "Return to continue:"
Press Enter
- Select 6: open
- Select 8: ioctl return element count
(shows the number of drives, slots, ee
ports and transports)
- Select 9: ioctl return all library
inventory
(Will show the element address of all
components. Next to element address
you will see indications of FULL or
EMPTY.)
- Select 11: move medium transport
element address:
Source address moving from:
(select any slot with tape)
Destination address move to:
(in this case it would be 5)
Invert option:
Select 0 for not invert
- Select 40: execute command
(which does AIX command `tctl -f
/dev/mt1 rewoffl`)
If the command is successful, the
drive and element match. If you get
the message "Driver not ready" try
/dev/mt2 and so on until it is
successful: the process of
elimination.
- Select 11: move medium
Source address will be 5 and
destination will be 6 for the next
drive.
- Select 40: execute command
- Repeat selections 11 and 40 for each
remaining drive.
- After the last drive has been verified
select 11 to return tape to its slot.
select 99 to return to opening menu
select 9 to quit
Element number See: Element address
Empty Typical status of a tape in a 'Query
Volume' report, reflecting a sequential
access volume that either had just been
acquired for use from the Scratch pool,
or had been assigned to the storage pool
via DEFine Volume, and data has not yet
been written to the volume.
Can also be caused when the empty tapes
are not in the library by virtue of MOVe
MEDia: another MOVe MEDia would have to
be done to get them to go to scratch,
because if the tapes are out of the
library and go to scratch you will lose
track of them.
See also: Pending
Empty directories, backup Empty directories are only backed up
during an Incremental backup, not in a
Selective backup. (Some portions of the
ADSM documentation suggest that empty
directories are not backed up: this is
incorrect - they are backed up.)
Empty directories, restoring See "Restore and empty directories".
Empty file and Backup The backup of an empty file does not
require storage pool space or a tape
mount: it is the trivial case where all
the info about the empty file can be
stored entirely in the database entry.
However, if supplementary data such as
an Access Control List (ACL) is attached
to the file, it means that the entry is
too data-rich to be entirely stored in
the database and so ends up in a storage
pool.
EMTEC European Multimedia Technologies
Former name: BASF Magnetics, which
changed its name to EMTEC Magnetics
after it was sold by BASF AG in 1996.
Starting in 2002, all famous BASF-brand
audio, video and data media products
will bear the name "EMTEC".
Emulex LP8000 Fibre Channel Adapter Needs to be configured as "fcs0" device
for it to work with the TSM smit menus.
If inadvertently defined as an lpfc0
device, it suggests that you have loaded
the "emulex" device driver instead,
which corresponds to the filesets
devices.pci.lpfc.diag and
                                        devices.pci.lpfc.rte, filesets which are
                                        provided by Emulex. In order to have
the device recognized as a fcs0 device
instead of lpfc0 device, you need to
remove those two filesets and rerun
cfgmgr. You of course will need to have
the proper IBM AIX fibre channel
filesets installed. Those filesets are
                                        discussed in the TSM server readme.
http://www.emulex.com/ts/fc/docs/
frame8k.htm
ENable Through ADSMv2, the command to enable
client sessions. Now ENable SESSions.
ENable SESSions TSM server command to permit client
node Backup and Archive sessions,
undoing the prohibition of a prior
DISAble SESSions command.
Note that the Disable status does not
survive across an AIX reboot: the status
is reset to Enable.
Determine status via 'Query STatus' and
look for "Availability".
Msgs: ANR2096I
See also: DISAble SESSions; ENable
ENABLE3590LIBRary Definition in the server options file
(dsmserv.opt).
Specifies the use of 3590 tape drives
within 349x tape libraries.
Default: No?
Msgs: ANR8745E
Ref: Installing the Server...
ENABLE3590LIBRary server option, query 'Query OPTion'
ENABLELanfree TSM client option to specify whether to
enable an available LAN-free path to a
storage area network (SAN) attached
storage device. A LAN-free path allows
backup, restore, archive, and retrieve
processing between the Tivoli Storage
Manager client and the SAN-attached
storage device.
See also: LanFree bytes transferred
ENABLEServerfree TSM client option to specify whether to
enable SAN-based server-free image
backup which off-loads data movement
processing from the client and server
processor and from the LAN during image
backup and restore operations. Client
data is moved directly from the client
disks to SAN-attached storage devices by
a third-party copy function initiated by
the Tivoli Storage Manager server. The
client disks must be SAN-attached and
accessible from the data mover, such as
a SAN router. If SAN errors occurs, the
client fails-over to a direct connection
to the server and moves the data via
LAN-free or LAN-based data movement.
See also: Server-free; Serverfree data
bytes transferred
Encryption of client-sent data          New in TSM 4.1. Uses a standard
56-bit DES routine to provide the
encryption. The encryption support uses
a very simple key management method,
where the key is a textual password. The
key is only used at the client, it is
not transferred or stored at the server.
Multiple keys can be used, but only the
key entered when the ENCryptkey client
option was set to SAVE is stored.
Information stored in the file stream on
the server indicates that encryption was
used and which type. Unlike the TSM user
password, the encryption key password is
case-sensitive. If the password is lost
or forgotten, the encrypted data cannot
be decrypted, which means that the data
is lost.
Where the client options call for both
compression and encryption, compression
is reportedly performed before
encryption - which makes sense, as
encrypted data is effectively binary
data, which would either see little
                                        compression, or even expansion. And,
encryption means data secured by a key,
so it further makes sense to prohibit
any access to the data file if you do
not first have the key.
Performance hit: Be well aware that
encrypting network traffic comes at a
substantial price, in lowering
throughput.
The TSM 5.3 client introduces a stronger
encryption type: beyond the older DES56,
you can select AES128 via the
ENCRYPTIONTYPE client option.
See: ENCryptkey
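                                        Putting those options together, a
                                        client options file might contain a
                                        fragment like this sketch (the include
                                        pattern and values are invented
                                        examples, not recommendations):

```
* Client options fragment -- hypothetical values
* (ENCRYPTIONTYPE AES128 requires a TSM 5.3+ client; older default is DES56)
ENCRYPTIONTYPE   AES128
ENCryptkey       Save
INCLUDE.ENCRYPT  /secure/.../*
```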
ENCryptkey TSM 4.1 Windows option, later extended
to other clients, specifying whether to
save the encryption key password to the
Registry in encrypted format. (Saving it
avoids being prompted for the password
when invoking the client, much like
"PASSWORDAccess generate" saves the
plain password.)
Syntax: ENCryptkey Save|Prompt
where Save says to save the encryption
key password while Prompt says not to
save it, such that you are prompted in
each invocation of the client.
Where stored:
Unix: The encryption key and password
are encrypted and stored in the
TSM.PWD file, in a directory
determined by the PASSWORDDIR option.
Windows: Registry
Default: Save
See also: /etc/security/adsm/;
INCLUDE.ENCRYPT; EXCLUDE.ENCRYPT
End of volume (EOV) The condition when a tape drive reaches
the physical end of the tape. Unlike
disks, which have fixed, known
geometries, tape lengths are inexact. In
writing a tape, its end location is
known only by running into it.
End-of-volume message ANR8341I End-of-volume reached...
Enhanced Virtual Tape Server 1998 IBM product: To optimize tape
storage resources, improve performance,
and lower the total cost of ownership.
See also: Virtual Tape Server
Enrollment Certificate Files Files provided by Tivoli, with your
server shipment, containing server
license data. Filenames are of the form
_______.lic .
See: REGister LICense
Enterprise Configuration and Policy TSM feature which makes possible
Management providing Storage Manager configuration
and policy information to any number of
managed servers after having been
defined on a configuration server. The
managed servers "subscribe" to profiles
owned by the configuration manager, and
thereafter receive updates made on the
managing server. The managed server
cannot effect changes to such served
information: it is only a recipient.
Ref: Admin Guide, chapter on "Working
with a Network of IBM Tivoli Storage
Manager Servers"
Enterprise Management Agent The TSM 3.7 name for the Web Client.
Environment variables See: DSM_CONFIG, DSM_DIR, DSM_LOG,
DSMSERV_ACCOUNTING_DIR,
VIRTUALMountpoint
In AIX, you can inspect the env vars for
a running process via: ps eww <PID>
Ref: Admin Guide, "Defining Environment
Variables"; Quick Start, "Defining
Environment Variables"
EOS End of Service. IBM term for
discontinuance of support for an old
product. Their words:
"Defect support for Tivoli products will
generally be provided only for the
current release and the most recent
prior release. A prior release will be
eligible for service for 12 months
following general availability of the
current release. These releases will be
supported at the latest maintenance
("point release") level. Usually,
there will be 12 months' notice of EOS
for a specific release. At the time of
product withdrawal, notice of the EOS
date for the final release will be
given. At the time a release reaches
EOS, it will no longer be supported,
updated, patched, or maintained. After
the effective EOS date, Tivoli may
elect, at its sole discretion, to
provide custom support beyond the EOS
date for a fee."
See also: WDfM
EOT An End Of Tape tape mark.
See also: BOT
EOV See: End of volume
EOV message ANR8341I End-of-volume reached...
ERA codes (from 3494) See MTIOCLEW (Library Event Wait)
Unsolicited Attention Interrupts table
in the rear of the SCSI Device Drivers
manual.
Erase tape See: Tape, erase
errno The name of the Unix system standard
error number, as enumerated in header
file /usr/include/sys/errno.h .
Some *SM messages explicitly refer to it
by its name, some by generic return
code.
errno 2 Common error indicating "no such file or
directory", often caused by specifying a
file name without using its full path,
such that the operation seeks the file
in the current directory rather than a
specific place.
Error handler See: ERRORPROG
Error log A text file (dsmerror.log) written on
disk that contains ADSM processing error
messages.
Beware symbolic links in the path, else
suffer ANS1192E.
See also: DSM_LOG; ERRORLOGname;
ERRORLOGRetention
Error log, operating system AIX has a real hardware error log,
reported by the 'errpt' command.
Solaris records various hardware
problems in the general
/var/log/messages log file.
Error log, query ADSM 'dsmc Query Options' or TSM 'dsmc
show options', look for "Error log".
Error log, specify location The DSM_LOG Client environment variable
may be used to specify the directory in
which the log will live.
ADSMv3: add this to dsm.sys:
* Error log
errorlogname /var/adm/log/
dsmerror.log
errorlogretention 14 D
Error log size management Use the client option ERRORLOGRetention
to prune old entries from the log, and
to potentially save old entries.
Error messages language "LANGuage" definition in the server
options file.
Error Number In messages, usually refers to the error
number returned by the operating system.
In Unix, this is the "errno" (q.v.).
Error Recovery Cell See "Gripper Error Recovery Cell"
ERRORLOGname Macintosh, Novell, and Windows options
file and command line option for
specifying the name of the TSM error
log file (dsmerror.log), where error
messages are written. (Note that it is
the name of a file, not a directory.)
Beware symbolic links in the path, else
suffer ANS1192E.
See also: DSM_LOG; dsmerror.log;
ERRORLOGRetention
ERRORLOGRetention Client System Options file (dsm.sys)
option (not Client User Options file, as
the manual may erroneously say) to
specify the number of days to keep error
log entries, and whether to save the
pruned entries (in file dsmerlog.pru).
Syntax:
ERRORLOGRetention [N | <days>] [D | S]
where:
N Do not prune the log (default).
days Number of days of log to keep.
D Discard the error log entries
(the default)
S Save the error log entries to
same-directory file dsmerlog.pru
Placement: Code within server stanza.
Default: Keep logged entries
indefinitely.
See also: SCHEDLOGRetention
ERRORPROG Client System Options file (dsm.sys)
option to specify a program which ADSM
should execute, with the message as an
operand, if a severe error occurs
during HSM processing. Can be as simple
as "/bin/cat". Code within the server
stanza.
ERT Estimated Restore Time
See also: Estimate
ESM Enterprise Storage Manager, as in ADSM
or TSM.
ESTCAPacity The estimated capacity of volumes in a
Device Class, as specified in the
'DEFine DEVclass' command.
This is almost always just a human
reference value, having no impact on how
much data TSM actually puts onto a tape
- which is as much as it can.
Note that the value "latches" for a
given volume when use of the volume
first begins. Changing the ESTCAPacity
value will apply to future volumes, but
will not change the estimated capacity
of prevailing volumes (as revealed in a
'Query Volumes' report).
After a reclamation, the ESTCAPacity
value for the volume returns to the base
number for the medium type.
Estimate The ADSMv3 Backup/Archive GUI introduced
an Estimate function. At the conclusion
of backups, this implicit function
collects statistics from the *SM server,
which the client stores, by *SM server
address, in the .adsmrc file in the
user's Unix home directory, or Windows
dsm.ini file. In a later operation, the
GUI user may invoke the Estimate
function to get a sense of what will be
involved in a subsequent Backup,
Archive, Restore, or Retrieve: The
client can then estimate the elapsed
time for the operation on the basis of
the saved historical information. A user
can then choose to cancel the operation
before it starts if the amount of data
selected or the estimated elapsed time
for the operation is excessive.
The information provided:
Number of Objects Selected: The number
of objects (files and directories)
selected for an operation such as
backup or restore.
Calculated Size: The Estimate function
calculates the number of bytes the
currently selected objects occupy by
scanning the selected directories or
requesting file information from the *SM
server.
Estimated Transfer Time: The client
estimates the elapsed time for the
operation on the basis of historical
info, calculating it by using the
average transfer rate and average
compression rate from previous
operations.
See also: .adsmrc; dsm.ini
Estimated Capacity A column in a 'Query STGpool' report
telling of the estimated capacity of the
storage pool. The value is dependent
upon the stgpool MAXSCRatch value having
been set: If the stgpool has stored data
on at least one scratch volume, the
estimated capacity includes the maximum
number of scratch volumes allowed for
the pool. (For tape stgpools, the
EstCap number is a rather abstract
value, amortized over the all the tapes
in a library - which typically have to
be available for use in other storage
pools as well, and so is usually
meaningless for any single stgpool.
For a sequential storage pool the value
is an estimate of the total amount of
available space on all volumes in the
storage pool - a value which includes
all of the storage pool's volumes
(regardless of their current Access
Mode), averaging the "est_capacity_mb"
value from each volume currently
assigned to the storage pool (as
influenced by your Devclass ESTCAPacity
setting), then multiplying that average
by the maximum number of volumes the
pool could encompass (i.e. the quantity
of volumes DEFined to the storage pool
plus the MAXSCRatch value).
See "Pct Util, from Query STGpool" for
observations on deriving the amount of
data contained in the stgpool.)
TSM uses estimated capacity to determine
when to begin reclamation of stgpool
volumes.
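                                        A back-of-envelope version of that
                                        calculation, with invented numbers:

```shell
# Estimated capacity ~ average est_capacity_mb of current volumes
# multiplied by (volumes DEFined to the pool + MAXSCRatch):
awk 'BEGIN {
    avg_mb     = 60000    # invented average per-volume estimate (MB)
    defined    = 3        # volumes DEFined to the pool
    maxscratch = 17       # the stgpool MAXSCRatch setting
    printf "%.1f GB\n", avg_mb * (defined + maxscratch) / 1024
}'
```

With these numbers the estimate comes out to 1171.9 GB, regardless of how full the three defined volumes actually are.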
Estimated Capacity A column in a 'Query Volumes' report
telling of the estimated capacity of a
volume, which is as was specified via
the ESTCAPacity operand of the 'DEFine
DEVclass' command. The value reported
is the "logical capacity": the content
after 3590 hardware compression. If the
files were well compressed on the
client, then little or no compression
can be done by the drives and thus the
closer the value will be to physical
capacity. Experience shows that the
capacity value is not assigned to a
volume until the first data is actually
written to it.
Ref: TSM Admin Guide, "How TSM Fills
Volumes"
See also: ESTCAPacity; Pct Util
/etc/.3494sock Unix domain socket file created by the
Library Manager Control Point daemon
(lmcpd).
/etc/adsm/ Unix directory created for storing
control information. All Unix systems
have the HSM SpaceMan subdirectory in
there. Non-AIX Unix systems have their
encrypted client password file in there
for option PASSWORDAccess GENERATE.
The 3.7 Solaris client (at least, GUI)
is reported to experience a
Segmentation Fault failure due to a
problem in the encrypted password file.
Removing the problem file from the
/etc/adsm/ directory (or, the whole
directory) will eliminate the SegFault.
(Naturally, you have to perform a root
client-server operation like 'dsmc q
sch' to cause the password file to be
re-established.)
See also: /etc/security/adsm; Password,
client, where stored on client;
PASSWORDDIR
/etc/adsm/SpaceMan/ HSM directory for managing file systems
controlled by HSM.
If you accidentally delete any of its
subdirectories, they are not
automatically recreated: you have to do
so manually.
There is also a .SpaceMan HSM directory
in each controlled file system.
/etc/adsm/SpaceMan/ActiveRecallTab HSM active recall table (binary).
(Has been in HSM since its early days.)
Is used by commands such as dsmrm and
dsmq. The file is automatically created
and updated if a recall process is
started or stopped.
/etc/adsm/SpaceMan/candidatesPools/ Directory used by the HSM dsmscoutd
daemon and dsmautomig process. In that
directory will be APool.* and BPool.*
files. The APool.* file is the active
migration candidates list, as employed
by the dsmautomig process. The dsmscoutd
daemon is busily scouting for new
candidates, which it adds to the BPool
list. When the APool list is fully
migrated, the BPool list is swapped over
to become the new APool, and dsmscoutd
begins building a new BPool.
/etc/adsm/SpaceMan/config/ HSM directory housing files governing
file system management.
/etc/adsm/SpaceMan/config/ Global state file.
dmiFSGlobalState If necessary to recreate, cd into the
config directory, then do 'dsmmigfs
globalreactivate'.
/etc/adsm/SpaceMan/config/dsmmigfstab HSM ASCII table of its managed file
systems.
See: "dsmmigfstab" for more info
/etc/adsm/SpaceMan/config/dsmmigfstab The dsmmigfstab.pid file is a lock file
.pid for the dsmmigfstab file. It contains
the process ID of the process updating
it...or at least the process which last
                                        updated it. (Observation shows the pid
                                        file may be old and the contained PID
                                        may not reflect a current system
                                        process.)
/etc/adsm/SpaceMan/dsmmonitord.pid The dsmmonitord.pid file is a lock file
for the currently running monitor
daemon, created by that daemon. The
file contains the PID of that process,
in ASCII numerals.
If the file is lost, restart the daemon
to recreate it.
/etc/adsm/SpaceMan/dsmrecalld.pid The dsmrecalld.pid file is a lock file
for the currently running recall
daemon, created by that daemon. The
file contains the PID of that process,
in ASCII numerals.
If the file is lost, restart the daemon
to recreate it.
/etc/adsm/SpaceMan/status/ HSM status info directory. Files in it
have hexadecimal names, are 32 bytes in
size, and contain binary data:
space-management-related statistics for
the file system with which the file is
associated. The file is pointed to by a
symlink: it is the target of the
.SpaceMan/status entry in the
space-managed file system.
Status info is rebuilt by dsmreconcile.
/etc/ibmatl.conf Library Manager Control Point Daemon
(lmcpd) configuration file in Unix.
Defines the 3494 libraries that this
host system will communicate with.
Each active line in the file consists
of three parts:
1. Library name: Is best chosen to be
the network name of your library,
such as "LIB1" in a
"LIB1.UNIVERSITY.EDU" name.
In AIX, the name must be the one that
was tied to the /dev/lmcp_ device
driver during SMIT configuration.
In Solaris, this is the arbitrary
symbolic name you will specify on the
DEVIce operand of the DEFine LIBRary
TSM server command, and use with the
'mtlib' command -l option to work
with the library.
2. Connection type: If RS-232, the name
of the serial device, such as
/dev/tty1. If TCP/IP, the IP address
of the library. (Do not code
":portnumber" as a suffix unless you
have configured the 3494 to use a
port number other than "3494", as
reflected in /etc/services.)
3. Identifier: The 1-8 character name
you told the 3494 in Add LAN Host to
call this host system (Host Alias).
The file may be updated at any time; but
the lmcpd does not look at the file
except when it starts, so needs to be
restarted to see the changes.
Ref: "IBM SCSI Tape Drive, Medium
Changer, and Library Device Drivers:
Installation and User's Guide" manual
(GC35-0154)
See also: Library Manager Control Point
Daemon
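Putting the three parts together, a hypothetical ibmatl.conf entry for a TCP/IP-attached 3494 (the library name, IP address, and host alias here are invented) can be sanity-checked with awk:

```shell
#!/bin/sh
# Write a sample ibmatl.conf-style file (to a temp path, not the real
# /etc/ibmatl.conf) and verify each active line has the three expected
# fields: library name, connection (IP or tty), and 1-8 char host alias.
sample=$(mktemp)
cat > "$sample" <<'EOF'
# comment lines are ignored
LIB1  192.168.10.20  myhost
EOF
awk '!/^#/ && NF {                   # skip comments and blank lines
    if (NF != 3 || length($3) > 8)
        { print "bad entry: " $0; exit 1 }
    print "library " $1 " at " $2 " as alias " $3
}' "$sample"
rm -f "$sample"
```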
/etc/ibmatl.pid Library Manager Control Point (LMCP)
Daemon PID number file. The lmcpd
apparently keeps it open and locked, so
it is not possible for even root to open
and read it.
/etc/mnttab in Solaris Prior to Solaris 8, /etc/mnttab was a
mounts table file. As of Solaris 8, it
is a mount point for the mnttab file
system! The name should be excluded from
backups (in dsm.opt code
"Domain -/etc/mnttab"), as it does not
have to be restored: the OS will
re-create it.
/etc/security/adsm/ AIX default directory where ADSM stores
the client password. Overridable via
the PASSWORDDIR option.
ADSMv3: Should contain one or more files
whose upper case names are the servers
used by this client, and whose contents
consist of an explanatory string
followed by an encrypted password for
reaching that server.
TSMv4: File name is TSM.PWD .
This password file is established by the
client superuser performing a
client-server command which requires
password access, such as 'dsmc q sched'.
See also: Client password, where stored
on client; ENCryptkey; /etc/adsm;
PASSWORDDIR
Ethernet card, force use of specific You may have multiple ethernet cards in
a computer and want client sessions to
use a particular card. (In networking
terms, the client is "multi-homed".)
This can be effected via the client
TCPCLIENTAddress option, in most cases;
but watch out for the server-side node
definition having a hard-coded HLAddress
specification.
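For example, a dsm.opt fragment pinning the client's contact address to one interface might look like the following (the address shown is, of course, hypothetical):

```
* Tell the server to contact this client on the second NIC
TCPCLIENTAddress 192.168.2.10
```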
EVENT Special archive copygroup type in TSM
5.2.2+ for Event-based retention
policy.
Ref: API manual; Admin Guide manual
Event ID NN (e.g. Event ID 11) A Windows Event number, as can be seen
in the Windows Event Viewer. A handy
place to search for their meaning:
http://www.eventid.net/search.asp
Event ID: 17055 As when backing up an MS SQL db.
Apparently the backup process was
interrupted and this caused the BAK file
to become corrupt. This also makes it
impossible to restore from the BAK file,
another reported symptom. The BAK files
were deleted and recreated and things
worked thereafter.
Event Logging An ADSM/TSM feature. You can define
event receivers using FILEEXIT or
USEREXIT support and collect real time
event data. You can then create your
own parsing utilities to sort the data
and arrange the results to suit your
needs. This avoids the Query Event
command, which is compute intensive and
requires a generous amount of server
resources. Event Logging is one way to
alleviate expensive queries against your
server.
Windows: Employ the dsmcutil
/EVENTLOGGING option or the Windows GUI
setup wizard (for configuring the client
scheduler) which has a checkbox for
event logging.
See: BEGin EVentlogging; Disable Events;
ENable EVents; Query ENabled
Event records, delete old 'DELete event Date [Time]'
Event records retention period, query 'Query STatus', look for
"Event Record Retention Period"
Event records retention period, set 'Set EVentretention N_Days'
Default: Installation causes it to be
set to 10 days.
Event return codes Return codes in the Event Log can be
other than what you might expect...
If a client schedule command is executed
asynchronously, then it is not possible
for TSM to track the outcome, in which
case the event will be reported as
"Complete" with return code 0. To get a
true return code, run the command
synchronously, where possible, as in
using Wait=Yes.
If the command is a Server Script that
includes several commands which are
simply stacked to run in sequence, each
of those commands may or may not end
with return code 0, but ultimately the
script exits with a return code of 0,
then the event will be reported as
"Complete" with return code 0. The
obvious treatment here is to write the
Script to examine the return code from
each invoked command and exit early when
a result is non-zero. Again, such
commands must be synchronous.
See also: Return codes
Event server See: TEC
EVENTS table SQL table. Columns:
SCHEDULED_START, ACTUAL_START,
DOMAIN_NAME, SCHEDULE_NAME, NODE_NAME,
STATUS, RESULT, REASON.
More reliable than the SUMMARY table,
but getting at data can be a challenge.
You need to specify values for the
SCHEDULED_START and/or ACTUAL_START
columns in order to get older data from
the EVENTS table: SELECT * FROM EVENTS
WHERE SCHEDULED_START>'06/13/2003'.
Restriction: Dates must be explicit, not
computed or relative; so the construct
"scheduled_start>current_timestamp - 1
day" won't work (see APAR IC34609).
For a developer, the EVENTS table is a
little tricky. Unlike BACKUPS, NODES,
ACTLOG, etc., which have a finite number
of records, the EVENTS table is
unbounded. If you do a Query EVent with
date criteria beyond your event record
retention setting, you'll get a status
of Uncertain. If you do a Query EVent
for future dates, you get a status of
Future. When the Query EVent function
was "translated" to the SELECTable
EVENTS table, the question as to what
constitutes a complete table
(i.e. SELECT * FROM EVENTS) needed to be
addressed. Since EVENTS is unbounded,
the table is theoretically infinite in
size. So the developers decided to
mirror Query EVent behavior and thus get
only the records for today, by default.
Note that SELECT does not support the
reporting of Future events from the
EVENTS table, but it will show you
Uncertain records that go past your
event record retention.
See also: APAR IC34609 re timestamps
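Since the SELECT itself cannot compute a relative date, one workaround is to have the invoking shell compute the explicit date and substitute it into the statement before passing it to dsmadmc. A sketch (date -d is GNU date syntax; the admin ID shown is hypothetical):

```shell
#!/bin/sh
# Build an EVENTS query with an explicit date computed by the shell,
# since "current_timestamp - 1 day" fails inside the SELECT itself
# (APAR IC34609).
since=$(date -d '7 days ago' '+%m/%d/%Y')
sql="SELECT * FROM EVENTS WHERE SCHEDULED_START>'$since'"
echo "$sql"
# Typically then passed to the admin CLI, e.g.:
#   dsmadmc -id=admin -password=xxx "$sql"
```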
Events, administrative command 'Query EVent ScheduleName
schedules, query Type=Administrative'
Events, client schedules, query 'Query EVent DomainName ScheduleName'
to see all of them. Or use:
'Query EVent * * EXceptionsonly=Yes'
to see just problems, and if none, get
message "ANR2034E QUERY EVENT: No match
found for this query."
EVENTSERVer Server option to specify whether, at
startup, the server should try to
contact the event server.
Code "Yes" or "No". Default: Yes
Exabyte 480 8mm library with 4 drives and 80 tape
slots. A rotating cylindrical silo sits
above the four tape drives.
Examined See message ANS1899I exploration.
*EXC_MOUNTWait An Exchange Agent-only option that
tells the Exchange Agent to wait for
media (tape) mounts when necessary.
Values: Yes, No.
excdsm.log The TDP for Exchange log file, normally
located in the installation directory
for TDP for Exchange (unless you changed
it).
Exchange Microsoft Exchange, a mail agent.
Exchange stores all mailboxes in one
file (information store) ... therefore
you can't restore individual mailboxes.
(More specifically, there is no "brick
level" backup/restore due to the absence
of a native "backup and restore" API
from Microsoft (as of Exchange 5.5 and
2000; a subsequent version may provide
the API capability). In Exchange 2000,
you can somewhat mitigate having to do
mailbox restores if you use the deleted
mailbox retention option (or a similarly
named setting). This will allow
you to recover a mailbox after it has
been deleted X number of days ago, based
on this setting. Exchange 2003 should
have "Recovery Storage Group" that will
allow you to restore an individual
mailbox "database" (not a single
mailbox, just the mailbox database) into
a special storage group without
impacting the live server. You can then
connect to it and use ExMerge to export
the individual mailbox. Still lacking,
but something.
Ref: In www.adsm.org, search on "brick",
and in particular see Del Hoobler's
postings.)
Backed up by Tivoli Storage Manager for
Mail (q.v.).
If you have version 1.1.0.0 of the
ADSMConnect Exchange Agent, then you
MUST be running the backup as Exchange
Site Service Account. This account, by
default, has the correct permissions to
back up the Exchange Server.
Performance: Tivoli's original testing
showed that "/buffers:3,1024" seemed to
produce the best results.
Redbook: Connect Agent for Exchange.
See also: ARCserve; TDP for Exchange;
TXNBytelimit; TXNGroupmax
Exchange, delete old backups With TDP for Exchange version 1, look at
the "EXCDSMC /ADSMAUTODELETE" command.
With TDP for Exchange version 2, you do
not have to worry about deletions
because it has the added function of TSM
policy management that will handle
expiration of old backups
automatically.
Exchange, restore a single mailbox? *SM can only do this if Microsoft
provides an API that makes it possible,
and Microsoft DOES NOT have mailbox/item
level backup and restore APIs for any
version of Exchange including the new
Exchange 2000. There are vendors who
have coded solutions using APIs (like
MAPI) that are not intended for backup
and restore. These solutions tend to
take large amounts of time for backups
and full restores... (Try restoring a
50Gig IS or storage group from an item
level backup and restore.) Microsoft
themselves claims that they have tried
to come up with a way to provide some
type of item level restore support via
the backup and restore APIs but have not
succeeded because of the architecture of
the JET database (the database that is
the heart of Exchange.) Microsoft
contends that customers should take
advantage of deleted item level recovery
and the new deleted mailbox level
recovery of Exchange 2000 to solve these
problems.
Ref: "TDP for Microsoft Exchange Server
Installation and User's Guide" manual,
appendix B topic "Individual Mailbox
Restore".
A third party vendor, Ontrack Software
(www.ontrack.com) has a software product
called PowerControls which claims to
read a .edb full backup to extract a
single mailbox.
Exchange, restore across servers? It can be done. One customer says:
The trick is to specify the
TSM-nodename of the FROM-server when
you restore on the TO-server.
For instance:
tdpexcc restore "Storage Group C" FULL
/Mountwait=Yes /MountDatabases=Yes
/excserver=<TO-server>
/fromexcserver=<FROM-server>
/TSMPassword=<TSM_PW FROM-server>
/tsmnode=<TDP-TSMNodename FROM-server>
Another says:
Go to the restore server and do a
restore of the mail (make sure erase
existing logs is CHECKED!), but DO NOT
restore the DIRECTORY, only the
information store, private and public.
Then after the restore restart the
services for exchange and go into the
Administrator program (see TechNet
article ID Q146920 for full details).
Go into Server Objects, and then select
Consistency Adjuster.
Under the Private Information Store
section make sure Synchronize with the
directory is checked, click All
Inconsistencies and away you go. This
will rebuild the user directory whole
list and all the mail.
Naturally, be sure that your operating
system, Exchange, and TDP levels are all
the same across the server systems, and
do the deed only after having a full
backup. Here are some Microsoft docs
explaining some issues to keep in mind:
http://www.microsoft.com/exchange/
techinfo/deployment/2000/
MailboxRecover.asp
http://www.microsoft.com/exchange/
techinfo/deployment/2000/
E2Krecovery.asp
Exchange, restoring You can restore the Exchange Db to a
different computer, provided it is
within the same Exchange Org.; but only
the info store - not the directory.
Performance: An Exchange restore will
almost always be slower than backup
because it is writing to disk and, more
importantly, it is replaying transaction
logs. Use Collocation by filespace, to
keep the data for your separate storage
groups on different tapes to facilitate
running parallel restores.
Exchange 2000 SRS, back up via CLI To backup the Exchange 2000 Site
Replication Service via the command
line, do like:
tdpexcc backup "SRS Storage" full
/tsmoptfile=dsm.opt /logfile=exsch.log
/excapp=SRS >> excfull.log
Exchange 2003 (Exchange Server 2003) Requires Data Protection for Exchange
version 5.2.1 at a minimum.
See: http://www.ibm.com/support/
entdocview.wss?uid=swg21157215
Exchange Agent Only deals with Information Store (IS)
and Directory (DIR) data. The Message
Transfer Agent (MTA) is not dealt with
at all.
The Exchange Agent has 4 backup types:
Full, Copy, Incremental, Differential:
"Full" and "Copy" backup contain the
database file, all transaction logs,
and a patch file.
"Incremental" and "Differential" backups
contain only the transaction logs, not
the database file.
Each backup will show which type it is
in the backup history list on the
Restore Tab.
See also: TDP for Exchange
Exchange databases There are two or three databases in Exchange...
- The Directory, dir.edb, which stores
the users/groups/etc.
- The Public Database, pub.edb, which
store public folders and such.
- The Private Database, priv.edb, which
stores the private mailboxes and
such.
Exchange product files Seagate had a product for backing up
open Exchange files. It uses ADSM as a
backup device (through the API). Then
Seagate sold the backup software
division to Veritas, so see:
http://www.veritas.com/products/stormint
Exclude The process of specifying a file or
group of files in your include-exclude
options file with an exclude option to
prevent them from being backed up or
migrated. You can exclude a file from
backup and space management, backup
only, or space management only. Note
that exclusion operates ONLY ON FILES!
Any directories which ADSM finds as it
traverses the file system will be backed
up. The other implication of this is
that ADSM will always traverse
directories, even if you don't want it
to, so it can waste a lot of time. To
avoid directory traversal, use
EXCLUDE.DIR, or consider using virtual
mount points instead to specify major
subdirectories to be processed, and omit
subdirectories to be ignored.
Note that excluding a file for which
there are prior backups has essentially
the same effect as if the file had been
deleted from the client: all the backup
versions suddenly become expired.
EXclude Client option to specify files that
should be excluded from TSM Archive,
Backup, or HSM services.
Placement:
Unix: Either in the client system
options file or, more commonly, in
the file named on the INCLExcl option.
Other: In the client options file.
You cannot exclude in Restorals.
Remember that upper/lower case matters!
For backup exclusion, code as:
'EXclude.backup pattern...'
For HSM exclusion, code as:
'EXclude.spacemgmt pattern...'
To exclude from *both* backup and HSM:
'EXclude pattern...'
As to "pattern"...
/dir/* covers all files in dir and
/dir/.../* covers all files in all
subdirs of dir, so both cover
all files below dir.
Further, /dir/.../* includes /dir/*, so
only one exclude is necessary to
exclude a whole branch.
Effects: The file(s) are expired in the
next backup.
Note that with DFS you need to use four
dots (as in /dir/..../*).
Messages: ANS4119W
See also: EXCLUDE.DIR; EXCLUDE.File; etc
EXCLUDE.FS; Journal-based backups &
Excludes
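The claim that /dir/.../* subsumes /dir/* can be illustrated by translating the patterns into regular expressions. This is an illustrative approximation only; TSM's actual matcher is internal to the client:

```shell
#!/bin/sh
# Approximate TSM exclude patterns as regexes:
#   /dir/*     -> files directly under /dir
#   /dir/.../* -> files under /dir at any depth (zero or more subdirs)
matches() {  # matches <regex> <path>
    echo "$2" | grep -Eq "$1"
}
direct='^/dir/[^/]+$'
anydepth='^/dir/(.+/)?[^/]+$'
matches "$anydepth" /dir/a     && echo "/dir/a matched by /dir/.../*"
matches "$anydepth" /dir/sub/b && echo "/dir/sub/b matched by /dir/.../*"
matches "$direct"   /dir/sub/b || echo "/dir/sub/b NOT matched by /dir/*"
```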
Exclude a drive You can code your client Domain
statement to omit the drive you don't
want backed up. Note that specification
like 'EXCLUDE.Dir "C:\"' should not be
used to try to exclude the root of a
drive.
Exclude and retention (expiration) When you exclude files or directories,
it has the same effect as if the objects
were no longer on the client system: the
backup versions will be eligible for
expiration.
Exclude archive files In TSM 4.1: EXCLUDE.Archive
In earlier levels, a circumvention is to
include them to a special management
class that does not exist. You will then
get an error message and the files will
not be archived.
Exclude from Restore There is no Exclude option to exclude
file system objects during a Restore.
To try to circumvent, you might create a
dummy object of that name in the file
system and then tell the Restore not to
replace files.
Exclude ignored? See: Include-Exclude "not working"
EXCLUDE.Archive TSM 4.1+: Exclude a file or a group of
files that match the pattern from
Archiving (only). This does not
preclude the archiving of directories in
the path of the file - but in any case,
this should not be an issue, in that TSM
does not archive directories that it
knows to already be in server storage.
There is no Exclude that excludes from
both Archive and Backup.
EXCLUDE.Backup Excludes a file or a group of files from
backup services only. There is no
Exclude that excludes from both Backup
and Archive.
Effects: The file(s) are expired in the
next backup.
EXCLUDE.COMPRESSION Can be used to defeat compression for
certain files during Archive and Backup
processing.
Where used: To alleviate the problem of
server storage pool space being
mis-estimated and backups thus failing
because already-compressed files expand
during TSM client compression. So you
would thus code like:
EXCLUDE.COMPRESSION *.gz
EXCLUDE.Dir (ADSM v.3+) Specifies a directory (and files and
subdirectories) that you want to exclude
from Backup services only, thus keeping
*SM from scanning the directory for
files and subdirectories to possibly
back up. (The simpler EXCLUDE does
*not* prevent the directory from being
traversed to possibly back up
subdirectories.)
The pattern is a directory name, not a
file specification. Wildcards *are*
allowed. In Unix, specify like:
EXCLUDE.Dir /dirname or
EXCLUDE.Dir /dirnames*
In Windows, note that you cannot do like
"EXCLUDE.Dir G:" to exclude a drive:
you need to have "EXCLUDE.Dir G:\*".
Use this option when you have both the
backup-archive client and the HSM
client installed.
Do not attempt to specify like
'EXCLUDE.Dir "C:\"' to try to exclude
the root of a drive.
Effects: The directory and all files
below it are expired in the next backup.
Note that EXCLUDE.Dir takes precedence
over all other Include/Exclude
statements, regardless of relative
positions.
Note that EXCLUDE.Dir cannot be
overridden with an Include.
EXCLUDE.Dir *does not* apply if you
perform an Incremental backup whose
command line objects include the same
(sub)directory which is named on your
client EXCLUDE.Dir spec, because that
directory is being explicitly named as
something to back up, and thus will not
be viewed by TSM as a (sub)directory to
be excluded.
EXCLUDE.Dir *does not* apply if you
perform a Selective backup of a single
file under that directory; but it does
apply if the Selective employs wildcard
characters to identify files under that
directory.
Example of excluding all subdirectories
named .snapshot:
EXCLUDE.Dir /.../.snapshot
Ref: IBM site Technote 1168934
See also: Journal-based backups &
Excludes
EXCLUDE.ENCRYPT TSM 4.1 Windows option to exclude files
from encryption processing.
See also: ENCryptkey; INCLUDE.ENCRYPT
EXCLUDE.File Excludes files, but not directories,
that match the pattern from normal
backup services, but not from HSM
services.
Effects: The file(s) are expired in the
next backup.
EXCLUDE.File.Backup Excludes a file from normal backup
services.
EXCLUDE.FS (ADSM v.3+) Specifies a filespace/filesystem that
you want to exclude from Backup
services. (This option applies only to
Backup operations - not Archive or HSM.)
This option is available in the Unix
client, but not the Windows client (as
of TSM 5.2.2).
In TSM (not ADSM) the filespace may be
coded using a pattern.
Effects: The specified file system(s)
are skipped, as though they were not
specified on the command line of the
Domain option. (Note that the file
systems are *not* expired, as lesser
EXCLUDEs do.)
Note that EXCLUDE.FS takes precedence
over all other Include statements and
non-EXCLUDE.FS Exclude statements,
regardless of relative positions.
But: Does it make sense to exclude a
file system? Or should you instead not
include it in the first place, as in not
coding it in a DOMain statement or as a
dsmc command object? (Make sure that you
*do* have a DOMain statement coded in
your options file!) With client
schedules, an alternative is to use the
OBJects parameter to control the file
systems to back up.
See also: dsmc Query INCLEXCL;
dsmc SHow INCLEXCL
EXCLUDE.HSM No, there is no such thing. What you
want to do is simply EXCLUDE, which
excludes the object from both Backup and
HSM.
Exclude.Restore An ad hoc, undocumented addition you may
stumble upon in the TSM 5.2 client. It
is there only for use under the
direction of IBM Service: there is no
assurance that it will work as you
expect, or in all cases. AVOID IT.
Executing Operating System command or Message in client schedule log,
script: referring to a command being run per
either the PRESchedulecmd,
PRENschedulecmd, POSTSchedulecmd, or
POSTNschedulecmd option; or by the
DEFine SCHedule ACTion=Command spec
where OBJects="___" specifies the
command name.
Execution Mode (HSM) A mode that controls the space
management related behavior of commands
that run under the dsmmode command. The
dsmmode command provides four execution
modes:
- A data access control mode, which
controls whether a migrated file can
be accessed.
- A time stamp control mode, which
controls whether the access time for
a file is set to the current time
when the file is accessed.
- An out-of-space protection mode,
which controls whether HSM intercepts
an out-of-space condition on a file
system.
- A recall mode, which controls whether
a file is stored on your local file
system when accessed, or stored there
only while being accessed and then
migrated back to ADSM storage when it
is closed.
.EXP File name extension created by the
server for FILE devtype scratch volumes
which contain Export data.
Ref: Admin Guide, Defining and Updating
FILE Device Classes
See also: .BFS; .DBB; .DMP; FILE
EXPINterval Definition in the Server Options file.
Specifies the number of hours between
automatic inventory expiration runs,
after first running it when the server
comes up. Setting the interval to 0
sets the process to manual, and then you
must enter the 'EXPIre Inventory'
command to start the process.
Default: 24 hours
Automatic expiration can be suppressed
by starting 'dsmserv' with the
"noexpire" command line option.
You can also code "EXPINterval 0".
Ref: Installing the Server...
See also: SETOPT
EXPInterval server option, change 'SETOPT EXPINterval ___' while up, or
change dsmserv.opt file EXPINterval for
next start-up.
EXPInterval server option, query 'Query OPTion', look for "ExpInterval".
Expiration The process by which objects are deleted
from storage pools because their
expiration date or retention period has
passed. Backed up or archived objects
are marked for deletion based on the
criteria defined in the backup or
archive copy group ('Query COpygroup').
File objects are evaluated for removal
at Expiration time either by having been
marked as expired at Backup time (per
your retention policy Versions rules) or
per the retention periods specified in
the Backup Copy Group.
The expiration process has two phases:
1. Data expiration on ITSM database.
2. Data expiration on tapes. (Freeing
tapes to Scratch can seem to be
delayed as this is under way.)
The order in which expiration occurs has
been observed to be the same as types
are listed in the ANR0812I message:
backup objects, archive objects, DB
backup volumes (DRMDBBackupexpiredays),
recovery plan files (DRM).
Expiration processing also removes
restartable restore sessions that exceed
the time limit set for such sessions by
the RESTOREINTERVAL server option.
Avoid doing expirations during
incremental backups - the backups will
be degraded. As the TSM Performance
Tuning Guide says: "Expiration
processing is very CPU and I/O
intensive. If possible, it should be
run when other TSM processes are not
occurring."
Beware that as a database operation, the
expiration will require Recovery Log
space. If the expiration is massive, the
Recovery Log will fill, and so you
should have DBBackuptrigger configured.
If SELFTUNEBUFpoolsize is in effect, the
Bufpool statistics are reset before the
expiration.
Messages: ANR4391I, ANR0811I, ANR0812I,
ANR0813I
See also: DEACTIVATE_DATE; dsmc EXPire;
EXPInterval; SELFTUNEBUFpoolsize
Expiration (HSM) The retention period for HSM-migrated
files is controlled via the
MIGFILEEXPiration option in the
Client System Options file (governing
their removal from the migration area
after having been modified or deleted in
the client file system) such that the
storage pool image is obsolete. The
client system file is, of course,
permanent and does not expire.
Possible values: 0-9999 (days).
Default: 7 (days).
The value can be queried via:
'dsmc Query Option' in ADSM or 'dsmc
show options' in TSM; look for
"migFileExpiration".
Expiration, invocation Invoked automatically per Server Options
file option EXPInterval;
Invoke manually: 'EXPIre Inventory'.
Expiration, stop (cancel) 'CANcel PRocess Process_Number' will
cause the next Expire Inventory to start
over.
'CANcel EXPIration' is simpler, and will
cause the expiration to checkpoint so
that the next Expire Inventory will
resume.
You may also want to change the
EXPINterval server option to
"EXPINterval 0" to prevent further
expirations, at their assigned intervals
- though this means having to take down
the server.
See also: CANcel EXPIration
Expiration date for a Backup file Perform a SELECT on the Backups table
to get the DEACTIVATE_DATE, and then add
your prevailing backup retention
period.
Expiration date for an Archive file Perform a SELECT on the Archives table
to get the ARCHIVE_DATE, and then add
your prevailing archive retention
period.
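The arithmetic itself is simple once the date is in hand; for example, with GNU date (the DEACTIVATE_DATE and the 30-day retention value below are made up for illustration):

```shell
#!/bin/sh
# Given a DEACTIVATE_DATE from the Backups table and the applicable
# copy group retention period (e.g. RETExtra/RETOnly days), compute
# when the version becomes eligible for expiration.
deactivated='2005-01-15'
retention_days=30
expires=$(date -d "$deactivated + $retention_days days" '+%Y-%m-%d')
echo "eligible for expiration on or after $expires"
```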
Expiration happening? 'Query ACtlog BEGINDate=-999 s=expira'
should reveal ANR0812I messages
reflecting deletions.
Expiration happening outside schedule When you have an administrative schedule
performing 'EXPIre Inventory', you want
to defeat automatic expirations which
otherwise occur via the ExpInterval
server option.
Expiration messages, control "EXPQUiet" server option (q.v.).
Expiration not happening - Is your EXPINterval server option set
to a good value, or do you have an
administrative schedule doing Expire
Inventory regularly?
- Retention periods defined in the
Copy Group define how long storage
pool files will be retained: if you
have long retentions then you won't
see data expiring any time soon.
- Did the management class to which the
files were bound disappear? (You can
query a few files to check.) If so,
the default management class copy
group values pertain; or, if no such
default copy group, then the DEFine
DOMain grace period prevails.
See also: Grace period
Expiration performance Some things to consider:
- Boosting BUFPoolsize to a high value
will cut run time substantially.
- Avoid running when other database-
intensive operations are scheduled.
(The "What else is running?"
question.)
- Standard operating system
configuration issues: CPU speed,
memory size, disk and paging space
performance, contention with other
system processes, etc.
- Look for TSM db disk problems in the
operating system error log.
- Performing the expiration with
SKipdirs=No with less than TSM server
level 5.1.5.1 will result in not just
directories being skipped in
Expiration, but also the files within
those directories! This causes files to
build up in the TSM server. Reverting
to SKipdirs=Yes will gradually fix the
performance problem.
- The more versions you have of a file
in server storage, and the longer your
Backup Copy Group retention policies,
the longer Expiration will take,
because time-based policy processing
occurs during Expiration (in contrast
with versions-based processing, which
occurs at client Backup time).
Ref: IBM site Solution 1141810: "How to
determine when disk tuning is needed for
your ITSM server".
See also: Database performance
Expiration period, HSM See: Expiration (HSM); MIGFILEEXPiration
Expiration process As reported in Query Process, like:
Examined 14784 objects, deleting 14592
backup objects, 16 archive objects,
0 DB backup volumes, 0 recovery plan
files; 0 errors encountered.
Notes:
- Backup and Archive objects may be
deleted in concert: it is not the case
that expiration will go through all
Backup object first, then move on to
Archive object deletions.
Expiration processes, list 'SELECT STATUS FROM PROCESSES WHERE
PROCESS ='Expiration' '
Expiration slow (ADSMv3) APAR PQ26279 describes a major ADSM
software defect in which expiration was
overly slow in initial and later runs.
Expire files by name See: dsmc EXPire
EXPIre Inventory *SM server command to manually start
inventory expiration processing, via a
background process, to remove outdated
client Archive, Backup, and Backupset
objects from server storage pools
according to the terms specified by the
Copypool retention and versions
specifications for the management
classes to which the objects are bound.
EXPIre Inventory processes Backup files
according to having been marked as
expired at Backup time, per retention
versions rules; or by examining Inactive
files according to retention time
values. Expiration naturally removes the
storage pool object instance, as well as
the appropriate database reference.
Expiration is also employed by the
server to remove expired server state
settings such as Restartable Restore.
(The name "Expire Inventory" is
misleading, as the function performed by
the command is actually database
deletion, by virtue of deleting files
previously marked expired during Backup,
and those computed at Expire Inventory
time as having outlived the time-based
retention policy.)
EXPIre Inventory can be cancelled.
Syntax:
'EXPIre Inventory [Quiet=No|Yes]
[Wait=No|Yes]
[DUration=1-2147483648_Mins]
[SKipdirs=No|Yes]'
DUration can be defined to limit how
long the task runs. (Note: At the end of
the duration, the expiration will stop
and the point where it stopped is
recorded in the TSM database, which will
be the point from which it resumes when
the next EXPIre Inventory is run.)
SKipdirs is per APAR IY06778, due to the
revised expiration algorithm
experiencing performance degradation
while expiring archive objects. (The
problem with deleting archive
directories, is that TSM must not delete
the directory object if there are still
files dependent upon it. So, to delete
an archive directory, TSM needs to see
if ANY files referenced that directory
using another set of database calls.
This other set of database calls is
where the extra time was being spent.)
SKipdirs is thus a formalized
circumvention for a design change which
wasn't properly thought through or
tested.
The intent of SKipdirs=Yes initially was
to allow EXPIre Inventory to bypass all
the directories created by Archive. This
was a circumvention until the CLEANUP
ARCHDIR utilities could be run to clear
out these objects. However, until the
fix in TSM server level 5.1.5.1,
SKipdirs=Yes can also prevent Backup
directories and the files under them
from being deleted, resulting in ever
longer EXPIre Inventory executions and
database bloat. SKipdirs=Yes should
*not* be used perpetually.
Note that there is no capability for
expiring data for only one node, or
filespace within a node.
Note that API-based clients, such as the
TDPs, require their own, separate
expiration handling (actually,
deletion). Likewise, HSM handles
expiration of its own files
separately: see MIGFILEEXPiration.
How long it takes: The time is
proportional to the amount of data ready
to be expired. (It is not the case that
it plows through the entire *SM database
at each invocation, seeking things ready
to be expired.)
Expire inventory works through the nodes
in the order they were registered.
This is a disruptive operation which
can cause *SM processing to slow to a
crawl, so run it off-hours, where it
will not conflict with other work.
Reclamation should be disabled during
the Expiration ('UPDate STGpool PoolName
REClaim=100') so that it doesn't get
kicked off prematurely and waste
resources in copying data that will be
expired as expiration proceeds.
WARNING: Expiration quickly consumes
space in the Recovery Log, and can
exhaust it if the amount of data
expiration is great. The DUration
operand is there to help keep this from
happening.
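The off-hours and DUration advice above
can be sketched as a nightly
administrative schedule (the schedule
name and times here are illustrative,
not prescriptive):
'DEFine SCHedule EXPIRE_NIGHTLY
Type=Administrative
CMD="EXPIre Inventory DUration=120
Quiet=Yes" ACTive=Yes STARTTime=03:00
PERiod=1 PERUnits=Days'
Expiration then stops after two hours
and resumes from its recorded stopping
point when the next run begins.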
Msgs: See: EXPIre Inventory, results
See also: CANcel EXPIration;
dsmc EXPire; Expiration, stop;
Expiring.Objects; Restartable Restore;
Server Options file option EXPInterval
EXPIre Inventory, placement EXPIre Inventory is best kicked off at
the end of a daily (e.g., morning)
administration job so that it will
reduce tape occupancy levels so that
following Reclamation work can run
efficiently thereafter.
EXPIre Inventory, results Messages:
ANR0812I reports the number of
objects removed upon normal conclusion,
but: An historic shortcoming is lack of
reporting of the number of bytes
involved. You can compensate for this by
doing 'AUDit LICenses' and
'Select * From Auditocc' before and
after the 'EXPIre Inventory'.
ANR0813I for abnormal conclusion.
ANR0987I marks process completion.
ANR4391I to record each filespace being
processed when started in non-quiet
mode; but no report on the number of
files expired for the node.
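The byte-count compensation described
under ANR0812I can be sketched as
follows (Auditocc is the standard
server table, updated by AUDit
LICenses):
'AUDit LICenses'
'Select NODE_NAME, TOTAL_MB From
Auditocc'
'EXPIre Inventory Wait=Yes'
'AUDit LICenses'
'Select NODE_NAME, TOTAL_MB From
Auditocc'
The per-node difference in TOTAL_MB
between the two Selects approximates
the space released by the expiration.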
Expire processing order It looks like Expire processing occurs
in the order that you add your client
nodes to the *SM server.
Expiring--> Leads the line of output from a Backup
operation, as when Backup finds that a
file has been removed from the file
system since the last Backup. The file
will be rendered Inactive in server
storage. The previously Active copy in
server storage is "deactivated". Note
that no server storage space is freed
until Expire Inventory processing
occurs.
See also: Updating-->; Normal File-->;
Rebinding-->
Expiring file HSM: A migrated or premigrated file that
has been marked for expiration and
removal from *SM storage. If a stub file
or an original copy of a premigrated
file is deleted from a local file
system, or if the original copy of a
premigrated file is updated, the
corresponding migrated or premigrated
file is marked for expiration the next
time reconciliation is run. It expires
and is removed from *SM storage after
the number of days specified with the
MIGFILEEXPiration option have elapsed.
See: MIGFILEEXPiration
Expiring.Objects An internal server table to record what
is available for expiration at any given
point in time. It's maintained
"on-the-fly" as new objects come into
the system and the existing objects get
moved to Inactive or available for
expiration. The records contain the
pertinent information for the server to
complete the deletion. So, instead of
walking the inventory tables at EXPIre
Inventory time and performing lengthy
calculations then as to what objects can
go, that workload is distributed over
time. On larger systems, it greatly
speeds up the process of figuring out
what can be deleted and what can't.
Fluctuations in expire time are due to
external events, such as a filesystem
that had purged a lot of files,
retention policies changed, etc.
Export *SM server meta command encompassing a
family of object exports which allow
parts of the server to be written to
removable media (tape) so that the
data can be transferred to another
server - even one of a different
architecture (supposedly).
The produced tape(s) will end up in the
LIBVolumes list with a Last Use type of
"Export". Note that Export will write
out Backup files first, before other
types, and exports first from things
directly resident in its database
(directories, empty files, etc). Export
apparently uses *SM database space for
scratch pad use, as database usage will
increase when only Export is running.
One cute thing you can do for an
abandoned filespace is to Export it to a
file, archive the file, and delete the
filespace such that the data is
preserved but all the database space
reflecting the individual files is
reclaimed.
Export is sometimes advocated for
getting long-term storage data out of
the TSM server, to reduce overhead. This
is effective, but lost are all the
advantages of TSM database inventory
tracking of the data, where it is then
up to you to somehow keep track of what
you wrote to what export tape and how to
get it back.
Export obviously requires sufficient
output volumes, and that all the input
volumes have a Status which allows them
to be used - and their contents must be
viable and readable. (Do 'Query Volume
ACCess=UNAVailable,DESTroyed' before the
export, and deal with any anomalies.)
Keep in mind also that an Export is
long-running, and during all that time,
anything may happen to its input and/or
output volumes. Marginal tape drives can
also screw up a long-running Export,
where tapes are fine: if one is in
evidence in your Activity Log or OS
error log, do a Vary Off to keep it from
participating in the Export.
Results appear in the Activity Log...
Message ANR0617I will summarize how well
the export went: SUCCESS or INCOMPLETE.
Watch for message ANR0627I saying that
files were skipped, as can happen when
input tapes suffer I/O errors. (Export
will nicely go on to completion, getting
as much data as it can.)
To export from one *SM server's storage
pools to another, use the ADSMv3+
Virtual volumes facility (see the Admin
Guide).
Note: Your success in exporting from one
server to another is probabilistic, as
the vendor would do little testing in
this area. Exporting across platforms is
dicey at best. (Be particularly cautious
with EBCDIC vs. ASCII platforms.) You
will probably have the best chance when
the receiving server is at the same
level or higher compared to the
exporting server.
Ref: Admin Guide, Managing Server
Operations, Moving the Tivoli Storage
Manager Server
See also: dsmserv RESTORE DB; IMport
EXPORT In 'Query VOLHistory', Volume Type to
say that volume was used to record
data for export.
Also under 'Volume Type' in
/var/adsmserv/volumehistory.backup .
EXPort Node TSM server command to export client
node definitions to serial media (tape).
Syntax:
'EXPort Node [NodeName(s)]
[FILESpace=FileSpaceName(s)]
[DOMains=DomainName(s)]
[FILEData=None|All|ARchive|
Backup|BACKUPActive|
ALLActive|SPacemanaged]
[Preview=No|Yes]
[DEVclass=DevclassName]
[Scratch=Yes|No]
[VOLumenames=VolName(s)]
[USEDVolumelist=file_name]'
Note that exporting to a device type of
SERVER allows exporting the data to
another ADSM server, via virtual volumes
(electronic vaulting).
Hint: Using Preview=Yes is a handy way
of determining the amount of data owned
by a node.
Consider doing a LOCK Node first!
Export via FTP rather than tape Keep in mind that you can export to a
devclass of type FILE, and then FTP the
resultant file to the other system for
Importation.
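A minimal sketch of that approach (the
devclass name, directory, node name,
and list file are illustrative):
'DEFine DEVclass EXPFILE DEVType=FILE
DIRectory=/export MAXCAPacity=4G'
'EXPort Node NODEA FILEData=All
DEVclass=EXPFILE
USEDVolumelist=/tmp/expvols'
Then FTP the files named in
/tmp/expvols, in binary mode, to the
other system, and name them in the
VOLumenames parameter of the Import
performed there.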
Export-Import across libraries In some cases, customers want to perform
an Export-Import from one library to
another of the same type, usually at
different sites, to rebuild the TSM
server at the other site. The TSM
manuals have been without information on
how to approach this...
- Do 'LOCK Node' on all involved client
nodes to prevent inadvertent changes
to the data you intend to export, and
nullify all administrative schedules
which could interfere with the
long-running Export.
- Perform an Export of all data.
Carefully check the results of the
operation to assure that all the data
successfully made it to tape.
(The volumes will show up in
VOLHistory as Volume Type "EXPORT".)
- Perform a CHECKOut LIBVolume to eject
the volumes.
- Transport the tapes to the new site.
- Flick the read/write tab on the tapes
to read-only before inserting into the
new library, as you'll want to assure
that this vital data is not
obliterated until you're sure that the
new TSM system is complete and stable.
- Insert the tapes into the new library.
- Perform a CHECKIn LIBVolume with a
STATus=PRIvate.
- Perform Import. Check that the amount
of data imported matches that in the
Export.
- At some later time, perform a CHECKOut
LIBVolume of the read-only volumes and
change their tab to read-write to
enable their re-use, then perform a
CHECKIn LIBVolume as STATus=SCRatch.
Leave the old TSM system and library
intact until the new TSM system is
complete: it is not unknown for there to
be problems with Export-Import.
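In command terms, the sequence above
might look like this for a single node
and a single export volume (node,
library, devclass, and volume names are
illustrative):
'LOCK Node NODEA'
'EXPort Node NODEA FILEData=All
DEVclass=3590CLASS Scratch=Yes'
'CHECKOut LIBVolume OLDLIB VOL001
REMove=Yes'
(transport the tape; set its tab to
read-only)
'CHECKIn LIBVolume NEWLIB VOL001
STATus=PRIvate'
'IMport Node NODEA FILEData=All
DEVclass=3590CLASS VOLumenames=VOL001'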
Export-Import across servers You may get stuck with a situation where
you have an old server and a new server
and no common tape hardware nor means of
disconnecting tape drives from one
system to attach to the other, in
performing a traditional Export-Import.
In that case, if you're running Unix, a
"trick" you might try is to do the
export over the network, using File
devices which are in reality FIFO
special files: on the sending system,
the FIFO is read by an 'r**' command
that sends the data over the network to
a program on the receiving system,
which feeds the FIFO that Import is
reading there.
On the sending and receiving systems do:
mkfifo fifo
On the sending system do:
cat fifo | rsh othersys 'cat > fifo'
And then have the sending *SM system do
an Export Node to a File type device and
a VOlumename being the file name of
fifo, and have the receiving TSM system
do an Import from a File type device
where VOlumename is fifo on that
system.
(Note: This is an unproven concept, but
should work.)
Export-Import Node A method of copying a node from one ADSM
server to another, retaining the same
Domain and Node names. (If the node
imports with a Domain name which is odd
to
your ADSM server, you can thereafter do
an 'UPDate Node' to reassign the node to
a more suitable Domain in your server.)
Note that this migrates the Filespace
data, but the file system stays where it
is; and so Export-Import is
inappropriate for when you want to
transfer an HSM file system from one
ADSM server host to another (use
cross-node restore instead).
EXPQUiet Server option to control the verbosity
of expiration messages: No (default)
allows verbosity; Yes minimizes output.
ext3 file system support The TSM 5.1.5 client for Linux provides
(Linux client) support for ext3 file systems.
Prior to that, one could effect backups
via dsmc by defining the file systems of
interest as VIRTUALMountpoint's:
subsequent restoral can be performed via
either dsmc or dsm. The filespace will
be recorded as type EXT2 on the server.
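A hedged example of that pre-5.1.5
circumvention, placed in the server
stanza of dsm.sys (the path is
illustrative):
VIRTUALMountpoint /data
A 'dsmc incremental /data' then backs
up that tree as its own filespace.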
EXTend DB *SM server command to extend the
database "assigned space" to use
more of the "available space".
Causes a process to be created which
physically formats the additional space
(because it takes so long). 'Query DB'
will immediately show the space being
available, though the formatting has not
completed. Syntax:
'EXTend DB N_Megabytes'
Note that doing this may automatically
trigger a (full) database backup, with
message ANR4552I, depending upon your
DBBackuptrigger values.
EXTend LOG TSM server command to extend the
Recovery Log "assigned space" to use
more of the "available space".
Causes a process to be created which
physically formats the additional space
(because it takes so long). 'Query LOG'
will immediately show the space being
available, though the formatting has not
completed. Syntax:
'EXTend LOG N_Megabytes'
Results in ANR0307I formatting progress
messages to appear in the Activity Log.
Caution: In some cases, customers have
found that with Logmode Rollforward, the
next db backup after the extension fails
to clear the Recovery Log. Restarting
the server is the only known way to
clear that situation.
See also: dsmserv EXTEND LOG
EXTernal Operand of 'DEFine LIBRary' server
command, to specify that a mountable
media repository is managed by an
external media management system.
External Library A collection of drives managed by a
media management system that is not
ADSM, as for example some mainframe tape
management system. (A 3494 that is used
directly by *SM is *not* an External
Library.)
EZADSM Early name for the ADSM Utilities.
Name obsoleted in ADSM 2.1.0.
Failed Status in Query EVent output indicating
that the scheduled event did occur but
the client reports a failure in
executing the operation, and successive
retries have not succeeded.
See also: Missed;
Total number of objects failed
FAS Fabric-Attached Storage, as employed in
NetApp brand network attached storage
product.
FaStT 600 Became DS4300.
FC Fibre Channel. Current 3590 drives can
be attached to hosts via Fibre Channel
or SCSI.
FCA Fibre Channel Adapter card.
fcs0 See: Emulex LP8000 Fibre Channel Adapter
FDR/UPSTREAM Backup/restore product from Innovation
Data Processing, which they say is a
comprehensive, powerful, high
performance storage management solution
for backup of most of the open systems
LAN/UNIX platforms and S/390 Linux data
to OS/390 or z/OS mainframe backup
server. UPSTREAM will provide automated
operations with fast, reliable and
verifiable backups/restores/archival and
file transfers that can be automatically
initiated and controlled from either
client or the mainframe backup
server. UPSTREAM provides unique data
reduction techniques including online
database agents offering maximum safety
with superior disaster recovery
protection. Supports Windows and AIX.
(The vendor's website is poor.)
FFFA volume category code, 3494 Reflects a tape which was manually
removed from the 3494, by opening the
door and removing the tape from a cell,
instead of otherwise ejecting it. To
remove the Library Manager entry for the
volume, to allow the cell to be reused,
change the Category Code to FFFB.
See: Volume Categories
Fibre Channel adapter, mixing disk IBM's official statement concerning the
and tape on same one FC HBA sharing of tape and disk on a
single adapter, as of 2003/05:
"...Using a single Fibre Channel host
bus adapter (HBA) on a host server for
concurrent tape and disk operations is
generally not recommended. In high
performance, high stress situations with
dissimilar I/O devices, stability
problems can arise. IBM is focused on
assuring configuration interoperability.
In so doing, IBM tests single HBA
configurations to determine
interoperability. Certain customer
environments using AIX with the IBM FC
Switch (2109) connecting both ESS (2105)
and Magstar 3590 Tape have demonstrated
acceptable interoperability. For
customers that are considering sharing a
single HBA with concurrent disk and tape
operations, it is strongly recommended
that the sales team conduct a Pre-Sales
Solutions Assurance Review with members
of the Techline or ATS team to review
the issues and concerns. IBM and IBM's
partners will continue evaluating other
configurations and make specific
statements regarding interoperability as
available." - Technote 1194590
Ref: IBM Ultrium Device Drivers
Installation and User's Guide, as one
place.
Synopsis: You risk a hang or data
corruption, not that it certainly won't
work.
See also: HBA
FibreChannel and number of tape drives A rule of thumb is that there should not
be more than three tape drives per
FibreChannel path.
FICON IBM term, used with S/390, for Fiber
Connection of devices. A follow-on to
ESCON. Ref: redbook "Introduction to
IBM S/390 FICON" (SG24-5176)
FID messages (3590) Failure ID message numbers, which appear
on the 3590 drive panel.
FID 1 These messages indicate device
errors that require operator and service
representative, or service
representative only action. The problem
is acute. The device cannot perform any
tasks.
FID 2 These messages report a degraded
device condition. The problem is
serious. The customer can schedule a
service call.
FID 3 These messages report a degraded
device condition. The problem is
moderate. The customer can schedule a
service call.
FID 4 These messages report a service
circuitry failure. The device requires
service, but normal drive function is
not affected. The customer can schedule
a service call.
Ref: 3590 Operator Guide (GA32-0330-06)
Appendix B especially.
Fiducials White, light-reflective rectangles
attached to the corners of tape drives
and cell racks in a 3494 tape robot for
the infrared sensor on the robot head to
determine exactly where such elements
are, when in Teach mode.
Ref: "IBM 3590 High Performance Tape
Subsystem User's Guide" (GA32-0330-0)
FILE In DEFine DEVclass, is a DEVType which
refers to a disk file in a file system
of the *SM server computer, which is
regarded as a form of sequential access
media - which implicitly means singular
access, which is to say that a FILE is
dedicated to a single active Session,
where no other Sessions can use the FILE
volume - including multi-session
processes. (This is in contrast to the
DISK device class, which is random
access, and can be simultaneously used
by multiple Sessions.) Naturally, there
is no library or drive defined for FILE.
FILE type volumes may be either Scratch
or Defined type. For Scratch type, when
the server needs to allocate a scratch
"volume" (file), it creates a new file
in the directory specified in the
DEFine. For scratch volumes used to
store client data, the file created by
the server has a file name extension of
.BFS. For scratch volumes used to store
export data, a file name extension of
.EXP is used. For example, suppose you
define a device class with a DIRECTORY
of /ADSMSTOR and the server needs a
scratch volume in this device class to
store export data, the file which the
server creates might then be named
/ADSMSTOR/00566497.EXP . When empty,
Scratch type FILE volume size is
controlled by the Devclass MAXCAPacity
value: when a volume is filled, another
is created and used. The number of such
volumes is limited by the Stgpool
MAXSCRatch value: if inadequate, you
will ultimately encounter "out of space"
stgpool error messages.
Scratch type FILE volumes are deleted
from the file system, giving back the
space they occupied.
Instead of Scratch, you may do DEFine
Volume to pre-assign volumes in the FILE
pool, in conjunction with setting
MAXSCRatch=0. This allows you to attain
predictable results, as in spreading I/O
load over multiple OS disks.
Properties:
- FILE type devices are sequential
media, and are treated in many
respects like tape.
- No prep (labeling, formatting) is
required.
- They require mountpoints, are mounted
and dismounted, etc.
- Volume name must be unique, as it is
a file system file name.
- MOUNTLimit may be used to limit the
number of simultaneous volumes in use
in the pool, and thus limit processes:
when limit reached, new processes wait
for FILEs. MOUNTLimit=DRIVES is not
valid in that there are no "drives".
- There should be no actual manual
intervention required in their use.
FILE devs may be used for a variety of
purposes, including electronic vaulting.
Ref: Admin Guide table "Comparing Random
Access and Sequential Access Disk
Devices"
See also: .BFS; .DBB; DISK; .DMP; .EXP;
SERVER; Sequential devices; Storage pool
space and transactions
See also IBM site Technote 1141492
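A minimal FILE setup sketch (the
devclass and pool names, directory, and
sizes are illustrative):
'DEFine DEVclass FILECLASS DEVType=FILE
DIRectory=/tsmfile MAXCAPacity=2G
MOUNTLimit=8'
'DEFine STGpool FILEPOOL FILECLASS
MAXSCRatch=50'
Scratch .BFS volumes of up to 2 GB each
are then created in /tsmfile as needed,
up to 50 of them, with at most 8 in use
at once.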
FILE devclass performance As a sequential pseudo device, FILE
benefits from several real and
conceptual performance advantages, over
DISK (random access) class:
- There is only the need to keep track
of where files start within the FILE
area, rather than map blocks as in
DISK class.
- Access is linear, without TSM having
to hop around seeking the next piece
of the series.
- Access is dedicated rather than
shared, eliminating contention.
However, there are inconvenient
realities in this pretense:
- The FILE area is built upon a file
system's disk blocks - which can be
expected to be scattered about on the
disk.
- The disk will often be shared, and
so there is real contention involved.
FILE is tape emulation: there are
certain TSM functionality advantages,
but don't fool yourself into believing
that FILE is truly sequential.
File, delete from filespace See: File Space, delete selected files
File, expirable? See: SHow Versions
File, find on a set of volumes SELECT VOLUME_NAME FROM CONTENTS WHERE -
NODE_NAME='UPPER_CASE_NAME' AND -
FILESPACE_NAME='{fsname}' AND -
FILE_NAME='{path.without.fsname}'
File, find when only filename known There may be times when you know the
name of a file, but not what directory
(or perhaps even filespace) it is in.
In the TSM server you can do:
SELECT * FROM BACKUPS WHERE
[FILESPACE_NAME="FSname" AND]
LL_NAME="TheFileName"
(Remember that for client systems where
filenames are case-insensitive, such as
Windows, TSM stores them as UPPER CASE,
so search for them the same way.)
File, in storage pool When TSM stores files in storage pools,
if the current storage pool sequential
volume fills as the file is being
written, the remainder of the file will
be stored on another volume: the file
will span volumes. (If the file is
within an Aggregate, the Aggregate
necessarily spans volumes as well.)
A file cannot span Aggregates.
If the file size meets or exceeds
Aggregate size, the file is not
Aggregated.
See: Aggregated?; Segment Number
File, management class bound to The management class to which any given
file is bound can be most readily be
checked via 'dsmc q backup ...' or a GUI
restore looksee on the client, or via a
more consumptive Select performed on the
server Backups table.
File, selectively delete from *SM There is no supported way currently to
storage - standard method dispose of an individual file from
server storage via a server operation:
but you may accomplish it from the
client side, by one of the following
methods:
1. The crude approach: Create an empty,
dummy file of the same name, back up
the empty surrogate as many times as
your retention generations value, to
assure that all copies of the
original are gone. (The backup of an
empty file does not require storage
pool space or a tape mount: it is the
trivial case where all the info about
the empty file can be stored entirely
in the database entry.)
2. Use a special management class with
null retention values...
- On the server, define a special
management class with VERDeleted=0
and RETOnly=0;
- On the client, code an Include to
tie the specific file to that
special management class;
- On the client, create a dummy file
in the same place in the file
system that the bogey file existed;
- Perform a Selective Backup on that
file name.
*SM will then expire the "old"
version of the file, and the low
retention will cause Expiration to
delete it the next day.
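A sketch of the special management
class method (the STANDARD domain/set
and the class name DELNOW are
illustrative; adjust to your policy
structure):
On the server:
'DEFine MGmtclass STANDARD STANDARD
DELNOW'
'DEFine COpygroup STANDARD STANDARD
DELNOW Type=Backup VERExists=1
VERDeleted=0 RETExtra=0 RETOnly=0'
'ACTivate POlicyset STANDARD STANDARD'
On the client, in the options:
INCLUDE /home/user/unwanted.file DELNOW
then run 'dsmc selective' against that
(dummy) file.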
File, selectively delete from *SM Unsupported and possibly *dangerous*:
storage - unsupported method First up you need to find out the object
id(s) for the object(s) that you want to
delete. You can find this out from the
backup or archive tables using SELECT.
Then the DELETE OBJECT command is used.
However: the OBJECT_ID field from the
backup and archive tables is a single
number. The object ID required by DELETE
OBJECT takes 2 numbers as parameters, an
OBJECT_ID HIGH and an OBJECT_ID LOW. The
HIGH value has been seen to always be
zero. So, if you want to delete object
193521018 for example, the command would
be: DELETE OBJECT 0 193521018.
Note that this command is a *SM
construct, as opposed to the pure SQL
Delete statement.
Further warning: This command does
exactly and only what it says: it
deletes an object - regardless of
context. It does not update all the
necessary tables to fully remove an
object from the TSM server. If you use
this command, you risk creating a
database inconsistency and thus future
problems. Indeed, customers who have
used DELETE OBJECT report that a
subsequent AUDITDB found inconsistencies
for that OBJECT_ID.
See also: File Space, delete selected
files
File, split over two volumes? Do SELECT FILE_NAME FROM CONTENTS WHERE
volume_name='______' AND SEGMENT<>'1/1'
to find the name of the file spread over
two volumes. Then do:
SELECT VOLUME_NAME FROM CONTENTS WHERE
FILE_NAME='see.above' AND SEGMENT='2/2'
to find the other volume.
File, what volume is it on? The painful way, depending upon your
file population:
SELECT VOLUME_NAME FROM CONTENTS -
WHERE FILE_NAME='_______'
Or: Restore or retrieve the file to a
temp area, and see what tape was
mounted.
Or: Mark the storage pool Unavailable
for a moment, attempt a restoral or
retrieval, unmark, and look in the
server Activity Log for what volume it
could not get.
See also: Restoral preview
File(s), always back up during an Accomplish this by creating a parallel
incremental backup Management Class definition pointing to
a parallel Backup Copy Group definition
which contains "MODE=ABSolute", and then
have an Include statement for that file
refer to the parallel Management Class.
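For example (the class name ALWAYSBK
and the file name are illustrative):
'DEFine MGmtclass STANDARD STANDARD
ALWAYSBK'
'DEFine COpygroup STANDARD STANDARD
ALWAYSBK Type=Backup MODE=ABSolute'
'ACTivate POlicyset STANDARD STANDARD'
and, in the client options:
INCLUDE /etc/critical.conf ALWAYSBK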
File age For migration prioritization purposes,
the number of days since a file was last
accessed.
File aggregation See Aggregates
File attributes, in TSM storage File attributes are not available at
the server via SQL Select queries: the
attribute information is only available
via the same kind of client you used to
back up the file, and then only in the
GUI client. That is, if you used the
Windows client to back up a file, only
the Windows client GUI can get the file
attributes.
While the server certainly does store
the attributes given to it by the
client, the TSM server does not provide
the server administrator with that view
of the database. Nor is there any way to
get them in their "raw" (uninterpreted)
format. This is partly because such data
is something only the client admin need
be concerned about, and partly because
the way the attributes are stored is
platform-specific such that extra server
programming would be needed to properly
interpret the attributes in the context
of the client architecture.
ODBC issues Select requests, so its
view of the server DB is likewise
limited (and slow).
See also: dsmc Query Backup
File in use during backup or archive Have the CHAngingretries (q.v.) Client
System Options file (dsm.sys) option
specify how many retries you want.
Default: 4.
File name (location) of database, Are defined within file:
recovery log /usr/lpp/adsmserv/bin/dsmserv.dsk
(See "dsmserv.dsk".)
File name length, maximum supported In pursuing the contents of a file
system, TSM accepts files as they are,
without reservation. As the (Unix)
client manual says: "As long as the file
system allows creation of the file, the
Tivoli Storage Manager client will back
up or archive the file."
However: TSM does limit what it will
accept for the coding of file names in
option files and on command lines. For
example, the TSM 5.2 Unix client manual
says of such explicit specs:
"The maximum number of characters for a
file name is 256. The maximum combined
length of the file name and path name is
1024 characters."
File name uniqueness An elemental concept in *SM relates to
its database orientation: each file is
unique by nodename, filespace, and
filename. Together, the nodename,
filespace name, and filename constitute
the database key for managing the file.
File names as stored in server Client operating system file names are
stored in the server according to the
conventions of the operating system and
file system.
Unix file names are case-sensitive, and
so they are stored as-is.
Windows, following the MS-DOS
convention, has file names which are
case-insensitive, and so TSM follows the
convention of that environment by
storing them in upper case.
File server A dedicated computer and its peripheral
storage devices that are connected to a
local area network that stores both
programs and files that are shared by
users on the network.
File size For migration prioritization purposes,
the size of a file in 1-KB blocks.
Revealed in server 'Query CONtent
VolName F=D".
TSM records the size of a file as it
goes to a storage pool. If the client
compresses the file, TSM records the
compressed size in its database. If the
drive compresses the file, TSM is
unaware of the compression.
See also: FILE_SIZE; File attributes
File size, maximum, for storage pool See "MAXSize" operand of DEFine STGpool.
File size, maximum supported There was a historic limitation in the
ADSM server and client that the maximum
file size for backup and archive could
not exceed 2 GB. That restriction was
lifted in the server around 8/96; and
in the client PTF 6, for platforms
AIX 4.2, Novell NetWare, Digital UNIX,
and Windows NT.
As the (Unix) client manual says:
"As long as the file system allows
creation of the file, the Tivoli Storage
Manager client will back up or archive
the file."
Ref: Client manual, "Maximum file size
for operations"
See also: Volume, maximum size
File Space (Filespace) A logical space on the *SM server that
contains a group of files that were
stored as a logical unit, as in backup
files, archived files. A file space
typically consists of the files backed
up or archived for a given Unix file
system, or a directory apportionment
thereof defined via the Unix
VIRTUALMountpoint option. In Windows,
the file system defined by volume name
or UNC name.
File Spaces are the middle part of the
unique *SM name associated with file
system objects, where node name is the
higher portion and the remainder of the
path name is the lower portion.
By default, clients can delete archive
file spaces, but not backup file spaces,
per server REGister Node definitions.
CAUTION: The filespace name you see in
character form in the server may not
accurately reflect reality, in that the
clients may well employ different code
pages (Windows: Unicode) than the
server. The hexadecimal representation
of the name in Query FIlespace is your
ultimate reference.
File Space, backup versions 'SHOW Versions NodeName FileSpace'
File Space, delete in server 'DELete FIlespace NodeName
FilespaceName [Type=ANY|Backup|
Archive|SPacemanaged]
OWNer=OwnerName'
Note that "Type=ANY" removes only Backup
and Archive copies, not HSM file copies.
File Space, delete from client From client, dsmc Delete Filespace is a
gross, overall operation which deletes
all aspects of the filespace (providing
that the node's ARCHDELete and
BACKDELete specifications allow it).
Doing DELete FIlespace from the server
allows greater selectivity as to the
type of data to be deleted.
File Space, delete selected files TSM does not provide a means for
customers to outright delete specific
files from filespaces, as you might want
to do if last night's backup sent
virus-infected files to the server. TSM
is a strict, policy-based data assurance
facility for an enterprise, where the
server administrator is provided no
means for monkeying with individual
files...which belong to the clients, who
should be guaranteed that their data
lives according to the agreed rules.
An accommodation which the developers
added to the TSM5 client is the Expire
command, to inactivate a filespace file,
and thus get it moving toward oblivion
within the prevailing retention policy
scheme.
Another thing you can do from the client
is force individual filenames to be
pushed out of the filespace via special
policy specifications: Add an Include
statement for these files in your client
options, specifying a special management
class with a COpygroup retention period
of 0 (zero) days, and then run a special
backup.
See also: DELETE OBJECT; File,
selectively delete from *SM storage
File Space, explicit specification Use braces to enclose and thus isolate
the file space portion of a path, as in:
'dsmc query archive -SUbdir=Yes
"{/a/b}/c/*"'
This will explicitly identify the file
space name to TSM, keeping it from
guessing wrong in cases where the file
system portion of the path is not
resident on the system where the command
is invoked, you lack access to it, or
the like.
(TSM assumes that the filespace is the
one with longest name which matches the
beginning of the filespec. So if you
have two filespaces "/a" and "/a/b", you
need to specify "{/a}/somefile" to
distinguish.)
Ref: (Unix) Backup/Archive client
manual: Understanding How TSM Stores
Files in File Spaces
File Space, move to another node The 'REName FIlespace' cannot do this.
within same server (The product does not provide an easy
means for reattributing file spaces to
other nodes - largely, I think, because
it would be too easy for naive customers
to get into trouble in assigning a file
space to an operating system which did
not support the kind of file system
represented in the file space.)
You can perform it via the following
(time-consuming) technique, which
temporarily renames the sending node to
the receiving node:
Assume nodes A & B, and you want to
move filespace F1 from A to B...
1. REName Node B B_temp
2. REName Node A B
3. EXPort Node B FILESpace=f1
FILEData=All DEVType=3590 VOL=123456
(wait for the export to complete)
4. REName Node B A
5. REName Node B_temp B
6. IMport Node B Replacedefs=No
DEVType=3590 VOLumenames=123456
Alternately, you could do the converse:
temporarily rename the receiving node to
the exported file space node name for
the purposes of receiving the import.
File Space, number of files in The Query FIlespace server command does
not reveal the number; and Query
OCCupancy counts only the number of file
space objects which are stored in
storage pools.
File Space, on what volumes? Unfortunately, there is no command such
that you can specify a file space and
ask ADSM to show you what volumes its
files reside upon. You have to do
'Query CONtent VolName' on each volume
in turn and look for files, which is
tedious.
File Space, remove In performing filespace housekeeping,
it's wise to do a Rename Filespace
rather than an immediate Delete: hang on
to the renamed oldie for at least a few
days, and only after no panic calls, do
DELete FIlespace on that renamee.
Alternately, you could Export the
filespace and reclaim that tape after a
prudent period; but that takes time, and
the panicked user would have to await an
equally prolonged Import before their
data could be had.
If you don't exercise prudence in this
fashion, recovering a filespace would
involve a highly disruptive, prolonged
TSM db restoral to a prior time, Export,
then restoral back to current time
followed by an import. No one wants to
face a task like that.
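The rename-then-wait housekeeping described above amounts to the following (administrator credentials, node, and filespace names are illustrative):

```
dsmadmc -id=admin -password=xxx \
  "REName FIlespace mynode /export/home /export/home.RETIRED"
  (wait a few weeks; then, if no one has come looking for it:)
dsmadmc -id=admin -password=xxx \
  "DELete FIlespace mynode /export/home.RETIRED"
```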
File Space, rename 'REName FIlespace NodeName FSname
Newname'
A step to be performed when an
HSM-managed file system is renamed.
File Space, timestamp when Backup file 'SHow Versions NodeName FileSpace'
written to
File Space locking TSM will lock a filespace as it performs
some operations, which can result in
conflicts. See IBM site TechNote
1110026.
File Space name Remember that it is case-sensitive.
For ADSM V3 Windows clients after
3.1.0.5, the filespace name is based on
the Windows UNC name for each drive,
rather than on the drive label. So if
somebody changed the Windows NT
networking ID, that would change the UNC
name, and force a full backup again.
Per the API manual Interoperability
chapter: Intel platforms automatically
place filespace names in uppercase
letters when you register or refer to them.
However, this is not true for the
remainder of the object name
specification.
File Space name, list 'Query CONtent VolName'
File Space name *_OLD A filespace name like "\\acadnt1\c$_OLD"
is an indication of having a Unicode
enabled client where the node definition
allows "Auto Filespace Rename = Yes":
TSM can't change filespaces on the fly
to Unicode so it renames the non-unicode
filespaces to ..._old, creates new
Unicode filespaces, and then does a
"full" backup for the filespaces. When
your retention policies permit, you can
safely delete the old filespaces.
See AUTOFsrename in the Macintosh and
Windows B/A clients manuals.
File Space number See: FSID
File Space reporting From client: 'dsmc q b -SUbdir=Yes
-INActive {filespacename}:/dir/* >
filelist.output'
File Space restoral, preview tapes Old way:
needed 'SHow VOLUMEUSAGE NodeName' to get the
tapes used by a node, then run
'Query CONtent VolName NODE=NodeName
FIlespace=FileSpaceName' on each volume
in turn.
ADSMv3: SELECT VOLUME_NAME FROM -
VOLUMEUSAGE WHERE -
NODE_NAME='UPPER_CASE_NAME' -
AND FILESPACE_NAME='____' AND -
COPY_TYPE='BACKUP' AND -
STGPOOL_NAME='<YourBkupStgpoolName>'
File Spaces, abandoned Clients may rename file systems and disk
volumes, thus giving the backed-up
filespaces new identities and leaving
behind the old filespaces for the TSM
system administrator to deal with. To
TSM, there is no difference between a
file system which hasn't been backed up
for five years and one which has not
been backed up for five hours: the data
belongs to the client, and the TSM
server's role is to simply do the
client's bidding. This is where system
administration is needed... The standard
treatment is to periodically look for
abandoned filespaces (look at last
client access time in Query Node, and
Query FIlespace last backup date),
notify the clients, and delete them if
the client says to or no response within
a reasonable time. Watch out for
filespaces which are just used for
archiving, such that backups are not
reflected.
See "Export" for a technique to preserve
abandoned filespaces but eliminate their
burden on the server db.
File Spaces, report backups Not so easy: the information is in the
database, though getting it is tedious.
The Actlog table can be mined for ANE*
messages reflecting backups (including
transfer rates), and with that timestamp
you can go at the Backups table to
determine the filespace name, and from
the filenames gotten there you could
brave the Contents table to get sizes
(whose FILE_SIZE column records the
aggregate size or the file size,
whichever is larger).
File Spaces, summarize usage 'SELECT n.node_name,n.platform_name, -
COUNT(*) AS "# Filespaces", -
SUM(f.capacity) AS "MB Capacity" -
FROM nodes n,filespaces f -
WHERE f.node_name=n.node_name -
GROUP BY n.node_name,n.platform_name -
ORDER BY 2,1'
File spaces not backed up in 5 days SELECT FILESPACE_NAME AS "Filespace", \
NODE_NAME AS "Node Name", \
DAYS(CURRENT_DATE)-DAYS(BACKUP_END) \
AS "Days since last backup" FROM \
FILESPACES WHERE (DAYS(BACKUP_END) \
< (DAYS(CURRENT_DATE)-5))
Or:
SELECT * FROM FILESPACES WHERE -
CAST((CURRENT_TIMESTAMP-BACKUP_END)DAYS
AS DECIMAL(3,0))>5
File State The state of a file that resides in a
file system to which space management
has been added. A file can be in one of
three states - resident, premigrated, or
migrated.
See also: resident file; premigrated
file; migrated file
File system, add space management HSM: 'dsmmigfs add FSname'
or use the GUI cmd 'dsmhsm'
File system, deactivate space HSM: 'dsmmigfs deactivate FSname'
management or use the GUI cmd 'dsmhsm'
File system, display HSM: 'dsmdf [FSname]'
or 'ddf [FSname]'
File system, expanding An HSM-managed file system can be
expanded via SMIT or discrete commands,
while it is active - no problem.
File system, force migration HSM: 'dsmautomig [FSname]'
File system, Inactivate all files When a TSM client is retiring, it may be
desirable to render all its files
Inactive, and allow them to age out
gracefully, rather than do a wholesale
filespace deletion. Such an inactivation
is best done by either emptying the
client file system and then doing a last
Incremental backup, or by creating an
empty file system on the client and then
temporarily renaming the TSM server
filespace to match for the final
Incremental. A tedious alternative is
to use the client EXPire command on all
the client's Active objects.
In doing this, you want the retention
policy to have date-based expiration, as
files controlled by versions-only
expiration will remain in the retired
filespace indefinitely.
File system, query space management HSM: 'dsmmigfs query FSname'
or use the GUI cmd 'dsmhsm'
File system, reactivate space HSM: 'dsmmigfs reactivate FSname'
management or use the GUI cmd 'dsmhsm'
File system, remove space management HSM: 'dsmmigfs remove FSname' (q.v.)
File system, restrict incremental Use "DOMain" option in the Client User
backup to Options file to restrict incremental
backup to certain drives or file
systems.
File system, update space management HSM: 'dsmmigfs update FSname'
or use the GUI cmd 'dsmhsm'
File system incompatibility The *SM client is programmed to know
what kind of file systems your operating
system can handle - and, by logical
extension, what kinds it cannot. When
you attempt to perform cross-node
operations to for example inspect the
files backed up by a node running a
different operating system than yours,
the client will not show you anything.
The big problem here is the client's
failure to say anything useful about its
refusal, leaving the customer scratching
his head.
See also: message ANS4095E
File System Migrator (FSM) A kernel extension that is mounted over
an operating system file system when
space management is added to the file
system (over JFS, in AIX). The file
system migrator intercepts all file
system operations and provides any space
management support that is required. If
no space management support is required,
the operation is passed through to the
operating system (e.g., AIX) for it to
perform the file system operations.
(Note that this perpetual intercept adds
overhead, which delays customary file
system tasks like 'find' and 'ls -R'.)
In the AIX implementation of FSM, HSM
installation updates the /etc/vfs file
to add its virtual file system entry
like:
fsm 15 /sbin/helpers/fsmvfsmnthelp none
(HSM prefers VFS number 15.)
File system restoral, preview tapes Unfortunately, there is no command to
needed accomplish this. You could instead try
'SHow VOLUMEUSAGE NodeName' to get a
list of the Primary Storage Pool tapes
used by a node, then run
'Query CONtent VolName NODE=NodeName
FIlespace=FileSpaceName' on each volume
in turn to identify the volumes.
In ADSMv3+ you can exploit the
"No Query Restore" feature, which
displays the volume name to be mounted,
which you can then skip.
See: No Query Restore
File system size 'Query Filespace' shows its size in the
"Capacity" column, and its current
percent utilization under "Pct Util".
File system state The state of a file system that resides
on a workstation on which ADSM HSM is
installed. A file system can be in one
of these states-native, active,
inactive, or global inactive.
File system type used by a client 'Query FIlespace', "Filespace Type".
Reveals types such as JFS (AIX), FSM:JFS
(HSM under AIX), FAT (DOS, Windows 95),
NFS3, NTFS (Windows NT), XFS (IRIX).
File system types supported, Macintosh See the Macintosh Backup-Archive Clients
Installation and User's Guide, topic
"Supported file systems" (Table 10)
File system types supported, Unix See the Unix Backup-Archive Clients
Installation and User's Guide, topic
"File system and ACL support".
(Table 47)
File system types supported, Windows See the Windows Backup-Archive Clients
Installation and User's Guide, topic
"Performing an incremental, selective,
or incremental-by-date backup".
File systems, local The "DOMain ALL-LOCAL" client option
causes *SM to process all local file
systems during Incremental Backup.
For special, non-Backup processing, your
client may need to definitively acquire
the list of all local file systems. In
Unix, you can use the 'df' or 'mount'
commands and massage the output. A
cuter/sneakier method is to have TSM
tell you the file system names: have
"DOMain ALL-LOCAL" (or omit DOMain) in
your dsm.opt file, and then do 'dsmc
query opt'/'dsmc show opt' and parse the
returned DomainList. Rightly, /tmp is
not included in the returned list.
If you don't want to disturb your system
dsm.opt file, you can simply define
environment variable DSM_CONFIG to name
an empty file, like:
setenv DSM_CONFIG /dev/null
or use the -OPTFILE command line arg
(but this arg is not usable with all
commands). And to avoid having that
environment variable setting left in
your session, you can execute the whole
in a Csh sub-shell, by enclosing in
parens:
(setenv DSM_CONFIG /dev/null ; dsmc
show opt )
You might use the PRESchedulecmd to
weasel such an approach for you.
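For Bourne-style shells, the equivalent of the Csh sub-shell trick is a one-line environment override (a sketch; the exact label of the domain list in the output may vary by client level):

```
DSM_CONFIG=/dev/null dsmc show options 2>/dev/null | grep -i DomainList
```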
File systems to back up Specify a file system name via the
"DOMain option" (q.v.) or specify a file
system subdirectory via the
VIRTUALMountpoint option (q.v.) and
then code it like a file system in the
"DOMain option" (q.v.).
File systems supported See: File system types supported
File systems under HSM control End up enumerated in file
/etc/adsm/SpaceMan/config/dsmmigfstab
by virtue of running 'dsmmigfs'.
FILE_NAME ADSMv3 SQL: The full-path name of a
file, being a composite of the HL_NAME
and LL_NAME, like: /mydir/ .pinerc
See also: HL_NAME; LL_NAME
FILE_SIZE ADSMv3+ SQL: A column in the CONTENTS
table, supposedly reflecting the file
size. Unfortunately the SQL access we as
customers have to the TSM database is a
virtual view, which deprives us of much
information. Here, FILE_SIZE is the
size of the Aggregate (of small files),
not the individual file, except when the
file is very large and thus not
aggregated (greater than the client
TXNBytelimit setting), and except in the
case of HSM, which does not aggregate.
So, in a typical Contents listing
involving small files, you will see like
"AGGREGATED: 3/9", and all 9 files
having the same FILE_SIZE value, which
is the size of the Aggregate in which
they all reside. Only when you see
"AGGREGATED: No" is the FILE_SIZE the
actual size of the file. Note also that
the CONTENTS table is a dog to query, so
it is hopeless in a large system.
See also: File attributes
FILEEXit Server option to allow events to be
saved to a file -- NOTE: Events
generated are written to file exit when
generated, but AIX may not perform the
actual physical write until sometime
later - so events may not show up in the
file right after they are generated by
the server/client. Be sure to enable
events to be saved (ENABLE EVENTLOGGING
FILE ...) in addition to activating the
file exit receiver. Syntax:
FILEEXit [YES | NO] <filename>
[APPEND | REPLACE | PRESERVE]
-FILEList=<Filename> TSM v4.2+ option for providing to the
dsmc command a list of files and/or
directories, both as a convenience and
to overcome the long-imposed default
restriction of 20 on the number of
filespecs which may appear on the
command line. The basic rules are:
- one object name per line in the file;
- no wildcards;
- names containing spaces should be
enclosed in double-quotes;
- specifying a directory causes only the
directory itself to be processed, not
the files within it.
Invalid entries are skipped, resulting
in a dsmerror.log entry.
Processing performance (per 4.2 Tech
Guide redbook):
The entries in the filelist are
processed in the order they appear in
the filelist. For optimal processing
performance, you should pre-sort the
filelist by filespace name and path.
For restorals, the filenames are
optimized to make the best use of tapes
such that the restoral occurs in an
order different from that in the list.
Filelist restorals are *much* more
efficient than invoking 'dsmc restore'
once for each file.
See also: dsmc command line limits;
-REMOVEOPerandlimit
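As an illustration of generating a filelist per the rules above - one object per line, names containing spaces double-quoted - using a scratch directory; the dsmc invocation itself is hypothetical and shown commented:

```shell
# Build a filelist from a directory tree, quoting every name so that
# entries with embedded spaces are handled per the -FILEList rules.
rm -rf /tmp/tsm_filelist_demo
mkdir -p /tmp/tsm_filelist_demo
touch /tmp/tsm_filelist_demo/plain.txt "/tmp/tsm_filelist_demo/with space.txt"
find /tmp/tsm_filelist_demo -type f | sed 's/.*/"&"/' | sort > /tmp/tsm_filelist_demo.list
cat /tmp/tsm_filelist_demo.list
# On a real client, one would then run (not executed here):
# dsmc incremental -filelist=/tmp/tsm_filelist_demo.list
```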
Files, backup versions 'SHOW Versions NodeName FileSpace'
Files, binding to management class Files are associated with a Management
Class in a process called "binding" such
that the policies of the Management
Class then apply to the files. Binding
is done by:
Default management class in the Active
policy set.
Backup: DIRMc option
Archive: ARCHMc option on the 'dsmc
archive' command (only)
INCLUDE option of an include-exclude
list
Using a different management class for
files previously managed by another
management class causes the files to be
rebound to the rules of the new
management class - which can cause the
elimination of various inactive versions
of files and the like, depending upon
the change in rules; so be careful in
order to avoid disruption.
Ref: Admin Guide
Files, maximum transferred as a group "TXNGroupmax" definition in the server
between client and server options file.
Files, number of in storage pools, See: Query OCCupancy
query
Files sent in current or recent Sometimes, a current or recent session
client session had some impact on the server, and the
TSM administrator would like to identify
the particulars of the files involved.
It is usually well known what TSM
storage pool volume they went to, and so
a simple way to report them is:
'Query CONtent VolName COUnt=-N F=D'
where -N is some likely number which
will encompass the recently arrived
files of interest - which is most likely
to work when the files are large. This
may be even simpler if you have a disk
storage pool as the initial reception
area for Archive, Backup, or HSM client
operation. This technique is a handy
way to spot-check a set of tapes and see
what they were last used for.
(The Query Content command is targeted
at a volume and limited in scope, so no
server overhead, and results are nearly
instantaneous.)
Files in a volume, list 'Query CONtent VolName ...'
Files in database See: Objects in database
Fileserver and user-executed restorals Shops may have a fileserver and
dependent workstations, perhaps of
differing architectures. Backups occur
from the fileserver, but how to make it
possible for users - who are not on the
fileserver - to perform their own
restorals? Possibilities:
- For each user, have the fileserver do
a 'dsmc SET Access' to allow the
workstation users to employ -FROMNode
and -FROMOwner to perform restorals
to their workstations...whence the
data would flow back to the server
over NFS, which may be tolerable.
- Allow rsh access to the fileserver so
that via direct command or interface
the users could invoke ADSM restore.
- Fabricate a basic client-server
mechanism with a root proxy daemon on
the fileserver performing the
restoral for the user, and feeding
back the results. (A primitive
mechanism could even be mail-based,
with the agent on the fileserver
using procmail or the like to receive
and operate upon the request.)
- Have the fileserver employ two
different nodenames with ADSM: one
for its own system work, and the
other for the backup of those client
user file systems. This would allow
you to give the users a more
innocent, separate password which
they could use (or embed in a shell
script you write for them) to perform
ADSM restorals from their
workstations using the -nodename
option. The data in this case would
flow to the ADSM client on the
workstation, and then back to the
fileserver via NFS, which may be
tolerable. The nuisance here is
setting up and maintaining ADSM
client environments on the
workstations...which could be made
easier if you further exploited your
NFS to have the executables and
options files shared from the
fileserver (where they would reside,
but could not be executed because of
the server being Sun and client code
being AIX, say).
-FILESOnly ADSMv3+ client option, as used with
Restore and Retrieve, to cause the
operation to bring back only files, not
their accompanying directories. However,
in Archive, directories in the path of
the source file specification *will* be
archived. During Restore and Retrieve,
surrogate directories will be
constructed to emplace the original
structure of the file collection.
Ref: TSM 4.2 Technical Guide
See also: Restore Order; V2archive
Filespace See: File Space
Filespace number See: FSID
Filespace Type Element of 'Query FIlespace' server
command, reflecting the type of file
system which ADSM found when it was
*first* backed up. (Change from, for
example, FAT to NTFS, and there will be
no change in Filespace Type.)
Sample types: Platform:
JFS AIX
FSM:JFS AIX HSM
ext2 LINUX
NFS3 IRIX
XFS IRIX
FAT32 Windows 95
NTFS WinNT
AUTOFS IRIX
See also: Platform
FileSpaceList Entry in ADSM 'dsmc Query Options' or
TSM 'dsmc show options' report which
reveals the Virtual Mount Points defined
in dsm.sys. Names are reported under
this label if defined as a Virtual Mount
Point *and* something is actually there.
As such this is a good way of
determining if an incremental backup
will work on this name.
FILESPACES *SM SQL table for the node filespace.
Columns: NODE_NAME, FILESPACE_NAME,
FILESPACE_TYPE, CAPACITY, PCT_UTIL,
BACKUP_START, BACKUP_END
See also: Query FIlespace for field
meanings.
FILETEXTEXIT TSM server option to specify a file to
which enabled events are routed. Each
logged event is a fixed-size, readable
line. Syntax:
FILETEXTEXIT [No|Yes] File_Name
REPLACE|APPEND|PRESERVE
Parameters:
Yes Event logging to the file exit
receiver begins automatically at
server startup.
No Event logging to the file exit
receiver does not begin
automatically at server startup.
When this parameter has been
specified, you must begin event
logging manually by issuing the
BEGIN EVENTLOGGING command.
file_name The name of the file in which
the events are stored.
REPLACE If the file already exists, it
will be overwritten.
APPEND If the file already exists, data
will be appended to it.
PRESERVE If the file already exists, it
will not be overwritten.
Filling Typical status of a tape in a 'Query
Volume' report, reflecting a sequential
access volume is currently being filled
with data. (In searching the manuals,
note that the phrase "partially filled"
is often used instead of "filling".)
Note that this status can pertain though
the volume shows 100% utilized: the
utilization has reached the estimated
capacity but not yet the end of the
volume.
Note that "Filling" will not immediately
change to "Full" on a filled volume if
the Segment at the end of the volume
spans into the next volume: writing of
the remainder of the segment must
complete on the second volume before the
previous volume can be declared "Full".
This necessitates the mounting and
writing of a continuation volume, which
might be thwarted by volume availability
(MAXSCRatch, etc.).
Note also that it is not logical for a
non-mounted Filling status tape to be
used when the current tape fills with a
spanned file: files which span volumes
must always continue at the front of a
fresh volume. It would not be logical
for a file to span from the end of one
volume into the midst of another volume.
Thus, a Filling tape will most often be
used when an operation begins, not as it
continues.
Historically, *SM has always kept as
many volumes in filling status as you
have mount points defined to the device
class for that storage pool. So if your
device class has a MOUNTLimit of 2,
you'll always see 2 volumes in filling
status (barring volumes that encounter
an error). So when one Filling tape goes
full, it would start another one.
Advisory: Your scratch pool capacity can
dwindle faster than you would expect, by
tapes in Filling status having just a
small amount of data on them, perhaps
never again called upon for further
filling. This can be caused by a worthy
Filling tape dismounting when an
operation like Move Data starts: it
would otherwise use that Filling tape,
but because it is dismounting, *SM
instead uses a fresh tape, and that new
tape will probably be used for further
operations, leaving the old Filling tape
essentially abandoned; so your usable
tape complement shrinks.
Reclamation: Filling volumes can be
reclaimed as readily as Full volumes,
per the reclaim threshold you set.
Ref: Admin Guide, chapter 8, How the
Server Selects Volumes with Collocation
Enabled; ... Disabled
See also: Full; Pct Util
Firewall and idle session A firewall between the TSM client and
server can result in the session being
disconnected after, say, an hour of idle
time (as in a long MediaWait). The real
solution, of course, is to resolve the
wait problems. You might also set the
TCP keepalive interval to below the
value of your firewall timeout before a
session starts, or changing the
SO_KEEPALIVE on the socket for a current
session (if possible).
Msgs: ANR0480W; ANS1809W
See also: IBM site Technote 1109798
('How to make use of "keepalive" network
option with TSM clients')
Firewall support For web-based access, TSM 4.1 introduced
the option WEBPorts.
The client scheduler operating in
Prompted mode does not work when the
server is across a firewall; but it does
work when operating in Polling mode.
To enable the Backup-Archive client,
Command Line Admin client, and the
Scheduler (running in polling mode) to
run outside a firewall, the port
specified by the server option TCPPort
(default 1500) must be opened within the
firewall.
The server cannot log events to a Tivoli
Enterprise Console (T/EC) server across
a firewall.
Consider investigating VPN methods or
SAN in general.
Ref: Quick Start manual, "Connecting
with IBM Tivoli Storage Manager across a
Firewall".
See: Port numbers, for ADSM
client/server; SESSIONINITiation;
WEBPorts
Firmware IBM term for microcode.
Firmware, for 3570, 3590 May be in a secure directory on the ADSM
web site, index.storsys.ibm.com.
(login:code3570 passwd: mag5tar).
Fixed-home Cell 3494 concept wherein a cartridge is
assigned to a fixed storage cell: its
home will not change as it is used.
This is necessitated if the Dual Gripper
feature is not installed.
fixfsm (HSM) /usr/lpp/adsm/bin/fixfsm, a ksh script
for recreating .SpaceMan files when
there is a corruption or loss problem in
that HSM control area, including loss of
the whole directory.
Ref: Redbook "Using ADSM HSM", page 52
and appendix D.
Fixtest Synonymous with "patch"; indicates that
the code has not been fully tested. If
your TSM version has a nonzero value in
the 4th part of the version number
(i.e. the '8' in '5.1.5.8') then it is a
fixtest (or patch).
See also: Version numbering
FlashCopy Facility on the IBM ESS (Shark) which
purports to facilitate backups by
creating a backup image of a file
system. It performs the operation by
making a block-by-block copy of an
entire volume. The IBM doc talks of
having to unmount the file system before
taking the copy - which is impossible in
most sites - but that is actually an
advisory to ensure the consistency of
the involved data.
Floating-home Cell 3494 Home Cell Mode wherein a cartridge
need not be assigned to a fixed storage
cell: its home will change as it is
used. This is made possible via the
Dual Gripper feature.
See: Home Cell Mode
Flush In the context of a tape drive, a Flush
operation refers to writing to tape all
the data which the tape drive has
buffered...a sync operation.
FMR Field Microcode Replacement, as in
updating the firmware on a drive.
In the case of a tape drive, when the CE
does this he/she arrives with a tape
(FMR tape); but it can often be done via
host command.
.fmr Filename suffix for FMR (q.v.).
IBM changed to a .ro suffix in 2003.
Folder separator character ':'.
(Macintosh) See also: "Directory separator" for
Unix, DOS, OS/2, and Novell.
FOLlowsymbolic Client User Options file (dsm.opt)
(or 'dsmc -FOLlowsymbolic') option to specify whether ADSM is to
restore files to symbolic directory
links, and to allow a symbolic link to
be used as a Virtual Mount Point (q.v.).
Default: No
Implications in restoring a symbolic
link which pointed to a directory, and
the symlink already exists: If
FOLlowsymbolic=Yes, the symbolic link
is restored and overlays the existing
one; else ADSM displays an error msg.
You may also be thinking of
ARCHSYMLinkasfile.
FOLlowsymbolic, query ADSM 'dsmc Query Options' or TSM 'show
options" and look for "followsym".
Font to use with the dsm GUI It ignores the -fn flag. Use the
work-around of using X resources to set
the font the GUI should use. Try
invoking the GUI like this:
dsm -xrm '*fontList: fixed'
This lets the GUI come up with the font
"fixed" being used for all panels. To
use another font, simply replace "fixed"
with that font's name (the command
'xlsfonts' gives a list of fonts
available on your system).
Alternatively, you can put a line like
"dsm*fontList: fixed" into your
.Xdefaults file ("dsm" is the GUI's X
class name), and source this file using
'xrdb -merge ~/.Xdefaults"'. This sets
the default font to be used for all dsm
sessions.
forcedirectio Solaris UFS mount option: For the
duration of the mount, forced direct I/O
will be used - data is transferred
directly between user address space and
the disk, greatly improving performance.
If the filesystem is mounted using
noforcedirectio (the default), data is
buffered in kernel address space when
the user address space application moves
data.
forcedirectio is a performance option
that is of benefit only in large
sequential data transfers.
Reported value: One customer saw a
throughput enhancement factor of 5 - 15.
Ref: Solaris mount_ufs man page
Format See: Dateformat; -DISPLaymode;
MessageFormat; Numberformat; Timeformat
Format= Operand of many TSM queries, to specify
how much information to return:
Standard The default, to return a basic
amount of information.
Detailed To return full information.
FORMAT= Operand of DEFine DEVclass, to define
the manner in which TSM is to tell the
DEVType device to operate. For example,
a 3590 drive can be specified to operate
in either basic mode or compress mode.
Advice: Avoid the temptation to employ
the "FORMAT=DRIVE" specification,
available for many device types, which
says to operate at the highest format of
which the device is capable. This is
non-specific, and has historically been
the subject of defect reports where it
would not yield the highest operating
format. Specify exactly what you want,
to get what you want.
Format command /usr/lpp/adsmserv/bin/dsmfmt
Free backup products See: Amanda
http://www.backupcentral.com/
free-backup-software2.html
Freeze data See: Preserve TSM storage pool data
FREQuency A Copy Group attribute that specifies
the minimum interval, in days, between
successive backups. Note that this unit
refers to day thresholds, not 24-hour
intervals.
-FROMDate (and -FROMTime) Client option, as used with Restore and
Retrieve, to limit the operation to
files Backed up or Archived on or after
the indicated date.
Used on RESTORE, RETRIEVE, QUERY ARCHIVE
and QUERY BACKUP command line commands,
usually in conjunction with -TODATE
(and -TOTIME) to limit the files
involved.
The operation proceeds by the server
sending the client the full list of
files, for the client to filter out
those meeting the date requirement. A
non-query operation will then cause the
client to request the server to send the
data for each candidate file to the
client, which will then write it to the
designated location.
In ADSMv3, uses "classic" restore
protocol rather than No Query Restore
protocol.
Contrast with "FROMDate".
See: No Query Restore
/FROMEXCSERV=server-name TDP Exchange option for doing
cross-Exchange server restores... where
you are doing a restore from a different
Exchange Server.. and need to specify
the Exchange Server name that the backup
was taken under.
-FROMNode Used on ADSM client QUERY ARCHIVE,
QUERY BACKUP, Query Filespace,
QUERY MGMTCLASS, RESTORE, and RETRIEVE
command line to display, retrieve, or
restore files belonging to another user
on another node. (Root can always
access the files of other users, so
doesn't need this option.)
The owner of the files must have granted
you access by doing 'DSMC SET Access'.
Contrast with -NODename, which gives you
the ability to gain access to your own
files when you are at another node.
The Mac 3.7 client README advises that
using FROMNode with a large number of
files incurs a huge performance penalty,
and advises using NODename instead.
dsm GUI equivalent: Utilities menu,
"Access another node"
Related: -FROMOwner.
See also: VIRTUALNodename
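For example, after user jane on node otherhost has granted access with 'dsmc SET Access', one might pull her files down to the local node like this (all names hypothetical):

```
dsmc restore -FROMNode=otherhost -FROMOwner=jane -SUbdir=Yes \
  "/home/jane/*" /tmp/jane.restore/
```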
-FROMOwner Used on QUERY ARCHIVE, QUERY BACKUP,
QUERY FILESPACE, RESTORE, and RETRIEVE,
client commands, when invoked by an
ordinary user, to operate upon files
owned by another user.
Wildcard characters may be used.
Root can always access the files of
other users, but would want to use this
option to limit the operation to the
files owned by this user, as in querying
just that user's archive files in a file
system.
The owner of the files must have granted
you access by doing 'DSMC SET Access'.
As of ADSM3.1.7, non root users can
specify -FROMOwner=root to access files
owned by the root user if the root user
has granted them access.
Related: -FROMNode.
-FROMTime (and -TOTime) Client option, used with Restore and
Retrieve, to limit the operation to
files backed up on or after the
indicated time.
Used on RESTORE, RETRIEVE, QUERY ARCHIVE
and QUERY BACKUP command line commands,
usually in conjunction with -FROMDate
(and -TODate) to limit the files
involved.
The operation proceeds by the server
sending the client the full list of
files, for the client to filter out
those meeting the time requirement. A
non-query operation will then cause the
client to request the server to send the
data for each candidate file to the
client, which will then write it to the
designated location.
FRU Field-Replaceable Unit. A term that
hardware vendors use to describe a part
that can be replaced "in the field": at
the customer site.
fsCheckAdd TSM client module, apparently involved
in testing for the current file system
being represented on the TSM server as a
filespace, and in updating info about
the file system on the server. The
module has to perform statfs() or the
like on the local file system and then
either establish filespace info on the
server or update stats for the
pre-existing filespace of that name.
FSID (fsID) File Space ID: a unique numeric
identifier which the server assigns to a
filespace, under a node, when it is
introduced to server storage. (FSIDs are
not unique across nodes - only within
nodes.) Is referenced in commands like
DELete FIlespace, REName FIlespace.
The fsID of a file space can be
displayed via the GUI: on the main
window, select the File details option
from the View menu.
May appear in messages ANR0800I,
ANR0802I, ANR4391I.
fslock.pid A file in the .SpaceMan directory of an
HSM-managed file system, containing the
ASCII PID of the current or last
dsmreconcile process.
FSM See: File System Migrator
Fstypes Windows option file or command line
option to specify which type of file
system you want to see on the ADSM
server when you view file spaces on
another node. Use this option only when
you query, restore, or retrieve files
from another node. Choices:
FAT File Allocation Table drives.
RMT-FAT Remote FAT drives.
HPFS High-Performance File System
drives (OS/2 and Windows NT).
RMT-HPFS Remote HPFS drives.
NTFS Windows NT File System drives
RMT-NTFS Remote NTFS drives.
FTP site index.storsys.ibm.com
(Better to use direct FTP than WWW.)
Go into directory "adsm".
Full Typical status of a tape in a 'Query
Volume' report, reflecting a sequential
access volume which has been used to the
point of having filled. Over time, you
will see the Pct Util for the volume
drop. This reflects the logical deletion
of files on the volume per expiration
rules. But the very nature of serial
media is such that there is no such
thing as either the physical deletion of
files in the midst of the volume or
re-use of space in its midst. So the
physical tape remains unchanged as the
logical Pct Util value declines: in
real, physical terms, the tape is still
full as per having been written to the
End Of Tape marker. Hence, the volume
will retain the "Full" status until
either all files on it expire, or you
reclaim it at a reasonably low
percentage. Remember that you do not
want to quickly re-use volumes that
became full, but rather want to age
them, both to even out the utilization
of tapes in your library, and to assure
that physical data is still in place
should you be forced to restore your *SM
database to earlier than latest state.
Msgs: When tape fills: ANR8341I
End-of-volume reached...
See also: Filling; Pct Util
Full backup See: Backup, full
Full volumes, report avg capacity by SELECT STGPOOL_NAME AS STGPOOL,
storage pool CAST(MEAN(EST_CAPACITY_MB/1024) AS
DECIMAL(5,2)) AS GB_PER_FULL_VOL
FROM VOLUMES WHERE STATUS='FULL'
GROUP BY STGPOOL_NAME
Fuzzy backup A backup version of an object that might
not accurately reflect what is currently
in the object because ADSM backed up the
object while the object was being
modified. See: SERialization
Fuzzy copy An archive copy of an object that might
not accurately reflect what is currently
in the object because ADSM archived the
object while the object was being
modified.
GE Excessive abbreviation of GigE, which is
Gigabit Ethernet.
GEM Tivoli Global Enterprise Manager.
GENerate BACKUPSET TSM3.7 server command to create a copy
of a node's current Active data as a
single point-in-time amalgam. The output
is intended to be written to sequential
media, typically of a type which can be
read either on the server or client such
that the client can perform a
'dsmc REStore BACKUPSET' either through
the TSM server or by directly reading
the media from the client node.
Syntax:
'GENerate BACKUPSET Node_Name
Backup_Set_Name_Prefix
[*|FileSpaceName[,FileSpaceName]]
DEVclass=DevclassName
[SCRatch=Yes|No]
[VOLumes=VolName[,Volname]]
[RETention=365|Ndays|NOLimit]
[DESCription=___________]
[Wait=No|Yes'
It is wise to set a unique DESCription
value to facilitate later identification
and searching.
See: Backup Set; dsmc REStore BACKUPSET
Query BACKUPSETContents
GENERICTAPE DEVclass DEVType for when the server
does not recognize either the type of
device or the cartridge recording
format - never the best situation.
See also: ANS1312E
Ghost (Norton product) and TSM You can use Ghost as a quick way to
install the recovery system that is used
to run TSM restores of the real system.
Sites that use Ghost this way generally
put the recovery system and its TSM
client software in a separate partition
rather than non-standard folders in the
production partition.
GIGE Nickname for Gigabit Ethernet.
global inactive state The state of all file systems to which
space management has been added when
space management is globally deactivated
for a client node. When space management
is globally deactivated, HSM cannot
perform migration, recall, or
reconciliation. However, a root user can
update space management settings and add
space management to additional file
systems. Users can access resident and
premigrated files.
GPFS General Parallel File System (GPFS) is
the product name for Almaden's Tiger
Shark file system. It is a scalable
cluster file system for the RS/6000 SP.
Tiger Shark was originally developed for
large-scale multimedia. Later, it was
extended to support the additional
requirements of parallel computing. GPFS
supports file systems of several tens of
terabytes, and has run at I/O rates of
several gigabytes per second.
http://www.almaden.ibm.com/cs/gpfs.html
Grace period The default retention period applied to
files when the management class to which
they were bound disappears and the
default management class has no copy
group for them.
Per DEFine DOMain.
See: ARCHRETention; BACKRETention
Grant Access You mean SET Access.
See: dsmc SET Access
GRant AUTHority *SM server command to grant an
administrator one or more administrative
privilege classes. Syntax:
'GRant AUTHority Adm_Name
[CLasses=SYstem|Policy|STorage|
Operator|Analyst|Node]
[DOmains=domain1[,domain2...]]
[STGpools=pool1[,pool2...]]
[AUTHority=Access|Owner]
[DOmains=____|NOde=____]'
When you specify CLASSES=POLICY, you
specify a list of policy domains the
admin id can control. That admin can do
things ONLY for the nodes in the
specified domain(s): lock/unlock,
register, associate, change passwords.
But the admin won't be allowed to do any
things on the server end, like
checkin/checkout, manage storage pools,
or mess with admin schedules, or even
create new domains; you need SYSTEM for
that. A limitation with POLICY is the
inability to Cancel sessions for the
nodes in its domain.
See also: Query ADmin; REGister Admin;
REMove Admin; UPDate Admin
Graphical User Interface (GUI) A type of user interface that takes
advantage of a high-resolution monitor,
includes a combination of graphics, the
object-action paradigm, and the use of
pointing devices, menu bars, overlapping
windows, and icons. This is in contrast
with a Command Line Interface, where one
must type a command and arguments and
then press Enter on the keyboard to
achieve the desired action.
See: dsm, versus dsmc
Gripper On a tape robot (e.g., 3494) is the
"hand" part, carried on the Accessor,
which grabs and holds tapes as they are
moved between storage cells and tape
drives.
See also: Accessor
Gripper Error Recovery Cell 3494: Cartridge location 1 A 3 if Dual
Gripper installed; 1 A 1 if Dual Gripper
*not* installed. Also known as the
"Error Recovery Cell".
Ref: 3494 Operator Guide.
Group By SQL operator to specify groups of rows
to be formed if aggregate functions
(AVG, COUNT, MAX, SUM, etc.) are used.
SQL clause that allows you to group
records (rows) that have the same value
in a specified field and then apply an
aggregate function to each group.
For example, here we report the number
of files and megabytes, by node, in the
Occupancy table, for primary storage
pools:
SELECT NODE_NAME, SUM(NUM_FILES) as -
"# Files", SUM(PHYSICAL_MB) as -
"Physical MB" FROM OCCUPANCY WHERE -
STGPOOL_NAME IN (SELECT DISTINCT -
STGPOOL_NAME FROM STGPOOLS WHERE -
POOLTYPE='PRIMARY') GROUP BY -
NODE_NAME
The Group By causes the Sums to occur
for each stgpool in turn.
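The grouping-and-aggregating idea can also be sketched outside SQL. Here is a plain-shell analogue (sample node names are made up), counting rows per key the way GROUP BY NODE_NAME with COUNT(*) would:

```shell
# GROUP BY analogue in plain shell: sort rows by key, then let
# uniq -c count the rows in each group (like COUNT(*) per group).
printf '%s\n' nodeA nodeB nodeA nodeA nodeB |
    sort |
    uniq -c |
    awk '{print $2, $1}'    # key first, then its row count
# prints: nodeA 3
#         nodeB 2
```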
Groups Client System Options file (dsm.sys)
option to name the Unix groups which may
use ADSM services. It is a means of
restricting ADSM use to certain groups.
Default: any group can use ADSM.
GroupWise Novell Nterprise product for
communication and collaboration, a
principal component being mail. Its
backup is perhaps best accomplished with
St. Bernard's Open File Manager.
One thing you want to be careful of with
Groupwise is how your policies are set
up... It has been reported that
GroupWise stores its messages in
uniquely named files - which it would
periodically reorganize, deleting the
old uniquely named files and creating
new ones.
See also GWTSA.
GUI Graphical User Interface; as opposed to
the CLI or WCI.
GUI, control functionality The TSM client GUI, in Windows, may be
configured to limit the services
available to the end user. See IBM site
Technote 1109086, "Dynamic configurable
client GUI functionality".
GUI client Refers to the window-oriented client
interface, rather than the command-line
interface. Note that the GUI is a
convenience facility: as such its
performance is inferior to that of the
command line client, and so should not
be used for time-sensitive purposes such
as disaster recovery. (So says the B/A
Client manual, under "Performing Large
Restore Operations".)
As of 2004, the GUI is currently
designed to query the server for all
jobs when the GUI starts up, and then
depend on events from the server to keep
in sync when jobs are printed and new
jobs are submitted. It is possible for
the GUI to get out of sync with reality:
the GUI will remove a job instance from
its repertoire if a query for the job
fails to find it (which additionally
keeps 5010-505 "cannot find" messages
out of the server error.log).
GUI not showing files See: dsmc EXPire
GUI vs. CLI By design, the GUI client is different
in its manner of operation than the CLI
client, because the nature of the GUI
means that it needs to provide responses
faster. Before v3, the GUI worked much
like the CLI, obtaining all information
about the area being queried before
returning any. That was problematic, in
the obvious delay, and client memory
utilization (where a *SM client schedule
process itself may be hanging on to a
lot of memory). As of v3, the GUI asked
the server for only as much data as it
needed to fulfill its immediate display
request (a top level set of directories,
or the immediate contents of a selected
directory). That stepwise approach can
be problematic, however, in requiring
intermediate pieces to the ultimate goal
- which the CLI can reach directly. In
GUI based restorals, a subdirectory may
be missing, because of inadequate
policies or having the directory go to a
management class having a shorter
retention than the files contained
within the directory. This is even more
problematic in a PointInTime restoral,
where the version of a needed directory
is specific. The absence of the
intervening directory thwarts progress
in drilling down the directory structure
to the desired point.
So how is the CLI so different? It
operates upon paths. If you have
explored the BACKUPS table, you will
realize that files are cataloged by
their filespace, the HLname, and the
LLname, where the LLname is the file
name and the HLname is all the
intervening subdirectories between the
filespace name and file name. A CLI
restoral will specify a file as part of
a path name, and restoral is much more
direct. Well, what about the intervening
directories: what if one of them has
expired out? This is where the Restore
Order (q.v.) principle comes into play,
where surrogate directories are created
as the restoral proceeds according to
the order in which needed elements
appear on all the volumes needed in the
restoral. If a needed subdirectory can
be re-established from the backing
store, then its original form is
recreated; but if the subdir is missing
in the backing store, the surrogate has
to stand, on the basis of best guess.
The GUI is for inexperienced users who
have to be led by the hand and guided by
pictures. Experienced users simply use
the CLI, to more directly achieve their
objectives. But don't overlook the -Pick
option, which in many cases is an
excellent middle ground, and
indispensable when choosing among the
numerous Inactive versions of a file.
Ref: APAR IC24733
GUID (TSM 4.2+) The Globally Unique
IDentifier (GUID) associates a client
node with a physical system. The GUID is
(currently) not used for functional
purposes, but is only there for
potential reporting purposes.
Aka "TIVGUID".
When you install the Tivoli software:
On Unix, the tivguid program is run to
generate a GUID which is stored in the
/etc/tivoli directory;
On Windows, the tivguid.exe program is
run to generate a GUID which is stored
in the Registry.
The GUID is a 16-byte code that
identifies an interface to an object
across all computers and networks. The
identifier is unique because it contains
a time stamp and a code based on the
network address that is hard-wired on
the host computer's LAN interface card.
The GUID for a client node on the server
can change if the host system machine is
corrupted, if the file entry is lost, or
if a user uses the same node name from
different host systems. You can perform
the following functions from the command
line:
- Create a new GUID
'tivguid -Create'
- View the current GUID
'tivguid -Show'
- Write a specific value
- Create another GUID even if one
exists.
Do 'tivguid -Help' for usage.
The GUID is not updated if client option
VIRTUALNodename is employed, but will be
updated if NODename is employed.
Msgs: ANR1639I
Ref: Unix client manual (body and
glossary); IBM site entry swg21110521
See also: TCP_ADDRESS
GUIFilesysinfo Client option that determines whether
information such as filesystem capacity
is displayed on the initial GUI screen
for all filesystems (GUIF=All, the
default), or only for local filesystems
(GUIF=Local). GUIF=Local is useful if
the remote filesystems displayed are
often unreachable, because ADSM must
wait for the remote filesystem
information or a timeout before
displaying the initial GUI screen, which
may cause a delay in the appearance of
the initial GUI screen. This option can
be specified in dsm.sys or dsm.opt, or
on the command line when invoking the
GUI.
GUITREEViewafterbackup Specifies whether the client is returned
to the Backup, Restore, Archive, or
Retrieve window after a successful
operation completes.
Specify where: Client options file
(dsm.opt) and the client system options
file (dsm.sys).
Possibilities:
No - default; Yes.
GWTSA GroupWise Target Service Agent - a
NetWare TSA module used to make an
online backup of GroupWise.
See also: GroupWise
HALT *SM server command to shut down the
server. This is an abrupt action. If
possible, perform a Disable beforehand
and give time for prevailing sessions to
finish.
Unix alternative for when you are locked
out and want to halt the server cleanly
is to send it a SIGTERM signal:
'kill <Srvr_PID>'
( = 'kill -TERM <Srvr_PID>')
( = 'kill -15 <Srvr_PID>')
Other kill signals to terminate the TSM
server:
-11 (-SEGV) Gives the server a chance
to terminate itself, with error
handling, to log errors to
dsmerror.log and generate a core
dump.
-10 (BUS) Gives the server a chance
to terminate itself, but error
handling is skipped, no error
messages are logged, but generates
a core dump.
-9 (KILL) Applications cannot
intercept and handle this signal,
so all this does is abruptly
terminates the server with no
diagnostic information captured.
The following will *not* terminate the
server, but will capture info:
-30 (USR1) Generate a core dump, but
not terminate the server. (This is
new in TSM 5.1.)
Note: Sadly, there is no standard for
server signalling among IBM products: do
not expect any other IBM server to
respond to signals in the same way.
See also: Server "hangs"; Server lockout
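The SIGTERM approach can be sketched with a stand-in "server" process (not dsmserv itself, which is not assumed present here): the child traps TERM and exits cleanly, just as the TSM server shuts down cleanly on 'kill -15':

```shell
# Stand-in process that, like the TSM server, handles SIGTERM as a
# request for a clean shutdown rather than being abruptly killed.
sh -c 'trap "echo clean shutdown; exit 0" TERM
       while :; do sleep 1; done' &
srvr_pid=$!
sleep 1                      # let the stand-in start
kill -TERM "$srvr_pid"       # same as 'kill -15 <Srvr_PID>'
wait "$srvr_pid"
echo "exit status: $?"       # prints: exit status: 0
```

Note that 'kill -9' would give such a process no chance to run its handler, which is why the text above warns that SIGKILL captures no diagnostic information.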
Hard drives list See: File systems, local
Hard links (hardlinks) Unix: When more than one directory entry
in a file system points to the same file
system inode, as achieved by the 'ln'
command. The directory entries are just
names which associate themselves with a
certain inode number within the file
system. They are equivalent, which is to
say that one is not the "original, true"
entry and that the later one is "just a
link". The "hard links" condition is
known only because the inode block
contains a count of links to the inode.
When one of its multiple names is
deleted, the link count is reduced by
one, and the inode goes away only if the
link count reaches zero.
When you back up a file that contains a
hard link to another file, TSM stores
both the link information and the data
file on the server. If you back up two
files that contain a hard link to each
other, TSM stores the same data file
under both names, along with the link
information. When you restore a file
that contains hard link info, TSM
attempts to reestablish the links. If
only one of the hard-linked files is
still on your workstation, and you
restore both files, TSM hard-links them
together. Of course, if the hard link
was broken since the backup such that
the multiple names became files unto
themselves, then it will not be possible
to restore the hardlink name.
Ref: Using the Backup-Archive Clients
manual, "Understanding How Hard Links
Are Handled".
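The link-count behavior described above can be demonstrated directly in the shell:

```shell
# Two directory entries name one inode; deleting one name leaves
# the data reachable through the other, and the link count drops.
tmp=$(mktemp -d)
echo "data" > "$tmp/original"
ln "$tmp/original" "$tmp/alias"            # second name, same inode
ls -l "$tmp/original" | awk '{print $2}'   # link count: prints 2
rm "$tmp/original"                         # count drops to 1; inode survives
cat "$tmp/alias"                           # prints: data
rm -rf "$tmp"
```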
HAVING SQL operand, as in:
"... HAVING COUNT(*)>10"
HBA Host Bus Adapter, a term commonly used
with Fibre Channel to refer to the
interface card.
Performance/impact: FibreChannel is high
speed traffic, where an HBA such as a
6228 can eat the entire available
bandwidth of a PCI bus; so each card
should be on a separate PCI bus, with
very little else on the bus.
IBM recommends: "It is highly
recommended that Tape Drives and Tape
Libraries be connected to the system on
their own host bus adapter and not share
with other devices types (DISK, CDROM,
etc.)."
The redpaper IBM TotalStorage: FAStT
Best Practices Guide further says: "It
is often debated whether one should
share HBAs for disk storage and tape
connectivity. A guideline is to separate
the tape backup from the rest of your
storage by zoning and move the tape
traffic to a separate HBA and create an
separate zone. This avoids LIPa resets
from other loop devices to reset the
tape device and potentially interrupt a
running backup."
HDD Hard Disk Drive
Header files for 3590 programming /usr/include/sys/mtio.h
/usr/include/sys/Atape.h
Helical scan tape technology Magnetic tape is tightly wound around
and passes over a drum, at an angle.
Inside the drum and protruding from a
slot cut into it is a rotating arm with
read/write heads on both ends of the
arm. The heads contact the tape in
"slash" strokes, the effect being like a
helix. This recording technique allows
higher density than if the tape were
linearly passed over a single head: it
is most commonly found used in VCRs,
where analog video frames are
conveniently recorded in the slashes.
The technique was extended to data
recording in 8mm form - where it
achieved notoriety because of high error
rates and unreadable tapes.
Helical scanning is rough on tapes,
resulting in oxide shedding and head
clogging: frequent cleaning is
essential. In contrast, linear tape
technology does not employ sharp angles
or mechanically active heads, and so its
tapes enjoy much longer, reliable lives.
As found in Exabyte Mammoth and Sony AIT
(both 8mm tape technologies).
Help files for client May have to do:
'setenv HELP /usr/lpp/adsm/bin'
Hidden directory See: .SpaceMan
Hierarchical storage management client A program that runs on a workstation or
file server to provide space management
services. It automatically migrates
eligible files to ADSM storage to
maintain specific levels of free space
on local file systems, and automatically
recalls migrated files when they are
accessed. It also allows users to
migrate and recall specific files.
Hierarchy See: Storage Pool Hierarchy
High Capacity Output Facility 3494 hardware area, located on the
inside of the control unit door,
consisting of a designated column of
slots within the 3494 from which the
operator can take Bulk Ejects by opening
the door.
To change it, you need to perform a
Teach Current Configuration, which
involves going through a multi-step
configuration review, followed by a 3494
reboot; then you need to force a partial
reinventory, for the Library Manager to
review the cells involved.
See also the related Convenience I/O
Station.
High Performance Cartridge Tape The advanced cartridges used in the IBM
3590 tape drive.
High threshold HSM: The percentage of space usage on a
local file system at which HSM
automatically begins migrating eligible
files to ADSM storage. A root user sets
this percentage when adding space
management to a file system or updating
space management settings. Contrast
with low threshold. See "dsmmigfs".
High-level address Refers to the IP address of a server.
See also: Low-level address;
Set SERVERHladdress; Set SERVERLladdress
High-level name qualifier API: The middle part of a file path,
in between the filespace name on the
left, and the low-level name qualifier
on the right. The API software wants a
slash/backslash on the left part of the
qualifier, but not on the right (which
is different from the structure reported
in Query CONTent). Thus, with path
/a/b/c, /a is the filespace name, /b is
the hight-level name qualifier, and /c
is the low-level name qualifier. (If you
attempt to relocate the slash from the
LL name portion to the right side of the
HL, ANS0225E results.)
Ref: API manual, "High-level and
low-level names"
See also: Low-level name qualifier
HIghmig Operand of 'DEFine STGpool', to define
when ADSM can start migration for the
storage pool, as a percentage of the
storage pool occupancy. Can specify
1-100. Default: 90.
To force migration from a storage pool,
use 'UPDate STGpool' to reduce the
HIghmig value (with HI=0 being extreme).
See also: Cache; LOwmig
HIPER Seen in IBM APARs; refers to a situation
which is High Impact, PERvasive.
Hivelist See: BACKup REgistry
Hives High level keys
HL_NAME SQL: The high level name of an object,
being the directory in which the object
resides. Simply put, it is everything
between the filespace name and the file
name, which is to say all the
intervening directories.
In most cases, the FILESPACE_NAME will
not have a trailing slash, the HL_NAME
will have a leading and trailing slash,
and the LL_NAME will have no slashes.
Unix examples:
For file system /users, directory name
/users: FILESPACE_NAME="/users",
HL_NAME="/", LL_NAME="".
For file system /users, directory name
/users/mgmt/: FILESPACE_NAME="/users",
HL_NAME="/", LL_NAME="mgmt".
For file system /users, file name
/users/mgmt/phb:
FILESPACE_NAME="/users",
HL_NAME="/mgmt/", LL_NAME="phb".
For file system filename
/usr/docs/Acrobat3.0/Introduction.pdf
the FILESPACE_NAME="/usr/docs",
HL_NAME="/Acrobat3.0/",
LL_NAME="Introduction.pdf".
Windows example:
For \BONKER\C$\DATA\mydata.txt:
FILESPACE_NAME='\\BONKER\C$'
HL_NAME='\DATA\'
LL_NAME='MYDATA.TXT'
noting for Windows that:
- The HL_NAME and LL_NAME are stored
in upper case, regardless of how
they appear on the client;
- The HL_NAME begins and ends with a
backslash (unless it is the drive
root, in which case there is only
one backslash);
- Concatenated, FILESPACE_NAME,
HL_NAME, and LL_NAME should form a
proper path.
Note: The Contents table has a FILE_NAME
column which is a composite of the
HL_NAME and LL_NAME, like:
/mydir/ .pinerc
which makes it awkward to use the output
of that table to further select in the
Backups table, for example.
See also: FILE_NAME; LL_NAME
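The splitting rules above can be sketched with shell parameter expansion, given the filespace (mount point); the sample path is illustrative only:

```shell
# Split a Unix path into the FILESPACE_NAME / HL_NAME / LL_NAME
# pieces as the BACKUPS table stores them.
fs="/usr/docs"
path="/usr/docs/Acrobat3.0/Introduction.pdf"
rest=${path#"$fs"}      # -> /Acrobat3.0/Introduction.pdf
ll=${rest##*/}          # -> Introduction.pdf   (no slashes)
hl=${rest%"$ll"}        # -> /Acrobat3.0/       (leading+trailing slash)
echo "FILESPACE_NAME=$fs HL_NAME=$hl LL_NAME=$ll"
```

Concatenating the three pieces reproduces the original path, per the note above.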
HLAddress (High Level Address) REGister Node specification for the
client's IP address, being a hard-coded
specification of the address to use, as
opposed to the implied address
discovered by the TSM server during
client sessions, where the client has
contacted the server. (In such a contact
from the client, the client may
explicitly specify the address by which
it wants to be known to the server via
the TCPCLIENTAddress option, usually
where the client has multiple ethernet
cards, on multiple subnets.)
HLAddress is needed where there is no
contact from the client for the server
to thus learn its address, as in the
case of firewall implementations, where
it is not possible for the client, out
in a public network, to contact the
server...where the server would be
scheduled to initiate contact with the
client at a determined time.
See also: LLAddress; IP addresses of
clients; SCHEDMODe PRompted
Hole in the tape test An ultimate test of tape technology
error correction ability: a (1.25mm)
hole is punched through the midst of
data-laden tape, and then the tape is
put through a read test. 3590 tape
technology passes this extreme test.
("Magstar Data Integrity Tape
Experiment")
Ref: Redbook "IBM TotalStorage Tape
Selection and Differentiation Guide";
http://www4.clearlake.ibm.com/hpss/Forum
/2000/AdobePDF/Freelance-Graphics-IBM-
Tape-Solutions-Hoyle.pdf
Home Cell Mode 3494 concept determining whether
cartridges are assigned to fixed
storage slots (cells) or can be stored
anywhere after use (Floating-home Cell).
Query via 3494 Status menu selection
"Operational Status".
Home Element Column in 'Query LIBVolume' output.
See: HOME_ELEMENT
HOME_ELEMENT TSM DB: Column in LIBVOLUMES table
containing the Element address of the
SCSI library slot containing the tape.
(Does not apply to libraries which
contain their own supervisor, such as
the 3494, where TSM does not physically
control actions.)
Type: Integer Length: 10
See also: Element
Host name You mean "Server name" or "Node name"?
(q.v.)
Hot backup Colloquial term referring to performing
a backup on an object, such as a
database, which is undergoing continual
updating as a conventional, external
backup of that object proceeds. The
restorability of the object backed up
that way is questionable at best. The
more reasonable approach involves
performing the backup from inside the
object, as for example a database API
which can capture data for backup but do
so in conjunction with ongoing
processing. Another approach is an
operating system API which performs
continual, real-time backup.
HOUR(timestamp) SQL function to return the hour value
from a timestamp.
See also: MINUTE(); SECOND()
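A shell analogue of the same extraction, pulling the hour field out of a timestamp string (the sample timestamp is made up):

```shell
# Shell analogue of SQL HOUR(): isolate the hour field of a
# "YYYY-MM-DD HH:MM:SS" timestamp string.
ts="2005-03-07 14:35:02"
hour=$(echo "$ts" | cut -d' ' -f2 | cut -d: -f1)
echo "$hour"    # prints: 14
```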
HOURS See: DAYS
HP-UX file systems HP-UX uses the Veritas File System
(VxFS), also referred to as the
Journaled File System (JFS). VxFS
provides Logical Volume Manager (LVM)
tools to administer physical disks and
allow administrators to manage storage
assets. In general, one or more physical
disks are initialized as physical
volumes and are allocated to Volume
Groups. Storage from the Volume Group is
made available to a host by creating one
or more Logical Volumes. Once allocated,
Logical Volumes can be used for HP-UX
file systems or used as raw (logical)
devices for DBMS. Information about the
Volume Group and Logical Volume are
stored on each physical volume.
HPCT High Performance Cartridge Tape.
See: 3590 'J'
Contrast with CST and ECCST.
See also: 3590 'J'; EHPCT
HSM Hierarchical Storage Management.
Currently called "TSM for Space
Management".
A TSM client option available in AIX and
Solaris. Its nature calls for operating
system modifications, typically in the
form of kernel extensions. (Was once
available for SGI as well, but that was
withdrawn. IBM intended HSM for many
platforms, but as they approached the
task they found that various parties
were being licensed to likewise modify
the operating system to their needs: in
that this uncoordinated approach would
lead to inevitable conflicts, IBM
reduced its ambitions.)
Started by /etc/inittab's "adsmsmext"
entry invoking /etc/rc.adsmhsm .
See also: DM
HSM, add file system to it Employ the GUI, or the command:
'dsmmigfs add FileSystemName'
The file system name ends up being added
to the list
/etc/adsm/SpaceMan/config/dsmmigfstab
HSM, command format Control via the OPTIONFormat option
in the Client User Options file
(dsm.opt): STANDARD for long-form, else
SHORT. Default: STANDARD
HSM, deactivate for whole node 'dsmmigfs globaldeactivate'
Later, reactivate via:
'dsmmigfs globalreactivate'
HSM, display Unix kernel messages? Control via the KERNelmessages option in
the Client System Options file
(dsm.sys). Default: Yes
HSM, exclude files Specify "EXclude.spacemgmt pattern..."
in the Include-exclude options file
entry to exclude a file or group of
files from HSM handling.
HSM, for Windows It's Legato DiskXtender, an IBM-blessed
TSM companion product. (Formerly from
OTG Software, bought by Legato.)
http://portal1.legato.com/products/
disxtender/
In past history: Eastman Software had an
HSM for NT product called OPEN/stor,
being replaced in 1998 by Advanced
Storage for Windows NT (y2k compliant).
As of mid-98, OPEN/stor became Storage
Migrator 2.5 (version 2.5 includes the
ADSM option as part of the base product)
HSM, insufficient space in file system You can run into a situation where it
looks like there should be room in the
HSM-controlled file system to move in a
given file, but attempting to do so
results in an error indicating
insufficient space to complete the
operation. This may be due to
fragmentation of the disk space: the
query you performed to report the amount
of free space is misleading because it
includes partially free blocks of space,
whereas the file copy operation wants
whole, empty blocks. In AIX, for
example, the default file system block
size is 4 KB. A file containing 1 byte
of data requires a minimum storage unit
of one 4 KB block where 4095 bytes are
empty; but those 4095 bytes can only be
used for the expansion of that file, not
the introduction of a new file.
In AIX, a fragmentation problem at data
movement time can be determined by
examining the AIX Error Log, as via the
'errpt' command, for JFS_FS_FRAGMENTED
entries.
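The block-granularity point above can be illustrated directly: a 1-byte file still occupies at least one whole allocation block, so free-space totals overstate what a new file can actually use. (The block size, often 4 KB, varies by file system.)

```shell
# Apparent size vs. allocated space for a 1-byte file.
tmp=$(mktemp -d)
printf 'x' > "$tmp/tiny"               # apparent size: 1 byte
wc -c < "$tmp/tiny"                    # prints: 1
du -k "$tmp/tiny" | awk '{print $1}'   # allocated KB; commonly 4
rm -rf "$tmp"
```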
HSM, reactivate for whole node 'dsmmigfs globalreactivate'
as after having deactivated via
'dsmmigfs globaldeactivate'
HSM, recall daemons, max number Control via the MAXRecalldaemons option
in the Client System Options file
(dsm.sys). Default: 20
HSM, recall daemons, min number Control via the MINRecalldaemons option
in the Client System Options file
(dsm.sys). Default: 3
HSM, reconciliation interval Control via the RECOncileinterval option
in the Client System Options file
(dsm.sys). Default: 24 hours
HSM, reconciliation processes, max Control via the MAXRCONcileproc option
number in the Client System Options file
(dsm.sys). Default: 3
HSM, start manually In Unix: '/etc/rc.adsmhsm &'
HSM, threshold migration, query Via the AIX command:
'dsmmigfs Query [FileSysName]'
HSM, threshold migration, set Control via the AIX command:
'dsmmigfs Add|Update -hthreshold=N'
for the high threshold migration
percentage level. Use:
'dsmmigfs Add|Update -lthreshold=N'
for the low threshold migration
percentage level.
HSM, retention period for migrated Control via the MIGFILEEXPiration option
files (after modified or deleted in in the Client System Options file
client file system) (dsm.sys). Default: 7 (days)
HSM, space used by clients (nodes) 'Query AUDITOccupancy [NodeName(s)]
on all volumes [DOmain=DomainName(s)]
[POoltype=ANY|PRimary|COpy]'
Note: It is best to run 'AUDit LICenses'
before doing 'Query AUDITOccupancy' to
assure that the reported information
will be current.
HSM, threshold migration, max number Control via the MAXThresholdproc option
of processes in the Client System Options file
(dsm.sys). Default: 3
HSM active on a file system? 'dsmdf FSname', look in "FS State"
column for "a" for active, "i" for
inactive, or "gi" for global inactive.
HSM and Aggregation HSM did not begin utilizing Aggregation
when that capability came into being in
ADSMv3, and HSM still does not use it.
The rationale for not using Aggregation
is that the HSM design transfers each
file in its own transaction. This is
partly because HSM will generally be
migrating "large" files, which the
candidates search favors (unless the
size factor is 0), so they are migrated
before any of the smaller files.
The effect is increased server overhead
as well as greater tape utilization.
HSM backup, offsite copypool only Some implementations seek to have only
an offsite (copypool) image of the HSM
data, seeking to avoid the use of tapes
for an onsite backup image. An approach:
Via dsmmigfs, define the stub size to
be 512 to eliminate leading file data
from the stub, to force all files to be
eligible for migration. Employ a
relatively low HThreshold value on the
HSM file system, to cause most files to
migrate naturally. Preparatory to daily
TSM server administration tasks,
schedule a 'dsmmigrate -R' on the file
system, allowing enough time for it to
finish. As part of daily TSM server
administration, do Backup Stgpool on the
disk & tape stgpools to which that HSM
data migrates, to an appropriate offsite
stgpool.
HSM candidates list 'dsmmigquery FSname'
HSM commands, list help 'dsmmighelp'
HSM configuration directory /etc/adsm/SpaceMan/config
HSM daemons dsmmonitord and dsmrecalld.
Their PIDs are remembered in files
/etc/adsm/SpaceMan/dsmmonitord.pid and
/etc/adsm/SpaceMan/dsmrecalld.pid
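A sketch of checking daemon liveness from those .pid files; the check itself is generic POSIX shell, and the paths are AIX's, per the entry above:

```shell
# Report whether an HSM daemon is still alive, from its recorded .pid file.
check_daemon() {                           # $1 = pid file path
    pid=$(cat "$1" 2>/dev/null)
    if [ -z "$pid" ]; then
        echo "no pid file: $1"
    elif kill -0 "$pid" 2>/dev/null; then
        echo "running (pid $pid): $1"
    else
        echo "stale pid file (pid $pid gone): $1"
    fi
}
check_daemon /etc/adsm/SpaceMan/dsmmonitord.pid
check_daemon /etc/adsm/SpaceMan/dsmrecalld.pid
```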
HSM disaster recovery (offsite) issues For *SM offsite disaster recovery, what
should go offsite? Should you send
copies of HSM storage pool backups, or
copies of backup storage pools
reflecting HSM file system backups - or
both? HSM storage pools contain only
data which has migrated from the HSM
file system to TSM server storage -
which never includes small (<4 KB) files.
Because HSM storage pool copy tapes are
inherently incomplete, they cannot fully
recover HSM in the event of a disaster.
However, one would *like* to depend upon
HSM copy storage pool tapes because
restoring the server storage pool is so
easy.
Depending upon HSM file system backup
storage pool data for disaster recovery
is more appropriate in that it is a
complete image of the data: files of all
sizes, migrated or not. While complete,
a backup image of HSM is problematic for
disaster recovery in that there is
little chance that it can all fit into
the HSM file system upon restoral. To
accomplish such a restoral, you will
need an aggressive migration from the
file system to the HSM storage pool,
which has the opportunity to run as the
restoral takes time to transition from
one tape to another. (Note that a
Backup storage pool tape set is far too
awkward to depend upon as a resource for
restoring a bad HSM primary tape storage
pool: depend upon HSM backup storage
pool tapes only for file recovery and
disaster recovery.)
HSM error handling Specify a program to execute via the
ERRORPROG Client System Options file
(dsm.sys). Can be as simple as
"/bin/cat".
**WARNING** If ADSM loses its mind (as
when it obliterates its own client
password), this can result in tens of
thousands of mail messages being sent.
HSM file, no migration There are some files in HSM file systems
which need to be referenceable without
recall delays, such as text files which
describe the directory they are in; so
you would want to prevent their
migration. You might expect the dsmattr
command to be able to flag a file for
no-migration, but the command has no
such capability. What you instead need
to do is update your Include-Exclude
list to name that file, as in:
EXCLUDE.SPACEMGMT /Some/Filename
or
EXCLUDE.FILE.SPACEMGMT /Some/Filename
(The latter is for a single file; the
former may be used for files or
directories.)
HSM file, recall Is implicit by use of the file, or you
can use the dsmrecall command (q.v.).
HSM file system, back up Performing a 'dsmc Incremental' on an
HSM file system results in basic backup
files. If a file is Migrated, a backup
of it results in just the single
instance of the file in the Backups
table: there will be no backup image of
the stub file.
HSM file system, mount Make sure your current directory is not
the mount point directory, then:
'mount FSname' # Mount the JFS
'mount -v fsm FSname' # Mount the FSM
(The second command will result in msg
"ANS9309I Mount FSM: ADSM space
management mounted on FSname".)
HSM file system, mounting from an NFS You can have an HSM-managed file system
client available to remote systems via NFS; but
there are procedural considerations:
- Attempting to mount the file system
too early in server start-up could
result in having the (empty) server
mount point directory being mounted.
What's worse: a 'df' on the client
misleads with historical
information.
- AIX's normal exports sequence will
result in the JFS file system being
exported from the server. You need
to do another 'exportfs' command
after HSM mounts its FSM VFS over the
JFS file system, else on the client
you get:
mount ServerName:/FSname MtPoint
mount: access denied for
ServerName:/FSname
mount: giving up on:
ServerName:/FSname
Permission denied
So try '/usr/bin/exportfs -v FSname'.
Note that this can sometimes take up
to 10 minutes to take effect (some
problem with mountd).
HSM file system, move to another ADSM The simplest method is to set up a
server replacement HSM file system in the new
environment and perform a cross-node
restore (-VIRTUALNodename=FormerClient)
to populate the new file system,
specifying -SUbdir=Yes to recreate the
full directory structure, and
-RESToremigstate=No to move all the data
across. This method depends upon the
feasibility of using a datacomm line for
so much data, being able to use a tape
drive on the source TSM server for a
prolonged period, and the receiving HSM
file system parameters being set to
perform migration and dsmreconcile in
time to make space for the incoming
data.
Another approach is to: Perform a final
backup of the HSM file system in its
original location. EXPort Node of that
backup filespace. Define the HSM file
system and HSM storage pool in its new
environment. IMport Node to plant the
backup filespace. Perform a full file
system restoral in the new environment
(dsmc restore -SUbdir=Yes
-RESToremigstate=Yes (the default
anyway)) to recreate the directory
structure, restore small files, and
recreate stub files. This basically
follows the HSM file system recovery
procedures outlined in the HSM manual
and HSM redbook (q.v.). The big
consideration to this approach is that
Export and Import are very slow.
HSM file system, move to another The following method is anecdotally
client, same server reported, but is undocumented:
import volume group
mount the HSM file system
dsmmigfs import <options> <hsm-fs>
HSM file system, remove Make sure that the file system is all
but empty, because the following REMove
will cause a full recall.
'dsmmigfs REMove FSname', which...
- runs reconciliation for the filesys;
- evaluates space for total recall;
- recalls all files
- has the server eliminate migrated file
images from server storage
- unmounts the FSM from the JFS filesys.
You then do:
'umount FSname' # Unmount the JFS
'rmfs -r FSname' to remove the file
system, LV, and mount point.
Remove name from /etc/exports.HSM;
Update /usr/lpp/adsm/bin/dsm.opt, and
restart dsmc schedule process, if any;
Update /usr/lpp/adsm/bin/rc.adsmhsm, if
filesys named there.
HSM file system, rename 'dsmmigfs deactivate FSname'
'umount FSname' # Unmount the FSM
'umount FSname' # Unmount the JFS
Change name in /etc/filesystems;
Change name in /etc/exports.HSM;
Rename mount point;
Change name in
/etc/adsm/SpaceMan/config/dsmmigfstab;
In ADSM server: 'REName FIlespace
NodeName FSname NewFSname'
'mount NewFSname';
'mount -v fsm NewFSname';
'dsmmigfs reactivate NewFSname'
'/usr/sbin/exportfs NewFSname'
# To export the FSM
Update /usr/lpp/adsm/bin/dsm.opt
Update /usr/lpp/adsm/bin/rc.adsmhsm, if
filesys named there.
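The rename sequence above can be sketched as a dry-run script; node and file system names are hypothetical, and the run() wrapper only echoes each step, so it is safe to preview anywhere:

```shell
# Dry-run of the HSM file system rename steps; change run() to execute
# the commands ("$@" instead of echo) on a live AIX HSM client.
FS=/hsm/old; NEWFS=/hsm/new; NODE=mynode       # hypothetical names
run() { echo "+ $*"; }
run dsmmigfs deactivate "$FS"
run umount "$FS"                               # unmount the FSM
run umount "$FS"                               # unmount the JFS
# (edit /etc/filesystems, /etc/exports.HSM, dsmmigfstab; rename mount point)
run dsmadmc "REName FIlespace $NODE $FS $NEWFS"
run mount "$NEWFS"
run mount -v fsm "$NEWFS"
run dsmmigfs reactivate "$NEWFS"
run /usr/sbin/exportfs "$NEWFS"
```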
HSM file system, restore as stub files Use -RESToremigstate=Yes (the default)
(restore in migrated state) to restore the files such that the data
ends up in TSM server filespace and the
client file system gets stub files.
(Naturally, files too small to
participate in HSM migration are fixed
residents in the file system, and
physical restoral must occur.)
Can specify either on the dsmc command
line, or in the Client User Options file
(dsm.opt). Example:
'dsmc restore -RESToremigstate=Yes
-SUbdir=Yes /FileSystem'
To query, do 'dsmc Query Option' in ADSM
or 'dsmc show options' in TSM and look
for "restoreMigState".
See also: dsmmigundelete; Leader data
HSM file system, unmount Do this when the file system is dormant.
Make sure your current directory is not
the mount point directory, then:
'umount FSname' # Unmount the FSM
'umount FSname' # Unmount the JFS
HSM file systems, list 'dsmmigfs query [FileSystemName...]'
The file systems end up enumerated in
file
/etc/adsm/SpaceMan/config/dsmmigfstab
by virtue of running 'dsmmigfs add'.
HSM files, database space required Figure 143 bytes + filename length.
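A rough sizing sketch using that figure; the file count and average filename length below are hypothetical:

```shell
# Estimated TSM database space for HSM-migrated files:
# ~143 bytes per file, plus the filename length.
files=1000000                 # migrated files (assumed)
avg_name_len=32               # average filename length in bytes (assumed)
bytes=$(( files * (143 + avg_name_len) ))
echo "Estimated DB space: $(( bytes / 1024 / 1024 )) MB"
# -> Estimated DB space: 166 MB
```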
HSM files, restore as stubs (migrated Control via the RESToremigstate Client
files) or as whole files User Options file (dsm.opt) option.
Specify "RESToremigstate Yes" to restore
as stubs (the default, usual method); or
just say "No", to fully restore the
files to the local file system in
resident state.
HSM files, actual sizes The Unix 'du -k ...' command can be used
to display the sizes of files as they
sit in the Unix file system; but it
obviously knows not of HSM and cannot
display actual data sizes for files
migrated from an HSM-controlled file
system. Use the ADSM HSM 'dsmdu'
command to display the true sizes.
See: dsmdu
HSM files, seek in database SELECT * FROM SPACEMGFILES WHERE -
NODE_NAME='UPPER_CASE_NAME' AND -
FILESPACE_NAME='___' AND FILE_NAME='___'
This will report state (Active,
Inactive), migration date, deletion
date, and management class name. It will
not report owner, size, storage pool
name or volumes that the file is stored
on.
HSM for Netware Product "FileWizard 4 TSM" from a
company called Knozall Systems.
http://www.knozall.com/hsm.htm
HSM for Windows See: HSM, for Windows
HSM installed? In AIX, do: lslpp -l "adsm*"
or: lslpp -l "tsm*"
and look for "HSM".
HSM kernel extension loaded? For ADSM:
'/usr/lpp/adsm/bin/installfsm
-q /usr/lpp/adsm/bin/kext'
For TSM:
In AIX, run the 'kdb' command, "lke"
subcommand, and look for
/usr/tivoli/tsm/client/hsm/bin/kext
in the list.
See also: installfsm
HSM kernel extension management See: installfsm
HSM Management Class, select HSM uses the Default Management Class
which is in force for the Policy Domain,
which can be queried from the client via
the dsmc command 'Query MGmtclass'.
You may override the Default Management
Class and select another by coding an
Include-Exclude file, with the third
operand on an Include line specifying
the Management Class to be used for the
file(s) named in the second operand.
HSM migration behavior Observations via 'dsmls' show that files
migrate as follows:
1. They sit in the file system for some
time, as Resident (r).
2. When space is needed, migration
candidates are migrated (m). In
addition, the Premigration Percentage
causes a certain additional amount to
be premigrated (p).
Note that the premigrated files are
recorded in the premigrdb database
located in the .SpaceMan directory.
HSM migration candidates list empty See: HSM migration not happening
HSM migration not happening Possible causes:
- The file system is not actively under
HSM control.
- The management class operand
SPACEMGTECHnique is NONE or SELective.
Check via client 'dsmmigquery -M -D'.
- The files are predominantly smaller
than the stub size defined for the HSM
file system (usually 4KB).
- If your file system usage level is not
over the defined migration threshold,
there is no need for migration.
- dsmmonitord not running (started by
rc.adsmhsm) so as to run dsmreconcile
and create a migration candidates list
(verifiable via 'dsmmigquery -c FSnm')
- By default, migration requires that a
backup have been done first, per the
MGmtclass MIGREQUIRESBkup choice.
(Look for msg ANS9297I.)
- Assure that your storage pool
migration destinations are defined as
you think they are.
- Assure that the destination storage
pool Access is Read/Write, and that
its volumes are online.
- Another cause of this problem is there
being binary (as in a Newline)
embedded in a space-managed file name.
Look for such an oddity in the
migration candidates list.
- Try a 'dsmdf' on the file system name:
it may report a Mgrtd KB value which
is way over the Quota value reported
by 'dsmmigfs query'. If the value
looks bogus, try running a
dsmreconcile on the file system, which
may fix it. See also: ANS9267E
- Try a manual dsmautomig, which may
reveal problems.
- Try a manual dsmreconcile. That may
say "Note: unable to find any
candidates in the file system.": try
doing 'dsmmigrate -R Fsname' and see
what messages result.
- If there is a migration candidates
list, manually run dsmautomig and see
if that works; else try a manual
dsmmigrate on a selected file and see
if that works.
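The checks above can be walked through as a dry-run sketch; the file system name is hypothetical, and run() only echoes, so nothing is executed:

```shell
# Dry-run of a diagnostic pass for stalled HSM migration.
FS=/hsmfs                              # hypothetical HSM file system
run() { echo "+ $*"; }
run dsmmigquery -M -D                  # is SPACEMGTECHnique NONE/SELective?
run dsmmigquery -c "$FS"               # is there a candidates list at all?
run dsmdf "$FS"                        # compare Mgrtd KB against the Quota
run dsmreconcile "$FS"                 # rebuild candidates; fix bogus values
run dsmautomig "$FS"                   # then retry threshold migration
```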
HSM migration processes, number The 4.1.2 HSM client introduces the new
parameter MAXMIGRATORS (q.v.).
HSM quota HSM: The total number of megabytes of
data that can be migrated and
premigrated from a file system to ADSM
storage. The default is "no quota", but
if activated, the default value is the
same number of megabytes as allocated
for the file system itself.
HSM quota, define Defined when adding space management to
a file system, via the dsmhsm GUI or
the 'dsmmigfs add -quota=NNN Fsname'
command.
HSM quota, update Can be done via the dsmhsm GUI or the
'dsmmigfs update -quota=NNN Fsname'
command.
HSM rc file /etc/rc.adsmhsm, which is a symlink to
/usr/lpp/adsm/rc.adsmhsm, a Ksh script.
Invoked by /etc/inittab's "adsmsmext"
entry.
As provided by IBM, the script has no
"#!" first line to cause it to be run
under Ksh if invoked simply by name.
HSM recall Priority: Will preempt a BAckup STGpool.
HSM recall processes, cancel 'dsmrm ID [ID ...]'
HSM recall processes, current 'dsmq'
HSM server Specified on the MIgrateserver option
in the Client System Options file
(dsm.sys). Default: the server named on
the DEFAULTServer option.
HSM status info Stored in: /etc/adsm/SpaceMan/status
which is the symlink target of the
.SpaceMan/status entry in the
space-managed file system.
HSM threshold migration interval Defaults to once every 5 minutes.
Specify a value on the CHEckthresholds
option in the Client System Options file
(dsm.sys).
HTML manuals The TSM manuals, in HTML form, are
available in the Books component,
downloadable from the IBM FTP website.
Sample AIX fileset:
TSM523C.tivoli.tsm.books.en_US.client
HTTP A COMMmethod defined in the Server
Options File, for the Web-browser based
administrative interface. You need to
code both:
COMMmethod HTTP
HTTPPort 1580
HTTPport Client System Options File (dsm.sys)
option specifying the TCP/IP port
address for the Web Client.
Code a value from 1000 - 32767.
Default: 1581
Windows advisory: The HTTPport in the
options file may not actually be what
controls the port number: there may be
an HttpPort value in the Registry, which
will take precedence for the port on
which to listen. The registry entry is:
HKEY_LOCAL_MACHINE\SYSTEM\ControlSetXX
\Services\ADSM Client Acceptor
\Parameters\HttpPort . The "dsm.opt"
file will be looked at if this HttpPort
Registry entry does not exist: if there
is no HTTPport value specified in the
dsm.opt, the default value of 1581 will
be used. The HttpPort value in the
Registry can be updated with the
dsmcutil command: dsmcutil update cad
/name:"NameOfCadService" /httpport:####
Surprise: The HTTPport value also
controls the Client Acceptor (dsmcad)
port number! Ref: www.ibm.com/support/
entdocview.wss?uid=swg21079454 .
See also: WEBPorts
HTTPPort Server options file option specifying
the port number for the HTTP
communication method.
Default: 1580
See also: Web Admin
HTTPS ADSMv3 COMMmethod defined in the Server
Options File, for a Web-browser based
administrative interface using the
Secure Sockets Layer (SSL)
communications protocol. You need to
code both:
COMMmethod HTTPS
HTTPSPort 1580
Note: Not required for the Web proxy and
is not supported by TSM.
HTTPSPort Server options file option specifying
the port number for the HTTPS
communication method, which uses the
Secure Socket Layer (SSL).
Default: 1543
See also: Web Admin
Hyperthreading See: Intel hyperthreading & licensing
I/O error messages ANR1414W at TSM server start-up time,
reporting a volume "read-only" due to
previous write error.
ANR8359E Media fault ... (q.v.)
I/O errors reading a tape Errors are sometimes encountered when
reading tapes. Sometimes, simply
repeating the read will cause the error
to disappear. With tapes which have
been unused for a long time, or stored
under unfavorable conditions, you may
want to retension the tape before trying
to read it.
See: Retension
IBM media problems Call (888) IBM-MEDIA about the problem
you have with media purchased from IBM.
IBM Tivoli Storage Manager Formal name of product, as of 2002/04,
previously called Tivoli Storage Manager
(and before that, ADSTAR Distributed
Storage Manager, derived from WDSF).
IBM TotalStorage New name, supplanting "Magstar" in 2002.
IBMtape The 3590/LTO/Ultrium device driver for
Solaris systems.
ftp://ftp.software.ibm.com/storage/
devdrvr/Solaris/
See also: Atape
ICN IBM Customer Number. The 7-digit number
under which you order IBM software, and
through which you obtain IBM support
under contract.
Idle timeout value, define "IDLETimeout" definition in the server
options file.
Idle wait (IdleW, IdleWait) "Sess State" value in 'Query SEssion'
output for when the server end of the
session is idle, waiting for a request
from the client.
Recorded in the 22nd field of the
accounting record, and the
"Pct. Idle Wait Last Session" field of
the 'Query Node Format=Detailed' server
command, where slower clients typically
have larger numbers.
Can result when a client has asked for
a mass of information from the server
(as in an incremental backup), the
server has sent it to the client, and
the client is now very busy sorting it
and scanning file systems for files
which need to be backed up, comparing
against the list of already-backed-up
files provided by the server. In the
midst of a Backup session, idle wait
time is as the client is running through
the file system seeking the next changed
file to back up - and changed files may
be few and far between in a given file
system. Naturally, a client system busy
doing other things will deprive the TSM
backup of CPU time and result in file
system contention (made worse by virus
checking). Also keep in mind that the
client doesn't send data to the server
until it has a transaction's worth.
Retries are another impediment to
getting back to the server.
If the server expects a response and the
client is too busy for a long time,
IDLETimeout can occur.
See also: Communications Wait;
Media Wait; SendW, Start
IDLETimeout Definition in the server options file.
Specifies the number of minutes that a
client session can be idle before its
session will be canceled.
Allowed: 1 (minute) to infinity
Default: 15 (minutes)
Too small a value can result in server
message ANR0482W. A value of 60 is much
more realistic.
See IBM site Technote 1161949 ("Why are
sessions being terminated due to
timeouts?").
See also: COMMTimeout; SETOPT
IDLETimeout server option, query 'Query OPTion'
IDRC Improved Data Recording Capability.
Technology built into the 3590 tape
drive to compress and compact data, from
two to five times that of uncompacted
data (the typical compression factor
being 3x).
IE Usually, Internet Explorer; but
sometimes an unfortunately short
abbreviation of Include/Exclude.
-IFNewer Client option, used with Restore and
Retrieve, to cause replacement of an
existing file with the file from the
server storage pool if that server file
is newer than the existing file. Note
that this is part of a full replacement
type restore ("-REPlace=All|Yes|Prompt")
and won't work if using "-REPlace=No",
which is inconsistent with -IFNewer.
A -IFNewer restoral is an NQR restoral,
and as such will not pull all the data
off storage volumes and send it to the
client for acceptance/rejection. But,
the file system objects metadata still
has to be sent from the server to the
client, which then allows the client to
request the full file data as needed to
restore candidate files. As such, a
-IFNewer restore is conceptually an
incremental backup in reverse: it's an
incremental restoral. The type of client
platform/filesystem can make a big
difference in restoral time: with Unix,
metadata is usually wholly contained
within the TSM database records; but for
Windows, with its more extensive
metadata structure, it may be in storage
pools, and that can mean a lot of pain.
A -IFNewer restore unto itself is still
a No Query Restore; but the addition of
other options can turn it into a Classic
Restore.
WARNING: -REP=All|Yes -IFNewer was
horrendously inefficient: it essentially
does a -REP=ALL, mounting every tape and
moving every file, and at the last
second, only replaces it if newer.
Ref: APARs IX87650 (server), IC23158
(client), IX89496 (client).
Use -FROMDate, -FROMTime, and -PITDate
instead, which result in database
selection being done in the server,
minimizing the movement of data.
See also: -LAtest
IGNORESOCKETS Testflag, per APAR IX80646, to give the
ability to skip socket files during
Restore. Works for all platforms except
AIX 4.2 and HP-UX, which always skip
socket files. Do not attempt to use
during Backup.
See also: Sockets, Testflag
Image Backup (aka Snapshot Backup, The 3.7 facility for backing up a
Logical Volume Backup) logical volume (partition) as a physical
image, on the AIX, HP, and Sun client
platforms.
Commands: dsmc Backup Image
dsmc Query Image
dsmc REStore Image
In TSM 5.1, available on Windows 2000,
where the Logical Volume Storage Agent
(LVSA) is available, which can take a
snapshot of the volume while it is
online. This image backup is a block by
block copy of the data. Optionally only
occupied blocks can be copied. If the
snapshot option is used (rather than
static) then any blocks which change
during the backup process are first kept
unaltered in an Original Block File. In
this way the client is able to send a
consistent image of the volume as it
was at the start of the snapshot process
to the Tivoli Storage Manager server.
Subsequently available on Windows XP
(which is built upon Windows 2000).
TSM 5.2 built upon this: its Open File
Support uses this Snapshot mechanism.
See also: Open File Support; Raw logical
volume, back up; Snapshot
.img Filename suffix for a file which is an
image for a CD, as may be found in the
product maintenance area of the FTP site.
Immediate Client Actions utility After using, stop and restart the
scheduler service on the client, so it
can query the server to find out its
next schedule, which in this case would
be the immediate action you created.
Otherwise you will need to wait till the
client checks for its next schedule on
its own. Also affected by the server
'Set RANDomize' command.
Imperfect collocation Occurs when collocation is enabled, but
there are insufficient scratch tapes to
maintain full separation of data, such
that data which otherwise would be kept
separate has to be mingled within
remaining volume space.
See also: Collocation
Import To import into a TSM server the
definitions and/or data from another
server where an Export had been done.
Notes:
Code -volumenames in the order they
were created.
If the server encounters a policy set
named ACTIVE on the tape volume during
the import process, it uses a temporary
policy set named $$ACTIVE$$ to import
the active policy set. After each
$$ACTIVE$$ policy set has been
activated, the server deletes that
$$ACTIVE$$ policy set from the target
server. TSM uses the $$ACTIVE$$ name to
show you that the policy set which is
currently activated for this domain is
the policy set that was active at the
time the export was performed. After
doing the Import, review the policy
results and perform VALidate POlicyset
and ACTivate POlicyset as needed.
IMport Node *SM server command to import data
previously EXPorted from a *SM server.
The process will retain the exported
domain and node name.
Syntax:
'IMPort Node DEVclass=DevclassName
VOLumenames=VolName(s)
[NodeName(s)]
[FILESpace=________]
[DOmains=____]
[FILEData=None|ALl|ARchive|
Backup|BACKUPActive|
ALLActive|
SPacemanaged]
[Preview=No|Yes]
[Dates=Absolute|Relative]
[Replacedefs=No|Yes]'
where NodeName, FILESpace, and DOmains
are used to select from the input.
Dates= Specifies whether the recorded
backup or archive dates for client node
file copies are set to the values
specified when the files were exported
(Absolute), or are adjusted relative to
the date of import (Relative).
Default: Absolute.
Backup data will be put into the tape
pool, and HSM data will be put into the
HSM disk storage pool.
Note that the exported domain name will
typically not exist on the import system
(nor would you want it to) and so the
import operation will attempt to assign
all to domain name STANDARD - after
which you can perform an UPDate Node
to reassign the node to an appropriate
domain name in the importing system.
Note that the volumes to be imported
need to be checked in to the receiving
server before use.
If Import finds a filespace of the same
name already on the receiving server, it
will rename the incoming filespace to
have a digit at the end of the name. A
message reflecting this should appear in
the Activity Log. (See "Importing File
Data Information", "Understanding How
Duplicate File Spaces Are Handled" in
the Admin Guide.) Alas, there has been
no merging capability in Import. There
is Rename Filespace capability in the
server, to adjust things to suit your
environment, where you could make it
match a file system name so that users
could therein retrieve their imported
data.
Look for ANR0617I "success" message in
the Activity Log to verify that the
import has worked.
DO NOT perform Query OCCupancy while
Import is running: it has been seen to
result in: ANR9999D imutil.c(2555): Lock
acquisition (ixLock) failed for
Inventory node 17.
Messages: ANR0798E, ANR1366W, ANR1368W
Improved Data Recording Capability See: IDRC
IN SQL clause to include a particular set
of data that matches one of a list of
values. The set is specified in
parentheses. Literals may appear in the
set, enclosed in single quotes.
WHERE COLUMN_NAME -
IN (value1,value2,value3)
*SM does not want a name or component to
exceed 18 characters, else ANR2914E.
See also: NOT IN
IN USE Status of a tape drive in 'Query MOunt'
output when a tape drive is committed
to a session involving a client.
-INActive 'dsmc REStore' option to cause ADSM to
display both the active and inactive
versions of files in the selection
generated via -Pick.
Inactive, when a file went Do a Select on the Backups table, where
the DEACTIVATE_DATE tells the story.
Inactive file, restore See example under "-PIck".
Inactive file system HSM: A file system for which you have
deactivated space management. When space
management is deactivated for a file
system, HSM cannot perform migration,
recall, or reconciliation for the file
system. However, a root user can update
space management settings for the file
system, and users can access resident
and premigrated files. Contrast with
active file system.
Inactive files, identify in Select STATE='INACTIVE_VERSION'
See also: Active files, identify in
Select; STATE
Inactive files, list via SQL SELECT HL_NAME, LL_NAME, -
DATE(BACKUP_DATE) as bkdate, -
DATE(DEACTIVATE_DATE) AS DELDATE, -
CLASS_NAME FROM ADSM.BACKUPS WHERE -
STATE = 'INACTIVE_VERSION' AND -
TYPE = 'FILE' AND -
NODE_NAME = 'UPPER_CASE_NAME' AND -
FILESPACE_NAME = 'Case_Sensitive_Name'
See also: HL_NAME; LL_NAME
Inactive files, number and bytes Do 'Query OCCupancy NodeName
FileSpaceName Type=Backup'
Total the number of files and bytes, for
all stored data, Active and Inactive.
Do 'EXPort Node NodeName
FILESpace=FileSpaceName
FILEData=BACKUPActive
Preview=Yes'
Message ANR0986I will report the number
of files and bytes for Active files.
Subtract these numbers from those
obtained in Query OCCupancy, yielding
values for Inactive files.
See also: Active files, number and bytes
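The subtraction described above, with hypothetical numbers standing in for the Query OCCupancy and ANR0986I figures:

```shell
# Inactive = (Query OCCupancy totals) - (ANR0986I active counts).
total_files=125000;  total_mb=480000   # from Query OCCupancy (assumed)
active_files=98000;  active_mb=310000  # from EXPort ... Preview=Yes (assumed)
echo "Inactive: $(( total_files - active_files )) files," \
     "$(( total_mb - active_mb )) MB"
# -> Inactive: 27000 files, 170000 MB
```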
Inactive files, rebind If a file remains on the client and it
has an Active and one or more Inactive
versions in TSM server storage, changing
policies and then performing an
incremental backup will cause all the
Active and Inactive versions of the file
to be rebound. The rebinding of the
Inactive versions of the file is thus
made possible by the existence of the
Active version on the client, which
constitutes a linkage to the Inactive
versions.
But, if the file is gone from the
client: There is no command to rebind
its Inactive files (those which have
been deleted from the client but which
are retained in TSM server storage). But
there is a simple technique to effect
rebinding of the Inactive files:
1. Temporarily restore the Inactive
file, or create an empty file of
the same name.
2. Perform an unqualified Incremental
backup. (A Selective backup binds the
backed up files to the new mgmtclass,
but not the Inactive files.)
3. Remove the temp file.
Consider instead changing retention
policies within the existing management
class, as long as the change is safe to
pertain to all the file systems bound to
that management class.
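A dry-run sketch of that rebind technique; the file and filespace names are hypothetical, and run() only echoes each step:

```shell
# Rebind Inactive versions of a client-deleted file by briefly
# recreating it, then running an unqualified incremental backup.
F=/home/user/oldfile                   # hypothetical deleted file
run() { echo "+ $*"; }
run touch "$F"                         # 1. recreate an empty placeholder
run dsmc incremental /home             # 2. unqualified incremental rebinds
                                       #    Active + Inactive versions
run rm "$F"                            # 3. remove the placeholder
```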
Inactive files, restore In the command line client (dsmc), use
the -INActive option.
Inactive files, restore selectively Restoring one or more Inactive files is
awkward in that they all have the same
name, and name is the standard way to
identify files to restore. You can use
the GUI or -PIck option to point out
specific instances of Inactive files to
be restored. Example of CLI-only:
'dsmc restore -inactive -pick
</Backup/Location> </Restore/Location>'
then select one file from the list.
But this requires a human selection
process. To accomplish the same thing
via a purely command line (batch)
operation: First perform a query of the
backup files, including the inactive
ones. Then invoke the restoral as 'dsmc
restore -INActive -PITDate=____ FileName
Dest', where -PITDate serves to uniquely
identify the instance of the Inactive
version of the file. Also use -PITTime,
if there was more than one backup on a
given day.
See also: -PITDate; -PITTime
Inactive files for a user, identify SELECT COUNT(*) AS -
via Select "Inactive files count" FROM BACKUPS -
WHERE NODE_NAME='UPPER_CASE_NAME' AND -
FILESPACE_NAME='___' AND OWNER='___'-
AND STATE='INACTIVE_VERSION'
Inactive Version (Inactive File) A copy of a backup file in ADSM storage
that either is not the most recent
version or the corresponding original
object has been deleted from the client
file system. For example: you delete a
file, then do a backup - the latest
backup copy of the file is now in the
Inactive Version, and would have to be
restored from there.
Inactive backup versions are eligible
for expiration according to the
management class assigned to the object.
Note that active and inactive files may
exist on the same volumes.
Query from client:
'dsmc Query Backup -SUbdir=Yes
-INActive {filespacename}:/dir/*
(where "-INActive" causes *both* active
and inactive versions to be reported).
See also: Active Version
INACTIVE_VERSION SQL DB: State value in Backups table for
a host-deleted, Inactive file.
See also: ACTIVATE_DATE
INCLEXCL TSM server-defined option for clients of
all kinds (though the name may lead you
to think it's just for Unix), via
'DEFine CLIENTOpt'. Each INCLEXCL
contains an Include or Exclude statement
in a set of such statements to be
applied to the clients using the option
set. The Include and Exclude
specifications coded in the server
logically precede and are additive to
client-defined Include and Exclude
options.
Example: DEFine CLIENTOpt INCLEXCL
EXCLUDE.FS /home
See: DEFine CLIENTOpt
INCLExcl Client System Options file (dsm.sys)
option to name the file which contains
Include-Exclude specifications. Must be
coded within a server stanza.
Current status can be obtained via the
command 'dsmc Query Option' in ADSM or
'dsmc show options' in TSM.
Note that if this file is changed, the
client scheduler needs to be restarted
to see the change.
Historical: This option was for many
years available for use only in Unix
clients.
INCLExcl ignored? See: Include-Exclude "not working"
INclude Client option to specify files for
inclusion in backup processing, archive
processing (as of TSM 3.7), image
processing, and HSM services; and to
also specify the management class to use
in storing the files on the server.
Placement:
Unix: Either in the client system
options file or, more commonly, in
the file named on the INCLExcl option.
Other: In the client options file.
Note that Include applies only to files:
you cannot specify that certain
directories be included.
Code as:
'INclude pattern...' or
'INclude pattern... MgmtClass'
(Note that the INclude option does not
provide the .backup and .spacemgmt
qualifiers which the EXclude option
does.)
Coding an Include does not imply that
other file names are excluded: the rule
is that an Include statement assures
that files are not excluded, but that
other files will be implicitly included.
Technique suggestion: Rather than have a
bunch of management classes and make
client administrators set up somewhat
intricate Include statements, it may be
preferable to create multiple Domains on
the TSM server with a tailored default
management class in each, and then
change the client Node definition to use
that Domain.
See also: INCLExcl; INCLUDE.FILE;
INCLUDE.IMAGE
INCLExcl not working See: Include-Exclude "not working"
INCLUDE.ENCRYPT TSM 4.1 Windows option to include files
for encryption processing. (The default
is that no files are encrypted.)
See also: ENCryptkey; EXCLUDE.ENCRYPT
INCLUDE.FILE Variation on the INclude statement, to
include a specified file in backup
operations.
INCLUDE.FS Windows (only) Include spec for Open
File Support/Snapshot backups.
Note that this spec is not in Unix.
INCLUDE.IMAGE Variation on the INclude statement, for
AIX, HP-UX, and Solaris systems, to
include a specified filespace or logical
volume in backup operations.
Note that INCLUDE.IMAGE stands alone,
being independent of all other Include
specifications.
Include-exclude list A list of INCLUDE and EXCLUDE options
that include or exclude selected objects
for backup. An EXCLUDE option identifies
objects that should not be backed up. An
INCLUDE option identifies objects that
are exempt from the exclusion rules or
assigns a management class to an object
or a group of objects for backup or
archive services. The include-exclude
list is defined either in the file named
on the INCLEXCL option of the Client
System Options File (Unix systems) or
in the client options file.
Wildcards are allowed: * ... []
The include/exclude list is processed
from bottom to top, and exits satisfied
as soon as a match is found.
Ref: Installing the Clients
Include-exclude list, validate ADSMv3: dsmc Query INCLEXCL
TSM: dsmc SHow INCLEXCL
Include-Exclude list, verify Via manual, command line action:
ADSM: 'dsmc Query INCLEXCL' (v3 PTF6)
TSM: 'dsmc SHOW INCLEXCL'
There is no way to definitively have the
scheduler show you if it is seeing and
honoring the include-exclude list, as
there is no Action=Query in the server
DEFine SCHedule command. The best you
can do is have the scheduler invoke the
Query Inclexcl command to demonstrate
that the include-exclude options set was
in effect at the time the schedule was
run.
1. Add to your options file:
PRESchedulecmd "dsmc query inclexcl"
2. Invoke the scheduler to redirect
output to a file (as in Unix example
'dsmc schedule >> logfile 2>&1').
3. Inspect the logfile.
Include-Exclude "not working" Possible causes:
- Not coded with a server stanza.
- Scheduler process not restarted after
client options file change.
- Exclude not coded *before* the file
system containing it is named on an
Include, remembering that the
Include-Exclude list is processed
bottom-up.
- Not supported for your opsys.
- Unix: The InclExcl option must be
coded in your dsm.sys file, and it
must be within the server stanza you
are using; and, of course, the file
that it specifies must exist and be
properly coded and have appropriate
permissions.
- Perhaps 'DEFine CLIENTOpt' has been
done on the server, specifying INCLEXCL
options for all clients which, though
they logically precede client-defined
Include-Exclude options, may interfere
with client expectations.
- You cannot change the include or
exclude status of files or directories
that are automatically included or
excluded by TSM or the operating
system. Examples are Windows files
C:\pagefile.sys and C:\ADSM.sys
See also: Include-Exclude list, verify
Include-Exclude options file For Unix systems: a file, created by a
root user on your system, that contains
statements which ADSM uses to determine
whether to include or exclude certain
objects in Backup and Space Management
(HSM) operations, and to override the
associated management classes to use for
backup or archive.
Each line contains Include or Exclude
as the first line token, and named files
as the second line token(s). Include
statements may also contain a third
token specifying the management class to
be used for backup, to use other than
the Default Management Class.
The file is processed from the bottom
up, and stops processing, satisfied, as
soon as it finds a match.
The file is named in the Client System
Options File (dsm.sys) for Unix systems,
but on other systems the Include
statements are located in the dsm.opt
file itself.
An Exclude option can be used to exclude
a file from backup and space management,
backup only, or space management only.
An Include option can be used to include
specific files for backup and space
management, and optionally specify the
management class to be used.
Automatic migration occurs only for
the Default Management Class; you have
to manually incite migration if coded
in the include-exclude options file.
Caution: If you change your
Include/Exclude list or file so that a
previously included file is now
excluded, any pre-existing backup
versions of that file are expired the
next time an incremental backup is run.
Include-Exclude options file, query Use the client 'dsmc Query Option' in
ADSM or 'dsmc show options' in TSM, and
look for "InclExcl:".
Include-exclude order of precedence As of ADSMv3, Include-Exclude
specifications may come from the server
as well as the client, and are taken in
the following order:
1. Specifications received from the
server's client options set, starting
with the highest sequence number.
2. Specifications obtained from the
client options file, from bottom to
top.
Note that, whether from the server or
client, Include-Exclude statements are
"additive", and cannot be overridden by a
Force=Yes specification in the DEFine
CLIENTOpt.
Do 'dsmc Query Inclexcl' to see the full
collection of Include-Exclude statements
in effect, in the order in which they
are processed during backup and archive
operations.
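As an illustrative pairing (the option
set name and management class name are
hypothetical), a server-defined
exclusion is additive to the client's
own list:

```
/* On the server: */
DEFine CLOptset unixbase
DEFine CLIENTOpt unixbase INCLEXCL "EXCLUDE /.../core"

* In the client's include-exclude file:
EXCLUDE /tmp/.../*
INCLUDE /home/.../* MCLONG
```

'dsmc Query Inclexcl' on such a client
would then show both sets of statements
in their processing order.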
Ref: Admin Guide "Managing Client Option
Files"
See: DEFine CLIENTOpt; DEFine CLOptset;
Exclude; INCLEXCL; Include
-INCRBYDate Option on the 'dsmc incremental' command
to request an incremental backup by
date: the client only asks the server
for the date and time of the last
incremental backup, for comparing
against the client file's last-modified
(mtime) timestamp. (A Unix inode
administrative change (ctime, as via
chmod, chown, chgrp) does not count.)
In computer science terms, this is
almost a "stateless" backup.
This method eliminates the time, memory,
and transmission path usage involved in
capturing a files list from the server
in an ordinary Incremental Backup.
Because only the last backup date is
considered in determining which files
get backed up, any OS environment
factors which affect the file but do not
change its date and time stamps are not
recognized. If a file's last changed
date and time is after that of the last
backup, the file is backed up. Otherwise
it is not, even if the file's name is
new to the file system.
Because Incrbydate operates by relative
date, there obviously must have been a
previous complete Incremental backup to
have established a filespace last backup
date.
Files that have been deleted from the
file system since the last incremental
backup will not expire on the server,
because the backup did not involve a
list comparison that would allow the
client to tell the server that a
previously existing file is now gone.
Because this backup knows nothing about
what was backed up before, it backs up
a lot of directories afresh, because
their timestamps have changed as their
contents have changed - so that may be
a time loss detracting from the other
gains in this technique, unless changes
to files within directories cause the
timestamps on the directories to be
updated such that a normal incremental
would have backed them up anyway.
Further things Incrbydate does not do:
- Does not rebind backup versions to a
new management class if you change the
management class.
- In Windows, does not back up files
whose attributes have changed, unless
the modification dates and times have
also changed.
- Ignores the copy group frequency
attribute of management classes: the
backup is unconditional.
An Incrbydate backup of a whole file
system will cause the filespace last
backup timestamp to be updated.
Prevailing retention rules are honored
as usual in an -INCRBYDate backup.
Because they do not change the last
changed date and time, changes to access
control lists (ACL) are not backed up
during an incremental by date.
Relative speed: In Windows, an
Incrbydate backup will be slower than a
full incremental backup with journaling
active.
Recommendation: Incrbydate backups are
best suited to file systems with stable
populations which are regularly updated,
and which have few directories. Mail
spool file systems are good candidates.
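A minimal invocation for such a file
system (the file system name is
illustrative):

```
# Back up only files whose modification time is later than
# the filespace's last incremental backup timestamp
dsmc incremental -incrbydate /var/spool/mail
```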
Incremental backup See: dsmc Incremental
Incremental backup, file systems to See: DOMain option
back up
Incremental backup, force when missed Run the backup from the client, if you
by client have access. Else create a backup
schedule on the server ('DEFine
SCHedule') with a small window including
the current time, then associate the
schedule with the client ('DEFine
ASSOCiation').
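A sketch of the server-side method (the
domain, schedule, and node names are
made up):

```
/* One-time half-hour window opening now */
DEFine SCHedule STANDARD CATCHUP ACTion=Incremental -
  STARTDate=TODAY STARTTime=NOW -
  DURation=30 DURUnits=Minutes PERUnits=Onetime
DEFine ASSOCiation STANDARD CATCHUP NODE1
```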
"Incremental forever" Often cited as the mantra of the TSM
product, it is a capability rather than
a dictum. The basic scheme of the
product is to back up any new or changed
files. You don't necessarily have to
ever perform a "full" backup - but of
course the cost is having your backups
spread over perhaps many tapes
(mitigated by Reclamations), which can
aggravate restoral times. But you are
free to adopt any combination of full
and incremental backups as dictated by
economics and your restoral objectives.
INCRTHreshold TSM 4.2+ option, for Windows. Specifies
the threshold value for the number of
directories in any journaled file space
that might have active objects on the
server, but no equivalent object on the
workstation.
GUI: "Threshold for non-journaled
incremental backups"
Ref: Windows client manual; TSM 4.2
Informix database backup Have the Informix DBA do a DB export,
then have ADSM back up that export. Or
use the SQL BackTrack product.
See also: TDP for Informix
Informix database backup, query 'dsmc query backup
/InstanceName/InstanceName/*/*'
Initialize tapes See: Label tapes
initserv.log TSMv4 server log file which will log
errors in initializing the server.
inode A data structure that describes the
individual files in an operating
system. There is one inode for each
file. The number of inodes in a file
system, and therefore the maximum number
of files a file system can contain, is
set when the file system is created.
Hardlinked files share the same inode.
inode number A number that specifies a particular
inode in a file system.
Insert category 3494 Library Manager category code FF00
for a tape volume added to the 3494
inventory. The 3494 reads the external
label on the volume, creates an
inventory entry for the volume, and
assigns the volume to this category as
it stores the tape into a library cell.
The "LIBVolume" command set is the one
TSM means of detecting and handling
Insert volumes.
You can have TSM adopt INSERT category
cartridges via a command like:
'CHECKIn LIBVolume 3494Name
DEVType=3590 SEARCH=yes
STATus=SCRatch'
Insert category tapes, count Via Unix environment command:
'mtlib -l /dev/lmcp0 -vqK -s ff00'
Insert category tapes, list Via Unix environment command:
'mtlib -l /dev/lmcp0 -qC -s ff00'
(There is no way to list such tapes from
TSM.)
Install directory, Windows ADSM: \program files\ibm\adsm
TSM: \program files\tivoli\tsm
installfsm HSM kernel extension management program,
/usr/lpp/adsm/bin/installfsm, as invoked
in /etc/rc.adsmhsm by /etc/inittab.
Syntax:
'installfsm [-l|-q|-u] Kernel_Extension'
where:
-l Loads the named kernel extension.
-q Queries the named kernel extension.
-u Unloads the named kernel extension.
Examples: (be in client directory)
installfsm -l kext
installfsm -q kext
installfsm -u kext
Msgs: ANS9281E
Instant Archive An unfortunate, misleading name for what
is in reality a Backup Set - which has
nothing to do with the TSM Archive
facility. The Instant Archive name
derives from the property of the Backup
Set that it is a permanent,
self-contained, immutable snapshot of
the Active files set.
See: Backup Set; Rapid Recovery
Intel hyperthreading & licensing In some modern Intel processors, fuller
use of the computing components is made
by multi-threading in hardware, which
can currently make a single physical
processor function like two.
Does this affect IBM's licensing
charges, which are based upon processor
count? What we are hearing is No.
Interfaces to ADSM Typically the 'adsm' command, used to
invoke the standard ADSM interface
(GUI), for access to Utilities, Server,
Administrative Client, Backup-Archive
Client, and HSM Client management.
/usr/bin/adsm ->
/usr/lpp/adsmserv/ezadsm/adsm.
'dsmadm': to invoke GUI for pure server
administration.
'dsmadmc': to invoke command line
interface for pure server
administration.
'dsm': client backup-archive graphical
interface.
'dsmc': client backup-archive command
line interface.
'dsmhsm': client HSM Xwindows interface.
Interposer An electrical connector adapter which
connects between the cable and the
SCSI device. Most commonly seen on
Fast-Wide-Differential chains, as with
a chain off the IBM 2412 SCSI adapter
card. The interposer is part FC 9702.
Inventory See: CONTENTS
Inventory expiration runs interval, "EXPInterval" definition in the server
define options file.
Inventory Update A 3494 function invoked from the
Commands menu of the operator station,
to re-examine the tapes in the library
and add any previously unknown ones to
the library database.
The 3494 will accept commands while it
is doing this, so you could request a
mount during the inventory.
Contrast with "Reinventory complete
system".
IP address of client changes On occasion, your site may need to
reassign the IP address of your
computer, which is a TSM client. Per
discussion in topic "IP addresses of
clients", under some circumstances the
TSM server has the client's IP address
stored in its database, for client
schedule purposes. The server would thus
be stuck on the old client address, and
keep trying and failing (i.e., timeout)
to reach the client at its old address.
(Or, worse, it might *succeed* in
entering into a session with whatever
computer has taken the old IP address!)
How to get the server to recognize the
new IP address? Given that the IP
address is remembered only for nodes
associated with a schedule, performing a
'DELete ASSOCiation' should cause the
server to forget the IP address of the
client and cause it to capture its
actual, new IP address after a fresh
'DEFine ASSOCiation' and next scheduler
communication with the client.
(Note that neither stopping and starting
the scheduler on the client, nor
performing other interactive functions
will cause the server to adopt the new
IP address. The TCPCLIENTAddress option
might be used to accomplish the change,
but the option is actually for
multi-homed (multiple ethernet carded)
clients, to force use of one of its
other IP addresses.)
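A sketch of that remedy (the domain,
schedule, and node names are made up):

```
/* Forget the node's stored address ... */
DELete ASSOCiation STANDARD NIGHTLY MYNODE
/* ... and re-associate: the next scheduler contact
   from the client records its current address */
DEFine ASSOCiation STANDARD NIGHTLY MYNODE
```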
IP address of server See: 'DEFine SERver', HLAddress
parameter; TCPServeraddress
IP addresses of clients The TSM server stores the IP address of
nodes in its database, but ONLY when the
address is specified on the HLAddress
parameter for the node definition, or
for nodes associated with a schedule
when running in Server Prompted
(SCHEDMODe PRompted) mode. That is, for
ordinary client contacts, the IP address
used is not important: it is only when
the server has to initiate contact with
the client that it is important enough
to be stored in the server.
The IP addresses are readily available
in the TSM 3.7 server table "Summary"
(up to the number of days specified via
Set SUMmaryretention), and are recorded
in the Activity Log on message ANR0406I
when clients contact the server to start
sessions.
TSM 5.x now provides the IP addresses in
the Nodes table (if the above
considerations apply), so you can
perform 'Query Node ... F=D' to see
them.
Otherwise they can be found (not in a
very readable format), by the following
procedure (using undocumented debugging
commands):
1. 'SHOW OBJDir': This will generate a
list of objects in the database.
Search for "Schedule.Node.Addresses".
Note the value for "homeAddr".
2. 'SHOW NODE <homeAddr>': This will
give you a list of the IP-addresses
which have registered for running
scheduled processes (by running the
DSMC SCHEDULE program on the client
node).
See also: SCHEDMODe; TCPPort
IPX/SPX Internetwork Packet Exchange/Sequenced
Packet Exchange. IPX/SPX is Novell
NetWare's proprietary communication
protocol.
IPXBuffersize *SM server option.
Specifies the size (in kilobytes) of the
IPX/SPX communications buffer.
Allowed range: 1 - 32 (KB)
Default: 32 (KB)
IPXSErveraddress Old TSM 4.2 option for Novell clients
for using IPX communication methods to
interact with the TSM server.
IPXSocket *SM server option.
Specifies the IPX socket number for an
ADSM server.
Allowed range: 0 - 32767
Default: 8522
IPXBufferSize server option, query 'Query OPTion'
IPXSocket server option, query 'Query OPTion'
-Itemcommit Command-line option for ADSM
administrative client commands
('dsmadmc', etc.) to say that you want
to commit commands inside a macro as
each command is executed. This prevents
the macro from failing if any command in
it encounters "No match found" (RC 11)
or the like.
See also: COMMIT; dsmadmc
ISC Integrated Solution Console, new in TSM
5.3, an interface replacing the
administrative Web interface for TSM
server administration. The ISC consists
of a number of different components
which will assist the administrator in
managing multiple TSM servers via a
single, integrated console. The
Administration Center (AC) is a
Web-based interface which can be used to
centrally configure and manage TSM 5.3
servers. The ISC builds on top of the
WebSphere Application Server and
WebSphere Portal base and includes
lightweight versions of both in the ISC
runtimes. It looks for common problems,
actions, and subtasks across the range
of ISC components in order to provide
reusable services. Basing the ISC on a
lightweight portal infrastructure
provides the ability to aggregate
independent tasks and information into a
single organized presentation.
ISC is the base for the TSM
Administration Center. That is, the AC
is an ISC component.
Access it with a supported Web browser,
like:
http://<NetworkAddress>:8421/ibm/console
Install where? Can be installed on the
same system as TSM, or different,
depending upon the platform requirements
of each product. The first thing
installed for ISC is a Java Virtual
Machine. Requires a lot of memory to
install. The architectural problems of
Windows 2000 may make the install more
of a problem there.
Ref: TSM 5.3 Technical Guide redbook.
See also: Administration Center
iSeries backups There is no TSM client per se for the
iSeries. However, there is an interface
to TSM based upon the TSM API called the
BRMS Application Client.
See also: BRMS
ISSUE MESSAGE TSM 3.7+ server command to use with
return code processing in a script:
issue a message from a server script to
determine where the problem is with a
command in the script. Syntax:
'ISSUE MESSAGE Message_Severity
Message_Text'
Message_Severity Specifies the severity
of the message. The message severity
indicators are:
E = Error. ANR1498E is displayed in the
message text.
I = Information. ANR1496I is displayed
in the message text.
S = Severe. ANR1499S is displayed in
the message text.
W = Warning. ANR1497W is displayed in
the message text.
Message_Text Specifies the description
of the message.
See also: Activity log, create an entry
ITSM IBM Tivoli Storage Manager - the name
game evolves in 2002.
See also: TSM
ITSM for Databases Is the third generation name and new
licensing scheme for the database backup
agents in 2003:
- TDP for Informix
- TDP for MS SQL
- TDP for Oracle
ITSM For Hardware See: Tivoli Storage Manager For Hardware

"JA" The 7th and 8th chars on a 3592 tape
cartridge, identifying the media type,
being the first generation of the 3592.
Japanese filenames See: Non-English filenames
Jaz drives (Iomega) Can be used for ADSMv3 server storage
pools, via 'DEFine DEVclass
... DEVType=REMOVABLEfile'.
Be advised that Jaz cartridges have a
distinctly limited lifetime. See
articles about it on the web: search on
"Click of Death".
JBB Journal-based backups (q.v.).
JDB See: Journal-based backups (JBB)
JFS buffering? No! The ADSM server bypasses JFS buffering
on writes by requesting synchronous
writes, using O_SYNC on the open().
There is no problem using JFS for the
ADSM server database recovery log and
storage pool volumes: this is the
recommended method.
JNLINBNPTIMEOUT Journal Based Backups Testflag,
implemented in the 5.1.6.2 level
fixtest, to allow a client to specify a
timeout value that the client will wait
for a connection to the journal daemon
to become free (that is, the currently
running jbb session to finish). Use by
adding to your Windows dsm.opt file
like:
testflag jnlinbnptimeout:600
where the numeric value is in seconds.
(TSM 5.2 will better address timeouts.)
Join (noun) An SQL operation where you specify
retrieving data from more than one table
at a time by specifying FROM a
comma-separated set of table names,
using table-qualified column names to
report the results. The row data from
the multiple tables will be joined
together. Example:
SELECT MEDIA.VOLUME_NAME,
MEDIA.STGPOOL_NAME, VOLUMES.PCT_UTILIZED
FROM MEDIA, VOLUMES
Note that processing tends to occur by
repeatedly looking through the multiple
tables, which is to say that you will
experience a multiplicative effect: if
the columns being reported occur in
multiple tables, you need to use
matching to avoid repetitive output, as
in: WHERE
MEDIA.VOLUME_NAME=VOLUMES.VOLUME_NAME
So, if you had 100 volumes, this would
prevent the query from reporting 100x100
times for the same set of volumes.
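The multiplicative effect can be
demonstrated outside TSM with any SQL
engine; a sketch using sqlite3 (an
assumption: the sqlite3 CLI is
installed), with two toy tables standing
in for MEDIA and VOLUMES:

```shell
rm -f /tmp/joindemo.db
sqlite3 /tmp/joindemo.db <<'EOF'
CREATE TABLE MEDIA   (VOLUME_NAME TEXT, STGPOOL_NAME TEXT);
CREATE TABLE VOLUMES (VOLUME_NAME TEXT, PCT_UTILIZED INT);
INSERT INTO MEDIA   VALUES ('A00001','OFFSITE'),('A00002','OFFSITE');
INSERT INTO VOLUMES VALUES ('A00001',55),('A00002',80);
-- Unmatched join: every row pairs with every row (2 x 2 = 4)
SELECT COUNT(*) FROM MEDIA, VOLUMES;
-- Matched join: one row per volume (2)
SELECT COUNT(*) FROM MEDIA, VOLUMES
 WHERE MEDIA.VOLUME_NAME = VOLUMES.VOLUME_NAME;
EOF
```

The unmatched form prints 4, the
matched form 2 - the same principle as
matching MEDIA to VOLUMES above.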
See also: Subquery
Journal-based backups (JBB) TSM 4.2+: Client journaling improves
overall incremental backup performance
for Windows NT and Windows 2000 clients
(including MS Clustered systems) by
using a client-resident journal to track
the files to be backed up.
The journal engine keeps track of
changed files as they are changed, as a
jornal daemon monitors file systems
specified in the jbb config file. When
the incremental backup starts, it just
backs up the files that the journal has
flagged as changed. (Thus, the journal
grows in size only as a result of host
file update activity: backups only act
upon the contents of the journal - they
do not add to it.) When objects are
processed (backed up or expired) during
a journal based backup, the b/a client
notifies the journal daemon to remove
the journal entries which have been
processed - which releases space
internal to the journal: the journal
size itself is not reduced.
In such backups, the server inventory
does not need to be queried, and therein
lies the performance advantage.
Journal-based backups eliminate the need
for the client to scan the local file
system or query the server to determine
which files to process. It also reduces
network traffic between the client and
server.
Because archive and selective backup are
not based on whether a file has changed,
there is no server inventory query to
begin with, and therefore the journal
engine offers no advantage. The journal
engine is not used for these operations.
Default installation directory:
C:\Program Files\Tivoli\TSM\baclient
The number of journal entries
corresponds with the amount of file
system change activity and that the size
of journal entries depends primarily on
the fully qualified path length of
objects which change (so file systems
with very deeply nested dir structures
will use more space).
Every journal entry is unique, meaning
that there can only be one entry per
file/directory of the file system being
journaled (each entry represents that
the last change activity of the object).
When a journal based backup is performed
and journal entries are processed by the
B/A client (backed up or expired), the
space the processed journal db entries
occupy are marked as free and will be
reused, but the actual disk size of the
journal db file never shrinks.
Note that this design is intentionally
independent of the Windows 2000 NTFS 5
journalled file system so as to be
usable in NT as well, with the
possibility of expansion to other
platforms in the future.
The first time you run a backup after
enabling the journal service, you will
still see a regular full incremental
backup performed, done to synchronize
the journal database with the TSM server
database. Thereafter the backups should
use the journaled backup method, unless
the journal db and server db become out
of sync (for more info, see the
PreserveDbOnExit option in the client
manual appendix on configuring the
journal service).
Relative speed: A JBB is typically
faster than an Incrbydate backup.
Ref: TSM 4.2 Technical Guide redbook;
search IBM db for "TSM Journal Based
Backup FAQ" (swg21155524).
Journal-based backups & Excludes Journal-based backup employs its own
Exclude list via its tsmjbbd.ini file
JournalExcludeList stanza: JBB
processing does *not* read the dsm.opt
file for Include/Exclude specs.
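An illustrative tsmjbbd.ini fragment
(the patterns here are examples only;
see the Windows client manual appendix
on configuring the journal service for
the authoritative stanza syntax):

```
[JournalExcludeList]
*.tmp
C:\pagefile.sys
```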

KB Knowledge Base. Vendors often name their
customer-searchable databases this.
Go to www.ibm.com and use the Search box
to find articles in IBM's KB.
KB See: Kilobyte
Keepalive See: Firewall and idle session
KEEPMP= TSM 3.7+ server REGister Node parameter
to specify whether the client node keeps
the mount point for the entire session.
Code: Yes or No. Default: No
Ref: TSM 3.7 Technical Guide, 6.1.2.3
See also: MAXNUMMP; REGister Node
Kernel extension (server) /usr/lpp/adsmserv/bin/pkmonx, as loaded
by: '/usr/lpp/adsmserv/bin/loadpkx -f
/usr/lpp/adsmserv/bin/pkmonx',
usually by being an entry in
/etc/inittab, as put there by
/usr/lpp/adsmserv/bin/dsm_update_itab.
(See the Installing manual.)
NOTE: The need for the kernel extension
is eliminated in ADSM 2.1.5, which
implements "pthreads", as supported by
AIX 4.1.4.
Kernel extension (server), load Can be done manually as root via:
'/usr/lpp/adsmserv/bin/loadpkx
-f /usr/lpp/adsmserv/bin/pkmonx'
or:
'cd /usr/lpp/adsmserv/bin'
'./loadpkx -f pkmonx'
but more usually via an entry in
/etc/inittab, as put there by
/usr/lpp/adsmserv/bin/dsm_update_itab.
Alternately you can:
'/usr/lpp/adsmserv/bin/rc.adsmserv
kernel'
Messages:
Kernel extension now loaded with
kmid = 21837452.
Kernel extension successfully
initialized.
Then you can start the server.
Ref: Installing the Server...
Kernel extension (server), loaded? As root: '/usr/lpp/adsmserv/bin/loadpkx
-q /usr/lpp/adsmserv/bin/pkmonx'
May say: "Kernel extension is not
loaded" or "Kernel extension is loaded
with kmid = 21834876."
(See the Installing manual.)
Kernel extension (server), unload Make sure all dsm* processes are down
on the server, and then do:
As root: '/usr/lpp/adsmserv/bin/loadpkx
-u /usr/lpp/adsmserv/bin/pkmonx'
KERNelmessages Client System Options file (dsm.sys)
option to specify whether HSM-related
messages issued by the Unix kernel
during processing (such as ANS9283K)
should be displayed. Specify Yes or No.
Because of kernel nature, a change in
this option doesn't take effect until
the ADSM server is restarted.
Default: Yes
KEY= In ANR830_E messages, is Byte 2 of the
sense bytes from the error, as
summarized in the I/O Error Code
Descriptions for Server Messages
appendix in the Messages manual.
To further explain some values:
7 Data protect: as when the tape
cartridge's write-protect thumbwheel
or slider has been thrown to the
position which the drive will sense
to disallow writing on the tape.
Should be accompanied in message by
ASC=27, ASCQ=00, and msg ANR8463E.
Kill signals See: HALT
Kilobyte 1,024 bytes.
It is typically only disk drive
manufacturers that express a kilobyte as
1,000 bytes. Software and tape drive
makers typically use a 1,024 value. The
TSM Admin Ref manual glossary, and the
3590 Hardware Reference manual, for
example, both define a kilobyte as
1,024. See also: Megabyte

L_ (e.g., L1) LTO Ultrium tape cartridge identifier
letters, as appears on the barcode
label, after the 6-char volser.
The first character, L, designates LTO
cartridge type, and the second character
designates the generation & capacity.
Ref: IBM LTO Ultrium Cartridge Label
Specification
Note that some libraries (3583,3584) can
be configured to pass the L_ letters in
addition to the volser when reporting
volumes. Be careful about this,
particularly in the internal medium
labeling matching what is being passed
as part of barcode designation. It is
generally best to not have the L_
participate in volume identities.
L1 Ultrium Generation 1 Type A, 100 GB
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
Ref: IBM LTO Ultrium Cartridge Label
Specification
L2 Ultrium Generation 2 Type A, 200 GB
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
Ref: IBM LTO Ultrium Cartridge Label
Specification
L3 Ultrium Generation 3 Type A, 400 GB
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
Ref: IBM LTO Ultrium Cartridge Label
Specification
L4 Ultrium Generation 4 Type A, 800 GB
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
Ref: IBM LTO Ultrium Cartridge Label
Specification
LA Ultrium Generation 1 Type B, 50 GB
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
Ref: IBM LTO Ultrium Cartridge Label
Specification
Label all tapes in 3494 library The modern way is to use the LABEl
having category code of Insert LIBVolume command, to both label and
checkin the volumes. To just label,
issue the following operating system
command:
'dsmlabel -drive=/dev/XXXX
-library=/dev/lmcp0 -search -keep
[-overwrite]'
LABEl LIBVolume TSM server command (new with ADSMv3).
Allows you to label and checkin a single
tape, a range of tapes, or any new tapes
in an automated library, all in one easy
step. Note that there is no "checkin"
phase for LIBtype=MANUAL.
(The command task is serial: one volume
is labeled at a time.)
Syntax:
'LABEl LIBVolume libraryname
volname|SEARCH=Yes|SEARCH=BULK
[VOLRange=volname1,volname2]
[LABELSource=Barcode|Prompt]
[CHECKIN=SCRatch|PRIvate]
[DEVTYPE=CARTRIDGE|3590]
[OVERWRITE=No|Yes]
[VOLList=vol1,vol2,vol3 ...
-or- FILE:file_name]'
The SEARCH option will cause TSM to
issue an initial query to compile a list
of Insert tapes, which it will then
process. (If you thereafter add more
tapes to the library while the command is
in its labeling phase, those Inserts
will not be processed: you will have to
reissue the command later.) The
operation tends to use available drives
rotationally, to even wear.
Failing to specify OVERWRITE=Yes for a
previously labeled volume results in
error ANR8807W.
This command will not wait for a drive
to become available, even if one or more
drives have Idle tapes or are in a
Dismounting state.
TSM is smart enough to not relabel a
volume that is in a storage pool or the
volume history file, and had been taken
out of the library and put back in (thus
getting an Insert category code): msg
ANR8816E will result.
Did the command succeed? It will end
with message ANR0985I; but that message
will always indicate success, even
though there were problems, and that no
tapes were labeled. Look for adjoining
problem messages like ANR8806E.
Advisory: Query for a reply number for
the CHECKIn command (make sure the tape
you want to check in is in the I/O
slot): key in 'Query REQuest' and it
will show the reply number to enter
(e.g., 'REPLY 001'). Your tape should
then check in.
Warning: The foolish command will
proceed to do its internal CHECKIn
LIBVolume even if the labeling fails
(msg ANR8806E) - in ADSMv3, at least!
Note that operations such as CHECKOut
LIBVolume and MOVe MEDia will hang if a
LABEl LIBVolume is running.
Note that if any tape being processed
suffers an I/O error (Write), it will be
skipped and, in the case of a 3494, its
Category Code will remain FF00 (Insert).
Msgs: ANR8799I to reflect start;
ANR8801I & ANR8427I for each volume
processed; ANR0985I; ANR8810I; ANR8806E.
Note that there is no logged indication
as to the drive on which the volume was
mounted.
Label prefix, define Via "PREFIX=LabelPrefix" in
'DEFine DEVclass ...' and
'UPDate DEVclass ...'.
Label prefix, query 'Query DEVclass Format=Detailed'
Label tapes Use the 'dsmlabel' utility.
Newly purchased tapes should have been
barcoded and internally labeled by the
vendor, so there should be no need to
run the 'dsmlabel' utility. But you
still need to do an ADSM 'CHECKIn'
(q.v.).
Label tapes in a 3570 Do something like:
'dsmlabel -drive=/dev/rmt1,16
-library=/dev/rmt1.smc'
Labelling a tape... Will destroy ALL data remaining on it,
because a new <eof tape mark> will be
written immediately after the labels.
(It is the standard for writing on tapes
in general that an EOD is written at the
conclusion of writing.)
Disk/disc media are typically different,
as in the case of R/W Optical drives.
If you inadvertently relabel a data
tape, try to restore data on the volume:
Run a Q CONTENT volumename to get a
list of file names, then try to restore
each file individually (make sure to try
several files, especially those located
at the end of the tape): this may allow
you to read past the tape mark.
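The per-file retry just described can be scripted. Here is a sketch in dry-run form: it only prints one 'dsmc restore' command per file name read on stdin (one name per line, as captured from Q CONTENT output), so you can inspect the commands and run them selectively. The function name is ours.

```shell
# gen_restores: read file names (one per line) on stdin and print
# one 'dsmc restore' command per file, for selective execution.
gen_restores() {
    while IFS= read -r fname; do
        [ -n "$fname" ] && echo "dsmc restore \"$fname\""
    done
}
```

Pipe the captured name list through gen_restores, then feed the lines you want to try to a shell.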
LABELSource Operand in 'LABEl LIBVolume' and other
ADSMserver commands, used *only* for
SCSI libraries, as in
"LABELSource=BARCODE". Note that 3494s
do not need this operand since the label
is ALWAYS the barcode.
LAN configuration of 3494 Perform under the operator "Commands"
menu of the 3494 operator station.
Lan-Free Backup Introduced in TSM V3.7. Relieves the
load on the LAN by introducing the
Storage Agent. This is a small TSM
server (without a Database or Recovery
Log), termed a Storage Agent, which is
installed and run on the TSM client
machine. It handles the communication
with the TSM server over the LAN but
sends the data directly to SAN attached
tape devices, relieving the TSM server
from the actual I/O transfer.
See also: Lan-Free Restore; Server-free
Ref: TSM 3.7.3+4.1 Technical Guide
redbook; TSM 5.1 Technical Guide
LAN-Free Data Transfer The optional Managed System for SAN
feature for the LAN-free data transfer
function effectively exploits SAN
environments by moving back-end office
and IT data transfers from the
communications network to a data network
or SAN. LAN communications bandwidth
then can be used to enhance and improve
service levels for end users and
customers.
http://www.tivoli.com/products/index/
storage_mgr/storage_mgr_concepts.html
See also: Network-Free...
Lan-Free license file mgsyssan.lic
Lan-Free Restore TSM 3.7 feature designed to get around
network limitations when clients need to
be quickly restored, and they are
physically near the server. Client
backups occur as usual, over the network
each day (optimally, over a
Storage Area Network). Once on the
server, a "Backup Set" can be produced
from the current Active files,
constituting a point-in-time bundle on
media which can be read at the client
site. Then, when a mass restoral is
necessary at the client, the compatible
media can be transported from the server
location to the client location (or
could have been sent there as a matter
of course each day) and the client can
be restored on-site from that bundled
image.
See: Backup Set
LanFree bytes transferred Client Summary Statistics element:
The total number of data bytes
transferred during a lan-free
operation. If the ENABLELanfree client
option is set to No, this line will not
appear.
LANGuage Definition in the server options file
and Windows Client User Options File.
Specifies the language to use for help
and error messages. In later TSM, this
more generally controls the
initialization of locale settings,
including the language, date format,
time format, and number format to be
used for the console and server.
Note that whereas the Windows client
sports a LANGuage client option, the
Unix client has no such option, instead
relying upon the LANG environment
variable, in that OS's environmental
language support.
Default: en_US (AMENG) for USA.
If the client is running on an
unsupported language/locale combination,
such as French/Canada or Spanish/Mexico,
the language will default to US English.
Note that the language option does not
affect the Web client, which employs the
language associated with the locale of
the browser. If the browser is running
in a locale that TSM does not support,
the Web client displays in US English.
Ref: Just about every TSM manual
discusses language.
LANGuage server option, query 'Query OPTion'
Laptop computers, back up See: Backup laptop computers
LARGECOMmbuffers ADSMv3 client system options file
(dsm.sys) option (in ADSMv2 was
"USELARGebuffers"). Specifies whether
the client will use increased buffers to
transfer large amounts of data between
the client and the server. You can
disable this option when your machine is
running low on memory.
Specify Yes or No.
Msgs: ANS1030E
See also: MEMORYEFficientbackup
Default: Yes for AIX; No for all others
Last 8 hours, SQL time ref You can form a "within last 8 hours"
spec in a SELECT by using the form:
[Whatever_Timestamp]
>(CURRENT_TIMESTAMP-8 hours)
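As a worked instance of that form, a small shell helper can emit the SELECT text for issuing via dsmadmc. The table/column pairing shown (ACTLOG and its DATE_TIME column) is just one real example; the function name is ours.

```shell
# last8: emit a SELECT restricted to the last 8 hours, using the
# CURRENT_TIMESTAMP arithmetic described above.
# $1 = table name, $2 = timestamp column name
last8() {
    echo "SELECT * FROM $1 WHERE $2>(CURRENT_TIMESTAMP-8 hours)"
}
last8 ACTLOG DATE_TIME
```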
Last Backup Completion Date/Time Column in
'Query FIlespace Format=Detailed'.
This field will be empty if the backup
was not a full incremental, or it was
but did not complete, or if the
filespace involves Archive activity
rather than Backup.
As of TSM 5.1: If the command specified
by the PRESchedulecmd or POSTSchedulecmd
option ends with a nonzero return code,
TSM will consider the command to have
failed.
Last Backup Start Date/Time Column in
'Query FIlespace Format=Detailed'.
This field may be empty: see "dsmc Query
Filespace" for reasons.
As of TSM 5.1: If the command specified
by the PRESchedulecmd or POSTSchedulecmd
option ends with a nonzero return code,
TSM will consider the command to have
failed.
Last Incr Date See: dsmc Query Filespace
Last night's volumes See: Volumes used last night
LASTSESS_SENT SQL: Field in NODES table is for data
sent for *any* TSM client operation,
whether it be Archive, Backup, or even
just a Query.
-LAtest 'dsmc REStore' option to restore the
most recent backup version of a file, be
it active or inactive. Without this
option, ADSM searches only for active
files. See also -IFNewer.
LB Ultrium Generation 1 Type C, 30 GB
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
The first character, L, designates LTO
cartridge type, and the second character
designates the generation & capacity.
Ref: IBM LTO Ultrium Cartridge Label
Specification
lbtest AIX, NT library test program for use
with SCSI libraries using the special
device /dev/lb0 or /dev/rmtX.smc.
Beware using when TSM is also going
after the library, as TSM will fail when
it cannot open it.
Where it is: Windows: /utils directory
Unix: server/bin directory
Syntax:
Windows: lbtest -dev lbx.0.0.y
UNIX: lbtest <-f batch-input-file>
<-o batch-output-file>
<-d special-file>
<-p passthru-device>
Unix example:
lbtest -dev /dev/lbxx
Windows example:
c:>lbtest -dev lbx.0.0.y
where x is the SCSI address and y is the
port number - values available from the
server utilities diagnostic screen.
Once in lbtest, select manual test,
select open, select return element count
and then do what you want. Make sure
you have your command window scrolling
as the stuff goes by awful fast.
Ref: There is no documentation provided
by Tivoli for this TSM utility.
LC Ultrium Generation 1 Type D, 10 GB
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
The first character, L, designates LTO
cartridge type, and the second character
designates the generation & capacity.
Ref: IBM LTO Ultrium Cartridge Label
Specification
LD Ultrium Generation 2 Type B, 100 GB
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
The first character, L, designates LTO
cartridge type, and the second character
designates the generation & capacity.
Ref: IBM LTO Ultrium Cartridge Label
Specification
LE Language Environment.
LE Ultrium Generation 2 Type C, 60 GB
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
The first character, L, designates LTO
cartridge type, and the second character
designates the generation & capacity.
Ref: IBM LTO Ultrium Cartridge Label
Specification
Leader data HSM: Leading bytes of data from a
migrated file that are replicated in the
stub file in the local file system. (The
migrated file contains all the file's
data; but the leading data of the file
is also stored in the stub file for the
convenience of limited-access commands
such as the Unix 'head' command.) The
amount of leader data stored in a stub
file depends on the stub size
specified. The required data for a stub
file consumes 511 bytes of space. Any
remaining space in a stub file is used
to store leader data. If a process
accesses only the leader data and does
not modify that data, HSM does not need
to recall the migrated file back to the
local file system.
See also: dsmmigundelete;
RESToremigstate
LEFT(String,N_chars) SQL function to take the left N
characters of a given string.
Sample usage:
SELECT * FROM ADMIN_SCHEDULES WHERE
LEFT(SCHEDULE_NAME,4)='BKUP'
See also: CHAR()
Legato Is bundled with DEC Unix.
LF Ultrium Generation 2 Type D, 20 GB
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
The first character, L, designates LTO
cartridge type, and the second character
designates the generation & capacity.
Ref: IBM LTO Ultrium Cartridge Label
Specification
LG Ultrium Generation 3 Type B, future
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
The first character, L, designates LTO
cartridge type, and the second character
designates the generation & capacity.
Ref: IBM LTO Ultrium Cartridge Label
Specification
LH Ultrium Generation 3 Type C, future
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
The first character, L, designates LTO
cartridge type, and the second character
designates the generation & capacity.
Ref: IBM LTO Ultrium Cartridge Label
Specification
LI Ultrium Generation 3 Type D, future
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
The first character, L, designates LTO
cartridge type, and the second character
designates the generation & capacity.
Ref: IBM LTO Ultrium Cartridge Label
Specification
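The L_ suffix entries above amount to a small lookup table; here is that table as a shell function (the function name is ours; capacities for the LG/LH/LI Generation 3 types had not yet been announced):

```shell
# lto_suffix: map the two identifier letters after the 6-char volser
# to Ultrium generation, type, and native capacity, per the entries
# above.
lto_suffix() {
    case "$1" in
        L1) echo "Gen 1 Type A, 100 GB" ;;
        L2) echo "Gen 2 Type A, 200 GB" ;;
        L3) echo "Gen 3 Type A, 400 GB" ;;
        L4) echo "Gen 4 Type A, 800 GB" ;;
        LA) echo "Gen 1 Type B, 50 GB" ;;
        LB) echo "Gen 1 Type C, 30 GB" ;;
        LC) echo "Gen 1 Type D, 10 GB" ;;
        LD) echo "Gen 2 Type B, 100 GB" ;;
        LE) echo "Gen 2 Type C, 60 GB" ;;
        LF) echo "Gen 2 Type D, 20 GB" ;;
        LG) echo "Gen 3 Type B" ;;
        LH) echo "Gen 3 Type C" ;;
        LI) echo "Gen 3 Type D" ;;
        *)  echo "unknown" ;;
    esac
}
```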
libApiDS.a The *SM API library.
In TSM 3.7, lives in
/usr/tivoli/tsm/client/api/bin
See also: dsmapi*
libPiIMG.sl Image library, as on HP-UX.
Libraries, multiple of same type, Sites may end up with multiple libraries
avoiding operator confusion of the same type. How to keep operators
from returning offsite tapes to the
wrong library? One approach is
color-coding: apply solid-color gummed
labels to the cartridges and frame the
library I/O portal with the same color,
making it all but impossible for the
operator to goof. Choose yellow and
purple, and put Big Bird and Barney
pictures onto each library to enhance
operator comprehension.
Library A composite device consisting of serial
media (typically, tapes), storage cells
to house them, and drives to read them.
A library has its own, dedicated scratch
tape pool (dedicated per category code
assignment during Checkin, or the like).
In TSM, a Library is a logical
definition: there may be multiple
logical Library definitions for a
physical library (as needed when a
library contains multiple drive types),
with each instance having its own,
dedicated scratch tape pool.
LIBRary TSM keyword for defining and updating
libraries. Note that in TSM a library
definition cannot span multiple physical
libraries.
Library (LibName) A collection of Drives for which
volume mounts are accomplished via a
single method, typically either
manually or by robotic actions.
LibName comes into play in Define
Library such that Checkin will assign
desired category codes to new tapes.
LibName is used in: AUDit LIBRary,
CHECKIn, CHECKOut, DEFine DEVclass,
DEFine DRive, DEFine LIBRary.
Is target of: DEFine DEVclass
and: DEFine DRive
Ref: Admin Guide
See also: SCSI Library
Library, 3494, define Make sure that the 3494 is online.
For a basic definition:
'DEFine LIBRary LibName LIBType=349x -
DEVIce=/dev/lmcp0'
which take default category codes of
decimal 300 (X'12C') for Private and
decimal 301 (X'12D') for 3490 Scratch,
with 302 (X'12E') implied for 3590
Scratch.
For a secondary definition, for another
system to access the 3494, you need to
define categories to segregate tape
volumes so as to prevent conflicting
use. That definition would entail:
'DEFine LIBRary LibName LIBType=349x -
DEVIce=/dev/lmcp0
PRIVATECATegory=Np_decimal
SCRATCHCATegory=Ns_decimal'
where the Np and Ns values are unique,
non-conflicting Private and Scratch
category codes for this Library.
(Note that defined category codes are
implicitly assigned to library tapes
when a Checkin is done.)
See also: SCRATCHCATegory
Ref: Admin Guide
Library, add tape to 'CHECKIn LIBVolume ...'
Library, audit See: AUDit LIBRary
Library, count of all volumes Via Unix command:
'mtlib -l /dev/lmcp0 -vqK'
Library, count of cartridges in See: 3494, count of cartridges in
Convenience I/O Station Convenience I/O Station
Library, count of CE volumes Via Unix command:
'mtlib -l /dev/lmcp0 -vqK -s fff6'
Library, count of cleaning cartridges Via Unix command:
'mtlib -l /dev/lmcp0 -vqK -s fffd'
Library, count of SCRATCH volumes Via Unix command:
(3590 tapes, default ADSM SCRATCH 'mtlib -l /dev/lmcp0 -vqK -s 12E'
category code)
Library, define drive within 'DEFine DRive LibName Drive_Name
DEVIce=/dev/???
[ELEMent=SCSI_Lib_Element_Addr]'
Note that ADSM will automatically figure
out the device type, which will
subsequently turn up in 'Query DRive'.
Library, multiple drive types Drives with different device types are
supported in a single physical library
if you perform a DEFine LIBRary for each
type of drive. If distinctively
different drive device types are
involved (such as 3590E and 3590H), you
define two libraries. Then you define
drives and device classes for each
library. In each device class
definition, you can use the FORMAT
parameter with a value of DRIVE, if you
choose.
Living with this arrangement involves
the awkwardness of having to apportion
your scratch tape complement between
the two TSM library definitions.
Ref: Admin Guide "Configuring an IBM
3494 Library for Use by One Server"
Library, query 'Query LIBRary [LibName]
[Format=Detailed]'
Note that the Device which is reported
is *not* one of the Drives: it is
instead the *library device* by which
the host controls the library, rather
than the conduit for getting data to and
from the library volumes.
Does not reveal drives: for the drives
assigned to a library you have to do
'Query DRive', which amounts to a
bottom-up search for the associated
library.
Note that there is also an unsupported
command to show the status of the
library and its drives: 'SHow LIBrary'.
Library, remove tape from 'CHECKOut LIBVolume LibName VolName
[CHECKLabel=no] [FORCE=yes]
[REMove=no]'
Library, SCSI See: SCSI Library
Library, use as both automatic and Define the library as two libraries: one
manual automatic, the other manual:
def library=manual libtype=manual
def drive manual mtape device=_____
Then when you want to use the drive as
a manual library you do:
UPDate DEVclass ____ LIBRary=MANUAL
And to change back:
UPDate DEVclass ____ LIBRary=Automat
Library Client A TSM server which accesses a library
managed by a separate TSM server, with
data transfer over a server-to-server
communication path.
Specified via DEFine LIBRary ... SHAREd
See also: Library Manager
Library debugging If a library is not properly responding
to *SM, here are some analysis ideas:
- Do 'q act' in the *SM server to see if
it is reporting an error.
- If the opsys has an error log, see if
any errors recorded there. If the lib
has its own error log, inspect. Maybe
the library gripper or barcode reader
is having a problem.
- Try to identify what changed in the
environment to cause the difference
since the problem appeared.
- Is the library in a proper mode to
service requests (i.e., did some
operator leave a switch in a wrong
position or change configuration?).
For example, a 9710 must have the
FAST LOAD option enabled.
- Examine response outside of *SM, via
the mtlib, lbtest or other command
appropriate to your library, emulating
the operation as closely as possible.
Be next to the lib to actually see
what's happening.
- Check networking between *SM and the
library: If a direct connection, check
cabling and connectors; If networked
and on different subnets, maybe an
intermediary router problem, or that
the library resides in a subnet which
is Not Routed (cannot be reached from
outside).
- Is there a shortage of tape drives, as
perhaps tapes left in drives after *SM
was not shut down cleanly?
- Perform *SM server queries (e.g.,
'q pr') as a sluggish request is
pending. Do 'Query REQuest' for more
manual libs to see if mount pending.
Maybe the server is in polling mode
waiting on a tape mount: do
'SHow LIBrary' to see what it thinks.
- If CHECKIn is hanging, try it with
CHECKLabel=No and see if faster, which
skips tape loading and barcode review.
Library full situation You can have *SM track volumes that are
removed from a full library, if you
employ the Overflow Storage Pool method.
Ref: Admin Guide, "Managing a Full
Library"
See: MOVe MEDia, Overflow Storage Pool
Library Manager TSM concept for a TSM server which
controls device operations when multiple
IBM TSM servers share a storage device,
per 'DEFINE LIBRary ... SHAREd'. Device
operations include mount, dismount,
volume ownership, and library inventory.
See also library client.
Library Manager The PC and application software residing
in a 3494 or like robotic tape library,
for controlling the robotic mechanism
and otherwise managing the library,
including the database of library
volumes with their category codes.
Library Manager, microcode level Obtain at the 3494 control panel:
First: In the Mode menu, activate the
Service Menu (will result in a second
row of selections appearing in menu bar
at top of screen).
Then: under Service, select
View Code Levels, then scroll down to
"LM Patch Level", which will show a
number like "512.09".
Library Manager Control Point The host device name through which
(LMCP) a host program (e.g., TSM or the 'mtlib'
command) accesses the unique 3494
library that has been associated with
that device name, as via AIX SMIT
configuration.
The LMCP is used to perform the library
functions (such as mount and demount
volumes).
In AIX, the library is accessed via a
special device, like /dev/lmcp0. In
Solaris, it is more simply the arbitrary
symbolic name that you code in the
/etc/ibmatl.conf file's first column.
That is, in Solaris you simply reference
the name you chose to stuff into the
file: it is not some peculiar name that
is generated via the install programs.
The "SCSI...Device Drivers: Programming
Reference" manual goes into details and
helps make this clearer.
Library Manager Control Point Daemon A process which is always running on the
(lmcpd) AIX system through which programs on
that system interact with the one or
more 3494 Tape Libraries which that host
is allowed to access (per definitions in
the 3494 Library Manager). The
executable is /etc/lmcpd.
In AIX, the lmcpd software is a device
driver. In Solaris, it is instead
Unix-domain sockets.
The /etc/ibmatl.conf defines arbitrary
name "handles" for each library, and
each name is tied to a unique lmcp_
device in the /dev/ directory, via SMIT
definitions.
The daemon listens on port 3494, that
number having been added to
/etc/services in the atldd install
process. There is one daemon and one
control file in the host, through which
communication occurs with all 3494s.
This software is provided on floppy disk
with the 3494 hardware. Installs into
/usr/lpp/atldd. Updates are available
via FTP to the storsys site's .devdrvr
dir.
It used to be started in /etc/inittab:
lmcpd:234:once:/etc/methods/startatl
But later versions caused it to be
folded into the /etc/objrepos and
/etc/methods/ database system such that
it is started by the 'cfgmgr' that is
done at boot time.
Restart by doing 'cfgmgr' (or, less
disruptively, 'cfgmgr -l lmcp0'); or
simply invoke '/etc/lmcpd'.
Configuration file: /etc/ibmatl.conf
If the 3494 is connected to the host via
TCP/IP (rather than RS-232), then a port
number must be defined in /etc/services
for the 3494 to communicate with the
host (via socket programming). By
default, the Library Driver software
installation creates a port '3494/tcp'
entry in /etc/services, which matches
the default port at the 3494 itself. If
to be changed, be sure to keep both in
sync.
Ref: "IBM SCSI Tape Drive, Medium
Changer, and Library Device Drivers:
Installation and User's Guide" manual
(GC35-0154)
Restarting: IBM site Technote 1167643
See also: /etc/.3494sock;
/etc/ibmatl.conf
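By way of illustration, /etc/ibmatl.conf entries look like the following (the names and address are made up; the fields are the arbitrary symbolic library name, the connection — an IP address/hostname for LAN attachment or a tty device for RS-232 — and the name by which this host is known to the Library Manager):

```
# symbolic_name  connection     host_name_at_LM
3494a            176.123.13.1   myhost
3494b            /dev/tty2      myhost
```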
Library Manager Control Point Daemon In /etc/ibmatl.pid; but you may not be
(lmcpd) PID able to read it, because "Text file busy".
Library not using all drives Examine the following:
- Mount limit on device class.
- 'SHow LIBrary'; make sure all Online
and Available.
- If AIX, 'lsdev -C -c tape -H -t 3590'
and make sure all Available (do chdev
if not).
- At library console, assure drives are
Available.
- If AIX, use errpt to look for hardware
problems.
- Examine drive for being powered on and
not in problem state.
Library offline? Run something innocuous like:
mtlib -l /dev/lmcp0 -qL
If offline, will return:
Query operation Error - Library is
Offline to Host.
and a status code of 255.
Library sharing In a LAN+SAN environment, the ability
for multiple TSM servers to share the
resources of a SAN-connected library.
Control communication occurs over the
LAN, and data flow over the SAN.
One server controls the library and is
called the Library Manager Server;
requesting servers are called Library
Client Servers. (Note that this
arrangement does not fully conform to
the SAN philosophy, in that peer-level
access is absent.)
Library sharing contrasts with library
partitioning, where the latter
subdivides and dedicates portions of the
library to each.
Ref: Admin Guide, "Multiple Tivoli
Storage Manager Servers Sharing
Libraries"
There are also 3rd party products to
facilitate library sharing, such as
Gresham's Enterprise DistribuTape.
Library space shortage An often cited issue is the tape library
being "full", hindering everything. This
typically results from site management
not being realistic and skimping on
resources, though that jeopardizes the
mission of data backup and leaves the
administrators in the lurch. Potential
remediations:
- Expand the library to give it the
capacity it needs for reasonable
operation.
- Go for higher density tape drives and
tapes, to increase library capacity
without physical expansion.
- Buy tape racks and employ a
discipline which keeps dormant tapes
outside the library, available for
mounting via request.
Library storage slot element address See: SHow LIBINV
Library volumes, list Use opsys command:
'mtlib -l /dev/lmcp0 -vqI'
for fully-labeled information, or just
'mtlib -l /dev/lmcp0 -qI'
for unlabeled data fields: volser,
category code, volume attribute, volume
class (type of tape drive; equates to
device class), volume type.
(or use options -vqI for verbosity, for
more descriptive output)
The tapes reported do not include CE
tape or cleaning tapes.
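The unlabeled '-qI' fields can be tagged with awk, assuming whitespace-separated fields in the order just listed (the sample inventory line here is synthetic):

```shell
# parse_inv: name the fields of 'mtlib -qI' inventory lines,
# assuming the field order given above: volser, category code,
# volume attribute, volume class, volume type.
parse_inv() {
    awk '{ printf "volser=%s cat=%s attr=%s class=%s type=%s\n", $1, $2, $3, $4, $5 }'
}
echo "ABC123 012E 00 10 1" | parse_inv
```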
LIBType Library type, as operand of 'DEFine
LIBRary' server command. Legal types:
MANUAL - tapes mounted by people
SCSI - generic robotic autochanger
349X - IBM 3494 or 3495 Tape Lib.
EXTERNAL - external media management
LIBVolume commands The only TSM commands which recognize
and handle tapes whose (3494) Category
Code is Insert.
See: 'CHECKIn LIBVolume',
'CHECKOut LIBVolume', 'LABEl LIBVolume',
'Query LIBVolume', 'UPDate LIBVolume'.
Libvolume, remove Use CHECKOut.
See also: DELete VOLHistory
LIBVOLUMES *SM database table to track volumes
which belong to it and which are
contained in the named library. Columns:
LIBRARY_NAME, VOLUME_NAME, STATUS,
LAST_USE, HOME_ELEMENT, CLEANINGS_LEFT
Libvolumes, count by Status 'SELECT STATUS,COUNT(*) AS \
"Library Counts" FROM LIBVOLUMES \
GROUP BY STATUS'
Libvolumes which are Scratch, count SELECT COUNT(*) FROM LIBVOLUMES
WHERE STATUS='Scratch'
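When you have the statuses dumped to a file (one per line), standard Unix tools give the same tally as the GROUP BY query above (function name is ours):

```shell
# tally_status: count occurrences of each status read on stdin,
# mimicking the GROUP BY query above.
# Input: one status word (e.g., Scratch, Private) per line.
tally_status() {
    sort | uniq -c
}
```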
License See also: adsmserv.licenses; dsmreg.lic;
Enrollment Certificate Files
License, register 'REGister LICense' command.
See: REGister LICense
License, TSM 4 TSMv4 introduced the Tivoli 'Value-Based
Pricing' model, which changed the
license options and files: You no longer
buy the network enabling license.
Instead, the cost of the base server is
tiered based on the hardware you are
running on. The client license cost is
also tiered based on the hardware type
and size. Client licenses were also
split into two flavors: a managed LAN
system - which is basically what we had
prior to v4.1 - and a managed SAN
system. The end result is basically the
same, but the accounting is different.
License, unregister See: Unregister licenses
See also notes under REGister LICense.
License audit period, query 'Query STatus', see License Audit Period
'SHow LMVARS' also reveals it.
License audit period, set 'Set LICenseauditperiod N_Days'
License fees See: Server version/release number &
paying
License file ADSMv2: It is
/usr/lpp/adsmserv/bin/adsmserv.licenses
which is a plain file containing
hexadecimal strings generated by
invoking the 'REGister LICense' command
per the sheet of codes received with
your order. (The adsmserv module
invokes the outboard
/usr/lpp/adsmserv/bin/dsmreg.lic to
perform the encoding.)
ADSMv3 and TSM: The runtime file is the
"nodelock" file in the server directory.
CPU dependency: The generated numbers
incorporate your CPU ID, and so if you
change processors (or motherboard) you
must regenerate this file.
If to be located in a directory other
than the ADSM server code directory,
this must be specified to the server via
the DSMSERV_DIR environment variable.
Ref: Admin Guide; README.LIC file
included in your installation
License filesets (AIX), list 'lslpp -L' and look for
tivoli.tsm.license.cert
tivoli.tsm.license.rte
License info, get See: LICENSE_DETAILS; 'Query LICense'
LICENSE_DETAILS table SQL table added to TSM 4.1. Columns:
LICENSE_NAME One of the usual TSM
license feature names, as
in: SPACEMGMT, ORACLE,
MSSQL, MSEXCH, LNOTES,
DOMINO, INFORMIX, SAPR3,
ESS, ESSR3, EMCSYMM,
EMCSYMR3, MGSYSLAN,
MGSYSSAN, LIBRARY
NODE_NAME Either the name of a
Backup/Archive client or
the name of a library.
LAST_USED The time the library was
last initialized or the
last time that client
session ended using that
feature.
License use persistence A TSM client which continues to have
filespaces in the TSM server storage
continues to use a license instance.
(That's straightforward; but a very gray
area is if the client is just holding on
to a Backup Set.)
License Wizard One of the Windows "wizards" (see the
Windows server Quick Start manual)
See: Unregister licenses
LICENSE_DETAILS TSM 4.1 SQL table. Columns:
LICENSE_NAME Varchar L=10
NODE_NAME Varchar L=64
LAST_USED Last access Timestamp
LICENSE_NAME is the name of a license
feature, being one of: SPACEMGMT,
ORACLE, MSSQL, MSEXCH, LNOTES, DOMINO,
INFORMIX, SAPR3, ESS, ESSR3, EMCSYMM,
EMCSYMR3, MGSYSLAN, MGSYSSAN, LIBRARY
where "MGSYS" is Managed Systems.
NODE_NAME will be either the name of a
Backup/Archive client or the name of a
library.
LAST_USED will be set to the time the
library was last initialized or the last
time that client session ended using
that feature. (The datestamp may be more
than 30 days ago; an 'AUDit LICense'
will not remove the entry.)
See also: 'Query LICense'
LICenseauditperiod See: License audit period...
Licenses ADSMv3: Held in the server directory as
file "nodelock".
See: nodelock
Licenses, audit See: 'AUDit licenses'
Licenses, insufficient Archives are denied with msg ANR0438W
Backups are denied with msg ANR0439W
HSM is denied with msg ANR0447W
DRM is denied with msg ANR6750E
Licenses, unregister See: Unregister licenses
See also notes under REGister LICense.
Licenses and dormant clients There is sometimes concern that having
old, dormant filespaces hanging around
for a dormant client may take up a
client license. If your server level is
at least 4.1, doing Query LICense, will
reveal:
Managed systems for Lan in use: x
Managed systems for Lan licensed: y
where the "in use" value is the thing.
From the 4.1 Readme:
With this service level the following
changes to in use license counting are
introduced.
- License Expiration. A license feature
that has not been used for more than
30 days will be expired from the in
use license count. This will not
change the registered licenses, only
the count of the in use licenses.
Libraries in use will not be expired,
only client license features.
- License actuals update. The number of
licenses in use will now be updated
when the client session ends. An
audit license is no longer required
for the number of in use licenses to
get updated.
(Sadly, this information was not carried
over into the manuals.)
The above information was further
confused by APAR IC32946.
See also: AUDit LICenses; Query LICense
Licensing problems Can be caused by having the wrong date
in your operating system such that TSM
thinks the license is not valid.
Lightning bolt icon In web admin interface, in a list of
nodes: That is a link to the
backup/archive GUI interface for the
clients. It means you specified its URL
for the Client acceptor piece. Clicked,
it should bring up that node's web
client. You can use that to perform
client functions. For it to work:
- The client acceptor and remote client
agent must be installed on the node.
- The client acceptor must be started,
but leave the remote client agent
set to manual startup.
- The node must be findable on the
network, by name or numeric address.
You may need to go into the node and
update it with the correct URL for it to
work correctly. This gives you a common
management point to perform
backup/restore procedures.
Linux client Is part of the collective "Unix Client".
Linux client support for >2 GB files As of TSM 4.2.1, the TSM Linux client
can back up large files, as supported
as of Linux kernel 2.4.
LINUX server As Linux became more popular for server
use in general, IBM committed to
developing a TSM server for Linux.
APARs suggest that this TSM server code
does not share the same code base as the
AIX and Solaris servers, for example.
Into mid 2003, implementing a TSM Linux
server remains problematic:
- Requires very specific (often older)
kernel levels.
- Device support is spotty.
LINUX support, ADSM (client only) As of 1998/08, a NON-Supported
version of the ADSM Linux client was
available pre-compiled (no source code)
on ftp.storsys.ibm.com FTP server in the
/adsm/nosuppt directory: file
adsmv3.linux.tar.Z (now gone).
IBM says: "The TSM source code is not in
the public domain."
Reportedly worked well with RedHat 5.0.
Back then, there was also:
http://bau2.uibk.ac.at/linux/mdw/
HOWTO/mini/ADSM-Backup
LINUX support, TSM client As of 2000/04/27, a formally supported
Linux client is available through the
TSM clients site. Installs into
/opt/tivoli/tsm/client/. File system
support, per the README:
"The functionality of the Tivoli Storage
Manager Linux client is designed and
tested to work on file systems of the
common types EXT2, NFS (see under known
problems and limitations for supported
environment), and ISO9660 (cdrom).
Backup and archive for other file system
types is not excluded. They will be
tolerated and performed in compatibility
mode. This means that features of other
file systems types may not be supported
by the Linux client. These file system
type information of such file systems
will be forced to unknown."
The RedHat TSM Client reportedly needs
at least the 4.2.2.1 client level: the
4.1 client does not support
the Reiser file system.
LINUX support, TSM web client You may experience a Java error when
trying to use the web client interface
(via IE 6.0 SP1 with JRE 1.4.2_03). The
Unix Client manual, under firewall
support, notes that the two TCP/IP ports
for the remote workstation will be
assigned to two random ports - which may
be blocked by Linux's iptables. You'll
want to choose two ports and explicitly
open them in iptables. For example:
In dsm.sys: webports 1582 1583
In /etc/sysconfig/iptables:
-A RH-Lokkit-0-50-INPUT -p tcp -m tcp
--dport 1582 --syn -j ACCEPT
-A RH-Lokkit-0-50-INPUT -p tcp -m tcp
--dport 1583 --syn -j ACCEPT
and then restart dsmcad and iptables
(/etc/rc.d/init.d/iptables restart).
LJ Ultrium Generation 4 Type B, future
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
The first character, L, designates LTO
cartridge type, and the second character
designates the generation & capacity.
Ref: IBM LTO Ultrium Cartridge Label
Specification
LK Ultrium Generation 4 Type C, future
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
The first character, L, designates LTO
cartridge type, and the second character
designates the generation & capacity.
Ref: IBM LTO Ultrium Cartridge Label
Specification
LL Ultrium Generation 4 Type D, future
tape cartridge identifier letters, as
appears on the barcode label, after the
6-char volser.
The first character, L, designates LTO
cartridge type, and the second character
designates the generation & capacity.
Ref: IBM LTO Ultrium Cartridge Label
Specification
LL_NAME SQL: Low level name of a filespace
object, meaning the "filename" portion
of the path...the basename.
Unix example: For path /tmp/xyz, the
FILESPACE_NAME="/tmp", HL_NAME="/", and
LL_NAME="xyz".
(Remember that for client systems where
filenames are case-insensitive, such as
Windows, TSM stores them as UPPER CASE.)
See also: HL_NAME
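As an illustrative sketch (the administrator credentials, node name MYNODE, and path are placeholders, not from any particular site), a dsmadmc query can show how TSM decomposed a path into these columns:

```
dsmadmc -id=admin -password=secret \
  "SELECT FILESPACE_NAME, HL_NAME, LL_NAME FROM BACKUPS \
   WHERE NODE_NAME='MYNODE' AND LL_NAME='xyz'"
```

For the Unix example above, this would report FILESPACE_NAME '/tmp', HL_NAME '/', and LL_NAME 'xyz'.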
LLAddress REGister Node specification for the
client's port number, being a hard-coded
specification of the port to use, as
opposed to the implied port number
discovered by the TSM server during
client sessions (which may be specified
on the client side via the
TCPCLIENTPort option).
See also: HLAddress
LM Library Manager.
LMCP See Library Manager Control Point
LMCP Available? 'lsdev -C -l lmcp0'
lmcpd See: Library Manager Control Point
Daemon.
lmcpd, restart '/etc/kill_lmcpd'
'/etc/lmcpd'
lmcpd, shut down '/etc/kill_lmcpd'
lmcpd level 'lslpp -ql atldd.driver'
lmcp0 Library Manager Control Point, only for
3494 libraries.
lmcp0, define Library Manager Control '/etc/methods/defatl -ctape -slibrary
Point to AIX -tatl -a library_name='OIT3494'
LOADDB See "DSMSERV LOADDB".
Local Area Network (LAN) A variable-sized communications network
placed in one location. It connects
servers, PCs, workstations, a network
operating system, access methods, and
communications software and links.
Local file systems See: File systems, local
Locale See: DATEformat; LANGuage; NUMberformat
LOCK Admin *SM server command to prevent an
administrator from accessing the server,
without altering privileges. Syntax:
'LOCK Admin Adm_Name'
Note: Cannot be used on the
SERVER_CONSOLE administrator id.
Inverse: UNLOCK Admin
LOCK Node TSM server command to prevent a client
node from accessing the server. Syntax:
'LOCK Node NodeName'.
A good thing to do before Exporting a
node.
Inverse: UNLOCK Node
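A hedged sketch of that pre-export practice (NODEA and OTHERSRV are placeholder names; direct server-to-server export assumes a suitably recent server level and a defined target server):

```
LOCK Node NODEA
EXPort Node NODEA FILEData=ALL TOServer=OTHERSRV
UNLOCK Node NODEA
```

Locking first keeps client sessions from changing the node's data mid-export.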
lofs (LOFS) "Loopback file system", or "Loopback
Virtual File System": a file system
created by mounting a directory over
another local directory, also known as
mount-over-mount. A LOFS can also be
generated using an automounter.
Under SGI IRIX, an AUTOFS (automount)
file system.
Loopback file systems provide access to
existing files using alternate
pathnames. Once such a virtual file
system is created, other file systems
can be mounted within it without
affecting the original file system. An
example:
mount -t lo -o ro /real/files
/anon/ftp/files
To check your mount: mount -p
Then put the new info from mount -p
into your /etc/fstab.
See also: all-lofs; all-auto-lofs
Log See: Recovery log
Log buffer pool See: LOGPoolsize
Log command output To log command output, invoke the ADSM
server command as in:
'dsmadmc -OUTfile=SomeFilename ...".
See also: Redirection of command output
Log file name, determine 'Query LOGVolume [Format=Detailed]'
Log pinning See: Recovery Log pinning
%Logical ADSM v.3 Query STGpool output field,
later renamed to "Pct Logical" (q.v.).
Logical file A client file stored in one or more
server storage pools, either by itself
or as part of an aggregate file (small
files aggregation).
See also: Aggregate file; Physical file
Logical group Logical file grouping provides a method
of grouping sets of related files
together by the backup-archive client.
The grouping allows the client to
"relate" a number of files together in a
group, which is managed as a single
entity and stored as a single object on
the TSM server. This is all invisible to
the TSM users or administrators (until
something goes wrong). Logical file
grouping support was initially
introduced in TSM 3.7.3 for system
object backup support in a Windows NT
and Windows 2000 environment. This
support was further enhanced in TSM 4.1
to support the adaptive sub-file backup
for mobile clients.
Terms:
Group Leader: Manages all the objects in
a group. The group leader represents
the entire system object and gets all
the versioning and expiration
information assigned.
Group Members: The objects in the group,
representing the files of a system
object.
Msgs: ANS0343E
Logical occupancy The space required for the storage of
logical files in a storage pool.
Because logical occupancy does not
include the unused space created when
logical files are deleted from
aggregates (small files aggregation), it
may be less than physical occupancy.
See also: physical file; logical file
Logical volume See: Raw Logical Volume
Logical volume backups Available in ADSM 3.7. A way to obtain a
physical image of the overall volume,
rather than traversing the file system
contained in the volume.
Advantages:
- Fast backup and restoral, in not
having to diddle with thousands of
files.
- Minimal TSM db activity: just one
entry to account for the single image,
not thousands to account for all the
files in it.
- Simple way to snapshot your system for
straightforward point-in-time
restorals.
Disadvantages:
- Image integrity: no way to know or
deal with contained files or vendor
databases being open or active.
Logmode See: Set LOGMode
Logmode, query 'Query STatus', look for "Log Mode"
near bottom.
Logmode, set Set LOGMode
Loop mode Term used for invocation of the command
line client command in interactive
mode.
See: dsmc LOOP
Loopback file system See: lofs
LOwmig Operand of 'DEFine STGpool', to define
when *SM can stop migration for the
storage pool, as a percentage of the
storage pool occupancy. Can specify
0-99. Default: 70.
To force migration from a storage pool,
use 'UPDate STGpool' to reduce the
LOwmig value. You could reduce it all
the way to 0; but if a backup or like
task is writing to the storage pool, the
migration task will not end until the
backup ends; so a value of 1 may be
better as a dynamic minimum.
When migration kicks off, it will drain
to below this level if CAChe=Yes in your
storage pool because caching occurs only
with migration, and at that point ADSM
wants to cache everything in there.
It is also the case that Migration fully
operates on the entirety of a node's
data, before re-inspecting the LOwmig
value; thus, the level of the storage
pool may fall below the LOwmig value.
See: Migration
LOGPoolsize Definition in the server options file.
Specifies the size of the Recovery Log
buffer pool, in Kbytes. A large buffer
pool may increase the rate by which
Recovery Log transactions are committed
to the database. To see if you need to
increase the size of this value, do
'Query LOG Format=Detailed' and look at
"Log Pool Pct. Wait": if it is more than
zero, boost LOGPoolsize.
Default: 512 (KB); minimum: 128 (KB)
See also: COMMIT
Ref: Installing the Server...
LOGPoolsize server option, query 'Query OPTion', see LogPoolSize
LOGWARNFULLPercent Server option: Specifies the log
utilization threshold at which warning
messages will be issued. Syntax:
LOGWARNFULLPercent <percent_value>
where the percentage is that of log
utilization at which warning messages
will begin. After messages begin, they
will be issued for every 2% increase in
log utilization until utilization drops
below this percentage.
Code as: 0 - 98. Default: 90
See also: SETOPT
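For example, to begin warnings earlier than the default, the value may be changed dynamically (a sketch, assuming your server level allows this option via SETOPT; 80 is an arbitrary choice):

```
SETOPT LOGWARNFULLPercent 80
```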
Long filenames in Netware restorals From the TSM Netware client manual:
"If files have been backed up from a
volume with long name space loaded, and
you attempt to restore them to a volume
without long name space, the restore
will fail."
Long-term data archiving See: Archive, long term, issues
Long-term data retention See: Archive, long term, issues
Lotus Domino Mail server package, backed up by Tivoli
Storage Manager for Mail (q.v.).
Domino release 5 introduced new backup
APIs, exploited by TDP for Lotus Domino.
In Domino, every user has her own mail
box database, so it can be individually
restored. However, you cannot restore
just a single document: you have to
restore the DB and copy the document
over.
See also: TDP...
Lotus Domino and compression The bytes read/written/transferred
messages from TDP for Domino will be the
same whether compression is on or off.
Those messages are all based on the
number of bytes read and do not take
into account any compression being done
by the TSM API. You would need to query
the occupancy on the server to see any
difference.
Lotus Domino backup There are two *guaranteed* ways to get a
consistent Domino database backup:
1) Shut down the Domino server and back
up the files, as via the B/A client.
2) Use Data Protection for Domino, which
uses the Domino backup and restore
APIs. This can be done while the
Domino Server is up even if the
database is changing during backup.
Some customers point to the TSM 5.1 Open
File support and believe they can use
that instead; but if a database is
"open", you cannot absolutely guarantee
that the database will be in a
consistent state during the point in
time the "freeze" happens, because not
all of the database may be on the disk -
some may still be in memory.
The Domino transaction logging
introduced in Domino 5 makes sure that
the database can be made consistent even
after a crash.
Lotus Domino restoral considerations When performing a restoral with TDP
Notes, the restored physical files are
seen to have contemporary timestamps,
rather than reflecting the timestamps of
the backups. This is because the
external, physical file timestamps don't
matter, and receive no special
attention: what matters are the
timestamps internal to the Domino
database, which is what the TDP is
concerned with.
Lotus Notes Agent Note that *SM catalogs every document in
the Notes database (.NSF file).
Low threshold A percentage of space usage on a local
file system at which HSM automatically
stops migrating files to ADSM storage
during a threshold or demand migration
process. A root user sets this
percentage when adding space management
to a file system or updating space
management settings. Contrast with high
threshold. See: dsmmigfs
Low-level address Refers to the port number of a server.
See also: High-level address;
Set SERVERHladdress; Set SERVERLladdress
Low-level name qualifier API: The right part of a file path,
following the filespace name and the
high-level name qualifier. The API
software wants a slash/backslash on the
left part of the qualifier, but not on
the right (which is different from the
structure reported in Query CONTent).
Thus, with path /a/b/c, /a is the
filespace name, /b is the high-level
name qualifier, and /c is the low-level
name qualifier. (If you attempt to
relocate the slash from the LL name
portion to the right side of the HL,
ANS0225E results.)
Ref: API manual, "High-level and
low-level names"
See also: High-level name qualifier
LOwmig Operand of 'DEFine STGpool', to define
when *SM can stop migration for the
storage pool, as a percentage of the
storage pool estimated capacity.
When the storage pool reaches the low
migration threshold, the server does not
start migration of another node's files.
Because all file spaces that belong to a
node are migrated together, the
occupancy of the storage pool can fall
below the value you specified for this
parameter. You can set LOwmig=0 to
permit migration to empty the storage
pool.
Can specify 0-99. Default: 70.
To force migration from a storage pool,
use 'UPDate STGpool' to reduce the
HIghmig value (with HI=0 being extreme).
See also: Cache; HIghmig
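A sketch of draining a pool by thresholds, per the above (DISKPOOL and the restored values are placeholders; check your own settings with 'Query STGpool' first):

```
UPDate STGpool DISKPOOL HIghmig=0 LOwmig=0
/* ...wait for the MIGRATION process to finish... */
UPDate STGpool DISKPOOL HIghmig=90 LOwmig=70
```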
lpfc0 See: Emulex LP8000 Fibre Channel Adapter
LRD In Media table, the Last Reference Date
(YYYY-MM-DD HH:MM:SS.000000).
LTO Linear Tape - Open. In 1997 IBM formed
a partnership with HP and Seagate on an
open tape standard called LTO or
Linear Tape Open. LTO is based on
Magstar MP. (Conspicuously missing from
the partnership is Quantum, the sole
maker of DLT drives: LTO was devised as
a mid-range tape technology in avoiding
paying royalties to Quantum. Quantum
subsequently advanced to SuperDLT to
compete with LTO.)
The technology is linear, meaning that
the heads are stationary as the tape is
pulled lengthwise across them. Multiple
longitudinal recording "stripes"
constitute the multiple tracks. Upon
reaching the physical end of the tape,
the head switches to looking at a next
set of tracks and the tape reverses,
with this process repeating to make for
a "serpentine" or back-and-forth
traversal which can eventually deal with
the entire surface of the tape. The tape
is 1/2" wide. Employs magnetic servo
tracking on the recording surface for
precise positioning.
Originated in two flavors, with different
cartridges: Accelis (based upon IBM's
twin-reel 3570 cartridge) and Ultrium
(based upon IBM's single-reel 3590).
The Accelis and Ultrium formats use the
same head / media track layout / channel
/ servo technology, and share many
common electronic building blocks and
code blocks. Accelis is optimized for
quick access to data while Ultrium is
optimized for capacity. Note that
Accelis was abandoned in favor of
Ultrium, expecting that customers would
want higher capacity rather than high
performance.
Cartridge Memory (LTO CM, LTO-CM) chip
is embedded in both Accelis and Ultrium
cartridges. A non-contacting RF module,
with non-volatile memory capacity of
4096 bytes, provides for storage and
retrieval of cartridge, data
positioning, and user specified info.
Capacity and speed are intended to
double in each succeeding generation of
the technology.
Performance: LTO is streaming
technology. If you cannot keep the data
flowing at tape speed, it has to stop,
back up, and restart to get the tape up
to speed again, which makes for a
substantial performance penalty.
LTO seems, as a product, to be
positioned between the competing DLT and
the complementary, higher-priced 3590
and STK 9x40.
SAN usage: Initially supported via SDG
(SAN Data Gateway).
Visit: http://lto-technology.com/
http://www.lto-technology.com/newsite/
index.html
http://www.ultrium.com
http://www.storage.ibm.com/hardsoft/
tape/lto/index.html
http://www.cartagena.com/naspa/LTO1.pdf
http://www.overlanddata.com/PDFs/
104278-102_A.pdf
http://www.ibm.com/storage/europe/
pdfs/lto_mag.pdf
See also: 3583; Accelis; MAM;
TXNBytelimit and tape drive buffers;
Ultrium
LTO bar code format - Quiet zones (at each end of the bar
code).
- A start character (indicating the
beginning of the label).
- A six-character volume label.
- A two-character cartridge media-type
identifier (L1), which identifies the
cartridge as an LTO cartridge ('L')
and indicates that the cartridge is
the first generation of its type
('1').
- A stop character (indicating the end
of the label).
When read by the library's bar code
reader, the bar code identifies the
cartridge's volume label to the tape
library. The bar code volume label
also indicates to the library whether
the cartridge is a data, cleaning, or
diagnostic cartridge.
LTO cleaning cartridge See: Ultrium cleaning cartridge
LTO drive cleaning Seldom required. At each tape unload the
LTO drives have a small mechanical brush
that runs over the heads. This seems to
reduce the need for cleaning.
LTO performance See Tivoli whitepaper "IBM LTO Ultrium
Performance Considerations"
Note that performance can be impaired if
the LTO-CM memory chip (aka Medium
Auxiliary Memory: MAM) has failed.
A worse problem is one which was
divulged 2004/09/13, where bad LTO1,2
microcode will cause the CM index to be
corrupted. Without the index, the drive
has to grope its way through the data to
find what it needs to access, and
performance is severely impaired. The
LTO architecture is designed to
automatically re-build this index if it
should become corrupted. However, when
this corrupted index condition is
detected, slow performance is the result
as the index is re-built, as the tape
must be re-read from the beginning to
the end of the tape. A corrupted index
may be fixed the next time it is used,
only to be corrupted again at a future
time: installing corrected drive
microcode is the only solution.
LTO customers should use TapeAlert,
which spells out drive problems.
LTO tape errors Can be caused by the cartridge having
been dropped. (The LTO cartridges are
not as rugged as 3480/3490/3590 tape
cartridges.)
LTO tape serial number The barcode may have "SU3689L1", wherein
the serial number is "SU3689" - does not
include the "L1".
LTO vs. 3590 An LTO drive is 5 inches tall and
roughly twice as long as the data
cartridge; the motor is lightweight, and
there is no tape 'buffer' between the
cart and the internal reel. The motor on
a 3580/3590 is much larger and heavier,
and there is a vacuum column buffer
between the cart and the internal reel.
The net result is that the 3590 needs to
get one reel or the other up to speed
and has several inches of tape to
accelerate AND has a much more powerful
motor to do it. The LTO drive, with a
lighter motor, has no tape buffer and
needs to get both reels and all the tape
moving. It is also the case that LTO is
designed for streaming: the start-stop
operation associated with small files
is greatly detrimental to LTO
performance (see: Backhitch).
See also: LTO performance; Backhitch
LTO1 drives, IBM Those are 3580 Ultrium 1 drives.
See: 3580
LTO-2 (lto2) See: Ultrium 2
LuName server option, query 'Query OPTion'
LVM Fixed Area The 1 MB reserved control area on a *SM
database volume, as accounted for in the
creating 'dsmfmt -db' operation.
See also: SHow LVMFA
LVSA Logical Volume Snapshot Agent.
For making an image backup of a Windows
2000 volume while the volume continues
to be available for other processing.
TSM will create the OBF (Old Blocks
File) there, and perform the backup from
there.
Default location: C:\TSMLVSA
See also: Image Backup; OBF; Open File
Support; SNAPSHOTCACHELocation
LZ1 IBM's proprietary version of Lempel-Ziv
encoding called IBM LZ1.
Macintosh, shut down after backups Put into the ADSM prefs file:
"SCHEDCOMpleteaction Shutdown"
Macintosh backup file names Macintosh has traditionally used the
colon character (:) rather than slash
(/) or backslash (\) as its directory
designation character. Interestingly,
this persists into OS X, where the user
interface makes the directory character
seem to be the usual Unix slash (/); but
OS X invisibly translates that to and
from its usual colon (:). So, if you do
Query CONtent or the like at the TSM
server, you will see the actual colons
separating file path components.
Macintosh client components The following components are in the
Macintosh client package:
Backup: The interactive GUI for
backup, restore, archive, retrieve.
~2.8MB
Scheduler daemon: A background appl
that operates in sleep mode until it
is time to run a schedule, then starts
the Scheduler program. ~120KB
Scheduler program: Communicates with
the server for the next schedule to
run, and performs the scheduled
action, such as a backup or restore,
at the scheduled time. ~1.5MB
Macintosh disaster recovery Simply take some kind of removable disk
(Syquest, ZIP, ...) with enough capacity
and put a minimal version of MacOS (with
TCP/IP support) and ADSM on it.
Macintosh files, back up from NT Yes, ADSM can do this, via NT
"Services for Macintosh". NT can access
Macintosh file systems, and from NT you
can then back them up. BUT: ADSM
version 2 cannot handle the resource
fork portion of the files (ADSM v3 can).
V.2 restorals thus bring the files
back as "flat files".
See: Services for Macintosh;
USEUNICODEFilenames
Macintosh files, restore to NT The Mac files must be restored to a
directory managed by "Services for
Macintosh". Also make sure that
Services for Macintosh is up and
running.
Macintosh icons, effects of moving In the Mac client V3 manual, Chapter 3,
page 13, it says: "Simply moving an
icon makes the file appear changed. ADSM
records the change in icon position to
minimize the problem of multiple icons
occupying the same space after the files
are restored. If only the attributes of
a file or folder have changed, and not
the data, only the attributes are backed
up. You may have multiple versions of
the same file with the only difference
between them being the icon position or
color."
Macintosh OS X scheduler Via dsmcad. It's started from the script
/Library/StartupItems/dsmcad/dsmcad when
Mac OS X boots. You should see a
/usr/bin/dsmcad running. If checking
with the GUI client, you'll need to use
'TSM Backup for Administrators' rather
than the plain 'TSM Backup': the latter
will only show other users' backed up
directories, not their files.
MACRO TSM server command used to invoke a
user-programmed set of TSM commands, as
a package, with variable substitution.
Syntax:
'MACRO MacroName [Substitutionvalues]'
where the macro file name is
case-sensitive and Substitutionvalues
fill in percent-signed numbers, in
numerical order by invocation order.
Example of Substitution Variables: %1,
%2, %3.
Note that the %variables will be filled
in only if they are "exposed": if they
are in quotes, the macro will not
perform substitution, as quoted values
are taken as literals. This is
inconvenient as we often want to employ
an SQL Select in a macro, and in SQL, a
string must be in single quotes. The one
way around this is to feed the value
itself as a quoted string.
Redirection: Works. Note that the
facility does not perform variables
substitution on a redirection output
destination name, so the following will
*not* work: q n %1 > /tmp/qn.%1
Note that you cannot run a macro via an
Administrative Schedule - but you can
via a Client Schedule, via ACTion=Macro
with OBJects naming the macro...which
means that the schedule must be
associated with a node and that its dsmc
sched process causes the macro to run.
(Consider instead using Server Scripts.)
The TSM manuals are obscure as to where
macro files are supposed to be located.
In actuality, they can be:
- In the directory where the dsmadmc
command was invoked, whereby you can
invoke the macro simply by its base
name, as in:
MACRO mymacro
- In any system directory, whereby you
need to invoke the macro by full
path name, as in:
MACRO /usr/local/adsm/mymacro
One convenient practice would be to
create a standard macros directory, and
then 'cd' there before invoking
'dsmadmc', thus allowing you to invoke
the macros with short names.
Note that you do not need eXecute
permission to be set on macro files, in
that *SM will load and interpret them.
An unusual factor is that TSM keeps
going back to the macro as it performs
it, even if the macro is simple and
certainly involves no looping: changing
the content of the macro during a
"more..." screen transition, for
example, will result in an "ANR2000E
Unknown command" error message.
Ref: Admin Guide chapter "Automating
Server Operations", Using Macros
See also: /* */; Server scripts
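A minimal sketch of such a macro with one substitution variable (the file and node names are arbitrary examples, not a standard):

```
/* File: qnode.mac -- invoke as: MACRO qnode.mac SOMENODE */
Query Node %1 Format=Detailed
Query FIlespace %1
```

Per the exposure rule above, a macro containing an SQL Select would have to be fed the value already quoted, as in: MACRO sel.mac "'SOMENODE'".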
Magic Number You will run into occasional TSM server
messages referring to "magic number".
This amounts to a checksum number which
TSM generated and stored in the database
at the time it put the file object into
its storage pool (wrote it to media),
to assure data integrity. When at some
time in the future TSM may be called
upon to retrieve the object from that
media, it generates a checksum from the
retrieved file data and checks that it
matches what it originally had for the
object. An error indicates that the
data could be read from the media
without hardware/OS detection of an
error, but nevertheless there is a
discrepancy. The data is thus deemed
corrupted and hopeless: you need to
perform a Restore Volume or the like to
get a usable copy of the object.
How did the data go bad? The most likely
cause is between TSM and the tape head:
Faulty hardware, erroneous firmware, bad
SCSI cables, network infrastructure
problems, and the like can all result in
bad data ending up on the media.
Magstar Product line acronym: Magnetic storage
and retrieval.
Name supplanted in 2002 by the IBM
TotalStorage brand.
See also: IBM TotalStorage; TotalStorage
Magstar MP IBM's name for its 3570 and 3575
technology.
MAILprog Client System Options file (dsm.sys)
option to specify who gets mail, and via
what mailer program, when a password
expires and a new password is generated.
Can be used when PASSWORDAccess Generate
is in effect. Code within the
SErvername section of definitions.
Format: "Mailprog /mail/pgmname User_Id"
See also: PASSWORDAccess; PASSWORDDIR
MAKesparsefile See: Sparse files, handling of
MAM Medium Auxiliary Memory: An Auxiliary
Memory residing on a medium, for
example, a tape cartridge.
Some tape technologies - e.g., AIT and
LTO (Ultrium) - use cartridges equipped
with Medium Auxiliary Memory (MAM), a
non-volatile memory used to record
medium identification and usage info.
This is typically accessed via an RF
interface and does not require reading
the tape itself. In a library not
equipped with a mobile MAM reader, it is
necessary to load the cartridge into the
drive to read the MAM via the drive's
MAM reader.
Ref: http://www.t10.org/ftp/t10/
document.99/99-347r0.pdf
Mammoth tape drive Exabyte Corp. 8mm (helical scan) tape
drive with SCSI-2 fast interface, wide
or narrow, with SE or differential as an
option. Introduced in 1996, aimed at the
midrange server market.
Capacity: 20 GB, native/uncompressed;
40 GB compressed.
Transfer Rate: 10.5 GB per hour,
native/uncompressed; 360 MB/min
compressed rate.
Technology is similar to AIT-1.
Mammoth-2 tape drive Exabyte 8mm tape drive (helical scan),
with multiple channels, with error
correction and ALDC compression.
Form factor: half-height, 5.25"
Capacity: 60 GB native; up to 150 GB
compressed.
Transfer rate: 12 MB/s native; up to 30
MB/s with compression.
Cartridge tape contains a section of
cleaning fabric which the drive uses as
needed.
Technology is similar to AIT-2.
Managed Server See: Enterprise Configuration and Policy
Management
MANAGEDServices Windows client option for having CAD
cause the client scheduler, and web
client, to run rather than have them
hang around as memory-holding processes.
Syntax: MANAGEDServices
{[schedule] [webclient]}
See also: CAD
Management class A policy object that contains a
collection of (HSM) space management
attributes and backup and archive Copy
Groups. The space management attributes
contained in a Management Class
determine determine whether HSM-managed
files are eligible for automatic or
selective migration. The attributes in
the backup and archive Copy Groups
determine whether a file is eligible for
incremental backup and specify how ADSM
manages backup versions of files and
archived copies of files.
The management class is typically chosen
for users by the node root administrator
(via 'ASsign DEFMGmtclass') but can
alternately be selected as the third
token on the INCLUDE line in the
include-exclude options file, or via the
DIRMc Client Systems Option File option,
or the ARCHMc 'dsmc archive' command
line option. However, automatic
migration occurs *only* for the default
management class; for the incl-excl
named management class you have to
manually incite migration.
Management class, choose Is accomplished by specifying the
mangement class as the third token on a
client Include option.
Format: Include FileSpec MgmtClassName
To have all backups use the management
class, code:
Include * MgmtClassName
To have specific file systems use the
management class, do like:
Include /fsname/.../* MgmtClassName
Ref: Client B/A manual
Management class, copy See: COPy MGmtclass
Management class, default As the name implies, this is the
management class which will be used by
default. Can be overridden via the third
token on the INCLUDE line in the
include-exclude options file. However,
automatic migration occurs *only* for
the default management class; for the
incl-excl named management class you
have to manually incite migration.
Management class, default, establish 'ASsign DEFMGmtclass DomainName SetName
ClassName'
To make this change effective you then
need to do:
'ACTivate POlicyset DomainName SetName'
Management class, define 'DEFine MGmtclass DomainName SetName
ClassName
[SPACEMGTECH=AUTOmatic|
SELective|NONE]
[AUTOMIGNOnuse=Ndays]
[MIGREQUIRESBkup=Yes|No]
[MIGDESTination=poolname]
[DESCription="___"]'
Note that except for DESCription, all of
the optional parameters are Space
Management Attributes for HSM.
Management class, delete 'DELete MGmtclass DomainName SetName
ClassName'
Management class, query 'Query MGmtclass [[[DomainName]
[SetName] [ClassName]]] [f=d]'
See also: Management classes, query
Management class, SQL queries It is: CLASS_NAME
Management class, update See: UPDate MGmtclass
Management class for HSM, select HSM uses the Default Management Class
which is in force for the Policy Domain,
which can be queried from the client via
the dsmc command 'Query MGmtclass'.
You may override the Default Management
Class and select another by coding an
Include-Exclude file, with the third
operand on an Include line specifying
the Management Class to be used for the
file(s) named in the second operand.
Management class used by a client 'dsmc query mgmtclass' or
'dsmc query options' in ADSM ('dsmc show
options' in TSM).
Management class used in backup Shows up in 'dsmc query backup', whether
via command line or GUI.
Management classes, display in detail 'dsmmigquery -M -D'
Management classes, query from client 'dsmc Query Mgmtclass [-DETail]'
Reports the default management class
and any management classes specified
on INCLude statements in the
Include/Exclude file.
Management classes, unused, identify You can perform queries like the
following, for Archives and Backups:
SELECT DOMAIN_NAME, CLASS_NAME FROM
MGMTCLASSES WHERE CLASS_NAME NOT IN
(SELECT DISTINCT(CLASS_NAME) FROM
ARCHIVES)
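The query above shows the Archives form; the matching sketch for Backups follows (BACKUPS and CLASS_NAME are the standard TSM table/column names, but verify against your server level). Shown as statement text only, to be fed to dsmadmc:

```shell
# Sketch only: print the corresponding SELECT for Backups.
# You would paste the statement into a dsmadmc session.
cat <<'EOF'
SELECT DOMAIN_NAME, CLASS_NAME FROM MGMTCLASSES
 WHERE CLASS_NAME NOT IN
  (SELECT DISTINCT(CLASS_NAME) FROM BACKUPS)
EOF
```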
MANUAL (libtype) See: Manual library
Manually Ejected category 3494 Library Manager category code FFFA
for a tape volume which was in the
inventory but in a re-inventory was
not found in the 3494. Thus, the 3494
thinks that someone reached in and
removed it. This category is typically
induced by having to extricate a damaged
tape from the robot. See "Purge Volume"
category to eliminate such an entry.
Manual library No, it's not a library full of manuals;
it's a library whose volumes are to be
mounted manually, by people responding
to mount messages. It is distinguished
by LIBType=MANUAL in DEFine LIBRary; and
the tape device will be of "mt" type,
rather than "rmt" (*SM driver).
A shop running this type of operation
will usually have an operations terminal
running the *SM administrative client in
Mount Mode (dsmadmc -mountmode), simply
for the operators to see and respond to
mount requests. Outstanding mount
requests can be checked via Query
REQuest. Such requests are answered with
the REPLY command acknowledging a
specific request number, to signify that
the action requested has been performed
by the operator such that *SM can
proceed.
Manuals See: TSM manuals
"Many small files" problem The name of the challenge where backups
involve a large number of small files,
which stresses the TSM database due to
the heavy updating and number of
database entries, and the client's
memory and processing power in
performing an Incremental backup.
See "Database performance" for ways to
mitigate the impact on the TSM database
and optimize performance.
Other possible approaches:
- To somewhat reduce Backup time,
consider using -INCRBYDate backup,
which eliminates getting a long list
of files from the server, massaging it
in client memory, and then comparing
as the file system is traversed. (But
see the INCRBYDate entry for side
effects.)
- Another Backup time reduction scheme:
With some client file systems it may
be known in what area updating occurs,
as in the case of a company doing
product testing which creates
thousands of results files in
subdirectories named by product and
date. Here you can tailor your backup
to go directly at those directories
and skip the rest of the file system,
where you know that little or nothing
has changed.
- Journal-Based Backups may be a good
alternative on Windows.
- Consider 'dsmc Backup Image' (q.v.),
to back up the physical image of a
volume (raw logical volume) rather
than individually backing up the files
within it.
- Some customers pre-combine many small
files on the client system, as with
the Unix 'tar' command or personal
computer file bundling packages, thus
reducing the quantity to a single
bundle file.
- If regulations require you to keep
files for a certain period, consider
using Backup Sets rather than doing
full backups.
- Consider a "divide and conquer"
approach, using parallel backup
processes to operate on separate areas
of a file system housing many small
files, to reduce the overall time to
perform the backup. You may employ a
'dsmc i' for each major top-level
directory, to back up into the same
TSM server filespace, or use the
VIRTUALMountpoint option to cause the
file system to be treated as multiple
filespaces. Naturally, this can be
effective only if your disk and I/O
path can meet the demands.
Your retention policies need to be
reasonable: don't arbitrarily retain a
year's worth of versions, but rather
keep as much as is really needed to
recover files.
Make sure you are running regular,
unlimited expirations, else your TSM
database will balloon.
The backup of small files is also
problematic with tape drives with poor
start-stop characteristics (see
Backhitch).
The condition of the directory in which
the small files exist can also slow
things down: see "Backup performance".
Consider turning on client tracing to
identify the specific problem area.
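The "divide and conquer" approach above can be sketched as a dry run; the directory names are hypothetical, and the echo merely prints the dsmc commands you would launch:

```shell
# Dry-run sketch: one 'dsmc incremental' per top-level directory
# (paths are made up). Remove the 'echo' to actually run them, and
# append '&' plus a final 'wait' to run them concurrently.
for dir in /bigfs/projA /bigfs/projB /bigfs/projC; do
  echo dsmc incremental "$dir/"
done
```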
Master Drive An informal name for the first, SMC
drive in a SCSI library, such as the
3584. (Remove that drive and you suffer
ANR8840E trying to interact with the
library.)
MATCHAllchar Client option to specify a character to
be used as a match-all wildcard
character. The default is an asterisk
(*).
MATCHOnechar Client option to specify a character to
be used as a match-one wildcard
character. The default is a question
mark (?).
MAX SQL statement to yield the largest
number from all the rows of a given
numeric column.
See also: AVG; COUNT; MIN; SUM
MAXCAPacity Devclass keyword for some devices
(principally, File) to specify the
maximum size of any data storage files
defined to a storage pool categorized by
this device class.
MAXCAPACITY, if set to other than 0,
determines the maximum amount of data
ADSM will put on a tape. ESTCAPACITY, if
MAXCAPACITY is not set, is an estimate
used for some calculations for
reclamation and display, but does not
determine when a tape is full.
On VM and MVS servers MAXCAPACITY is the
maximum amount of data that ADSM will
put on a tape, but if the tape becomes
physically full, or has certain errors,
it will be marked full before it reaches
that capacity. The capacity reported by
ADSM does not consider compression. If
client compression is used, or if the
data is not very compressible (backups
of zip files, for example), then ADSM
will report a full tape with a smaller
capacity. Most tape manufacturers give
their tape capacity assuming compression
(I think normally around 3/1), so if you
are sending already compressed data, you
will not be able to reach the stated
capacities.
MAXCMDRetries Client System Options file (dsm.sys)
option to specify the maximum number of
times you want the client scheduler to
attempt to process a scheduled command
which fails.
Default: 2
Do not confuse with the Copy Group
SERialization parameter, which governs
attempts on a busy file, not session
reattempts.
Maximum command retries 'Query STatus'
Maximum mounts See: MOUNTLimit
Maximum Scheduled Sessions 'Query STatus' output reflecting the
number of schedule sessions possible, as
controlled by the 'Set MAXSCHedsessions'
command, a percentage of the Maximum
Sessions value seen in 'Query STatus'.
Default: 50% of Maximum Sessions.
MAXMIGRATORS HSM: New in 4.1.2 HSM client, per the
IP22148.README.HSM.JFS.AIX43 file:
Starting with this release, dsmautomig
starts parallel sessions to the TSM
server that allows to migrate more than
one file at a time. The number of
parallel migration sessions is
recognized by the dsmautomig process
specific option that can be configured
in the dsm.sys file:
MAXMIGRATORS <number of parallel
migration sessions>
(default = 1, min = 1, max = 20)
Make sure that sufficient resources are
available on the TSM server for
parallel migration. Avoid setting the
MAXMIGRATORS option higher than the
number of TSM server sessions that can
be used for storing data.
maxmountpoint You mean MAXNUMMP (q.v.)
MAXNUMMP TSM 3.7+ server REGister Node, UPDate
Node parameter to limit the number of
concurrent mount points, per node, for
Archive and Backup operations. Prevents
a client from taking too many tape
drives at one time. Affects
parallelization.
This option is ignored for Restore and
Retrieve operations.
Code 0 - 999. Default: 1
Warning: A value of 0 will result in
ANS1312E message and immediate
termination of a backup/archive session;
but restore/retrieve will not be
impeded.
Warning: Upgrading to 3.7, with its
attendant database conversion, results
in the MAXNUMMP value being 0!
The RESOURceutilization should not
exceed MAXNUMMP.
Ref: TSM 3.7 Technical Guide, 6.1.2.3
See also: KEEPMP; MOUNTLimit;
Multi-session client; REGister Node
MAXPRocess Operand in 'BAckup STGpool',
'MOVe NODEdata', 'RESTORE STGpool', and
'RESTORE Volume' to parallelize the
operation - tempered by the number of
tape drives. Note that the "process"
implications in the name hark back to
the days when server tasks were
performed
by individual processes: in these modern
times, MAXPRocess is figurative and
actually governs the number of threads.
MAXRecalldaemons Client System Options file (dsm.sys)
option to specify the maximum number of
dsmrecalld daemons which may run at one
time to service HSM recall requests.
Default: 20
MAXRECOncileproc Client System Options file (dsm.sys)
option to specify the maximum number of
reconciliation processes which HSM can
start automatically at one time.
Default: 3
MAXSCRatch Operand in 'DEFine STGpool' to govern
the use of scratch tapes in the storage
pool. Specifies the maximum number of
scratch volumes that may be taken for
the storage pool, cumulatively. That is,
each volume taken from the scratch pool
is still known as a scratch volume, as
reflected in the Query Volume "Scratch
Volume?" value, and will return to the
scratch pool when emptied. The
MAXSCRatch value is thus the storage
pool's quota limit.
Setting MAXSCRatch=0 prevents use of
scratch volumes, an intentional special
case when you want to have the storage
pool use only volumes specifically
assigned to it, via 'DEFine Volume'. If
MAXSCRatch is greater than 0 and you
have also DEFine'd volumes into the
storage pool, the DEFine'd volumes will
be used first, then scratches.
Msgs: ANR1221E
MAXSCRatch, query 'Query STGpool ... Format=Detailed';
look for the value associated with
"Maximum Scratch Volumes Allowed".
MAXSCRatch and collocation ADSM will never allocate more than
'MAXSCRatch' volumes for the storage
pool: collocation becomes defeated when
the scratch pool is exhausted as ADSM
will then mingle clients. When a new
client's data is to be moved to the
storage pool, ADSM will first try to
select a scratch tape, but if the
storage pool already has 'MAXSCRatch'
volumes then it will select the tape
with the lowest utilization in the
storage pool.
MAXSessions Server options definition (dsmserv.opt).
Specifies the number of simultaneous
client sessions. The MAXSessions value
is incremented by prompted sessions,
polling sessions, and admin sessions.
When an attempt is made to prompt a
client there is a 1 minute delay for
response from that client. The next
client to be prompted is not prompted
until either the first client responds
or the 1 minute delay elapses. So if you
have many prompted clients, be sure your
schedule starttime duration is large
enough to accommodate 1-minute delays.
Typically the client will start as soon
as prompted, so you may have prompted
clients that are not "loaded" and
consequently the entire delay is used
waiting for a client that is not going
to respond. Even if you are maxed out
on the MAXSessions value, you can always
start more administrative clients.
Default: 25 client sessions
Ref: Installing the Server...
See also: Multi-session Client;
"Set MAXSCHedsessions <%Sched>", whereby
part of this total MAXSessions value is
devoted to Schedule sessions; SETOPT
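Given the one-minute prompting delay described above, a rough worst-case window for prompted scheduling is simply one minute per prompted client (the client count below is a made-up example):

```shell
# Worst case: every prompted client consumes the full 1-minute delay.
# 120 prompted clients is a hypothetical figure.
clients=120
delay_min=1
echo "worst-case prompting window: $((clients * delay_min)) minutes"
```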
MAXSessions server option, query 'Query OPTion', see "Maximum Scheduled
Sessions".
MAXSize STGpool operand to define the maximum
size of a Physical file which may be
stored in this pool. (Remember that
Physical size refers to the size of an
Aggregate, not the size of a Logical
file from the client file system. See
"Aggregates".)
Limiting the size of a file eligible
for a given pool in a hierarchy causes
larger files to skip that storage pool
and try the next one down in the
hierarchy. If the file is too big for
any pool in the hierarchy, it will not
be stored.
The file's size, as reported by the
operating system, is compared to the
storage pool's MAXSize value PRIOR TO
compression.
Value can be specified as "NOLIMIT"
(which is the default), or a number
followed by a unit type: K for
kilobytes, M for megabytes, G for
gigabytes, T for terabytes.
Examine current values via server
command 'Query STGpool Format=Detailed'.
Msgs: ANS1310E
See also: Storage pool space and
transactions
MAXThresholdproc Client System Options file (dsm.sys)
option to specify the maximum number of
HSM threshold migration processes which
can start automatically at one time.
Default: 3
Maximum sessions, define "MAXSessions" definition in the server
options file.
Maximum sessions, get 'Query STatus'
MB Megabyte: To be considered equal to
1024 x 1024 = 1,048,576 in TSM.
(Note that disk makers base their
sizings on 1000, not 1024...to make
their offerings seem more capacious.)
See also: Kilobyte
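The 1024-versus-1000 discrepancy noted above is easy to quantify:

```shell
# TSM megabyte (binary) vs. disk-maker megabyte (decimal):
echo "binary MB:  $((1024 * 1024)) bytes"    # 1048576
echo "decimal MB: $((1000 * 1000)) bytes"    # 1000000
# A "100 GB" decimal disk, expressed in binary GB (integer math):
echo "100 decimal GB = $((100 * 1000**3 / 1024**3)) binary GB (approx)"
```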
MBps Megabytes per second, a data rate
typically used with tape drives.
Mbps Megabits per second, a data rate
typically associated with data
communication lines.
Media Access Status Element of Query SEssion F=D report.
"Waiting for access to output volume
______ (___ seconds)" may reflect the
volume name that the session was waiting
for when it started - but that may no
longer be the actual volume needed. For
example: an Archive session fills the
disk storage pool in a hierarchy where
tape is the next level, and so a
migration process is incited...and so
the client is waiting on the tape which
the migration process is migrating to.
Then that tape fills. Migration goes on
to a fresh tape, but the archive session
still shows waiting for access to the
original tape.
When neither Query Process nor Query
Session F=D show the volume identified
in "Waiting for access...", it can be
due to a backup of HSM-managed space
where that volume is feeding the backup
directly from the storage pool rather
than the client, as HSM backups operate
where the HSM space is on the *SM
server. Query Session F=D shows only the
output volume, not the implicit input.
"Current output volume(s): ______,(470
Seconds)" is an undocumented form, which
seems to reflect how long the tape has
been idle, as for example when the
client is looking for the next candidate
file to back up. This impression is
reinforced by the Seconds value dropping
back to zero periodically. If that HSM
backup cannot mount either the input or
output volumes for lack of drives, the
field will report two "Waiting for mount
point..." instances, which looks odd but
makes perfect sense.
Media fault message ANR8359E Media fault ... (q.v.)
Media Type IBM 34xx tape cartridges have an
external one-character ID, as follows:
'1' Cartridge System Tape (CST): 3490
'E' Enhanced Capacity Cartridge System
Tape (ECCST): 3490E
'J' Magstar 3590 tape cartridge (HPCT)
'K' Magstar 3590 tape cartridge (EHPCT)
See also: CST; ECCST; HPCT
Media TSM db table intended to report
volumes managed via the MOVe MEDia cmd.
Columns: VOLUME_NAME, STATE
(MOUNTABLEINLIB, MOUNTABLENOTINLIB),
UPD_DATE (YYYY-MM-DD HH:MM:SS.000000),
LOCATION, STGPOOL_NAME, LIB_NAME, STATUS
(EMPTY, FILLING, FULL), ACCESS
(READONLY, etc.), LRD (YYYY-MM-DD
HH:MM:SS.000000).
(LRD is Last Reference Date.)
MEDIA1 A less-used designation for 3490 base
cartridge technology. See CST.
MEDIA2 A less-used designation for 3490E
cartridge technology. See ECCST.
MEDIA3 A less-used designation for 3590
cartridge technology.
mediaStorehouse 199901 product from Manage Data Inc.
which functions as an ADSM proxy client
to service backup and restore of
network-client data via CORBA wherever
the user currently happens to be (based
upon userid). www.managedata.com
Media Wait (MediaW) "Sess State" value in 'Query SEssion'
for when a sequential volume (tape) is
to be mounted to serve the needs of that
session with a client and the session
awaits completion of that mount. This
could mean waiting either for a mount
point or a volume in use by another
session or process. Another cause is
the tape library being unavailable, as
in a 3494 in Pause mode.
When using a TDP, refer to its User Guide
regarding multi-session (Stripes), where
you will probably need to enable
collocation by filespace.
Recorded in the 24th field of the
accounting record, and the
"Pct. Media Wait Last Session" field of
the 'Query Node Format=Detailed' server
command.
See also: Communications Wait;
Idle Wait; SendW; Run; Start
Medium changer, list contents Unix: 'tapeutil -f /dev/____ inventory'
Windows: 'ntutil -t tape_ inventory'
See: ntutil; tapeutil
Medium Mover (SCSI commands) 3590 tape drive: Allows the host to
control the movement of tape cartridges
from cell to cell within the ACF
magazine, treating it like a mini
library of volumes.
Megabyte See: MB
Memory limits See: Unix Limits
Memory-mapped I/O You mean Shared Memory (q.v.)
MEMORYEFficientbackup ADSMv3+ Client User Options file
(dsm.opt) option specifies a more memory
conserving algorithm for processing
incremental backups, backing up one
directory at a time, and using less
memory. This obviously occurs at (great)
expense of backup performance.
Choices:
No Your client node uses the faster,
more memory-intensive method when
it processes incremental backups.
Yes Your client node uses the method
that uses less memory when
processing incremental backups -
BUT WITH A BIG PERFORMANCE PENALTY.
Note: This option can also be defined on
the server.
Msgs: ANS1030E
See also: LARGECOMmbuffers
Message explanation You can do 'help MsgNumber' to get info
about a message. For example: with
message ANR8776W, you can simply do
'help 8776'.
Message filesets (TSM AIX server) tivoli.tsm.msg.en_US.devices
tivoli.tsm.msg.en_US.server
tivoli.tsm.msg.en_US.webhelp
Message interval "MSGINTerval" definition in the server
options file.
MessageFormat Definition in the server options file.
Specifies whether message headers appear
in all lines of a multi-line message.
option numbers:
1 - Only the first line of a multi-line
message contains the header.
2 - All lines of a multi-line message
contain headers.
Default: 1
Ref: Installing the Server...
MessageFormat server option, query 'Query OPTion'
Messages, suppress Use the Client System Options file
(dsm.sys) option "Quiet".
See also: VERBOSE
MGMTCLASSES SQL Table for Management Classes.
Columns: DOMAIN_NAME, SET_NAME,
CLASS_NAME, DEFAULT, DESCRIPTION,
SPACEMGTECHNIQUE, AUTOMIGNONUSE,
MIGREQUIRESBKUP, MIGDESTINATION,
CHG_TIME, CHG_ADMIN, PROFILE
MGSYSLAN Managed System for LAN license.
MIC Memory-in-Cassette: Sony's non-volatile
memory chip in their AIT cartridge.
See: AIT; MAM
Microcode, acquire Call 1-800-IBM-SERV and request the
latest microcode for your device.
Microcode, install Can use tapeutil or ntutil (Tape Drive
Service Aids): select "Microcode
Load"...
- position to equivalent /dev/rmtx and
hit Enter;
- at "Enter Filename" enter the
filename of your new firmware;
- press F7
- download of firmware to the drive
begins; successful download will be
displayed (message "Operation
completed successfully!")
- press F10 and enter q to exit
tapeutil/ntutil.
Microcode in tape drive Run /usr/lpp/adsmserv/bin/mttest...
select 1: manual test
select 1: set device special file
e.g.: /dev/rmt0
select 20: open
select 46: device information or select
37: inquiry
MICROSECONDS See: DAYS
Microsoft Cluster Server Environment See IBM article swg21109932
scheduled backups, verify
Microsoft Exchange See: Exchange; TDP for Exchange
MIGContinue ADSMv3+ Stgpool keyword to specify
whether *SM is allowed to migrate files
that have not exceeded the MIGDelay
value. Default: Yes.
Because of the MIGDelay parameter, it is
now possible for *SM to complete a
migration process and not meet the low
migration threshold. This can occur if
the MIGDelay parameter value prevents
*SM from migrating enough files to
satisfy the low migration threshold. The
MIGContinue parameter allows system
administrators to specify whether ADSM
is allowed to migrate additional files.
Exploitation note: This setting allows a
very nice archival scheme to be
implemented. Say you run a time sharing
system, and when users leave you archive
their home directories as a tar file in
a storage pool. But you only want to
keep the most recent year's worth of
data there, and want anything older to
be written to separate tapes that can be
ejected from the tape library when they
fill. You can set MIGDelay=365 and
MIGContinue=No. This will keep recent
files in the "current" storage pool and,
when you drop the HIghmig value to cause
migration to the "oldies" storage pool
below it, files more than a year old
will go there. Neat.
See also: MIGDelay; Migration
MIGDelay ADSMv3+ Stgpool keyword to specify the
minimum number of days that a file must
remain in a storage pool before the file
becomes eligible for migration from the
storage pool. The number of days is
counted from the day that the file was
stored in the storage pool or retrieved
by a client, whichever is more recent.
(The NORETRIEVEDATE server option
prevents retrieval date recording.)
This parameter is optional.
Allowable values: 0 to 9999 (27.39 yrs)
Default: 0, which means migration is
not delayed, which causes migration to
be determined purely in terms of
occupancy level.
See also: MIGContinue; NORETRIEVEDATE
MIGFILEEXPiration Client System Options file (dsm.sys)
HSM option to specify the number of days
that copies of migrated/premigrated
files are kept on the server after they
are modified on or deleted from the
client file system. That is, the
no-longer-viable migrated copy of the
file in the HSM server is removed while
the original remains intact on the
client and a new, migrated copy of a
modified file may now be present on the
ADSM server. Note that the expiration
clock starts ticking after
reconciliation is run on the file
system; and that HSM takes care of its
own expiration, rather than it being
done in EXPIre Inventory.
Default: 7 (days)
MIGPRocess Operand of 'DEFine STGpool' and
'UPDate STGpool' to specify the number
of processes to be used for migrating
files from the (disk) storage pool to a
lower storage pool in the hierarchy of
storage pools. (You cannot specify this
operand on sequential (tape) storage
pools, in that tape is traditionally a
final destination.) Default: 1 process.
Note that it pertains to migrating from
a disk storage pool down to tape: you
cannot specify migration *from* tape.
Migration occurs with one process per
node, moving *all* of the data for one
node before going on to the data for
another node. The order of nodes
processed is per largest amount of data
in the disk storage pool. See APAR
IX77884. This means that if only one
node session is active, you will get
just one migration process, regardless
of the MIGPRocess value.
%Migr (ADSMv2 server) See: Pct Migr
Migrate files (HSM) 'dsmmigrate Filename(s)'
Migrate Install Usually refers to an upgrade of the TSM
server, in place, installing new TSM
server software on a system which had
been running an earlier TSM.
Ref: Quick Start manual
See also: dsmserv UPGRADEDB
migrate-on-close recall mode A mode that causes HSM to recall a
migrated file back to its originating
file system only temporarily. If the
file is not modified, HSM returns the
file to a migrated state when it is
closed. However, if the file is
modified, it becomes a resident file.
You can set the recall mode for a
migrated file to migrate-on-close by
using the dsmattr command, or set the
recall mode for a specific execution of
a command or series of commands to
migrate-on-close by using the dsmmode
command. Contrast with normal recall
mode and read-without-recall recall
mode.
Migrated file A file that has been copied from a local
file system to ADSM storage and replaced
with a stub file on the local file
system. Contrast with resident file and
premigrated file.
See also: Leader data; Stub file
Migrated file, accessibility 'dsmmode -dataACCess=n' (normal) makes
migrated files appear resident, and
allow them to be retrieved.
'dsmmode -dataACCess=z' makes migrated
files appear to be zero-length, and
prevents them from being retrieved.
Migrated file, display its recall 'dsmattr Filename'
mode
Migrated file, set its recall mode 'dsmattr -recallmode=n|m|r Filename'
(HSM) where recall mode is one of:
- n, for Normal
- m, for migrate-on-close
- r, for read-without-recall
Migrated files, HSM, list from client 'dsmls'
'dsmmigquery -SORTEDMigrated'
(this takes some time)
Migrated files, HSM, list from server 'Query CONtent VolName ...
Type=SPacemanaged'
Migrated files, HSM, count In dsmreconcile log.
MIgrateserver HSM: Client System Options file
(dsm.sys) option to specify the name of
the ADSM server to be used for HSM
services (file migration - space
management). Code at the head of the
dsm.sys file, not in the server stanzas.
Cannot be overridden in dsm.opt or via
command line. Using -SErvername on the
command line does not cause
MIgrateserver to use that server.
Default: server named on DEFAULTServer
option.
Migration A concept which occurs in several places
in ADSM:
Storage pools: Refers to migrating files
from one level to a lower level in a
storage pool hierarchy when the Pct Migr
value (Query STGpool report) reaches the
specified threshold percentage
(HIghmig), mitigated by other control
values such as MIGDelay and
NORETRIEVEDATE.
Occurs with one process per node
(regardless of the MIGPRocess value),
moving *all* of the data for one node
before going on to the data for another
node - or before again checking the
LOwmig value. The order of nodes
processed is per largest amount of data
in the disk storage pool.
Priority: Will wait for a Move Data
process to complete, and then take a
tape drive before any additional waiting
Move Data processes start.
By using the ADSMv3 Virtual Volumes
capability, the output may be stored on
another ADSM server (electronic
vaulting).
HSM: The process of copying a file from
a local file system to ADSM storage and
replacing the file with a stub file on
the local file system.
See also: threshold migration; demand
migration; selective migration
See: DEFine STGpool; HIghmig; LOwmig;
MIGDelay, NORETRIEVEDATE
Migration, Auto, manually perform for HSM: 'dsmautomig [FSname]'
file system
Migration, prevent at start-up To prevent migration from occurring
during a problematic TSM server restart,
add the following (undocumented) option
to the server options file:
NOMIGRRECL
Migration, prevent over time To prevent migration from occurring
during normal TSM operation, do
'UPDate STGpool <PoolName>
MIGContinue=No MIGDelay=9999'
This says that the server is not to
migrate files unless the files satisfy
the migration delay time, and that delay
time is maximized (27.39 years), which
in combination prevents migration.
Migration, storage pool files General ADSM concept of migrating a
storage pool's files down to the next
storage pool in a hierarchy when a given
pool exceeds its high threshold value.
Migration, storage pool files, query 'Query STGpool [STGpoolName]'
Migration, storage pool files, set The high migration threshold is
specified via the "HIghmig=N" operand of
'DEFine STGpool' and 'UPDate STGpool'.
The low migration threshold is specified
via the "LOwmig=N" operand.
Note that LOwmig is effectively
overridden to 0 when CAChe=Yes is in
effect for the storage pool, because
ADSM wants to cache everything once
migration is triggered.
Migration and reclamation As a TSM server pool receives data, the
server checks to see if migration is
needed. This migration causes cascading
checks as the next stgpool in the
hierarchy receives data. When the bottom
of the storage pool hierarchy is
reached, the migration checking thread
will initiate reclamation checking
against this lowest level stgpool if it
is a sequential stgpool. If there are
multiple sequential storage pools within
the storage pool hierarchy, reclamation
processing will start on the lowest
hierarchy position and proceed to the
next level storage pool in the
hierarchy.
Migration candidate considerations Too small? A file will not be a
(HSM) candidate for migration if its size is
smaller than the stub file size (as
revealed in 'dsmmigfs query').
Management class proper? As installed,
HSM will not migrate files unless they
have been backed up.
'dsmmigquery FSname'
Migration candidates, list (HSM) 'dsmmigquery FSname'
Migration candidates list (HSM) A prioritized list of files that are
eligible for automatic migration at the
time the list is built. Files are
prioritized for migration based on the
number of days since they were last
accessed (atime), their size, and the
age and size factors specified for a
file system. Note that time of last
access is a measure of demand for the
file, so is used as a basis rather than
modification time.
Can be rebuilt by the client root user
command:
'dsmreconcile [-Candidatelist]
[-Fileinfo]'
See: candidates
Migration in progress? 'Query STGpool ____ Format=Detailed'
"Migration in Progress?" value.
Migration not happening That is, migration from a higher level
storage pool to a lower one in a
storage pool hierarchy is not
happening.
- The presence of server option
NOMIGRRECL will prevent it.
Migration not happening (HSM problem) See: HSM migration not happening
Migration performance The migration of data from one storage
pool to a lower one - particularly to
tape - is limited by:
- Your collocation specification, which
can cause many tapes to be mounted as
files are "delivered" to their
appropriate places in the next storage
pool.
- The *SM database is in the middle of
the action, so its cache hit ratio
performance is important with many
small files.
- Long mount retention periods can
prolong processing in having to wait
for an idle tape to be dismounted
before the next one can be mounted.
- The MOVEBatchsize and MOVESizethresh
server option values will govern how
much data moves in each server
transaction.
- The performance of your tape
technology is also a factor.
- In moving from disk to tape, realize
that the conflicting characteristics
of the two media can hamper
performance... Disk is a bit-serial
medium which has to perform seeks to
get to data. Tape is a byte-parallel
medium which is always ready to write
when in streaming mode, where its
transfer rate is typically much faster
than disk. If the tape has to wait for
the
disk to provide data, the tape drive
is forced into start/stop mode, which
particularly worsens throughput in
some tape technologies.
- With caching in effect, there will be
more disk seek time to step over older
cached files in migrating new files,
while the receiving tape drive waits.
See: MOVEBatchsize, MOVESizethresh
Migration Priority A number assigned to a file in the
Migration Candidates list (candidates
file), computed by:
- multiplying the number of days since
the file was last accessed by the age
factor;
- multiplying the size of the file in
1-KB blocks times the size factor;
- add those two products to produce the
priority score (Migration Priority).
This ends up in the first field of the
candidates file line.
See: candidates
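The priority arithmetic above can be sketched as follows; the age, size, and factor values are made-up examples (the per-filesystem factors are set via dsmmigfs on your system):

```shell
# Migration Priority =
#   days-since-last-access * age_factor + size_in_KB * size_factor
days_since_access=30   # hypothetical atime age
size_kb=2048           # hypothetical 2 MB file, in 1-KB blocks
age_factor=1           # hypothetical per-filesystem weightings
size_factor=1
echo "priority: $((days_since_access * age_factor + size_kb * size_factor))"
```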
Migration processes, number of Code on "MIGPRocess=N" keyword of
'DEFine STGpool' and 'UPDate STGpool'.
Default: 1.
See: MIGPRocess
Migration storage pool (HSM) Specified via
'DEFine MGmtclass MIGDESTination=StgPl'
or
'UPDate MGmtclass MIGDESTination=StgPl'.
Default destination: SPACEMGPOOL.
Migration vs. Backup, priorities Backups have priority over migration.
MIGREQUIRESBkup (HSM) Mgmtclass parameter specifying that a
backup version of a file must exist
before the file can be migrated.
Default: Yes
Query: 'Query MGmtclass' and look for
"Backup Required Before Migration".
See also: Backup Required Before
Migration; RESToremigstate
MIM (3590) Media Information Message. Sent to
the host system. AIX: appears in Error
Log.
Severity 1 indicates high temporary
read/write errors were detected
(moderate severity).
Severity 2 indicates permanent
read/write errors were detected (serious
severity).
Severity 3 indicates tape directory
errors were detected (acute severity).
Ref: "3590 Operator Guide" manual
(GA32-0330-06) esp. Appendix B
"Statistical Analysis and Reporting
System User Guide"
See also: SARS; SIM
MIN SQL aggregate function to yield the smallest
number from all the rows of a given
numeric column.
See also: AVG; COUNT; MAX; SUM
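The aggregate behavior is the same as in standard SQL; here is a small illustration using SQLite (the TSM server's SQL dialect differs in details, and the table and column here are invented for the demo):

```python
import sqlite3

# In-memory table standing in for a TSM-like table with a numeric column.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE volumes (pct_util REAL)")
con.executemany("INSERT INTO volumes VALUES (?)", [(12.5,), (88.0,), (45.1,)])

# MIN scans all rows and returns the single smallest value.
smallest, = con.execute("SELECT MIN(pct_util) FROM volumes").fetchone()
print(smallest)  # -> 12.5
```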
MINRecalldaemons Client System Options file (dsm.sys)
option to specify the minimum number of
dsmrecalld daemons which may run at one
time to service HSM recall requests.
Default: 3
See also: MAXRecalldaemons
MINUTE(timestamp) SQL function to return the minutes value
from a timestamp.
See also: HOUR(), SECOND()
MINUTES See: DAYS
Mirror database Define a volume copy via:
'DEFine DBCopy Db_VolName Copy_VolName'
MIRRORRead DB server option, query 'Query OPTion'
MIRRORRead LOG|DB Normal|Verify Definition in the server options file.
Specifies the mode used for reading
recovery log pages or data base log
pages. Possibilities:
Normal: read one mirrored volume to
obtain the desired page;
Verify: read all mirror volumes for a
page every time a recovery log
or database page is read, and
if an invalid page is
encountered, to resync with
valid page from other volume
(decreases performance but
assures readability).
This should be in effect when a
(standalone) dsmserv auditdb is
run.
Default: Normal
Ref: Installing the Server...
MIRRORRead LOG server option, query 'Query OPTion'
MIRRORWrite DB server option, query 'Query OPTion'
MIRRORWrite LOG|DB Sequential|Parallel Definition in the server options file.
Specifies how mirrored volumes are
accessed when the server writes pages to
the recovery log or data base log during
normal processing. "Sequential" is
"conditional mirroring" such that data
won't be written to a mirror copy until
successfully written to the primary.
Default: Sequential for DB;
Parallel for LOG
Comments: *SM Sequential mirroring *is*
better than RAID because of the danger
of partial page writes - which *do*
occur in the real world as hardware and
human defects evidence themselves. RAID
will perform the partial writing in
parallel, thus resulting in a corrupted
database if the writing is interrupted,
whereas *SM Sequential mirroring will
leave you with a recoverable database -
by simple resync, not "recovery". That
is, RAID is just as problematic as *SM
Parallel mirroring.
Mirroring of the *SM database is much
debated. You could let the hardware or
operating system perform mirroring
instead, but you lose the advantages of
the *SM application mirroring - which
also include being able to put the
mirrors on any arbitrary volume, not in
a single Volume Group as AIX insists.
Ref: Installing the Server...
MIRRORWrite LOG server option, query 'Query OPTion'
Missed Status in Query EVent output indicating
that the scheduled startup window for
the event has passed and the schedule
did not begin. When you have SCHEDMODe
PRompted and have a client schedule set
up for the node, then it is missed if
the server couldn't contact the client
within the time window.
The dsmsched.log will typically show
"Scheduler has been stopped."
One mundane cause of Missed is that the
client scheduler process already has a
(long-running) session underway: for
example, a backup which, because of a
lot of new data in the file system, runs
well past the start time for the next
session.
See also: Failed; Schedule, missed
Mobile Backups See: Adaptive differencing; SUBFILE*
MODE A TSM server Copy Group attribute that
specifies whether a backup should be
performed for an object that was not
modified since the last time it was
backed up.
Choices:
MODified The default, almost always
used. Causes a file to be backed up
only if it has changed since the last
backup. In general, TSM considers a
file changed if any of the following is
true:
- The date last modified is different
- The file size is different
- The file owner is different
- The file permissions are different
Criteria may vary by platform,
particularly in Windows.
ABSolute Specifies that file system
objects are to be backed up regardless
of whether they have been modified.
Putting this choice into effect for one
backup is a technique for performing a
full backup of a file system.
See also: ABSolute; MODified;
Backup, which files are backed up
MODE (-MODE) Client option used in conjunction with
Backup Image to specify the type of file
system style backup that should be used
to supplement the last image backup.
Choices:
Selective The default. Causes the
usual image backup to be performed, to
distinguish from the Incremental
choice.
(The name of this choice is unfortunate
in that it invites confusion with the
standard TSM Selective backup, which
this choice has nothing to do with. The
name of this choice should have been
"Image".
Incremental Only back up files whose
modification timestamp is later than
that of the last image backup. This is
accomplished via an -INCRBYDate backup,
whose nature means that deleted files
cannot be detected and head toward
expiration on the server, and nor can
files whose attributes have changed be
detected for backup. If there was no
prior image backup, this Incremental
choice will be ignored as an erroneous
specification, and a full image backup
will be performed, as if Selective had
instead been the choice.
See also: dsmc Backup Image
MODified A backup Copy Group attribute that
indicates that an object is considered
for backup only if it has been changed
since the last backup. An object is
considered changed if the date, size,
owner, or permissions have changed.
(Note that the file will be physically
backed up again only if TSM deems the
content of the file to have been
changed: if only the attributes (e.g.,
Unix permissions) have been changed,
then TSM will simply update the
attributes of the object on the server.)
See also: MODE
Contrast with: ABSolute
See also: Backup, which files are backed
up; SERialization (another Copy Group
parameter)
Monitoring products See: TSM monitoring products
MONTHS See: DAYS
Mount in progress Server command: 'SHow ASM'
Mount limit See: MOUNTLimit
Mount message See: TAPEPrompt
Mount point, keep over whole session? The 'REGister Node' operand KEEPMP
controls this.
Mount point queue Server command: 'SHow ASQ'
Mount point wait queue IBM internal term for how ADSM
prioritizes server tasks needing tapes.
MOVe Data operations have a higher
priority than some other tasks.
Mount points Defined globally in DEVclass MOUNTLimit
Restricted thereunder via REGister Node
parameters KEEPMP and MAXNUMMP,
governing the number of mount points
available for other sessions.
See: KEEPMP; MAXNUMMP; MOUNTLimit
Mount points, maximum See: MOUNTLimit
Mount points, report active 'SHow MP'
Mount request timeout message ANR8426E on a CHECKIn LIBVolume.
Mount requests, pending 'Query REQuest' (q.v.).
Via Unix command:
'mtlib -l /dev/lmcp0 -qS'
Mount requests, service console See: -MOUNTmode
Mount Retention Output field in report from
'Query DEVclass Format=Detailed'.
Value is defined via MOUNTRetention
operand of 'DEFine DEVclass' command.
See also: KEEPMP; MAXNUMMP; MOUNTLimit;
MOUNTRetention
Mount retention period, change See: MOUNTRetention
Mount tape Via Unix command:
'mtlib -l /dev/lmcp0 -m -f /dev/rmt?
-V VolName' # Absolute drivenm
'mtlib -l /dev/lmcp0 -m -x Rel_Drive#
-V VolName' # Relative drive#
(but note that the relative drive
method is unreliable).
Note that there is no ADSM command to
explicitly mount a tape: mounts are
implicit by need.
Once mounted, it takes 20 seconds for
the tape to settle and become ready for
processing.
See also: Dismount tape
Mount tape, time required For a 3590 tape drive:
If a drive is free, it takes a nominal
32 seconds for the 3494 robot to move to
the storage cell containing the tape,
carry the tape to the drive, load the
tape, and have it wind within the drive.
Wind-on time itself is about 20 seconds.
Note that if you have two tape drives
and your mount request is behind another
which is just starting to be processed,
you should expect your mount to take
twice as long, or about 64 seconds.
To rewind, dismount, mount a new tape in
that drive, and position it can take 120
seconds.
If a mount is taking an unusually long
time, it could mean that the library has
a cleaning tape mounted, cleaning the
drive. Or the tape could be defective,
giving the drive a hard time as it tries
to mount the tape.
MOuntable DRM media state for volumes containing
valid data and available for onsite
processing.
See also: COUrier; COURIERRetrieve;
NOTMOuntable; VAult; VAULTRetrieve
MOUNTABLEInlib State for a volume that had been
processed by the MOVe MEDia command: the
volume contains valid data, is
mountable, and is in the library.
See also: MOVe DRMedia
MOUNTABLENotinlib State for a volume that had been
processed by the MOVe MEDia command: the
volume may contain valid data, is
mountable, but is not in the library (is
in its external, overflow location).
See msg ANR1425W.
See also: MOVe DRMedia
Mounted, is a tape mounted in a drive? The 3494 Database "Device" column will
show a drive number if the tape is
mounted, and a Cell number of "_ K 6",
where '_' is the wall number. If the
Cell number says "Gripper", the tape is
in the process of being mounted.
Mounted volumes Server command: 'SHow ASM'
MOUNTLimit (mount limit) Operand in 'DEFine DEVclass', to specify
the maximum number of concurrent mounts
within that device class (which is the
same as the maximum for the library
definition associated with that device
class). This is the maximum number of
tape drives which can be used at one
time, among all the tape drives you
have. Usually, you would have your
MOUNTLimit value be equal to the number
of drives you have, so that all of them
may be used at the same time, to fully
service all your clients.
Affects BAckup STGpool, etc.
It should be set no higher than the
number of physical drives you have
available. In ADSMv3+, you can specify
"MOUNTLimit=DRIVES", and ADSM will then
dynamically adjust the MOUNTLimit.
However, IBM recommends (as in the 5.2.2
AIX Admin Guide) that you explicitly
specify the mount limit instead of using
MOUNTLimit=DRIVES.
Default: 1.
Note that MOUNTLimit is an absolute
limit, which sets an upper bound for
related configuration parameters
RESOURceutilization and MAXNUMMP.
-MOUNTmode Command-line option for *SM
administrative client commands
('dsmadmc', etc.) to have all mount
messages displayed at that terminal.
No administrative commands are accepted.
See also: -CONsolemode; dsmadmc
Ref: Administrator's Reference
MOUNTRetention Devclass operand, to specify how long,
in minutes (0-9999), to retain an idle
sequential access volume before
dismounting it. Default: 60 (minutes).
The value should be long enough to allow
for re-use of same mounted tape within
a reasonable time, but not so long that
the tape could end up trapped in the
drive upon an operating system shutdown
which does not give *SM the opportunity
to dismount it. (Always shut *SM down
cleanly if possible.) Another reason to
keep mount retention fairly short is
that having a tape left in a drive only
delays a mount for a new request, in
that the stale tape must be dismounted
first: this is a big consideration in
restorals, particularly of a large
quantity of data as for a whole file
system, in which case it would be worth
minimizing the MOUNTRetention when such
a job runs. Also, the drive mechanism
stays on while tape is mounted, so adds
wear.
Keep mount retention short when
collocation is employed, to prevent
waiting for dismounts, given the
elevated number of mounts involved.
But keep the retention value sufficient
to cover client think time during file
system backups.
Msgs: ANR8325I for dismount when
MOUNTRetention expires.
See also: KEEPMP; MAXNUMMP; MOUNTLimit
MOUNTRetention, query 'Query DEVclass Format=Detailed' and
look for "Mount Retention" value.
Mounts, current 'SHow MP'.
Or Via Unix command:
'mtlib -l /dev/lmcp0 -qS'
for the number of mounted drives;
'mtlib -l /dev/lmcp0 -vqM'
for details on mounted drives.
Mounts, maximum See: MOUNTLimit
Mounts, monitor Start an "administrative client session"
to control and monitor the server from a
remote workstation, via the command:
'dsmadmc -MOUNTmode'. If having a human
operator perform mounts, consider
setting up a "mounts operator" admin ID
and a shell script which would invoke
something to the effect of:
'dsmadmc -ID=mountop -MOUNTmode
-OUTfile=/var/log/ADSM-mounts.YYYYMMDD'
and thus log all mounts.
Ref: Administrator's Reference
Mounts, pending Via ADSM: 'Query REQuest' (q.v.).
Via Unix command:
'mtlib -l /dev/lmcp0 -qS'
Mounts, historical SELECT * FROM SUMMARY WHERE
ACTIVITY='TAPE MOUNT'
Mounts count, by drive See: 3590 tape mounts, by drive
MOUNTWait DEVclass and CHECKIn LIBVolume command
operand specifying the number of minutes
to wait for a tape to mount, on an
allocated drive.
Note that this pertains only to the time
taken for a tape to be mounted by tape
robot or operator once a tape mount
request has been issued, and has been
honored by the library. Example: a task
requires a tape volume which is not in
the library. It does not pertain to a
wait for a tape *drive* when for example
one incremental backup is taking up all
tape drives and another incremental
backup comes along needing a tape drive.
Default: 60 min.
Advice: The MOUNTWait value should be
larger than the MOUNTRetention to assure
that idle volumes have a chance to
dismount and free drives before the
MOUNTWait time expires.
MOVe Data Server command to move a volume's viable
data to volume(s) within the same
sequential access volume storage pool
(default) or a specified sequential
access volume storage pool. (MOVe Data
cannot be used on DISK devtype (Random
Access) storage pools. The source
storage pool may be a disk pool, with
the target being the defined
NEXTstgpool, whereby MOVe Data
essentially will accomplish what
migration does, but physically rather
than logically.
Copy storage pool volume contents can
only be moved to other volumes in the
same copy storage pool: you cannot move
copy storage pool data across copy
storage pools.
MOVe Data can effectively reclaim a tape
by compacting the data onto another
volume. Syntax:
'MOVe Data VolName [STGpool=PoolName]
[RECONStruct=No|Yes] [Wait=No|Yes]'
RECONStruct is new with TSM 5.1, and
allows the vacated space within
aggregates to be reclaimed, thus
allowing Move Data to be the equivalent
of Reclamation. The reconstruction does
incur more time. And, again, this can be
done only on sequential access storage
pools.
The "from" volume gets mounted R/O.
By default, data is moved by copying
Aggregates as-is: unlike Reclamation,
MOVe Data does not reclaim space where
logical files expired and were logically
deleted from *within* an Aggregate. (Per
1998 APAR IX82232: RECONSTRUCTION DOES
NOT OCCUR DURING MOVE DATA: "MOVe Data
by design does not perform
reconstruction of aggregates with empty
space. Although this was discussed
during design, it was decided to only
perform reconstruction during
reclamation. A major reason for this
decision was performance as
reconstruction of aggregates requires
additional overhead that MOVe Data does
not; hence requires additional time to
complete.")
Like Reclamation, MOVe Data brings
together all the pieces of each
filespace, which means it has to skip
down the tape to get to each piece. (The
portion of a filespace that is on a
volume is called a Cluster.)
In addition, if the target storage pool
is collocated, each cluster may ask for
a new output tape, and TSM isn't smart
enough to find all the clusters that are
bound for a particular output tape and
reclaim them together. Instead it is
driven by the order of filespaces on the
input tape, so the same output tape may
be mounted many times.
In doing a MOVe Data, *SM attempts to
fill volumes, so it will select the most
full available volume in the storage
pool. Note that the data on the volume
will be inaccessible to users until the
operation completes.
During the move, the 'Query PRocess'
"Moved Bytes" reflects the data in
uncompressed form.
Ends with message ANR1141I (which fails
to report byte count).
May be preempted by higher priority
operation - see message ANR1143W - but
may not preempt the lower priority
reclamation process (msg ANR2420E).
(Move Data has a higher priority on what
IBM internally refers to as the Mount
point wait queue.)
See also: AUDit Volume; NOPREEMPT;
Pct Util; Reclamation
Move Data, find required volumes Move Data would obviously involve the
subject volume itself, and any volumes
containing files that spanned into (the
front of) or out of (the back of) the
volume. This would be identifiable by
the Segment number in Query CONtent
_volname_, or the corresponding Select,
being other than 1/1. For spanning
files, you would then have to perform a
Content table search on the related
segment. (A tape in Filling status would
obviously have no span-out-of segment on
another volume.)
Move Data, offsite volumes When (copy storage pool) volumes are
marked "ACCess=OFfsite", TSM knows not
to use those volumes, to instead use
onsite copy storage pool volumes
containing the same data (from the same
primary storage pool). Naturally, the
files on one offsite volume may be found
on any number of onsite volumes, so
multiple mounts may be expected,
accompanied by a bunch of TSM "think
time" between volumes.
See also: ANR1173E
MOVe Data and caching disk volumes Doing a Move Data on a cached disk pool
volume has the effect of clearing the
cache. This is obvious, when you think
about it, as the cache represents data
that is already in the lower storage
pool in the hierarchy...that data has
been "pre-moved".
MOVe Data performance Move Data operations can be expected to
involve considerable repositioning as
the source tape is processed, to skip
over full-expired Aggregates. Whether
your tape technology is good at
start-stop operations will affect your
throughput.
See also: BUFPoolsize; MOVEBatchsize;
MOVESizethresh
MOVe DRMedia DRM server command to move disaster
recovery media offsite and back onsite.
Will eject the volumes out of the
library before transitioning the volumes
to the destination state. Syntax:
'MOVe DRMedia VolName
[WHERESTate=MOuntable|
NOTMOuntable|COUrier|
VAULTRetrieve|COURIERRetrieve]
[BEGINDate=date] [ENDDate=date]
[BEGINTime=time] [ENDTime=time]
[COPYstgpool=StgpoolName]
[DBBackup=Yes|No]
[REMove=Yes|No|Bulk]
[TOSTate=NOTMOuntable|
COUrier|VAult|COURIERRetrieve|
ONSITERetrieve]
[WHERELOcation=location]
[TOLOcation=location]
[CMd=________]
[CMDFilename=file_name]
[APPend=No|Yes]
[Wait=No|Yes]'
Do not do a MOVe DRMedia where a
MOVe MEDia is called for.
REMove=BUlk is not supposed to result in
a Reply required on SCSI libraries, but
may: the workaround is Wait=Yes.
MOVe MEDia ADSMv3 command to deal with a full
library by moving storage pool volumes
to an external "overflow" location,
typically named on the OVFLOcation
operand of Primary and Copy Storage
Pools. (Think "poor man's DRM".) Unlike
with Checkout, the volume remains
requestable and ultimately mountable,
via an outstanding mount request. (Note
that, internally, MOVe MEDia actually
performs a Checkout Libvolume, as
indicated in its ANR6696I message.)
Syntax:
'MOVe MEDia VolName STGpool=PoolName
[Days=NdaysSinceLastUsage]
[WHERESTate=MOUNTABLEInlib|
MOUNTABLENotinlib]
[WHERESTATUs=FULl,FILling,EMPty]
[ACCess=READWrite|READOnly]
[OVFLOcation=________]
[REMove=Yes|No|Bulk]
[CMd="command"]
[CMDFilename=file_name]
[APPend=No|Yes]
[CHECKLabel=Yes|No]'
By default, moving a volume out of the
library causes it to be made ReadOnly,
and moving it back into the library
causes it to be made ReadWrite.
If you are moving a volume back into a
library (MOUNTABLENotinlib) and it is
not empty, you must specify
WHERESTATUs=FULl for the command to
work, else get ANR6691E error.
OVFLOcation can be used to override that
specification had by the storage pool.
Do not do a MOVe MEDia where a
MOVe DRMedia is called for.
This command moves whole volumes, not
the data within them.
Note that a MOVe MEDia will hang if a
LABEl LIBVolume is running.
After doing MOVe MEDia to move the
volume back into the library:
- The volume will be READWrite, rather
than the READOnly that is conventional
for a moved-out volume;
- Query MEDia no longer shows the volume
(Query Volume does), until CHECKIn is
done;
- You must do a CHECKIn LIBVolume to get
the volume back into play.
What happens when there are more than 10
tapes to go to the 3494 Convenience I/O
Station? TSM moves one at a time, then
an Intervention Required shows up ("The
convenience I/O station is full"): when
you empty the I/O station, the Int Req
goes away, and TSM resumes ejecting
tapes. No indication of the condition
shows up in the Activity Log.
Watch out for ANR8824E message condition
where the request to the library is
lost: the volume will probably have
actually been ejected from the library,
but the MOVe MEDia updating of its
status to MOUNTABLENotinlib would not
have occurred, leaving it in an
in-between state.
Msgs: ANR8762I; ANR2017I; ANR0984I;
ANR0609I; ANR0610I; ANR6696I; ANR8766I;
ANR6683I; ANR6682I; ANR0611I;
ANR0987I (completion)
See also: Overflow Storage Pool;
OVFLOcation; Query REQuest
Ref: Admin Guide, "Managing a Full
Library"
MOVe NODEdata TSM 5.1+ server command to move data for
all filespaces for one or more nodes.
As with the 'MOVe Data' command, when
the source storage pool is a primary
pool, you can move data to other volumes
within the same pool or to another
primary pool; but when the source
storage pool is a copy pool, data can
only be moved to other volumes within
that copy pool (so the TOstgpool
parameter is not usable).
This command can operate upon data in a
storage pool whose data format is NATIVE
or NONBLOCK.
As of 2003/11 the Reference Manual fails
to advise what the Tech Guide does: that
the Access mode of the volumes must be
READWRITE or READONLY, which precludes
OFFSITE and any possibility of onsite
volumes standing in for the offsite
vols.
Cautions: As of 2003/05, the command may
report success though that was not the
case, as in specifying a non-existent
filespace.
Ref: TSM 5.1 Technical Guide
MOVEBatchsize Definition in the server options file.
Specifies the maximum number of client
files that can be grouped together in a
batch within the same server transaction
for storage pool backup/restore,
migration, reclamation, or MOVe Data
operations. Specify 1-1000 (files).
Default: 40 (files).
TSM: If the SELFTUNETXNsize server
option is set to Yes, the server sets
the MOVEBatchsize option to its maximum
values to optimize server throughput.
Beware: A high value can cause severe
performance problems in some server
architectures when doing 'BAckup DB'.
MOVEBatchsize, query 'Query OPTion'; look for
"MoveBatchSize".
MOVESizethresh Definition in the server options file.
Specifies a threshold, in megabytes, for
the amount of data moved as a batch
within the same server transaction for
storage pool backup/restore, migration,
reclamation, or MOVe Data operations.
Specify 1-500 (MB)
Default: 500 (megabytes).
TSM: If the SELFTUNETXNsize server
option is set to Yes, the server sets
the MOVESizethresh option to its maximum
values to optimize server throughput.
MOVESizethresh and MOVEBatchsize Server data is moved in transaction
units whose capacity is controlled by
the MOVEBatchsize and MOVESizethresh
server options. MOVEBatchsize specifies
the number of files that are to be moved
within the same server transaction, and
MOVESizethresh specifies, in megabytes,
the amount of data to be moved within
the same server transaction. When either
threshold is reached, a new transaction
is started.
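The dual-threshold batching described above can be sketched as follows (a simplified model only; actual transaction grouping is internal to the server, and the file sizes here are hypothetical):

```python
def batch_files(file_sizes_mb, move_batch_size=40, move_size_thresh=500):
    """Group files into transaction batches: a new batch starts when either
    the file-count limit (MOVEBatchsize) or the cumulative-size limit
    (MOVESizethresh, in MB) would be exceeded by the next file."""
    batches, current, current_mb = [], [], 0
    for size in file_sizes_mb:
        if current and (len(current) >= move_batch_size
                        or current_mb + size > move_size_thresh):
            batches.append(current)       # commit this transaction's batch
            current, current_mb = [], 0
        current.append(size)
        current_mb += size
    if current:
        batches.append(current)
    return batches

# 100 files of 10 MB each: the 40-file count limit triggers before the
# 500 MB size limit, yielding batches of 40, 40, and 20 files.
print([len(b) for b in batch_files([10] * 100)])  # -> [40, 40, 20]
```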
MOVESizethresh, query 'Query OPTion'; seek "MoveSizeThresh".
MP1 Metal Particle 1 tape oxide formulation
type, as used in the 3590.
Lifetime: According to Imation studies
(http://www.thic.org/pdf/Oct00/
imation.jgoins.001003.pdf)
"All Studies Conclude that Advanced
Metal Particle (MP1) Magnetic Coatings
Will Achieve a Projected Magnetic Life
of 15-30 Years. Media will lose 5% -
10% of its magnetic moment after 15
years. Media resists chemical
degradation even after direct exposure
to extreme environments."
MPTIMEOUT TSM4.1 server option for 3494 sharing.
Specifies the maximum time in seconds
the server will retry before failing the
request. The minimum and maximum values
allowed are 30 seconds and 9999 seconds.
Default: 30 seconds
See also: 3494SHARED; DRIVEACQUIRERETRY
MSCS Microsoft Cluster Server.
MSGINTerval Definition in the server options file.
Specifies the number of minutes that the
ADSM server waits before sending
subsequent message to a tape operator
requesting a tape mount, as identified
by the MOUNTOP option.
Default: 1 (minute)
Ref: Installing the Server...
MSGINTerval server option, query 'Query OPTion'
MSI (.msi file suffix) Designates the Microsoft Software
Installer.
Note that such files are on the CD-ROM,
not in the online download area (which
has .exe, .TXT, and .FTP files).
If you copy the files from the CD for
alternate processing, be aware that
Microsoft does not support running an
MSI from a mapped network drive when you
are connected to a server via Remote
Desktop to a terminal server.
MSI (Microsoft Installer) return codes See item 21050782 on the IBM web site
("Microsoft Installer (MSI) Return Codes
for Tivoli Storage Manager Client &
Server").
msiexec command Invokes the Microsoft Software Installer
as for example
msiexec /i "Z:\tsm_images\TSM_BA_Client
\IBM Tivoli Storage Manager Client.msi"
to install from the CD-ROM or network
drive containing the installation image.
See: Windows client manual
mt See: /dev/mt
MT0, MT1 Tape drive identifiers on Windows 2000.
Example: MT0.0.0.2 for a 3590E drive in
a 3494 library.
mt_._._._ Designation for a tape drive in a
Windows configuration, using Fibre
Channel, as in mt0.0.0.5, where the
encoding means "magnetic tape device,
Target ID 0, Lun 0, Bus 0, with the
final digit being auto assigned by
Windows based on the time of first
detection.
mtadata Exchange server: Message Transfer Agent
data, as in \exchsrvr\mtadata
mtevent Command provided with 3494 Tape Library
Device Driver, being an interface to the
MTIOCLEW function, to wait for library
events and display them.
Usage: mtevent -[ltv?]
-l[filename] Library special filename,
i.e. "/dev/lmcp0".
-t[timeout] Wait for asynchronous
library event, for the
specified # of seconds.
If omitted, the program
will wait indefinitely.
-? this help text.
NOTE: The -l argument is required.
mtlib Command provided with 3494 Tape Library
Device Driver to manually interact with
the Library Manager. For environments:
AIX, SGI, Sun, HP-UX, Windows NT/2000.
Do 'mtlib -\?' to get usage info - but
beware that its output fails to show the
legal combinations of options as the
Device Drivers manual does.
-L is used to specify the name of a
file containing the volsers to be
processed - and only with the -a and
-C operands. This is handy for
resetting Category Code values in a
3494 library, via like: 'mtlib -l
/dev/lmcp0 -C -L filename -t"012C"'
-v (verbose) will identify each element
of the output, which makes things
clearer than the "quick" output
which is produced in the absence of
the -v option.
Specify category codes as hex numbers.
(Remember that this is a library
physical command: it knows nothing about
TSM or what is defined in your TSM
system.)
If command fails because "the library is
offline to the host", it indicates
either that the host is not defined in
the 3494's LAN Hosts allowance list, or
that the host is not on the same subnet
as the 3494 in the unusual case that the
subnet is defined as Not Routed.
A mount (-m) may take a considerable
time and then yield:
"Mount operation Error - Internal error"
due to the tape being problematic, but
the mount will probably work.
Ref: "IBM SCSI Tape Drive, Medium
Changer, and Library Device Drivers:
Installation and User's Guide"
(GC35-0154)
mttest Undocumented command for performing
ioctl operations and set's on a tape
drive.
/usr/lpp/adsmserv/bin/mttest. Syntax:
'mttest <-f batch-input-file> <-o
batch-output-file> <-d special-file>'
MTU Maximum Transmission Unit: the hardware
buffer size of an Ethernet card, as
revealed by 'netstat -i'. This is the
maximum size of the frame/packet that
can be transmitted by the adapter.
(Larger packets need to be subdivided to
be transmitted.)
The standard Ethernet MTU size is 1500.
Note that this maximum packet size is a
constraining factor for processes which
use ethernet. For example, a single
process can max out a 10Mb ethernet
card, but it can only drive a 100Mb card
about 2.5x faster because the measly
packet size is so constraining. To make
full use of higher-speed ethernets,
then, one must have multiple processes
feeding them. (10Mb, 100Mb, and gigabit
ethernet all use the same format and
frame size.)
See: TCPNodelay
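As a rough illustration of how the MTU constrains per-packet payload (a sketch assuming plain IPv4 + TCP headers of 40 bytes total, ignoring options and Ethernet framing overhead):

```python
import math

def tcp_frames_needed(payload_bytes, mtu=1500, ip_tcp_headers=40):
    """Number of Ethernet frames needed to carry a TCP payload, given a
    per-frame payload (MSS) of MTU minus the IP and TCP headers."""
    mss = mtu - ip_tcp_headers  # 1460 bytes for standard Ethernet
    return math.ceil(payload_bytes / mss)

# Moving 1 MiB over standard Ethernet takes 719 frames, which is why
# higher-speed links need multiple feeding processes to stay busy.
print(tcp_frames_needed(1_048_576))  # -> 719
```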
Multi-homed client See: TCPCLIENTAddress
Multi-session Client TSM 3.7 facility which multi-threads, to
(Multi session client) start multiple sessions, in order to
transfer data more quickly. This will
work for the following program
components: Backup-archive client
(including Enterprise Management Agent,
formerly Web client) Backup and Archive
functions. This new functionality is
completely transparent: there is no
need to switch it on or off. The TSM
client will decide if a performance
improvement can be gained by starting an
additional session to the server. This
can result in as many as five sessions
running at one time to read files and
send them to the server. (So says the
B/A client manual, under "Performing
Backups Using a GUI", "Displaying Backup
Processing Status".)
Types of threads:
- Compare: For generating the list of
backup or archive candidate files,
which is handed over to the Data
Transfer thread. There can be one or
more simultaneous Compare threads.
- Data Transfer: Interacts with the
client file system to read or write
files in the TSM operation, performs
compression/decompression, handles
data transfer with the server, and
awaits commitment of data sent to the
server. There can be one or more
simultaneous Data Transfer threads.
- Monitor: The multi-session governor.
Decides if multiple sessions would be
beneficial and initiates them.
The number of sessions possible is
governed by the RESOURceutilization
client option setting and server option
MAXSessions.
Mitigating factors: Using collocation,
only one data transfer session per file
space will write to tape at one time:
all other data transfer sessions for the
file space will be in Media Wait state.
Under TSM 3.7 Unix, with "PASSWORDAccess
Generate" in effect, a non-root session
is single-threaded because the TCA does
not support multiple sessions.
Multi-session Client is supported with
any server version; but if the server is
below 3.7, the limit is 2 sessions.
Considerations: Multiple accounting
records for multiple simultaneous
sessions from one command invocation.
Ref: TSM 3.7 Technical Guide, 6.1
See also: MAXNUMMP; MAXSessions;
RESOURceutilization; TCA; Threads,
client
Multi-Session Restore TSM 5.1 facility which allows the
backup-archive clients to perform
multiple restore sessions for No Query
Restore operations, increasing the speed
of restores. (Both server and client
must be at least 5.1.) This is similar
to the multiple backup session feature.
Elements:
- RESOURceutilization parameter in
dsm.sys
- MAXNUMMP setting for the node
definition in the server
- MAXSessions parameter in dsmserv.opt
The efficacy of MSR is obviously limited
by the number of volumes which can be
used in parallel.
From an IBM System Journal article:
"During a large-scale restore operation
(e.g., entire file space or host), the
TSM server notifies the client whether
additional sessions may be started to
restore data through parallel transfer.
The notification is subject to
configuration settings that can limit
the number of mount points (e.g., tape
drives) that are consumed by a client
node, the number of mount points
available in a particular storage pool,
the number of volumes on which the
client data are stored, and a parameter
on the client that can be used to
control the resource utilization for TSM
operations. The server prepares for a
large-scale restore operation by
scanning database tables to retrieve
information on the volumes that contain
the client's data. Every distinct volume
found represents an opportunity for a
separate session to restore the data.
The client automatically starts new
sessions, subject to the afore-mentioned
constraints, in an attempt to maximize
throughput."
Additional info:
http://www.ibm.com/support/
docview.wss?uid=swg21109935
See also: DISK; Storage pool, disk,
performance
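A sketch combining the three elements listed above (node name and values are invented; verify syntax against your level's manuals):

```
* dsm.sys (client):
RESOURceutilization 4

* Server command, raising the node's mount point limit:
UPDate Node MYNODE MAXNUMMP=4

* dsmserv.opt (server):
MAXSessions 100
```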
Multi-threaded session See: Multi-session Client
Multiple servers See: Servers, multiple
Multiple sessions See: MAXNUMMP; Multi-session Client;
RESOURceutilization
Multiprocessor usage TSM uses all the processors available to
it, in a multi-processor environment.
One customer cited having a 12-processor
system, and TSM used all of them.
MVS Multiple Virtual Storage: IBM's
mainframe operating system, descended
from OS/MFT and OS/MVT (multiple fixed
or variable number of tasks). Because
the operating system was so tailored to
a specific hardware platform, MVS was a
software product produced by the IBM
hardware division. MVS evolved into
OS/390, for the 390 hardware series.
MVS server performance Turn accounting off and you will likely
see a dramatic improvement in
performance. Especially boost the
TAPEIOBUFS server option.
See also: Server performance
MySQL database, back up to TSM See Redpaper "Backing Up Linux Databases
with the TSM API".
Named Pipe In general: A type of interprocess
communication which allows message data
streams to be passed between peer
processes, such as between a client and
a server.
Windows: The name of the facility by
which the TSM client and server
processes can directly intercommunicate
when they are co-resident in the same
computer, to enhance performance by
not going through data communications
methods to transfer the data. The
governing option is NAMedpipename.
See also: Restore to tape, not disk
NAMedpipename (-NAMedpipename=) Windows client option for direct
communication between the TSM client and
server processes when they are running
on the same computer or across connected
domains, thus avoiding the overhead of
going through data communication methods
(e.g., TCP/IP). This depends upon a
file system object which the server and
client will both reference in order to
communicate - which can be a point of
vulnerability, in contrast to
traditional networking (ANS1865E).
Syntax:
NAMedpipename \\.\pipe\SomeName
-NAMedpipename=\\.\pipe\SomeName
Default: Originally: \pipe\dsmserv
Later: \\.\pipe\Server1
See also: COMMMethod; NAMEDpipename;
Shared Memory
NAMEDpipename Windows server option for direct
communication between the TSM server and
client processes when they are running
on the same computer or across connected
domains, thus avoiding the overhead of
going through data communication methods
(e.g., TCP/IP). This depends upon a
file system object which the server and
client will both reference in order to
communicate - which can be a point of
vulnerability, in contrast to
traditional networking (ANS1865E). And
note that the involvement of Windows
Domain itself can mean networking, which
can obviate the advantage.
Syntax:
NAMEDpipename name
Default: Originally: \pipe\dsmserv
Later: \\.\pipe\Server1
See also: COMMMethod; NAMedpipename;
Shared Memory
Names for objects, coding rules Content: the following characters are
legal in object names:
A-Z 0-9 _ . - + &
(It is best not to use the
hyphen because ADSM uses it when
continuing a name over multiple
lines in a query, which would be
visually confusing.)
Length: varies per type of object.
Ref: Admin Ref
NAS See: Network Appliance
See also IBM site Solution 1105834
NATIVE Refers to storage pool DATAFormat
definition, where NATIVE is the default.
TSM operations use storage pools defined
with a NATIVE or NONBLOCK data format
(which differs from NDMP).
DATAFormat=NATive specifies that the
data format is the native TSM server
format and includes block headers.
NATIVE is required:
- To back up a primary storage pool;
- To audit volumes;
- To use CRCData.
See also: NONBLOCK
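For example, a pool definition might make the default format explicit (pool and device class names invented; syntax abbreviated):

```
DEFine STGpool TAPEPOOL LTOCLASS DATAFormat=NATive MAXSCRatch=50
```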
native file system A file system to which you have not
added space management.
NDMP Network Data Management Protocol: a
cross-vendor standard for enterprise
data backups, to tape devices. Its
creation was led by Network Appliance
and Legato Systems. The backup software
orchestrates a network connection between
an NDMP-equipped NAS appliance and an
NDMP tape library or backup server. The
appliance uses NDMP to stream its data
to the backup device.
The NDMP support in TSM works only with
tape drives as the backup target, and
there are no plans to extend NDMP
support to disk.
As of 2004/01, NDMP backs up at volume
level only.
Originally, only SCSI libraries were
supported for NDMP operations. Support
for ACSLS libraries was introduced in
5.1.1 and support for 349x libraries
came in 5.1.5.
To perform NDMP operations with TSM,
tape drives must be accessible to the
NAS device. This means that there must
be a SCSI or FC connection between the
filer and drive(s) and a path must be
defined in TSM from the NAS data mover
to the drive(s). Some or all of the
drives can also be accessed by the TSM
server, provided that there is physical
connectivity and a path definition from
the TSM server to those drives. This
does not mean that data is funneled
through the TSM server for NDMP
operations. It simply allows sharing
drives for NDMP and conventional TSM
operations. In fact, if the library
robotics is controlled directly by the
TSM server (rather than through a NAS
device), it is possible to share drives
among NAS devices, library server,
storage agents and library clients.
Data flow for NDMP operations is always
directly between the filer and the drive
and never through the TSM server. The
TSM server handles control and metadata,
but not bulk data flow. The TSM server
does not need to be on a SAN, but if you
want to share drives between the TSM
server and the NAS device, a SAN allows
the necessary interconnectivity.
See: dsmc Backup NAS; Network Appliance
(NAS) backups
Nearline storage A somewhat odd, ad hoc term to describe
on-site, nearby storage pool data; as
opposed to offsite versions of the data.
NetApp Network Appliance, Inc. Long-time
provider of network attached storage.
Company was founded by guys who helped
develop AFS.
www.netapp.com
NetTAPE NetTAPE provides facilities such as
remote tape access, centralized operator
interfaces, and tape drive and library
sharing among applications and systems.
Reportedly a shaky product as of late
1997.
Ref: redbook 'AIX Tape Management'
(SG24-4705-00)
NETBIOS Network Basic Input/Output System. An
operating system interface for
application programs used on IBM
personal computers that are attached to
the IBM Token-Ring Network.
NETBIOSBuffersize *SM server option. Specifies the size
of the NetBIOS send and receive buffers.
Allowed range: 1 - 32 (KB).
Default: 32 (KB)
NetbiosBufferSize server option, query 'Query OPTion'
NetbiosSessions server option, query 'Query OPTion'
NETTAPE IBM GA-product that allows dynamic
sharing of tape drives among many
applications.
NetWare Novell product. Has historically not
had virtual memory, and so tends to be
memory-constrained, which hinders *SM
backups and restorals.
See also: nwignorecomp
NetWare backup recommendation Code "EXCLUDE sys:/.../*.qdr/.../*.*"
to omit the queues on the SYS volume.
NetWare Loadable Module (NLM) Novell NetWare software that provides
extended server functionality. Support
for various ADSM and NetWare platforms
are examples of NLMs.
Netware restore, won't restore, saying Reason unknown, but specifying option
incoming files are "write protected" "-overwrite" has been seen to resolve.
Netware restore fails on long file See: Long filenames in Netware restorals
name
Netware restore performance - Make sure your ADSM client software is
recent! (To take advantage of
"No Query Restore" et al. But beware
that No Query Restore is not used for
NetWare Directory Services (NDS).)
- Avoid client or Netware compression of
incoming data (and no virus scanning
of each incoming file).
- If you have a routed network
environment, have this line in
SYS:ETC\TCPIP.CFG :
TcpMSSinternetlimit OFF
- Use TXNBytelimit 25600 in the DSM.OPT
file, and TXNGroupmax 256 in the ADSM
server options file.
- Set up a separate disk pool that does
not migrate to tape, and use DIRMc to
send directory backups to it.
- Consider using separate management
classes for directories, to facilitate
parallel restorals.
- Disable scheduled backups of that
filespace during its restoral.
- Try to minimize other work that the
server has to do during the restoral
(expirations, reclamations, etc.).
- And the usual server data storage
considerations (collocation, etc.).
Data spread out over many tapes means
many tape mounts and lots of time.
- Consider tracing the client to see
where the time is going:
traceflags INSTR_CLIENT_DETAIL
tracefile somefile.txt
(See "CLIENT TRACING" section at
bottom of this document.)
- During the session, use ADSM server
command 'Q SE' to gauge where time is
going; or afterwards, review the ADSM
accounting record idle wait, comm
wait, and media wait times.
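The transaction-size tuning from the list above might look like this (per the values cited; option placement may vary by level):

```
* DSM.OPT (NetWare client):
TXNBytelimit 25600

* Server options file:
TXNGroupmax 256
```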
Network Appliance (NAS) backups Lineage: Tivoli originally announced
that TSM version 4.2 would provide
backup and restore of NAS filers - 3Q
2001. The product was "TDP for NDMP"
(5698-DPA), a specialized client that
interfaces with the Network Data
Management Protocol (NDMP). Full volume
image backup/restore will be supported.
File level support is announced for TSM
version 5.1 - 1Q 2002.
TDP for NDMP was then folded into TSM
Enterprise Edition, which was withdrawn
from marketing 2002/11/12, supplanted by
TSM Extended Edition (5698-ISX).
Note that options COMPRESSION and
VALIDATEPROTOCOL are not valid for a
node of Type=NAS.
The name of the NAS node must be the
same as the data mover.
Netware timestamp peculiarities The Modified timestamp on a Netware file
is attached to the file, and remains
constant as it may move, for example,
from a vendor development site to a
customer site. The Created timestamp is
when the file was planted in the
customer file system. Thus, the Created
timestamp may be later than the Modified
timestamp.
Network card selection on client See: TCPCLIENTAddress
Network data transfer rate Statistic at end of Backup/Archive job,
reflecting the raw speed of the network
layer: just the time it took to transfer
the data to the network protocol handler
(expressed that way to emphasize that
*SM does not know if the data has
actually gone over the network).
The data transfer rate is calculated by
dividing the total number of bytes
transferred by the data transfer time.
The time it takes to process objects is
not included in the network transfer
rate. Therefore, the network transfer
rate is higher than the aggregate
transfer rate.
Corresponds to the Data Verb time in an
INSTR_CLIENT_DETAIL client trace.
Contrast with Aggregate data transfer
rate.
Beware that if the Data transfer time is
too small (as when sending a small
amount of data) then the resulting
Network Data Transfer Rate will be
skewed, reporting a higher number than
the theoretical maximum. This reflects
the communications medium rapidly
absorbing the initial data in its
buffers, which it has yet to actually
send. That is, ADSM handed off the data
and considers it logically sent, having
no idea as to whether it has been
physically sent. This also explains why
at the beginning of a backup session
that you see some number of files
seemingly sent to the server before an
ANS4118I message appears saying that a
mount is necessary (for backup directly
to tape), rather than appearing after
the first file. Thus, to see meaningful
transfer rate statistics you need to
send a lot of data so as to counter the
effect of the initial buffering.
Ref: B/A Client manual glossary
See also: Data transfer time; TCPNodelay
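As a toy illustration of why the network rate exceeds the aggregate rate (numbers invented; this is not TSM code): the network rate divides bytes by data transfer time only, while the aggregate rate divides by total elapsed time, which includes file processing overhead.

```python
# Toy model of the two client statistics (not TSM code).
# Network rate uses only data transfer time; aggregate rate uses
# total elapsed time, so network rate >= aggregate rate.

def transfer_rates(total_bytes, data_transfer_secs, elapsed_secs):
    """Return (network_rate, aggregate_rate) in KB/sec."""
    kb = total_bytes / 1024
    return kb / data_transfer_secs, kb / elapsed_secs

# 500 MB sent; 100 sec of data transfer within 250 sec elapsed.
net, agg = transfer_rates(500 * 1024 * 1024, 100, 250)
print(net, agg)  # 5120.0 2048.0
```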
Network performance Many network factors can affect
performance:
- Technology generation: Are you still
limited to 10 Mbps or 100, when
Gigabit Ethernet is available, with
its faster basic speed and optional
larger frame sizes?
- Are you using an ethernet switch
rather than a router to improve subnet
performance (and security)?
- Are your network buffers sized
adequately? In AIX, particularly do
'netstat -v' and see if the "No
Receive Pool Buffer Errors" count is
greater than zero: if so boost the
Receive Pool Buffer Size. (A value of
384 is no good: needs to be 2048.)
Network Storage Manager (NSM) The IBM 3466 storage system which
combines a tape robot and AIX system in
one package, wholly maintained by IBM.
The IBM Network Storage Manager (NSM)
is an integrated data storage facility
that provides backup, archive, space
management, and disaster recovery of
data stored in a network computing
environment. NSM integrates ADSM server
functions and AIX with an RS/6000 RISC
rack mounted processor, Serial Storage
Architecture (SSA) disk subsystems,
tape library (choose a type) and drives,
and network communications, into a
single server system.
Network transfer rate See: Network data transfer rate
Network-Free Rapid Recovery Provides the ability to create a backup
set which consolidates a client's files
onto a set of media that is portable and
may be directly readable by the clients
system for fast, "LAN-free" (no network)
restore operations. The portable backup
set, synthesized from existing backups,
is tracked and policy-managed by the
TSM server, can be written to media such
as ZIP, Jaz drives, and CD-ROM volumes,
for use by Windows 2000, Windows NT,
AIX, Sun Solaris, HP-UX, NetWare
backup-archive client platforms. In
addition, for the Windows 2000, Windows
NT, AIX, Sun Solaris (32-bit) and HP-UX
backup-archive clients, the backup sets
can be copied to tape devices. TSM
backup-archive clients can, independent
of the TSM server, directly restore data
from the backup set media using standard
operating system device drivers.
Ref: Redbook "Tivoli Storage Manager
Version 3.7: Technical Guide"
(SG24-5477), see CREATE BACKUPSET.
http://www.tivoli.com/products/index/
storage_mgr/storage_mgr_concepts.html
Newbie Someone who is new to all this stuff.
NEXTstgpool Parameter on 'DEFine STGpool' to define
the next primary storage pool to use in
a hierarchy of storage pools. (Copy
storage pools are not eligible for
hierarchical arrangement.)
This can be used creatively to cause
ADSM to use lower storage pools to be
used as overflow areas rather than
migration areas, by defining the HIghmig
value to be 100 percent. This would be
used in cases where storage pool filling
has to keep up with incoming data, and
could not if migration were used.
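The overflow arrangement described above might be defined like this (pool and device class names invented; syntax abbreviated):

```
DEFine STGpool DISKPOOL DISK HIghmig=100 NEXTstgpool=TAPEPOOL
DEFine STGpool TAPEPOOL LTOCLASS MAXSCRatch=100
```

With HIghmig=100, migration does not start on its own; when DISKPOOL fills, new data overflows directly to TAPEPOOL.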
NFS client backup prohibition You can establish a site policy that
file systems should not be backed up
from NFS clients (they will be done from
the NFS server). Violators can be
detected in a ADSM server 'Query
Filespace' command (Filespace Type),
whereupon you could delete the filespace
outright or rename it for X days before
deleting it, with warning mail to the
perpetrator, and a final 'Lock Node' if
no compliance.
NFSTIMEout Client system options file (dsm.sys) or
command line option to deal with error
"ANS4010E Error processing
'<SOME_FILE_SYSTEM>': stale NFS handle".
Specifies the amount of time in seconds
the server waits for an NFS system call
response before it times out.
If you do not have any NFS-mounted
filesystems, or you do not want this
time-out option, remove or rename the
dsmstat file in the ADSM program
directory. Syntax:
"NFSTIMEout TimeoutSeconds".
Note: This option can also be defined on
the server.
NIC selection on client See: TCPCLIENTAddress
NLB Microsoft Network Load Balanced
NLS National Language Support, standard in
ADSMv3. The message repository is now
called dsmserv.cat, which on AIX is
found in /usr/lib/nls/msg/en_US (for the
english version, other languages are
found in their respective directories).
The dsmameng.txt file still exists in
the ADSM server working directory and is
used if the dsmserv.cat file is not
found.
See also: Language
No Query Restore (NQR) ADSMv3+: Facility to speed restorals by
eliminating the preliminary step of the
server having to send the client a
voluminous list of files matching its
restoral specs, for the client to
traverse the list and then sort it for
server retrieval efficiency ("restore
order"). That is, in a No Query Restore
the client knows specifically what it
needs and can simply ask the server for
it, so there is no need for the server
to first send the client a list of
everything available.
NQR also allows for restartable client
restores.
Both client and server have to be at
Version 3+ in order to use No Query
Restore. It is used automatically for
all restores unless one or more of the
following options are used: INActive,
Pick, FROMDate, FROMTime, LAtest,
TODate, TOTime. Also, No Query Restore
is not used for NetWare Directory
Services (NDS).
Note that NQR has nothing to do with
minimizing tape mounts for restore: for
a given restore, TSM mounts each needed
tape once and only once, retrieving
files as needed in a single pass from
the beginning of the tape to the end.
A big consideration in NQR is that the
client specification may be so general
that the server ends up sending the
client far more files than it needs.
IBM used the term "No Query Restore" in
their v3 announcements, but did not use
it in their v3.1 manuals: usage was
implied. Later manuals reinstated No
Query Restore as a specific action, and
documented it. IBM now refers to the v2
method of restoral as "Classic Restore"
or "Standard Restore".
The most visible benefit of no query
restore is that data starts coming back
from the server sooner than it does with
"classic" restore. With classic restore,
the client queries the server for all
objects that match the restore file
specification. The server sends this
info to the client, then the client
sorts it so that tape mounts will be
optimized. However, the time involved in
getting the info from the server, then
sorting it (before any data is actually
restored), can be quite lengthy - and
may incite client timeout at the server.
When you think about it, Classic Restore
is a dumb way to approach a major
restoral: rather than taking advantage
of the inherent lookup efficiencies and
ordered results which the server can
effect with its database, the data is
sent to the client for it to process,
where the client has to effectively
create a pseudo database environment in
memory. Good grief.
With NQR the *SM server does the work:
the client sends the restore file specs
to the server, the server figures out
the optimal tape mount order, and then
starts sending the restoral data to the
client. The server can usually do this
faster, and thus the time it takes to
start actually restoring data is reduced.
(A consideration is that while the server
is busy figuring this out, no activity
is visible from the client, which may
concern the user.)
Ref: Backup/Archive Client manual,
chapter 3 (Backing Up and Restoring),
"Restore: Advanced Considerations";
Redbook "ADSM Version 3 Technical Guide"
(SG24-2236).
See also: No Query Restore, disable;
Restart Restore; Restore Order
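The "restore order" idea can be sketched with a toy sort (volume names and positions invented; the real logic lives in the server):

```python
# Toy sketch of restore order: group files by tape volume, then by
# position within each volume, so every needed tape is mounted once
# and read front to back in a single pass.

def restore_order(files):
    """files: list of (name, volume, position) tuples."""
    return sorted(files, key=lambda f: (f[1], f[2]))

files = [
    ("c.dat", "VOL002", 40),
    ("a.dat", "VOL001", 10),
    ("b.dat", "VOL002", 5),
]
for name, vol, pos in restore_order(files):
    print(vol, pos, name)
```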
No Query Restore, disable Whereas this v3 feature was supposed to
improve performance, it has had
performance impacts of its own. Under
limited circumstances it may be
advantageous to disable NQR, which can
be achieved with command line option
-traceflags=DISABLENQR, or by specifying
option "TESTFLAG DISABLENQR" in
dsm.opt. (Advice: Do not add this to
your option file, as NQR is often the
better choice.)
Note that this option will make the
restoral non-restartable.
See "DISABLENQR" in "CLIENT TRACING".
No-Query Restore See: No Query Restore
NOAGGREGATES Temporary server options file option,
to compensate for early v.3 defect. Is
intended for customers who have a
serious shortage of tapes. If you use
this option, any new files backed up or
archived to your server will not be
aggregated. When the volumes on which
these files are reclaimed, you will not
be left with empty space within
aggregates. The downside is that these
files will never become aggregated, so
you will miss the performance benefits
of aggregation for these files. If you
do not use the NOAGGREGATES option,
files will continue to be aggregated and
empty space may accumulate within these
aggregates; this empty space will be
eliminated during reclamation after you
have run the data movement/reclamation
utilities.
NOARCHIVE ADSMv3 option for the include-exclude
file, to prohibit Archive operations for
the specified files, as in:
"include ?:\...\* NOARCHIVE"
to prohibit all archiving.
NOAUDITStorage Server options file option, introduced
by APAR PN77064 (PTF UN87800), to
suppress the megabyte counting for each
of the clients during an "AUDit
LICenses" event, and thus reduce the
time required for AUDit LICenses.
Obsolete: now AUDITSTorage Yes|No.
See: AUDITSTorage
NOBUFPREFETCH Undocumented server option to disable
the buffer prefetcher - at the expense
of performance.
(Useful where the 'SHow THReads' command
reveals sessions hung on a condition in
TbKillPrefetch, where the prefetcher is
looping because of a design defect.)
Node See: Client Node
Node, add administrator Do 'REGister Admin', then
'GRant AUTHority'
Node, define See: 'REGister Node'
Node, delete See: 'REMove Node'
Node, disable access 'LOCK Node NodeName'
Node, lock 'LOCK Node NodeName'
Node, move across storage pools Use 'MOVe Data', specifying a
different storage pool; then reassign
the node to the new stgpool's domain.
But if a node shares tapes with other
nodes: reassign it to the new stgpool,
then let the data expire off of the old
stgpool.
Node, move to another Policy Domain 'UPDate Node NodeName DOmain=_____'
In doing this, note:
- If the receiving domain does not have
the same management classes as were
used in the old domain, the domain
files will be bound to the receiving
domain's default management class,
which could have an adverse effect
upon retention periods you expect.
But in all cases, check the receiving
domain Copypool retention policies
before doing the move.
- If the node was associated with a
schedule, it will lose it, so be sure
to examine all scheduling values.
Node, number used See: Tapes, number used by a node
Node, prevent data from expiring A request comes in from the owner of a
client that because of subpoena or the
like, its data must not expire; but that
client has been using the same
management class as is used for the
backup of all clients. How to satisfy
this request?
1. Use 'COPy DOmain' to create a copy of
the policy domain the node is in.
2. Update the retention parameters in
the copy group in the new domain.
3. Activate the appropriate policy set.
4. Use 'UPDate Node' to move the node to
the new policy domain.
Node, prohibit access 'LOCK Node NodeName'
Node, prohibit storing data on server See: Client, prevent storing data on
server
Node, remove See: 'REMove Node'
Node, space used for Active files 'Query OCCupancy' does not reveal this,
as it reports all space. A simple way
to get the information is to
'EXPort Node NODENAME
FILEData=BACKUPActive Preview=Yes'.
Node, space used on all volumes 'Query AUDITOccupancy NodeName(s)
[DOmain=DomainName(s)]
[POoltype=ANY|PRimary|COpy]'
Note: It is best to run 'AUDit LICenses'
before doing 'Query AUDITOccupancy' to
assure that the reported information
will be current.
Also try the unsupported command
'SHow VOLUMEUSAGE NodeName'
Node, unregister See: 'REMove Node'
Node, volumes in use by 'SHow VOLUMEUSAGE NodeName' or:
'SELECT DISTINCT VOLUME_NAME,NODE_NAME
FROM VOLUMEUSAGE' or:
'SELECT NODE_NAME,VOLUME_NAME FROM
VOLUMEUSAGE WHERE -
NODE_NAME='UPPER_CASE_NAME'
Node, volumes needed to restore ADSMv3:
SELECT FILESPACE_NAME,VOLUME_NAME -
FROM VOLUMEUSAGE WHERE -
NODE_NAME='UPPER_CASE_NAME' AND -
COPY_TYPE='BACKUP' AND -
STGPOOL_NAME='<YourBkupStgpoolName>'
Node conversion state An *SM internal designation.
Node state 5 is Unicode, for Unicode
enabled clients, which is to say
platforms in which Unicode is supported.
(Within Unicode-enabled clients, it is
the filespace which specifically employs
Unicode.)
May be seen on ANR4054I and ANR9999D
messages.
Node name A unique name used to identify a
workstation, file server, or PC to the
server. Should be the same as returned
by the AIX 'hostname' command.
Is specified in the Client System
Options file and the Client User Options
file.
Node name, register 'REGister Node ...' (q.v.)
(register a client with the server) Be sure to specify the DOmain name you
want, because the default is the
STANDARD domain, which is what IBM
supplied rather than what you set up.
There must be a defined and active
Policy Set.
Node name, remove 'REMove Node NodeName'
Node name, rename (Windows) See: dsmcutil.exe
Node name, update registration 'UPDate Node ...' (q.v.)
(register a client with the server) Node must not be currently conducting a
session with the server, else command
fails with error ANR2150E.
Node names in a volume, list 'Query CONtent VolName ...'
Node names known to server, list 'Query Node'
Node password, update from server See: Password, client, update from
server
Node sessions, byte count SELECT NODE_NAME, SUM(LASTSESS_RECVD) -
AS "Total Bytes" FROM NODES -
GROUP BY NODE_NAME
nodelock File in the server directory, housing the
licenses information generated by the
ADSMv3 and TSM REGister LICense
operation. The *SM server must have
access to this file in order to run.
If the server processor board is
upgraded such that its serial number
changes, then this file must be
removed and regenerated.
See also: adsmserv.licenses;
REGister LICense
nodename /etc/filesystems attribute, set "-",
which is added when 'dsmmigfs' or its
GUI equivalent is run to add ADSM HSM
control to an AIX file system. The dash
tells the mount command to call the HSM
mount helper.
NODename Client System Options file operand to
specify the node name by which the
client is registered to the server.
Placement: within a server stanza
The intention of this option is to
firmly specify the identity of the
client where the client may have
multiple identities, as in a multi-homed
ethernet config.
If your client system has only a single
identity, it is best if this option is
not used, letting the node name default
to the natural system name. If you *do*
code NODename, it is best that it be in
upper case.
If "PASSWORDAccess Generate" is in
effect, you *cannot* use NODename
because the password directory entry
(e.g., /etc/security/adsm/) must be
there for that node, and thus you must
not have the choice of saying that you
are some arbitrary node name.
PASSWORDAccess Generate does not work if
you code NODename. If in Unix you put
it in dsm.opt, then ADSM assumes you
want to be the "virtual root user",
which gives you access to all of that
node's data, requiring you to enter a
password. Instead, put NODename in the
dsm.sys file.
Do not put another node's name into your
client options file, as a "clever" way
to access that node's data, as the
involvement of PASSWORDAccess Generate
will cause that foreign node's password
to be stored in your system upon first
access as superuser: this is unhealthy,
from a security standpoint. Doing this
can also have dire consequences if your
ad hoc access is from a client which is
at a higher TSM level than the actual
owning client, as that can prevent the
owning client from performing any
further operations on the data, as it
will then be downlevel relative to the
storage pool data. So: use
VIRTUALNodename for such accesses.
If you are attempting to use NODename
for cross-node restorals, DO NOT change
your client options file to code the
name of the originating node: remember
that the options file is for all
invocations of client functions, not
just the one task you are performing, so
your modification could yield incorrect
results in incidental client invocations
other than your own. Also, it is too
easy to forget that this options file
change was made. You should instead use
the -NODename=____ invocation override
form of the option.
Note that as long as the Nodename
remains the same, changes in the
client's IP address (as in switching
network providers) will not incite a
password prompt.
See also: GUID; PASSWORDAccess;
TCPCLIENTAddress; VIRTUALNodename
-NODename=____ (Employed on some clients (Netware and
Windows), which otherwise would use
-VIRTUALNodename if available there.)
Command line equivalent, but override
of the same options file definition,
used when you want to restore or
retrieve your own files when you are
on other than your home nodename.
Beware that specifying this causes ADSM
to ask you for the password of that
node, and thereafter regards you as a
virtual root user. Worse, it will cause
the password to be encrypted and stored
on the machine where invoked. Thus
anyone else can subsequently access your
node's data, presenting a potential
security issue. Unless that is your
intent, use VIRTUALNodename instead of
NODename.
Note that when overriding the node name
this way, with the ADSM server, a 'Query
SEssion' will show the session as coming
from the node whose name you have
specified.
Contrast with -FROMNode, which is used
to gain access to another user's files.
Note that a 'Query SEssion' in the
server will say that the session is
coming from the client named via
-NODename, rather than the actual
identity of the client.
See also: -PASsword; VIRTUALNodename
NODES SQL table containing all the information
about each registered node. Columns:
NODE_NAME, PLATFORM_NAME, DOMAIN_NAME,
PWSET_TIME, INVALID_PW_COUNT, CONTACT,
COMPRESSION, ARCHDELETE, BACKDELETE,
LOCKED, LASTACC_TIME, REG_TIME,
REG_ADMIN, LASTSESS_COMMMETH,
LASTSESS_RECVD, LASTSESS_SENT,
LASTSESS_DURATION, LASTSESS_IDLEWAIT,
LASTSESS_COMMWAIT, LASTSESS_MEDIAWAIT,
CLIENT_VERSION, CLIENT_RELEASE,
CLIENT_LEVEL, CLIENT_SUBLEVEL,
CLIENT_OS_LEVEL, OPTION_SET,
AGGREGATION, URL, NODETYPE, PASSEXP.
Note that the table is indexed by
NODE_NAME, so seeking on an exact match
is faster than on a "LIKE".
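For example (node name invented):

```sql
-- Exact match: can use the NODE_NAME index
SELECT PLATFORM_NAME FROM NODES WHERE NODE_NAME='MYNODE'
-- Pattern match: forces a scan of the table
SELECT PLATFORM_NAME FROM NODES WHERE NODE_NAME LIKE 'MY%'
```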
Nodes, registered 'Query DOmain Format=Detailed'
Nodes, registered, number SELECT COUNT(NODE_NAME) -
AS "Number of registered nodes" -
FROM NODES
Nodes, report MB and files count SELECT NODE_NAME, SUM(LOGICAL_MB) AS -
Data_In_MB, SUM(NUM_FILES) AS -
Num_of_files FROM OCCUPANCY GROUP BY -
NODE_NAME ORDER BY NODE_NAME ASC
Nodes not doing backups in 90 days 'SELECT NODE_NAME, CONTACT, \
LASTACC_TIME, REG_TIME, DOMAIN_NAME \
FROM NODES WHERE DOMAIN_NAME='FL_INTL'\
AND DAYS(CURRENT_TIMESTAMP)-\
DAYS(LASTACC_TIME)>90 ORDER BY \
LASTACC_TIME DESC > SomeFilename'
Nodes without filespaces There will always be nodes which have
registered with the server but which
have yet to send data to the server. The
following will report them:
SELECT NODE_NAME AS -
"Nodes with no filespaces:", -
DATE(REG_TIME) AS "Registered:", -
DATE(LASTACC_TIME) AS "Last Access:" -
FROM NODES WHERE NODE_NAME NOT IN -
(SELECT NODE_NAME FROM FILESPACES)
NOMIGRRECL Undocumented server option to prevent
migration and reclamation at server
start-up time. Would be used chiefly
for TSM conversions and disaster
recoveries. Note that there is no
server Query that will evidence the use
of this option: the server options file
has to be inspected.
Non-English filenames (NLS support) The TSM product is a product of the USA,
written in an English language
environment, originally and
predominantly for English language
customers using an alphabet comprised of
the characters found in the basic ASCII
character set. Trying to use TSM in a
non-English environment is a stretch, as
customers who have tried it have found
and reported in ADSM-L. The product has
experienced many, protracted problems
with non-English alphabets, as seen in
numerous APARs - and some debacles
("the umlaut problem" - see message
ANS1304W). As of mid-2001, there is no
support for mixed, multi-national
languages, as for example a predominantly
English language client which stores
some files whose names contain
multi-byte character sets (e.g.,
Japanese).
Customers find, for example, that to
back up Japanese filenames you must run
the Windows client on a Japanese
language Windows server.
Some customers circumvent the whole
problem on their English language
systems by copying the non-English files
into a tar archive or zip file having an
English name, which then backs up
without problems. Another approach is to
use NT Shares across English and
non-English client systems, to back up
as appropriate.
NONBLOCK Refers to storage pool DATAFormat
definition, where NATIVE is the default.
TSM operations use storage pools defined
with a NATIVE or NONBLOCK data format
(which differs from NDMP).
DATAFormat=NONblock specifies that the
data format is the native TSM server
format, but does not include block
headers.
See also: NATIVE
NOPREEMPT ADSMv3+ Server Options file entry to
prevent preemption. By default, TSM
allows certain operations to preempt
other operations for access to volumes
and devices. For example, a client data
restore operation preempts a client data
backup for use of a specific device or
access to a specific volume. When
preemption is disabled, no operation can
preempt another for access to a volume,
and only a database backup operation can
preempt another operation for access to
a device. The effect, then, is to cause
high-priority tasks like Restores to
wait for resources, rather than preempt
a lower-priority task so as to execute
asap.
Ref: Admin Guide "Preemption of Client
or Server Operations"
See also: Preemption; DEFine SCHedule
NORETRIEVEDATE Server option to specify that the
retrieve date of a file in a disk
storage pool is not to be updated when the
file is restored or retrieved by a
client. This option can be used in
combination with the MIGDelay storage
pool parameter to control when files are
migrated. If this option is not
specified, files are migrated only if
they have been in the storage pool the
minimum number of days specified by the
MIGDelay parameter. The number of days
is counted from the day that the file
was stored in the storage pool or
retrieved by a client, whichever is more
recent. By specifying this option, the
retrieve date of a file is not updated
and the number of days is counted only
from the day the file entered the disk
storage pool. If this option is
specified and caching is enabled for a
disk storage pool, reclamation of cached
space is affected. When space is needed
in a disk storage pool containing cached
files, space is obtained by selectively
erasing cached copies. Files that have
the oldest retrieve dates and occupy the
largest amount of space are selected for
removal. When the NORETRIEVEDATE option
is specified, the retrieve date is not
updated when a file is retrieved. This
may cause cached copies to be removed
even though they have recently been
retrieved by a client.
See also: MIGDelay
Normal File--> Leads the line of output from a Backup
operation, as when backup is incited
by the file's mtime (file modification
time) having changed, or if a chown or
chgrp effected a change.
See also: Updating-->; Expiring-->;
Rebinding-->
Normal recall mode A mode that causes HSM to copy a
migrated file back to its originating
file system when it is accessed. If the
file is not modified, it becomes a
premigrated file. If the file is
modified, it becomes a resident
file. Contrast with migrate-on-close
recall mode and read-without-recall
recall mode.
NOT IN SQL clause to exclude a particular set
of data that matches one of a list of
values:
WHERE COLUMN_NAME -
NOT IN (value1,value2,value3)
See also: IN
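NOT IN can be tried out without a TSM server at hand, since the semantics are standard SQL; here sqlite3 stands in for the server's SQL processor (an assumption: TSM's own SQL dialect differs in places), with invented node names:

```python
import sqlite3

# An in-memory table of invented node names.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE nodes (node_name TEXT)")
db.executemany("INSERT INTO nodes VALUES (?)",
               [("ALPHA",), ("BETA",), ("GAMMA",)])
# NOT IN excludes every row matching a value in the list:
rows = db.execute("SELECT node_name FROM nodes "
                  "WHERE node_name NOT IN ('BETA','GAMMA')").fetchall()
print(rows)  # [('ALPHA',)]
```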
"Not supported" Vendor parlance indicating that a
certain level or mix of
hardware/software is not supported by
the vendor. It may mean that the vendor
knows that the level is not viable by
virtue of design; but more usually
indicates that an older level of
software was not deemed worth the
expenditure to test compatibility,
rather than having tested and having
found incompatibilities. It is common
for customers to inadvertently or
intentionally use unsupported software
and encounter no problems. Usually,
usage of such software which "stays near
the center of the path" can do okay;
it's when the usage gets near the edges
of complexity that functional problems
are more likely to arise.
NOTMOuntable DRM media state for volumes containing
valid data, located onsite, but TSM is
not to use it.
This value can also be the default
Location if Set DRMNOTMOuntablename has
not been run.
See also: COUrier; COURIERRetrieve;
MOuntable; MOVe DRMedia;
Set DRMNOTMOuntablename; VAult;
VAULTRetrieve
Novell See also: Netware
Novell and TSM problems Novell customers report that problems
using TSM (or, for that matter, many
other applications) under Novell Netware
are almost universally due to Novell
irregularities and failing to
communicate OS changes to developers.
Novell (Netware) performance The standard Backup considerations
apply, including too many files in one
directory.
See also: PROCESSORutilization
Novell trustee rights With Novell your trustee rights are
normally set on a directory level. If
this is a case with your Novell systems,
then just use the -dirsonly option when
doing a restore. TSM backs up rights and
IRFs only at a directory level, not a
file level.
Trustee Rights are not seen by the
client workstation who maps the drive
for his use. Client workstations should
not be doing the backups: they should be
done from the Novell system.
.NSF file Lotus Notes database file.
NSM See: Network Storage Manager
NT Microsoft Windows New Technology
operating system, situated between
Windows 98 and Windows 2000.
See: Windows NT
.NTF files (Lotus Notes) and backup By default, the Lotus Notes Connect
Agent will not back up .NTF files: you
have to specifically request them to get
them backed up.
NTFS NT File System. Is understood by OS/2.
Unlike FAT, NTFS directories are
complex, and cannot be stored in the *SM
database, instead having to go into a
storage pool.
NTFS and Daylight Savings Time Incredibly, NTFS file timestamps are
offsets from GMT rather than absolute
values - and hence the perceived
timestamps on all files in the NTFS will
change in DST transitions. (Another
reason that NT systems cannot be
regarded as serious contenders for
server implementations.)
http://support.microsoft.com/support/kb/
articles/q129/5/74.asp
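A small illustration of the effect described above: the stored instant (GMT-based) never changes, but its local rendering shifts by an hour across a DST transition. The offsets below are arbitrary US Eastern examples, nothing NTFS-specific:

```python
from datetime import datetime, timezone, timedelta

# One fixed instant, as NTFS would store it (GMT/UTC based).
instant = datetime(2004, 7, 1, 12, 0, tzinfo=timezone.utc)
# The same instant rendered under summer (UTC-4) and winter (UTC-5)
# offsets - the instant is identical, the perceived timestamp is not:
summer = instant.astimezone(timezone(timedelta(hours=-4)))
winter = instant.astimezone(timezone(timedelta(hours=-5)))
print(summer.hour, winter.hour)  # 8 7
```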
NTFS and permissions changes If someone happens to make a global
change to the permissions (security
information) of files in an NTFS, the
next Backup will cause the files to be
backed up afresh...which is warranted,
as the attributes are vital to the
files. The fresh backup will occur if
any of the NTFS file security
descriptors are changed: Owner Security
Identifier (SID), Group SID,
Discretionary Access Control List (ACL),
and System ACL.
Possible mitigations (all of which have
encumbrances and side effects):
- Perform -INCRBYDate backups.
- In Windows Journal-Based Backups, you
may employ the NotifyFilter.
- Subfile backups should avoid wholesale
backups, if you happen to use them.
- Another approach to mitigation is to
follow MS's AGLP (AGDLP for AD) rules:
assign users to Global Groups, add
Global Groups to Local (DOMAIN Local
in AD) and only assign permissions to
the local groups. You create the
appropriate local groups (eg read
access, write etc) and only assign
permissions once to these groups. Any
user changes are done through removal
of users from the Global groups or GG
from local groups which doesn't
trigger any ACL changes on the files
so no extra backups are done. As far
as initial security lockdown, this
should be done at server setup.
NTFS and security info in restorals NTFS object security information is
stored with the object on the server and
will be restored when the individual
NTFS object is restored.
"Security" in Windows NTFS and what gets
restored:
Inherited:
The only security info is "provide same
access as the parent directory is
providing".
TSM will restore the "checkmarked"
inheritance. It *will not* restore
parent's ACL, or the ACL of the
parent's parent, ... up to the origin
of the inherited ACL. As a result you
have restored the ability to inherit
but not *what* to inherit.
Explicitly specified:
There is a list of users along with set
of allowed operations.
TSM will restore "no inheritance" mode
and list of defined privileges. This is
probably what you want in a restoral.
Mixed permissions:
Both access inherited from the parent
plus some explicitly specified
additions/deletions/changes to the ACL.
TSM restores both "inheritance" mode
and the explicit access. As a result,
the explicitly defined entities will
have their access intact but the other
are left to the mercy of ACL inherited
from the parent directory.
If the whole drive is restored,
file/directory specific ACL elements are
restored together with their parents'.
All this should explain why sometimes
you see the ACL "restored", sometimes
"not restored" and sometimes "partially
restored".
NTFS security info as stored in TSM Because of the amount of information
involved in NTFS security data, it is
too much to be stored in the TSM
database, as simple file attribute data
can otherwise be, and so NTFS security
info has to go into a TSM storage pool.
The NTFS security info is stored as part
of the file data - an implication being
that if just the security info is
changed, the file itself has to be
backed up afresh as well.
NTuser.dat The NT current profile of each user
registered to use the NT system. When
you log on to NT, the contents of
NTUSER.DAT are loaded into the
HKEY_CURRENT_USER Registry key, where
that copy persists only for the duration
of the user session. So a TSM backup
captures that as part of Registry
backup; and you can do 'dsmc REStore
REgistry USER CURUSER' to get your
profile back.
If the user is not logged in at the time
of the backup, the file will be backed
up from where it sits. If the user is
logged in at the time, the file will be
in use by the system, and will be backed
up as part of the Registry, which is to
say that the API used by the client for
Registry backup will make a copy in the
adsm.sys directory, and back that up.
(The above assumes that the backup is
run by Administrator: if run by an
ordinary user, there is no access to
either source of NTUSER.DAT data: it has
to be skipped as busy.)
C:\adsm.sys\Registry\<SystemName>\Users
contains a directory for each id, and
each id that was logged on at the time
of the backup will have a file with a
name like: S-1-5-21-1417001333-
436374069-854245398-1000
This is the logical equivalent of
NTUSER.DAT.
To restore it requires an extra step,
though: When doing a bare metal
restore, you restore the files, then the
Registry; then you reboot; then you log
on under that user's account. Since you
don't have a restored copy of
NTUSER.DAT, you will see the default
profile. Run: dsmc REStore REgistry
USER CURUSER which reloads the profile
stuff from adsm.sys into the registry.
Then you reboot again, and on the way
down it will write the profile out to
NTUSER.DAT again, and you are back in
business. When you come back up, you
have your restored/customized profile.
If using the 4.1.2 client, the names in
adsm.sys have changed, and the backed up
user profile for each user is actually
called NTUSER.DAT. And you can't
restore individual Registry keys. So
after you do the bare-metal restore of
files & Registry as ADMINISTRATOR, you
drag that person's NTUSER.DAT from the
adsm.sys directory back to where it is
supposed to be, before that account logs
on again.
In running standard TSM backups, be sure
to run the TSM Scheduler Service under
the Local System account, not a user
account, to avoid the inevitable problem
of finding the user profile (NTuser.dat)
locked.
Note that if a user has no NTUSER.DAT
User Profile, upon login Windows creates
a new one, using the default User
Profile (which is stored on the System
drive (typically, C:) in
Documents and Settings\Default User\.
It is vital, therefore, that
"NTUSER.DAT" not be a blanket Exclude,
as a Windows PC recovery could then
result in there being no default User
Profile.
NTUSER.DAT is normally excluded from
Journal Based Backup.
ntutil Like 'tapeutil' for Unix, this utility
for Windows NT or 2000 controls tape
motion once a tape is mounted. It is
part of the Magstar Device Drivers for
NT available at the ADSM ftp server and
its mirrors (ftp.storsys.ibm.com, under
devdrvr/WinNT, within IBMmag.*, being a
self-extracting file and contains the
NTUTIL.EXE. With ntutil you can control
some operations on a 3570. Syntax:
'ntutil <-f InputFile> <-o OutputFile>
<-d SpecialFile> <-t>'
Invoke simply as 'ntutil' to enter
interactive mode.
There is documentation in the manual
"IBM SCSI Tape Drive, Medium Changer,
and Library Device Drivers: Installation
and User's Guide", available from the
same ftp location. Also in Appendix A of
the 3590 Maintenance Information manual.
Null String Nullifying various operands in TSM
requires that you code what is called a
Null String, instead of a text value.
A Null String is a string which contains
nothing, and is coded as two adjacent
quotes with nothing in between: "" .
Number of Times Mounted Report line from 'Query Volume'. The
number reported is since the tape came
out of the scratch pool, and does not
reflect the number of mounts over its
lifetime. Above ADSM, your tape library
may track tape mounts over the life of
the tape's residency in the library, as
the 3494 does in its Database menu
selection. ADSM provides no means of
resetting this number (a
Checkout/Checkin sequence does not do
it).
NUMberformat Client User Options file (dsm.opt)
option to select the format in which
number references will be displayed.
"1" - format is 1,000.00 (default)
"2" - format is 1,000,00
"3" - format is 1 000,00
"4" - format is 1 000.00
"5" - format is 1.000,00
"6" - format is 1'000,00
NUMberformat Definition in the server options file
for ADSM and old TSM.
Specifies the format by which numbers
are displayed by the *SM server:
"1" - format is 1,000.00 (default)
"2" - format is 1,000,00
"3" - format is 1 000,00
"4" - format is 1 000.00
"5" - format is 1.000,00
"6" - format is 1'000,00
Default: 1
This option is obsolete since TSM 3.7:
the number format is now governed by the
locale in which the server is running,
where the LANGuage server option is the
surviving control over this.
Ref: Installing the Server...
See also: DATEformat; LANGuage;
TIMEformat
NUMberformat server option, query 'Query OPTion'
nwignorecomp ADSM client 2.1.07 supports the
"nwignorecomp yes" parameter in the
opt file. This will prevent ADSM from
backing up the file if the only change
to it is Netware compression.
NWWAIT Netware option.
As of TSM 5.2, this option was renamed
to NWEXITNLMPROMPT.

OBF Old Blocks File, as used in Windows 2000
image backup of volumes.
In Server-free TSM backups, the
terminology is "Original Blocks File".
See also: LVSA; SNAPSHOTCACHELocation
Object A collection of data managed as a single
entity.
OBJECT_ID (ObjectID) Decimal number object identifier in the
ARCHIVES and BACKUPS tables. More
generally, Object IDs are the surrogate
database keys to which alphanumeric
filenames are mapped. The Object IDs are
64-bit values, but the higher half is
usually 0, making the ID effectively a
32-bit value.
See also: Bitfile; SHow BFObject; SHow
INVObject
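The high and low 32-bit halves of such a 64-bit ID can be pulled apart with simple shifts and masks, as this sketch (not TSM code) shows:

```python
def split_id(object_id):
    """Return the (high, low) 32-bit halves of a 64-bit object ID."""
    return object_id >> 32, object_id & 0xFFFFFFFF

print(split_id(123456789))      # (0, 123456789): high half usually 0
print(split_id((5 << 32) | 7))  # (5, 7): a high half in use
```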
OBJects Operand in client 'DEFine SCHedule' ADSM
server command which allows
specification of names to be operated
upon by the ACTion. Here you would
define file systems to be backed up when
ACTion=Incremental, which would
otherwise take the filesystem names from
the Client User Options File (dsm.opt)
DOMain names.
Objects compressed by: Element in a Backup statistics summary
reporting how compressible the data was,
as determined by the client as it was
required to compress the data during the
backup, per client or server options.
Is computed as the sum of the sizes of
the files as they reside in the client
file system, minus the number of bytes sent
to the server, divided by the size of
the files as they reside in the client
file system.
If negative (like "-29%"), then the data
is expanding during compression, as can
happen when it is already compressed. In
this case, see if you have the option
COMPRESSAlways coded as Yes, and
consider instead making it No.
See also: COMPRESSAlways
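The arithmetic described above can be sketched as follows (my own rendering of the formula, not client source code):

```python
def objects_compressed_by(original_bytes, sent_bytes):
    """Percent saved: (original - sent) / original, per the text above."""
    return round((original_bytes - sent_bytes) / original_bytes * 100)

print(objects_compressed_by(1000, 700))   # 30  (data shrank 30%)
print(objects_compressed_by(1000, 1290))  # -29 (already-compressed data grew)
```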
objects deleted: Element of Backup summary statistics,
reflecting the number of file system
objects that the Backup process found
gone since the last backup, by virtue of
comparing the file system contents
against the list of objects that the
client got from the server at the
beginning of the backup job.
Note that there will necessarily be no
objects deleted if running a Selective
backup or an Incremental with
-INCRBYDate.
Objects in database, by nodename SELECT SUM(NUM_FILES) AS \
"Number of filespace objects", \
NODE_NAME FROM OCCUPANCY GROUP BY \
NODE_NAME ORDER BY \
"Number of filespace objects" DESC'
Objects in database, total SELECT SUM(NUM_FILES) AS \
"Total filespace objects" FROM OCCUPANCY
Objects Updated "Total number of objects updated"
element in a backup statistics summary.
The Objects Updated field displays the
number of files or directories whose
contents did not change but whose
attributes or ACLs had changed. The
server updates its information about the
attributes or ACLs without the objects
themselves having to be sent to the
server.
OBSI Open Backup Stream Interface.
OBSI is an SQL-BackTrack component that
provides the interface between BackTrack
and a storage device or storage
management system (like ADSM).
OCCUPANCY SQL table reflecting the filespace
objects inventory as reside in storage
pools (which is not necessarily all of
the file system objects). Occupancy
reflects all versions of stored files,
Active and Inactive. Columns:
NODE_NAME Node name, upper case.
TYPE 'Bkup', 'Arch', 'Spmg'
FILESPACE_NAME
STGPOOL_NAME
NUM_FILES Number of files in
storage pools.
PHYSICAL_MB
LOGICAL_MB
See also: Query OCCupancy
Occupancy of storage pool See: Query OCCupancy
ODBC Open DataBase Connectivity (Microsoft).
A standard low-level application
programming interface (API) designed for
use from the C language for accessing a
variety of DBMSs with the same source
code. It uses Structured Query Language
(SQL) as its database access language.
ADSMv3 provides an ODBC interface in the
Windows client (only), which enables the
SQL client to perform Select's (only) on
the TSM database, with output therefrom
to be manipulated by other ODBC
compliant applications. This is
beneficial in offloading SQL processing
from the TSM server. Sample
applications which can use this:
Lotus Approach, Microsoft Access, Excel.
Because Selects are employed, ODBC has
the same limited view of the TSM DB as
the server administrator has, meaning
that file attributes, etc. cannot be
seen. It is also just as slow.
Ref: ADSM Version 3 Technical Guide
redbook; TSM 5.1 Technical Guide
redbook, Appendix A.
ODBC driver Is supplied by your DB supplier.
ODBC tracing For ODBC, there are two types:
1. ODBC Driver Manager trace, which is
enabled via the "Tracing" tab in the
ODBC Data Source Administrator.
2. TSM-specific ODBC driver tracing,
which is enabled in the TSM-specific
ODBC driver configuration dialog (the
one whose title is "Configurate a
TSM Data Source").
OEM Oracle Enterprise Manager
Off-Line Copy Status of a database volume in a
'Query DBVolume' display.
Investigate why it's offline. If it
looks like it should be okay, do a
'VARy ONline VolName' to get it back.
Don't tarry, as you are in jeopardy
while the mirrored copy is down.
OFfsite Access Mode for a Copy Storage Pool
volume saying that it is away and can't
be mounted. The Offsite designation
serves to both identify the disaster
recovery intent of the volumes and
prohibit their incidental mounting.
(They should be mounted only to recover
from a disaster, after being brought
back onsite.)
Special characteristics for Offsite:
- Mount requests are not generated;
- In reclamation or Move Data
operations conducted on Offsite
volumes, the files represented on
those volumes is taken from available
on-site storage pools;
- Empty offsite scratch volumes are not
deleted from the offsite copy storage
pool.
Set with 'DEFine Volume' and
'UPDate Volume ... ACCess=OFfsite'.
You would typically do this after a
'BAckup STGpool' such that the volumes
which it created could be removed to an
offsite location after library ejection
via CHECKOut. (It is best to do this
with a Copy Storage Pool separate from
the one which you would keep on-site
for immediate, non-disaster recoveries.)
Offsite, how to send volumes Can query first, as:
'Query Volume *
ACCess=READWrite,READOnly
STatus=FILling,FULl
STGpool=copypoolname'
Mark all newly created copy storage pool
volumes unavailable:
'UPDate Volume * ACCess=OFfsite
LOcation="Sent offsite."
WHERESTGpool=CopypoolName
WHEREACCess=READWrite,READOnly
WHERESTatus=FILling,FULl'
Then eject each volume:
'CHECKOut LIBVolume LibName VolName
[CHECKLabel=no] [FORCE=yes]'
Later, to bring back:
'CHECKIn LIBVolume LibName VolName
STATus=PRIvate DEVType=3590'
'UPDate Volume VolName ACCess=READWrite'
(Alternately, consider using the MOVe
MEDia command, which replaces the UPDate
Volume and CHECKOut LIBVolume steps.)
Offsite reclamation See: Offsite volume reclamation
Offsite REUSEDELAY It is recommended that you set the
REUsedelay parameter for your copy
storage pool to be at least as long as
the oldest database backup you intend to
keep. This will ensure that reclaimed
volumes are retained long enough to
guarantee the recovery of expired files.
Offsite volumes that you see are in the
PENDING state are empty but are awaiting
release based on the REUsedelay value.
(From Admin Guide, Chapter 11 "Managing
Storage Pools", "Reclamation and MOVE
DATA Command Processing".
Offsite tapes, eject Consider acquiring the ADSM DRM facility
and using its command 'MOVE DRMEDIA',
which will eject the volumes out of the
library before transition the volumes to
the destination state.
Offsite tapes, empty? Do 'Query Volume ACCess=OFfsite
STatus=EMPty' to identify. Also note
that at start-up, TSM writes messages
like the following to the Activity Log:
ANR1423W Scratch volume 000052 is empty
but will not be deleted - volume
access mode is "offsite".
Offsite tapes, empty, return 'UPDate Volume * ACCess=READWrite
(for copy storage pool tapes) WHERESTGpool='name of offsite pool'
WHERESTatus=EMPTY WHEREACCess=OFfsite'
This will automatically delete empty
offsite volumes from ADSM and if you
are using a tape management system,
flag them to be returned.
Offsite volume reclamation When you do perform offsite reclamation,
it is recommended that you turn on
reclamation for copy storage pools
during your storage pool backup window
and before marking the copy storage pool
volumes as OFfsite. Next, turn off
reclamation and then mark any newly
created volumes as OFfsite. This
sequence will keep partially filled
offsite volumes as-is, preventing them
from essentially being copied to onsite
volumes. (See Admin Guide, Managing
Storage Pools, "Reclaiming Space in
Sequential Access Storage Pools",
"Reclamation for Copy Storage Pools",
"Reclamation of Offsite Volumes".)
Because the volume involved is not
present, its file complement has to be
obtained from onsite tapes in order to
effect reclamation. The process is
designed so that all files needed from a
particular primary volume are obtained
at the same time, regardless of which
volume reclamations need these files.
Note that it may take some time for the
reclamation to actually start in that
the server has to perform a lookup for
every file on the offsite volume to
determine what onsite volumes they are
on, so as to gather all the input tapes
into an efficient, ordered collection.
This is obviously rather expensive, so
it's best to let offsite tapes get as
empty as possible by themselves, and do
reclamation only if and when the tape
supply is low.
An offbeat approach to emptying
nearly-empty volume is to simply do a
DELete Volume on them: being copy
storage pool volumes, the deleted
contents would be recreated on a fresh,
local tape by the next BAckup STGpool.
Note that this may be ill-advised in
that you are eliminating your safety
copy of client data.
See also: ANR1173E
Offsite volume recovery An offsite (copy storage pool) volume
has evidenced a bad spot. How to recover
its data? RESTORE Volume is not an
option, as it is for primary storage
pool volumes. You might proceed to
perform a DELete Volume, to let the next
BAckup STGpool recreate the contents of
that volume - but that would be prudent
only if you also have an onsite copy
storage pool, as you would otherwise be
gambling that the primary tapes are
perfect. That is, the offsite volume you
may be eager to delete may contain the
only viable copy of some client data. If
no onsite copy storage pool, the most
prudent course would be to do a MOVe
Data on the bad offsite volume, and then
DELete Volume after all data (or as much
as can be) has been moved.
Offsite volume now onsite, but A volume returned from offsite and its
reclamation happening like offsite Access mode was changed from Offsite to
Readwrite or Readonly; but reclamation
of the volume is occurring like an
offsite reclamation, using volumes from
original storage pools which contain the
files on the "offsite" volume. Possible
causes:
- The Access mode of the offsite storage
pool itself is perhaps Unavailable,
rather than Readwrite or Readonly;
- When the volume was returned to
onsite, it was not Checked In.
- If using DRM, you should not be trying
to do on-site reclamation: you need to
let reclamation empty the volumes,
then request the return of volumes
that are in a DRM state of
VAULTRetrieve (empty). Upon their
return, one way to handle is 'MOVe
DRMedia VolName WHERESTate=VAULTR
TOSTate=ONSITERetrieve' for each. The
volumes will then be available for
Checkin as scratch tapes.
OFS See: Open File Support
OnBar Informix DB: Online Backup And Restore.
OnBar is a utility that comes with the
online product starting with the
7.21.UC1 version. This utility has the
ability to:
- Perform parallel backups and restores
of online.
- Automatic & continuous backups of the
logical logs.
- Use 3rd Party Storage Managers to
store the online backups.
OnBar keeps track of all the
backup objects in its SYSUTILS table:
the name and object ID from the storage
manager.
See also: TDP for Informix
Online documentation (Books) Located in /usr/ebt/adsm/
From the Unix prompt: 'dtext', which
invokes the DynaText hypertext browser:
/usr/bin/dtext -> /usr/ebt/bin/dtext.
OP=REW A rewind operation, as seen in tape
error messages.
Open File Support (OFS) TSM 5.2 facility for backing up open
files. OFS is not a default install
option: you would have to perform a
custom install to get it. OFS cannot be
turned on and off via options: once
there, it is always there - you would
need to use the setup wizard to remove
OFS. The Windows INCLUDE.FS option can
be used to specify whether a drive uses
open file support.
When using open file support, the entire
volume is backed up via the snapshot
method - not just open files. The idea
is to capture the entire volume at a
moment in time (hence the photographic
term "snapshot"). While the backup is
running, disk writes are intercepted by
the LVSA, held until the LVSA can copy
the original data (at the block level)
to the snapshot cache, then allowed to
go through. When it is time for TSM to
back up the changed file, TSM backs up
the original data from the snapshot
cache, not the changed data.
Performance: There is some additional
overhead, which will vary with the
amount of data being changed during the
course of the backup.
See also: Image Backup; Snapshot
Open registration Clients can be registered with the
server by the client root user.
This is not the installation default.
Can be selected via the command:
'Set REGistration Open'.
Ref: Installing the Clients
Contrast with "Closed registration".
Open Systems Environment Name of licensing needed for AFS/DFS
volume/fileset backup. If you try to
use buta but lack the license, you will
get error message:
ANR2857E Session 19 with client AFSBKP
has been rejected; server is not
licensed for Open Systems Environment
clients.
If have license, start-up shows:
ANR2856I Server is licensed to support
Open Systems Environment clients.
OpenLDAP database, back up to TSM See Redpaper: "Backing Up Linux
Databases with the TSM API"
OpenVMS Is supported as a client using the
client software called STORServer ABC
(Archive Backup Client).
http://www.storserver.com
http://www.rdperf.com/RDHTML/ABC.HTML
See also: ABC
Operating system used by a client Shows up in 'Query Node' Platform.
-OPTFILE ADSMv3+ client option for specifying the
User Options File to use for the
session. (In Unix, this means the client
user options file: you cannot use
-OPTFILE to point to an alternate client
system options file.)
Note that this command line option
cannot be used with all commands, while
the DSM_CONFIG environment variable
method always works. And, obviously,
this option which specifies an options
file cannot be specified in the options
file.
Not for use in a client schedule.
See also: DSM_CONFIG; Platform
Optical disc performance vs. tape Thus far, the performance of optical
volumes/libraries is far below that of
tapes, whether SCSI 1 or II.
Ref: performance measurements in
Redbook "AIX storage management"
(GG24-4484), page 43/44.
OPTIONFormat (HSM) Client User Options file (dsm.opt)
option to specify the format users must
use when issuing HSM client commands:
STANDARD (long names) or SHORT.
Default: STANDARD
Options, client, query ADSM: 'dsmc Query Option'
TSM: 'dsmc show options'
Options, server, query 'Query OPTion'
Options file, Windows Use 'dsmcutil update' and use "/optfile"
to specify a different option file for
any of the installed TSM services.
.ora Filename suffix for Oracle files.
Oracle backup See: TDP for Oracle
Oracle database factoids Oracle .dbf files are initially
allocated at a pre-specified size and
populated with long runs of zero bytes.
Some of the zero bytes are replaced with
real data as applications write to the
database. A .dbf file with a generous
allocation may still consist mostly of
long runs of zero bytes even after it
has been in use for a while. Compression
algorithms can achieve results much
better than the typical three to one
when working on long runs of zero
bytes: such files compress down to
nearly nothing.
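The point about zero runs is easy to verify with any deflate-style compressor; zlib here stands in for the client's compression engine (an assumption, since the client uses its own LZ-type algorithm):

```python
import zlib

# 1 MB of zero bytes, like the unused expanse of a freshly
# allocated .dbf file.
raw = bytes(1024 * 1024)
packed = zlib.compress(raw)
ratio = len(raw) / len(packed)
print(len(raw), len(packed))  # compressed size is roughly 1 KB
print(ratio > 500)            # far better than the typical 3:1
```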
Order By SQL operation to sort the data in a
query. Syntax:
ORDER BY "column-list" [ASC | DESC]
where the order can be ASCending or
DESCending: ASC is the default.
This is expensive, so don't use unless
you have to.
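ORDER BY can likewise be tried against any SQL engine; sqlite3 here stands in for the TSM server (an assumption about dialect parity), with invented occupancy rows:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE occupancy (node_name TEXT, num_files INT)")
db.executemany("INSERT INTO occupancy VALUES (?, ?)",
               [("B", 5), ("A", 9), ("C", 1)])
# DESC sorts largest-first; ASC (the default) would reverse this:
rows = db.execute("SELECT node_name FROM occupancy "
                  "ORDER BY num_files DESC").fetchall()
print(rows)  # [('A',), ('B',), ('C',)]
```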
ORM Offsite Recovery Media: media that is
kept at a different location to ensure
its safety if a disaster occurs at the
primary location of the computer
system. The media contains data
necessary to recover the TSM server and
clients. The offsite recovery media
manager, which is part of DRM,
identifies recovery media to be moved
offsite and back onsite, and tracks
media status.
ORMSTate UPDate VOLHistory operand, to specify a
change to the Offsite Recovery Media
state of a database backup volume. The
ORMSTATE options correspond to the DRM
STATE shown in the Q DRMEDIA output.
Orphaned stub file (HSM) A stub file for which no migrated file
can be found on the ADSM server your
client node is currently contacting for
space management services.
Reconciliation detects orphaned files
and writes their names to the
.Spaceman/orphan.stubs file.
A stub file can become orphaned, for
example, if you modify your client
system options file to contact a
different server for space management
than the one to which the file was
migrated.
OS Operating System (Unix, Windows, etc.).
.OST Filename extension, "Off Site Tape", for
Backup Sets where Devclass is type FILE.
Out-of-band database backup TSM 3.7 facility to be able to make a
full backup of the TSM database, as for
offsite purposes, without interfering
with the prevailing full+incremental
backup series. This backup can be used
to restore the server db to a point in
time.
Out-of-space protection mode One of four execution modes provided by
the 'dsmmode' command. Execution modes
allow you to change the HSM-related
behavior of commands that run under
'dsmmode'. The out-of-space protection
mode controls whether HSM intercepts
out-of-space conditions.
See also: execution mode.
Originating file system The file system from which a file was
migrated. When a file is recalled using
normal or migrate-on-close recall mode,
it is always returned to its originating
file system.
OS/400 There has not been a TSM Backup/Archive
client or related scheduler for this
operating system.
"Out of band" Refers to an action which does not
participate within an established
regimen. In TSM, examples are:
- Selective backups, as opposed to
Incremental backups.
- BAckup DB ... Type=DBSnapshot
-OUTfile Command-line option for ADSM
administrative client commands
('dsmadmc', etc.) to capture interactive
command results in the file named in
"-OUTfile=FileName". Note that this
output is "narrow".
Alternately you can selectively redirect
the output of commands by using ' > '
and ' >> '. Note that this output is
supposed to be "wide" - but the output
of some commands like 'q stg' is still
narrow.
See also: Redirection of command output
Ref: Administrator's Reference
Output width See: -COMMAdelimited; -DISPLaymode;
SELECT output, column width;
Set SQLDISPlaymode; -TABdelimited
OVFLOocation Keyword for Primary and Copy Storage
Pool definitions specifying a string
identifying the location where volumes
will go when they are ejected from the
(full) library when processed by the
MOVe Media command.
See: MOVe Media, Overflow Storage Pool
Overflow Storage Pool An overflow storage pool can be used for
both primary and copy storage pools and
allows, when a library becomes full, the
removal and tracking of some of the
volumes to an overflow location. An
overflow storage pool is not a physical
storage pool; it is a location name
where volumes are physically moved to,
having been removed from a physical
library.
Ref: Admin Guide, "Managing a Full
Libary"
See also: MOVe MEDia; OVFLOcation;
Query MEDia
OVFLOwlocation You mean: OVFLOcation
Owner The owner of backup-archive files sent
from a multi-user client node, such as
AIX.
OWNER SQL: Column in BACKUPS table.
Is the owner of the file as defined on
the client system.
In Unix, this would normally be the
username of the owner. If the username
is not defined in the passwd system,
such that 'ls -l' shows the owner as a
UID number instead of a username, then
the same numeric will show up in the
OWNER column.
Paging space This is not really an *SM topic, but it
can affect *SM server functionality, so
I include these notes... Paging space is
in effect "tidal" space for real memory.
It is the space which makes virtual
memory possible. As such, its size needs
to be proportional to real memory size
for it to be meaningful - and for the
system to be able to function. Sadly,
we've seen some operating systems set up
by people who don't understand virtual
memory, and TSM suffers as a result. For
example, we've seen a major AIX-based
TSM system, with a hefty 12 GB of real
memory, given 2 GB of paging space...as
if the person who did it was referring
to a worksheet which the site has been
using for the past eight years for
setting up any AIX system, regardless of
size. Such a system is effectively being
put into a "virtual=real" state where
it's like the system is supposed to run
in real memory only - which it
architecturally can't...and will crash
processes as there's no room. (AIX will
issue a SIGDANGER signal to processes
for them to voluntarily quit, before it
gets drastic or fails utterly.)
In general, it is healthy for paging
space to be about twice the size of real
memory. A specific recommendation from
the AIX performance Analysis group (in
APAR IX88159): total paging space =
512MB + (memory size - 256MB) * 1.25
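The APAR IX88159 sizing rule quoted above can be shown as a worked example; the sketch below applies it to the 12 GB server described in this entry (all figures in megabytes).

```python
# Worked example of the paging-space sizing rule quoted above:
# total paging space = 512MB + (memory size - 256MB) * 1.25
def recommended_paging_mb(real_memory_mb):
    return 512 + (real_memory_mb - 256) * 1.25

# For the 12 GB (12288 MB) system described in the entry, the rule
# suggests about 15.2 GB of paging space - far more than the 2 GB
# that system was actually given.
print(recommended_paging_mb(12288))  # 15552.0 MB, i.e. ~15.2 GB
```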
Parallel backups See: Backups, parallelize
Partial Incremental Is an Incremental which operates without
a list of the Active files having been
obtained from the *SM server, and thus
does not necessarily back up all files
in a file system, does not cause
expiration or rebinding of files on the
server, and ignores the frequency
attribute of the Copy Group. Types:
INCRBYDate, which operates only upon the
date of the last Incremental backup; and
Subset Incremental, which addresses only
the file system objects which you
specify.
Partition disks, should you? The question comes up as to whether
disks used for the TSM Database and
Recovery Log should be used as whole
disks or partitioned ("logical volumes",
in AIX parlance). If you have smallish
disks (2-4GB), by all means use them as
whole disks, as that's a nice, modular
size. With the larger disks more common
today, it is better to partition them
into units of about 4 GB each. This
modular approach yields greater TSM
parallelism in multiple TSM threads, and
allows you to add dbvols in nice unit
sizes as the db grows. The basic
advantage of partitioning also pertains:
it isolates the effects of a surface
fault, which then affects only that
partition instead of the whole disk, if
it were unpartitioned. This makes it far
less painful and time-consuming to swap
that LV out of a mirrored set and swap
in one of those nice replacement LVs you
have set aside.
PASsword ADSMv2: Macintosh and Windows clients
only.
ADSMv3: All clients.
The PASsword option specifies an ADSM
password. If this option is not used and
your administrator has set
authentication on, you are prompted for
a password when you start an ADSM
session. Ostensibly, this password
would serve to satisfy the first
requirement for a password in the
Generate case, and every occurrence in
Prompt mode. But if it's changed in the
server, the client has to be brought
into sync.
-PASsword Option you can code on client command
line ('dsmc', 'dsmadmc', etc) to specify
the client password for interacting with
the server.
Example: 'dsmadmc -id=MyId -pas=MyPw'.
Note that you will not have to do this
for basic 'dsmc' operation when
"PASSWORDAccess Generate" is active for
your client, except when you are
performing cross-client operations,
where you have to specify the password
of the alien client. But you *do* have
to specify it when invoking 'dsmadmc'
because the password involved is not
that of the node, but rather for the
administrator specified via -ID=____.
The -PASsword option is ignored when
PASSWORDAccess Generate is in effect:
you cannot provide it on the command
line to establish the client-stored
password.
The client is supposed to alter the
argv[] strings so that the password is
not revealed to other users in the
system when they run the 'ps -efl'
command.
Where you do have to specify
-PASsword=____, an issue for interpreted
scripts is that the password apparently
has to be coded into the script, thus
exposing it in that way. This can be
circumvented by coding the password
itself in a file which is accessible
only to the authorized user or group of
authorized users, and having the script
read the password from that file.
Another approach is to engineer a rather
trivial proxy agent which would accept a
TSM command string you provided, which
it would itself invoke with the password
it knows about, and then pass back the
results. Such an agent could be a
command where the password is encrypted
into the binary, or a minor daemon.
For query-only processing you might
define an administrator ID with only
query capability, and not be concerned
about the password being known. This
lessens concerns, but is nevertheless a
privacy/security issue in all the server
information being potentially available
to anyone.
See also: -NODename
Password, administrator, change/reset 'UPDate Admin Admin_Name PassWord'
See also: Administrator passwords, reset
If the administrator foolishly set up
their own ID so that its password
expires, it can be re-established by
restarting the TSM server in foreground,
then reset the admin password from the
privileged console.
Password, client, change at client dsmc SET Password <OldPw> <NewPw>
Password, client, display On Windows, prior to TSM 5.3 (where
capability was removed) you can do:
dsmcutil showpw /node:yournodename
Password, client, establish without Windows: You can establish the client
contacting the server password into the registry without
contacting the TSM server by issuing the
command:
dsmcutil updatepw /node:nodename
/password:xxx /validate:no
This is particularly necessary when
client option SESSIONINITiation
SERVEROnly is in effect, or the
equivalent spec is in effect on the
server side in the Node's definition,
such that the client cannot initiate a
session with the server.
Password, client, reset at client Windows: Sometimes the client password
is screwed up and has to be
reestablished by coercion. The
following will reset the password
Registry key (noted in "Password,
client, where stored"):
1. Be the local administrator.
2. Clean out the <Nodename> subkey in
the Registry key.
3. Set PASSWORDAccess Prompt in the
options file.
4. Start the client. When prompted for
the password, make sure you can
connect, then quit.
5. Change PASSWORDAccess back to
Generate.
6. Start the client. Enter the password
when prompted, then quit.
Password, client, reset at server 'UPDate Node NodeName PassWord'
Password, client, rules 1-64 chars: A-Z, 0-9, -, .,
+, & allowed; % not allowed.
Password, client, update from server 'UPDate Node NodeName PassWord'
Node must not be currently conducting a
session with the server, else command
fails with error ANR2150E.
Password, client, where stored on When "PASSWORDAccess Generate" is
client selected in the Client System Options
File, the encrypted password is stored
on the client as follows:
Unix: Per the PASSWORDDIR option.
Defaults:
AIX ADSM: /etc/security/adsm/SrvrName
AIX TSM: In the baclient directory in
a file called X.pwd where X is a long
alphanumeric name made up by dsm*.
Other Unixes: /etc/adsm/SrvrName
Macintosh: Per the PASSWORDDIR option.
Default: In the install directory.
Windows: In Registry key
HKEY_LOCAL_MACHINE\SOFTWARE\IBM\ADSM
\CurrentVersion\BackupClient\Nodes
\<Node_Name>\<Server_Name>
Data name: Password
The encrypted password is stored in
the Registry on a per-node basis (a
separate password is generated for
each node used to connect to the
server). The SHOWPW command of the
DSMCUTIL utility may be used to
decrypt the password for a specified
node and display it in clear text.
2000: Under SOFTWARE, string ADSM
The password was established in the
server 'REGister Node' command, and
becomes set on the client when a
non-trivial command such as 'dsmc Query
SCHedule' is run ('dsmc Query Option'
is too trivial) by the "superuser"
(root in Unix; Administrator in
Windows).
Note that if you have multiple server
stanzas in your options file and have
"PASSWORDAccess Generate", you will be
prompted once for each as you use it,
and it will be stored under that server
stanza name.
Note that if you upgrade the operating
system (e.g., from Windows NT to Windows
2000), the place where the password was
stored will likely be replaced,
obliterating the previously stored
passwords.
See also: /etc/adsm;
/etc/security/adsm; PASSWORDDIR
Password, change client's 'dsmsetpw' (an HSM command)
NT: 'dsmcutil updatepw'
Password authentication Require password for administrators and
client nodes to access the server per
REGister Node and Set AUthentication.
Password authentication, turn off Via TSM server command:
'Set AUthentication OFf'
Password authentication, turn on Via TSM server command:
'Set AUthentication ON'
Password expiration, node Per REGister Node, PASSExp= .
Password expiration period, query In server: 'Query STatus', look for
"Password Expiration Period"
Password expiration period, set 'Set PASSExp N_Days' 1-9999 days.
(Defaults to 90 days).
Password length, query Do 'Query STatus', view "Minimum
Password Length"
Password length, set See: Set MINPwlength
Password security The *SM (encrypted) password is not sent
in the clear: During authentication, the
client sends the server a message that
is encrypted using the password as the
key. The server knows what the decrypted
message should be, so if the wrong
password was used to encrypt the
message, then the authentication will
fail.
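The entry above describes a challenge-style scheme in which the password itself never crosses the wire. A minimal sketch of that general idea follows - this is not TSM's actual wire protocol; the key derivation, the use of HMAC-SHA256, and the message layout are all illustrative assumptions.

```python
# Sketch of challenge-response authentication: the password never
# travels on the wire, only a proof computed from it. NOT the real
# TSM protocol - cipher and key derivation are assumed for clarity.
import hashlib
import hmac
import os

def make_proof(password, challenge):
    # Derive a key from the password and use it to authenticate
    # the server-supplied random challenge.
    key = hashlib.sha256(password.encode()).digest()
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

# Server side: issue a random challenge; client returns its proof.
challenge = os.urandom(16)
client_proof = make_proof("secret", challenge)

# The server recomputes the proof with its copy of the password.
# A wrong password yields a different proof, so authentication
# fails without the password ever having been transmitted.
assert hmac.compare_digest(client_proof, make_proof("secret", challenge))
assert not hmac.compare_digest(client_proof, make_proof("wrong", challenge))
```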
PASSWORDAccess Option for Client System Options File
(PASSWORDAccess Generate) to specify how your *SM client node
password is to be handled. Code within a
server stanza (under the appropriate
SErvername spec).
"Prompt" will cause a prompt for the
password every time the server is
accessed. This is the default - but
should not be used with HSM. If used
with Shared Memory access
(COMMMethod SHAREDMEM), the client must
either be root or be the same UID under
which the server is running.
"Generate" suppresses password
prompting, causing the password to be
encrypted and stored locally (in
/etc/security/adsm/SrvrName), and a new
password to be generated when the old
one expires. Causes dsmtca (q.v.) to
run as root. Use this when HSM or the
web client are involved. "Generate"
should be used with Shared Memory
access (COMMMethod SHAREDMEM) when the
client is not root or does not match
the UID under which the server is
running.
To establish the password: As
superuser, perform any client-server
operation, like 'dsmc q f'.
Note that if you have multiple server
stanzas in your options file, you will
be prompted once for each as you use
it. (If the generated password file
turns out to be zero-length, look for
its file system being full.)
Generate is unsuitable for use with
various APIs, such as TDP for Domino
with 'DOMDSMC /ADSMNODE', as a security
feature. (To use Generate, you would
have to code NODENAME in dsm.opt.) TDP
for Oracle similarly prohibits
Generate.
When "Generate" is in effect, you cannot
use the NODename option, because of the
need to reference the /etc/security/adsm
password, so you must not have the
option to fake the node name.
APAR IC11651 claims that if
PASSWORDAccess is set to Generate in
dsm.sys, then dsm.opt should *not*
contain a NODE line.
See also: ENCryptkey; MAILprog;
PASSWORDDIR
PASSWORDDIR Option for Client System Options File
to override the natural directory which
the TSM client should use to store the
encrypted password file when the
PASSWORDAccess option is set to
GENERATE.
Default: Is the most appropriate place
for the given operating system:
AIX: /etc/security/adsm/SrvrName
Other Unixes: /etc/adsm/SrvrName
NT: Registry.
See also: /etc/adsm;
/etc/security/adsm; Password, client,
where stored on client; PASSWORDDIR
Patch levels E-fix: An emergency software patch
created for a single customer's
situation.
Limited Availability (LA) patch: A
limited release of a patch just before
it is generally available.
General-availability (GA) patch:
Intended to be distributed to all
users. These patches have completed
the verification process.
Ref: Tivoli Field Guide: An Approach to
Patches
Path, drives SQL query SELECT COUNT(*) AS -
"Number of Free Drives" from drives -
WHERE DRIVE_NAME NOT IN (SELECT -
DESTINATION_NAME FROM PATHS WHERE -
ONLINE='NO') AND ONLINE='YES' AND -
DRIVE_STATE IN ('EMPTY','UNKNOWN')
Paths As of TSM 5.1, the procedure for
defining a tape library or tape drive
changed: it is now necessary to define a
data path for all libraries and drives,
including local libraries and drives.
The path definitions are necessary for
the server-free product enhancements.
PATHS SQL table for info about library and
drive paths. Elements, as of TSM5.2:
SOURCE_NAME:
SOURCE_TYPE:
DESTINATION_NAME:
DESTINATION_TYPE:
LIBRARY_NAME:
NODE_NAME:
DEVICE: Like "/dev/rmt2"
EXTERNAL_MANAGER:
LUN:
INITIATOR_ID:
DIRECTORY:
ONLINE: YES/NO
LAST_UPDATE_BY: <Admin ID>
LAST_UPDATE: <Date> <Time>
See also: DRIVES
Pct Logical Header in Query STGpool F=D output.
Specifies the logical occupancy of the
storage pool as a percentage of the
total occupancy. Logical occupancy
represents space occupied by files which
may or may not be part of an Aggregate.
A value under 100% indicates that there
is vacant space within the Aggregates,
which Reclamation can reclaim in its
compaction of Aggregates.
A high value is desirable and means that
a small fraction of the total occupancy
in your storage pool is vacant space
used by logical files that have been
deleted from within aggregates. There
are various reasons why this value may
appear to remain high, including:
- Most of the storage pool occupancy is
attributed to non-aggregated files
that were stored using a pre-Version 3
server;
- You are not getting much aggregation
because client files are very large or
because your settings from the client
TXNBytelimit option or TXNGroupmax
client option are too small;
- If logical files within aggregates are
closely related, they may all tend to
expire at the same time so entire
aggregates get deleted rather than
leaving aggregates with vacant space.
- Reclamation of sequential storage
pools removes vacant space within
aggregates and raises the %Logical
value for that pool.
See also: Logical file
Pct Migr Header in Query STGpool output.
Estimates the percentage of data in the
storage pool that can be migrated; that
is, migratable. It is this value that
is used to determine when to start or
stop migration.
Pct Migr indicates the amount of space
occupied by committed files, as
contrasted with the Pct Util value which
can reflect allocated, pending file
occupancy when a client data transaction
is in progress. So, a Pct Util value may
be like 77.9, and no migration is
happening - because the Pct Migr value
is 0.0.
Caching: Pct Migr does *not* include
space occupied by cached copies of
files. For example, an archive storage
pool that is 99% full with a Pct Migr of
15.1 means that 15.1% of the data is
new: an image of it has not yet been
migrated down to the next storage pool
in the hierarchy so as what's in this
higher level storage pool represents
caching. The other 83.9% of the files
are old, and were previously migrated
with the cached image left in the
storage pool.
A value of 0.0% says that all data has
already been migrated.
For a disk storage pool, a high Pct Util
and a low Pct Migr may reflect caching,
with the data being in both places.
For sequential devices (tape), reflects
the number of volumes containing viable
data; and Pct Util shows how much of
that space is actually used.
See also: Cache; Migration; Pct Util -
Query STGpool
Pct. Reclaimable Space Report element from Query Volume.
(SQL: PCT_RECLAIM) This is how much of the volume is empty
and reclaimable, reflecting all empty
space:
- places where whole Aggregates have
been logically deleted;
- where space within Aggregates has
been freed.
Contrast with Pct Util, which does not
account for voids within Aggregates.
Pct. Reclaimable Space is more in tune
with what Reclamation will address:
space within aggregates.
Unfortunately, though percent
reclaimable space may be high for some
volumes, their percent utilization may
be high as well, which will make for a
lot of data movement during
reclamation. Frustratingly, volumes
further down in percent reclaimable
space levels may have far smaller
percent utilizations, and would reclaim
much faster.
The Pct. Reclaimable Space figure climbs
as a reclamation or MOVe Data proceeds.
Seeing the reclaimable space go from a
considerable value to 0 in a MOVe Data
operation suggests that all the
reclaimable space was whole Aggregates,
as in the case of a tape volume
containing predominantly large files,
with almost no possibility of space
being logically freed within Aggregates.
Pct Util, from Query FIlespace Column in 'Query FIlespace' server
command output, which reflects the
percent utilization of the object as it
exists on the client, such as how full a
Unix file system is. Note that this
does *not* reflect the space occupied in
TSM.
See also: Capacity
Pct Util, from Query STGpool Column in 'Query STGpool' server command
output. Specifies, as a percentage, the
space used in the storage pool. That
space may be occupied by data in any
state, including data involved in
transactions which have not yet
committed it (so not yet eligible for
Migration (see Pct Migr)).
Disk: Reflects the total number of disk
blocks currently allocated by TSM. Space
is allocated for backed-up, archived, or
space-managed (HSM) files that are
eligible for server migration, cached
files that are copies of previously
migrated files, and files that reside on
any volumes that are offline.
Note that the Pct Util value has few
decimal places, which limits the
accuracy of values computed with it, as
in multiplying by the Estimated
Capacity value in hopes of yielding the
amount of data stored in the stgpool.
Remember that the value is a percent
number: to use it in computation, you
must adjust. For example: for a Pct Util
of 0.2, its corresponding computational
value is 0.002 .
The Pct Util value from the query
corresponds to the PCT_UTILIZED value
from a 'Select from Stgpools' - and note
that the PCT_UTILIZED value has been
seen to be lower than the Pct Util
value (e.g., 0.2 for the query, 0.1 for
the Select).
Note that Pct Util can be higher than
the value of Pct Migr (the migration
control percentage) when a client data
transaction such as a Backup is in
progress. The Pct Util value reflects
the amount of space actually allocated
(while the transaction is in progress).
Contrast with the value for Pct Migr,
which only represents the space occupied
by *committed* files.
At the conclusion of the transaction,
Pct Util and Pct Migr become
synchronized.
See also: Pct Migr
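The percent-to-fraction adjustment noted in the entry above is easy to get wrong; the sketch below works one example through (the capacity and Pct Util figures are illustrative numbers, not from a real server).

```python
# Estimating stored data from a Query STGpool report, applying the
# percent-to-fraction adjustment described above. Both input
# figures are invented for illustration.
estimated_capacity_gb = 5000.0   # "Estimated Capacity" column
pct_util = 0.2                   # "Pct Util" column (a percentage)

# Divide by 100 first: a Pct Util of 0.2 means 0.002 of capacity.
stored_gb = estimated_capacity_gb * (pct_util / 100)
print(stored_gb)  # 10.0 GB - not 1000 GB
```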
Pct Util, from Query Volumes In a Query Volumes report, reflects the
(SQL: PCT_UTILIZED) space taken up by unexpired data:
non-aggregated files or, if aggregated,
the amount of space occupied by whole
aggregates (regardless of any empty,
expired space within them, yielding a
somewhat inflated number versus
Pct. Reclaimable Space).
Pct Util is more in tune with what
MOVe Data will address: whole
aggregates.
When the Volume Status is Filling: the
value is *SM's computation of the
amount of data written versus the
volume's estimated capacity. (Note that
if you have short retention periods,
you can have the unusual situation of
files expiring as the tape fills, and
so can also exhibit characteristics of
Full volumes, as below.)
When the Volume Status is Full: the
value will be 100% at the time that *SM
encountered End Of Tape (EOT) when
writing the volume, and thereafter will
reflect the amount of data *logically*
remaining on the volume after file
expirations. (The volume itself remains
unmodified since that time, and in the
real, physical sense it really is
full.)
See also: Filling; Full; Pct Migr
Tapes get marked "full" when *SM hits
the end of volume. If you are getting
media errors, this could happen
prematurely.
Note that the value has only one decimal
position (e.g., 95.1), which may be
insufficient to reflect a tiny amount of
data on a tape: that is, there may still
be data on the tape though the Pct Util
is 0.0 .
Beware Migration, at some TSM levels,
not updating the Pct Util values for
involved tapes until after Migration has
concluded!
Note: Disk pool space is allocated by a
backup session, in anticipation of the
requirements of the backup session. It
will show up as percent utilized and not
percent migratable.
See also: DLT
Peer In data communications parlance, the
subsystem at the other end of a (TCP)
session. This term will show up in
various client ANS error messages, like
"Connection reset by peer", which is to
say the TSM server - which may have
terminated the session because of the
needs of a higher priority session or
process (preemption).
Pending Typical status of a tape in a 'Query
Volume' report (not 'Query LIBVolume'),
reflecting a sequential access volume
which has been purged of all data (it's
empty), but which is awaiting the
STGpool REUsedelay number of days to
elapse before it can be re-used.
Offsite volumes would have a REUsedelay
value at least as long as the oldest
database backup to be kept, to
guarantee the recovery of expired files.
Pending volumes are re-evaluated every
hour, beginning 60 minutes after the
server is started. (Changing the
REUsedelay value to 0 does not cause the
Pending volumes to immediately return to
scratch: it will happen in the next
hourly examination.)
To return a volume to the Scratch pool
before the REUsedelay expires (as when
you're desperate for scratches and
cannot wait for the REUsedelay period),
just do 'DELete Volume ______'. ('UPDate
Volume' cannot return a volume to
Scratch status.)
'DELete Volume' cannot succeed on a
Pending volume while the Space
Reclamation process that cleared it is
still running, clearing other volumes
that it also found reclaimable.
Messages: ANR1342I when volume becomes
pending; ANR1341I when automatically
deleted from stg pool per REUsedelay.
See also: Empty
Pending, when volume became 'Query Volume ______ F=D'
examine "Date Became Pending" value.
Pending volumes 'Query Volume STatus=PENDing'
Percent utilization of storage pool(s) 'Query STGpool [STGpoolName]'
See also: Query OCCupancy
perfctr.ini ADSM 3.1.0.7 introduced a new
performance monitoring function which
includes this file. See APAR IC24370
See also: dsmccnm.h; dsmcperf.dll
Performance topics See: 3590 performance; Backup
performance; Database performance;
Directory performance; DNSLOOKUP;
Expiration performance; Migration
performance; MOVe Data performance; MVS
server performance; Netware restore
performance; NT performance; Reclamation
performance; Restoral performance;
Server performance; Storage pool, disk,
performance; Storage pool volumes and
performance; Sun client performance;
Tape drive performance; Tape drive
throughput; V2archive; Web Admin
performance issues
Perl access to TSM Try the perl module TSM.pm located on
CPAN. The modules provides very easy
access to TSM, says one customer.
Phantom tape ejections See: Ejections, "phantom"
Phantom volume, remove See: Storage pool volume, long gone,
delete
Physical file A file, stored in one or more storage
pools, consisting of either a single
logical file, or a group of logical
files packaged together (an aggregate
file, in small files aggregation).
See also: aggregate file; logical file
Physical occupancy The occupancy of physical files in a
storage pool. This is the actual space
required for the storage of physical
files, including the unused space
created when logical files are deleted
from aggregates (small files
aggregation).
See also: Physical file; Logical file;
Logical occupancy
Physical Space Occupied (MB) Report column from Query OCCupancy
server command: The amount of physical
space occupied by the file space.
Physical space includes empty space
within aggregate files, from which files
may have been deleted or expired.
-PIck Client option, as used with Restore and
Retrieve, to present a numbered list of
objects matching the file specification
you entered, allowing you to select or
"pick" from the list just those objects
you want back. Each object that you
select will get an 'x' mark next to it.
When all desired have been selected,
enter 'O' (ok) to proceed with the
restoral.
Note that if you entered a destination
specification in the invocation, you can
pick
only one item from the list, which is
the singular object to go to that
destination.
-PIck is of particular value when you
need to restore an Inactive version of a
file from among many such versions. To
perform such an operation, restoring to
an alternate name so as to preserve the
original, do like:
dsmc restore -ina -pick
currentfilename currentfilename.old
See also: Inactive files, restore
selectively
PING SERVER ADSMv3 server command to test the
connection between the local server and
a remote one. Syntax:
'PING SERVER ServerName'
Pinning See: Recovery Log pinning
PIT Abbreviation for Point In Time
(restoral).
See: Point-In-Time restoral; GUI vs. CLI
-PITDate Point-In-Time Date option in ADSMv3, to
restore Active files (only) up to the
date specified. (The format of the date
must be that specific to your system,
per the prevailing DATEformat. You can
perform a Query Restore to see the
format in use.)
Will use the No Query Restore protocol.
PITDate will consider every backup made
*until* the indicated date.
Performance note: Is reported to cause
every tape to be mounted and every file
to be moved though few may actually be
needed for replacing client files.
Contrast with "FROMDate" and "TODate".
See also: Inactive files, restore
selectively; Point-In-Time restoral
-PITTime Client option, used with the PITDate
option, to establish a point-in-time for
which you want to display or restore the
latest version of your backups. Files or
images that were backed up on or before
the date and time you specified, and
which were not deleted before the date
and time you specified, are processed.
Backup versions that you create after
this date and time are ignored. This
option is ignored if the -PITDate option
is not specified.
Syntax: PITTime time
where the time specifies a time on a
specified date. If you do not specify a
time, the time defaults to 23:59:59.
Specify the time in the format you
selected with the TIMEformat option.
When you include the TIMEformat option
in a command, it must precede the
FROMTime, PITTime, and TOTime options.
See also: Inactive files, restore
selectively; Point-In-Time restoral
Planet Tivoli A technical, solutions-oriented, systems
management conference that offers
attendees an in-depth look at the
Tivoli management solution and the
industry surrounding it: your
opportunity to mingle with your industry
peers.
Go to http://www.tivoli.com/news/,
click on Planet Tivoli in sidebar.
Platform As in 'Query FIlespace' report.
The platform designation reflects the
operating system under which the client
node last contacted the server. There
is no command to change this value.
For dsm and dsmc clients, reflects the
operating system name (e.g., "AIX",
"IRIX", "Linux", "SUN SOLARIS",
"WinNT").
For the API, reflects the name of the
application used in the dsmInit() call.
Note that inadvertently accessing the
server with a nodename associated with a
different platform type can cause real
problems: re-accessing it from the
original platform may reset the platform
designation; but the problem access may
have caused the server to latch onto an
inappropriate "level" designation, which
cannot be reversed like the platform
designation can (see msg ANR0428W).
See also: Query Node
Point-In-Time restoral (PIT) ADSMv3 feature for Query and RESTORE.
Recovers a file space or a directory to
a previous condition, as used to
eliminate data corruption known to have
occurred at a certain time, by restoring
to before that time. It operates by
restoring specified file system objects
known at that time. It is necessarily
vital that your retention values for
both files and directories cover the age
to which you want to recover. (A
capricious DIRMc setting could cause
needed directories to not be available
for the restoral.)
Note that a Point-In-Time restoral does
NOT remove new-name objects that were
created after that point in time: it
does not reinstantiate the file system
to what it entirely looked like at that
time, but rather just brings back files
which were backed up at that time.
Point-In-Time restoral is supported on
the file space, directory, or file
level.
IMPORTANT: Use the command line
interface (CLI) version of the client to
perform Point-In-Time restoral, rather
than the GUI! See "GUI vs. CLI". In
concert with that, the Admin Guide
advises: "Performing full incremental
backups is important if clients want the
ability to restore files to a specific
time. Only a full incremental backup can
detect whether files have been deleted
since the last backup. If full
incremental backup is not done often
enough, clients who restore to a
specific time may find that many files
that had actually been deleted from the
workstation get restored. As a result, a
client's file system may run out of
space during a [PIT] restore process."
See: GUI vs. CLI; -PITDate, -PITTime
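As a hedged CLI illustration (the
filespace, date, and target directory
are hypothetical), a Point-In-Time
restoral might be invoked as:

```
dsmc restore -subdir=yes -PITDate=03/15/2004 \
     -PITTime=17:00:00 "/home/projects/*" /tmp/pitrestore/
```

The date must be given in the format
selected with your DATEformat option,
and the time per TIMEformat.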
Policy domain A policy object that contains one or
more policy sets and management classes
which control how ADSM manages the files
which you back up and archive.
Client nodes are associated with a
policy domain.
See policy set, management class, and
copy group.
Policy domain name associated with 'Query Node' shows node name and the
a client node, query Policy Domain Name associated with it.
Policy domain name associated with Done via 'REGister Node ...' (q.v.).
a client node, set
Policy domain, copy 'COPy DOmain FromDomain ToDomain'
Name can be up to 30 characters.
Policy domain, define 'DEFine DOmain DomainName
[DESCription="___"]
[BACKRETention=NN]
[ARCHRETention=NN]'
Since a client node is assigned to one
domain name, it makes sense for the
domain name to be the same as the client
node name (i.e., the host name).
Policy domain, define Policy Set in 'DEFine POlicyset Domain_Name SetName
[DESCription="___"]'
Policy domain, delete 'DELete DOmain DomainName'
Policy domain, policy set which has 'Query DOmain' will show the Activated
been activated, query Policy Set currently in effect.
Policy domain, query 'Query DOmain' for basic info.
'Query DOmain f=d' for detailed info.
Policy domain, update 'UPDate DOmain DomainName
[DESCription="___"]
[BACKRETention=NN]
[ARCHRETention=NN]'
Policy set A policy object that contains a group of
management class definitions that exist
for a policy domain. At any one time,
there can be many policy sets within a
policy domain, but only one policy set
can be active. So what good is that? Not
much, really. It gives you a really
gross means of switching from one Policy
Set to another via administrator action,
but no means of selecting one or another
from the client end.
See: Active Policy Set; Management Class
Policy set, activate To activate a policy set, specify a
policy domain and policy set name. Be
sure that you have done:
'VALidate POlicyset DomainName
PolicysetName'
beforehand. When you activate a policy
set, the server:
- Performs a final validation of the
contents of the policy set
- Copies the original policy set to the
active policy set
Command:
'ACTivate POlicyset DomainName SetName'
Policy set, active, update You cannot update the ACTIVE policy
set. After a policy set has been
activated, the original and the ACTIVE
policy sets are two separate objects:
updating the original policy set has no
effect on the ACTIVE policy set. To
change the ACTIVE policy set you must do
the following:
- Copy the ACTIVE policy set to a
policy set with another name (or just
use the one from whence the ACTIVE
one came, as 'q domain' shows).
- Update that policy set.
- Validate that policy set.
- Activate that policy set, to have
the server use the changes.
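A sketch of that sequence (domain, set,
and class names hypothetical; the
UPDate shown is just one possible
change):

```
COPy policyset MYDOMAIN ACTIVE WORKSET
UPDate COpygroup MYDOMAIN WORKSET STANDARD RETOnly=90
VALidate POlicyset MYDOMAIN WORKSET
ACTivate POlicyset MYDOMAIN WORKSET
```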
Policy set, copy 'COPy policyset DomainName OldSet
NewSet'
Policy set, define 'DEFine POlicyset Domain_Name SetName
[DESCription="___"]'
Policy set, delete 'DELete policyset DomainName Setname'
Policy set, query 'Query policyset [[DomainName [Setname]]
[f=d]'
Policy set, rename There is no command to simply rename a
policy set; you have to:
- 'COPy policyset DomainName OldSet
NewSet'
- 'UPDate policyset DomainName NewSet
DESCription="___"'
- 'VALidate POlicyset DomainName
NewSet'
- 'ACTivate POlicyset DomainName
NewSet'
- 'DELete policyset DomainName OldSet'
Policy set, update The policy set to be updated cannot be
the ACTIVE policy set.
'UPDate policyset DomainName SetName
DESCription="___"'
Policy set, validate 'VALidate POlicyset DomainName
PolicysetName'
There must be a default management class
defined for the Policy Set.
Polling See: SCHEDMODe
Port number, for 3494 communication Installation of the LMCP should result
in a /etc/services entry looking like:
"lmcpd 3494/tcp # IBM Automated Tape
Library Daemon",
to permit TCP/IP communication via a TCP
port number common between the AIX host
and the 3494. By default, port '3494'
is used, which matches the default at
the 3494 itself. If to be changed, be
sure to keep both in sync. Also, if
using other than the default (3494) you
need to code the port number in
/etc/ibmatl.conf .
Port number, in 3494 LAN Status menu When you define LAN host specification
via the 3494 console, that results in an
assigned port number (100, 101, 102)
which is visible in the LAN Status
display. The number is purely for
internal identification, for the 3494's
own purposes, and has nothing to do with
TCP/IP port numbers (as you would find
in Unix's /etc/services).
Port number for a session See: Session port number
Port numbers (ports) Internet network addressing and access
is unique by:
1. Host
2. Port number
3. Protocol (UDP, TCP)
Port numbers range from 0 to 65535 with
0-1023 being for root use. In the
Internet world, port numbers are
formally assigned (see
http://www.iana.org/assignments/
port-numbers) but within a site the
numbers may be used as needed. Note that
TSM has not formally registered its port
numbers - which have been taken for
other purposes, internationally - which
in some unusual contexts may cause a
non-TSM application to attempt to
interact with the TSM server, with
resultant protocol mismatch failure
(perhaps msg ANR0444W, ANR0484W).
Port numbers, for TSM client/server TSM conventionally uses the following
TCP/IP port numbers, for TCP
communication:
1500 Server port default number for
all session types.
Use the TCPADMINPort server option
to specify a port to separately
handle sessions other than client
sessions (admin, server-to-server,
SNMP subagent, storage agent,
library client, managed server,
event server sessions).
Use the TCPPort server option
to specify a port to separately
handle just client sessions.
(The distinction between the two
options facilitates firewall
configuration.)
Startup msg: ANR8200I.
Settable via server option
TCPADMINPort.
Specify via TCPPort server option
and DEFine SERver LLAddress and
SET SERVERLladdress.
This is also the default port
number for the client to contact
the TSM server, settable via the
client TCPPort option.
See also client option
LANFREETCPport.
1501 Client default port for backups
(schedule) on which the client
listens for sessions from the TSM
server. Per server's Node
definition, LLaddress spec.
Settable via client option
TCPCLIENTPort.
Note that this port exists only
when the scheduled session is due:
the client does not keep a port
when it is waiting for the
schedule to come around.
1510 Client port for Shared Memory,
settable via client option
SHMPort.
(Startup msg ANR8285I).
The TSM Storage Agent also listens
on this shared memory port by
default, settable via client
option LANFREEShmport.
1510 Server TCP/IP port number when
using Shared Memory, settable via
server option SHMPort.
1521 SNMP subagent default port,
settable via server option
SNMPSUBAGENTPORT.
1543 ADSM HTTPS port number.
1580 Administrative web interface
default (settable via server
option HTTPPort). See: Web Admin
1580 Client admin port.
1581 Client port default to respond to
web administrative interface or
Web Client. Settable via client
option HTTPport.
The Trusted Communication Agent client
will use a non-privileged port number
(>1023).
Port 1500 is for the initial
communication with the server, but once
established, a separate session is
forked off with its own port: when the
client connects to the assigned port,
the server rolls the client over to
another random port to keep the initial
port open for further connections. To
avoid this 'random' choice, consider
using Polling Mode scheduling for
clients outside the firewall: the
clients will then only use the TCP port
specified in the client options file.
Establishing separate sessions allows
multiple client sessions to be
established to the server at one time.
When the *SM client establishes a
session with the server, it randomly
selects a socket (port) number that it
calls out on. The ADSM server then uses
that client port number for return
transmissions. If using server-initiated
backups, you can set the client's port
number for the server to use in the
client's system options file. If you do
this, then you will have to set up the
client's TCP/IP to reserve that port
number. The "tcpport <port_address>" is
how the initial port number is
specified. A separate session is forked
once the initial contact is made, but
there is no way to predetermine what
port number will be used: the attempts
will increment the port number until an
established connection is made (or the
client times out).
The Tivoli Event Client port may be set
via the server option TECPort.
Note that the client port number shows
up on msg ANR0406I when the session
starts, like the "4330" in:
(Tcp/Ip 100.200.300.400(4330)).
See also: DEFine SERver; Firewall
support; HTTPport; TCPCLIENTPort;
TCPPort; WEBPorts
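An illustrative, firewall-oriented
sketch (port values are examples only,
not recommendations):

```
* dsmserv.opt (server):
TCPPort       1500    * client sessions
TCPADMINPort  1502    * admin, server-to-server, etc.

* dsm.sys (client server stanza):
TCPPort        1500
TCPCLIENTPort  1501
SCHEDMODe      POlling
```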
PostgreSQL database, back up to TSM See Redpaper: "Backing Up Linux
Databases with the TSM API"
POSTNschedulecmd Like POSTSchedulecmd, but don't wait.
See: POSTSchedulecmd
POSTSchedulecmd Client System Options file (dsm.sys)
option to specify a command to be run
after running a schedule, and wait for
it to complete. (Cannot be used on the
command line.) If you don't want to
wait for the post-schedule command to
complete, code POSTNschedulecmd instead.
In Unix, the command is run as a child
process of the dsmc parent.
Placement: code within server stanza.
Code the command string within either
single or double quotes: you can then
code either double or single quotes
inside as needed.
Avoid coding this option with a blank or
null value, as it may cause the
scheduled command to fail.
Caution: This option is perhaps best
used with SCHEDMODe POlling, where
triggering is under the control of the
client. Using SCHEDMODe PRompted can be
problematic as DEFine CLIENTAction tasks
can hit the client at random, and have
nothing to do with work that you set the
option up to do.
Example need: To restart a database
server after backing up the database.
Verify via 'dsmc query options' in ADSM
or 'dsmc show options' in TSM; look
for "PostSchedCmd".
See also: PRESchedulecmd
Pre-fetch See: NOBUFPREFETCH
Pre-labeled tapes, a good idea? One can order tapes pre-labeled
(standard tape labels written on the
media, and a barcode which presumably
matches); but is that a viable thing to
do? There have been reports of customers
satisfied with the performance of the
pre-labeled tapes they received from a
vendor - and some who have had bad
experiences. (The label should be ANSI
standard, ASCII.) The reality is that
you simply do not know for certain that
the supposedly pre-labeled tapes have
been pre-labeled or that it was done
compatibly. It costs little to have TSM
label tapes for you, and you will be
assured of proper results by having it
do so. Remember that you as the TSM
technician are ultimately responsible
for results - not the vendor, or the
site personnel who ordered the tapes.
Error msgs: ANR8353E ANR8355E ANR8472I
ANR8780E ANR8780E ANR8783E
Precedence, Include-exclude order See: Include-exclude order of precedence
Precedence of operations See Admin Guide topic "Preemption of
Client or Server Operations".
Preemption (pre-emption) TSM gives priority to more important
processes, as when a Restore requires as
input a tape that is currently being
read by a BAckup STGpool: the storage
pool backup process is terminated to
relinquish the volume to the Restore.
APAR IX72372 added v3 Admin Guide topic
"Preemption of Client or Server
Operations", which lists operations and
priority order.
Control: NOPREEMPT option in server
options file (dsmserv.opt).
Note that you can define a PRIority
value on an administrative schedule -
which defaults to a middle priority
value of 5.
Note that preemption may seem not to
work, in that TSM is pursuing completion
of a unit of work before interrupting
that process, such as reclamation of a
tape with a single, very large backup
file on it. It has also been observed
that a high-priority operation (e.g.,
data restore) will only pre-empt a
process / session with the same
devclass.
When a client backup schedule is
interrupted by preemption, it will
usually be able to resume where it left
off, as seen in its backup log
containing message "ANS1809E Session is
lost; initializing session reopen
procedure."
Msgs: ANR0487W; ANR0492I; ANR1440I
Ref: Admin Guide, "Preemption of Client
or Server Operations"
See also: NOPREEMPT
Preferences, GUI In the GUI, Preferences may be choices
corresponding to client options. You
thus can refer to a combination of the
GUI Help function and the client
manual for information.
Prefixes See: Client component identifiers
Premigrated file A file that has been copied to ADSM
storage, but has not been replaced with
a stub file on the local file system. An
identical copy of the file resides both
on the local file system and in ADSM
storage. When free space is needed, HSM
verifies that the file has not been
modified and replaces the copy on the
local file system with a stub file. HSM
premigrates files after automatic
migration is complete if there are
additional files eligible for migration,
and the premigration percentage is set
to allow remigration. Contrast with
migrated file and resident file.
Premigrated files database A database that contains information
about each file that has been
premigrated to ADSM storage. The
database is stored in a hidden directory
named .SpaceMan in each file system to
which space management has been
added. HSM updates the premigrated files
database whenever it premigrates and
recalls files and during reconciliation.
If the database becomes corrupted, it
can be recreated by doing the following:
- cd .SpaceMan
- bkurfile premigrdb.dir premigrdb.pag
- Run '/usr/lpp/adsm/bin/fixfsm' (a ksh
script). See: fixfsm
- Run 'dsmreconcile'
Premigration The process of copying files that are
eligible for migration to ADSM storage,
but leaving the original file intact on
the local file system.
Premigration candidates 'dsmmigquery FileSystemName'
Premigration Database Is the premigrdb.dir and premigrdb.pag
file set located in the .SpaceMan
directory. The 'dsmls' command reports
from this when it lists premigrated (p)
files.
premigration percentage A space management setting that controls
whether the next eligible candidates in
a file system are premigrated following
threshold or demand migration. The
default for remigration percentage is
the difference between the percentage
specified for the high threshold and the
percentage specified for the low
threshold for a file system.
premigrdb See "Premigrated files database"
PRENschedulecmd Client System Options file (dsm.sys)
option to specify a command to be run
before running a schedule. (Cannot be
used on the command line.) In Unix and
Macintosh, TSM will not wait for the
command to complete before proceeding.
Contrast with PRESchedulecmd.
Placement: code within server stanza.
Prepending to SQL output Use the "||" SQL specification, as in
the example:
SELECT 'MOVe Data ' || VOLUME_NAME -
FROM VOLUMES WHERE PCT_UTILIZED <50
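Run via dsmadmc, with hypothetical
volume names, such a SELECT might
produce output along the lines of:

```
SELECT 'MOVe Data ' || VOLUME_NAME -
 FROM VOLUMES WHERE PCT_UTILIZED < 50

MOVe Data A00012
MOVe Data A00047
```

The generated lines can then be pasted
back into an administrative session.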
PRESchedulecmd Client System Options file (dsm.sys)
option to specify a command to be run
before running a schedule. (Cannot be
used on the command line.) For Unix,
Macintosh, DOS, Windows, and OS/2, ADSM
waits for the command to complete
before continuing with processing. If
you don't want ADSM to wait, in Unix and
Macintosh you can code PRENschedulecmd
instead.
Code the command string within either
single or double quotes: you can then
code either double or single quotes
inside as needed.
In Unix, the command is run as a child
process of the dsmc parent.
Placement: code within server stanza.
Avoid coding this option with a blank or
null value, as it may cause the
scheduled command to fail.
Example need: to shut down a database
server before backing up the database.
Verify via 'dsmc query options' in ADSM
or 'dsmc show options' in TSM; look
for "PreSchedCmd".
Caution: This option is perhaps best
used with SCHEDMODe POlling, where
triggering is under the control of the
client. Using SCHEDMODe PRompted can be
problematic as DEFine CLIENTAction tasks
can hit the client at random, and have
nothing to do with work that you set the
option up to do.
Evidence of pre-schedule execution
will show up in the SCHEDLOGname-d
file under "Executing Operating System
command or script".
Note that messages or textual reporting
produced by the invoked command do not
show up in the scheduler log, but will
show up in the redirected output of the
scheduler invocation. That is, if you
invoke the scheduler to redirect output
to a file (as in Unix example 'dsmc
schedule >> logfile 2>&1'), the output
will show up there.
If the PRESchedulecmd returns a non-zero
return code, the scheduled event will
not run - because it has every reason to
believe that steps preparatory to the
scheduled action have not succeeded.
Use this approach to perform some
perfunctory operation before the
schedule runs. To instead conditionally
perform some action, schedule a script
to run, which will internally invoke
'dsmc i' or similar client command if
all is well.
You cannot validly code this option more
than once in the file: if you do, no
error will result, but only the last
occurrence of the option will be used.
See also: POSTSchedulecmd
PRENschedulecmd
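A hedged dsm.sys sketch (server name
and script paths hypothetical), showing
the outer/inner quoting convention:

```
SErvername  myserver
   PRESchedulecmd   "/usr/local/bin/dbctl.sh 'stop'"
   POSTSchedulecmd  "/usr/local/bin/dbctl.sh 'start'"
```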
Preserve TSM storage pool data Sometimes a sudden regulatory or
emergency requirement arises wherein at
least Active (if not also Inactive) data
from a file system directory, as stored
in TSM storage pools, needs to be
preserved for an indefinite amount of
time. The TSM product provides no
facility for selectively preserving
data, except with Backupsets - which
capture only Active file versions and
which are outside the TSM server regimen
and hard to track.
A partial, but perhaps satisfactory,
measure might be to temporarily create a
"preservation" file system (name
appropriately), perform a -latest
restoral of the subject directory into
it, and perform a TSM Archive on that,
whereafter the file system can be
disposed of. This will at least capture
all of the object names which are and
have been in that directory, in their
most recent versions. The end result is
associated with a filespace of an
indicative name, and is readily queried
from the client system, fully
participating in the TSM regimen.
See also: TSM for Data Retention
-PRESERvepath Client option, as used with Restore and
Retrieve, to specify how much of the
source path to reproduce as part of the
target directory path when you get files
back, but to a new location.
Parameters:
subtree Creates the lowest level
source directory as a
subdirectory of the target
directory. This is the
default.
complete Restores the entire path,
starting from the root, into
the specified directory. The
entire path includes all the
directories *except* for the
filespace name.
nobase Restores the contents of the
source directory without the
lowest level, or base,
directory into the specified
destination directory.
none Restore all selected source
files to the target directory.
No part of the source path at
or above the source directory
is reproduced at the target.
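As a hypothetical illustration,
restoring file.txt from source
directory /fs/a/b (filespace /fs) into
target directory /tmp/out:

```
-PRESERvepath=subtree   -> /tmp/out/b/file.txt
-PRESERvepath=complete  -> /tmp/out/a/b/file.txt
-PRESERvepath=none      -> /tmp/out/file.txt
```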
Previous Command Recall See: Editor
Primary Storage Pool vs. Copy Storage Customers will sometimes compare their
Pool total storage size Primary Storage Pool contents against
the corresponding Copy Storage Pool
contents (after recent BAckup STGpool)
and, despite the number of files
matching, the total size as reported in
the Physical Space Occupied value
differs between the two storage pools.
This causes concern. But realize that
Physical Space includes empty space
within aggregate files, from which files
may have been deleted or expired.
See also: Aggregates; Physical
occupancy; Query OCCupancy
Prioritization See: NOPREEMPT; Preemption
Priority of TSM server processes See: Preemption
Priority Score See: Migration Priority
Private, make tape a private volume Via TSM command:
'UPDate LIBVolume LibName VolName
STATus=PRIvate'
Private Status value reported in
'Query LIBVolume'.
A tape just checked in as Private will
have a null Last Use because there was
no last use. (Make sure you label new
volumes, to prevent new Checkins from
getting a status of Private rather than
the desired Scratch.)
A tape will be forced to Private status
when there is an I/O failure on a
Scratch volume, as *SM sets it to
Private to keep from thrashing on the
scratch mount. Look in the Activity Log
for the message "ANR8778W Scratch vol
... changed to Private Status to prevent
re-access". (However, that message may
not be present, as in TSM 5.2: msgs
ANR8944E, ANR8359E report the tape
problem, and it is set Private
implicitly.)
If in 'Query LIBVol':
- Last Use is blank: expect that the
volume was last used for a DUMPDB
operation if the volume is a long-term
resident of the library.
- Last Use is "Data": Could be a
Backup Set.
Private, make tape Via ADSM command:
'UPDate LIBVolume LibName VolName
STATus=PRIvate'
PRIVATE category code 'Query LIBRary' reveals the decimal
category code number.
See also: Volume categories
Private subnets See: 10.0.0.0 - 10.255.255.255;
172.16.0.0 - 172.31.255.255;
192.168.0.0 - 192.168.255.255
PRIVATECATegory Operand of 'DEFine LIBRary' server
command, to specify the decimal category
number for private volumes in the
repository, which are to be mounted by
name (volser). Default value: 300.
/proc (Solaris) Like /tmp, is a pseudo file system, this
one providing access to the state of
each active process in the system. The
process info. monitored in the /proc
file system changes as the process moves
through its life cycle.
Due to its nature, this file system is
not worth backing up or restoring.
Process, cancel 'CANcel PRocess NN'
Process numbering Begins at 1 with each *SM server
restart.
Process start time Not revealed in Query PRocess: you have
to do 'SELECT * FROM PROCESSES' and look
at START_TIME.
PROCESSES TSM SQL table. Fields:
PROCESS_NUM Integer process number.
PROCESS Process name, like
"Backup Storage Pool".
START_TIME Like "2002-12-19
07:19:15.000000"
FILES_PROCESSED Integer. Value may also
appear in STATUS.
BYTES_PROCESSED Integer. Value may also
appear in STATUS.
STATUS Free-form text
describing the status
of the process.
Processes, maximum There seems to be no way to define how
many processes may be active at one time
within the server - which is too bad in
that such would be handy in causing
serialization for commands which result
in processes, like 'BAckup STGpool'.
See MAXPRocess value on commands like
'BAckup STGpool', 'RESTORE Volume', etc.
Processes, server (dsmserv's) When the ADSM server starts, in AIXv3 it
will start a lot of processes, and in
AIXv4 it will start one process with
numerous threads. In either case these
are ADSM threads. There will be one
thread for each volume in your ADSM
system (database, recovery log, storage
pool) and so you are better off with
multiple, smaller volumes than one large
one, as parallelization will improve.
There are other threads for each of the
comm methods for accepting new
conversations, migration and reclamation
watchdog threads that will start these
processes when needed, a deadlock
detector, the server console, expiration
watchdog to start expiration at the
appropriate interval, the schedule
manager, etc. These threads do not stop
and restart. New threads (processes)
are created and terminated as needed for
client sessions, tape mounts and
dismounts, server processes, etc.
Do 'SHow THReads' to see 'em.
To see the threads in AIXv4, use the -m
option of the 'ps' command, as in
'ps -eflm'.
See also: dsmserv; Storage pool volumes
and performance
Processor usage See: Multiprocessor usage
Processors, number of and TSM TSM does not store in its database the
number of processors with which the
client system is equipped.
See: Intel hyperthreading & licensing
See also: Multiprocessor
PROCESSORutilization N Novell-only (Netware-only) option to
control the percentage of CPU time
allotted to ADSM, in 100ths of seconds.
Said to be the single biggest impact
parameter in the Novell dsm.opt file.
Producer Session In Backup, the session that is
responsible for querying and reporting
results to the server. (To use an FTP
analogy, this is the "control channel".)
In the accounting records, there is
no explicit marking that allows
distinguishing a Producer session from a
Consumer session: one can only infer the
Producer session by its fields 16 and 17
being zero.
Contrast with: Consumer session
See also: RESOURceutilization
Programmable Workstation Communication A product that provides transparent
Services (PWSCS) high performance communications between
programs running on workstations or on
host systems.
.PST filename suffix and access A filename with that suffix is a
Microsoft Outlook or Exchange personal
folder. Such personal files do not
support shared access. When Outlook
opens a PST, it locks it for exclusive
access ("Open Exclusive"). No other user
can touch that file until it is
physically closed by the person who
opened it. This is due, in large part,
to the database format Outlook uses:
contacts, calendar entries, messages,
journal entries, etc. are stored in one
big flat-file. If you attempt to share
such files, the first person to open the
file gains exclusive access to it,
meaning that the owner of the file may
be locked out of using her/his own file.
Outlook does release the locks
periodically (by default, after 15
minutes of inactivity), meaning that you
can have Outlook open, and your .pst
files won't stay always locked if they
are not actively in use. (See MS KB
article 222328, "OL2000: (CW) How to
Change File LockTimeout Value for PST
Inactivity".)
The Outlook client can be configured to
release the PST file after some period
of inactivity so that another
application can open and read it, even
though the Outlook client is running:
The "MSPST.INI" file controls this...
DisconnectDelay=60 // Seconds till
disconnect. Default is 15 min.
DisconnectDisable=2 // 0 = disallow
disconnect to occur, 2 = allow
disconnect. Default is 2.
Multiple .pst files can exist on the
PC and not be opened by Outlook.
Also, if there is more than one Outlook
mail profile on the PC and they both
have .pst files, then one of the .pst
files will be available for backup.
Related: .edb
PTFs applied to ADSM on AIX system 'lslpp -l adsm\*'
Purge Volume category 3494 Library Manager category code FFFB
to delete a Library Manager database
entry, as when a tape ends up with a
"Manually Ejected" FFFA category code
because it was unusable, such that this
useless 3494 database entry remains.
See also: Volume, delete from Library
Manager database
PWSCS Programmable Workstation Communication
Services.
QFS Solaris: A high-performance file system
that enables file sharing in a SAN. It
eliminates performance bottlenecks
resulting from applications using very
large file sizes.
QIC Quarter Inch Cartridge tape technology,
using a twin-spool, flat cartridge,
usually with an aluminum base plate and
plastic enclosure, housing tape a
quarter of an inch wide.
See also: 7207
Query, restrict access See QUERYAUTH
Query ACtlog TSM server command to report info from
the Activity log.
Syntax:
'Query ACtlog [BEGINDate=___]
[BEGINTime=___] [ENDDate=___]
[ENDTime=___] [MSGno=___]
[Search=SearchString]
[ORiginator=ALL|SErver|CLient]
[NODEname=node_name]
[OWNERname=owner_name]
[SCHedname=schedule_name]
[Domainname=domain_name]
[SESsnum=session_number]'
Defaults to reporting the latest hour's
activity. MSGno and Search can be used
together for more effective results.
Note: In AIX TSM, the date format is
MM/DD/YY, regardless of the server
Dateformat setting. This is reportedly a
function of the international NLS locale
setting in AIX: there is no ready way
for it to be any other format.
Note: This command cannot be scheduled.
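For example, combining options (values
illustrative):

```
Query ACtlog BEGINDate=-2 MSGno=8778 Search=A00012
```

would report only message-8778 entries
of the last two days mentioning that
volume.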
Query ADmin ADSM server command to display info
about administrators. Syntax:
'Query ADmin [Adm_Name|*]
[CLasses=SYstem|Policy|STorage|
Operator|Analyst]
[Format=Detailed]'
Also: GRant AUTHority, revoke admin.
Query ARchive See: dsmc Query ARchive
Query ASSOCiation *SM server command to display the client
nodes associated with one or more client
schedules, as for Backup and Archive
operations. Syntax:
'Query ASSOCiation
[[DomainName] [ScheduleName]]'
Query ASSOCiation will report schedules
having no associations, because that is
good to know.
See also: ASSOCIATIONS
Query AUDITOccupancy *SM server command to display info about
the client node data storage
utilization. The numbers reported will
by default include both primary and copy
storage pool contents, or but may be
selected separately. Syntax:
'Query AUDITOccupancy NodeName(s)
[DOmain=DomainName(s)]
[POoltype=ANY|PRimary|COpy'
By default, the command shows you the
occupancy of all nodes in all domains
for all storage pools; but the resulting
report itself doesn't provide any
indications of what is being included.
Report details:
The report's fixed granularity of 1 MB
can easily lead to misunderstanding: a
0 MB value may not mean that nothing is
there, but rather that the amount is
less than a megabyte and so rounds to
zero.
You will find that the number for Backup
Storage Used, for example, is equal to
the sum of the Physical Space Occupied
values from Query OCCupancy for all the
backup data storage pools for that node.
Note: It is best to run 'AUDit LICenses'
before doing 'Query AUDITOccupancy' to
assure that the reported information
will be current.
Alternately, you may perform
'SELECT * FROM AUDITOCC'.
Also try the unsupported command
'SHow VOLUMEUSAGE NodeName'
See also: AUDITOCC; Query OCCupancy
Query Backup See: dsmc Query Backup
Query BACKUPSET TSM server command to display
information about one or more Backup
Sets: Node Name, Backup Set Name,
Date/Time, Retention Period, Device Class
Name, Description (but not the volumes
constituting the set). Syntax:
'Query BACKUPSET [*|NodeName[,NodeName]]
[*|BackupsetName[,BackupsetName]]
[BEGINDate=____] [BEGINTime=____]
[ENDDate=____] [ENDTime=____]
[WHERERETention=Ndays|NOLimit]
[WHEREDESCription=____]
[WHEREDEVice=DevclassName]'
See also: dsmc Query BACKUPSET
Query BACKUPSETContents TSM server command to display
information about the contents of a
Backup Set: its files and directories.
Syntax:
'Query BACKUPSETContents NodeName
BackupSetName'
Note that there is no provided means for
the client CLI or GUI to obtain such
information.
Considerations: Processing this command
can consume considerable time, network
resources, and mount points. (The
command has to look inside the Backup
Set to report its contents, meaning that
it has to mount the media and plow
through the data.)
See also: Backup Set; dsmc Query
BACKUPSET; GENerate BACKUPSET
Query CLOptset TSM server command to query a client
option set defined on the server for all
clients. Syntax:
'Query CLOptset Option_Set_Name
DESCription=Description'
Query CONtent TSM server command to display info about
one or more files currently residing in
a storage pool volume.
Syntax:
'Query CONtent VolName [COUnt=N|-N]
[NODE=NodeName] [FIlespace=____]
[Type=ANY|Backup|Archive|
SPacemanaged]
[DAmaged=ANY|Yes|No]
[COPied=ANY|Yes|No]
[Format=Detailed]'
A positive COUnt value shows the first N
files on the volume, listed in forward
order; a negative COUnt value shows the
last N files on the volume, latest
first. The reported Segment Number
reveals whether the file spans volumes
(where "1/1" says it's wholly contained
on the volume).
COPied is for reporting on whether files
have been backed up to a copy storage
pool.
Displays: Node Name (in upper case),
Type (Arch, Bkup, SpMg), Filespace
(e.g., "/usr"), Client's Name for File
(e.g., "/include/ limits.h" - see
HL_NAME and LL_NAME).
Use "F=D" to additionally display
Stored Size, Segment Number, and
Cached Copy. (Does not reveal owner or
when the object was written to tape.)
Note that an F=D display tends to make
the file name column wider and thus more
convenient for copy-paste operations.
Performance: The more files on the
volume, the longer the query takes, if
you impose no count limit: with a modest
limit, there is no significant server
overhead, as made apparent by the nearly
instantaneous results.
If you have collocation enabled and
each node's files fit on one tape, you
can do 'Q CON VolName count=1' to
determine what node's files are on
each tape, as for generating pulllists
for export node processes, etc.
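The pull-list technique above can be sketched in shell: turn a list of collocated volumes into a dsmadmc macro of one-file queries. (The volume names and file names here are invented examples.)

```shell
# Build a list of volume names, one per line (example values).
cat > vols.txt <<'EOF'
000101
000102
EOF
# Emit one 'Query CONtent ... COUnt=1' per volume into a macro file.
while read -r vol; do
    printf 'Query CONtent %s COUnt=1\n' "$vol"
done < vols.txt > pulllist.mac
cat pulllist.mac
```

The resulting pulllist.mac could then be fed to the admin client via its macro facility; the Node Name column of each COUnt=1 result identifies whose data each tape holds.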
Note that Query CONtent will not report
the contents of a volume which *SM has
just started writing if it is spanning a
transaction from a volume it has just
filled, as during Copy Stgpool.
See also: Damaged; Span volumes, files
that, find; Stored Size
Query COpygroup *SM server command to display info
about one or more Copy Groups, where
retention periods are defined.
'Query COpygroup [DomainName] [SetName]
[ClassName] [Type=Archive]
[Format=Detailed]'
Query DB Server command to display allocation and
statistical information about the server
database: Available Space, Assigned
Capacity, Maximum Extension, Maximum
Reduction, Page Size, Total Usable
Pages, Used Pages, Pct Util, Max. Pct
Util, Physical Volumes (count), Buffer
Pool Pages, Total Buffer Requests, Cache
Hit Pct, Backup In Progress?, Type of
Backup In Progress, Incrementals Since
Last Full, Changed Since Last Backup
(MB), Percentage Changed, Last Complete
Backup Date/Time.
Syntax: Query DB [Format=Detailed]
Query DBBackuptrigger Server command to display the current
settings for the database backup
trigger, used in Rollforward mode.
Syntax: Query DBBackuptrigger
[Format=Detailed]
See: Recovery Log; Set LOGMode
Query DEVclass ADSM server command to display info
about one or more device classes.
Syntax:
'Query DEVclass [DevClassName]
[Format=Standard|Detailed]'
See also: SHow DEVCLass
Query DRive TSM server command to display
information about a drive located in a
server-attached library: the state of a
drive, whether it is online, offline,
unavailable or being polled by the
server. Syntax:
'Query DRive [* [LibName [DriveName]]]
[Format=Standard|Detailed]'
Notes: This command reports only
whether the drive is said to be online
to TSM. An online drive is not
necessarily usable or operational.
Do 'SHow LIBrary' to get more detailed
information, supplemented by operating
system drive inquiries, including use of
the 'mtlib' command with 3494s.
An "Unavailable Since" condition usually
indicates a hardware problem, as per
msg ANR8848W.
Query DRMedia Server command to display information
about database backup and copy storage
pool volumes, or create a file of
executable commands to process the
subject volumes. (You do not need DRM
to use this handy command.) Syntax:
'Query DRMedia [*|VolName]
[WHERESTate=All|MOuntable|
NOTMOuntable|COUrier|VAult|
VAULTRetrieve|COURIERRetrieve|
REmote]
[BEGINDate=date]
[ENDDate=date]
[BEGINTime=time]
[ENDTime=time]
[COPYstgpool=pool_name]
[Source=DBBackup|DBSnapshot|
DBNone]
[Format=Standard|Detailed|Cmd]
[WHERELOCation=location]
[CMd="command..."]
[CMDFilename=file_name]
[APPend=No|Yes]'
By default, this will display all copy
storage pool volumes and database backup
volumes. You can cause it to show only
db backup volumes by invoking with
COPYstgpool having a non-existent copy
storage pool name, as in
"COPYstgpool=NONE".
Note: The Source operand was
"DBBackup=Yes|No" in ADSMv3.
CMd specifies a command to be generated
for each volume found by the Query. The
command can be up to 255 characters
long, and may be coded as multiple lines
via the handy &NL substitution variable.
Other substitution variables:
&VOL The volume name.
&VOLDSN The file name that the server
writes into media labels.
&LOC The volume's Location.
Note that, whereas redirection under an
administrative client session is
relative to the system where the admin
client is running, the CMDFilename spec
is relative to the TSM server system.
This command is particularly valuable in
compensating for the inability to use
redirection in server scripts, as when
you would like to perform a Select to
obtain the volname of the latest db
backup, for massaging into a CHECKOut
LIBVolume command, to eject that volume
for offsite storage.
See also: DRMEDIA; MOVe DRMedia;
Query MEDia; Set DRMCMDFilename;
Set DRMCOPYstgpool
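As a sketch of the approach described above: generate CHECKOut commands for recent database backup volumes. (The library name, command file path, and operand choices are example values; line continuation uses the macro "-" convention.)

```
Query DRMedia * Source=DBBackup COPYstgpool=NONE Format=Cmd -
  CMd="CHECKOut LIBVolume 3494LIB &VOL REMove=Yes CHECKLabel=No" -
  CMDFilename=/tmp/eject.mac
```

The generated /tmp/eject.mac could then be run as a macro to eject those volumes for offsite storage; COPYstgpool=NONE relies on the trick above of naming a nonexistent copy storage pool so that only db backup volumes are listed.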
Query DRMSTatus TSM server command to query parameters
defined to the TSM Disaster Recovery
Manager. Reports: recovery plan prefix,
plan instructions prefix, replacement
volume postfix, primary storage pools,
copy storage pools, courier name, vault
site name, DB backup series expiration
days, recovery plan file expiration
days, check label yes/no, process FILE
device type yes/no, command file name.
See also: Set DRMDBBackupexpiredays
Query EVent (for admin schedules) TSM server command to display scheduled
and completed events. Syntax:
'Query EVent SchedName
Type=Administrative
[BEGINDate=NNN] [BEGINTime=Time]
[ENDDate=Date] [ENDTime=Time]
[EXceptionsonly=No|Yes]
[Format=Standard|Detailed]'
Query EVent (for client schedules) TSM server command to display scheduled
and completed events. Syntax:
'Query EVent DomainName SchedName
[Nodes=NodeName(s)]
[BEGINDate=NNN] [BEGINTime=Time]
[ENDDate=Date] [ENDTime=Time]
[EXceptionsonly=No|Yes]
[Format=Standard|Detailed]'
Remember that events log entries are
retained only as long as specified via
'Set EVentretention' (q.v.).
In the report...
Status Is the status of the event at
the time that Query EVent was issued:
In Progress Customers report seeing
this in a failure of the client (such
as the scheduler service/daemon
freezing or dying).
Query EVent notes A status of "Uncertain" usually means
that the schedule event record has been
deleted by automatic pruning functions:
it is no longer in the database, per
"Set EVentretention".
It may be that you asked for
information that is too old.
You could change the amount of time that
schedule event records are retained
using the Set EVentretention command to
keep these records around longer so that
you can query their status.
Query Event shows only the latest status
for each event. If a scheduled
operation is executed successfully,
the status will indicate that the
event was successful, although previous
attempts at this event may have been
unsuccessful.
A status of "(?)" may only prevail in
TSM 4.x: it reflects being unable to get
the schedule state from the client prior
to the error in communications. Check
the TSM client(s) in question for
completion of the scheduled event
(through the client dsmsched.log and
dsmerror.log). If the scheduled backup
failed, rerun the scheduled event or
perform a manual incremental backup to
ensure the backup of the data.
See "UPDate SCHedule, client" for the
reason that prior event records may
disappear.
Query FIlespace *SM server command to display
information about file spaces. Syntax:
'Query FIlespace [NodeName]
[FilespaceName]
[Format=Detailed]'
The reported Filespace Name will be as
"..." if it is unicode and your server
cannot interpret that (code page). In
such case, you can perform the command
with Format=Detailed and transliterate
the Hexadecimal Filespace Name.
The Capacity and Pct Util values
reported reflect the Unix file system
size and utilization when TSM last
looked, as you would see in a Unix 'df'
command, for example. The values will be
zero for AUTOFS filespaces and API
client work, such as Oracle TDP backups.
Query FIlespace does not reveal how much
data has been stored by a node.
(Use 'Query OCCupancy' to see space
consumed in *SM server storage.)
The "Last Backup Date" reflects only
Incremental Backup executions...
If "Last Backup" is:
empty It indicates that there is
nothing to report, as in the
filespaces having been created in the
server by virtue of Archive activity.
Or look to see if the filespace type
indicates that it was created by an
API, which is inherently separate from
regular backups. Or it may have been
created by a Selective backup.
stagnant It would seem that the client
has not been doing unqualified
Incremental backups - that is, backing
up whole file systems without
modifying options. The value will be
stagnant if the client is doing
only Selective Backups, or if the
client is doing qualified Incremental
backups (where 'dsmc i /fsname/*' is
an erroneous form, which should
instead be 'dsmc i /fsname').
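To make the distinction concrete, as command forms only (not meant to be run here):

```
dsmc i /fsname       full, unqualified incremental:
                     updates "Last Backup Date"
dsmc i '/fsname/*'   qualified (partial) incremental:
                     does not update "Last Backup Date"
```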
See also: "..."; dsmc Query Filespace;
FILESPACES
Query INCLEXCL See: dsmc Query INCLEXCL
Query LIBRary *SM server command to display info about
libraries you created via 'DEFine
LIBRary', including Category Codes for
Scratch and Private type volumes.
Syntax:
'Query LIBRary [LibName]
[Format=Standard|Detailed]'
In the output, don't forget that with
3494 libraries and 3590 tapes, the
defined Scratch category code is for
3490 type tapes, and that value + 1 is
for your 3590 tapes.
See also: DEFine LIBRary
Query LIBVolume TSM server command to display info
about one or more volumes that have
been previously checked into an
automated tape library and are
physically still in it, whether they are
currently scratch volumes or volumes
assigned to a storage pool. Syntax:
'Query LIBVolume [LibName] [VolName]'
Note that this command is not relevant
for LIBtype=MANUAL.
For each library, reports volume names,
volume status (Private/Scratch), and
Last Use (Data/DbBackup/Export/...).
There is no date/time information: that
is in the Volhistory table.
Note that the volume status implies the
category code, as can be numerically
determined via
'Query LIBRary [LibName]'.
If Status shows as "Private" and
Last Use is blank, it may be that the
volume was last used for a DUMPDB
operation or, more commonly, the volume
is empty and Defined to a storage pool.
Note that volumes checked out of the
library (especially Offsite tapes) will
not show up in 'Query LIBVolume': do
'Query Volume' instead.
Query LICense TSM server command to display license
audit, license terms, and compliance
information. Reports:
- Date and time of last AUDit LICenses
- Number of registered client nodes
- Number of client node licenses
- For each component, two lines
reporting whether it is in use, and
whether it is licensed.
Note that it is possible for the number
of licenses in use to be greater than
zero while the number licensed is zero:
this is an artifact of someone trying to
use such a license (and obviously
failing). TSM is simply recording the
attempt. In such a case, the number in
use value should automatically return to
zero some 30 days after the
attempt to use it: if it doesn't clear,
run 'AUDit LICenses'.
See also: AUDit LICenses; AUDITSTorage;
LICENSE_DETAILS; Licenses and dormant
clients
Query LOG Server command to display allocation
information and statistics about the
Recovery Log: Available Space, Assigned
Capacity, Maximum Extension, Maximum
Reduction, Page Size, Total Usable
Pages, Used Pages, Pct Util, Max. Pct
Util, Physical Volumes (count), Log Pool
Pages, Log Pool Pct Util, Log Pool Pct
Wait, Cumulative Consumption,
Consumption Reset Date/Time. Syntax:
'Query LOG [Format=Detailed]'
The Log Pool Pct Wait value should
always be zero for a healthy situation.
See also: RESet LOGConsumption;
RESET LOGMaxutilization
Query MEDia ADSMv3 server command to display
information about the sequential access
primary and copy storage pool library
volumes moved by the MOVe MEDia command.
(Actually, it will report on all library
volumes, but via operands can be
restricted to volumes with specific Move
Media attributes. The global
capabilities of this command can be used
as an alternative to Query Volume, as in
reporting all storage pool volumes
which are empty. But there is a basic
requirement that the storage pool(s)
involved be managed by an automated
library.)
Syntax:
'Query MEDia [*|VolName]
STGpool=PoolName|* [Days=Ndays]
[WHERESTATUs=FULl|FILling|EMPty]
[WHEREACCess=READWrite|READOnly]
[WHERESTate=All|MOUNTABLEInlib|
MOUNTABLENotinlib]
[WHEREOVFLOcation=location]
[CMd="command"]
[CMDFilename=FileName]
[APPend=No|Yes]
[Format=Standard|Detailed|Cmd]'
Days is the number of elapsed days
since the most recent of the read or
write date for the volume.
A checked-in volume will be reported as
"Mountable in library".
A checked-out volume will be reported as
"Mountable not in library".
See also: MOVe MEDia; Overflow Storage
Pool; OVFLOcation; Query DRMedia
Query MGmtclass                     ADSM server command to get info about
one or more Management Classes. Syntax:
'Query MGmtclass [[[DomainName]
[SetName] [ClassName]]] [F=D]'
See also: Management classes, query
Query MOunt TSM server command to get info on
mounted volumes (tapes). Syntax:
'Query MOunt [Vol_Ser]'
Report will be in mount request order,
not drive or volume order.
Report messages:
ANR8329I IDLE: The tape is currently not
read or written.
ANR8330I IN USE: The tape is being read
or written.
ANR8331I DISMOUNTING: Just what it says.
Notes: Does not reflect drives in use by
LABEl LIBVolume. Does not return
information on tapes mounted by other
means on drives "owned" by TSM (as via
the 'mtlib' command, manual mounts,
etc.).
SQL equiv: There is no Mount(s) table;
but doing a Select from the Drives table
yields comparable info, though not RW or
RO status.
See also: DISMount Volume
Query Node Note that the Platform value is set the
first time the client uses TSM, and
that value persists though the actual
platform type may change. There is no
command to change this value. In any
case, it is just a nicety: the actual
platform type is dynamically recognized,
as can be seen via 'Query SESsion'.
See also: Platform
Query OCCupancy Find the number of file system objects
and the amount of space they take in
storage pools (utilization). Occupancy
reflects all versions of stored files,
Active and Inactive. The Space values
reported reflect the amount of data
which the server knows about, which
means the number of MB received from the
client *after* client compression, and
the number of MB written to a storage
device (tape drive) *before* it may have
performed its own compression.
Syntax:
'Query OCCupancy [NodeName]
[FileSpaceName]
[STGpool=PoolName]
[Type=ANY|Backup|Archive|
SPacemanaged]'
By default, reports all storage pools:
primary and copy storage pools.
Note that this command displays info
about files stored in storage-pools, and
thus does not reflect objects which
require no storage pool space, such as
zero-length files and directories from
Unix clients: they are just attributes,
which can be stored solely in the TSM
database. Query OCCupancy does not
report cached files or space occupied by
these files. Only migratable files are
included.
Report details:
"Physical Space Occupied" and "Logical
Space Occupied" refer to the ADSMv3
Small File Aggregation feature: the
physical file can be an aggregate file
(composed of logical files), with empty
space resulting from expiration of
logical files.
"Logical Space Occupied" is the amount
of space occupied by logical files in
the file space, which amounts to the
Physical Space value minus the "holes"
created by expired files within
Aggregates.
"Number of Files" is the number of
logical files stored in the stgpool.
This number DOES NOT necessarily equate
to the number of file system objects
stored for this filespace, in that
empty files, directories, symbolic
links, and the like may not participate
in the storage pool (see points raised
above).
Avoid doing a Query OCCupancy while an
intense database operation, such as an
Import, is running: that may cause an
ANR9999D condition.
See also: OCCUPANCY; Symbolic links
Query Option (dsmc client command) Undocumented client command to reveal
all options in effect for this client.
Note that output is more comprehensive
than what is returned from the dsm GUI's
Display Options selection. For example,
this command will report INCLExcl status
whereas the GUI won't.
TSM: show options
Query OPTion TSM server command to reveal all options
in effect for this server, as coded in
the server options file. Syntax:
'Query OPTion [* | Option_Name]'
where you can specify one option name or
a wildcard specification.
Note that this command will not show
values currently in effect by virtue of
self-tuning (per SELFTUNE* options).
See also: Query STatus
Query PRocess *SM server command to see what
processes have been started to
internally process long-running
commands. Note that the Process Number
reported is ADSM's relative process
number, and is not the same as the AIX
process number of the dsmserv process
doing the work.
Syntax: 'Query PRocess [ProcessNum]'
Note: Odd formatting after an upgrade
might be due to not installing all the
message repositories.
Where a CHECKOUT LIBVOLUME process
lingers for a long time in a 3494, it
has been seen to be due to a vision
failure. (An 'mtlib' command to show
status will reveal the problem.)
See also: CANcel PRocess; Expiration
process
Query REQuest ADSM server command to display info
about pending mount requests. Syntax:
'Query REQuest [requestnum]'.
Obviously, if you have an automated tape
library, there will be no mount
requests.
See also: CANCEL REQUEST; REPLY
Query RESTore TSM server command to display
information about restartable restore
sessions. Syntax:
'Query RESTore [NodeName]
[FilespaceName] [Format=Detailed]'
See also: Query Backup
Query SCHedule (administrative) Server command to query an
administrative schedule. Syntax:
Query SCHedule
[Schedule_Name]
[Type=Administrative]
[Format=Standard|Detail]'
Query SCHedule (client) Server command to query a client
schedule. Syntax:
Query SCHedule
[Domain_Name=*|Schedule_Name]
[Type=Client]
[Nodes=NodeName[,NodeName]]
[Format=Standard|Detail]'
Query SERver ADSMv3 server command to display
information about a server definition.
'Query SERver [ServerName]
[Format=Detailed]'
See also: DEFine SERver;
Set SERVERHladdress;
Set SERVERLladdress
Query SEssion ADSM server command to display info
about current sessions with ADSM
client nodes. Syntax:
'Query SEssion [SessionNumber]
[Format=Detailed]'
'Query SEssion [SessionNumber]
[MINTIMethreshold=minutes]
[MAXTHRoughput=kBs]
[Format=Standard|Detail|Gui]'
The MINTIMethreshold and MAXTHRoughput
parameters act as filters on the Query
SEssion output for client nodes. They
can be used to set up time and throughput
thresholds with which to automatically
cancel sessions which have become a
bottleneck to the server by using the
THROUGHPUTTimethreshold and
THROUGHPUTDatathreshold options.
Note that the Detailed report's only
additional information is to reveal any
tapes in use for the session, as in:
"Media Access Status: Current output
volume: 000043."
A Media Access Status of "Waiting for
mount" can be due to the library not
being in automated operation state.
The "Date/Time First Data Sent" value
reflects when the Consumer session began
sending client data to the TSM server
for storage in storage pools, after the
Producer session set up processing and
garnered the filespace Active files
inventory from the server.
Note: If the Platform and Client Name
are null for a session, it is a TCP
connection from an interloper, rather
than a legitimate TSM client. There will
be no Activity Log entry for the start
of the session - because there was no
session initiation interaction. You may
employ an OS command (netstat, lsof) to
identify the source of the session.
See also: CommW; Consumer session;
IdleW; Media Access; MediaW; Producer
session; RecW; Run; SendW; Status
See also: SHow NUMSESSions; SHow SESSions
Query SPACETrigger ADSMv3 server command to report the
settings for the database or recovery
log space triggers. Syntax:
Query SPACETrigger DB|LOG
[Format=Standard|Detailed]
See: DEFine SPACETrigger
Query SQLsession Server command to display the current
values of the SQL session attributes as
defined by Set SQLDATETIMEformat,
Set SQLDISPlaymode, and Set SQLMATHmode.
Report: Column Display Format, Date-Time
Format, Arithmetic Mode, Cursors Allowed
Query STatus ADSM server command to display info
about the general server parameters,
such as those defined by the SET
commands.
See also: Query OPTion
Query STGpool *SM server command to display info
about one or more storage pools.
Syntax:
'Query STGpool [STGpoolName]
[POoltype=PRimary|COpy|ANY]
[Format=Detailed]'
Obviously, there is no need for column
entries for migration where the stgpool
has no next level in the stgpool
hierarchy. (Column entries for migration
may persist where there had been a next
stgpool, but was removed.)
Query SYStem ADSMv3+ command to show much the same
info as the previous unsupported
command 'SHOW CONFIGuration', but sticks
to information valuable to customers.
This is a relatively time-consuming
command, as query commands go - which
can make it useful as an artificial
delay in server scripts and macros.
Query TAPEAlertmsg TSM 5.2+ server command to display the
current Set TAPEAlertmsg setting.
See also: Set TAPEAlertmsg; TapeAlert
Query Trace ADSM client command (dsmc Query Trace)
to display the current state of ADSM
tracing, as per Client User Options File
(dsm.opt) options.
See "CLIENT TRACING" section at bottom
of this document.
Query VOLHistory ADSM server command to show VOLUME
HISTORY data from db and export. Syntax:
'Query VOLHistory [BEGINDate=date]
[ENDDate=date] [BEGINTime=time]
[ENDTime=time]
[Type=All|BACKUPSET|DBBackup|
DBDump|DBRpf|DBSnapshot|EXPort|
RPFile|RPFSnapshot|STGDelete|
STGNew|STGReuse]'
Note the lack of selectivity by volume:
you can compensate for this by instead
doing: Select * FROM VOLHISTORY WHERE
VOLUME_NAME='______'.
The timestamp displayed is when the
operation started, rather than finished.
Does not show Checked-in volumes: the
volumes reported are those which at
one time had been assigned to a
storage pool.
See also: Query LIBVolume
Query Volume Shows storage pool volumes (not Scratch
volumes, or DB backup tapes, Backupset
tapes, or Export tapes.) Syntax:
'Query Volume [VolName]
[ACCess=READWrite|READOnly|
UNAVailable|OFfsite|
DEStroyed]
[STatus=ONline|OFfline|EMPty|
PENding|FILling|FULl]
[STGpool=*|PoolName]
[DEVclass=DevclassName]
[Format=Detailed]'
VolName may employ wildcard characters:
if omitted, all volumes are reported.
The "Estimated Capacity" value is the
"logical capacity" of the volume: if
3590 hardware compression is active, the
value reflects contents after
compression. The better compressed that
files were on the client (as with
'gzip -9'), the less compression will be
possible, and the closer the value will
be to physical capacity.
Note that STatus=EMPty will report only
volumes which have been explicitly
assigned to a storage pool via DEFine
Volume and which are devoid of data: it
will *not* report scratch volumes,
because the command is for reporting
storage pool volumes and scratches are
only potentials, not assigned to a
storage pool. You can instead do:
SELECT * FROM LIBVOLUMES WHERE
STATUS='Scratch'
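From an OS command line, the Select above might be wrapped as follows (the administrator ID and password are placeholders; -COMMAdelimited aids scripting):

```
dsmadmc -id=admin -pa=secret -COMMAdelimited \
  "SELECT volume_name FROM libvolumes WHERE status='Scratch'"
```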
QUERYAUTH ADSM server option for specifying the
level of authority that is required for
issuing server QUERY or SELECT commands.
Refer to the information on QUERYAUTH
parameter in the sample server options
file for more details.
QUERYSCHedperiod Client System Options file (dsm.sys)
option to specify the number of hours
the client scheduler should wait between
attempts to contact the *SM server for
scheduled work. Default: 12 (hours)
Syntax: "QUERYSCHedperiod N_Hours".
This option applies only when the
SCHEDMODe option is set to POlling
(not PRompted), and the client SCHEDULE
command is running.
The server can override this: see
'Set QUERYSCHedperiod'
Debugging: If you need to go to
extremes to determine where your
governing value is coming from, add the
following to your dsm.opt file:
TRACEFLAGS OPTIONS
TRACEFILE trace.tsm
then restart the scheduler...wait a
moment...then stop the scheduler and
inspect the trace.tsm file, seeking
"queryschedperiod".
Quiet (-Quiet) Client System Options file (dsm.sys)
option or command line option to
suppress the output of most ADSM
commands. Of particular value for Backup
Restoral performance: eliminates the
overhead of formulating and writing
progress messages.
Default: Verbose See also: Verbose
Quiet (server command line option) See: dsmserv
QUIT Command to leave an administrative
client session (dsmadmc).
Cannot be used for SERVER_CONSOLE
sessions.
Quota See: HSM quota
"Quotas" on storage used TSM provides no "quota" system to limit
the amount of server storage space which
a node may use.
Client node storage utilization might be
enforced via a mechanism based upon
dsmadmc which literally or effectively
performs 'Query Occupancy' and/or 'Query
Auditoccupancy' to see how much clients
have stored. You can do a 'Cancel
Session' on the unruly, or even do a
'Lock Node', and send them mail about
their behavior.
See also: Client sessions, limit amount
of data
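A minimal sketch of such enforcement, assuming the comma-delimited output of a dsmadmc Select against the AUDITOCC table (e.g., 'dsmadmc ... -COMMAdelimited "select node_name,total_mb from auditocc"') has been captured. Here the dsmadmc output is simulated with a here-document, and the 50000 MB limit and node names are invented examples.

```shell
# Flag any node whose total occupancy exceeds the (example) limit.
awk -F, -v limit=50000 \
    '$2+0 > limit {print $1 " over quota: " $2 " MB"}' \
    <<'EOF' > overquota.txt
NODEA,12000
NODEB,73500
EOF
cat overquota.txt
```

The overquota.txt report could then drive mail to the offending nodes' owners, or a 'Lock Node' for the unruly.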
q.v. Abbreviation for Latin phrase "quod
vide", meaning "which see", which in
reference works is a referral to another
definition.
Rapid Recovery                      The ultimate objective of the
Instant Archive function - to be able to
quickly restore your client files using
a Backup Set that had been created on
the TSM server, without the need for a
network connection, via media which your
workstation can read.
See also: Backup Set; Instant Archive
Raw logical volume TSM AIX terminology for a volume which
is used as addressable blocks by TSM for
database, recovery log, storage pools:
the volume does not contain a file
system. The absence of a file system
affords the opportunity for greater
performance in *most* aspects of TSM
operation. (There is no read-ahead in
RLV processing, as there is in JFS file
processing, so storage pool migration
will be slower with RLV than with JFS -
which is to say that JFS is preferable
for storage pool volumes while RLV is
better for the TSM DB and Recovery Log.)
As always, performance is subject to
many vagaries, such as OS settings,
hardware capabilities and their
operating attributes, etc.
RLVs are much simpler to set up (no need
to format volumes and create file
systems) - which makes RLVs the way to
go for disaster recovery scenarios,
where time is of the essence.
However, the use of raw volumes is
discouraged in some IBM doc: the Admin
Guide topic "The Advantages of Using
Journaled File System Files" offers
specific warnings against use of raw
logical volumes. In contradiction,
however, the TSM Performance Tuning
Guide recommends using RLVs. (The issue
is taken up in APAR IC41481.)
Raw logical volumes are handled via
their /dev/rlv____ name. (Note that all
logical volumes have a /dev/rlv, so be
careful about using one in TSM.)
They are created in AIX via the 'mklv'
command.
Note that TSM caches database and
recovery log pages in memory, lessening
implicit advantages of raw volumes for
recent data. Note also that JFS does
caching as well, which further increases
performance with a file system (but at
the expense of AIX system paging, in
that AIX filesystem caches participate
in virtual memory). The biggest
undocumented issue with raw volumes is
in "visibility"... Site administration
typically involves a bunch of people who
are not always cognizant of everything:
without a file system on the volume, its
purpose and usage is far less apparent
than a volume with a well-defined and
readily viewable file system. This
greatly increases the probability of
"accidents"...very expensive accidents,
such as thinking that the logical volume
is unused, and trying to create a file
system on it. (In AIX, the 'lslv'
command - if used - would show the
logical volume as being Open.) And the
naive may seek to extend the size of the
LV at the OS level. (Protect against
this by making the LV a fixed,
non-extendable size.)
With AIX there is no locking when using
RLV. *SM deals with this by implementing
locks using files in the /tmp directory
(ref: msg ANR7805E), with names of the
form: adsm.disk.dev.rlv... System
housekeeping must not delete these lock
files between system reboots. Sample
lock file: adsm.disk.dev.rlv-tsm-sp101
contains: "/dev/rlv-tsm-sp101" (no
newline at the end of the string).
Formatting? Not for raw logical volumes:
they do not need to be formatted, and
the dsmfmt command has no provision for
them (it only accepts file names).
Beware: *SM overlays the first 512
bytes of a raw logical volume, where
the Logical Volume Control Block (LVCB)
usually resides, making the logical
volume unusable for export-import and
like operations. Although this might
seem fatal, it is not the case. Once
the LVCB is overwritten, you can still
do the following:
- Expand a logical volume
- Create mirrored copies of the logical
volume
- Remove the logical volume
- Create a journaled file system to
mount the logical volume.
Do not use AIX volume mirroring with
RLVs: AIX uses space in the LVCB to
manage the mirroring, which overlays
ADSM data.
Performance: ADSM spreads its activity
across logical volumes assigned to it.
Avoid adding RAID striping, as this will
slow performance.
A technique to employ if running
multiple TSM servers in the same system
with RLVs is to run the TSM instances as
non-root and give ownership of the /dev
RLV special files to separate non-root
users.
Ref: IBM site Technotes 1173045, 1152712
See also: Raw partition
Raw logical volume, back up TSM 3.7 introduces the ability for *SM
to back up raw logical volumes, via what
is known as "Logical Volume Backup" and
"Image Backup". (The unsupported
Adsmpipe utility used to fill this role,
but is now officially obsolete for that
purpose.)
If your logical volumes are for use with
Oracle/Sybase/Informix, there are
intelligent backup agents for TSM which
provide better functionality and
application intelligence than the lv
backup.
Ref: 3.7 UNIX client manual under BACKUP
IMAGE; or redbook Tivoli Storage Manager
Version 3.7: Technical Guide
(SG24-5477), Chapter 3, Section:
"Logical volume backup".
See: 'dsmc Backup Image'
Raw Logical volume, change lvname   You may have to do this in
reconstructing a replacement for a
destroyed logical volume. AIX command:
'chlv -n NewLvName OldLvName'
Raw Logical volume, dsmfmt? You do not format logical volumes: the
dsmfmt command is used only for files to
be used as ADSM volumes.
Raw Logical Volume, query See: SHow LVM; SHow LVMCOPYTABLE;
SHow LVMFA; SHow LVMVOLS
Raw Logical Volume, size limit Through AIX 4.1, Raw Logical Volume
(RLV) partitions and files are limited
to 2 GB in size. It takes AIX 4.2 to
go beyond 2 GB.
Raw Logical volume in Sun/Solaris Watch out for two gotchas:
1. You cannot use the first cylinder of
a physical disk: the first blocks hold
the partition table and volume label.
*SM does not skip the first sector and
so would overwrite the volume label.
2. *SM checks if there is a file
system on the disk before using it. It
does this by trying to mount the
partition as a file system! If the
mount succeeds, the define fails. New
disks from Sun ship partitioned, with
empty file systems on them.
Solution: Make the partition start on
cylinder 1. You could also do:
'dd if=/dev/zero of=/dev/rdsk/....
count=1024' to destroy the first
superblocks so the mount fails.
Msg: ANR2404E
Raw partition TSM Solaris term for a disk partition
used by TSM as randomly addressable
blocks, for database and storage pool
volumes: the OS volume does not contain
a file system.
You do not have to format the volume in
TSM terms, but you do in OS terms. Watch
out for cylinder 0.
Ref: Admin Guide
Raw partition, back up See: Raw logical volume, back up
Raw volume support in Linux As of 2004/05, there is no support for
raw volume usage in Linux as on other
Unix platforms.
rc.adsmhsm See: HSM rc file
read-without-recall recall mode A mode that causes HSM to read a
[no recall; no-recall; norecall] migrated file from ADSM storage without
[Readwithoutrecall] storing it back on the local file
[Read without recall] system. The last piece of information
read from the file is stored in a buffer
in memory on the local file system.
However, if a process that accesses the
file writes to or modifies the file or
uses memory mapping, HSM copies the file
back to the local file system. Or, if
the migrated file is a binary executable
file, and the file is executed, HSM
copies the file back to the local file
system. You can change the recall mode
for a migrated file to
read-without-recall by using the
'dsmattr' command. Contrast with normal
recall mode and migrate-on-close recall
mode.
CAUTION: Readwithoutrecall has been seen
to cause problems with NFS-exported file
systems, as in file access stalling on
the NFS client.
ReadElementStatus SCSI command for some SCSI libraries
(e.g., StorageTek 9714) to obtain
information about the storage slots in
the library. You can run that SCSI
command by using the lbtest facility,
selecting options 1, 6, 8, and 9. The
output from option 9 will be for each
slot and will reveal the address, among
other things.
READOnly Access Mode saying that you can only
read the Storage Pool or Volume.
Set with 'UPDate STGpool' or
'UPDate Volume'.
TSM will spontaneously change a
volume's Access Mode to READOnly if it
encounters a failure of a Write
operation (message ANR1411W)...which
could be the result of dirty tape
heads...which can occur if a manual
library has not been manually cleaned or
in an automatic library the automatic
cleaning has been disabled or cleaning
cartridges have been exhausted.
Tapes in READOnly state are so noted
when the *SM server starts (ANR1414W).
When did the volume go READOnly? Do
'Query Volume ______ F=D' and inspect
the Last Update Date/Time.
See also: Pending
READWrite Access Mode saying that you can read or
write the Storage Pool or Volume.
Set with 'UPDate STGpool' or
'UPDate Volume'.
Real time statistics from TSM? A customer may want periodic snapshots
or progress indications, such as number
of MB backed up each minute or disk pool
utilization. TSM does not provide
real-time number gathering unto itself:
you would need to perform periodic
queries and capture values yourself, or
invest in an outboard aid such as
TSMManager.
Reason code Appears in various TSM error messages,
such as ANR8216W. TSM generalizes terms
because it has to accommodate multiple
environments. In Unix the "reason code"
is the Unix errno value (refer to
/usr/include/sys/errno.h).
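As a convenience, a Unix reason code can be decoded without digging through errno.h; a small sketch (the value 28 is just an illustration - ENOSPC on Linux; numeric codes vary by platform, so check your own system):

```python
import errno
import os

# Hypothetical reason code taken from a TSM message such as ANR8216W.
# 28 is ENOSPC ("No space left on device") on Linux.
reason = 28

print(errno.errorcode[reason])  # symbolic name, e.g. ENOSPC
print(os.strerror(reason))      # human-readable description
```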
Rebind deleted files See: Inactive files, rebind
Rebinding The process of associating a file with a
backed-up file with a new management
class name. Rebinding occurs:
- When you code a new management class
on the Include statements governing
subject files and do an unqualified
Incremental backup. (A Selective
backup binds the backed up files to
the new mgmtclass, but not the
Inactive files.)
- When the management class associated
with a backup file is deleted.
- If you boost the retention of a copy
group to which files are *not*
currently bound, or decrease the
retention of the copy group to which
files *are* bound. What's happening:
directories are by default bound to
the management class/copygroup with
the longest retention (RETOnly), in
the absence of DIRMc specification,
and so they "move" to the longest
retention management class.
Rebinding does *not* occur:
- For Archive files.
- For partial Incremental backups.
- For Inactive files where the client
file system no longer contains that
filename for a backup to operate on.
Rebinding does not necessarily occur:
- For directories, which want to be
bound to the mgmtclass with the
longest retention period, unless
DIRMc specifically tells them
otherwise.
If you added an Include statement to your
client options file to specify use of a
new management class and are perplexed
to find no rebinding to it upon the next
backup, it may be the case that you have
a client option set on the TSM server,
where its include-exclude statements
take precedence over your local file.
Watch out for Windows cluster servers
with multiple options files: you need
to be careful to code the mgmtclass on
the right set of Include statements.
See also: Archived files, rebinding does
not occur
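As an illustrative sketch (the path and management class name are hypothetical, not from the source), the Include statement that drives such a rebind on the next unqualified incremental backup looks like:

```
* Client include-exclude fragment: rebind everything under /data
* to management class MCLONG (/.../ matches any subdirectory level).
include /data/.../* MCLONG
```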
Rebinding--> Leads the line of output from a Backup
operation, as when a filespace has moved
from one TSM server to another, or
perhaps the management class has
changed, as via Include spec. The
rebinding of directories reflects their
fresh backup.
The rebinding indicator does not
identify the management class to which
the object is rebound: that can be
identified in the Backups table.
Note that rebinding does not apply to
Archived files: see "Archived files,
rebinding does not occur".
See also: Directory-->; Expiring-->;
Normal File-->; Updating-->
Recall (HSM) The process of copying a migrated file
from an ADSM Space-Managed Storage Pool
back to its originating client file
system.
Set recall modes with the HSM command
'dsmmode -recall=Normal|Migonclose'
for overall HSM action; or
'dsmattr -RECAllmode=Normal|Migonclose
|Readwithoutrecall File_Name'
for a specific file or files.
Contrast with Restore and Retrieve.
See also: Transparent Recall;
Selective Recall; Recall Mode
Recall information (HSM) 'dsmq' command.
Recall list (HSM) 'dsmmigquery FSname'
Recall Mode (HSM) 1) One of four execution modes provided
by the dsmmode command. Execution modes
allow you to change the HSM-related
behavior of commands that run under
dsmmode. The recall mode controls
whether an unmodified, recalled file is
returned to a migrated state when it is
closed.
2) A mode assigned to a migrated file
with the dsmattr command that determines
how the file is processed when it is
recalled. It determines whether the file
is stored on the local file system, is
migrated back to ADSM storage when it is
closed, or is read from ADSM storage
without storing it on the local file
system.
Recall mode of migrated file, set 'dsmattr -recallmode=n|m|r Filename'
(HSM) where recall mode is one of:
- n, for Normal
- m, for migrate-on-close
- r, for read-without-recall
Recall process, remove from recall 'dsmrm Recallid'
queue as determined by doing 'dsmq'.
Recall processes, display 'dsmq'
Recall queue, remove a process from 'dsmrm Recallid'
as determined by doing 'dsmq'.
REClaim= Keyword on 'DEFine STGpool' and
'UPDate STGpool' specifies the amount of
reclaimable space on a volume (as a
percentage) at which point reclamation
should kick off, to copy the tape's
contents and thus reclaim that space.
That is, the value is the percentage
of empty space on the volume, including
empty space within Aggregates.
The conventional value is 60 (%), such
that volumes should undergo reclamation
when their Pct. Reclaimable Space values
reach 60%.
The REClaim value should be 50 (%) or
greater such that two volumes could be
combined into one.
Important note: Due to occasional I/O
errors, tapes will be thrown into
Readonly state, and their Pct Util may
be quite low, like 3.0%. Such tapes are
quite usable, but often go unnoticed,
leaving you short of scratches - and
reclamation won't reclaim them because
their Pct. Reclaimable Space is low. You
should periodically perform 'Query
Volume ACCess=READOnly STatus=Filling'
and do a MOVe Data to replenish your
scratch pool.
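The "50 (%) or greater" rule above can be sketched as a small calculation: with at least 50% reclaimable space per volume, the remaining valid data from two input volumes is guaranteed to fit on one output volume (a simplified model, ignoring per-volume overhead and compression variation):

```python
import math

def volumes_needed(pct_reclaimable_each, n_input_volumes):
    # Fraction of each input volume still holding valid data:
    valid_fraction = (100.0 - pct_reclaimable_each) / 100.0
    total_valid = valid_fraction * n_input_volumes  # in volume-equivalents
    # Whole output volumes needed to hold the surviving data:
    return math.ceil(total_valid)

print(volumes_needed(60, 2))  # two 60%-reclaimable volumes -> 1 output volume
print(volumes_needed(40, 2))  # only 40% reclaimable -> no net gain: 2 outputs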
Reclaim pool See: RECLAIMSTGpool
RECLAIM_ANALYSIS ADSMv3 SQL: Provisional database table
created by the AUDIT RECLAIM command,
which fixed problems created by defects
in the early levels of the V3 server.
See also: AUDIT RECLAIM
Reclaimable space Do 'Query Volume [VolName] F=D' and look
at the "Pct. Reclaimable Space" for each
volume.
Reclaimable volumes See: Storage pool, reclaimable volumes
RECLAIMSTGpool=poolname ADSMv3: DEFine STGpool operand.
(single drive reclamation) Specifies another storage pool as a
target for reclaimed data. This
parameter is primarily for use with
storage pools that have only one drive
in its library. This parameter allows
the volume to be reclaimed to be mounted
in its library and the data is then
moved to the specified reclaim storage
pool. This parameter must be an
existing primary sequential storage
pool. This parameter is optional,
however: if used, all data will be
reclaimed to that storage pool
regardless of the number of drives in
that library.
The reclaim storage pool itself must be
defined as a primary storage pool. There
are no restrictions on this storage
pool's definition, but it should be
defined with a NEXTSTGPOOL= value that
will migrate its data back into the data
storage hierarchy. Because its primary
function is to collect reclaimed data,
its NEXTSTGPOOL= value should be the
same storage pool from which the data
was reclaimed.
When having just a single drive, you
should have your disk
STGpool MIGPRocess=1 and
DEVclass MOUNTLimit=1.
Ref: Admin Guide "Reclaiming Volumes in
a Storage Pool with One Drive"
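A minimal configuration sketch, assuming invented pool and device class names (reclaimpool, fileclass, tapepool) and placeholder admin credentials:

```shell
# Define a primary sequential pool (FILE devclass here) to receive
# reclaimed data; its NEXTstgpool sends the data back to the tape pool.
dsmadmc -id=admin -password=xxxxx \
  "DEFine STGpool reclaimpool fileclass MAXSCRatch=20 NEXTstgpool=tapepool"
# Point the single-drive tape pool at it:
dsmadmc -id=admin -password=xxxxx \
  "UPDate STGpool tapepool RECLAIMSTGpool=reclaimpool"
```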
Reclamation Files on tape volumes may expire per
standard rules or by virtue of the
owning filespace having been deleted.
With abundant tapes, one may be able to
simply let the contents of tape volumes
expire and recycle tapes with no effort.
But in most sites that's not possible:
tapes are needed, and the remaining
contents of volumes have to be copied to
newer, compacted volumes to create
needed scratches. This is Reclamation.
Volumes are chosen by the oldest "Date
Last Written", not Pct Util or
Pct. Reclaimable Space.
It copies the remaining data on a volume
to a volume that is in a Filling state,
or an empty volume if no partials are
present. Emptied volumes return to where
they came from: the scratch pool or, if
the volume had been defined to the
storage pool, then it remains defined to
the storage pool. The volume being
reclaimed is mounted R/O, and the volume
to receive the data is obviously mounted
R/W.
Reclamation is not something you want to
do: it ties up drives, takes time, and
entails additional wear on drives and
media. Do it only when your scratch tape
pool reaches a comfortable minimum.
(There is some consideration that
delaying reclamation can mean longer
restoral times as compared to data on
reclaimed, compacted tapes; but
reclamation typically involves your
oldest tapes and data, so it's usually
not an issue.)
ADSMv3+: When logical files are
reclaimed from within an Aggregate, the
Aggregate is compacted to reclaim
space. Note that, in contrast, MOVe Data
by default does not reclaim space where
logical files were logically deleted
from *within* an Aggregate. (As of TSM
5.1 there is a RECONStruct option which
does allow aggregate-internal space to
be reclaimed.) If the volume being
reclaimed is *not* aggregated (as in the
case of a volume produced under ADSMv2,
or where too-small TXNGroupmax and
TXNBytelimit values conspire to
effectively prevent aggregation) the
files are simply transferred as-is: the
output likewise *not* aggregated. Thus,
in some cases, a Move Data (which does
no aggregate tampering) may be just as
effective as a reclamation.
If you are in a hurry to produce needed
scratch tapes, use Move Data rather than
Reclamation.
Reclamation also brings together all the
pieces of each filespace, which means it
has to skip down the tape to get to each
piece. (The portion of a filespace that
is on a volume is called a Cluster.)
In addition, if the target storage pool
is collocated, each cluster may ask for
a new output tape, and TSM isn't smart
enough to find all the clusters that are
bound for a particular output tape and
reclaim them together. Instead it is
driven by the order of filespaces on the
input tape, so the same output tape may
be mounted many times.
The nature of collocation means that
reclamation of a collocated storage pool
will not harvest needed scratches as
quickly as reclamation of a
non-collocated copy storage pool.
If an Expire Inventory is running and
has reduced the Pct Util of a volume
below the reclamation threshold,
Reclamation will not occur until the
Expire is done.
The reclamation thread wakes up at least
once per hour to see if there is
work to do (more frequently when the
reclamation threshold is lower).
Beware that the reclamation process may
be single-threaded such that multiple
MOVe Data commands may be advantageous.
Note that after a reclamation, the 3590
ESTCAPacity value returns to its base
number of "10,240.0" MB.
When Reclamation is running, a Backup
cannot start if the Reclamation is using
tape drives that it needs.
Messages: ANR1040I for each volume being
reclaimed; ANR1044I specifying required
tapes; ANR8324I for tape mounts;
ANR1041I at end.
See also: Cluster; MOVe Data;
Pct. Reclaimable Space
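The candidate-selection behavior described above (volumes whose Pct. Reclaimable Space meets the REClaim threshold, processed oldest "Date Last Written" first) can be modeled in a toy sketch; the volume data is invented:

```python
from datetime import date

volumes = [
    {"name": "A00001", "pct_reclaimable": 72.0, "last_written": date(2004, 11, 3)},
    {"name": "A00002", "pct_reclaimable": 35.0, "last_written": date(2004, 9, 14)},
    {"name": "A00003", "pct_reclaimable": 61.5, "last_written": date(2004, 6, 2)},
]

def reclamation_candidates(vols, reclaim_threshold):
    # Eligible: reclaimable space at or above the stgpool REClaim value;
    # ordered by oldest Date Last Written, not by Pct Util.
    eligible = [v for v in vols if v["pct_reclaimable"] >= reclaim_threshold]
    return sorted(eligible, key=lambda v: v["last_written"])

for v in reclamation_candidates(volumes, 60):
    print(v["name"])  # A00003 first (oldest), then A00001; A00002 ineligible
```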
Reclamation, activate Do: 'UPDate STGpool PoolName REClaim=NN'
making the NN percentage less than 100%.
REClaim specifies the percentage of
reclaimable space left on a volume for
when reclamation will occur for it.
When will it start? Experience is that
for copy storage pools, it starts
immediately; for primary storage pools,
"in a little while".
At what point should you reclaim tapes?
In an ideal world, you would never have
to: you would have sufficient tapes and
library capacity such that content
attrition alone would empty and return
tapes automatically. In the real world,
we have to perform reclamation. The best
approach is to perform reclamation only
when the number of scratches falls below
a comfortable level. This maximizes data
elimination through attrition and then
acts on the residual data on media,
while minimizing occupancy and wear on
drives. You should avoid using a
REClaim value of less than 60 (%) -
which means that when the volume has a
Pct. Reclaimable Space value of 60% or
more, it will undergo
reclamation. If you're going that low,
you are overly constrained. Note that
the REClaim value should be 50 or
greater such that two volumes could be
combined into one.
The anticipated reclamation process may
take considerable time to start,
particularly on collocated storage pools
with a large number of volumes: it takes
much less time to start on
non-collocated copy storage pools which
have a comparable amount of data.
Reclamation, deactivate At a minimum you need to do:
'UPDate STGpool PoolName REClaim=100'.
Now take action based upon stgpool type:
- Primary storage pool: Reclamation for
primary stgpools is performed on a
volume by volume basis. That is, each
volume is reclaimed as its own
reclamation process. When reclamation
of a single, primary stgpool volume
completes, the TSM Server will check
the reclamation threshold for that
stgpool before looking for additional
volumes to reclaim. If the
reclamation threshold has been
increased to 100%, no further volumes
in the primary stgpool will be
reclaimed.
- Copy storage pool: With these, all
eligible volumes are reclaimed as
part of a single process. Because of
this, the only time TSM checks the
reclamation threshold for the copy
stgpool is when the reclamation
process begins. At that time, all of
the eligible volumes are queued up to
be reclaimed: the TSM Server does not
check the reclamation threshold again
until that composite process ends.
Setting the reclamation percentage to
100% prevents any new reclamation
processes from starting, but does not
stop any running ones.
You can usually force a reclamation of
either pool type to end by issuing a
CANcel PRocess on it. (The cancel will
not take effect until at least the
current aggregate is completed.)
For an onsite storage pool, the new
REClaim value is observed as the next
volume is handled.
For an offsite storage pool, the new
REClaim value is *not* observed prior to
the conclusion of the current process.
Ref: Admin Guide manual topic "Choosing
a Reclamation Threshold", "Lowering the
Migration Threshold".
Reclamation, offsite Volumes are not ordered by any
externally visible parameter. The
processing order will appear to be
arbitrary. Possibly, *SM looks at all
the data on all the eligible tapes, then
tries to mount each input tape required
(from your onsite pool) just once -
which compares with working on all the
eligible offsite tapes at the same
time.
You don't need to bring back offsite
volumes in order to do reclamation on
them. The valid files remaining on
sparsely filled offsite volumes are
copied from the original copies of the
files. These original copies of the
files are in the primary storage pools
onsite...thus no offsite volumes need to
be brought back to do reclamation. A new
set of copy stgpool volumes is created
which contain all the valid files
reclaimed from the offsite volumes: the
reclamation of an offsite storage pool
effectively brings the data back onsite.
You must then be sure to send these
freshly-written volumes offsite.
(Because of this exposure, you may want
to avoid inciting reclamation of offsite
volumes, and instead simply let their
contents dissipate over time.)
The reclaimed offsite volumes go into a
holding state (Pending) for as long as
you specify with the REUsedelay
parameter (on define copy storage pool),
meaning that in the event of a disaster,
the restored TSM db will probably again
point to data on those offsite volumes,
which because of the db restoral would
no longer be Pending.
Note that all eligible offsite storage
pool volumes are reclaimed in a
continuous operation which remains blind
to administrative changes to the
reclamation threshold: If you change the
REClaim value while that process is
running, it will have no effect. In
contrast, the reclamation of onsite
volumes will look at the value as it
goes to reclaim the next volume.
Reclamation, pre-emption Space Reclamation will be pre-empted if
an HSM recall needs a tape; you will see msg
ANR1080W in the Activity Log.
Reclamation, prevent Do: 'UPDate STGpool PoolName
REClaim=100';
More drastically achieve by setting
DEVclass MOUNTLimit=1.
Reclamation, prevent at start-up To prevent reclamation from occurring
during a problematic TSM server restart,
add the following (undocumented) option
to the server options file:
NOMIGRRECL
Reclamation and migration See: Migration and reclamation
Reclamation and the single tape drive See: RECLAIMSTGpool
Reclamation failure Most commonly occurs due to unreadable
files on the volume being reclaimed,
whereupon TSM makes the volume's access
mode Unavailable (msg ANR1410W).
Retrying the operation, or doing Move
Data on it, will often get the remaining
files off the volume, particularly if
another drive is used to read the
volume.
Reclamation in progress? 'Query STGpool ____ Format=Detailed'
"Reclamation in Progress?" value.
Reclamation not clearing some offsite You've done Reclamation, but some
tapes offsite volumes still show small percent
utilizations - not being fully
reclaimed. This may be due to TSM
checking for files which span volumes,
to prevent an endless chain of
reclamation.
Reclamation not happening Be aware that with a large storage pool,
(reclamation not working) it can take a substantial amount of time
for TSM to start the
reclamation... sometimes, hours.
Beyond that, possible problem areas:
- No volumes have a Pct. Reclaimable
Space value at least as high as the
Stgpool REClaim value.
- Two mount points are not
simultaneously available. (Check your
DEVclass MOUNTLimit value and the
actual viability of your drives.)
- With large storage pools it can take
a while for Reclamation to initiate -
perhaps longer than the window that it
is allotted by server administration
schedules.
- Do the subject volumes themselves have
good Access values, which allow them
to be mounted and reclaimed?
Volumes which are offsite cannot be
reclaimed if they have no represented
data onsite.
- A small Pct Util value may involve
storage pool files which span volumes,
and reclamation may not be happening
because the volume that the files span
to/from are in a state which precludes
their use. Use 'Query CONtent
<Volname> F=D' on suspect volumes,
looking for Segment numbering other
than 1/1 in the first and/or last
files, which indicates spanning
from/to other volumes. Do 'Move Data'
on one such volume and see what
happens.
- The presence of server option
NOMIGRRECL will prevent it.
Check your Activity Log for errors.
Note that tapes are candidates for
reclamation whether they are Full or
Filling.
Reclamation performance Is governed by the MOVEBatchsize and
MOVESizethresh options, which help tune
the performance of server processes that
involve the movement of data between
storage media. (There was a problem in
TSM 4.2 where those options were not
being honored for disk-to-tape
reclamation where disk caching was
turned on: it has since been fixed.)
Number of processes: There can be only
one per stgpool, as the product is
currently designed. (You can instead
perform multiple MOVe Data operations -
but MOVe Data is not the same as
reclamation.)
If using LTO Ultrium, slow reclamation
performance can reveal an ugly LTO
firmware defect, in which the CM index
is corrupted. See: LTO performance
Reclamation process, cancel The cancel will take effect when it
reaches a point to safely stop the
reclamation. The system will finish the
last process started, and once it is
complete, stop.
Reclamation processes, number of Only one reclamation process per storage
pool runs at a time - and then
only per the Reclamation Threshold value
for the storage pool being less than
100%.
Most server operations do not support
multiple parallel processes. The only
exceptions are migration from disk
pools, backup storage pool, restore
storage pool, and restore volume.
Reclamation stalls awaiting tape It cannot get the tape drive(s) it needs
mounts to perform the mount(s), which can be
due to the drive(s) being busy with
other tapes, or busy with a cleaning
cartridge, or that the drive names
changed across an AIX reboot wherein
tape drives were added or removed.
REConcile Volumes TSM server command to reconcile
differences between virtual volume
definitions on the source server and
archive files on the target server. TSM
finds all volumes of the specified
device class on the source server and
all corresponding archive files on the
target server. The target server
inventory is also compared to the local
definition for virtual volumes to see if
inconsistencies exist.
'REConcile Volumes
[* | '-device_class_name-']
[Fix=No|Yes]'
See also: Virtual volumes
RECOncileinterval Client System Options file (dsm.sys)
option to specify how often *SM
automatically reconciles HSM-controlled
file systems, by running dsmreconcile.
Possible values: 0 thru 9999
(Value 0 prevents reconciliation from
happening automatically at specific
intervals.)
Default: 24 hours
Note that unless you run dsmreconcile,
HSM file expiration will not occur, and
HSM files whose stubs were deleted from
the HSM file system will build up in *SM
server storage.
See also: dsmmigundelete; dsmreconcile
RECOncileinterval, query Via ADSM 'dsmc Query Options' or TSM
'dsmc show options'.
Look for "reconcileInterval".
Reconciliation (HSM) The process of synchronizing a file
system to which you have added space
management with the ADSM server you
contact for space management services
and building a new migration candidates
list for the file system.
Initiated by:
- Automatically via the dsmreconcile
daemon, at intervals specified via the
RECOncileinterval option in the Client
System Options File.
- Automatically before performing
threshold migration if the migration
candidates list for a file system is
empty.
- Manually: The client root user can
start reconciliation manually at any
time, via the 'dsmreconcile' command.
Reconciliation interval (HSM) Control via the RECOncileinterval option
in the Client System Options file
(dsm.sys). Default: 24 hours
Reconciliation processes (HSM), max Control via the MAXRCONcileproc option
in the Client System Options file
(dsm.sys). Default: 3
Query via client 'dsmc Query Options' in
ADSM or 'dsmc show options' in TSM;
Look for "maxReconcileProc".
Reconstruction See: Aggregates and reclamation;
MOVe Data
Recover volume See: AUDit Volume; RESTORE Volume;
Volume, bad, handling
Recovery Log The Recovery Log houses in-flight
transactions, either:
- until they are committed to the TSM
database, when LOGMode Normal is in
effect;
- until the next database backup is
performed, when LOGMode Rollforward
is in effect.
Note that changes are initially housed
in the Recovery Log buffer pool, which
means that the Recovery Log and Database
on disk are not always consistent.
Space must be available in the Recovery
Log for a session to be established
(else get msg ANS1364E).
Be aware that more space will be needed
as the TXNBytelimit client option and
the MOVEBatchsize, MOVESizethresh, and
TXNGroupmax server option values are
increased. Also, longer tapes make
Reclamation run longer and require more
Recovery Log space.
The backup of large files will keep
Recovery Log space from being
committed.
ADVISORY: EXPIre Inventory quickly
consumes Recovery Log space. Use its
DUration parameter to limit the amount
of time that the expiration runs.
See also: Transactions, minimize number
Named in
/usr/lpp/adsmserv/bin/dsmserv.dsk, as
used when the server starts.
(See "dsmserv.dsk".)
Installation default is to create it 9MB
in size.
A database backup will reportedly empty
the log.
See also: LOGPoolsize;
DEFine SPACETrigger
Recovery Log, analysis To see what caused the Recovery Log to
fill, issue internal commands:
q se f=d q log f=d
q pr SHow dbtxn
SHow THReads SHow logseg
SHow locks SHow logv
SHow txnt SHow dbvars
SHow dbtxnt
Recovery Log, checkpoint Consider doing this if the Recovery Log
is inflated by a flurry of activity.
See: CKPT
Recovery Log, compressed records Only occurs when the Recovery Log is in
Normal mode (as opposed to Rollforward).
Msgs: ANR2362E
Recovery Log, convert second primary 'REDuce LOG Nmegabytes'
volume to volume copy (mirror) 'DELete LOGVolume 2ndVolName'
'DEFine LOGCopy 1stVolName 2ndVolName'
Recovery Log, create 'dsmfmt -log /adsm/DB_Name Num_MB'
where the final number is the desired
size for the recovery log, in megabytes,
and is best defined in 4MB units, in that
1 MB more will be added for overhead if
a multiple of 4MB, else more overhead
will be added. For example: to allocate
a log of 1GB, code "1024": ADSM
will make it 1025.
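That allocation rule can be sketched as a small calculation; this is a simplified model of the behavior described above (an assumption, not a formula from the manuals): round the requested size up to a 4MB multiple, then add 1 MB of overhead.

```python
def allocated_mb(requested_mb):
    usable = ((requested_mb + 3) // 4) * 4  # round up to a 4 MB multiple
    return usable + 1                       # plus 1 MB of overhead

print(allocated_mb(1024))  # the 1 GB example above -> 1025
print(allocated_mb(1022))  # non-multiple of 4 rounds up first -> 1025
```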
Recovery Log, define additional volume 'DEFine LOGVolume RecLog_VolName'
Recovery Log, define volume copy 'DEFine LOGCopy RecLog_VolName
(mirror) Mirror_Vol'
Recovery Log, delete volume 'DELete LOGVolume VolName'
Will cause TSM to start a process to
move data from that volume to the
remaining Recovery Log volumes.
Recovery Log, extend Via ADSM server command:
'EXTend LOG N_Megabytes'.
Causes a process to be created which
physically formats the additional space
(because it takes so long).
If server down, use Unix command line:
'dsmserv extend log <volname> <mb>'
where "volname" is typically the name
of a dsmfmt-formatted file which you
want to augment the existing recovery
log.
See also: dsmserv EXTEND LOG
Recovery Log, maximum size Before TSM 4.2: Per APAR IC15376, the
recovery log should not exceed 5.5 GB
(5440 MB). But APAR IY09200 says that
the maximum size is 5420 MB; and the
max usable is 5416 MB (because of how
calculations are performed which store
data structures in a certain fixed area
in the first 1 MB of each DB and LOG
volume).
Msgs: ANR2452E and ANR2429E
Ref: Server Admin Guide, topic
Increasing the Size of Database or
Recovery Log topic, in Notes.
See: SHow LVMFA, which reveals that the
max is 5.3GB, not 5.5. (See the
reported "Maximum possible LOG 1 LP
Table size".)
As of TSM 4.2 (June 2001): The maximum
size of the recovery log is increased
to 13 GB. (Note that automatic
expansion of the Recovery Log, by
DBBackuptrigger, will not go beyond 12
GB, to provide wiggle room.)
Advisory: It is best to not run with a
maximum value because you may run into
the very ugly ANR7837S situation where
your Recovery Log is full and, being at
the maximum, you can't add space to get
your server restarted. And consider
running in Normal rather than
Rollforward mode: many customers are
doing that to avoid log filling
problems. If you run in Rollforward
mode, use DBBackuptrigger.
(The max size is apparently in the TSM
source code as #define LOG_MAX_MAXSIZE.)
Recovery Log, mirror, create Define a volume copy via:
'DEFine LOGCopy RecLog_VolName
Mirror_Vol'
Recovery Log, mirror, delete 'DELete LOGVolume RecLog_VolName'
(It will be nearly instantaneous)
Messages: ANR2263I
Recovery Log, Pct. Utilized A defect in v4.1 prevents this value
from going to zero after a database
backup. Circumvention: do 'ckpt'.
Another customer reports that setting
Logmode to Normal, then back to
Rollforward allows the next incremental
or full to clear the log. If neither
works, Halt and restart the server.
Recovery Log, query 'Query LOG [Format=Detailed]'
Recovery Log, reduce Via ADSM server command:
'REDuce LOG N_Megabytes'.
Recovery Log allocation on a disk See: Recovery Log performance
Recovery Log buffer pool See: LOGPoolsize
Recovery Log consumption stats, 'RESet LOGConsumption'
reset
Recovery Log filling - Assure that your Copy Pool MODE is not
ABSolute, which would force full
backups every time, and thus burden
the Recovery Log.
- Review your client systems to assure
that the backups they are doing are
true Incrementals, to minimize the
amount of data backed up each day.
- Look into having your clients spread
their backups out over time, to
prevent Recovery Log congestion. (In
particular, make sure that clients are
not needlessly running backups in
parallel.)
- Check your server Set RANDomize
setting to assure that you are
staggering the start of scheduled
backups.
- Consider having massive clients break
up their backups into multiple pieces,
as via VIRTUALMountpoint and the like.
- Use DBBackuptrigger.
- Watch out for clients backing up very
large files or commercial databases,
as that constitutes a single, very
large transaction, which burdens the
Recovery Log.
- Do sufficient BAckup DB operations
over the day (like, 1 full, multiple
incrementals) to keep recovery log
space low. Keep in mind that TSM
server processes like Expiration
consume a lot of Recovery Log space.
- Assure that no other TSM processes
(like Expiration) are running during
high-load backup periods. And likewise
assure that the server system is not
burdened with work that interferes
with the ability of TSM to deal with
its load at that time.
- Look into your server LOGPoolsize, as
it governs the rate at which Recovery
Log transactions are committed to the
database.
- Tune your TSM server and database to
assure that database commits can occur
rapidly when they do occur.
- Assure that the computer and operating
system in which the TSM server runs is
properly configured and tuned to
assure that TSM can promptly attend to
its database.
- If your server is "maxed out", you
should consider splitting the load to
another server.
- The active client may not be sending
commits often enough. (Clients with
NICs set to Autonegotiate may end up
with dismal, erroneous datacomm rates
and so "pin" the log due to not
getting to a commit point.)
- A TSM database volume on a very slow
or troubled disk can be an affector.
See also: Recovery Log pinning
Recovery Log location Is held within file:
/usr/lpp/adsmserv/bin/dsmserv.dsk
(See "dsmserv.dsk".)
Gets into that file via 'DEFine
LOGVolume' (not by dsmfmt).
ADSM seems to store the database file
name in the ODM, in that if you restart
the server with the name strings within
dsmserv.dsk changed, it will still look
for the old file names.
Recovery Log max utilization stats, 'RESet LOGMaxutilization'
reset
Recovery Log mode, query 'Query STatus', look for "Log Mode"
near bottom.
Recovery Log mode, set See: Set LOGMode
Recovery Log pages, mode for reading, "MIRRORRead LOG" definition in the
define server options file.
Recovery Log pages, mode for writing, "MIRRORWrite LOG" definition in the
define server options file.
Recovery Log performance As its name implies, the Recovery Log is
more of a serially written thing rather
than randomly accessed. As such, it is
less sensitive to disk position than the
TSM DB for server performance. Some
guidelines:
- Obviously, don't share the Recovery
Volume disk(s) with other
high-activity functions.
- For best dealings with disk problems,
spread the Recovery Log over multiple
volumes rather than making it all
one, large volume: if there is a disk
surface defect, the damage is isolated
to one replaceable volume rather than
taking out your whole, large Recovery
Log volume. Via TSM mirroring, you
can swap in another modest volume to
take the place of the failed area.
(TSM creates one thread per volume,
which helps parallelization in places
where benefits can be had; but with
the nature of the Recovery Log file,
thread counts don't matter.)
Recovery Log pinning/pinned A phenomenon of long-running
transactions which causes Recovery Log
space to be greatly consumed...
The nominally occupied region of the
recovery log is bounded by head and tail
pointers. The head pointer moves forward
as new transactions are started. The
tail pointer moves forward when the
oldest existing transaction ends. Both
pointers wrap around to the beginning of
the log when they reach its end. While
the copying of a huge file is under way,
there will be one or more log entries
relating to that operation just ahead of
the tail pointer. There will be a huge area
filled with log entries for transactions
that have started and ended since the
copying of the huge file started. There
will be a small area just behind the
head pointer containing log entries for
the remaining pending transactions and
possibly some entries for recently ended
transactions. That huge area in the
middle is considered to be occupied log
space. When the copying of the huge
file ends the tail pointer will advance
to the end of the area containing recent
transactions and the utilization will
drop suddenly. The other activities
running concurrently with the copying of
the huge file are generating the
transactions that keep moving the head
pointer forward.
Expiration is probably the biggest
generator of transactions.
Look also for lingering client sessions
which eventually time out and cancel
like "ANR0481W Session ___ for node ____
(WinNT) terminated - client did not
respond within 7800 seconds."
Ref: IBM site article swg21054574
See also: CKPT; SHow LOGPINned
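The head/tail mechanics above can be illustrated with a small
sketch (a toy model for illustration only; the function and the
numbers are invented, not TSM internals):

```python
# Toy model of recovery-log "pinning" (illustration only, not TSM code).
# The log tail cannot advance past the oldest uncommitted transaction,
# so one long-running transaction pins all log space written after it.

def pinned_utilization(log_size, entries):
    """entries: list of (txn_id, committed) log records, oldest first.
    Occupied space spans from the oldest uncommitted record to the head."""
    oldest_open = next((i for i, (_, done) in enumerate(entries) if not done),
                       len(entries))
    occupied = len(entries) - oldest_open
    return occupied / log_size

# A huge-file copy (txn 0) stays open while 90 short transactions
# start and commit behind it: nearly the whole log reads as occupied.
entries = [(0, False)] + [(n, True) for n in range(1, 91)]
print(pinned_utilization(100, entries))   # 0.91 - log is 91% "pinned"

# Once txn 0 commits, the tail advances and utilization collapses,
# matching the sudden utilization drop described above.
entries[0] = (0, True)
print(pinned_utilization(100, entries))   # 0.0
```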
Recovery Log statistics, reset 'RESet LOGConsumption' resets the
statistic on the amount of recovery
log space that has been consumed since
the last reset.
'RESet LOGMaxutilization' resets the
max utilization statistic for the
recovery log.
Recovery Log volume (file) Each Recovery Log volume (file) contains
info about all the other db and log
files.
See also: dsmserv.dsk
Recovery Log volume, add 'DEFine LOGVolume VolName'
Recovery Log volume, move The best approach to relocating Recovery
Log volumes is to "leap-frog": add a new
volume, then 'DELete LOGVolume' on the
old volume. It is best to disable
sessions and processes in the mean time,
to prevent a mass of data from going
into the Recovery Log.
Note: TSM keeps track of Recovery Log
volume pathnames in its database; so you
can't expect to change names in the
dsmserv.dsk file and then simply bring
up the server: that will result in
ANR7807W and ANR0259E messages.
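The leap-frog might be sketched as the following command sequence
(volume pathnames are invented placeholders; allocate/dsmfmt the
new volume first where your platform requires it, and note that a
REDuce LOG may be needed before the delete will succeed):

```shell
# Hypothetical example - pathnames are placeholders, not a recipe.
disable sessions                       # quiet the Recovery Log first
define logvolume /tsm/log/log2.dsm     # add the replacement volume
delete logvolume /tsm/log/log1.dsm     # then drop the old volume
enable sessions
```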
Recovery Log volume, remove 'DELete LOGVolume VolName'
You may have to do a 'REDuce LOG'
beforehand to take the space away from
ADSM if it was previously told that that
much space was available to it. (Msg
ANR2445E)
Recovery Log volume usage, verify If your *SM Recovery Log volumes are
implemented as OS files (rather than
rlv's) you can readily inspect *SM's
usage of them by looking at the file
timestamps, as the time of last read and
write will be thereby recorded.
Recovery Plan Files Part of TSM DRM.
Recovery Plan Files not expiring This is controlled via
'Set DRMRPFEXpiredays __', and obviously
is effective only if you are running
Expirations (to completion).
Be sure that you are using the Prepare
command with a DEVclass spec, to keep
the recovery plan file from being
written to a file based upon the plan
prefix, such that Expiration cannot deal
with it.
Note also that Recovery Plans stored on
another TSM server can thwart
expiration: there you need to clean them
out via a shell script or the like.
RecvW (sometimes "RECW") "Sess State" value from 'Query SEssion'
saying that the server is waiting to
receive an expected message from the
client.
See also: Communications Wait;
Idle Wait; Media Wait; Run; SendW; Start
Recycle bin (Windows), excluding Exclude.dir '?:\...\RECYCLE*'
Redbooks IBM practical usage guides, named for
their red covers, are "how to" books,
written by very experienced IBM,
Customer and Business Partner
professionals from around the world.
Redbooks are most commonly downloaded
from www.redbooks.ibm.com, but can also
be ordered in hardcopy form if desired.
Redpieces Are Redbooks that are under development
- made available this way to make the
information available in advance of
formal publication.
Redpapers Are smaller technical documents
available on the IBM Redbooks site which
reflect information gained during work
on a particular topic. Redpapers are
only available on the Web.
Redirection of command output The ADSM server allows command output to
be redirected, as in capturing output in
a file. Use ' > ' to create a file
afresh or ' >> ' to append to a file.
Be sure to have spaces around the
angle-brackets. Be aware that ADSM
tends to inflate the width of such
redirected output, way beyond what you
are accustomed to in terminal display.
For narrower output, use the
"-OUTfile=SomeFilename" option on the
dsmadmc invocation. Examples:
'q cont vol27 > temp'
'q cont vol28 >> temp'
Note that you can't redirect output from
an administrative schedule, however.
Ref: Admin Ref
REDuce DB nnn Reduce the amount of space that can be
used in the *SM server database: reduce
the assigned capacity of the database.
Arg "nnn" is the number of megabytes,
which must be in multiples of 4 (MB).
This command may be employed while the
server is "live" with sessions and
processes - but, obviously, lots of
database activity will hinder the
reduction.
For the Reduce to work, the far end of
the database must have at least that
much free, completely unused space.
The Maximum Reduction value reported by
the 'Query DB' command is your limit,
reflecting the number of 4 MB partitions
which have no database pages currently
in them, measured from the end of the
last volume, working backwards toward
the first volume until encountering a MB
which contains data.
(It is common for database volumes to be
fragmented as the database entries
representing file system objects expire,
thus creating "holes" in the continuum.)
Why perform a reduction? One reason is
in having encountered message ANR2434E
when attempting a DELete DBVolume.
Advisory: Reducing the DB can only be
done when logmode is normal. So
temporarily:
Set LOGMode Normal
Then reduce the DB and set the logmode
back to roll-forward:
Set LOGMode Rollforward
Be aware that this will immediately
trigger a full backup of the DB.
See also: DELete DBVolume; EXTend DB
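The Maximum Reduction arithmetic described above amounts to
counting the contiguous empty 4 MB partitions at the end of the
database; a sketch (illustrative only, not TSM source):

```python
# Sketch of how "Maximum Reduction" is derived (illustrative, not TSM
# source): count 4 MB database partitions, working backwards from the
# end of the last volume, until one is found that contains data.

def max_reduction_mb(partitions_have_data, partition_mb=4):
    """partitions_have_data: booleans, one per 4 MB partition, in
    on-disk order. Returns the reducible space in megabytes."""
    empty_at_end = 0
    for has_data in reversed(partitions_have_data):
        if has_data:
            break               # hit a partition holding pages: stop
        empty_at_end += 1
    return empty_at_end * partition_mb

# Fragmentation ("holes") in the middle does not help: only the
# contiguous empty run at the very end is reducible.
parts = [True, False, False, True, False, False, False]
print(max_reduction_mb(parts))   # 12 - three trailing empty partitions
```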
REDuce LOG nnn Reduce the amount of space that can be
used in the *SM server recovery log,
where "nnn" is the number of megabytes,
which must be in multiples of 4.
The amount of reduction possible is
reflected in the "Maximum Reduction"
value from 'Query LOG' output, which in
turn reflects the number of 4 MB
partitions which have no log pages
currently in them.
The LOGMode must be Normal for this
operation to be possible. Perform a
'Set LOGMode Normal' if necessary.
See also: EXTend LOG
RedWood Name for the StorageTek helical tape
cartridge system. Utilizes parallel
1-by-1 CTU design to eliminate
traditional queueing delays. More
than 11 MB/sec head-to-tape data
physical transfer rate. Cartridge
holds 50GB. Unknown is the tape search
speed: helical tape typically sacrifices
such speed for density, and is inferior
to the speed of linear tape technology.
REGBACK NT Registry backup tool (from the
NT Resource Kit).
REGister Admin ADSM server command to define an
administrator to the server.
'REGister Admin Adm_Name Adm_Passwd
[PASSExp=0-9999Days]
[CONtact="Full name, etc...]"
[FORCEPwreset=No|Yes]'
where a PASSExp value of 0 means that
the password never expires.
FORCEPwreset=Yes will induce ANR0425W.
After registering, you need to
'GRant AUTHority'.
REGister LICense TSM server command which enables the
server for a given number of licenses,
per your contract. Creates or updates a
file named "nodelock" in the server
directory. Syntax:
'REGister LICense
HexLicenseNumbers|FILE=_____
Number=NumberOfLicenses'
FILE may specify the files like
"10client.lic" that appear in your
server directory. Or you might directly
enter the hex numbers supplied in the
printed material that came with your
shipment (though it is better to first
enter them into files). You may use
wildcards with FILE to grab all desired
files in the current directory.
Advisory: Assure that the permissions on
the license files prevent unauthorized
people from reading them.
Note that NT deviates in requiring
coding as "FILE(____)".
Note that you must invoke REGister
LICense as many times as it takes to add
up to the total number of licenses you
bought.
It is not necessary to run AUDit
LICenses after REGister LICense.
Note that this command is an interface
to a license manager package (originally
a 3rd party product, but since purchased
by Tivoli) - one which does little
exception handling and/or returns
inadequate information to the TSM server
code. This inadequacy results in the
following observed problems: REGister
LICense will result in no change (and no
error message) if the file system that
the server directory is in is
full. (Message ANR9627E is supposed to
appear if the file system is full.) The
operation can also fail in the same
manner if the server system date is
wacky, or the input license files
specify a different server level.
If the server processor board is
upgraded such that its serial number
changes, the REGister LICense procedure
must be repeated: remove the nodelock
file first.
REGister LICense relies on the
computer's date/time. When registering a
license or restarting the ITSM server
the "LicenseStartDate" is compared to
the computer's date/time.
"LicenseStartDate" is hardcoded in each
of the ITSM server's license files. If
the computer's date/time is set to
before the "LicenseStartDate" that
license will not be registered, and you
can end up with message ANR2841W.
Further, Query LICense will not show
that license registered. (Note, of
course, that LicenseStartDate values may
differ, so you may see mixed results.)
Msgs: ANR2841W
See also: AUDit LICenses; Unregister
licenses
REGister Node ADSM server command to register a node.
Syntax:
'REGister Node NodeName Password
[PASSExp=Expires0-9999Days]
[CONtact=SomeoneToContact]
[DOmain=DomainName]
[COMPression=Client|Yes|No]
[ARCHDELete=Yes|No]
[BACKDELete=No|Yes]
[CLOptset=______]
[FORCEPwreset=No|Yes]
[Type=Client|Server]
[URL=____] [KEEPMP=No|Yes]
[MAXNUMMP=1|UpTo999]
[USerid=<NodeName>|NONE
|SomeName]'
where:
FORCEPwreset Force the next/first usage
to incite changing the password. This
is particularly valuable for the
server administrator to set an initial
password which the client admin can
change to be something known only to
that person.
PASSExp value of 0 means that the
password never expires - unless
overridden by the Set PASSExp value.
COMPression=Yes Requires that the
client compress its files before
sending to the server. Results in the
following scheduler message:
"Data compression forced on by the
server"
URL Specifies the URL address that is
used in your Web browser to administer
the TSM client.
By default, this command automatically
creates an administrative user ID whose
name is the nodename, with client owner
authority over the node. This
administrative user ID may be used to
access the Web backup-archive client
from remote locations through a Web
browser. If an administrative user ID
already exists with the same name as the
node being registered, an administrative
user ID is not automatically defined.
You can suppress creation of such an
administrative user ID via USerid=NONE.
This process also applies if your site
uses open registration.
Be sure to specify the DOmain name you
want, because the default is the
STANDARD domain, which is what IBM
supplied rather than what you set up.
There must be a defined and active
Policy Set.
Note that this is how the client node
gets a default policy domain, default
management class, etc.
Msgs: ANR0422W for when a non-registered
node attempts to use TSM.
Opposite: REMove Node
See also: MAXNUMMP; Password;
Set AUthentication
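A hypothetical registration might look like the following (the
node name, password, domain, and option values are invented for
illustration; substitute your own):

```shell
# Hypothetical example - all names and values are placeholders.
register node payroll01 Tmp4Now domain=UNIXPROD passexp=90 contact="J. Admin, x1234" forcepwreset=yes maxnummp=2
```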
Registered nodes, number 'Query DOmain Format=Detailed'
Registered nodes, query 'Query Node'
Registration The process of identifying a client node
or administrator to the server by
specifying a user ID, password, and
contact information. For client nodes, a
policy domain, compression status, and
deletion privileges are also specified.
See "Open Registration", "Closed
Registration".
Registration, make Closed Can be selected via the command:
'Set REGistration Closed'.
Registration, make Open Can be selected via the command:
'Set REGistration Open'.
Registration, query 'Query STatus' ADSM server command,
look for "Registration:" value (as in
Closed or Open).
Registry (Windows) backup See: Backup Registry, BACKUPRegistry
REGREST Standalone Windows utility to restore
the registry file created with the
Windows BACKUP REGISTRY command.
Provided in the Windows Server Resource
Kit. NTBackup will backup the registry
as part of the System State. REGBACK and
REGREST are Resource Kit utilities to
backup and restore the Registry without
the rest of the System State.
See also: dsmc REStore REgistry
Reinventory complete system A 3494 function invoked from the
Commands menu of the operator station,
to freshly inventory all storage
components. Normally protected with
sysadmin password.
WARNING!!! This function will cause the
category codes of all tapes in the
library to be reset, to Insert!! (The
re-inventory processes cause the
existing library manager volume database
to be deleted, a new database
initialized, and records added for all
the cartridges within the library.)
You should perform this operation only
when first installing the 3494, but
*never* thereafter. If you inadvertently
execute this destructive operation, you
can perform a TSM AUDit LIBRary, which
will fix the category codes.
Contrast with "Inventory Update".
Relabelling a tape... Will destroy ALL data remaining on it,
because a new <eof tape mark> will be
written immediately after the labels.
Release tape drive from host Unix: 'tapeutil -f /dev/rmt? release'
Windows: 'ntutil -t tape_ release'
after having done a "reserve".
Remote Client Agent A.k.a. TSM Remote Client Agent
Windows component of the client as used
by the web client.
See also: Client Acceptor Daemon;
Scheduler
Ref: "Starting the Web Client" in the
Installing the Clients manual
Remote console See: -CONsolemode
Remote Desk Top Connection See: TDP for Domino (TDP Domino),
Terminal Services restriction
Removable volumes, show See: SHow ASACQUIRED
REMove Admin TSM server command to remove an
administrator from the system.
'REMove Admin Adm_Name'
See also: REGister Admin; REName Admin
REMove Node Server command to delete a defined node.
You should have removed all of the
node's filespaces and backup sets prior
to removing the node itself. Syntax:
'REMove Node NodeName'
See also: DELete BACKUPSET;
DELete FIlespace
-REMOVEOPerandlimit TSM 5.2.2 Unix client option to remove
the artificial limit of 20 operands on
the command line of Archive,
Incremental, and Selective commands.
Note that this option must appear on the
command line: it is not valid in an
options file.
REName Admin Server command to rename an
administrator. Syntax:
'REName Admin Old_Adm_Name New_Name'
REName FIlespace Server command to rename a FIlespace.
Syntax:
'REName FIlespace NodeName FSname
Newname'
Note that you can only rename a
filespace within a node: you cannot
rename it so that it is under another
node.
Advisory: Be careful that the new name
does not conflict with an existing host
file system, and particularly if the
file system types differ.
CAUTION: The filespace name you see in
character form in the server may not
accurately reflect reality, in that the
clients may well employ different code
pages (Windows: Unicode) than the
server. The hexadecimal representation
of the name in Query FIlespace is your
ultimate reference.
REName Node Server command to rename a node.
Syntax:
'REName Node <OldName> <NewName>'
The new name must not already exist,
else you get error "ANR2147E RENAME
NODE: Node <NewName> is already
registered."
Notes: The node's filespaces are, of
course, brought along to be under the
new name.
REName STGpool ADSMv3 server command to rename a
storage pool. Syntax:
'REName STGpool PoolName NewName'
REPAIR STGVOL Special command, to be used under the
instructions of TSM Support, to repair
TSM database issues relating to storage
pool problems from various causes,
including from storage pool simultaneous
write (COPYSTGPOOL=), as described in
APAR IC37275, involving extraneous rows
in the DS.Segments table or the
AS.Segments table.
See also: ANR0102E
Note that repair tools are not
rigorously developed, and may have
problems, as a search of the IBM site
reveals; hence the importance of running
such only under IBM supervision.
REPlace (-REPlace=) Client User Options file (dsm.opt) or
(REPlace=No) 'dsmc' command option to specify
handling when a file to be Restored or
Retrieved already exists at the client
location. Choices:
Prompt for choice of overwriting;
All to overwrite any existing
files, including those
read-only
Yes to overwrite any existing
files, except read-only files
No do not overwrite any existing
files, as when restarting an
interrupted restoral.
(Expect to see msgs like
"File ____ exists, skipping",
which reflects the server
having gone through the effort
to retrieve the file and send
it to the client, only to have
it be skipped by the client.)
No-replace is based solely on
the file name: the relative
content of the file, its size,
and timestamps are not factors.
Command line example: -REPlace=Yes
If the file system is to be NFS-served,
"Prompt" should not be in effect because
the NFS client won't get the prompt.
See also: IFNewer
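The REPlace choices above boil down to a simple decision, which
can be sketched as follows (illustrative; actual client behavior
may differ in detail, and 'Prompt' would ask the user instead):

```python
# Sketch of the REPlace option's overwrite decision, per the
# description above (illustrative only, not TSM client code).

def should_overwrite(replace, file_exists, read_only):
    """replace: 'All' | 'Yes' | 'No'. Decision is by file name
    only: relative content, size, and timestamps are not factors."""
    if not file_exists:
        return True                      # nothing in the way
    if replace == "All":
        return True                      # even read-only files
    if replace == "Yes":
        return not read_only             # spare read-only files
    return False                         # 'No': never overwrite

print(should_overwrite("Yes", file_exists=True, read_only=True))   # False
print(should_overwrite("All", file_exists=True, read_only=True))   # True
print(should_overwrite("No",  file_exists=True, read_only=False))  # False
```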
Report width See: -COMMAdelimited; -DISPLaymode;
SELECT output, column width;
Set SQLDISPlaymode; -TABdelimited
Reporting products (reports) See: TSM monitoring products
REQSYSauthoutfile ADSM server option, as of 199908, to
provide additional control related to
the administrative authority required to
issue selected commands that cause the
ADSM server to write information to an
external file. Choices:
Yes Specifies that system authority is
required for administrative
commands that cause the server to
write to an external file:
- MOVE and QUERY DRMEDIA when CMD
specified;
- MOVE and QUERY MEDIA when CMD
specified;
- BACKUP VOLHISTORY when FILENAMES
specified;
- BACKUP DEVCONFIG when the
FILENAMES specified;
- TRACE BEGIN when a file name is
specified;
- QUERY SCRIPT when OUTPUTFILE
specified.
Yes is the default.
No Specifies that system authority is
not required for administrative
commands that cause the server to
write to an external file (i.e.,
there is no change to the privilege
class required to execute the
command).
Reserve A special device command to retain
control of a tape drive or the like in
an environment where the drive is shared
by multiple hosts, over multiple
open-close processing sequences.
In AIX, this is accomplished at the
driver level by issuing an ioctl() to
perform an SIOC_RESERVE command.
Msgs: ANR8376I
Reserve tape drive from host Unix: 'tapeutil -f /dev/rmt? reserve'
Windows: 'ntutil -t tape_ reserve'
When done, release the drive:
Unix: 'tapeutil -f /dev/rmt? release'
Windows: 'ntutil -t tape_ release'
Reserve/Release A facility available via the 3590 device
driver whereby an accessing system can
dedicate (reserve) a tape drive to
itself for the duration of processing a
tape, and thereafter release it. In
this way all the drives in a 3494 may be
serially shared by all the RS/6000s
which access the 3494.
Ref: 3494/3590 device drivers manual
discussion of SIOC_RESERVE and
SIOC_RELEASE.
RESet BUFPool Server command to reset the database
buffer pool statistics, as reported by
'Query DB Format=Detailed'.
Do this after changing BUFPoolsize.
RESet DBMaxutilization Server command to reset the maximum
utilization statistic (Max. Pct Util)
for the database, as reported from
'Query DB'.
RESet LOGConsumption Server command to reset the statistic on
the amount of recovery log space that
has been consumed since the last reset,
as shows up in a 'Query LOG
Format=Detailed' report.
RESet LOGMaxutilization Server command to reset the max
utilization statistic (Max. Pct Util)
for the recovery log, as seen in
'Query LOG'.
RESETARCHIVEATTRibute TSM 5.2 Windows client option to allow
resetting the Windows archive attribute
for files during a backup operation.
Specify Yes or No.
Default: No, do not reset the Windows
archive attribute for files during a
backup operation.
resident file A file that resides on a local file
system. It has not been migrated or
premigrated, or it has been recalled
from ADSM storage and modified. When
first created, all files are
resident. Contrast with premigrated file
and migrated file.
RESOURCETimeout TSM 4.2+ server option to specify how
long the server waits for a resource
before cancelling the acquisition of a
resource. At timeout, the request for
the resource will be cancelled, with msg
ANR0530W. See also msg ANR0538I.
Specify: 1 - N (minutes)
Default: 10 in TSM 4.2; 60 in 5.1 (per
APAR PQ56967).
RESOURceutilization [1-10] TSM 3.7+ client system options file
(dsm.sys) option to regulate the level
of resources the TSM server and client
can use during Multi-Session Backup and
Archive processing, extended in TSM 5.1
to cover Multi-Session Restore. Specifies
the number of sessions opened between
the TSM server and client.
Code: 1 - 10. Default: 2
With a value of 2, one Producer
(control) session is used for querying
the TSM server and reporting final
results to the TSM server, and one
Consumer (data) session is used for
transferring file data.
With a value of 1, you get a single,
combined Producer+Consumer session. In
IBM parlance, this prevents "thread
switching".
With numbers higher than two you may get
some multiple combination: with 5 there
may be 2 Producer sessions and 3
Consumer sessions.
Each Consumer session results in its own
entry in the accounting log and summary
table, as reported by the associated
Producer session.
Note that IDLETimeout still pertains:
if the IDLETimeout limit is reached
before the 2nd session has finished
backing up the filespace, the
'communication' session (1st session) is
terminated and any additional file
systems are not backed up, and/or the
summary statistics are not transmitted.
For example, a setting of
"RESOURceutilization 1" uses less system
resources than a setting of
"RESOURceutilization 10".
The RESOURceutilization should not
exceed MAXNUMMP.
Notes: the full exploitation of multiple
sessions is possible only if you have
both TSM 3.7 client AND server.
RESOURceutilization is not available in
the API: the option is used to funnel
data at the file level, and the TSM API
does not perform any file I/O.
Ref: TSM 3.7 Technical Guide redbook
TSM 5.1 Technical Guide redbook
RESTArt Restore ADSM v.3 client command to restart a
restoral from where it left off, as when
the restoral was interrupted. This is
available in restorals in which ADSM is
keeping track of the files involved in
the restoral (see "No Query Restore").
Note that you *must* either restart an
interrupted restoral, or perform a
CANcel Restore, else further backups and
restorals of the filespace are
inhibited.
See also: RESTOREINTERVAL
Restartable Restore ADSMv3+ facility (RR), to
prevent having to start over when a
restoral was interrupted, as by a data
communications (network) problem or a
media (disk, tape) or file problem. Is
an extension of No Query Restore (NQR)
in that the server, rather than the
client, is maintaining the list of files
involved in the restoral, thus
facilitating restart after client
session demise. NQR does the sorting of
client files on the server machine and
thus can keep a record of the list of
files to restore and which ones have
already been restored. RR cannot prevail
where NQR cannot be used, as in the use
of any of the following options (or
their GUI equivalents):
-latest -inactive -pick
-fromdate or -todate
-fromtime or -totime
Falls under the more general category
Fault Tolerance.
Note that having a Restartable Restore
pending blocks that filespace from any
other action (backup, reclamation,
BAckup STGpool, etc.) until the restore
is finished: the filespace is locked.
RR state is preserved in the *SM
database and thus prevails across *SM
server restart.
Removal: The RR state is normally
removed, and the filespace unlocked, by:
- Successful conclusion of the restore.
- Cancellation.
The RR state is also removed by some
server processes (especially,
Expiration) after the RESTOREINTERVAL
has elapsed. Server data movement
operations such as storage pool
migration, reclamation processes,
expiration processing, and MOVE DATA
commands remove the restartable restore
state from the ADSM database when they
run.
Ref: ADSM v3 Technical Guide redbook
See also: dsmc Query RESTore;
Expiration; Query RESTore; RESTArt
Restore; RESTOREINTERVAL
Restoral, tapes needed See: Restoral preview
Restoral performance Overall, consider that restoral
(slow restoral) performance is inherently limited by the
choices you made in configuring your TSM
backup scheme. Further, the manner in
which you request TSM to perform the
restoral can have a dramatic impact upon
performance. Consider also that
establishing a file in a file system
takes considerably more time than simply
reading an established one, as during
backup. Detailed factors:
- A restoral via command line invocation
(CLI) runs faster than a restoral
invoked via the GUI. (See: GUI client)
- In a Unix or like environment where a
shell will expand exposed wildcards,
prevent that from happening: let TSM
expand wildcards, and thus figure out
the best order in which to restore
objects. This helps minimize tape
mounts and rewinding. Likewise, use a
single restoral operation to restore
as many objects as possible, rather
than multiple commands.
- Avoid use of the client COMPRESSIon
option for backups, as the client will
have to uncompress every file being
restored!
- A file system that does compression
(e.g., NTFS) will prolong the job.
- Restoring to a file system which is
networked to this client system rather
that native to it (e.g., NFS, AFS)
will naturally be relatively slow.
- Use Collocation...to the extent that
you can afford it in Backups.
Collocation by FILespace will optimize
restorals but cost a lot in tapes and
tape mount time.
- Beyond Collocation: have your storage
pools defined so that Archive, Backup,
HSM each have their own primary
storage pools, to keep them separate.
Intermingling will cause Backup data
to get spread out and thus prolong
Restorals.
- Consider using MAXNUMMP to increase
the number of drives you may
simultaneously use.
- In Unix clients where sparse files are
rarely restored, consider adding
MAKesparsefile NO to dsm.opt.
See: Sparse files, handling of
- Use the Quiet (-Quiet) option to
eliminate the overhead of formulating
and writing progress messages.
- ADSMv3 Small File Aggregation helps
speed restorals.
- Perform full backups periodically to
create a complete, contiguous image of
the filespace. See: Backup, full
- Planning your storage pool hierarchy
can make restorals a lot faster by
keeping newer (more likely Active)
data in an upper level storage pool
and migrating older (more likely
Inactive) data to a lower level
storage pool via the MIGDelay control.
- Employ two different node names and
management classes for the same client
so as to have a storage pool for only
Active data as well as a more
conventional storage pool of Active
and Inactive data. See IBM site
Technote 1148497.
- ADSMv3 "No Query Restore" speeds
restorals by eliminating the
preliminary step of the server having
to send the full repertoire of file
objects it has for the client, and the
need for the client to traverse the
list if it already knows what needs to
be restored. (But note: There have
been performance problems with No
Query Restore itself. IBM created the
DISABLENQR client trace option to
compensate. See notes at end of this
file.)
- If your operating system has data-rich
directories such that they cannot be
contained within the *SM database (as
they can with most Unix systems),
consider using DIRMc to keep them in a
disk storage pool, to eliminate tape
operations in the initial, directories
portion of a restoral.
- Minimize other server activity during
the restoral period. Suppress some
administrative schedules, which could
interfere with resources available to
the restore. (In particular, note that
'BAckup DB' can pre-empt other
processes when it needs tape drives.)
- Maximize your buffer sizes; but watch
out for performance penalty at certain
TCPBufsize sizes (q.v.).
- Minimize your MOUNTRetention value for
the duration of the restoral so as to
avoid a new tape mount having to wait
for a lingering tape to be dismounted
from that drive. (Note that TSM does
not call for a next mount as it's
finishing work on the current tape, so
there is always wasted time waiting
the next mount.)
- May be waiting for mount points on the
server. Do 'Query SEssion F=D'.
- Automatic tape drive cleaning and
retries on a dirty drive will slow
down the action in a very
unaccountable way.
- Tapes written years ago, or tapes
whose media is marginal, may be tough
for the tape drive to read, and the
drive may linger on a tape block for
some time, laboring to read it - and
may not give any indication to the
operating system that it had to
undertake this extra effort and time.
- Tape/drive difficulties during Backup
cause TSM to continue the Backup on
another tape, which results in spread
data. Later returning the problem tape
to read-write state for further backup
use unfortunately further spreads the
data.
- Make sure that if you activated client
tracing in the past that you did not
leave it active, as its overhead will
dramatically slow client performance.
- If CRC data is associated with the
storage pool data, the CRC is
validated during the restoral, which
adds some time.
- Unix: Consider disabling sync for that
file system for the duration of the
restoral. There is also the public
domain 'fastfs' program for Solaris
systems, to speed restorals through
use of delayed I/O.
- Restoral works by reconstructing the
file system directory structure first.
The directories for many operating
systems reside in the *SM database
itself; but if yours goes to a storage
pool, make that storage pool disk-based
(as via the DIRMc option).
- When restoring a single file, DO NOT
use -SUbdir=Yes, because it may cause
the directory tree to be restored (see
APAR IC21360).
- In Novell Netware: Try boosting the
PROCESSORutilization value.
- Is your tape drive technology fast in
real-world start-stop processing, as
opposed to streaming? That's what's
involved in restoring smaller files
distributed over a tape, with the
positioning required. (DLT has been
distinguished by poor start-stop
performance.)
- Tape length: Longer tapes are nice for
increased data storage, but obviously
make for longer positioning times.
- If using ethernet (particularly
100 Mb), make sure your adapter cards
are not set for Auto Negotiation. See
the topic "NETWORK PERFORMANCE
(ETHERNET PERFORMANCE)" near the
bottom of this document.
- Beware the invisible: networking
administrators may have changed the
"quality of service" rating - perhaps
per your predecessor - so that *SM
traffic has reduced priority on that
network link.
- If using MVS, be aware that its TCP/IP
has a history of inferior performance,
partly because it is an adjunct to the
operating system, rather than built
in.
- Make sure there is no virus-scanning
software running: it will take time to
examine every incoming file!
- If you have multiple tape drives on
one SCSI chain, consider dedicating
one host adapter card to each drive in
order to maximize performance.
- If you mix SCSI device types on a
single SCSI chain, you may be limiting
your fastest device to the speed of
the slowest device. For example,
putting a single-ended device on a
SCSI chain with a differential device
will cause the chain speed to drop to
that of the single-ended device.
- If using a database TDP, your host
configuration may be self-defeating: a
single drive containing your
transaction log and trying to satisfy
the current running server log entries
and trying to restore and replay the
old transaction log entries is one
very busy drive, with much arm
movement trying to satisfy all
demands. In any database scenario,
distributing I/O demands makes for
much better performance.
- Restorals of TDP for MSSQL (q.v.) may
take a long time because the database
"container" has to be recreated
(formatted) before the restoral of
content can occur.
- Depending upon the nature of the
restoral and storage pool collocation
you may be able to invoke multiple
'dsmc RESTore' commands to parallelize
the task, wihout running into volume
contention in the TSM server.
- A primary storage pool volume needed
for the restoral is marked as being
present in the library, but is not,
and a MOUNTWait timeout has to occur
before the restoral process goes on to
mount a copy storage pool volume
instead.
- With a JFS file system (e.g., in AIX),
a jfslog which is at the edge of the
volume rather than in the middle will
reduce performance.
- The v5 client provides the option of
multiple restore streams.
- If using an IBM ESS 2105 (Shark),
avoid using AIX LVM striping: the ESS
stripes write operations internally,
and redundantly striping with AIX will
increase the number of write I/O
operations, which can negatively
affect performance.
See also: Backup performance; Client
performance factors; Restore Order;
Server performance
For additional info, search the APAR
database for "adsm restore performance".
Restoral preview You may be disappointed to find that
there is no restoral preview in the
product - an option you may see for
restoral planning: you embark upon
restorals with no fore-awareness of the
number of tapes or which volumes will be
involved. This seeming shortcoming
derives from the file-oriented
philosophy of the product - that you
should not be concerned about where
files are on their storage media. You
might think that this would have been in
the earlier incarnations of the product,
in the days before automatic tape
libraries, when operators had to respond
to tape mount requests; but it didn't
get implemented then. Now that TSM is an
enterprise type product, the presumption
is that you would by definition always
have all needed tapes available in your
library anyway.
A Preview capability would tell us:
- What volumes would be required;
- If all the volumes are available
(onsite, offsite, volumes Unavailable,
files Damaged, etc.);
- If sufficient drives are available,
and how many would be used;
- The amount of data that will be
restored.
In the absence of a restoral preview
capability in the product, there are no
good alternatives. Some will advise
getting a list of volumes from the
Volumeusage table (via Select, or SHow
VOLUMEUsage), but that's a false
recommendation in that the list will be
that of all primary storage pool volumes
in use by the node - not just those
which a restoral will need. Select
queries in the server, to identify the
tapes containing files to be restored,
are prohibitively time-consuming in the
Contents table (far slower than the
client itself can obtain the info); and
doing a dummy restoral to a trash area
to identify the tapes is wasteful, and
not possible if the volumes are offsite
- which is why you wanted the preview in
the first place.
One form of "preview" that you can do is
to put the same source filespec into a
'dsmc query backup ...' which you intend
to put into the 'dsmc restore ...': that
will display the files which will be
restored, and is particularly valuable
where you are having TSM expand
wildcards.
Restoral timestamps, Unix The product reinstates the original
atime and mtime as they were at the time
of backup. In doing so, the ctime
(inode admin change time) is necessarily
changed to the restoral time, which is
typically fine as ctime is of no
consequence except in security
investigations.
Note that the product backs up files if
they are changed; so if you read a file
after the backup, it will not be backed
up again because its mtime remains
unchanged, though the atime value is
changed by the reading. A restoral in
effect resets the atime value.
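The mtime/atime behavior described above can be seen with plain Unix
tools, outside of TSM. This minimal sketch (an assumption: it uses GNU
coreutils 'stat -c %Y'; BSD stat differs) shows that reading a file
leaves mtime - the value incremental backup compares - untouched:

```shell
# Hypothetical demo, no TSM involved: reading a file may update atime
# but never mtime, which is why a mere read does not trigger re-backup.
cd "$(mktemp -d)"
printf 'payload' > f
before=$(stat -c %Y f)   # mtime in seconds since the epoch (GNU stat)
sleep 1
cat f > /dev/null        # read the file; atime may change, mtime does not
after=$(stat -c %Y f)
[ "$before" = "$after" ] && echo "mtime unchanged by the read"
```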
Restoral tips, NT There are some basic rules when trying
to restore directories and files to an
NT system, and this specifically for
permissions.
1. File Permissions are ALWAYS restored
2. Directory permissions are restored
when the original directory still
exists
3. Directory permissions are only
restored on non-existing directories
if the command line interface is
used, together with the -SUbdir=Yes
option.
4. Restoring files to a temporary
destination and then moving them will
only keep the permissions when moved
on the same logical drive. (NT rule)
When you share a directory the sharing
information is not written to the
shared directory. So when you restore
the directory, it won't get shared
automatically.
Restoral volumes, determine See: Restoral preview
Restorals, prevent The product does not provide a way to
disallow restorals, given that the
ability to recover data is a
fundamental requirement of the product.
However, one way to achieve it is to
have backups performed only via client
schedules, with SCHEDMODe PRompted, and
do UPDate Node ___
SESSIONINITiation=SERVEROnly.
See also: Archives, prohibit; Backups,
prevent
Restore The process of copying a backup version
of a file from ADSM storage to a local
file system. You can restore a file to
its original location or a new location.
The backup copy in the storage pool is
not affected.
Priority: Lower than Restore.
ADSMv2 Restore works as follows...
Phase 1: Get info from the server about
all filespace files which qualify for
the restoral;
Phase 2: Create those file system
objects involving descriptions rather
than data...
Directories are restored first,
directly from the ADSM database info
about the directory.
If the directory exists, it is not
restored - the existing directory is
used.
If the directory does not exist:
For the command line client: the
directory is restored with backed up
attributes if SUbdir=Yes.
For the GUI:
Restore by Subdirectory Branch: the
directory is created and restored
with backed up attributes.
Restore by File Specification/Restore
by Tree: the directory is created
with default directory attributes.
(Note that directory reconstruction
occurs WITHOUT a session with the
server!)
Empty (zero-length) files are restored
after directories and before any files
containing data...
Phase 3: Restore data-laden files...
Files are restored with their backed
up permissions when REPlace=Yes, all.
If REPlace=No, *SM does not restore
the existing files.
Option Verbose shows name and size
information for files backed up and
restored, not permission information.
ADSMv3 Restore works as documented in
the B/A Client manual, under
"No Query Restore".
When a restore is running, a 'Query
Mount' will show the tape mounted R/O.
Note that restoral will by necessity
change directory and symbolic link dates
as it reestablishes them; and symbolic
links may be created under "root" rather
than their original creator if the
operating system lacks the lchown()
system call.
Unicode note: The server allows only
a Unicode-enabled client to restore
files from a Unicode-enabled file
space.
WARNING: When a Restore is occurring,
prevent new backup processes from
running, which could create new backup
file versions that could conflict with
and screw up the restoral. (See:
Backups, prevent.)
Contrast with Backup, Retrieve, Recall.
See also: dsmc REStore
RESTORE Server database SQL table involved in
Restartable Restore processing.
See also: RESTOREINTERVAL
Restore, client no longer exists Sometimes the client system that had
backed up data has disappeared, but the
enterprise wants to restore some data
that had been on it.
Refer to: Backup-Archive Clients manual,
"Restore: Advanced Considerations"
Restore, handling of existent file Use the Client User Options file
file on client (dsm.opt) option REPlace to specify
handling.
Restore, number of tape drives used The manuals are unspecific about this,
but TSM uses one tape drive per client
session in performing restorals. The
most said about this is in the
Performing Large Restore Operations
topic of the client Backup-Archive
manual, which advocates starting
multiple restore commands to use
multiple tape drives - but does not say
that only one tape drive will be used if
only one command is issued. Note,
however, that having multiple drives
will not be productive if the data
needed is on a single tape, as there is
no tape sharing.
See also: MAXNUMMP, as it affects the
number of drives the client can use;
KEEPMP for keeping the mount point
through the session.
Restore, tape mounted multiple times Though TSM in most cases mounts tapes
only once during a restoral, there may
be occasions where you see it mounting a
tape more than once. This has been
observed where files span volumes: the
tape from which a file spans is mounted
to get the first part of the file, then
the tape containing the rest of the file
is mounted, plus other files. But TSM
may need to go back to that first tape
for other files.
Restore, using "GUI" Users with Xterminals can simply use the
'dsm' command and be presented with a
nice graphical interface. Beware that
the final report will not reveal the
elapsed time.
(Users with dumb tty terminals can
have a similar capability via the
"-pick" option, which presents a list,
as in:
'dsmc restore -pick /home -SUbdir=Yes')
See also: -PIck
Restore, volumes needed See: Restoral preview
Restore across architectural Cross platform restores only work on
platforms those platforms that understand the
other's file systems, such as among
Windows, DOS, NT, and OS/2; or among
AIX, IRIX, and Solaris (the "slash" and
"backslash" camps). For cross-platform
restores to be possible, the respective
clients would both have to support the
same file system type, meaning both that
the client software was programmed to do
so and that it was formally documented
that it really could do so, in the
client manual. Simply look in the Unix
Client manual, under "File system and
ACL support" vs. the Windows Client
under "Performing an incremental,
selective, or incremental-by-date
backup".
See also: Platform; Query Backup across
architectural platforms
Restore across clients (nodes) You can restore files across clients if
(cross-node restoral) you know the proper client password, and
in invocation of the restoral command
you use option -VIRTUALNodename in Unix,
or -NODename in Netware and Windows.
That is, files belonging to client
C_owner can be accessed from client
C_other if you invoke the TSM client
program (dsm or dsmc) from client
C_other and know client C_owner's
password. Sample CLI session, as
invoked on client C_other to access
C_owner files:
'dsmc restore -NODename=C_owner
-PASsword=xxx ...'
or use the GUI from client C_other as:
'dsm -NODename=C_owner'
and more securely supply that client
password at the prompt.
This technique is a way for root to get
files across systems, and operates upon
all files - root's as well as those of
all other users. Note that a 'Query
SEssion' in the server shows the session
active for the node specified by
-NODename, rather than the actual
identity of the client.
Requirements: The source and destination
file system architectures must be
equivalent, and the level of the
restoring client software must be at
least the same level as the software on
the client which did the backup.
Ref: Backup-Archive Clients manual,
"Restore or Retrieve Files to Another
Workstation"
See also: NODename; VIRTUALNodename
Restore across nodes See: Restore across clients
Restore across servers You can restore files across servers if
you know the proper client password.
That is, for client C1 whose natural
files are on server S1, you can instead
go after files stored by client C2 on
server S2 if you know that other
client's password and redirect to that
server. Sample syntax:
'dsmc restore -server=S2
-NODename=C2 -PASsword=xxx'
or use the GUI as:
'dsmc -server=S2 -NODename=C2'
and more securely supply that client
password at the prompt.
This technique is a way for root to get
files across systems and clusters, and
operates upon all files - root's as well
as those of all other users.
Note: The other server must be defined
in the Client System Options file
(/usr/lpp/adsm/bin/dsm.sys).
Restore and management class When a Backup is done on a file, you can
employ any of a number of management
classes to accomplish it. Thereafter,
you can see the management class used for
that backup when you either do a
'dsmc q backup' or use the GUI. The
management class reflected in a restoral
is, like file size, informational only;
unlike the date, it cannot be used as a
restoral selection criterion.
RESTORE DB See: DSMSERV RESTORE DB
Restore directly from Copy Storage See: Copy Storage Pool, restore files
Pool directly from
Restore empty directories To ensure that you can restore empty
directories, you must back them up at
least once with an incremental backup.
Also, ADSM restores empty directories
when you use the subdirectory path
method. You should also note that if a
directory and its contents are deleted,
and you use ADSM to restore the
directory and data, all associated ACPs
will be restored. If the contents of a
directory are deleted but the directory
is not, and ADSM is used to recover the
data, all ACPs associated with the data
will be recovered, but the ACPs
associated with the directory will not
be recovered. Directory ACPs are
recovered only when a directory is newly
created during restore from the ADSM
backup copy.
Do 'dsmc Query Backup * -dirs -sub=yes'
on the client to find the empties, or
choose Directory Tree under 'dsm'.
Example: Restore the empty directory
/home/joe/empty-dir:
'dsmc restore -dir /home/joe/empty-dir'
It will yield message "ANS4302E No
objects on server match query", but will
nevertheless restore the empty
directory.
Restore failing on "file not found" A way around it is to create a file by
problem that name, do a selective backup to
fulfill its existence, and then retry
the full restore.
Restore fails in Netware on long file See: Long filenames in Netware restorals
name
Restore Order (Restoral Order) From APAR IC24321: ADSM V3 CLIENTS
ALWAYS RESTORE OR RETRIEVE DIRECTORIES
EVEN WHEN PARMS SUCH AS REPLACE=NO OR
-IFNEWER ARE USED (1999/07).
"During ADSM restore and/or retrieve
processing the objects being
restored/retrieved are being returned
from the server to the client in
"restore order". This concept of
"restore order" is that the objects are
returned in the order on which they
appear on the given media. This avoids
restore/retrieve performance issues of
sequential volume "thrashing"
(positioning back and forth on a
sequential volume) and multiple mounts
of the same sequential media. The
"restore order" considers where objects
exist on sequential media and brings
them back in this order so that the
media can be moved from beginning to
end. One of the side effects of this
type of processing involves the
restore/retrieve of directories. When a
file needs to be restored/retrieved into
a directory that does not exist yet
(because its restore order is down
further) the ADSM client must build a
skeleton [surrogate] directory to place
this file under. When the client then
encounters the directory in the restore
order it will overwrite this skeleton it
originally put down. At this time the
ADSM client is not designed to track
which directories it lays down as
skeletons and which were already there.
This means that the client
restore/retrieves directories whenever
it encounters them within the restore
order. This is true regardless of
REPlace=No being specified. Or
regardless of -ifnewer being used and
the directory being restored being
older. The ADSM client needs a design
change in this area to track which
directories it puts down as skeletons
and which it does not. It needs to only
restore those where it put down the
skeleton. The requirement to not
replace existing directories when
-REPlace=No is in effect involves a
design change in ADSM restore/retrieve
processing that is beyond the scope of a
PTF fix. However, ADSM Development
agrees with the need for this
requirement, and has accepted it for
implementation in a future version of
the product."
MY NOTE: Clients like AIX which have
simple directory structures have their
directories in the *SM database rather
than storage pools, and so they would
not be on sequential media and hence
would be immune to this problem.
Restore performance See: Restoral performance
Restore runs out of disk space? If it looks like there is sufficient
file system space and yet this occurs,
it's likely that files are being
restored for a user whose disk quota
is being exceeded.
RESTORE STGpool *SM server command to restore files
from one or more copy storage pools to
a primary storage pool. Syntax:
'RESTORE STGpool PrimaryPool
[COPYstgpool=PoolName]
[NEWstgpool=NewPrimaryPool]
[MAXPRocess=1|N]
[Preview=No|Yes]
[Wait=No|Yes]'
Attempts to minimize tape mounts and
positioning for the Copy Storage pool
volumes from which files are restored.
Depending on how scattered these files
are in your Copy Storage pool, quite a
bit of CPU and database activity may be
required to locate the necessary files
and to restore them in the optimal
order. File aggregation in ADSM V.3
should help significantly.
RESTORE STGpool vs. RESTORE Volume The Restore Stgpool and Restore Volume
commands are very closely related.
Under the covers, most of the code is
the same. The major differences are:
- Restore Stgpool restores primary
files that have previously been
marked as damaged because of a
detected data-integrity error. This
is done regardless of whether the
volume has been designated as
destroyed.
- Restore Volume allows you to specify
the volume name(s) rather than using
UPDate Volume to designate the
destroyed volume(s).
For restoring a small number of volumes,
the Restore Volume is more convenient,
particularly if you are not interested
in restoring damaged files on other
volumes. For restoring damaged files
or a large number of destroyed volumes,
Restore Stgpool is preferable.
Restore to different node See: Restore across clients
Restore to tape, not disk The Restore function wants to write the
subject file to disk (which is cheap and
capacious these days). But sometimes you
simply don't have enough disk space to
accommodate standard retrieval of very
large files. Here is a Unix technique
for instead restoring the files, one at
a time, and putting each directly to
tape:
In one window, do:
mkfifo fifo; # Create Named Pipe,
# called "fifo".
dd if=fifo of=/dev/rmt1 # Tape drive
# of your choice, tape in it.
In another window, do:
dsmc restore -REPlace=Yes
SubjectFilename fifo
This will restore the desired backup
file and, instead of restoring it to
its natural name, will direct it to
"fifo". The "-REPlace=Yes" will quell
the restore's fear of replacing the
file which, as a FIFO type special file,
will instead result in the data being
sent to whatever is reading the named
pipe, which in this case is the 'dd'
command, which passes it to tape. When
the restoral ends, the 'dd' command
will end and the file's data will be on
that tape. Record on the tape's
external label the identity of the data
written to the tape. To later extract
the data from the tape, again use the
'dd' command, specifying the chosen tape
drive via "if" and an output file via
"of". Whereas this is plain data on a
non-labeled tape, an operating system
other than Unix should be able to get
the data from the tape just as easily.
Note that the inverse is not possible:
you cannot have a FIFO as input to a
dsmc backup operation. (TSM will detect
the named object as being a special file
and back it up as such, which is to say
send its description to the server,
rather than try to read it as a file.)
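The two-window pipeline above can be tried without a TSM server or a
tape drive. In this sketch, both substitutions are stand-ins of my own:
'cat src > fifo' plays the part of 'dsmc restore ... fifo', and the
ordinary file out.img plays the part of the tape device that
'dd of=/dev/rmt1' would write:

```shell
# Demo of the named-pipe technique with stand-ins (no dsmc, no tape):
# the reader must be started first, just as in the two-window procedure.
cd "$(mktemp -d)"
printf 'restored file contents' > src
mkfifo fifo                          # create the named pipe
dd if=fifo of=out.img 2>/dev/null &  # reader end: the "tape" write
cat src > fifo                       # writer end: the "restoral"
wait                                 # let dd drain the pipe and exit
cmp -s src out.img && echo "data arrived intact"
```

Because a FIFO has no storage of its own, the reader and writer must
overlap in time, which is why the procedure uses two windows.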
RESTORE Volume Server command to recover a primary
storage pool volume (disk or tape) from
data backed up to the copy storage pool,
by restoring the data to one or more
other volumes in the same (or
designated) storage pool. At the
beginning of the operation the Access
Mode of the volume is changed to
DEStroyed. When restoration is complete,
the destroyed volume is logically empty
and so is automatically deleted from the
database and given Status Scratch.
'RESTORE Volume VolName(s)
[COPYstgpool=CopyPool]
[NEWstgpool=NewPoolName]
[MAXPRocess=1|N]
[Preview=No|Yes]
[Wait=No|Yes]'
'RESTORE Volume VolName Preview=Yes'
will give you (among other information)
a list of copy storage pool volumes
needed to restore your primary volume.
(Note: If you perform the Preview when
expirations and reclamations are
running, the volumes can change.)
As the invoked restore proceeds,
performing successive Query Volume
commands on the bad volume will show it
progressively emptying.
The operation attempts to minimize tape
mounts and positioning for the Copy
Storage pool volumes from which files
are restored by first assembling a list
of restoral files by volume. Depending
on how scattered these files are in your
Copy Storage pool, quite a bit of
database activity may be required to
locate the necessary files and then
restore them in the optimal order, so
you can expect the restoral to take
hours!! (File aggregation helps.)
Primary Storage Pools are often
collocated whereas it is impractical to
collocate Copy Storage Pools (because of
the very many mounts that would be
required in a BAckup STGpool operation).
Because of the collocation incongruity,
the files needed to restore a volume
will inevitably be spread over many copy
storage pool volumes, making for a lot
of mounts. (And if the client/filesystem
involved only backs up a small amount of
data per day, you will find the data
spread over a VERY large number of Copy
Storage Pool tapes, dwarfed by data from
much more active clients/filesystems.)
Because of this, it is of great
advantage to first perform a Move Data
to get as much viable data as possible
off the volume before invoking the
Restore Volume.
The restore may request an offsite
volume, as seen in Query REQuest. If you
CANcel REQuest on that, the restore will
continue, not stop - and it may realize
that calling for the offsite volume was
unnecessary, and proceed with an onsite
copy storage pool volume instead. But
instead it may end "successfully" though
the data represented on those offsite
tapes was not restored. Repeat the
Restore Volume to use onsite tapes to
complete it.
Note that an interrupted Restore can be
reinvoked to continue where it left off.
You can gauge the progress of the
recovery by doing 'Query Volume' on the
subject volume, whose Pct Util will
approach zero as its contents are
recovered to other volume(s). Likewise,
'Query CONtent' will show the contents
of the volume dwindling as the restore
proceeds. And, obviously, Query ACtlog
can be done to follow progress.
Msgs: ANR2114I, ANR2110I
See also: Collocation and RESTORE Volume
RESTOREINTERVAL ADSMv3+ server option specifying how
long a restartable restore can be saved
in the *SM server database.
"RESTOREINTERVAL n_minutes"
where the value can be 0-10080 minutes
(maximum = 1 week). Default: 1440 (1
day).
See also: dsmc Query RESTore;
Expiration; Restartable Restore;
RESTORE; SETOPT
RESToremigstate (-RESToremigstate=) Client User Options file (dsm.opt)
option and dsmc option to specify
whether restorals of HSM-migrated files
should return just the stub files (Yes),
thus restoring them to their migrated
state; or to fully restore the files to
the local file system in resident state
(No). Default: Yes
Files with ACLs are always fully
restored!
Typically used on restoral command...
'dsmc restore -RESToremigstate=Yes
-SUbdir=Yes /FileSystem'
The restoral will report the full size
of the file being restored; but no
volume mount is needed to accomplish it,
the statistics show 0 bytes transferred,
and a dsmls afterward will show only the
stub file (511 bytes).
You should always explicitly specify
-RESToremigstate=___ on the command
line, because if you don't and it is
coded in your options file contrary to
what you intend, you will get perplexing
results.
Realize that Yes can only work if the
file had been migrated and *then* backed
up, for the stub to have been created
and backed up. A file which has not been
migrated obviously does not have a stub
file: Backup will back up the file in
the same way as for a non-HSM file
system. And, naturally, small files
(less than or equal to the stub size)
cannot participate in migration and must
be physically restored.
It is important to understand that Yes
only causes the TSM record portion of
the stub files (first 511 bytes) to be
reinstated: it does not reinstate either
the Leader Data within the stub file,
nor the file data in the HSM storage
pool, and so is no good for restoring
HSM files across TSM servers. Moreover,
the stub file is *recreated*, but not
*restored*, which is to say that it ends
up with the default attributes for HSM
files: any pre-existing attributes you
may have specially set
(migrate-on-close, read-without-recall)
are lost. Specifying No causes a full
restoral to occur, which actually
restores the stub and its original
attributes, plus the file data.
See also: dsmmigundelete; Leader Data;
MIGREQUIRESBkup
RESToremigstate, query 'dsmc Query Option' in ADSM or 'dsmc
show options' in TSM; look for
"restoreMigState".
RESTORES SQL table for currently active client
restoral operations, introduced in v3
for Restartable Restores. Is what is
inspected by the client 'dsmc Query
RESTore' command and the server 'Query
RESTore' command.
Restoring to renamed disk volumes on One day you back up your files when your
OS/2, NT, and the like PC volume name is "DATA". Later that
day you rename the volume to "APPS". If
you wanted to restore the previously
backed up data, you could change the
volume name back; or you could simply
specify the filespace name in curly
braces, i.e.: RESTORE {OLDNAME}\*
instead of RESTORE D:\* .
Restrict server access Use the Groups and Users options (q.v.).
Retain Extra Versions Backup copy group attribute reflecting
the specification "RETExtra" (q.v.).
Retain Only Version Backup copy group attribute reflecting
the specification "RETOnly" (q.v.).
Retension Term to describe "relaxing" a tape...
Retensioning a tape means to wind to the
end of the tape and then rewind to the
beginning of the tape to even the
tension throughout the tape. Doing this
can reduce errors that would
otherwise be encountered when reading
the tape. When tapes are read or
written, that occurs at a much lower
speed than the rewind preceding tape
ejection. Whereas normal read-write
speeds wind the tape relatively evenly
and gently, rewinding is more stressful,
and can result in the tape being
stretched somewhat, or even compressed
in the inner part of the spool. The bit
spacing is thus slightly altered. It
therefore helps to let the tape "unwind
and relax", to help return the tape to
a more natural condition. Reading a tape
without retensioning it, itself respools
the tape and causes some relaxation such
that after a read error, a second read
attempt may work fine.
In Unix, retensioning can be performed
via 'tctl ... retension'. See also the
man page on the rmt Special File: you
can specify a device suffix number to
cause automatic retensioning. In the
case of TSM, you could conceivably
redefine your tape drive to use one of
the dot-number suffixed variants of the
device name, and achieve automatic
retensioning before reading. This may be
particularly desirable when you have to
read a large number of tapes that have
been in offsite storage.
Retention The amount of time, in days, that
inactive backed up or archived files are
retained in the storage pool before they
are deleted. The following copy group
attributes define retention: RETExtra
(retain extra versions), RETOnly (retain
only version), RETVer (retain version).
Ref: IBM site Technote 1052632, "TSM
Policies Demystified".
Retention period for archived files Is part of the Copy Group definition
(RETVer). There is one Copy Group in a
Management Class for backup files, and
one for archived files, so the retention
period is essentially part of the
Management Class.
Changing the retention setting of a
management class's archive copy group
will cause all archive versions bound to
that management class to get the new
retention.
Retention period for archived files, 'UPDate COpygroup DomainName SetName
change ClassName Type=Archive
RETVer=N_Days|NOLimit'
where RETVer specifies the retention
period, and can be 0-9999 days, or
"NOLimit".
Effect: Changing RETVer causes any
newly-archived files to pick up the new
retention value, and previously-archived
files also get the new retention value,
because of their binding to the changed
management class.
Default: 365 days.
Retention period for archived files, ADSM server command:
query 'Query COpygroup [DomainName] [SetName]
[ClassName] Type=Archive
[Format=Detailed]'
Retention period for archived files, The retention period for archive files
set is set via the "RETVer" parameter of the
'DEFine COpygroup' ADSM command. Can be
set for 0-9999 days, or "NOLimit".
Default: 365 days.
Retention period for backup files, 'UPDate COpygroup DomainName SetName
change ClassName
RETExtra=N_Days|NOLimit
RETOnly=N_Days|NOLimit'
where RETExtra and RETOnly specify the
retention periods, each 0-9999 days or
"NOLimit".
Defaults (STANDARD policy): RETExtra=30,
RETOnly=60 days.
Retention period for backup files, ADSM server command:
query 'Query COpygroup [DomainName] [SetName]
[Format=Detailed]'
Retention period for event records 'Set EVentretention N_Days'
in the server database
Retention period for HSM-managed files They are permanently retained in the
sense that they are server file system
files and thus are implicitly permanent.
What *do* expire are the migrated copies
of these files, on the ADSM server.
That is controlled by the
MIGFILEEXPiration option in the Client
System Options File (dsm.sys), whose
value can be queried via:
'dsmc Query Option' or 'dsmc show
options'.
You can code 0-9999 days.
Default: 7 days.
Retention period for migrated (HSM) Control via the MIGFILEEXPiration option
files (after modified or deleted in in the Client System Options file
client file system) (dsm.sys). Default: 7 days.
RETExtra Backup Copy Group operand defining the
retention period, in days, for Inactive
backup versions (i.e., all but the
latest backup version).
The RETExtra "clock" does not start
ticking until the backup version goes
Inactive, by virtue of another Backup
having been run to create a new Active
version which displaces the prior Active
version. That is, if you back up a file
on January 15, 1997, but don't back it
up again until March 1, 1997, RETExtra
retention period for the first backup
version counts from March 1, not January
15.
When the file is deleted from the client
and a subsequent Backup makes this known
to the server, all the RETExtra copies
will persist, and will continue their
expiration countdown: they do not
immediately disappear because the client
file was deleted.
A RETExtra=NOLIMIT setting will cause
the next-most recent copy to also be
kept indefinitely (until the next backup
version is created, in which case it is
expired per the VERExists/VERDeleted
settings).
For files still present on the client,
Inactive versions will be discarded by
either the VERExists versions count or
the RETExtra retention period -
whichever comes first.
RETExtra is not an independent value: it
should be considered a subset of
RETOnly.
See also: RETExtra, RETOnly, VERDeleted,
VERExists
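The "clock starts at inactivation" behavior above can be checked with ordinary date arithmetic (GNU 'date' assumed; the dates come from the January 15 / March 1 example in the text, and the RETExtra value is invented):

```shell
# Illustration of the RETExtra clock described above, using GNU 'date'.
# The file was backed up 1997-01-15, but that version only went Inactive
# when the next backup ran on 1997-03-01, so a RETExtra of 30 days
# counts from March 1, not January 15.
inactivated="1997-03-01"   # date the version became Inactive
retextra=30                # RETExtra, in days (example value)

eligible=$(date -d "$inactivated + $retextra days" +%Y-%m-%d)
echo "earliest expiration: $eligible"
```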
RETExtra, query 'Query COpygroup', look for
"Retain Extra Versions".
RETOnly Backup Copy Group operand defining the
retention period, in days, for the sole
remaining Inactive version of a
backed-up file.
The scenario is: A client file that
changes over time is backed up and
accumulates multiple Inactive copies, as
well as the Active copy, which is an
image of the file that prevails on the
client. The Inactive versions age, and
will be deleted from server storage once
older than the RETExtra value. Because
the file still exists on the client, the
RETOnly value is ignored. Once the
file is deleted from the client, there
will be only Inactive versions in server
storage. When the number of Inactive
versions drops to 1, the RETOnly value
is considered, and the final version
will be kept only as long as its
increasing age is less than RETOnly.
This is to say that the RETOnly "clock"
for the final backup has in effect been
ticking since that final version of the
file went Inactive. The RETOnly value is
intended to allow you to keep the final
version of the file longer than the
RETExtra value, if desired.
Example: RETExtra=45 and RETOnly=45...
The final Inactive version will be on
the server for no more than 45 days. If
you wanted to keep it for 45 days
longer, you would have to code
RETOnly=90.
It does not make sense for the RETOnly
value to be less than the RETExtra
value, given that both refer to the
singular age of one file, whose aging
has been in progress for some time.
RETOnly is not an independent value: it
should be considered a superset of
RETExtra.
(When searching the Admin Guide manual,
search on "Retain Only Versions".)
RETOnly, query 'Query COpygroup', look for
"Retain Only Version".
Retrieval performance In performing a Retrieve of Archive
data, many of the same factors are at
play as listed in "Restoral
performance". Some specifics:
- If CRC data is associated with the
storage pool data, the CRC is
validated during the retrieval, which
adds some time.
Retrieve The process of copying an archived copy
of a file from ADSM storage to a local
file system. You can retrieve a file to
its original location or a new location.
The archive copy in the storage pool is
not affected. Contrast with Archive.
ADSMv2 did not archive directories, but
files in subdirectories were recorded by
their full path name, and so during
retrieval any needed subdirectories will
be recreated, with new timestamps.
ADSMv3+ *does* archive directories.
Files which had been pointed to by
symbolic links will be recreated as
files having the name of the symlink.
Contrast with Archive, Restore, Recall.
See: dsmc RETrieve
Retrieve to tape, not disk The Retrieve function wants to write the
de-Archived file to disk. But sometimes
you simply don't have enough disk space
to accommodate standard retrieval of very
large files. Here is a Unix technique
for instead retrieving the files, one at
a time, and putting each directly to
tape:
In one window, do:
mkfifo fifo; # Create Named Pipe,
# called "fifo".
dd if=fifo of=/dev/rmt1 # Tape drive
# of your choice, tape in it.
In another window, do:
dsmc retrieve -REPlace=Yes
-DEscription="___"
ArchivedFilename fifo
This will retrieve the desired archived
file and, instead of retrieving it to
its natural name, will instead direct it
to "fifo". The "-REPlace=Yes" will quell
the retrieve's fear of replacing the
file which, as a FIFO type special file,
will instead result in the data being
sent to whatever is reading the named
pipe, which in this case is the 'dd'
command, which passes it to tape. When
the retrieval ends, the 'dd' command
will end and the file's data will be on
that tape. Record on the tape's
external label the identity of the data
written to the tape. To later extract
the data from the tape, again use the
'dd' command, specifying the chosen tape
drive via "if" and an output file via
"of". Because this is plain data on a
non-labeled tape, an operating system
other than Unix should be able to get
the data from the tape just as easily.
Note that the inverse is not possible:
you cannot have a FIFO as input to a
dsmc Archive operation. (TSM will detect
the named object as being a special file
and archive it as such, which is to say
send its description to the server,
rather than try to read it as a file.)
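The named-pipe plumbing above can be rehearsed without a TSM server or tape drive; in this sketch a plain file stands in for /dev/rmt1 and 'cat' stands in for the 'dsmc retrieve' command:

```shell
# Demonstration of the named-pipe technique above, using a plain file
# in place of the tape device and 'cat' in place of 'dsmc retrieve',
# so the data flow can be tried anywhere.
workdir=$(mktemp -d)

printf 'archived file contents\n' > "$workdir/original.dat"  # stand-in archive copy

mkfifo "$workdir/fifo"                                # create the named pipe
dd if="$workdir/fifo" of="$workdir/tape.img" 2>/dev/null &   # "tape drive" reader
ddpid=$!

cat "$workdir/original.dat" > "$workdir/fifo"  # stand-in for 'dsmc retrieve ... fifo'
wait "$ddpid"                                  # dd ends when the writer closes the pipe

cmp -s "$workdir/original.dat" "$workdir/tape.img" && result=intact
echo "data transfer: $result"
rm -rf "$workdir"
```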
Retrieve, handling of existent file Use the Client User Options file
file on client (dsm.opt) option REPlace to specify
handling.
Retry Conventionally refers to retrying a
backup operation, for one of the
following reasons:
1. The file is in use and, per the
Shared definitions in the COpygroup
definition, the operation is to be
retried. In the dsmerror.log you may
see an auxiliary message for this
retry: "<Filename> truncated while
reading in Shared Static mode."
2. The file exceeds the capacity of a
storage pool in the hierarchy such
that the backup has to be retried
with a storage pool lower in the
hierarchy.
3. The backup is direct-to-tape and the
tape is not mounted: the client will
send the data to the server, who
rejects the operation until the tape
is mounted, and then the client
resends the file(s).
4. In backing up an HSM file system, the
file being backed up is a migrated
file and so a mount of its storage
pool volume is required.
The Retry inflates the summary statistic
"Total number of bytes transferred" in
the cases where the file is actually
re-sent to the server.
See also: Changed
Retry # 1 In a backup session client log,
indicates that the file has been found
to have changed as it was being backed
up (you will see a preceding
"Normal File--> ...Changed" entry), and
that per the CHAngingretries client
option, the backup of the file is being
retried. The dsmerror.log will typically
have a corresponding entry like
"<Filename> truncated while reading in
Shared Static mode.".
Retry drive access See: DRIVEACQUIRERETRY
RETRYPeriod Client System Options file (dsm.sys)
option to specify the number of minutes
you want the client scheduler to wait
between attempts to process a scheduled
command that fails, or between
unsuccessful attempts to report results
to the server.
Default: 20 minutes
Return codes (status codes) In product releases prior to 5.1, there
were no return codes that customers
could test from the command line client.
Per IBM then: "The return code from the
execution of any of the ADSM executables
(except the ADSM API) cannot be relied
upon, and is not consistent and is
therefore not documented. We do log
errors in the error log and the
schedule log, and these are what you
should rely upon."
As of 5.1, however, reliable, documented
return codes are available, as per the
B/A client manual "Return codes from the
command line interface". The return code
is based upon the severity letter at the
end of the 'ANSnnnn_' message labels:
I: 0 W: 8 E: 12
RC 4 indicates skipped files (not
"failure").
RC 12 May occur if the client nodename
and/or IP address are different
from last session time.
Ref: swg21114982
('HELP QUERY EVENT' will also explain
the return code values.)
You cannot configure which messages will
generate which return code.
API return codes are documented in the
manual "Using the Application Program
Interface" (SH26-4123), and in the TSM
Messages manual.
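A scheduled-command wrapper might map the 5.1+ return codes above to readable text; the function name below is this document's invention, not part of TSM:

```shell
# Hypothetical helper: classify a TSM 5.1+ client return code per the
# severity scheme described above (0=success, 4=skipped files,
# 8=warnings, 12=errors).
classify_rc() {
    case "$1" in
        0)  echo "success" ;;
        4)  echo "completed, some files skipped" ;;
        8)  echo "completed with warnings" ;;
        12) echo "errors occurred" ;;
        *)  echo "unexpected return code $1" ;;
    esac
}
```

A typical use at the end of a backup script would be 'dsmc incremental; classify_rc $?'.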
Return codes, Windows Are documented in the WINERROR.H file.
RETVer Archive Copy Group attribute, specifying
how long to keep an archive copy.
REUsedelay Stgpool option which says how many days
must elapse after all files have been
deleted from a volume before the volume
can be reused.
The REUsedelay is designed to prevent
a sequence of events like the following:
TSM database is backed up
Reclamation moves contents of tape A
to tape B
Tape A is rewritten with new files
TSM database suffers failure
TSM database is restored from backup
mentioned above
After this sequence of events the db
will have certain files recorded as
being on tape A even though the files
have actually been overwritten. Avoiding
this situation calls for a REUsedelay
value which matches the retention period
for backups of the TSM database
(typically from a few days to a couple
weeks). No useful purpose is served by
setting REUsedelay to a value
dramatically larger than the retention
period for database backups.
A volume subject to REUsedelay will show
a Status of "Pending".
Server internals will take care of
finally deleting the pending volume from
the stgpool when its time is up. This
examination is believed to be in *SM's
internal hourly process.
Messages: ANR1342I, then ANR1341I when
the deletion actually occurs, that many
days later.
Default: 0 (days).
See also: Reclamation
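The matching rule above (REUsedelay at least as long as database backup retention) lends itself to a quick script check; both values below are illustrative:

```shell
# Sanity check suggested by the entry above: a stgpool's REUsedelay
# should be at least as long as the retention of TSM database backups,
# so a restored database never points at overwritten tapes.
reusedelay=7        # stgpool "Delay Period for Volume Reuse", days
dbbackup_kept=7     # days of database backups you keep (example)

if [ "$reusedelay" -ge "$dbbackup_kept" ]; then
    verdict="covered"
else
    verdict="exposed"
fi
echo "REUsedelay check: $verdict"
```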
REUsedelay, query 'Query STGpool PoolName Format=Detailed'
for "Delay Period for Volume Reuse".
REUsedelay, thwart To return a volume to the Scratch pool
before the REUsedelay expires, just do
'DELete Volume ______'. (Note that
'UPDate Volume' won't do it.)
REVoke AUTHority ADSM server command to revoke one or
more privilege classes from an
administrator. Syntax:
'REVoke AUTHority Adm_Name
[CLasses=SYstem|Policy|STorage|
Operator|Analyst]
[DOmains=domain1[,domain2...]]
[STGpools=pool1[,pool2...]]'
Also: GRant AUTHority, Query ADmin
RIM DBMS Interface Module.
Ref: Redbook "Using Databases with
Tivoli Applications and RIM" (SG24-5112)
RMAN The Oracle 8 Recovery Manager
(backup/restore utility), to back up an
Oracle database to tape, unto itself.
Ships with all versions of Oracle 8.
Replaced EBU from Oracle 7.
TSM (ADSM ConnectAgent; TSM Data
Protection) provides an interface
between RMAN and *SM to allow backups
straight to your *SM Server. Each backup
has a unique filespace name based upon
the backup timestamp.
In Solaris: RMAN looks for a library
named libobk.so which got installed when
you install TDPO. TDPO uses TSM API to
connect to TSM server to send/receive
data.
RMAN uses backuppiece names to back up
its data, which basically means that DP
for Oracle only receives a logical name
related to the data. For this, DP for
Oracle has to virtualize the filespace
name and highlevel name on the TSM
Server. By default the backuppieces are
stored under the name
\adsmorc\orcnt\<backuppiece> where
backuppiece is the name that Oracle
associates with the backed up data. You
can seek the objects on the TSM server
by using Query FIlespace.
Be aware that RMAN is not very robust in
reporting errors from initialization
problems.
RMM Removable Media Manager; an IBM tape
management system.
RMSS IBM: Removable Media Storage Systems
See also: SSD RMSS device driver
rmt*.smc See: /dev/rmt_.smc
Roll-off Another term for Expiration, referring
to file objects aging out and going
away.
Rollforward See: Set LOGMode
RPFILE DRM Recovery Plan File object volume
type.
See: DELete VOLHistory; EXPIre
Inventory; Query RPFContent; Query
RPFile; Query VOLHistory; Set
DRMRPFEXpiredays; Volume Type
RSM Removable Storage Management: an
industry-standard API.
RSM prevents TSM from direct control of
the library as far as media handling is
concerned. TSM is not able to label,
check in, or check out tape volumes;
these operations must be performed by
RSM through the Windows Management
Console.
Ref: TSM 3.7 Technical Guide redbook
See also: adsmrsmd.dll
RTFM Old data processing colloquialism
chiding the individual to
Read The F*ing Manual. More genteelly
translated as Read That Fine Manual.
Run "Sess State" value from 'Query SEssion'
saying that the server is executing a
client request (and not waiting to
send data).
See also: Communications Wait;
Idle Wait; Media Wait; SendW; Start
RUn Server command to run Scripts. Syntax:
'RUn Script_Name Substitution_Value(s)
Preview=No|Yes Verbose=No|Yes'
Note that if run from dsmadmc that
neither the command prompt nor
completion messages will appear until
the script completes - which may be a
long time if the script is, for example,
a daily housekeeping job. In such cases,
the best approach is to let it run in
one dsmadmc session and perform Query
PRocess observations from another.
Run Time API (Runtime API) Refers to the TSM API runtime library.
See also: Compile Time
SAIT Sony Advanced Intelligent Tape, an
enterprise tape storage technology, a
follow-on to AIT. Utilizes half-inch
tape (in contrast to AIT's 8mm) in a
single-reel cartridge and provides over
twice the uncompressed capacity of the
nearest linear half-inch tape drive. The
drive is sized for a 5.25" bay.
SAIT-1 The first generation of SAIT.
Capacity: 500 GB native. ADLC
compression may get up to 1.3 terabytes.
Transfer rate: 30 MB/s native; up to 78
MB/s with compression.
SAIT-1 is essentially the same as AIT-3,
but using a different tape width.
Supported as of TSM 5.2.2.
www.aittape.com/pdf/Sony_SAIT_FAQs.pdf
Samba file serving complexities Samba is a way for a Unix system to
function like a Windows Share server.
By default, Samba simply delivers the
files to the Unix file system with file
names and contents in their native
Windows code page. If you want the Samba
server to provide file backup service as
a Windows server would, you have a
problem, in that TSM provides Unicode
capability for Windows, but not Unix.
Attempting to perform a 'dsmc i' on Unix
for those files yields error
"unrecognized symbols for current
locale, skipping...". A way around this
is to have all new files incoming to the
Samba server get readable filenames, via
smb.conf specs, like:
client code page = 862
character set = ISO8859-8
(which are for Hebrew). A complication
is that Samba's code page specs are
singular, pertaining to all clients
using the Samba instance. That is, all
clients must use the same language for
the scheme to work.
To determine what code page a Windows or
DOS client is using, open a DOS command
prompt and type the command 'chcp'. This
will report the code page number. The
default for USA MS-DOS and Windows is
page 437. The default for western
European releases of the above operating
systems is code page 850.
SAN Storage Area Network, a somewhat loosely
defined approach to isolating backup
traffic to its own Fibre Channel network
and providing peer-level storage
servers. As of 2000, an immature
technology with little standardization
or interoperability.
See http://www.computerworld.com/cwi/
story/0,1199,NAV47_STO48238,00.html
SAN Data Gateway A SAN device to which the 3590 drives in
a 3494 library can be attached, for
access by a host. If there is question
about the device addresses after
hardware work, for example, the DG can
re-scan its SCSI chains (after deleting
them from TSM and the operating system,
to be followed by reacquisition by the
OS and TSM following the re-scan).
SANergy Ref: TSM 3.7.3+4.1 Technical Guide
redbook; TSM 4.2 Technical Guide redbook
SARS Statistical Analysis and Reporting
System, in 3590 tape technology. SARS
analyzes and reports on tape drive and
tape cartridge performance to help you
determine whether the tape cartridge or
the hardware in the tape drive is
causing errors, determine if the tape
media is degrading over time, and
determine if the tape drive hardware is
degrading over time. Manual:
"Statistical Analysis and Reporting
System User Guide", available at
www.storage.ibm.com/hardsoft/tape/pubs/
pubs3590.html
SCALECAPacity TSM 5.2.2+ DEVclass parameter to define
the percentage of the (3592) media
capacity that can be used to store data.
The default is 100 (%), as you would
expect, but you can otherwise specify
20 or 90.
3592 tapes can be scaled, to confine
data recording to a reduced length of
the tape (as opposed to reducing the
density of the data written over the
whole length of the tape).
Refs: 3592 Introduction and Planning
Guide manual; IBM TotalStorage
Enterprise Tape: A Practical Guide
redbook
SCHEDCMDUser TSM 4.2+ Unix (only) client option to
specify the name of a valid user on the
system where a scheduled command is
executed. If this option is specified,
the command is executed with the
authorization of the specified
user. Otherwise, it is executed with the
scheduler authorization.
Default: Run schedule under root (UID 0)
For Windows, you can use a different
user for the TSM client scheduler as
long as your user has the following
rights:
- Back up files and directories
- Restore files and directories
- Manage auditing and security logs
You can use 3 different tools:
1) The setup wizard in the B/A client
GUI, where you may choose an account
other than the usual System.
2) Using the dsmcutil command, you can
use the /ntaccount:ntaccount and the
/ntpassword:ntpassword parameters
when creating the scheduler:
dsmcutil install
/name:"TSM Scheduler Service"
/node:ALPHA1 /password:nodepw
/autostart:yes /ntaccount:ntaccount
/ntpassword:ntpassword
3) If the service already exists, you
can set the desired user via Services,
Properties - Log On tab.
SCHEDCOMPLETEaction Macintosh client Preferences file option
to specify what action to take after a
schedule has been completed. Choices:
Quit Tells the scheduler application
to quit once a schedule has
completed.
SHUTdown Causes your Mac to be shut
down once a schedule has
completed.
SCHEDLOGname Client System Options file (dsm.sys)
option to specify the schedule log.
Must be coded within the server stanza.
Default: the installation directory and
a file name of "dsmsched.log".
Best if it is a normal place, like:
/var/log/adsmclient/adsmclient.log
Beware symbolic links in the path, else
suffer ANS1194E.
SCHEDLOGRetention Client System Options file (dsm.sys)
option to specify the number of days to
keep schedule log entries and whether to
save the pruned entries.
Syntax:
SCHEDLOGRetention [N | <days>] [D | S]
where:
N Do not prune the log (default).
days Number of days of log to keep.
D Discard the pruned schedule log
entries (the default).
S Save the pruned schedule log entries
to same-directory file dsmsched.pru
Placement: Code within server stanza.
Possibly define a low number to prune
old entries, to keep the file size
modest. 'SCHEDLOGRetention 2 s' causes
pruned entries to be saved (s) to a
dsmsched.pru file.
See also: ERRORLOGRetention;
SCHEDLOGname
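Putting the pieces together, a server stanza might carry option lines like the following sketch (the path and day count are examples only):

```shell
# Example dsm.sys option lines (placed within a server stanza)
# implementing the pruning suggestion above: keep 7 days of schedule
# log and save pruned entries to dsmsched.pru.
schedlog_opts='SCHEDLOGname      /var/log/adsmclient/dsmsched.log
SCHEDLOGRetention 7 S'
printf '%s\n' "$schedlog_opts"
```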
SCHEDMODe (in client) Client System Options file (dsm.sys)
option, to be coded in each server
stanza, to specify which *SM schedule
mode to use:
POlling, for the client scheduler to
query the *SM server for
scheduled work at intervals
prescribed by the
QUERYSCHedperiod option; or
PRompted, for the client scheduler to
wait for the *SM server to
contact the client when
scheduled work needs to be
done. This choice is
available only with TCP/IP:
all other communication
methods use POlling. See
firewall notes below.
Pictorially, the tickling direction is:
Polling: client --> server
Prompted: client <-- server
On Polling:
With Polling, the server never has to
contact the client: the client initiates
all the communication. Despite the name,
POlling does not continually interrupt
the server (the QUERYSCHedperiod option
limits this), and is what to use when
randomizing schedule start time via the
server 'Set RANDomize' command. Note
that in polling, the server does not
need the IP address or port number of
the client. Polling is a good method to
use with DHCP network access, with its
varying IP addressing, as TSM never has
to "remember" a client's network address
that way. Note that the long intervals
between polling make this method
problematic for when schedules are added
or revised on the server, particularly
for those from DEFine CLIENTAction.
On Prompted:
The effect of this choice is that the
client process sits dormant, and that at
a scheduled time, the server will
contact the client, to tickle it into
initiating a session with the server.
That is, it is not the case that the
server unto itself conducts a session
with the client, but rather that the
client is merely given a wake-up call to
conduct a conventional session with the
server. Prompted mode does not
ordinarily work across a firewall: use
POlling instead, unless you employ
SESSIONINITiation SERVEROnly. How the
server knows the address and port number
in order to reach the client: The basic
approach is that when a client contacts
the server, the client IP address and
port number are "registered" and stored
on the server. Alternately, the server
may be explicitly told to use an IP
address and port number per overriding
node definitions in the server, per the
HLAddress and LLAddress values. When it
is time to prompt that client, the
appropriate IP address and port numbers
are used. If HLAddress/LLAddress are
not used and the IP address changes for
that client, or its option file is
updated to specify a new TCPCLIENTPort
number, then the client schedule process
must be stopped and restarted in order
for the new values to be "registered"
with server, for it to be able to
subsequently contact the client.
Prompted mode log entries:
"Waiting to be contacted by the server."
See also: IP addresses of clients;
QUERYSCHedperiod; SESSIONINITiation;
Set QUERYSCHedperiod; Set SCHEDMODes;
TCPPort
Ref: Tivoli Field Guide "Using the
Tivoli Storage Manager Central
Scheduler"
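A minimal server stanza for prompted-mode scheduling might look like the following sketch (server name, address, and client port are invented examples, not defaults to copy):

```shell
# Sketch of a dsm.sys server stanza for PRompted scheduling.
# All names and the port number are illustrative.
dsmsys_stanza=$(cat <<'EOF'
SErvername       tsmprod
   COMMMethod        TCPip
   TCPServeraddress  tsm.example.com
   SCHEDMODe         PRompted
   TCPCLIENTPort     1501
EOF
)
printf '%s\n' "$dsmsys_stanza"
```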
SCHEDMODe (in client), query 'dsmc Query Option' in ADSM or 'dsmc
show options' in TSM; SchedMode value.
SCHEDMODes (in server) *SM server definition of the central
scheduling modes which the server
allows.
Set via:
'Set SCHEDMODes [ANY|POlling|PRompted'
Query via: 'Query STatus', inspect
"Scheduling Modes".
Schedule A time-based action for the server
(Administrative Schedule) or client
(Client Schedule) to perform.
An Administrative Schedule is used to
perform things like migration,
reclamation, database backup.
A Client Schedule is used to perform one
of three things: ADSM client functions
such as backup/restore or
archive/retrieve; or a host operating
system command; or a macro (by its file
name, but not the ADSM MACRO command).
See "Schedule, Client" for detailed
info.
Schedule, associate with a client 'DEFine ASSOCiation Domain_Name
Schedule_Name ClientNode [,...]'
Schedule, Administrative A server-defined schedule is used to
perform a server command.
Controlled by 'DEFine SCHedule' to
define the particulars of the schedule.
Don't forget to code "ACTIVE=Yes".
Note that administrative schedules are
associated with the administrator who
last defined or updated them: the
schedule will not run if that
administrator is no longer valid
(removed, renamed, locked).
Schedule, Administrative, one time DEFine SCHedule with PERUnits=Onetime.
Schedule, Client A server-defined schedule is used to
perform one of three things: ADSM client
functions such as backup/restore or
archive/retrieve; or a client operating
system command; or a macro (by its file
name, but not the ADSM MACRO command).
Controlled by 'DEFine SCHedule' to
define the particulars of the schedule
and then 'DEFine ASSOCiation' to
associate the node with the schedule.
Thereafter you have to invoke 'dsmc
schedule' on the client for the Client
Schedule to become active: it is a
client-server mechanism and requires the
participation of both parties. The
minimum period between startup windows
for a Client Schedule is 1 hour.
A Client Schedule is kind of an ADSM
substitute for using cron on the Unix
client in order to perform the action.
The Client Schedule start time will be
randomized if 'Set RANDomize' is
active in the server.
See also: DEFine CLIENTAction;
DEFine SCHedule; SET CLIENTACTDuration;
Weekdays schedule, change the days
Schedule, Client, Archive type One awkwardness with scheduling Archive
operations via client schedules is the
Description field: defined with the
OPTions keyword, it becomes an unvarying
value, which defeats the selectability
that the Description field is for. The
only recourse seems to be to omit it,
which causes the archive date to be
stored, like "Archive Date: 07/11/01".
Multiple archives per day will not be
unique, but archives on separate days
will.
Schedule, Client, one time DEFine SCHedule with PERUnits=Onetime,
or use 'DEFine CLIENTAction'
Schedule, define See: DEFine SCHedule
Schedule, define to AIX SRC 'mkssys -s adsm -p
/usr/lpp/adsm/bin/dsmc -u 0 -a "sched
-q -pas=foobar" -O -S -n 15 -f 9'
then You can start it either by
calling "startsrc -s adsm" or let the
Schedule, dissociate from client 'DELete ASSOCiation DomainName
SchedName NodeName[, Nodename]'
Schedule, interval Defined via the PERiod parameter in
'DEFine SCHedule', in the server.
See also: QUERYSCHedperiod
Schedule, missed At the end of the start duration for a
given schedule, the schedule manager
looks for nodes associated with the
schedule which never "started" (probably
caused by the client scheduler not being
active at the known IP address). These
get marked as "missed". At the same
time that this "check" is performed the
schedule manager also checks for nodes
which are in a "started" or "re-started"
state. For these nodes, there is check
done to determine if there is an active
session for the node/schedule
combination. If there is no session
(most likely caused by some sort of
timeout) then the schedule is marked as
"failed" in the server schedule event
table. Here is the "catch": Although the
client may reconnect after this time and
complete the activity, the event table
will NOT be updated to note this. This
case is what most administrators might
be seeing. There has to be some sort of
garbage cleanup for clients that never
do re-connect. If you see a lot of this,
you should consider updating your
IDLETimeout and COMMTimeout periods to
longer values. Also consider a longer
duration for the schedule. While the
duration is used for a start period and
not the time the scheduled activity must
complete in, the end of the duration is
used as a sanity check for prompted
sessions that have "disappeared".
Missed schedules are often caused by
wrong or expired passwords, or an
outdated MAXSessions server option
value. Msgs: ANR2571W et al
See also: Missed
Schedule, query from client 'dsmc Query Schedule'.
Shows schedule name, description, type,
next execution, etc.
Schedule, randomize starts See: Set RANDomize
Schedule, run command after Use the POSTSchedulecmd Client System
Options file option to specify the
command to be run.
Schedule, run command before Use the PRESchedulecmd Client System
Options file option to specify the
command to be run.
Schedule Randomization Percentage Output field in 'Query STatus' report.
See 'Set RANDomize' for details.
Schedule Log, prevent creation That log is controlled by the
SCHEDLOGName option. If running Unix,
you can define the name as /dev/null to
avoid creating a log file.
Schedule Log name The schedule log's default name, as it
resides in the standard ADSM directory,
is dsmsched.log.
Can be changed via the SCHEDLOGname
Client System Options file (dsm.sys)
option. Query via 'dsmc q o' and look
for SchedLogName.
Beware symbolic links in the path, else
suffer ANS1194E.
See: SCHEDLOGname
Schedule log name, query ADSM: 'dsmc Query Options'
TSM: 'dsmc SHOW Options'
look for "SchedLogName".
Schedule log name, set Controlled via the SCHEDLOGname Client
System Options file (dsm.sys) option
(q.v.).
Schedule Log pruning Messages: ANS1483I, ANS1485E
Schedule Randomization Percentage 'Query STatus', look for
"Schedule Randomization Percentage"
Schedule retry period Controlled via the RETRYPeriod Client
System Options file (dsm.sys) option
(q.v.).
Schedule Service Windows: Employs the NT 'at' command to
schedule commands and programs to be run
at certain times. In NT4: Go into My
Computer; select Scheduled Tasks; open
Add Scheduled Task; select program to be
run.
Note that this just runs the TSM
schedule command: you additionally need
to define a client schedule in the TSM
server.
Alternative: Specify the 'dsmc schedule'
command in your Startup folder.
Beginning with TSM 4.1 and the use of
Microsoft Installer, the Schedule
Service is not automatically configured
at package installation time: configure
via dsmcutil or run the setup wizards
from the Backup/Archive GUI.
Tracing: See IBM site Technote 1152613
See also: PRENschedulecmd;
PRESchedulecmd
Scheduled commands Their output cannot be redirected: it
must go to the Activity Log.
Scheduled events, start and stop 'Query EVent * * Format=Detailed'
times, actual will reveal. If the events would all be
backups, you could also determine by:
'Query FIlespace [NodeName]
[FilespaceName]
Format=Detailed'
Scheduler, client See also: CAD; MANAGEDServices
Scheduler, client, looping Assure that dsmerror.log and
dsmsched.log are Excluded from backups.
Scheduler, client, Windows, restart Settings -> Control Panel ->
automatically Administrative Tools -> Services :
Select the service, open its properties,
then adjust Recovery as desired.
Scheduler, client, start You run the client program, telling it
to run in Schedule mode, basically:
/usr/lpp/adsm/bin/dsmc schedule
Note that the client options files are
read only when the dsmc program starts:
changes made to the files after that
point will not be observed by the
program. You have to restart dsmc for
such file changes to be picked up. In
contrast, the client option set in the
server is handed to the client scheduler
each time it run a schedule, and so
the scheduler does not have to be
restarted when cloptset changes are
made.
Ref: Installing the Clients
Scheduler, client, start automatically Unix: Add line to the client
/etc/inittab file to start it at boot
time. For AIX:
adsm::once:/usr/lpp/adsm/bin/dsmc sched
> /dev/null 2>&1 # ADSM Scheduler
Windows: Make a shortcut to the
scheduler EXE program and put the
shortcut into the Startup folder: this
causes the scheduler to start whenever a
person logs on.
Ref: Installing the Clients.
Scheduler, client, start automatically Add to startup.cmd:
in OS/2 'start "Adsm Scheduler" c:\adsm\dsmc
schedule /password=actualpassword'.
Add "/min" after the word "start" to
have it run in a minimized window.
Scheduler, client, start manually Under bsh:
'dsmc schedule > /dev/null 2>&1
< /dev/null &'
or use nohup:
'nohup dsmc schedule > /dev/null 2>&1
< /dev/null &'
By redirecting both Stdout and Stderr
you avoid a SIGTTOU condition
("background write attempted from
control terminal"); and forcing a null
input you avoid situations where the
command hangs awaiting input.
But if the command may be trying to tell
you that something is wrong (as when
your client password is expired), and
you are suppressing that information,
then you will not know what is going on.
It is healthier to direct Stdout and
Stderr to a log file.
On Unix you could alternately do:
'echo "/usr/lpp/adsm/bin/dsmc sched
-quiet" | at now'
At least do 'dsmc q o' under ADSM or
'dsmc show options' under TSM to check
your options, if not invoke 'dsmc
schedule' out in the open to capture any
messages, then cancel it.
Interesting note: If you start the
scheduler simply as 'dsmc schedule', it
displays a novel countdown timer, at
least when SCHEDMODe PRompted is in
effect. You may not want to leave a
superuser terminal session sitting
around like this, but it can be a
valuable way to help narrow down a
scheduler problem.
See also: dsmc
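The advice above about keeping Stdout and Stderr in a log file
rather than /dev/null can be captured in a small wrapper. This is
only a sketch: the SCHED_CMD path and LOG location are assumptions
to adjust for your site.

```shell
#!/bin/sh
# Sketch: start the client scheduler detached, keeping output in a
# log file instead of /dev/null so messages (e.g. an expired
# password) are not silently lost. Defaults below are assumptions.
SCHED_CMD="${SCHED_CMD:-/usr/lpp/adsm/bin/dsmc schedule}"
LOG="${LOG:-/tmp/dsmsched.log}"
# Redirect stdin/stdout/stderr so the background process neither
# hangs awaiting input nor takes a SIGTTOU writing to the terminal.
nohup $SCHED_CMD >> "$LOG" 2>&1 < /dev/null &
echo "scheduler started, pid $!, logging to $LOG"
```

Review the log file afterward: if the dsmc binary is absent or the
password is expired, the evidence lands there rather than vanishing.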
Scheduler, find in Windows (NT) regedit " adsm scheduler "
Scheduler, max retries Specify via the MAXCMDRetries option in
the Client Systems Options file
(dsm.sys). Default: 2
Scheduler, max sessions 'Set MAXSCHedsessions %sched'
Scheduler, number of times retry cmds 'Set MAXCMDRetries [N]'
Scheduler, windows, not installed TSM4 does not install the Scheduler as
part of the client install. You can use
the dsmcutil program to install it, or
do it from the GUI.
Scheduler "not working" Things to look for:
- Is your node actually registered on
the server? If so, has a LOCK Node
been done on it, or a global DISAble
SESSions been done on the server (msg
ANR2097I)? For that matter, is the
server running?
- Are you starting the scheduler process
on the client as superuser?
- In Unix, remember that the scheduler
process is a background process, and
so it behooves you to redirect Stdin,
Stdout, and Stderr. (See: Scheduler,
client, start ...)
- In Unix, beware having "dsmc sched" in
/etc/inittab with 'respawn', as the
dsmc process may respawn itself, and
init may also respawn it, resulting in
port contention. Consider using dsmcad
instead.
- If using PASSWORDAccess Generate, did
you perform the required initial
superuser session to plant the client
password on the client? Did the
password expiration period as defined
in REGister Node or Set PASSExp run
out?
- If the PRESchedulecmd returns a
non-zero return code, the scheduled
event will not run.
- Is the scheduler process actually
present? If present, is the process
runnable? (In Unix, a 'kill -STOP'
prevents it from running.)
- Has a schedule been defined on the
server, and has a DEFine ASSOCiation
been done to have your node perform it?
- Is the server reachable from your
client, and vice versa (network,
firewall issues).
- The client schedule type - polling or
prompted - will dictate the direction
in which to pursue analysis.
- Be sure to check client dsmerror.log
files for indications.
- You might also check for lingering
client sessions, which may exhaust
your eligible license count.
- For problem isolation, consider
running it as 'dsmc SCHedule', leaving
the superuser terminal session in a
foreground state like this for a day
or so (in a physically secure room).
- To debug an apparent TSM server
failure to schedule, define a client
schedule that runs every hour, with
ACTion=Command and OBJects specifying
a client command which will simply log
the scheduled invocation, such as the
Unix command 'date >> /var/log/debug'.
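The last suggestion above can be dry-run as below. The schedule,
domain, and node names and the admin credentials are illustrative
placeholders; DSMADMC defaults to an 'echo' stub so the commands
are merely displayed, not issued - point it at the real dsmadmc
to actually define the debug schedule.

```shell
#!/bin/sh
# Dry-run sketch of the hourly debug schedule suggested above.
# All names/credentials are placeholders; DSMADMC defaults to an
# 'echo' stub so nothing is actually sent to a server.
DSMADMC="${DSMADMC:-echo dsmadmc -id=admin -pa=secret}"
$DSMADMC "DEFine SCHedule STANDARD DEBUG1H ACTion=Command \
  OBJects='date >> /var/log/debug' PERiod=1 PERUnits=Hours"
$DSMADMC "DEFine ASSOCiation STANDARD DEBUG1H mynode"
```

If /var/log/debug stops gaining lines, the problem is on the
scheduling path (server, network, or client scheduler process)
rather than in the backup operation itself.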
Scheduler Service See: Schedule Service
Schedules, administrative, list Via server commands:
'Query SCHedule Type=Administrative'
- or -
'SELECT * FROM ADMIN_SCHEDULES'
Schedules, client, list Via server commands:
'Query SCHedule'
- or -
'SELECT * FROM CLIENT_SCHEDULES'
Schedules, pending TSM server: 'SHOW PENDING'
Schedules in effect 'Query ASSOCiation
[DomainName [SchedName]]'
Scheduling Mode A mode that determines whether your
client node queries a *SM server for
scheduled work (client-polling) or waits
to be contacted by the server when it is
time to perform scheduled services
(server-prompted). If using TCP/IP,
best to use the "server prompted"
scheduling mode. The client options file
will have to have an option coded that
says SCHEDMODe PRompted. The default
mode of scheduling is "client polling".
Scheduling Modes See: SCHEDMODes; Set SCHEDMODes
Scout daemon The dsmscoutd HSM process.
See: dsmscoutd
Scraper Device that Magstar hardware engineering
added to new 3590 drives in 1999, to
attempt to remove dirt from the tapes by
staying in contact with the tape as it
moved by. Ended up being discontinued
because friction heat would distort the
tape's plastic base, and the scraper
itself would become a source of dirt as
debris accumulated on it.
Scratch See: MAXSCRatch
Scratch, make tape a scratch Via ADSM command:
'UPDate LIBVolume LibName VolName
STATus=SCRatch'
Via Unix command:
'mtlib -l /dev/lmcp0 -vC -V VolName
-t 12e'
This is just a 3494 Library Manager
database change: ADSM does not see it,
and it will not be reflected in
'Query LIBVolume' output.
SCRATCH category, change tape to Via Unix command:
'mtlib -l /dev/lmcp0 -vC -V VolName
-t 12e'
which may be done if a tape already
prepared via the ADSM 'CHECKIn' command
somehow gets a wrong category, such as
INSERT. If tape not previously prepared
via the ADSM 'CHECKIn' command, you
should do that, which also prepares the
tape label.
SCRATCH category code 'Query LIBRary' reveals the decimal
category code number.
Scratch tape Term used to refer to a tape available
for general writing for a storage pool.
The number of scratch tapes eligible for
a storage pool is specified via:
'DEFine STGpool MAXSCRatch=NNN'
where the default is 0, with the
expectation then being that you would
dedicate volumes to the pool via
'DEFine Volume STGpool VolName'.
If scratch volumes are used, they are
automatically deleted from the storage
pool when they become empty.
Scratch tape, 3490, add to 3494 'CHECKIn LIBVolume LibName VolName
library containing 3490 and 3590 STATus=SCRatch
tape drives [CHECKLabel=no] [SWAP=yes]
[MOUNTWait=Nmins] [SEARCH=yes]'
Note that this involves a tape mount.
Newly purchased tapes should have been
internally labeled by the vendor, so
there should be no need to run the
'dsmlabel' utility.
Scratch tape, 3590, add to 3494 'CHECKIn LIBVolume LibName VolName
library containing 3490 and 3590 STATus=SCRatch
tape drives [CHECKLabel=no] [SWAP=yes]
[MOUNTWait=Nmins] [SEARCH=yes]
[DEVType=3590]'
Note that this involves a tape mount.
Newly purchased tapes should have been
internally labeled by the vendor, so
there should be no need to run the
'dsmlabel' utility.
Scratch tape, 3590, add to 3494 'CHECKIn LIBVolume LibName VolName
library containing only 3590 STATus=SCRatch DEVType=3590
tape drives [CHECKLabel=no] [SWAP=yes]
[MOUNTWait=Nmins] [SEARCH=yes]'
Note that this involves a tape mount.
Scratch tape, add to library 'CHECKIn LIBVolume LibName VolName
(as in 3494) STATus=SCRatch
[CHECKLabel=no] [SWAP=yes]
[MOUNTWait=Nmins] [SEARCH=yes]
[DEVType=3590]'
Note that this involves a tape mount.
Newly purchased tapes should have been
internally labeled by the vendor, so
there should be no need to run the
'dsmlabel' utility.
Scratch tapes, list See: Scratch volumes, list
Scratch Volume A volume which is checked into a
library, and is assigned a library
Category Code which makes it eligible
for dynamic use in a given server
storage pool. After that volume's
contents have evaporated, the volume
leaves the storage pool and returns to
eligible status. Contrast this with
volumes which are Defined into a storage
pool and stay there.
Ref: Admin Guide, "Scratch Volumes
Versus Defined Volumes".
Also, an element of Query Volume command
output. Its value is Yes if the volume
came from a scratch pool (and will
return there when the volume empties).
See also: Defined Volume
Scratch volume added to stgpool Msg: ANR1340I Scratch volume ______ now
defined in storage pool ________.
This is when *SM itself adds the volume
to the storage pool, when it needs more
writable space.
Corollary msg: ANR1341I
Does not correspond to adding a volume
to a storage pool via DEFine Volume,
whose message is ANR2206I.
SCRATCH volumes, count of in 3494 Via Unix command:
(3590 tapes, default ADSM SCRATCH 'mtlib -l /dev/lmcp0 -vqK -s 12E'
category code x'12E')
Scratch volumes, list In server: SELECT VOLUME_NAME, STATUS
FROM LIBVOLUMES WHERE STATUS='SCRATCH'
In Unix: mtlib -l /dev/lmcp0 -qC -s ___
where the scratch category must be
supplied, in hex
SCRATCHCATegory Operand of 'DEFine LIBRary' server
command, to specify the decimal category
number for scratch volumes in the
repository. Default value: 301.
3494: As the model number implies, the
3494 was introduced to contain 3490
tapes. 3590s are still an extension of
that origin. Thus, the scratch category
number you define is for 3490 tapes,
though they are essentially non-existent
today. 3590 scratches are implied to be
one number higher: SCRATCHCATegory+1. So
you must make allowances to avoid
conflicts, particularly with the Private
category number.
Scratches, list SELECT LIBVOLUMES.VOLUME_NAME, -
LIBVOLUMES.STATUS, -
LIBVOLUMES.LIBRARY_NAME FROM -
LIBVOLUMES LIBVOLUMES WHERE -
(LIBVOLUMES.STATUS='Scratch')
Scratches, number left SELECT COUNT(LIBVOLUMES.VOLUME_NAME) -
AS "Scratch volumes" FROM LIBVOLUMES -
WHERE (LIBVOLUMES.STATUS='Scratch')
Or, with a 3494 you can externally query
from the opsys command line, based upon
the category code of your scratches:
'mtlib -l /dev/lmcp0 -qC -s ScratchCode'
and then count the number of lines.
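The count-the-lines approach can be scripted as below. The mtlib
output here is canned sample data (an assumption standing in for a
live library); the real command appears in the comment.

```shell
#!/bin/sh
# Sketch: count remaining scratch volumes by counting the lines of
# 'mtlib -qC' output. Real usage (device and category code are
# site-specific assumptions):
#   mtlib -l /dev/lmcp0 -qC -s 12E > /tmp/mtlib.out
# Canned sample output stands in for a live library here:
cat > /tmp/mtlib.out <<'EOF'
VOL001, 012E
VOL002, 012E
VOL003, 012E
EOF
SCRATCHES=$(wc -l < /tmp/mtlib.out | tr -d ' ')
echo "Scratch volumes left: $SCRATCHES"   # prints: Scratch volumes left: 3
```

The same pipeline suits a cron job that warns when the count drops
below a threshold.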
Scripts See: Server Scripts
scripts.smp The product-supplied sample server
scripts definition file. May be
installed into the server bin directory,
or even its webimages directory.
See: SQL samples
SCROLLLines Client System Options file (dsm.sys)
option to specify the number of lines
you want to appear at one time when ADSM
displays lists of information on screen.
Default: 20 lines
SCROLLPrompt Client User Options file (dsm.opt)
option to specify whether you want long
displays to stop and prompt you to
continue, or to just pump out a whole
response without stopping.
Default: No
Specify 'No' if using the Webshell,
which needs to process ADSM command
output and balks at such prompts.
SCRTCH MVS, OS/390 generic designation for a
Scratch volume.
SCSI IDs in use, list AIX cmd: 'lsdev -C -s scsi -H'.
SCSI Library A library lacking an internal supervisor
such that the TSM server must physically
manage its actions, and must keep track
of volume locations.
Current SCSI libraries include: 3570;
3575; 3581; 3583; 3584.
For SCSI libraries, the server maintains
certain information to detect library
firmware bugs. If the customer expands
or otherwise changes the configuration of
their library, there is a procedure the
customer must follow; otherwise the
internal checks of the server will
prevent the initialization of the
library.
Perspective on SCSI libraries: Why would
anyone spend a lot for a 3494 when a
3584 is so inexpensive? SCSI libraries
are "Ford" level products, eliminating a
lot of functionality to reduce the price
point. The work they don't do they shift
to the host, and so TSM is burdened with
a lot of intricate SCSI element details
and control issues. The server software
has to keep in sync with any changes in
the SCSI library components and
protocols - a functionality exposure,
and more work for the TSM administrator.
The 3494 is a "Lexus" level product in
which operations are delegated to the
LM: TSM simply has to say "I want tape
123456 mounted", and let the library do
all the difficult stuff while TSM server
cycles are free to do real work.
See also: Element; SHow LIBINV
SDG SAN Data Gateway
As for connecting a host with fibre
channel to tape drives with Ultra SCSI
connections: the SDG bridges the two
connection technologies.
Ref: TSM 5.1 Technical Guide redbook
See also: Server-free
SECOND(timestamp) SQL function to return the seconds value
from a timestamp.
See also: HOUR(), MINUTE()
Secondary Server Attachment You can obtain a license for attaching a
second server to a Library. It is not a
functional thing, but rather just a
marketing thing to reduce the cost of a
second ADSM license for another server.
If so licensed, get the following
message at server startup:
ANR2859I Server is licensed for
Secondary Server Attachment.
Ref: Administrator's Guide.
Shows up in 'Query LICense' output.
SECONDS See: DAYS
Security in *SM First, *SM was not designed for
physically insecure environments.
Userid/Password: Rather rudimentary, in
that there is no distinction between
upper and lower case. But it uses a
"double-handshake" authentication
process that's pretty robust and
relatively tough to crack.
Client data: Can be stored in encrypted
file systems (EFS).
Client-server communication: Can be
encrypted. (See TSM 3.7.3 + 4.1
Technical Guide redbook)
Tapes: They are in proprietary,
undefined format, with no customer tools
for directly interpreting them.
See: Set INVALIDPwlimit;
Set MINPwlength
SEGMENT Column in SQL database CONTENTS table.
See: Segment Number
Segment Number For files that span sequential volumes,
identifies the portion of the file that
is on the given volume, as revealed via
the Query CONtent server command or a
SELECT * FROM CONTENTS. (For volumes in
random-access storage pools, no value is
displayed for this field.)
See also: Aggregated; Query CONtent;
Span volumes, files that, find
Segmentation violation ("Segfault") Also known as Signal 11 (SIGSEGV).
Program failure in Unix caused by a
programming error: the program attempts
to write to a region of memory to which
it does not have access, as in writing
past the end of an array due to failure
to check bounds.
You need to upgrade to a level of the
program where the defect is fixed.
You may be able to temporarily avoid the
failure if you can identify the
circumstances under which it occurs and
stay away from that scenario. The
problem may occur during an incremental
backup, where the Unix client is working
a large list of Active files gotten from
the server.
In some cases, you can prevent the
segfault by increasing the stack limit
using the 'ulimit -s' command.
If the server crashed, there may be a
dsmserv.err file with some indications
in it.
See also: MEMORYEFficientbackup
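The 'ulimit -s' workaround mentioned above amounts to raising the
stack limit in the shell that launches the client, so the client
inherits it. The values below are arbitrary assumptions.

```shell
#!/bin/sh
# Sketch: raise the stack size limit before starting the client,
# per the segfault workaround above. The 65536 KB fallback is an
# arbitrary choice for illustration.
ulimit -s unlimited 2>/dev/null || ulimit -s 65536 2>/dev/null
echo "stack limit now: $(ulimit -s)"
# ...start the client from this same shell so it inherits the limit:
# dsmc schedule >> /var/log/dsmsched.log 2>&1 < /dev/null &
```

Note that the limit applies only to processes started from this
shell; a scheduler started from inittab needs the limit set in its
own startup path.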
SELECT *SM command to perform an SQL Query of
the TSM Database, introduced in ADSMv3.
Syntax:
SELECT [ALL | DISTINCT]
column1[,column2] FROM table1[,table2]
[WHERE "conditions"]
[GROUP BY "column-list"]
[HAVING "conditions]
[ORDER BY "column-list" [ASC | DESC] ]
Note that this implementation of Select
is primitive, with a major shortcoming
being the absence of a LIMIT qualifier
to keep the search from plowing through
the whole table when, for example, only
the first occurrence of a value is
desired. This Select form also differs
from common SQL in requiring the
specification of FROM - which thus
prevents use of Select in *SM to
evaluate basic expressions, as you might
do "SELECT 2+2" to compute 4, or do
SELECT CURRENT_TIME to see that value.
(You can neatly work within this
requirement and get what you want, by
using a trivial *SM table as the FROM
value, as in:
SELECT CURRENT_TIMESTAMP FROM LOG
where table LOG serves as a placebo.)
Note that the *SM database is not an SQL
database per se: SQL Select was added on
top of it to provide customers the
ability to report information in a
flexible manner. The SQL tables that you
process via Select do not actually
exist: they are effectively constructed
as your Select runs (hence the TSM db
work space margin requirement.) However,
there is indexing: if you do
'SELECT * FROM SYSCAT.COLUMNS'
you will notice columns INDEX_KEYSEQ and
INDEX_ORDER, which on their own rows are
described:
INDEX_KEYSEQ Column key sequence
number
INDEX_ORDER Index key sequence
A - ascending
D - descending
Performing your Select based upon an
indexed column results in faster runs.
While less flexible, the pre-programmed
server commands which report from the
(actual) database are much faster in
that they are optimized to go directly
at the actual database format, and don't
have to go through the artificial SQL
interface. Note that various info is
not available through the SQL interface
- particularly that which is accessible
via client queries where the data
content is specific to the client
operating environment (OS, file system,
etc.). Generally speaking, if there is
no (supported) TSM server command which
reports certain information, there will
be no SQL access to it, either.
Impact: The Select command may require
work space to service the query, which
it takes from the TSM database itself -
and so you need a decent amount of free
space to do more complex Selects. The
SQL functions can also be performed via
the ODBC interface which is provided in
Windows clients (only). Appendix A in
the TSM Technical Guide redbook
perpetually carries ODBC usage info.
See also: Database; Events table; ODBC;
SQL ...
SELECT, date/time Select ... \
WHERE DATE(DATE_TIME)='mm/dd/ccyy'
SELECT, example of defining headers SELECT CLIENT_VERSION AS "C-Vers", -
CLIENT_RELEASE AS "C-Rel", -
CLIENT_LEVEL AS "C-Lvl", -
CLIENT_SUBLEVEL AS "C-Sublvl", -
PLATFORM_NAME AS "OS" , -
COUNT(*) AS "Nr of Nodes" FROM NODES -
GROUP BY -
CLIENT_VERSION,CLIENT_RELEASE,-
CLIENT_LEVEL,CLIENT_SUBLEVEL,-
PLATFORM_NAME
SELECT, example of pattern search SELECT * FROM ACTLOG WHERE MESSAGE LIKE
'%<process_name>%'
SELECT, example using dates SELECT * FROM ACTLOG WHERE DATE_TIME \
>'1999-12-22 00:00:00.000000' AND
DATE_TIME <'1999-12-23 00:00:00.000000'
SELECT, exclusive case To report columns which are in one table
but not in another, use the NOT IN
operators. For example, to report TSM
database backup volumes which have been
checked out of the library:
SELECT DATE_TIME AS -
"Date_______Time___________",TYPE, -
BACKUP_SERIES,VOLUME_NAME -
FROM VOLHISTORY WHERE -
(TYPE='BACKUPFULL' OR TYPE='BACKUPINCR')
AND VOLUME_NAME NOT IN (SELECT
VOLUME_NAME FROM LIBVOLUMES)
SELECT, generate commands from See: SELECT, literal column output
SELECT, literal column output You can cause literal text to appear in
every row of a column, which is one way
to generate lines containing commands
which operate on various database
"finds". The form is:
'Cmdname' AS " " ...
where Cmdname will appear on every line.
For example, here we generate
Update Libvolume commands for scratches:
SELECT 'UPDATE LIBV OUR_LIB' AS -
" ", -
VOLUME_NAME, ' STATUS=SCRATCH' FROM -
LIBVOLUMES WHERE STATUS='Scratch' -
> /tmp/select.output
Inversely, you may employ a literal to
occupy only the title of the first
column of a report, to name the report -
given that TSM's limited SQL excludes
the ability to have a page title, as the
TTITLE operator would do. Example:
SELECT '' AS "Title" ...
SELECT, restrict access See: QUERYAUTH
SELECT, speed vs. client speed You will inevitably realize that the B/A
client can obtain filespace and file
information much faster than it can be
obtained via the server Select command.
The gist of the matter is that Select is
a virtualized convenience for us server
administrators to look at the data in
the database, whereas the client "knows
the inside scoop" and can more directly
go after the data. Select is much more
generalized, and entails more overhead.
SELECT, terminate prematurely The SELECT may run for a ridiculously
long time, and you want it gone rather
than waiting for it to end. Entering
'C' to cancel is ineffectual because it
merely waits for the operation to end.
You need to do a CANcel SEssion from
another dsmadmc invocation in order to
get rid of it. This will terminate the
SELECT, but not force you out of the
original dsmadmc.
SELECT, yesterday ...DAYS(CURRENT_DATE)-DAYS(DATE_TIME)=1
SELECT output, column width The width of a column is governed by its
header; so you can use that to cause
your columns to be widened to keep
column content from wrapping across
lines. You define a column header via
the SQL "AS".
SELECT output, columnar instead of Issuing Select (and Query) commands from
keyword list the dsmadmc prompt may result in the
report being in Keyword: Value sets
instead of tabular, columnar output.
This can be controlled via the explicit
dsmadmc -DISPLaymode= option, but is
also the implicit result of the
combination of the number of database
entry fields (columns) you choose to
report, the column width of each, and
the width of your window. *SM *wants to*
display the results in tabular form, and
is helped in doing so by reducing the
number of fields reported and/or their
column width (via the <ColumnName> AS
____ construct). Widening your window
will also help. (In an xterm window, you
can aid this by the use of smaller
fonts: hold down the Ctrl key and then
press down the right mouse button, and
from the VT Fonts list choose a smaller
font.)
You can demonstrate the adaptation by
doing 'SELECT * FROM AUDITOCC' in a
narrow window, which will result in
Keyword: Value sets; then widen it to
get tabular output.
See also: dsmadmc; -DISPLaymode
Selective Backup A function that allows users to back up
objects (files and directories) from a
client domain that are not excluded in
the include-exclude list and that meet
the requirement for serialization in the
backup copy group of the management
class assigned to each object. A
selective backup of filenames will also
result in their containing directory
being backed up.
Performed via the 'dsmc Selective' cmd.
"Selective" backs up files regardless of
whether they have changed since the last
backup, and so could result in more
backup copies of the file(s) than usual.
In computer science terms, this is a
"stateless" backup.
Note that the selective backups
participate in your version limits.
Note that a Selective backup does not
back up empty directories, and it does
not change the "Last Incr Date" as seen
in 'dsmc Query Filespace', nor the
backup dates in 'Query FIlespace'
(because it is not an incremental
backup).
Rebinding: A Selective backup binds the
backed up files to the new mgmtclass,
but not the Inactive files: you must
perform an unqualified Incremental
backup to get the latter.
Example: dsmc s -subdir=y FSname
See also: dsmc Selective
Selective Backup, more overhead than Running a Selective Backup can be
Archive expected to entail more overhead than a
comparable Archive operation, in that
more complex retention policies are
involved in Backup policies than in
Archive. Remember that Archive retention
is based purely upon time, whereas
Backup involves both time and versions
decisions. File expiration candidates
processing based upon versions (number
of same file) is performed during client
Backups (in contrast to time-based
retention rules, which are processed
during a later, separate Expiration).
The more versions you keep, the more
work the server is distracted with at
Backup time.
Selective Backup fails on single file See: Archive fails on single file
Selective migration HSM: Concerns copying user-selected
files from a local file system to ADSM
storage and replacing the files with
stub files on the local file system.
Is governed by the
"SPACEMGTECH=AUTOmatic|SELective|NONE"
operand of MGmtclass.
Contrast with threshold migration and
demand migration.
Selective recall The process of copying user-selected
files from ADSM storage back to a local
file system. Contrast with transparent
recall. Syntax:
'dsmrecall [-recursive] [-detail]
Name(s)'
SELFTUNEBUFpoolsize TSM server option to specify whether TSM
can automatically tune the database
buffer pool size. If you specify YES,
TSM resets the buffer cache hit
statistics at the start of expiration
processing. After expiration completes,
if cache hit statistics are less than
98%, TSM increases the database buffer
pool size to try to increase the cache
hit percentage.
The value which TSM will apply will not
exceed 10% of real memory. IBM
recommends a value that is higher than
that.
The default is NO.
SELFTUNETXNsize TSM server option to specify whether TSM
can automatically change the values of
the TXNGroupmax, MOVEBatchsize, and
MOVESizethresh server options. TSM sets
the TXNGroupmax option to optimize
client-server throughput and sets the
MOVEBatchsize and MOVESizethresh options
to their maximum to optimize server
throughput. Default: NO.
Obsoleted in TSM 5.3 because other
performance enhancing changes were made
in the software. (If present in the
file, no error message will be issued,
at least early in the phase-out.)
SendW "Sess State" value from 'Query SEssion'
saying that the server is waiting to
send data to the client (that is,
waiting for data already sent to be
acknowledged by the client node).
If you see the session continually in
SendW state but the Wait Time is "0 S"
and the Bytes Sent keeps increasing,
then it is not the case that the session
is stuck in SendW state. Rather, that is
just the dominant state.
See also: Communications Wait;
Idle Wait; Media Wait; RecvW; Run; SendW
Sense Codes, 3590 Refer to the "3590 Hardware Reference"
manual.
Sequential devices Tape is an obvious, physical example of
a sequential access medium, in which
data can only be appended after the
position where data was last written to
the tape (in-midst updating not
possible). TSM also supports sequential
device definition on disk, via the FILE
device class.
See also: FILE
SERialization A copy group attribute that specifies
(backing up open files) whether an object can be modified during
a backup or archive operation and what
to do if it is. Specified by the
SERialization parameter in the 'DEFine
COpygroup' command.
This parameter affects only upcoming
operations: it has no effect upon data
already stored on the server.
See: Changed; CHAngingretries; Dynamic;
Fuzzy Backup; Shared Dynamic;
Shared Static; Static
SERVER Device type used for a special device
class where the volumes are virtual
(Virtual Volumes) and exist on another
*SM server as archived files.
The data which may be stored across
servers can include DBBackup volumes.
See also: FILE
server A program that runs on a mainframe,
workstation, or file server that
provides shared services such as backup,
archive, and space management to other
various (often remote) programs called
clients.
Server, HSM, specify Specified on the MIgrateserver option
in the Client System Options file
(dsm.sys). Default: the server named on
the DEFAULTServer option.
Server, merge into another server As of TSM 4.1, there is no way to merge
one server into another server, as you
might want to do in transferring a
retiring server system's data and
library to another server. Your only
options are:
- Export from the old server and Import
into the other;
- Run the old server as a parallel
instance on the same platform where
the other server lives, via database
restore. (Doing this without
Export-Import requires that both
servers be of the same operating
system type.)
Server, move to another architecture This will most likely have to be
performed via Export/Import, including
both the server proper and all the
client data, rather than simply moving
the "server" portion of things and have
the new server architecture use the old
server data tapes, as-is.
However, you *might* be able to
accomplish the move via Restore DB: one
customer reports successfully moving a
server from AIX to Solaris via this
method. Note that this is a very gray
area, completely unspecified by Tivoli.
One could conceivably run into problems
even when moving between like
architecture machines, such as from
32-bit Solaris to 64-bit Solaris.
Server, move to same architecture You can rather easily move the TSM
server from one system to another, of
the same architecture, as when upgrading
to a more powerful server. Essentially,
all you have to do is move or copy the
current TSM server database, recovery
log, and storage pool volumes, as is,
retaining the same path names.
You can do 'DSMSERV RESTORE DB' across
systems of the same architecture.
A thumbnail of such a move, on AIX,
using SSA disk and keeping the same IP
address:
- Install the new AIX system, at an AIX
level which is compatible with your
existing TSM level.
- Migrate disk storage pools to tape,
for safety.
- Prevent all sessions and processes,
then run a safety db backup.
- Halt the server and shut down the AIX
system.
- Disconnect the SSA disks and tape
drives from the old system.
- Fire up the new AIX system with the
same IP address as the old one.
- Install your TSM server software on
the new system.
- Connect the ssa disks and tape drives
to the new system.
- Import the volumegroups and mount
file systems.
- Check the volhistory, database, and
logs for placement.
- Make any adjustments needed in the
devconfig and server config files.
- Start the TSM server on the new
system.
Server, prevent all access The 'COMMmethod NONE' server option will
prevent all communication with the
server.
Server, prevent client access Temporarily changing the server options
file TCPPort value to a hoked value will
prevent client access - they utilize a
value coded on their client option file
TCPPort option (default: 1500), which
would prevent them from talking to the
server when its value is different.
See also: DISAble SESSions;
DISABLESCheds
Server, recover to new disk space You may have to recover the *SM server
after the loss of the disks upon which
its Database and Recovery Log resided.
If you keep good records, you know how
much disk space was involved, in order
to recreate the space at the operating
system level. But if you don't know the
sizes, you can allocate a larger area:
The 'dsmserv restore' command will
decrease the DB and Recovery Log to its
original sizes and whatever is left over
will become the Maximum Extension.
Server, restarting after killing, After a server is restarted, do
things to watch out for 'Query DBVolume' and 'Query LIBVolume'
in that a mirror copy could have become
de-synced.
Server, run as non-root (in Unix) The *SM server is conventionally run by
user root, to be able to do anything it
needs to. However, it is possible to run
the *SM server under other than root...
Much of the issue of doing so is in the
ownership of files in the server
directory and its contained files:
adsmserv.licenses (ADSM, not TSM)
adsmserv.lock (ADSM, not TSM)
dsmaccnt.log
dsmerror.log
dsmlicense
dsmserv.dsk
dsmserv.err
dsmserv.opt
nodelock
rc.adsmserv
Likewise, adjust ownership/permissions
of dbvols, logvols and diskpool volumes.
You must also assure that the username
under which the server is to run has
high enough Unix Resource Limits (as in
AIX /etc/security/limits), not
artificially lower-limited by the shell
under which the server is started. Not
accounting for this can result in BUF087
failure of the server (msg ANR7838S).
Downsides: Cannot use Shared Memory.
Server, select from client In the Unix environment, a client may
choose the server to contact, by using
the SErvername in the Client User
Options file, or by doing:
'dsm -SErvername=StanzaName'
'dsmc incremental
-SErvername=StanzaName'
to identify the stanza in dsm.sys which
points to the server by network and port
addresses.
Server, shut down 'HALT' command, after doing a 'DISAble'
to prevent new sessions, 'Query
SEssions' to see what's active, and
'CANcel SEssion' if you can't wait for
running stuff to finish. You should
also 'dismount' any mounted tapes
because the 'halt' does not dismount
them.
Note that this does not shut down HSM
processes such as dsmmonitord and
dsmrecalld, as these are file-system
oriented and need to remain active.
In Unix, it is conventional to shut down
applications in /etc/rc.shutdown,
wherein you could code a dsmadmc
invocation of HALT.
Note that Unix TSM servers
conventionally respond to SIGTERM to
terminate cleanly.
See also: HALT
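The shutdown sequence above can be scripted, for example as a step
in /etc/rc.shutdown. This is a dry-run sketch: the credentials are
placeholders, and DSMADMC defaults to an 'echo' stub so nothing is
really issued until you point it at the real dsmadmc.

```shell
#!/bin/sh
# Dry-run sketch of a clean server shutdown sequence, per the
# steps above. Credentials are placeholders; DSMADMC defaults to
# an 'echo' stub so no commands actually reach a server.
DSMADMC="${DSMADMC:-echo dsmadmc -id=admin -pa=secret}"
$DSMADMC "DISAble SESSions"    # block new client sessions
$DSMADMC "Query SEssion"       # review what is still active
$DSMADMC "CANcel SEssion all"  # if you cannot wait for them
$DSMADMC "HALT"                # stop the server
# (Remember: HALT does not dismount mounted tapes - dismount them
# beforehand.)
```

In live use you would inspect the Query SEssion output before
deciding whether to cancel, rather than cancelling unconditionally.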
Server, split? When the load on one TSM server becomes
excessive, it's time to split out to
another server. Decision factors:
- Expire Inventory remains a
single-process task, and may run far
too long to be acceptable.
- BAckup DB takes too long.
Server, start automatically Conventionally, the installation of the
product installs a server start-up
method in a place standard for the given
operating system, such as /etc/inittab
for AIX:
autosrvr:2:once:
/usr/lpp/adsmserv/bin/rc.adsmserv
>>/var/log/adsmserv/adsmserv.log 2>&1
See also: Server startup
Server, start manually The following steps start the *SM
server proper:
- Make sure that the disks containing
the TSM db, Recovery Log, and storage
pools are varied online to the
operating system.
- In Unix, make sure your Resource
Limits - particularly filesize - are
sufficient for the CPU time, memory,
and file sizes the server will need.
- Now invoke the server: In Unix:
'cd /usr/lpp/adsmserv/bin'
'./dsmserv quiet' (run in bkground)
- or -
'./dsmserv' (run interactively)
or alternately do:
'/usr/lpp/adsmserv/bin/rc.adsmserv &'
Do *not* do './dsmserv &', because
without the "quiet" option it will be
constipated, needing to output to the
tty.
Do 'Query DBVolume' and
'Query LIBVolume' after restart to
assure that all mirrored copies are
synced.
If you use HSM, go start it as well.
(See: HSM, start manually)
See also: Server startup
Server, stop See: Server, shut down
Server command line access 'dsmadmc ...'
Server development site Is Tucson, AZ.
Server directory (executables, Named in the DSMSERV_DIR environment
license file, etc.) variable; defaults to:
AIX: /usr/lpp/adsmserv/bin/
Sun: /opt/IBMadsm-s/bin/
If another directory is to be used, the
environment variable must be set thus.
Ref: Install manual
Server disappeared, handling You find your host system up for some
time, but your TSM server has
disappeared. What should you do?
First, try to determine why...
- Look for the server process, to assure
that it really has gone away. (If the
process is present, see if it is in
some way stopped, and what's causing
it.)
- Look at the last-modified dates of
your recovery log, per file names in
dsmserv.dsk, to get a sense of when it
went away.
- Look for a core/dump file in the
server directory, which certainly
shows when it went away.
- In Unix, you can look at the
/var/adm/pacct files, via 'acctcom' or
like command, to see when the dsm*
processes went away.
- In AIX, do 'errpt -a|more' and look
for a record of the dsmserv process
having failed. Look for any hardware
errors (disk problems, etc.) that
would have precipitated the TSM
failure.
- Check the file systems that the server
uses to assure that they have not
filled.
- Your system should be set up to direct
the output of the server start-up to a
log file, which you can examine.
Note that the real indications of the
problem are trapped in the Activity Log,
which you can't see until the server is
restarted.
Server file locations Are held within file:
/usr/lpp/adsmserv/bin/dsmserv.dsk
(See "dsmserv.dsk".)
Server files Located in /usr/lpp/adsmserv/bin/
Server "hangs" First, check the obvious: inspect your
process table to see if the server
process is in a Stopped state: in Unix
*maybe* someone did a 'kill -STOP' on
it (use 'kill -CONT' to resume it).
If not that, and if you have an
automated tape library, you could
perhaps see if a tape was mounted by the
server and perhaps deduce what the
server was doing.
Also use 'netstat' and/or the public
domain 'lsof' command to see what TCP/IP
connections were active with the server.
Check for datacomm hardware problems
which may be causing TCP/IP connections
to stop/hang and thus clog the server.
Look for an unusually high packet rate:
it is not impossible for someone to
conduct a "denial of service"
bombardment of the server port.
See also: HALT; Server lockout
Server installation date/time 'Query STatus', look for
"Server Installation Date/Time".
Server IP address The *SM server IP address is whatever it
is... There is no server option for
defining its address. Clients will point
to the *SM server through their option
TCPServeraddress. Note that some
libraries communicate with the server
over TCP/IP, and may have the server
network address configured into them.
If you change the server IP address, you
will have to go around to all the
clients to update their TCPServeraddress
values. (That option obviously cannot be
a server-based clientopt.) Don't forget
to update your library, too, if needed.
You may be able to avoid the chore of
changing all the clients if it is
possible for you to define a DNS CNAME
or Virtual IP for your server which
serves the old IP address, as well as
the new, native one. Changing the
server network address has no effect on
storage pool data: your next client
backup, to the new IP address, will be
as incremental as ever.
Server lockout, TCP/IP Connection The server may be irrevocably hung if it
Problem is rejecting TCP/IP connections. If
Unix, you might try using the client on
the server system to access it, changing
the client options file to specify
COMMMethod SHAREDMEM to try getting in
via that alternate communications
method.
See also: HALT
Server looping, 'hung' client sessions If possible, do Query Session for the
Sess State value: anything odd, or
client hitting on server?
Look for any peculiar client conditions
which might have triggered it, like a
client which was Win95 yesterday but is
Linux today, or clients of differing
versions hitting the server.
Use operating system facilities to
identify the looping process or thread,
as ADSM dedicates processes or threads
to specific resources, which may help
pinpoint the problem.
Server name Defaults to "ADSM".
Server name, get 'Query STatus'
Server name, set See: Set SERVername
Server operating system type If you do a client-server command like
'dsmc q sch', the system type should
show up in the "Session established with
server" line.
Server options, query 'Query OPTion'
Server options file A text file specifying options for the
ADSM server. Defaults to
/usr/lpp/adsmserv/bin/dsmserv.opt .
If another filename is to be used, the
DSMSERV_CONFIG environment variable must
be set thus, or specify on -o option of
'dsmserv' command.
Changes in this options file are not
recognized until the server is
restarted. See also: SETOPT
Ref: Install manual.
Ref: Installing the Server...
Server performance - Choose a fast-processor computer for
your server system, preferably one
with multiple CPUs, and possibly
multiple I/O backplanes.
- Employ fast interface card in your
server system, and do not mix fast and
slow devices on one interface where
speed will be governed by the slowest
device on the chain (as is the case
with SCSI).
- Assure that your server system has an
abundance of real memory, which is
vital to the performance of any kind
of server.
- Do a 'Query DB Format=Detailed' and
check the Cache Hit Pct. If it is less
than 98 add database buffers; in the
server options file increase the
BUFPoolsize value. See: BUFPoolsize
The Cache Wait Pct (q.v.) value should
always be zero.
- Don't run the AUDit LICenses command
during high-demand periods, as its
computation of server storage space
can consume much CPU time and
interfere with other server activity
to the point of stalling it. Consider
using the AUDITSTorage server option.
- Do 'Query LOG Format=Detailed' and
check that the Log Pool Pct Wait value
is zero: if otherwise, something in
your operating system environment or
hardware configuration is hampering
access.
- If your server is running in a system
where other things are running,
realize that it can be impeded by the
mix, particularly if it is assigned a
priority (and, in Unix, a Nice value)
which makes it the same or worse than
other processes running in that
system.
- Investigate server options
AUDITSTorage, MOVEBatchsize,
MOVESizethreshold, TXNGroupmax.
- In AIX, check Threads performance
factors. From TSM 4.1 README:
"Possible performance degradation due
to threading: On some systems, TSM
for AIX may exhibit significant
performance degradation due to TSM
using user threads instead of kernel
threads. This may be an AIX problem;
however, to avoid the performance
degradation you should set the
following environment variables before
you start the server:
export AIXTHREAD_MNRATIO=1:1
export AIXTHREAD_SCOPE=S"
- Where clients co-reside in the same
system, use Shared Memory in Unix or
Named Pipes in Windows.
Download the Dave Daun IBM TSM Server
Performance Tuning presentation, IBM
site reference number 1191934.
See also: MVS server performance
Server PID 'SHow THReads'
Server processes, number of See: Processes, server
Server restart date/time 'Query STatus', look for
"Server Restart Date/Time".
Server script, cancel There has been no way to terminate a
script as a whole, as TSM provides no
"handle" for the script itself.
However, you can program your script to
include potential break points which
will cause it to exit upon a condition
which you can externally set. For
example, you have a daily script called
DAILY, and in it you code the test:
Query SCRipt DAILY-CANCEL
if (RC_OK) exit
Now, to get the running script to
cancel, you do simply:
COPy SCRipt DAILY DAILY-CANCEL
When the script finishes its current
action and performs the test, it will
find the "cancel" version of the script
to exist and will exit, whereupon you
can then DELete SCRipt DAILY-CANCEL.
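A minimal sketch of how such a break
point sits inside the DAILY script
(names follow the example above):

```text
/* ...long-running step 1... */
Query SCRipt DAILY-CANCEL
if (RC_OK) exit
/* ...long-running step 2... */
```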
Server script, delay There are occasions in server scripts
where you need to introduce a delay
between operations; but there is no
"Sleep" command or the like. The most
effective way, I have found, is to use
the SHOW VOLUMEUSAGE command, which is
well known to take time but produce
little output, so is a good candidate.
(I did think about doing a 'Query ACtlog
BEGINDate=-999 Search=garbage', which
would certainly take time; but that
would be recursive, each day adding more
and more finds of "garbage" from all
preceding days.)
Server Script, issue OS command from There is no way to directly issue an
operating system command from a Server
Script. However, it can be done
indirectly, by taking advantage of
client schedules, which can issue OS
commands. The best way is to use a
one-time client schedule.
Note that some commands, like 'Query
MEDia' and 'Query DRMedia', can generate
commands which can be written to an OS
file, which can then be defined and run
as a script invoked from the running
script, to for example send email about
a certain volume.
Conversely, you can invoke server
functions from outside the server, as in
having a Perl script run dsmadmc, and
thereby achieve more sophisticated
processing.
See: DEFine CLIENTAction
Server Scripts Facility introduced in ADSMv3 to store
administrative scripts in the *SM
database, which can be conditionally
'RUn' to perform administrative tasks.
The Scripts facility is a lot like
Macros, except that Scripts are stored
in the TSM database instead of in the
client file system, and scripts provide
some conditional logic capability.
Server Scripts can be run from
Administrative Schedules - but
restrictions on them prohibit using
redirection.
Disallowed characters: Do not use Tab
characters!! Server Scripts insidiously
report lines containing them as errors!!
Continuation character: -
Statements: IF EXIT GOTO
IF coding: IF (Curr_RC) __Action__
where the return code tested is from a
preceding server command, per any of
the possible RC_* values summarized in
appendix B of the Admin Ref manual; and
Action may be a GOTO or any server
command.
GOTO coding: The GOTO specifies a
labeled target, as in "GOTO step_1" and
"step_1:". The label may appear on a
line by itself or heading a line which
includes another element, such as a
server command or EXIT.
Comments: Code in C style: /* */
Redirection: Not possible! To
compensate, consider using commands like
Query MEDia and Query DRMedia, which can
create an output file by parameter.
What's lacking: No Else, no Not (no
negation, as in "if (! ok)").
Line numbering: When you DEFine SCRipt,
the line numbers are assigned starting
at 1, then each line is five more than
the previous one, so you end up with
lines numbered: 1, 6, 11, 16, 21, etc.
This will probably remind you of the old
Dartmouth BASIC language, where the gaps
afforded you modest room to insert lines
in between, with UPDate SCRipt.
Loops: Dangerous - because there is no
way to query or cancel a server script,
meaning that a loop could be infinite
and impair your server without you
having a good way to detect or do
anything about it.
Naming: Keep the script name as short as
feasible! Every line of output resulting
from the execution of the script is
reported in the Activity Log on ANR2753I
messages - prefaced by the name of the
script. Long script names make for a lot
of log inflation, particularly in
causing output to span lines.
Beware revising a running script, as it
appears that the server executes scripts
by interpretation, line by line.
There is no way to interrupt a
multi-command script. This causes
customers to shy away from server
scripts.
Scripts cannot be run from the server
console, for some obvious reasons:
(a) some scripts create a lot of
output; (b) a script could start a
foreground process, and for as long as
the script ran, the console would be
tied up, unavailable for any other work.
Ref: Admin Guide, Automating Server
Operations, Tivoli Storage Manager
Server Scripts; Admin Ref appendix on
Return Code Checking
See also: DEFine SCRipt; RUn
Server scripts, editing The 'UPDate SCRipt' command allows for
editing an existing server script, but
is exceedingly awkward. The best
approach is to maintain your server
scripts as files outside of TSM, then do
'DELete SCRipt' and
'DEFine SCRipt ... FILE=____' to put
changes into effect. This allows you to
utilize your favorite text editor to
quickly make contextual changes, and to
have safety copies of your server
scripts in case of server loss or adding
the same script to another server.
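That edit-outside-TSM cycle might look
like this (script and file names are
examples):

```text
vi /usr/local/tsm/scripts/daily.scr
dsmadmc -id=admin -password=xxxxx "DELete SCRipt DAILY"
dsmadmc -id=admin -password=xxxxx "DEFine SCRipt DAILY FILE=/usr/local/tsm/scripts/daily.scr DESCription='Daily housekeeping'"
```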
Server scripts, move between servers Do 'Query SCRipt scriptname FORMAT=RAW
OUTPUTFILE=____' to a file, move the
file to the other system, and then do a
'DEFine SCRipt ... FILE=____' to take
that file as input.
Still, the best overall approach is to
maintain your complex server scripts
external to the TSM server and re-import
after editing.
In a more elaborate way, this can be
achieved through TSM's Enterprise
Configuration, with a Configuration
Manager server and Managed Server.
Server Scripts, supplied with TSM The server Quick Start manual describes
installing the scripts.smp suite of
sample server scripts which are supplied
with the server.
See also: SQL samples
Server session via command line Start an "administrative client session"
to interact with the server from a
remote workstation, via the command:
'dsmadmc', as described
in the ADSM Administrator's Reference.
Server Specific Info Is the NetWare Directory Services
info; i.e., Users and Groups.
Server stanza A portion of the Client System Options
file, typically starting with the
keyword "SErvername", which governs
communicating with that one server.
An ADSM client may communicate with more
than one server, and thus can have
multiple server stanzas within the file.
The server with which the client usually
interacts will be coded on the
DEFAULTServer line, in the section of
the file which precedes the server
stanzas.
(Note that the "server names" in this
file are just arbitrary names for the
stanzas, though they are typically the
actual names of the servers. It is the
TCPServeraddress which actually
identifies the server to communicate
with.)
Many client options pertain to a given
server and so must appear within each
respective server stanza. The Client
Options Reference topic of the
Backup-Archive Clients manual lists the
options which may precede server stanzas
in the options file.
Server startup (dsmserv) Begins in /etc/inittab, which invokes:
ADSM: /usr/lpp/adsmserv/bin/rc.adsmserv
TSM: /usr/tivoli/tsm/server/bin/
rc.adsmserv
which does 'dsmserv quiet' to start
the primary daemon process, which in
turn spawns as many children as it
needs to do its work.
See also: Processes, server
Server startup, prevent interference During extraordinary server restarts,
you may need to suppress normal
activities - which you may do by adding
the following options to dsmserv.opt
file prior to server restart:
DISABLESCheds Yes
NOMIGRRECL
(NOMIGRRECL is an undocumented option to
suppress migration and reclamation.)
Server startup action A site may want the *SM server to
perform a certain action after the
server is restarted. The product has no
provision for a start-up action. The
simplest way to do it is to modify the
server start-up script (e.g.,
rc.adsmserv) to incorporate a delayed
dsmadmc to incite the action after the
server has gotten settled in.
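One hedged sketch of such a
modification, appended to the start-up
script after dsmserv is launched
(delay, credentials, and the command
itself are examples, not requirements):

```shell
# Give the server time to settle, then incite the desired
# action via an administrative session, without holding up
# the rest of the boot sequence.
( sleep 300
  dsmadmc -id=admin -password=xxxxx "Query DBVolume" ) &
```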
Server startup considerations It takes some minutes for the *SM
server to become fully ready when it is
restarted: client sessions may be
disallowed or delayed during this time.
During start-up, the DB mirrors have to
be re-synced.
When the server comes up, Expire
Inventory is always started
automatically.
Another implicit server start-up task is
an Audit Library - which may not be
explicitly evidenced in the Activity
Log, except for some ANR8455E affiliated
messages. In the case of a 3494, this
operation will examine the volume
history info and "fix" any Category Code
values which do not agree with Scratch
or Private values which the server
believes the tapes should be. This is
something to consider if you attempt
"loose" sharing of a 3494 between two
TSM servers.
Realize that the database buffer cache
that a long-running server had built up
is gone and has to be reinvested when a
server is restarted, which can result in
some slower service than when the server
has been up for some time.
Server startup resources The server needs the following at
startup:
1. Access to the option files:
found via the DSMSERV_CONFIG environment
variable, or in the current directory
2. Access to dsmserv.dsk:
must come from the current directory
3. Access to auxiliary modules:
found via the DSMSERV_DIR environment
variable, or in the current directory
4. System needs access to the code:
via explicit path information or
through the PATH environment variable
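As a sketch, a wrapper that starts the
server from outside its install
directory might set these up (paths are
the typical AIX defaults, not
requirements; dsmserv.dsk is still
found only via the current directory):

```shell
# Point the server at its auxiliary modules and options file
# (example paths; see the "Server options file" entry).
export DSMSERV_DIR=/usr/tivoli/tsm/server/bin
export DSMSERV_CONFIG=/usr/tivoli/tsm/server/bin/dsmserv.opt
echo "DSMSERV_DIR=$DSMSERV_DIR"
echo "DSMSERV_CONFIG=$DSMSERV_CONFIG"
# Then cd to the instance directory (where dsmserv.dsk lives)
# before invoking dsmserv, e.g.:
#   cd /your/instance/dir && dsmserv quiet
```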
Server status 'Query STatus'
- or -
SELECT * FROM STATUS
Note that arrangement and content may
vary in the results from the two
commands above.
Server TCP/IP port number, query 'Query STatus' report entry: The TCP/IP
port on which the server listens for
client requests.
See also: TCPPort server option
Server TCP/IP port number, set Hard-code in the TCPPort server option
(q.v.).
Server version number From a server session (dsmadmc) you do:
'Query STatus'.
Server version/release number & paying You have to pay for a new license to use
a new version or release level of the
product. For example, you have to pay
to acquire and use TSM 4.1. When 4.2
comes out, you have to pay again. Only
maintenance fixes within a release are
free, downloadable from the Tivoli web
site.
Base server levels, such as 5.2, may be
downloaded: if you have a Passport
Advantage contract, you can download
software from the Passport Advantage
website.
SERVER_CONSOLE Special administrator established by
*SM server installation which allows
administration from the server console
(only), by virtue of starting ADSM from
the server console and remaining in
control of it. This is what you need to
use in the case of having formatted a
database and thus starting with it empty
of any definitions. From there you can
establish initial site definitions
(register administrators, etc.).
If your TSM server is already up and
running via a normal rc.adsmserv start,
you cannot normally use SERVER_CONSOLE
to access it: The SERVER_CONSOLE user
ID does not have a password. Therefore,
you cannot use the user ID from an
administrative client unless you set
authentication off.
An administrator with system privilege
can revoke or grant new privileges to
the SERVER_CONSOLE user ID. However, you
cannot do any of the following to it:
- Register or update
- Lock or unlock
- Rename
- Remove
- Route commands from it
Msgs: ANS8034E
Ref: Admin Guide, "Managing the Server
Console"; Admin Ref, "Using the Server
Console"
Server-free backup Offloads your server systems by having
the SAN perform Backups and Restores -
of volume images. (Server-free does not
operate at the file level.)
Exploits the capabilities of network
storage and peer-level device
communication on a SAN for the data to
move from one storage device in the SAN
to another without going through a
server, eliminating server work. The SAN
knows where the data is and where it is
going and handles the transport without
the assistance of the client node.
Uses the SCSI-3 Extended Copy command to
do full-volume backup and restore: the
TSM server issues the command, which is
carried out by the SAN's data mover.
Initially implemented on Windows 2000 -
as Server-free is a special form of the
standard Windows 2000 Image Backup.
Supports Raw and NTFS volumes, but not
FAT volumes.
Available in a TSM 5.1 PTF made
available 3Q2002.
Server-free operations made necessary
the introduction of Path definitions for
TSM tape libraries and tape drives.
Ref: TSM 5.1 Technical Guide
See also: LAN-free; OBF; SDG
Server-to-server ADSM Version 3 enables multiple ADSM
servers within an enterprise to be
configured and administered from a
central location. ADSM Version 3
server-to-server communications provides
the foundation for configuring multiple
ADSM Version 3 servers in an
enterprise.
Ref: ADSMv3 Technical Guide redbook, 6.1
ADSM Server-to-Server Implementation and
Operation redbook (SG24-5244)
See: DEFine SERver; Set SERVERHladdress;
Set SERVERLladdress
"server-to-server" module Supports Virtual Volumes and thus
electronic valulting, exports/imports
directly between servers, etc. Note
that this module is extra charge.
Ref: Redbook: ADSM Server-to-Server
Implementation and Operation
(SG24-5244).
Server-to-server IP address and The DEFine SERver command specifies
Port number these via the HLAddress and LLAddress
operands, respectively. The port number
is usually the same as the usual TCPPort
server option value.
See also: Set SERVERHladdress;
Set SERVERLladdress
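For example (server name, password, and
addresses are placeholders):

```text
DEFine SERver TSM2 SERVERPAssword=secret HLAddress=192.0.2.45 LLAddress=1500
```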
Serverfree data bytes transferred Client Summary Statistics element:
The total number of data bytes
transferred during a server-free
operation. If the ENABLEServerfree
client option is set to No, this line
will not appear.
See also: Server-free
SERVERHladdress See: Query SERver; Set SERVERHladdress
SERVERLladdress See: Query SERver; Set SERVERLladdress
SErvername (Unix only) Client System Options file (dsm.sys)
option which leads and labels the stanza
(distinct subsection) in that file which
contains the TCP network address, port
number, and other specs which pertain
only to the set of definitions which you
want to prevail in accessing that
server. Note that this name is a STANZA
NAME ONLY: IT IS *NOT* NECESSARILY THE
NAME OF THE SERVER AS DEFINED ON THE
SERVER BY THE 'SET SERVERNAME' COMMAND
THERE!
Name length: 1 - 64 characters.
The stanza name may initially be
"server_a", as installed.
This stanza name may then be referenced
by DEFAULTServer statement at the head
of the Client System Options file, or by
a SErvername statement in the Client
User Options file (dsm.opt), or by the
dsm/dsmc -SErvername command line
option.
This stanza name thus serves as a level
of indirection in identifying and
reaching the server. Once reached by
the physical addresses in the stanza,
the server returns its actual name in
the ANS5100I message returned in a
dsmadmc session.
See also: DEFAULTServer; SET SERVERNAME
-SErvername=StanzaName Same as SErvername, but for command
line. Using -SErvername on the command
line does not cause MIgrateserver to use
that server.
Ref: "Using the UNIX Backup-Archive
Clients" and "Installing the Clients".
Servers The Client System Options File,
/usr/lpp/adsm/bin/dsm.sys, lists all
servers which client users may contact
via either the default Client User
Options File (/usr/lpp/adsm/bin/dsm.opt)
or an override file named by the
DSM_CONFIG environment variable or via
-OPTFILE on the command line. If the
invoker does not specify a server, the
first one coded in the Client System
Options File is used.
Servers, multiple, on one machine Advantages:
(two servers on one system) 1. Less hardware to manage, as compared
to multiple servers on multiple systems.
2. Attached tape resources can be shared
3. Disk resources can be moved between
instances without an outage.
4. Multiple interfaces can be shared
5. One TSM server license
6. Can be implemented in a few hours
7. Works around application bottlenecks
8. Cheaper
Disadvantages:
1. Harder to upgrade
2. Memory allocation can be an issue
Refer to "Server startup resources" for
general info on where the server looks
for its resources. The server instance
is determined by the directory wherein
it is started. So...
- Create a separate server directory,
with its own config files and symlinks
to the executable modules.
- Create the new ADSM server database
and recovery log. (These will be
referred to by the dsmserv.dsk file
which will reside in that directory.)
- The dsmserv.opt TCPport option should
specify a unique port number.
Clients which are to use that server
should have their TCPPort client
option specify that port number.
- Customize your client option files to
point to the appropriate server.
Note that you can set environment
variables DSMSERV_CONFIG, DSMSERV_DIR, and
PATH to point to resources.
Ref: Admin Guide section "Running
Multiple Servers on a Single Machine"
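The directory preparation can be
sketched in shell like this (paths are
examples; adapt the binary location to
where your product is installed):

```shell
# Sketch: prepare a second server instance directory.
INSTDIR=/tmp/tsm-instance2
mkdir -p "$INSTDIR"
# Symlink the executable so this instance runs from its own
# directory (source path is an example install location):
ln -sf /usr/tivoli/tsm/server/bin/dsmserv "$INSTDIR/dsmserv"
# Give the instance its own options file with a unique port:
printf 'TCPPort 1502\n' > "$INSTDIR/dsmserv.opt"
# Start from within the directory so its own dsmserv.dsk
# (and thus its own db/log) is used:
#   cd "$INSTDIR" && ./dsmserv quiet
```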
Service Volume category 3494 Library Manager category code FFF9
for a tape volume which has a unique
service volser, for CE use. Host
systems are not made aware of Service
Volumes, because of their engineering
nature.
Services for Macintosh NT facility for serving Mac files.
ADSM can back them up from the NT; but
the 3.7 and 4.1 client README file says:
"Mac file support is available only for
files with U.S. English characters in
their names (i.e. names that do not
contain accents, umlauts, Japanese
characters, Chinese characters, etc.)."
See also: unicode; USEUNICODEFilenames
"Sess State" Entry in 'Query SEssion' output; reveals
the current communications state of the
server. Possible values:
End The session is ending.
IdleW Waiting for client's next
request.
MediaW The session is waiting for
access to a serially usable
volume (e.g., tape).
RecvW Waiting to receive an expected
message from the client.
Run The server is executing a client
request.
SendW The server is waiting to send
data to the client.
Start The session is starting
(authentication is in progress).
See also the individual explorations of
each of the above states in this
QuickFacts.
Session A period of time in which a user can
communicate with an ADSM server to
perform backup, archive, restore, and
retrieve requests, or to perform space
management tasks such as migrating and
recalling selected files.
HSM sessions occur for the system where
the file system is resident.
Session, cancel 'CANcel SEssion Session_Number|ALl'
Session files What files is a session currently
sending? Do 'Query SEssion F=D' to get
the current output volume, then on that
do 'Query CONtent ______ COUnt=-5' to
see the most recent five files.
Session numbering Begins at 1 with each *SM server
restart.
Session port number Shows up on msg ANR0406I when the
session starts, like:
(Tcp/Ip 100.200.300.400(4330)).
Session start time Not revealed in Query SEssion: you have
to do 'SELECT * FROM SESSIONS' and look
at START_TIME.
Session timeout problem during backup Try increasing IDLETimeout value, or
choose "SLOWINCREMENTAL YES" option
(q.v.) for those clients supporting it.
Session type 'SHow SESSion', which reports Backup
and Archive sessions.
SESSION_TYPE SQL: Column in SESSIONS table,
identifying the session type, as "Admin"
or "Node".
SESSIONINITiation TSM 5.2+ client option to control
(-SESSIONINITiation=) whether the server or client initiates
sessions. The overriding purpose of this
option is to prevent users on the client
system from initiating sessions with the
TSM server. It is also used with
firewalls to allow the server to
initiate scheduled sessions with the
client, to perform backups and the like
(which could not be done prior to 5.2,
with SCHEDMODe PRompted; but the
mechanism by which this is achieved is
not described in any IBM doc thus far.
One can deduce that 5.2 changes the
paradigm such that the server contact
initiates the full session, rather than
inciting the client to contact the
server as in the old Prompted paradigm.)
Placement: Use with the client schedule
command. Can be used on command line.
Not usable with the API.
Placement: In client system options file
(dsm.sys).
Syntax:
SESSIONINITiation [Client|SERVEROnly]
where
Client Specifies that the client will
initiate sessions with the server by
communicating on the TCP/IP port
defined with the TCPPort server
option. This is the default.
SERVEROnly Specifies that the client
understands it to be the case that the
server will not accept client requests
for sessions. All sessions must be
initiated by the server - prompted
scheduling on the port defined on the
client with its TCPCLIENTPort option.
So...if the client cannot initiate
actions, then how can a Restore be
accomplished? Via a client schedule on
the TSM server, via DEFine SCHedule or
DEFine CLIENTAction with ACTion=REStore.
Caution: This option disables a lot of
functionality, and should be activated
only after having fully set up the
client and tested its general
interoperability as intended after the
option is in effect. (See APAR IC37509)
Ref: Tivoli Field Guide "Using the
Tivoli Storage Manager Central
Scheduler"
SESSIONINITiation TSM 5.2+ server option to control
whether the server or client initiates
sessions. Though often couched in terms
of firewall use, the overriding purpose
of this option is to prevent people on
the client system from initiating
sessions with the TSM server. Note that
this option does not perform any
firewall magic: firewalls are
principally intended to keep the server
from being accessed via various port
numbers, whereas communications out from
the server are generally uninhibited.
Syntax:
SESSIONINITiation=[Client|SERVEROnly]
where
Client Specifies that the client will
initiate sessions with the server by
communicating on the TCP/IP port
defined with the TCPPort server
option. This is the default.
SERVEROnly Specifies that the server
will not accept client requests for
sessions. All sessions must be
initiated by server-prompted
scheduling on the port defined for the
client with the REGISTER or UPDATE
NODE commands.
Set the node's HLADDRESS and LLADDRESS
values as appropriate.
Note that if you put SERVEROnly into
effect for a node, it behooves you to
put the equivalent client option into
effect, to avoid confusion on the client
side.
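For example, to switch an existing node
to server-initiated sessions (node
name, addresses, and port are
placeholders):

```text
UPDate Node PAYROLL1 SESSIONINITiation=SERVEROnly HLAddress=192.0.2.77 LLAddress=1501
```

with the matching client-side line in
dsm.sys: SESSIONINITiation SERVEROnly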
SESSIONS SQL Table. Columns and samples:
SESSION_ID: 6692
START_TIME: 2002-12-06 09:20:05.000000
COMMMETHOD: Tcp/Ip
STATE: Run
WAIT_SECONDS: 0
BYTES_SENT: 1333085
BYTES_RECEIVED: 3488
SESSION_TYPE: Node
CLIENT_PLATFORM: AIX
CLIENT_NAME: SYSTEM7
OWNER_NAME:
MEDIA_STATE: Current output volume:
001647.
(The following columns are in TSM 5:)
INPUT_MOUNT_WAIT:
INPUT_VOL_WAIT:
INPUT_VOL_ACCESS:
OUTPUT_MOUNT_WAIT:
OUTPUT_VOL_WAIT:
OUTPUT_VOL_ACCESS:
LAST_VERB: CSResults
VERB_STATE: Recv
Sessions, client, number of See: RESOURceutilization
Sessions, maximum, define "MAXSessions" value in the server
options file (dsmserv.opt).
Sessions, maximum, query 'Query STatus', look for "Maximum
Sessions".
Sessions, multiple See: RESOURceutilization
Sessions, prevent If the server is up, 'DISAble SESSions'
will prevent client nodes from starting
any new Backup/Archive sessions.
See also: DISAble SESSions;
DISABLESCheds; Server, prevent client
access
SET Access See: dsmc SET Access
Set ACCounting On ADSM server command to create
per-session records, including KB data
volumes sent from client.
Set ACTlogretention TSM server command to specify the
retention period, in days, for
Activity Log records. Syntax:
'Set ACTlogretention N_Days'.
Default: 1 day.
Will result in messages
ANR2102I Activity log pruning started
ANR2103I Activity log pruning completed
in the Activity Log.
Remember that the Activity Log lives in
the TSM server database, so be
conscious of how much space that can
consume over that many days.
Important: It is absolutely vital that
you somehow have at least six months
worth of Activity Log records, in that
you need to be able to look back at what
happened to specific volumes, etc. You
can accomplish this by simply leaving
the Activity Log records around that
long, or you can periodically capture
old records before they are pruned, as
via 'Query ACtlog BEGINDate=-999 >
SomeFile'.
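The periodic-capture approach above can be scripted with a dated output file so successive captures do not overwrite each other. A rough sketch, with the dsmadmc invocation commented out since its credentials and query window are site-specific assumptions:

```shell
# Name a capture file after the current year and month, then
# (in real use) append Activity Log records to it.
outfile="actlog.$(date +%Y%m).txt"
# dsmadmc -id=admin -password=secret \
#     "Query ACtlog BEGINDate=-31" >> "$outfile"
echo "$outfile"
```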
Set AUthentication Server command, with System privilege,
to specify whether administrators and
client nodes need a password to access
the server. Choices:
ON Administrators and client nodes
need a password to access the
server. This is the default.
OFF Administrators and client nodes do
not need a password to access the
server.
See also: REGister Node
Set CLIENTACTDuration TSM server command to specify the number
of days that a schedule, defined with
the DEFine CLIENTAction command, is to
live as a server definition. TSM
automatically deletes the schedules and
associations with nodes from the
database when the scheduled start date
plus the specified number of days have
passed the current date. Records for the
event are deleted regardless of whether
the client has processed the schedule.
Syntax: Set CLIENTACTDuration Ndays
See also: DEFine CLIENTAction
Set CONTEXTmessaging ON Server command to get additional info
when ANR9999D messages occur: context
from server components, including
process name, thread name, session id,
transaction data, locks that are held,
and database tables that are in use.
'Set CONTEXTmessaging ON|OFf'
Set DRMCHECKLabel TSM DRM command to control whether a
tape's media label is read and verified
before it is checked out of the library.
Set DRMCHECKLabel Yes|No
The default is Yes.
Set DRMCMDFilename Server command to name a file that can
contain the commands created when the
MOVe DRMedia or Query DRMedia commands
are issued without specifying a
CMDFilename. Syntax:
'Set DRMCMDFilename file_name'
If you are not licensed for DRM, this
command will work but will complain
about the absence of a license, msg
ANR6752W.
Set DRMCOPYstgpool Server command for DRM, to specify names
of the copy storage pools to be
recovered after a disaster. TSM uses
these names if the PREPARE command does
not include the COPYSTGPOOL parameter.
If the MOVe DRMedia or Query DRMedia
command does not include the COPYSTGPOOL
parameter, the command processes the
volumes in the MOUNTABLE state that are
in the copy storage pool named by the
SET DRMCOPYSTGPOOL command. At
installation, all copy storage pools are
eligible for DRM processing. Syntax:
'Set DRMCOPYstgpool
Copy_Pool_Name[,Copy_Pool_Name]'
Do 'Set DRMCOPYstgpool ""' to nullify
specific names and allow all copy
storage pools to participate.
Use the Query DRMSTatus command to
display the current settings.
Set DRMDBBackupexpiredays DRM parameter; tells *SM how long to
keep the DB backup tapes that it is
managing before finally expiring them.
Stipulations for this to work:
- The age of the last volume of the
series has exceeded the expiration
value set by this command.
- For volumes that are not virtual
volumes, all volumes in the series are
in VAULT state.
- The volume is not part of the most
recent database backup series
(BACKUPFULL + BACKUPINCRs).
Also watch out for a BACKUPINCR which is
on disk, which may thwart expiration: do
MOVe DRMedia to deal with those and
allow dbbackups to expire.
Do not use DELete VOLHistory on DB
backup volumes when DRM is in charge.
Use Query DRMSTatus to check.
Syntax: Set DRMDBBackupexpiredays Ndays
where Ndays can be 0 - 9999
The DBBackup volumes remain until the
specified number of days has passed and
Expiration is run. This necessarily
overrules any retention you may think
you are doing in DELete VOLHistory which
intends to keep the volumes longer.
Set DRMNOTMOuntablename Command to specify the name of the
offsite location for storing the media.
At installation, the name is set to
NOTMOUNTABLE. Use the Query DRMSTatus to
see the location name. The location name
is used by the MOVe DRMedia command to
set the location of volumes that are
moving to the NOTMOUNTABLE state.
'Set DRMNOTMOuntablename location'
where the location name can be up to 255
chars.
If this Set command has not been issued,
the default location is NOTMOUNTABLE.
Set DRMRPFEXpiredays DRM parameter to specify when recovery
plan files are eligible for expiration.
Syntax: Set DRMRPFEXpiredays Ndays
Set INVALIDPwlimit TSM server command to define the maximum
number of logon attempts allowed before
the node involved is locked.
Code: 0 - 9999.
Default: 0, meaning no checking
See also: Set MINPwlength;
Set PASSExp
Set INVALIDPwlimit attempts ADSMv3 server command to set a limit on
the number of invalid password attempts
a prospective session may make.
Set LICenseauditperiod Specifies the period, in days, between
automatic license audits performed by
the ADSM server. Syntax:
'Set LICenseauditperiod <N_days>'
where N_days can be 1-30.
Default: 30 days.
See also: Query STatus
Set LOGMode Server command to set the mode for
saving log records, which in turn
determines the point to which the
database can be recovered. Syntax:
'Set LOGMode Normal|Rollforward'
Normal The Recovery Log keeps only
uncommitted transactions. Database
recovery involves restoring from the
most recent db backup only: all
transactions since that time are lost!!
(This is particularly bad where users
do Archive with the DELetefiles option:
the user files will be lost!)
No automatic backups are possible.
TSM db mirroring is thus very important
in this case, to reduce the possibility
of database loss.
Because of the potential for data loss,
Normal mode is undesirable,
antithetical to the intention of the
product.
Rollforward The Recovery Log keeps
*all* transactions since the last
database backup. Database recovery
involves the most recent db backup and
the intact Recovery Log contents such
that all activity up to the current
time is preserved. Automatic db backups
are performed (via DBBackuptrigger).
Note that TSM db mirroring is valuable,
but not as essential in this case; but
Recovery Log mirroring is more
important.
Other factors in choice: Rollforward
makes sense when the time it takes to
run an incremental backup is much less
than what it takes to run a full backup.
If you have the time to perform full
backups at least once a day, Normal mode
may be a choice for you. In either case,
it is always best to use TSM mirroring
for the database and recovery log. And,
in either case, allocate a capacious
recovery log, as a complex mix of
clients can result in a lot of
uncommitted transaction space.
If currently using Rollforward, you can
Set LOGMode Normal, then switch back
(which triggers a full db backup).
Note that switching from Normal to
Rollforward doesn't take effect until
the next full database backup, which is
necessary in order to have a baseline
from which the log can be used to
recover a database.
Perspective: Many customers report
having given up on Rollforward, given
its limited advantages and the big
problem of the Recovery Log filling,
with little hope of DBBackuptrigger
curing the problem in a timely manner.
Default: Normal
Msgs: ANR2362E
Ref: Admin Guide, "Database and Recovery
Log Protection" and "Auditing a Storage
Pool Volume"
See also: DBBackuptrigger
Set MAXSCHedsessions %sched *SM server command to regulate the
number of sessions that the server can
use for processing scheduled work, as a
percentage of the total number of
server sessions available (MAXSessions).
Roughly speaking, this regulates the
percentage of "batch" sessions to
"interactive" sessions.
See also: MAXSessions
Set MINPwlength TSM server command to set the minimum
length of a password.
Privilege level required: System
Syntax: 'Set MINPwlength length'
Specify a length from 0 - 64, where 0
means that the password length is not
checked.
Default: 0
See also: Set INVALIDPwlimit;
Set PASSExp
Set PASSExp *SM server command to specify password
expiration periods.
'Set PASSExp N_Days [Node=nodelist]
[Admin=adminlist]'
Note that this value can override a zero
PASSExp value in REGister Node.
Set Password See: dsmc set password
Set QUERYSCHedperiod Server command to regulate how often
client nodes contact the server to
obtain scheduled work when it is running
in SCHEDMODe POlling operation. This
can be used to universally override the
client QUERYSCHedperiod option value.
Syntax: Set QUERYSCHedperiod <N_hours>
In the absence of this server setting,
clients are free to hit the server as
often as they like.
Check server value with 'Query STatus'.
Set RANDomize TSM server command to specify the
degree to which schedule start times are
randomized within the temporal startup
window of each schedule, for clients
using the client-polling mode
("SCHEDMODe POlling" option - but not
"SCHEDMODe PRompted"). Syntax:
'Set RANDomize Randomize_Percent'.
To verify: 'Query STatus', look for
"Schedule Randomization Percentage"
value.
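As a rough arithmetic illustration of what the percentage means (hedged; verify against the Admin Reference for your level): with a 2-hour schedule startup window and 'Set RANDomize 25', polling clients have their start times spread across roughly the first quarter of the window.

```shell
# Minutes over which client start times are distributed,
# given the window length and the randomization percentage.
window_minutes=120
randomize_pct=25
echo $(( window_minutes * randomize_pct / 100 ))
```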
Set SCHEDMODes Server command to determine how the
clients communicate with the server to
begin scheduled work. Each client must
be configured to select the scheduling
mode in which it operates. This command
is used with the SET RETRYPERIOD command
to regulate the time and the number of
retry attempts to process a failed
command. Syntax:
Set SCHEDMODes ANY|POlling|PRompted
Default: ANY
See also: SCHEDMODe
Set SERVERHladdress To set the high-level address
(IP address) of a server. TSM uses the
address when you issue a DEFine SERver
command with CROSSDEFine=YES. Syntax:
'Set SERVERHladdress ip_address'
See also: DEFine SERver;
Set SERVERLladdress
Set SERVERLladdress To set the low-level address (port
number) of a server. TSM uses the
address when you issue a DEFine SERver
command with CROSSDEFine=YES. Syntax:
'Set SERVERLladdress tcp_port'
See also: DEFine SERver;
Set SERVERHladdress
Set SERVername TSM server command to set the name of
the server, which is used in the
following ways:
- The server feeds this name back to
the client when the client contacts
the server by the network and port
address contained in its client
options file.
- In DEFine PATH commands where
SRCType=SERVer.
- Is displayed in the prompt within
dsmadmc sessions.
Syntax: 'Set SERVername Some_Name'
The name can be up to 64 characters, and
must be unique across the Tivoli server
network.
Changing this server name does not
affect the client's ability to find the
server, because that is set in the
client options file by physical
addressing; however, a client with
"PASSWORDAccess Generate" has the server
name stored with the encrypted password
on the client, so the client
administrator will have to redo the
password. THIS CAN HAVE FAR-REACHING
RAMIFICATIONS.
Assigning arbitrary server names
allows you to run multiple servers, or
to uniquely identify servers on multiple
systems. The ADSM "Test Drive" works
this way.
Note that the name is that used between
the server and client, and has nothing
to do with the server's name in the
physical network namespace.
Set SERVERPAssword To set the password for communication
between servers to support enterprise
administration and enterprise event
logging and monitoring. Syntax:
'Set SERVERPAssword password'
Set SERVERURL To specify a Uniform Resource Locator
(URL) address for accessing the server
from the web browser interface. TSM uses
this address when a server is defined
and cross definition is permitted.
'Set SERVERURL url'
Query: Query STatus, see "Server URL"
Set SQLDATETIMEformat To control the format in which SQL date,
time, and time stamp data are displayed.
See your SQL documentation for details
about these formats. Syntax:
'Set SQLDATETIMEformat
[Iso|Usa|Eur|Jis|Local]'
Where:
Iso Specifies the International
Standards Organization (ISO)
format. ISO is the default.
Usa Specifies the IBM USA standard
format.
Eur Specifies the IBM European
standard format.
Jis Specifies the Japanese
Industrial Standard Christian
Era. Currently the JIS format is
the same as the ISO format.
Local Site-defined. Currently, the
LOCAL format is the same as the
ISO format.
See also: Query SQLsession
Set SQLDISPlaymode To control how SQL data types are
displayed. Syntax:
'Set SQLDISPlaymode [Narrow|Wide]'
Where:
Narrow Specifies that the column
display width is set to 18. Any
wider string is forced onto
multiple lines at the client.
This is the default.
Wide Specifies that the column
display width is set to 250.
See also: -COMMAdelimited; -DISPLaymode;
-TABdelimited
See also: Query SQLsession
Set SQLMATHmode to round or truncate decimal numbers for
SQL arithmetic. Syntax:
'Set SQLMATHmode Truncate|Round'
Default: Truncate
See also: Query SQLsession
Set SUBFILE TSM 4.1+ server command to allow clients
to back up subfiles. Product
installation sets it to No; set it to
Client to allow such backups. Do Query
STatus in the server to check.
See also: Adaptive Differencing;
SUBFILE*
Set SUMmaryretention TSM 3.7 server command to specify the
number of days to keep information in
the SQL activity Summary table. Syntax:
Set SUMmaryretention Ndays
where Ndays specifies the number of days
to keep information in the activity
summary table. Specify 0 to 9999. 0
means to not keep data. 1 says to keep
the activity summary table for the
current day only.
Query via: Query STatus
See also: Summary table
Set TAPEAlertmsg TSM 5.2+ server command to control the
handling of TapeAlert problem
indications from a library or tape drive
which supports that technology.
'Set TAPEAlertmsg ON|OFf'
See also: Query TAPEAlertmsg; TapeAlert
SETOPT ADSMv3 server command which allows
changing server options without
restarting the server. It actually
updates the dsmserv.opt file as well,
but: it appends the specified option to
the end of the file rather than changing
the option where it appears in the file;
and it fails to add a newline at the end
of the line that it adds. Nor does it
even check the current value: for
example, you can specify the very same
value that an option currently has, and
the foolish command will add a needless
duplicate to the file. Suffice to say,
the programming of this command is
embarrassingly primitive.
Note also that performing a SETOPT does
*not* result in TSM re-examining the
other options in the file. (You cannot
use SETOPT to cause TSM to adopt changes
you manually made to the file.)
As of ADSMv3 you can operate on:
AUDITSTorage
COMMTimeout
DATEformat
EXPINterval
EXPQUiet
IDLETimeout
MAXSessions
NUMberformat
RESTOREINTERVAL
TIMEformat
As of TSM3.7 you can also operate on:
BUFPoolsize
Msgs:
ANR2119I The ________ option has been
changed in the options file.
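Given the sloppy file handling described above, it can pay to sanity-check dsmserv.opt after using SETOPT. A hedged sketch of the two checks (a scratch copy stands in for the real options file here):

```shell
# Simulate a dsmserv.opt that SETOPT has appended to: a
# duplicated option and no final newline.
opt=$(mktemp)
printf 'COMMTimeout 60\nMAXSessions 25\nMAXSessions 40' > "$opt"
# Repair the missing trailing newline that SETOPT leaves:
[ -n "$(tail -c 1 "$opt")" ] && echo >> "$opt"
# Report option keywords that now appear more than once:
awk '{ print toupper($1) }' "$opt" | sort | uniq -d
rm -f "$opt"
```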
Share Point Name See: UNC
SHRDYnamic (Shared Dynamic) An ADSM Copy Group serialization
mode, as specified by the
'DEFine COpygroup' command
SERialization=SHRDYnamic operand spec.
This mode specifies that if an object
changes during backup or archive and
continues to be changed after a number
of retries, the *last* retry commits the
object to the ADSM server whether or not
it changed during backup or archive.
Contrast with DYnamic, which commits
the object on the first attempt.
See also: CHAngingretries
Shared memory To conduct a *SM client-server session,
within a single Unix computer system,
via a shared memory area instead of data
communications methods. (In Windows, the
comparable mechanism is Named Pipe.)
The shared memory communications options
were added with the V2 level 6 or 7 ADSM
AIX server and the V2 level 3 (?) AIX
client.
COMMMethod SHAREDMEM
SHMPORT 1510
The SHMPORT must be the same for both
the client and the server. That is a
TCP/IP port that is used between the
client and the server for the initial
handshake. Of course the client and the
server must be running on the same
machine because it uses a shared memory
region on the machine for the
communications. Restrictions:
The client MUST:
1 - run as ROOT (as must server) or
2 - run under the same userid as the
server or
3 - use PASSWORDAccess Generate
(attempting to use PASSWORDAccess
Prompt results in rejection with
an error message.)
Overall control of shared memory in your
computer system is in accordance with
its hardware architecture and operating
system design. See appropriate doc.
Use of the shared memory protocol in at
least AIX results in the use of a
temporary file named /tmp/adsm.shm.xxxxx
being created, deleted at the end of the
session. If the operating system is
rebooted or the TSM server is halted,
the files may not be deleted, and so
external measures need to be implemented
to do so.
If you use the same two parameters
(COMMmethod and SHMPORT) on your client
(on the same machine as the server),
you'll get a shared memory connection.
You don't really need to specify SHMPORT
on either the client or server unless
you deviate from the default value of
1510.
A server 'Query SEssion' will show the
"Comm. Method" being "ShMem", rather
than "Tcp/Ip".
Note that there is no shared memory
communication between client sessions.
Ref: B/A Client, "COMMMethod".
Msgs: ANR8285I, ANS1474E
See also: Named Pipe; NAMedpipename
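The external cleanup measure mentioned above can be a small boot-time script. A hedged sketch: the /tmp/adsm.shm.* name pattern is the AIX behavior noted in this entry, and a scratch directory stands in for /tmp in the demo.

```shell
# Remove stale shared-memory handshake files left behind by
# an unclean shutdown.
tmpdir=$(mktemp -d)      # stand-in for /tmp in this demo
touch "$tmpdir/adsm.shm.12345" "$tmpdir/adsm.shm.67890"
find "$tmpdir" -maxdepth 1 -type f -name 'adsm.shm.*' -delete
ls -A "$tmpdir" | wc -l  # directory should now be empty
rm -rf "$tmpdir"
```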
Shared Static See: SHRSTatic
SHRSTatic An *SM copy group serialization mode, as
specified by the SERialization
parameter in the 'DEFine COpygroup'
command. This mode specifies that a
backup or archive operation will
disapprove of an object having been
modified during the operation. (The
object being "open" during this time
doesn't matter; detection of the file
attributes indicating modification does
matter.) After the operation, TSM will
check the object and, if it discovers
the object to have been modified, TSM
will reattempt the operation a number of
times (see below), and the following
message will be written to the
dsmerror.log: "File '_____' truncated
while reading in Shared Static mode."
If the object has been modified after
every attempt, the object is not backed
up or archived.
How it works (as of 1997): *SM will send
the file to the server. Only AFTER it
has sent the file to the server will it
then go back to the client and look at
the attributes to see if they have
changed since the beginning of backing
up the file. If they have changed, then
it determines the file was open while it
was backed up and will retry (if you
have Shared Static) immediately, i.e. it
will send the file AGAIN, and then check
AGAIN. It will repeat this process for
the specified number of retries
(CHAngingretries). *SM will NOT be
backing up any files at this time - all
other file backups wait until the
processing for this file is done. This
could mean that the file has been sent
to the server up to 4 times.
See also: CHAngingretries; Serialization
Contrast with: ABSolute; Dynamic; Static
SHMPORT See: Shared memory
"shoe-shining" Term most commonly used to refer to the
reciprocating motion of linear
serpentine tape (3590, 3580) as it
records to the end of tape, switches
head tracks, and records back toward the
starting point, repeated until all
possible tracks are used, as needed.
Also refers to "backhitch" (q.v.).
Helical scan tape technology vendors
(Sony AIT) deride linear tape
"shoe-shining" as causing much more wear
to tapes than their technology - but the
claim is specious, given the higher
stresses involved in helical scan.
See also: Backhitch
SHow commands Unsupported, undocumented commands to
reveal various supplementary info,
mostly that of internals of no interest
to customer. Running some of them can
impose a substantial burden on the
server. And they are typically session
executables (not processes) which cannot
be canceled. They often yield internals
data meaningful only to developers: the
Select command can often yield
information far more useful to
customers. In general, these are things
that customers should run only under
the direction of TSM support personnel.
Links to SHow command doc are embedded
in the TSM Problem Determination Guide.
On the Web:
http://publib.boulder.ibm.com/tividd/td/
TSMM/SC32-9103-00/en_US/HTML/
info_show_cmds.html
SHow AGGREGATE __ Undocumented *SM server command to show
???
SHow Archives NodeName FileSpace Undocumented *SM server command to show
archives for a given Node filespace,
revealing full path name, when
archived, and management class.
Sample output:
/usr1 : / graphics (MC: SERV.MGM)
Inserted 10/27/1998 14:55:03
Beware doing this on a large filespace
because the server will have to process
the whole thing.
Note: does not show archiver, owner,
or object size.
See also: SHow Versions
SHow ASAcquired Undocumented *SM server command to show
acquired removable volumes.
SHow ASMounted Undocumented *SM server command to show
mounted (or mount in progress) volumes.
SHow ASQueued Undocumented *SM server command to show
the mount point queue.
SHow ASVol Undocumented *SM server command to show
acquired removable volumes.
SHow BACKUPSET Undocumented TSM server command to show
Backup Set info.
SHow BFVars Undocumented *SM server command to show
Bitfile Services Global Variables.
SHow BFObject 0 <ObjectIdDecimal> Undocumented *SM server command to show
a Bitfile Services Object. Example, for
ObjectID 0.43293636:
SHow BFObject 0 43293636
The object may not be found...
SHow BFObject 0 43293699
Bitfile Object: 0.43293699
Bitfile Object NOT found.
Sometimes used with a Select From
Backups, which yields an ObjectID to
look up via this SHow command.
See also: SHow INVObject
SHow BFStats ___ Undocumented *SM server command to show
Bitfile Services Statistics.
SHow BUFClean Undocumented *SM server command to show
Database Buffer Pool - Hot Clean List.
SHow BUFDirty Undocumented *SM server command to show
Database Buffer Pool - Dirty Pages Table
SHow BUFStats Undocumented *SM server command to show
Database Buffer Pool Statistics,
including Cache Hit Percentage.
SHow BUFVars Undocumented *SM server command to show
database buffer pool global variables.
SHow BVHDR ___ Undocumented *SM server command to show
???
SHow CART Undocumented *SM server command to show
Cart Info from mounted volumes.
SHow CCVars Undocumented *SM server command to show
Central Configuration Variables
SHow CONFIGuration Undocumented *SM server command to show
Configuration: Time, Status, Domain,
Node, Option, Process, Session, DB,
DBVolume, Log, Logvolume, Devclass,
Stgpool, Volumes, Mgmtclass, Copygroups,
Schedules, Associations, Bufvars,
Csvars, Dbvars, Lvm, Lvmcopytable,
Lvmvols, Ssvars, Tmvars, Txnt, Locks,
Format3590, Formatdevclass.
ADSMv3 provides the 'Query SYStem'
command, which provides much the same
info.
SHow CSVars Undocumented *SM server command to show
client schedule variables.
SHow DAMAGE <Stgpool_Name> To show damaged files in a stgpool
Example:
SHOW DAMAGE STGP1
**Damaged files for storage pool STGP1,
pool id 4
Bitfile: 0.7726069, Type: PRIMARY
Volume ID: 1168, Volume Name: NT1681
Segment number: 1, Segment start: 14,
Segment Size: 0.26218147
UX142ORA : /ORAohmspt12//
al_509156970_454_1 636679436
Bitfile: 0.7726072, Type: PRIMARY
Volume ID: 1168, Volume Name: NT1681
Segment number: 1, Segment start: 15,
Segment Size: 0.262719
UX142ORA : /ORAsoddev33//
al_509157087_93_1 636679436
Found 2 damaged bitfiles.
SHow DBBACKUPVOLS Undocumented *SM server command to show
info on the latest full+incremental
database backup volumes.
SHow DBPAGEHDR ___ Undocumented *SM server command to show
???
SHow DBPAGELSN ___ Undocumented *SM server command to show
???
SHow DBTXNSTATS Undocumented *SM server command to show
Database Transaction Statistics.
SHow DBTXNTable Undocumented *SM server command to show
the Database Transaction Table.
SHow DBVars Undocumented *SM server command to show
database Service Global Variables.
SHow DEADLock Undocumented *SM server command to show
any deadlocks that exist.
SHow DEVCLass Undocumented *SM server command to show
sequential device classes.
SHow DEVelopers Undocumented *SM server command to show
Server Development Team + Server
Contributors. (Don't expect it to be
current.)
SHow DISK Undocumented *SM server command to show
DISKfiles data.
SHow DSFreemap ___ Undocumented *SM server command to show
???
SHow DSOnline Undocumented *SM server command to show
storage pool datasets (volumes) online.
SHow DSVol Undocumented *SM server command to show
disk storage pool datasets (volumes).
SHow DUPLICATES Undocumented *SM server command to scan
the database for duplicates.
Warning: Runs a long time and uses a lot
of system resources; and there is no way
to stop it!
SHow FORMAT3590 _VolName_ Undocumented *SM server command to
verify that the Devclass Format spec for
a given volume is correct. Yields
Activity Log message like:
ANR9999D asvolut.c(2086): No change
required for volume _VolName_.
SHow FORMATDEVCLASS _DevClass_ Undocumented *SM server command to
verify that volumes in a given device
class are correct in the db. Yields
Activity Log message like:
ANR9999D asvolut.c(2293): All volumes
in _DevClass_ device class have
correct entries in *SM database.
SHow ICCTL Undocumented *SM server command to show
control info about current image copy
(db backup)?
SHow ICHDR Undocumented *SM server command to show
info about latest image copy (db
backup)?
SHow ICVARS Undocumented *SM server command to show
Image Copy Global Variables.
SHow IMVARS Undocumented *SM server command to show
Inventory Global Variables.
SHow INCLEXCL See: dsmc SHow INCLEXCL
SHow INVObject 0 <ObjectIdDecimal> Undocumented *SM server command to show
an inventory object, reporting its
nodename, filespace, management class,
etc. Example, for ObjectID 0.43293636:
SHow INVObject 0 43293636
OBJECT: 0.43293636 (Backup):
Node: ACSN08 Filespace: /u2.
/csg/rbs/ tempThis
Type: 2 CG: 1 Size: 0.0
HeaderSize: 0
BACKUP OBJECTS ENTRY:
State: 1 Type: 2 MC: 1 CG: 1
/u2 : /csg/rbs/ tempThis (MC: DEFAULT)
Active, Inserted 08/01/03 07:58:58
EXPIRING OBJECTS ENTRY:
Expiring object entry not found.
See also: SHow BFObject
SHow LANGUAGES Undocumented *SM server command to show
???
SHow LIBINV Undocumented *SM server command to show
the library's inventory: lib, vol, stat,
use, mounts, swap, data. May show
library storage slot element address, as
for an STK 9710 lib.
SHow LIBrary Undocumented *SM server command to show
the status of the library and its
drives, being the output of SIOC_INQUIRY
and other operations. Meaning of fields:
type= Device type, like 8 for 3590.
mod= Device type modifier, like 17 for
3590.
busy=0 means the drive is not mounted or
even acquired by *SM.
busy=1 should reflect *SM using the
drive (Query MOunt). But this could
result from drive maintenance. Fix by
trying 'cfgmgr' AIX command, or killing
the lmcpd AIX process and then doing
'cfgmgr' or '/etc/lmcpd'.
online=0 means the drive is "offline",
as when 'rmdev -l rmt_' had been done
in AIX.
In Version 2, this will only be if the
polled=1.
In V3, you can update a drive to be
offline, in which case the polled
flag will be 0.
polled=1 means that *SM could not use
the drive for one of three reasons:
- The drive is loaded with a Non-*SM
volume (eg a cleaner cartridge, or a
volume from the other *SM server);
- The drive is unavailable to the
library manager (usually set this way
by load/unload failures)
- The drive cannot be opened (some
other application has it open, or
there's some connection problem, etc)
polled=1 means the server is polling
the drive every 30 seconds to see when
the above three conditions all clear.
(It also means that the online flag
should be 0.) When the conditions
clear, it turns online back to 1 and
the drive should now be available to
be acquired.
Note that if no tape drive is currently
available, *SM will wait rather than
dispose of client and administrative
tasks.
Note that the relative positions of
the drives in the list can change over
one server's uptime.
SHow LMVARS Undocumented *SM server command to show
License Manager variables.
SHow LOCKs Undocumented *SM server command to show
Lock hash table contents.
Same as 'SHow LOCKTABLE'
SHow LOCKTABLE Undocumented *SM server command to show
Lock hash table contents.
SHow LOG Undocumented *SM server command to show
Log status information.
SHow LOGPAGE ___ Undocumented *SM server command to show
???
SHow LOGPINned Undocumented *SM server command to show
contributors to Recovery Log "pinning".
But you may figure out the culprit
simply by doing Query SEssion.
Ref: IBM site article swg21054574
See: Recovery Log pinning/pinned
SHow LOGREADCACHE Undocumented *SM server command to show
the Log Read Cache.
SHow LOGRESET Undocumented *SM server command to show
Logging service statistical variables
reset.
SHow LOGSEGTABLE Undocumented *SM server command to show
the Log Segment Table.
SHow LOGSTATS Undocumented *SM server command to show
log statistics.
SHow LOGVARS Undocumented *SM server command to show
Log Global Variables
SHow LOGWRITECACHE Undocumented *SM server command to show
the Log Write Cache.
SHow LSN ___ Undocumented *SM server command to show
???
SHow LSNFMT ___ Undocumented *SM server command to show
???
SHow LVM Undocumented *SM server command to show
logical volume manager info: server disk
volumes.
SHow LVMCKPTREC Undocumented *SM server command to show
LVM checkpoint record contents.
SHow LVMCOPYTABLE Undocumented *SM server command to show
copy table status (database and log
volumes).
SHow LVMCT Same as 'SHow LVMCOPYTABLE'
SHow LVMDISKNAME ___ Undocumented *SM server command to show
???
SHow LVMDISKNUM ___ Undocumented *SM server command to show
???
SHow LVMDNU ___ Same as 'SHow LVMDISKNUM'
SHow LVMDISKTABLE Undocumented *SM server command to show
Disk Table Entries (database and log
volumes).
SHow LVMDNA ___ Same as 'SHow LVMDISKNAME'
SHow LVMDT Same as 'SHow LVMDISKTABLE'
SHow LVMFIXEDAREA Undocumented *SM server command to show
the "LVM fixed area" on each data base
and recovery log volume (the extra 1MB
that you have to add to these volumes).
This command also reveals the maximum
possible size for the *SM Database and
Recovery Log.
SHow LVMFA Same as 'SHow LVMFIXEDAREA'
SHow LVMIOSTATS Undocumented *SM server command to show
???
SHow LVMLP Undocumented *SM server command to show
DB Logical Partition Information
SHow LVMPAGERANGE ___ Undocumented *SM server command to show
???
SHow LVMPR ___ Same as 'SHow LVMPAGERANGE'
SHow LVMRESET Undocumented *SM server command to ???
SHow LVMVOLS Undocumented *SM server command to show
database and recovery log volume usage.
SHow MEM (or maybe SHow MEMU) Undocumented *SM server command to show
internal memory pool utilization
numbers. In the report...
"Freeheld bytes" reflects what the TSM
server needs.
"MaxQuickFree bytes" should be greater
than Freeheld.
Doing 'Show Mem SET MAXQUICK _____'
will actually set the MaxQuickFree to
the given bytes value.
SHow MESSAGES Undocumented *SM server command to show
???
SHow MP Undocumented *SM server command to show
allocated Mount Points; that is, drives
currently in use, and their status
(Alloc, Clean, Idle, Open, Opening,
Reserved, Waiting).
(Use SHow LIBrary to see all drives.)
SHow NODE <homeAddr> Undocumented *SM server command to show
what's in a database node (not to be
confused with a client node).
SHow NODEHDR ___ Undocumented *SM server command to show
a subset of SHow NODE: just the header
info, not the records.
SHow NUMSESSIONS Undocumented *SM server command to show
number of client sessions.
Response is like:
Number of client sessions: 2
See also: Query SEssion; SHow SESSions
SHow OBJ (SHow OBJects) Undocumented *SM server command to show
Defined Database Object info: homeAddr=,
create=, destroy=, savePointNum=,
info-> .
SHow OBJDir Undocumented *SM server command to show
Defined Database Object Names and their
corresponding Home Address in
parentheses.
SHow OBJHDR Undocumented *SM server command to show
a more expanded view of what SHow OBJDIR
displays: Type, Name, homeAddr, create,
destroy,savePointNum, openList.
SHow OPENHDR Same as 'SHow OPENobjects'
SHow OPENobjects Undocumented *SM server command to show
open Objects.
Show Options See: dsmc show options
SHow OUTQUEUES Undocumented *SM server command to show
???
SHow PENDing Undocumented *SM server command to show
pending administrative and client
schedules. Reveals nodes which use
"SCHEDMODe POlling" as well as
"SCHEDMODe PRompted".
Reports: Domain, Schedule name,
Node name, Next Execution, Deadline.
SHow RAWNODE <homeAddr> Undocumented *SM server command to show
a database node (not to be confused with
a client node) in dump format (raw
data).
SHow RECLAIM ___ Undocumented *SM server command to show
???
SHow RESQUEUE Undocumented *SM server command to show
storage service ???
SHow SESSions Undocumented *SM server command to show
Session information, including whether
it is Backup (including backing up or
restoring) or Archive (including
archiving or retrieving).
SessType values (perceived):
4 HSM, or an ADSMv2 backup session
5 Backup
7 Administrator
The "bytes" value is actually the number
on the right side of the seeming decimal
point; so in "0.1889841210", the bytes
value is some 1.8 GB. The number may
also be negative, as in "0.-1596708786",
with repeated command issuances showing
the negative value decreasing, which is
indicative of a register overflow
condition: the bytes value is more than
can be contained in a C int.
See also: Query SEssion; SHow NUMSESSions
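The wrapped counter can be unwound arithmetically; a rough sketch in Python, on the assumption (not documented behavior) that the digits after the seeming decimal point form a signed 32-bit value:

```python
# Sketch: decode the "bytes" field of SHow SESSions output, assuming
# the digits after the seeming decimal point are a signed 32-bit
# counter that can wrap negative (e.g. "0.-1596708786").
def decode_session_bytes(field):
    raw = int(field.split(".", 1)[1])        # right side of the "point"
    return raw if raw >= 0 else raw + 2**32  # undo 32-bit wraparound

print(decode_session_bytes("0.1889841210"))   # 1889841210, some 1.8 GB
```

A negative raw value simply means the counter passed 2^31-1, so adding 2^32 recovers the true byte count.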
SHow SLOTs <LibName> Undocumented *SM server command to show
slot definitions in a SCSI library, such
as a 3583.
SHow SMPBIT Undocumented *SM server command to show
???
SHow SMPHDR Undocumented *SM server command to show
???
SHow SPAcemg <nodename> FileSpace Undocumented *SM server command to show
all SPACEMGMT (HSM) Files for node.
Beware: output can be enormous.
SHow SQLTABLES Undocumented *SM server command to show
mapped SQL tables.
SHow SSLEASED Undocumented *SM server command to show
storage service ???
SHow SSOPENSEGS Undocumented *SM server command to show
storage service open segments.
SHow SSPOOL Undocumented *SM server command to show
storage service pool info.
SHow SSSESSION Undocumented *SM server command to show
Storage Service sessions.
SHow SSVARS Undocumented *SM server command to show
Storage Service Global Variables:
*ClassId, *PoolId, *VolId.
SHOW STORAGE USAGE Dsmadm GUI selectable; is equivalent
to 'Query AUDITOccupancy NodeName'.
SHow SYSTEMOBJECT Undocumented TSM4 server command to show
Windows System Objects.
SHow TBLSCAN ___ Undocumented *SM server command to show
???
SHow THReads Undocumented server command to show all
the server's threads. Thread names are
fairly descriptive. For example, each
non-admin client session will have a
SessionThread; if a Move Data is
running, its thread name will be
AfMoveDataThread. Thread 0 is main,
followed by an LvmDiskServer thread for
each disk volume, then others.
Report begins with server PID, thread
table size, active threads count, zombie
threads count, cached descriptors count.
tid Thread id.
ktid Kernel thread ID, as reported by
the "tid" operand of the AIX 'ps'
-o option, like:
'ps -mefl -o pid,ppid,bnd,scount,
sched,thcount,tid,comm'
ptid Associated Process thread ID.
det Probably refers to whether the
thread was created in Detached
state. Most threads show det=1,
except main, TbPrefetchThread,
SmAdminCommandThread,
AdmSQLTimeCheckThread.
zomb Presumably refers to being a
zombie (child whose parent isn't
listening for its end).
Value usually 0. "Zombie threads"
count at beginning of report tells
you how many in total.
join Probably indicates that
pthread_join() was invoked to
suspend processing of the calling
thread until the target thread
completes. Value always seen 0.
result ?? Value always seen 0.
sess Session number, if a SessionThread
Thread names:
LvmDiskServer Logical Volume Manager,
with one thread per DB and Recovery Log
volume.
Note that there is no indication as to
which thread is running or how much CPU
time it is accumulating, hence no way to
readily isolate problem threads.
See also: Processes, server
SHow TIME Undocumented *SM server command to show
the current server date and time.
SHow TMVARS Unsupported *SM server command to show
Transaction Manager Global Variables +
Restart Record.
SHow TRANSFERSTATS ___ Undocumented *SM server command to show
???
SHow TREEstats _TableName_ Undocumented *SM server command to show
statistics on an SQL table tree. Add up
leaf-nodes and non-leaf-nodes for the
number of pages used.
Beware that this command scans the
database trees, which can take a long
time.
Example: show tree Activity.Log
SHow TXNstats Unsupported *SM server command to show
Transaction manager statistics.
SHow TXNTable Undocumented *SM server command to show
Transaction hash table contents.
SHow VERIFYEXP Undocumented *SM server command, to be
used only as directed by IBM support...
Verifies expiration table entries and
may correct potentially corrupt
entries. It is not guaranteed to fix all
entries. If this doesn't clean up the
problem (ie. you still see signs of the
problem afterwards), then an AUDITDB
operation is likely the only corrective
action available. IBM Support may be
contacted in response to a message like
ANR9999D imexp.c(4694): ThreadId<25>
Backup Entry for object 0.129710882
could not be found in Expiration
Processing, whereupon guided use of this
command may be warranted.
Further cautions: Takes a long time to
run (like Audit DB) and will tax the
capacity of the Recovery Log.
SHow Versions Unsupported *SM server command to show
the version of every Backup file in a
filespace, the management class used to
back it up, whether it is Active or
Inactive, and when it occurred
                                      (timestamp). However, object size is
                                      not revealed. Syntax:
'SHow Versions NodeName FileSpace
[Nametype=________]'
where Nametype=unicode may be needed for
                                      Unicode filespaces. Example:
SHow Versions ournode /home
/home : / netinst (MC: OURLIBR.MGMT)
Active, Inserted 06/03/1997 16:36:46
Employing the Select command on the
Backups table can produce comparable
results.
A Deactivated date of year 1900 is a
"negative infinity" setting to denote
that a file is eligible for immediate
expiration/deletion processing.
See also: SHow Archives
SHow VIRTVOL ___ Unsupported *SM server command to show
???
SHow VOLUMEUSAGE NodeName Unsupported TSM server command to
display Primary Storage Pool volumes
being used by a given Node for backup
data. Does not reflect Copy Storage
Pools, or volumes used only for Archive
data or HSM data. That is, the command
will report volumes which contain backup
data, or a mix of Backup and Archive
data for a node, but not volumes which
contain only Archive data. (A Select on
the VOLUMEUSAGE table *will* show copy
storage pool volumes.)
Sample output:
adsm> SHow VOLUMEUSAGE ____
SHOW VOLUMEUSAGE started.
Volume 000042 in use by node ____.
Volume 000043 in use by node ____.
SHOW VOLUMEUSAGE completed.
You could subsequently go on to issue a
'Query CONtent' command to find out
what's on the tape.
IBM intends to replace this with a
similar, supported command.
SHow VOLUSE Same as 'SHow VOLUMEUSAGE'
Shut down server 'HALT' command, after doing a 'DISAble'
to prevent new sessions, 'Query
sessions' to see what's active, and
'CANcel SEssion' if you can't wait for
running stuff to finish.
Signal 11 See: Segmentation violation
Signal the TSM server See: HALT
SIM (3590) Service Information Message. Sent to
the host system. AIX: appears in Error
Log.
Ref: "3590 Operator Guide" manual
(GA32-0330-06) esp. Appendix B
"Statistical Analysis and Reporting
System User Guide"
See also: MIM; SARS
Single Drive Some customers attempt to implement a
*SM server with a single (tape) drive.
That is extremely awkward, and
discouraged. Do all you can to add a
second removeable storage media (tape,
optical) to your installation. Remember
that the second drive does not have to
be of the same type as the first for
purposes like BAckup STGpool: that drive
can be cheaper and of lower performance,
with less costly media.
Single Drive copy storage pool A *SM server with a single drive needs
special configuration to accomplish a
BAckup STGpool to tape. The best
approach is to utilize disk (disk is
cheap) for the primary backup stgpool,
then do a BAckup STGpool from that disk
to the single sequential drive, then
migrate the disk data to the next
stgpool in the hierarchy, which would be
the same single sequential drive.
Single Drive Reclamation See: RECLAIMSTGpool
Single Drive Reclamation Process Redbook "AIX Tape Management"
script (SG24-4705) appendix C.
SIngular Perhaps you mean "distinct", as in
SELECT operations.
Size See: FILE_SIZE
Size factor HSM: A value that determines the weight
given to the size of a file when HSM
prioritizes eligible files for
migration. The size of the file in this
case is the size in 1-KB blocks. The
size factor is used with the age factor
to determine migration priority for a
file. Defined when adding space
management to a file system, via dsmhsm
GUI or dsmmigfs command.
See also: Age factor
Size limit See: MAXSize
Size of file for storage pool See "MAXSize" operand of DEFine STGpool.
SKIPNTPermissions Windows option to allow bypassing
processing of NTFS security information.
Select this option for incremental
backups, selective backups, or
restores. Use this option with the
following commands: Archive,
Incremental, Restore, Retrieve,
Selective. Choices:
No The NTFS security information is
backed up or restored. This is the
default.
Yes The NTFS security information is
not backed up or restored with
files. (Consider carefully)
Also, with Yes, the
SKIPNTSecuritycrc option does not
apply.
SKIPNTSecuritycrc Windows NT client option: Computes the
security cyclic redundancy check (CRC)
for a comparison of NTFS security
information during an incremental or
                                      selective backup, archive, restore, or
retrieve operation. Performance,
however, might be slower because the
program must retrieve all the security
descriptors. Use this option with the
following commands: Archive,
Incremental, Restore, Retrieve,
Selective. Choices:
No The security CRC is generated
during a Backup. This is the
default.
Yes The security CRC is not generated
during a Backup. All the
permissions are backed up, but the
program will not be able to
determine if the permissions are
changed during the next incremental
backup. When SKIPNTPermissions Yes
is in effect, the SKIPNTSecuritycrc
option does not apply.
                                      The security info is stored in a
variable length buffer. It is not part
of the attributes structure that is used
to compare to see whether anything has
been changed to back it up again as part
of incremental backup. What is stored
in the attrib structure is the security
CRC which is the checksum value of the
                                      buffer. If the security info is backed
up but not the CRC, *SM won't be able
to detect changes that were made to the
security attributes. *SM does store
the size of the four security structures
(owner SID, group SID, DACL & SACL) but
the size alone doesn't tell if it was
changed. So the downside of setting
SKIPNTSecuritycrc=Y is that TSM can
only detect if the actual size of any of
the four security structures has been
changed.
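The change-detection idea can be illustrated with a generic checksum over the variable-length security buffer; zlib.crc32 here merely stands in for whatever CRC algorithm TSM actually uses:

```python
import zlib

# Illustration only: zlib.crc32 stands in for TSM's actual
# (undocumented here) security CRC over the four structures.
def security_crc(owner_sid, group_sid, dacl, sacl):
    return zlib.crc32(owner_sid + group_sid + dacl + sacl) & 0xFFFFFFFF

old = security_crc(b"S-1-5-21-X", b"S-1-5-32-Y", b"dacl-v1", b"sacl")
new = security_crc(b"S-1-5-21-X", b"S-1-5-32-Y", b"dacl-v2", b"sacl")
# The four structure sizes are identical, so only the CRC reveals the
# change -- exactly the detection lost under SKIPNTSecuritycrc Yes.
print(old != new)
```

The SID and ACL byte strings are invented placeholders; the point is that equal-sized buffers with different content are distinguishable only by the checksum.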
skipped ANS4940E message indication that a file
was skipped during Backup because it
changed, per CHAngingretries option.
Skipped files Somewhat peculiar and misleading product
terminology referring to files that span
multiple storage pool volumes - they
skip from one volume to another. As
used in the AUDit Volume command's
SKIPPartial keyword.
See also: Span volumes, files that, find
In the context of a client backup, see
"Backup skips ..."
SLDC Streaming Lossless Data Compression
compression algorithm, as used in the
3592. See also: ALDC; ELDC; LZ1
Slot (tape library storage cell) See: Element; HOME_ELEMENT
Slow performance with multiple client An individual client backup may take 10
accesses minutes; but if multiple clients
simultaneously do backups, the backup
time turns to hours. This can occur if
the database cache is too small.
Inspect your "Cache Hit Pct" number: if
it is down around 80% then disk access
is dominating, slowing everything down.
Increase BUFPoolsize in dsmserv.opt .
SLOWINCREMENTAL Option (Client System or Client User
Option) for personal computers
(Macintosh, Novell, Windows (only)) to
perform "slow incremental backups",
which means to back up one directory
at a time instead of first generating
a full list of all directories and
files.
Specify "SLOWINCREMENTAL YES" to so
choose. The Default for all systems
except Macintosh is "SLOWINCREMENTAL
NO", so as to speed the backup itself.
You may want "SLOWINCREMENTAL YES" in
cases where the node session times out
as the server is busy so long compiling
that list before starting the first
transmission.
SM In the TSM server, the Session Manager.
You may see it issue messages like
"ANR9999D sminit.c(656) ...".
The session manager is not needed when
running in standalone mode (AUDITDB,
DUMPDB, LOADDB).
Note that you may get spurious sminit
messages if a client tries to connect
while the TSM server's running in
standalone mode.
Small Files Aggregation ADSMv3 feature to group small files into
a larger aggregate to improve the
efficiency of backup and restoral
operations, by reducing overhead.
If the TXNBytelimit client option or
TXNGroupmax server option values are too
small or client files are very large you
may not get much aggregation.
Ref: Admin Guide: "Aggregate file".
SMC SCSI Medium Changer, as on a 3590-B11,
as used via Unix device /dev/rmt_.smc;
and on the 3583 and 3584. ("Medium
Changer" is also referred to as an
"Autochanger".) In Unix, the associated
device is /dev/smc0, /dev/smc1, etc.
The smc* special file provides a path
for issuing commands to control the
medium changer robotic device.
Though the term originated with SCSI
cable connections, the terminology has
been carried into Fibre Channel as well.
Mounts within an SMC are specified by
slot number, which means that, unlike
fully automated libraries having a
library manager, TSM must keep track of
what slots its volumes are in, and this
is reflected in Query LIBVolume output,
where the Home Element should identify
the slot. An AUDit LIBRary should
refresh TSM's knowledge of volume
locations.
See also: 3590 TAPE DRIVE SPECIAL DEVICE
FILES at the bottom of this document.
SMIT and ADSM ADSM adds its own selection category to
SMIT, as in Devices -> ADSM Devices.
smpapi_* Like "smpapi_setup". These are functions
provided in the TSM sample API program.
The source files themselves are named
dapi*.c
SNA LU6.2 Systems Network Architecture Logical
Unit 6.2.
.snapshot A "hidden" directory created by Network
Appliance (and possibly other, similar)
products in locations like the head of
file systems or home directories where
read-only images of files are available
in case the "real" files are lost.
You should obviously exclude .snapshot
directories from TSM backups, as
redundant, via an Exclude like:
EXCLUDE.Dir /.../.snapshot
Snapshot Backup Actually, Windows 2000 & XP image
backup. See: Image Backup
SNAPSHOTCACHELocation For TSM 5.1 Windows 2000 & XP image
backups, in conjunction with
INCLUDE.IMAGE; or for TSM 5.1 Windows
2000 & XP open file backups, in
conjunction with Windows INCLUDE.FS.
Specifies the location of a
pre-formatted volume which will house
the Old Blocks File (OBF), which
contains changes which other processing
makes to the different volume which is
the subject of the image backup or open
files archive.
The default is the system drive
(typically, C:), C:\tsmlvsa .
Note that the OBF file cannot be on the
same volume that is being backed up. One
approach to handling this is via
INCLUDE.FS C: fileleveltype=dynamic
See also: LVSA; OBF
SNMP ADSMv3 provides SNMP support. Implement
by doing:
- Configure dsmserv.opt for SNMP
- Configure /etc/snmpd.conf
- Start /usr/lpp/adsmserv/bin/dsmsnmp
- Start the ADSM server (in that order!)
- Register admin SNMPADMIN with a
password and analyst privileges.
See: dsmsnmp.
Ref: ADSMv3 Technical Guide redbook,
section 9.3
SNMP MIB files AIX: /usr/lpp/adsmserv/bin/adsmserv.mib
3494: Note that the atldd package does
not itself provide MIB files for the
3494. See the IBM Magstar 3494 Tape
Library Guide redbook (search on SNMP)
and the 3494 Tape Library Operator's
Guide manual. Note that the latter
manual says: "The Library Manager code
does not contain any SNMP Management
Information Base (MIB) support."
SNMPD Later releases of AIX V4.2.1 all have a
DPI V2 compliant snmpd built-in. The
snmpd component is in fileset
bos.net.tcp.client. You can download
fixes from http://198.17.57.66/aix.us/
aixfixes?lang=english.
Sockets and Backup/Restore ADSM will back up and restore special
files, but (per the v3 client README
file), *not* sockets: sockets are
skipped during backup; and they are
skipped during restore, even if they
were backed up with earlier levels of
the ADSM software.
AIX 4.2 and HP-UX do not support
creating socket files, and always skip
socket files in Restore operations.
Note: Early v3 software attempted to
back up and restore sockets; but there
were too many problems, and that
functionality was removed.
See also: IGNORESOCKETS
Solaris errno values Do 'man -s 2 intro' on Solaris.
See also IBM site Technote 1143564.
Solaris restorals, speed up Employ the "fastfs" attribute, which
causes directory updates to be buffered
in memory rather than be written to disk
as each is changed, which can
dramatically slow a restoral.
Risk: A hardware problem, power outage,
or other system disruption will cause
all the buffered data to be lost, so
best to use this only for file systems
which are lost causes to begin with.
ftp.wins.uva.nl:/pub/solaris/fastfs.c.gz
Solaris x86 client There is none, and IBM has indicated
that it has no intentions of investing
the effort to create one. (Solaris's
future is uncertain, after all.)
Space management Another term for describing the services
performed by HSM: The process of keeping
sufficient free storage space available
on a local file system for new data and
making the most efficient and economical
use of distributed storage resources.
Space management attributes HSM: Attributes contained in a
Management Class that specify whether
automatic migration is allowed for a
file, whether selective migration is
allowed for a file, how many days must
elapse since a file was last accessed
before it is eligible for automatic
migration, whether a current backup
version of a file must exist on your
migration server before the file can be
migrated, and the ADSM storage pool to
which files are migrated. In fact, most
of the attributes in a 'DEFine
MGmtclass' and 'UPDate MGmtclass' are
for HSM.
Space management for Windows See: HSM, for Windows
Space management information (HSM) 'dsmmigquery FSname'
Space management settings Settings that specify the stub file
size, quota, age factor, size factor,
high threshold, low threshold, and the
premigration percentage for a file
system. A root user selects space
management settings when adding space
management to a file system or when
updating space management.
Space Management Technique Management Class specification
(HSM) (SPACEMGTECHnique) governing HSM file
migration...
See: SPACEMGTECHnique
Space monitor daemon HSM: Daemon (hsmsm) that checks space
usage on all file systems for which
space management is active, and
automatically starts threshold migration
when space usage on a file system equals
or exceeds its high threshold. How often
the space monitor daemon checks space
usage is determined by the
CHEckthresholds option in your client
system options file. In addition, the
space monitor daemon starts
reconciliation for your file systems at
the intervals specified with the
RECONCILEINTERVAL option in your client
system options file.
Space reclamation See "Reclamation".
Space used by clients (nodes) on all 'Query AUDITOccupancy [NodeName(s)]
volumes [DOmain=DomainName(s)]
[POoltype=ANY|PRimary|COpy'
Note: It is best to run 'AUDit LICenses'
before doing 'Query AUDITOccupancy' to
assure that the reported information
will be current.
Space used on a volume 'Query Volume'
Space used in storage pools, query 'Query OCCupancy [NodeName]
[FileSpaceName]
[STGpool=PoolName]
[Type=ANY|Backup|Archive|
SPacemanaged]'
.SpaceMan Hidden directory in a space-managed
(HSM) file system, containing files:
candidates: list of migration
candidates. Created by
'dsmreconcile -c FSname'
fslock.pid: PID of a dsm process which
is using the file system,
e.g. dsmautomig,
dsmreconcile, etc.
orphan.stubs: Names files for which
stub file exists, but no
                                                    migrated file; from
                                                    reconciliation.
status: symlink to point to file which
records stats.
premigrdb.dir, premigrdb.pag: the
premigrated files database,
accessed via dbm_* calls.
progress.automig Small binary file
(80 bytes).
progress.reconcile Small binary file
                                         (104 bytes). Seems to see
most of its updating by the
scout daemon rather than
dsmreconcile; so not a good way
to tell when dsmreconcile was
last run.
progress.scout Small binary file
(2176 bytes).
reconcile.pid Small ASCII file
containing the PID of the
dsmreconcile process, like
"12345\n", timestamped when the
dsmreconcile started, and left
behind after reconcile ends.
This file is a good indicator of
when dsmreconcile was last run.
logdir: Directory to record info about
files in the process of migrate
or recall.
Ref: HSM Clients manual.
This hidden directory is implicitly
excluded from space management.
SpaceMan The Space Management component of ADSM,
more commonly known as HSM, which is an
optional feature.
Started by /etc/inittab's "adsmsmext"
entry invoking /etc/rc.adsmhsm .
SPACEMGTECHnique MGmtclass operand governing HSM file
(HSM) migration...
AUTOmatic says that files may migrate
automatically or by selective command;
SELective says only by selective cmd;
NONE says no migration allowed.
Default: NONE, as per the usual customer
case of HSM not being installed.
Check via client 'dsmmigquery -M -D'
command.
See also: Space Management Technique
Span volumes, files that, find In general:
SELECT * FROM CONTENTS WHERE SEGMENT>1
For specific volumes:
Query CONtent VolName COUnt=1
Query CONtent VolName COUnt=-1
to see the first, and last, files on
volumes suspected of harboring known
spanners.
See also: Segment Number; Skipped files;
Spanning
Spanning TSM fills tapes as much as possible,
which means that as it encounters EOV
when writing a file, it will split the
file at that point and continue writing
the remainder of it on another volume.
Each piece of the file is called a
Segment.
Experience shows that probability is
high that files will span volumes; that
the last file on a volume will span to
the next volume. See "Filling" for
ramifications for Filling volumes.
Sparse files, handling of Sparse files are those which contain
empty space; that is, portions of the
file are implicit per positional
addressing and consume no disk space.
(In Unix, at least, there is no file
attribute which identifies a file as
sparse: sparseness is implicit, and not
always deterministic.)
Sparse files are in general problematic
in that any ordinary reading of the file
will result in the full, effective
content of the file being presented,
with the internal skip space being
expanded with padding characters (bytes
whose value is 0). TSM tries to properly
detect sparse files and handles them
appropriately:
At Backup time: The TSM client attempts
to discern if the file is sparse, and
sets a Sparse flag if it believes that
the file is sparse. We observe that
the data is still backed up as if it
were not sparse, however: it seems
that the client needs to traverse the
file in its logical entirety to fully
determine whether it is sparse. Thus,
there is no space savings at Backup
time.
At Restore time: The Sparse flag, set
at Backup time per full file traversal
and evaluation, is normally honored,
and restoral proceeds accordingly...
If a block of a file consists only of
bytes with value zero this block is
not restored as a physical disk
block. For sparse files with large
holes in the address space this
obviously improves restoral
performance somewhat. However, realize
that there is no mapping as to where
the sparseness occurs with the data as
stored in the TSM storage pool, and so
the totality of the file data has to
be traversed by the server much as the
client had to during backup - only
this time the server knows to look for
sparseness, to send back to the client
only the significant data, with
appropriate offsets for client
reconstruction of the sparse file. The
data scanning thus performed by the
server is unusual and relatively
costly; it saves time only where the
file is considerably sparse such that
the analysis time is offset by savings
in reduced data transmission (time).
But a sparse file with minimal "holes"
                                      will aggravate restoral time. Further:
the Backup client may have
misinterpreted a plain file as sparse
and so flagged it in TSM server
storage, which substantially prolongs
restoral time. This can be remedied by
setting client option
MAKesparsefile NO or using
-MAKesparsefile=no on the CLI.
Per the 4.1 Solaris Readme (only):
If files have been backed up as sparse
files and need to be restored as normal
files (non-sparse files), this should
be done by the internal (undocumented)
option MAKesparsefile NO in dsm.opt or
-MAKesparsefile=no which is supported
by the command line client only. The
option is only necessary for files
where the existence of physical disk
blocks is required. This is the case in
some rare situations for system files
like ufsboot which is needed during
boot time. The boot file loader of the
operating system accesses physical disk
blocks directly and does not support
sparse files.
Alternative: Consider using client
compression where sparse files are an
issue, as that will inherently take care
of all the emptiness.
See also "Sparse file processing" in
recent server README files.
Historical note: ADSMv2,3 supported an
intentionally undocumented option called
MAKesparsefile which explicitly
requested that sparse files be restored
as sparse. APAR IC19767 notes that the
client now handles this automatically.
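On Unix, the usual guesswork compares allocated blocks to logical size; a minimal sketch, assuming st_blocks is in 512-byte units and a filesystem that punches holes for seek-past-EOF writes:

```python
import os
import tempfile

# Heuristic sketch: a file "looks sparse" if its allocated blocks
# (st_blocks, 512-byte units on most Unix systems) cover less than
# its logical size -- mirroring the guesswork the client must do.
def looks_sparse(path):
    st = os.stat(path)
    return st.st_blocks * 512 < st.st_size

# A dense file: every byte is physically written.
dense = tempfile.NamedTemporaryFile(delete=False)
dense.write(b"x" * 65536)
dense.close()

# A holey file: seek far past EOF, then write a little real data.
holey = tempfile.NamedTemporaryFile(delete=False)
holey.seek(16 * 1024 * 1024)
holey.write(b"end")
holey.close()

print(looks_sparse(dense.name), looks_sparse(holey.name))
```

As the entry notes, such heuristics are not always deterministic: a filesystem that compresses or inlines data can make a dense file look sparse, which is one way the client can misflag a plain file.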
Sparse files, handling of, Windows Backup: TSM will back up a sparse file
as a regular file if Client compression
is off (COMPRESSIon No). Enable file
compression (COMPRESSIon Yes) when
backing up sparse files to minimize
network transaction time and to maximize
server storage space. (However, if your
tape drive hardware does compression,
the only savings will be network
transmission time.)
Restore: When restoring sparse files to
a non-NTFS file system, set the TSM
server communication time out value
(COMMTimeout, and even IDLETimeout) to
the maximum value of 255 to avoid client
session timeout.
Split retentions Customers sometimes want separate
retention periods for Backups of the
same files, as for onsite versus offsite.
For example: keep up to 14 days of daily
backups, weekly backups kept for 6
months, and monthly backups kept for a
year. This has historically not been
possible with the product, as retention
is tied to the unique file identity, not
to the medium upon which it is stored.
One circumventional approach is to
utilize additional, "fake" nodenames via
the options file and perform the
supplementary backups with those.
Another approach is to utilize Archive
for the supplementaries.
See also: Backup, full, periodic
Splitting files across volumes See: Span
SpMg                                  Space Management (HSM) file type, in
Query CONtent report. Other types: Arch,
Bkup.
Spreadsheet, import TSM db data into See ODBC in Appendix A of the TSM
Technical Guide redbook.
SQL See: Select
SQL: Re-cast Like CAST(BYTES_SENT AS DECIMAL(18,0))
SQL: Selecting from multiple tables In one Select you can retrieve column
entries from tables via specificity:
using "Tablename.Columname" format to
explicitly identify your objectives.
Sample:
SELECT DISTINCT contents.node_name,
contents.volume_name,
archives.archive_date,
archives.description
FROM contents,archives ...
SQL: Equal symbology =
SQL: Greater Than symbology >
SQL: Greater Than Or Equal To symbology >=
SQL: Less Than symbology <
SQL: Less Than Or Equal To symbology <=
SQL: Not Equal symbology <>
SQL: NOT LIKE To filter out things not matching a
pattern. For example, to omit storage
pool names which end with the string
"OFFSITE", code:
STGPOOL_NAME NOT LIKE '%OFFSITE'
                                      where % is a wildcard character
                                      matching zero or more characters.
SQL: Experiment with expressions The Select statement is a generalized
thing, and you can take advantage of
that to experiment with the forumlation
of expressions. Unlike real-world SQL,
the TSM Select statement requires that a
table be specified with From: you can
supply a placebo table which always has
only one entry, to yield just one row in
your output. Such a table is Log.
Here's an example to display the current
timestamp:
SELECT CURRENT_TIMESTAMP
Here's an example to display the
timestamp three days ago:
SELECT CURRENT_TIMESTAMP-(3 DAYS) FROM
LOG
SQL: Sorting On the Select statement, use the
ORDER BY parameter specification,
specifying the sort column by name or
relative numeric position.
SQL: String encoding Enclose in single quotes, like 'Joe'.
SQL: Wildcard character               Is percent sign (%), to represent zero
                                      or more occurrences of any possible
                                      character (number, letter, or
                                      punctuation).
See sample in: SQL: NOT LIKE
SQL, last 24 hours Here's an example of seeking table
entries less than a day old where the
table has a timestamp column named
"DATE_TIME":
... WHERE
DATE_TIME>(CURRENT_TIMESTAMP-(1 DAY))
SQL, number format Select command output does not conform
to server NUMberformat settings. There
is no provision for special formatting
of numbers. Your only recourse is to
post-process the results.
SQL, rounding result Do like: SELECT NODE_NAME,
CAST(SUM(CAPACITY * (PCT_UTIL/100)) AS
DECIMAL(yy,z)) as Percent_Utilized FROM
FILESPACES GROUP BY NODE_NAME
                                      where yy is the total number of digits
                                      (precision) and z is the number of
                                      places to the right of the decimal
                                      point (scale). Note that places to the
                                      right are padded with zeros, places to
                                      the left are not.
SQL, specify a set to match in Use the IN keyword, like:
"select ... where stgpool_name in
('BACKUPPOOL', 'TAPEPOOL',
'ANOTHERTAPEPOOL')".
SQL BackTrack Non-Tivoli backup product from BMC
                                      software, for backing up various database
                                      types. To back up to TSM, it uses the TSM
                                      API to store backups of physical files
                                      or logical exports using
                                      pseudo-filenames that include time
stamps, so every time you do an SQL
BackTrack backup ADSM is given a new set
of unique objects. Thus there is never
more than one 'version' of a 'file'. So
versions-exists can safely be set to 1
and retain-extra can be set to zero
(recall that retain-extra affects the
retention of the 2nd, 3rd, etc. oldest
versions of a file, of which there are
none in this case). The
versions-deleted is set to 0 so that
when SQL BackTrack tells ADSM to delete
an object, which it does after the two
weeks you've set it to, ADSM will mark
it for expiration the next time
expiration is run (within 24 hours
typically). The retain-only is set to
zero for the same reason; once SQL
BackTrack decides to delete the file, it
is of no use to retain that
                                      last-good-version any longer.
Ref: www.bmc.com
SQL backup See: TDP for Microsoft SQL.
SQL column width See: SELECT output, column width;
Set SQLDISPlaymode
SQL efficiencies                      Instead of using the construct:
columname='A' or columname='B'
use:
columname in ('A', 'B')
The latter will run in about half the
time.
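A small helper for generating such IN lists from a set of values (naive quoting; assumes the names contain no single quotes):

```python
# Sketch: build a "column IN (...)" predicate from a list of values,
# the faster alternative to chained OR comparisons.
def in_clause(column, values):
    quoted = ", ".join("'%s'" % v for v in values)  # naive SQL quoting
    return "%s IN (%s)" % (column, quoted)

print(in_clause("stgpool_name", ["BACKUPPOOL", "TAPEPOOL"]))
# stgpool_name IN ('BACKUPPOOL', 'TAPEPOOL')
```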
SQL in ADSMv3 Used via 'Select' command.
See available information by doing:
SELECT * FROM SYSCAT.TABLES
SELECT * FROM SYSCAT.TABLES WHERE -
TABNAME='___'
Shows table names, column count, index
column count, whether unique, and
table description. See example under
Select in the in the Admin Ref.
SELECT * FROM SYSCAT.COLUMNS
SELECT * FROM SYSCAT.COLUMNS WHERE -
TABNAME='___'
Enumerates all the columns for that
table.
SELECT * FROM SYSCAT.COLUMNS WHERE -
COLNAME='___'
Shows table name, column name, column
number, type, length, description
SELECT * FROM SYSCAT.ENUMTYPES
Shows type index, name, values,
description
Or use the Web Admin and run the script
Q_TABLES, then run Q_COLUMNS with the
desired table name as parameter.
You can use the following technique to
send the output to a file, with commas
between elements, for absorbing into
your favorite spreadsheet program for
manipulation and pretty printing:
dsmadmc -id=id -pa=password -comma
-out="syscat.tables.csv" "select * from
syscat.tables"
dsmadmc -id=id -pa=password -comma
-out="syscat.columns.csv" "select *
from syscat.columns"
dsmadmc -id=id -pa=password -comma
-out="syscat.enumtypes.csv" "select *
from syscat.enumtypes"
Ref: Admin Guide;
"Using the ADSM SQL Interface",
http://www.uni-karlsruhe.de/~rz57/
ADSM/3rd/handouts/raibeck.ps
(a PostScript file, to print or see
with a utility like the free
Ghostscript or GSview)
SQL node choice IBM recommends that you do NOT use the
same ADSM node for the base ADSM
client and SQL Agent. The SQL Agent has
its own special policy requirements due
to the nature of the design, i.e. each
backup object is always unique. There
can also be coordination issues when
defining the various needed schedules.
IBM also recommends that you keep the
options file separate. In fact, the
design of the GUI requires that the
options file be kept in the SQL Agent
install directory. You can use the same
node, but we do not recommend it.
SQL report formatting See: SELECT output, column width;
Set SQLDISPlaymode; SQL column width
SQL samples Shipped with the server is a scripts.smp
file, containing a lot of interesting
examples of SQL coding for TSM. These
sample scripts can be visually inspected
and adapted; or loaded at TSM install
time via 'dsmserv runfile scripts.smp',
or loaded anytime thereafter into a
running server via 'macro scripts.smp'.
Ref: server Quick Start manual
See also: dsmserv runfile
SQL settings See: Set SQLDATETIMEformat;
Set SQLDISPlaymode; Set SQLMATHmode;
Query SQLsession
SQL string comparisons Are done on a byte-for-byte basis, so
they are case sensitive. Use the LCASE
and UCASE functions as needed to force
a name to either.
SQL TDP See: TDP for Microsoft SQL Server
SQLDISPlaymode See: Set SQLDISPlaymode
/SQLSECURE TDP for SQL V1 function which allows use
of Windows "authentication" (userid and
password) to communicate with the SQL
Server. TDP for SQL V2 improves upon
this by allowing SQLUSERID and
SQLPASSWORD to be stored in the Registry
so that both GUI and command-line can be
used without having to enter the
userid/password; and you also have the
choice of using Windows "authentication"
for communicating with the SQL Server.
See the "/SQLAUTHENTICATION=INTEGRATED"
option.
ssClone An internal server facility created so
as to avoid an HSM file recall during a
backup operation, by performing an
"inline server copy".
SSD IBM: Storage Systems Division; or
Storage Subsystems Division; or
Storage Subsystems Development
SSD RMSS device driver IBM higher-end tape drive opsys driver
software, as for the 3590 and 358x tape
drive series, with different names for
different platforms, such as Atape,
atdd, IBMtape, IBMUltrium, IBMmag.
Found at: FTP://ftp.software.ibm.com/
storage/devdrvr
SSL See: HTTPS
Staggered start for client schedules See: Schedule, Client
Stale Shows up as Copy Status in
'Query DBVolume' or 'Query LOGVolume'
command output, indicating that a
Vary On is in progress to bring a volume
back into service.
Start "Sess State" value from 'Query SEssion'
saying that the session is starting.
See also: Communications Wait;
Idle Wait; Media Wait; RecvW; Run; SendW
Start-stop In tape technology, refers to providing
data to a tape drive irregularly such
that recording must stop, halting the
transport of the media until data is
again available for recording, whereupon
the media is again set into motion.
This is the kind of recording most
frequently found in reality.
Drives which exhibit inferior start-stop
performance can greatly prolong TSM
backup operations.
The underlying problem with file system
backup and drives with mediocre
start-stop performance is in the
"sputtering" way that Backup will send
files as it encounters them in
traversing the file system. Enlarged
transaction buffering will help with
this. A frontal disk storage pool
serving as a consolidation buffer also
does the trick. A more labor-intensive
method would be to have a non-TSM (i.e.,
home-grown) tool run through the file
system to collect the names of all the
candidate files and then initiate a
backup with the -FILEList option, to in
effect cause streaming, eliminating all
the time gaps in candidate discovery
("squeeze the air out"). It's a more
desperate measure, but it may suit some
installations.
Contrast with: Streaming
See also: Backhitch
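The home-grown candidate-collection approach described above can be sketched as follows; the root path and list-file name are illustrative:

```python
import os

def build_filelist(root, listfile):
    """Walk a file system and write one absolute file name per line,
    producing input suitable for the client's -FILEList option.
    Returns the number of names written."""
    count = 0
    with open(listfile, "w") as out:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                out.write(os.path.join(dirpath, name) + "\n")
                count += 1
    return count
```

The resulting list would then be fed to the client, e.g. 'dsmc incremental -filelist=/tmp/candidates.txt' (paths are examples only).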
Start-up window for client schedules See: Schedule, Client
State MOVe MEDia command states are
MOUNTABLEInlib and MOUNTABLENotinlib
(q.v.). Not to be confused with volume
Status.
STATE SQL: Column in BACKUPS table,
identifying the backup state:
'ACTIVE_VERSION' or 'INACTIVE_VERSION'
See also: Active files, identify in
Select; Inactive files, identify in
Select
STARTTime A 'DEFine SCHedule' operand. It is by
schedule, not by node. The only way to
give a node a unique starttime would
be to define a schedule and have only
that node associated with it.
Static A copy group serialization value that
specifies that an object must not be
modified during a backup or archive
operation. If the object is in use
during the first attempt, *SM will not
back up or archive the object. See
serialization. Contrast with Dynamic,
Shared Static, and Shared Dynamic.
STATUS table TSM SQL table containing most of the
information contained in a Query STatus
report (but not server version/release).
Columns: SERVER_NAME, SERVER_HLA,
SERVER_LLA, SERVER_URL, SERVER_PASSSET,
INSTALL_DATE, RESTART_DATE,
AUTHENTICATION, PASSEXP, INVALIDPWLIMIT,
MINPWLENGTH, WEBAUTHTIMEOUT,
REGISTRATION, AVAILABILITY, ACCOUNTING,
ACTLOGRETENTION, LICENSEAUDITPERIOD,
LASTLICENSEAUDIT, LICENSECOMPLIANCE,
SCHEDULER, MAXSESSIONS,
MAXSCHEDSESSIONS, EVENTRETENTION,
CLIENTACTDURATION, RANDOMIZE,
QUERYSCHEDPERIOD, MAXCMDRETRIES,
RETRYPERIOD, SCHEDMODE, LOGMODE,
DBBACKTRIGGER, ACTIVERECEIVERS,
CONFIG_MANAGER, REFRESH_INTERVAL,
LAST_REFRESH, CROSSDEFINE.
STATUS (volumes status) The status of volumes, as in the
underlying database fields reported by
the customer-visible Media and Volumes
tables. Value is one of:
EMPty, FILling, FULl, OFfline, ONline,
PENding
Status info, get 'Query STatus'
Status values See: dsmc status values
STAtusmsgcnt TSM server option specifying the number
of records (times 1000) that will be
processed between status messages during
DSMSERV DUMPDB and DSMSERV LOADDB
commands.
Stem See: Stub
STGDELETE In 'Query VOLHistory', Volume Type to
say that volume was a sequential access
storage pool volume that was deleted.
Also under 'Volume Type' in
/var/adsmserv/volumehistory.backup .
STGNEW In 'Query VOLHistory', Volume Type to
say that volume was a sequential access
storage pool volume that was added.
Also under 'Volume Type' in
/var/adsmserv/volumehistory.backup .
STGPOOLS SQL table of server storage pools.
Columns: STGPOOL_NAME, POOLTYPE,
DEVCLASS, EST_CAPACITY_MB, PCT_UTILIZED,
PCT_MIGR, PCT_LOGICAL, HIGHMIG, LOWMIG,
MIGPROCESS, NEXTSTGPOOL, MAXSIZE,
ACCESS, DESCRIPTION, OVFLOCATION, CACHE,
COLLOCATE, RECLAIM, MAXSCRATCH,
REUSEDELAY, MIGR_RUNNING,
MIGR_MB (amount migrated),
MIGR_SECONDS, RECL_RUNNING, RECL_VOLUME,
CHG_TIME, CHG_ADMIN, RECLAIMSTGPOOL,
MIGDELAY, MIGCONTINUE
STGREUSE In 'Query VOLHistory', Volume Type to
say that volume was a sequential access
storage pool volume that was reused.
Also under 'Volume Type' in
/var/adsmserv/volumehistory.backup .
This Type is unusual, and has been
associated with ANR0102E problems.
STK Short id for Storage Technology Corp.
http://www.storagetek.com/
They have a Customer Resource Center for
the submission of questions.
STK 9710 APAR IX75639 advised of ANR8420E I/O
errors occurring on STK9710 while
accessing DLT 7000 drive: errpt
indicates SCSI Adapter errors. Correct
by enabling the FAST DRIVE LOAD option
on the STK 9710 Lib, which seems to be a
requirement for this Lib/Drive to work
with ADSM. (Set the FAST DRIVE LOAD
via the front panel.)
STK 9730 A model in the "TimberWolf" family.
Is a rack-mountable, SCSI-based
automated library about the size of a
workstation. Without tape drives, the
9730 weighs 50 kg (110 lbs.) and is the
least expensive library in the series,
available with 18 or 30 cells, and 1-4
DLT drives. May be driven by ACSLS.
Customer experience varies: some find
problematic hardware with DLT7000
drives, as of 9/98. See "DLT7000".
STK 9840 StorageTek tape drive technology, using
cartridge of same form factor as IBM
3480/3490/3590, which is to say 1/2"
tape, but dual-hub (diagonally
opposite). Used in STK PowerHorn lib.
Customers report this technology to be
"rock solid".
Capacity: 20 GB basic, 60 GB compressed
(LZ1 method, 3:1)
Recording method: linear serpentine, 288
tracks, servo tracking
Load time: 12 seconds to 1st data
transfer
Average access time: 11 seconds
Throughput: 10 MB/sec sustained.
Tape speed: read/write @ 2 m/s; search @
8 m/s
Rewind time: 16 s max
Cartridge: essentially square; mid-point
load; dual hub (dual spool), on
corner-to-corner diagonal of cartridge;
metal particle tape.
TSM definition: DEFine DEVclass
DEVType=ECARTridge FORMAT=9840|9840C
www.storagetek.com/products/tape/9840/
STK L700 StorageTek floor-standing SCSI tape
library in a silo design, with STK 9840
or DLT or LTO tape drives.
STK L700e StorageTek floor-standing tape library
in a silo design. 678 cartridge slot
capacity, extendable to 1344. Supports
up to 12 StorageTek high-performance
T9840 and/or high-capacity T9940 tape
drives or up to 20 DLT, SDLT or LTO
Ultrium tape drives; or mix any of these
drives in different combinations.
There is a web interface to the library.
Slot 10 of a STK L700 Library is the
upper import/export slot of this bulk
station.
Connectivity: a native 2Gb Fibre Channel
optical interface.
AIX handles as:
Resource Name: lb0
Resource Class: library
Resource Type: TSM-FCSCSI
Storage Agent LAN-free backups introduced in TSM 3.7
relieve the load on the LAN by
introducing the Storage Agent. This is a
small TSM Manager server (without a
Database or Recovery Log) which is
installed and run on the TSM client
machine. It handles the communication
with the TSM server over the LAN but
sends the data directly to SAN attached
tape devices, relieving the TSM server
from the actual I/O transfer.
Ref: TSM 5.1 Technical Guide
See: Lan-Free Backup; Server-free
Storage Agent and logging/accounting The Storage Agent operates unto itself,
and does not produce logs or accounting
records, and so there are no entries in
either the TSM server Summary table or
accounting records to identify Storage
Agent actions. As of TSM 5.2 there
exists TSM server option DISPLAYLFINFO
to cause Storage Agent identification.
With it, records for Storage Agent
activity will appear in the Summary
table and TSM server accounting records,
tagged with "NodeName(StorageAgentName)"
instead of just NodeName. This allows
you to benefit from further information
and distinguish ordinary, direct
client-server sessions from those
performed through a Storage Agent.
Storage pool A named set of storage volumes that is
used as the destination for Backup,
Archive, or HSM migrate operations.
May be arranged in a hierarchy, for
downward migration according to age.
The storage pool is assigned to a
Devclass.
Can also be a Copy Storage Pool, to
provide backup of one or more levels
of the hierarchy.
Can be an AIX file, prepped with the
dsmfmt cmd, which serves as a random-
access storage pool; or a raw logical
volume.
Files within a given storage pool are
not segregated by management class:
files belonging to different management
classes may exist on the same volume.
Is target of: DEFine COpygroup ...
DESTination=PoolName
and: DEFine STGpool ...
NEXTstgpool=PoolName
and: DEFine Volume PoolName VolName
Note that storage pools cannot span
libraries.
Storage pool, assign You do 'DEFine STGpool' to assign it to
a Devclass; then do 'DEFine COpygroup'
to make it part of a Copy Group in a
Management Class, which is under a
Policy Set, which needs to be Activated.
Storage pool, back up Have a Copy Storage Pool, and perhaps
nightly issue the command:
'BAckup STGpool PrimaryPoolName
CopyPoolName
[MAXPRocess=N]
[Preview=Yes|VOLumesonly]'
Storage pool, Copy Storage Pool, See: DEFine STGpool (copy)
Storage pool, disk You may, of course, allocate storage
pools on disk.
In *SM database restoral, part of that
procedure is to audit any disk storage
pool volumes; so a good-sized backup
storage pool on disk will add to that
time.
Considerations:
- Because there is no reclamation for
random access storage pools:
- disk fragmentation is a concern;
- aggregates are not rebuilt, so as
objects within an aggregate expire,
that space is not freed until all
objects in the aggregate have
expired. This can cause inefficient
utilization of the disk space over
time.
- FILE device classes could be used,
but represent configuration and
performance concerns.
- While such an environment is
technically possible, it is not the
intended *SM usage model, and IBM
does not recommend it at this time.
See: Backup through disk storage pool
Storage pool, disk, define See: DEFine STGpool (disk)
Storage pool, disk, performance There have been reports that reading
from a disk storage pool is done a file
at a time and not buffered, "because it
is a random access device". This
dramatically impedes the performance of
BAckup STGpool and Reclamation.
Another drawback from using disk storage
pools is that they nullify the
advantages of multi-session
restore. From the Client manual, in the
description of the RESOURceutilization
option: "If all of the files are on
disk, only one session is used. There is
no multi-session for a pure disk storage
pool".
See also: Multi-Session Restore
Storage pool, HSM, define 'DEFine MGmtclass MIGDESTination=StgPl'
Default destination: SPACEMGPOOL.
Storage pool, HSM, update 'UPDate MGmtclass MIGDESTination=StgPl'.
If this updated MGmtclass is in the
active policy set, you will need to
re-ACTivate the POlicyset for the change
to become active.
Storage pool, last used date/time Alas, *SM does not allow customers to
determine when the storage pool was last
used for reading or writing: there is no
command to query for this information.
Storage pool, number of files in, 'Query OCCupancy [NodeName]
query [FileSpaceName]
[STGpool=PoolName]
[Type=ANY|Backup|Archive|
SPacemanaged]'
Storage pool, outside library See: Overflow Storage Pool; OVFLOcation
Storage pool, reclaimable volumes SELECT VOLUME_NAME,STGPOOL_NAME,-
PCT_UTILIZED FROM VOLUMES WHERE -
STATUS='FULL' AND PCT_RECLAIM>50
Storage pool, rename ADSMv3:
'REName STGpool PoolName NewName'
Storage pool, restore 'RESTORE STGpool PrimaryPoolName'
Storage pool, skip during writing You can cause this to happen by making
and go to next in hierarchy its ACCess=READOnly; or change the
MAXSize to a silly, low value.
See: UPDate STGpool
Storage pool, space used 'Query OCCupancy [NodeName]
[FileSpaceName]
[STGpool=PoolName]
[Type=ANY|Backup|Archive|
SPacemanaged]'
Storage pool, tape, define See: DEFine STGpool (tape)
Storage pool, tape, prevent usage 'UPDate DEVclass DevclassName
MOUNTLimit=0'
Storage pool, volumes in 'Query Volume STGpool=Pool_Name'
Storage Pool Count As seen in Query DEVclass report.
Is the number of storage pools that are
assigned to the device class, via
'DEFine STGpool'.
Storage pool device class A storage pool is defined with a single
device class. Thus, it is not possible
to have both FILE and tape participate
in the stgpool, as you might want to do
to effect a copy storage pool where you
have only a single tape drive.
Storage pool disk volume which no In the history of a TSM server you might
longer exists, delete end up with some storage pool disk
volumes which physically no longer
exist, but which are still known to TSM.
They are non-existent, and in TSM are
offline. How do you clean them out?
Trying to create an imposter volume so
that you can delete it is virtually
impossible, because content simply
doesn't match TSM expectations. A
Delete Volume fails. One customer
reports success in using Restore Volume:
it restores some data and then deletes
the old, original volume. Obviously,
though, you want TSM administration
procedures in place to avoid getting
into this situation.
Storage pool hierarchy, defining Use either 'DEFine STGpool' or
'UPDate STGpool' and use
"NEXTstgpool=PoolName" to define the
next storage pool down in the hierarchy.
So if you had "diskpool" and "tapepool",
you would define the latter to be the
next level by doing:
'UPDate STGpool diskpool
NEXTstgpool=tapepool'
Storage pool logical volume, max size Under AIX 4.1, ADSM storage pool logical
volumes are limited to 2GB in size, as
are files, because of AIX programming
restrictions. AIX 4.2 relieves that
limit.
Storage pool migration, query 'Query STGpool [STGpoolName]'
Storage pool migration, set The high migration threshold is
specified via the "HIghmig=N" operand of
'DEFine STGpool' and 'UPDate STGpool'.
The low migration threshold is specified
via the "LOwmig=N" operand.
Storage pool naming If you employ disciplined, methodical
naming conventions to your storage
pools, you will make your life a lot
easier when it comes to performing
administration, as various commands
(e.g., Query MEDia) allow you to specify
the storage pool name with wildcard
characters.
Example: You have a hierarchy of disk
and tape for your three kinds of data,
plus a local copy storage pool and an
offsite pool...
Disk:
POLSET1.STGP_ARCHIVE_DISK
POLSET1.STGP_BACKUP_DISK
POLSET1.STGP_HSM_DISK
Tape:
POLSET1.STGP_ARCHIVE_3590
POLSET1.STGP_BACKUP_3590
POLSET1.STGP_HSM_3590
Copy:
POLSET1.STGP_ARCHIVE_COPY
POLSET1.STGP_BACKUP_COPY
POLSET1.STGP_HSM_COPY
Offsite:
POLSET1.STGP_ARCHIVE_OFFSITE
POLSET1.STGP_BACKUP_OFFSITE
POLSET1.STGP_HSM_OFFSITE
The commonality in the names facilitates
the use of wildcards to seek, for
example, full volumes in the Offsite
pool set that can be ejected from your
library and be sent offsite.
Storage pool occupancy by node SELECT STGPOOL_NAME, -
SUM(NUM_FILES) AS "Total Files", -
SUM(PHYSICAL_MB) AS "Physical MB",-
SUM(LOGICAL_MB) AS "Logical MB" -
FROM OCCUPANCY -
WHERE NODE_NAME='UPPER_CASE_NAME' -
GROUP BY STGPOOL_NAME
Storage pool space and transactions TSM has two basic media types for
storing data: random (disk) and
sequential (tape). Because of the
different characteristics of the two
types of media, TSM manages each
differently, particularly when data is
to move to the next storage pool...
Disk volumes defined to a *SM storage
pool have a fixed size, allowing the
server to determine the capacity of the
storage pool. Since these volumes are
created and managed by TSM, it is able
to determine during the beginning of a
transaction if there is enough space in
the disk storage pool to contain the
data to be stored. It is important to
note that if space runs short, the storage
pool is approaching fullness, and
migration should be run to move data to
make room for new data entering the TSM
storage hierarchy. However, if migration
is disabled or the file exceeds the
maximum file size for files allowed in
the disk pool (MAXSize), TSM will move
new data to the next storage pool in the
hierarchy. This is only possible because
TSM knows the capacity of the disk
pool and manages the allocation of the
disk volumes.
Sequential storage media and storage
pools are different in several ways.
First, sequential (tape) media is
variable length and its drives are
capable of compression to increase the
amount of data it can store. This
prevents TSM from knowing the absolute
capacity of the storage pool or tapes,
and so when the transaction begins it is
not possible to determine how much data
a storage pool tape will receive. TSM
can only check to ensure that the file
does not exceed the maximum file size
for this sequential storage pool. If
TSM is able to allocate a volume, it
proceeds to store data on it. Secondly,
sequential storage pools tend to be open
ended or are capable of adding volumes
to the pool. Again, TSM cannot know how
much these volumes are capable of
holding and so cannot determine if the
transaction data will fit on the volume.
However, TSM is typically able to
continue storing data if the volume
fills, by allocating another sequential
volume. Again, as with disk storage
pools, if the sequential storage pool
becomes full, migration will move data
to the next pool to make room for new
data.
FILE volumes are a combination of disk
and sequential media. TSM allocates
these volumes on disk media but treats
them as sequential. Hence, TSM does not
presume to know the amount of space of
the scratch file volumes. Typically,
those using FILE devclass will allow
enough scratch volumes to handle their
daily workload and allow migration to
ensure enough space is available in the
pool. If there are files larger than the
FILE volumes and it is necessary to
store the data in the next storage pools
then it is recommended that the storage
pool be changed to a Disk pool rather
than a File pool.
Ref: APAR IY00820
Msgs: ANS1329S
See also: MAXSize
Storage pool volume, query 'Query Volume Vol_Name'
Storage pool volume, long gone, delete If you're fully following TSM procedures
and no server defects affect operations,
you should not encounter situations
where you end up with a phantom storage
pool volume: one that the storage pool
thinks it still has, but that has long
been gone from the TSM system. If you
do end up with that situation, here are
possible ways to proceed:
If a disk volume:
- Halt the TSM server;
- From the server directory do:
'dsmserv auditdb diskstorage fix=yes'
If a removable volume:
- 'CHECKOut LIBVolume ... FORCE=Yes'
Later, to bring back: 'CHECKIn LIBVolume'
Storage pool volumes, count SELECT STGPOOL_NAME,count(*) FROM -
VOLUMES GROUP BY STGPOOL_NAME
Storage pool volumes, how used There is no definitive information on
how TSM uses multiple volumes in a
storage pool, as during Backup. Users
report a "write each file to one volume"
pattern: when the file size is huge
(e.g, a full DB2 backup) disk volumes
get filled one at a time; but in the
case of a number of modest-sized
files, TSM seems to spread them over all
the volumes.
Storage pool volumes, query 'Query Volume [STGpool=Pool_Name]'
SELECT STGPOOL_NAME, COUNT(*) AS -
"# Vols." FROM VOLUMES GROUP BY -
STGPOOL_NAME
Storage pool volumes and performance For DISK (random access) volumes, *SM
spreads its activity out over multiple
volumes, so you're better off with more
small disks than a few larger ones. *SM
creates a (LvmDiskServer) thread for
each volume (see "Processes, server
(dsmserv's)", so you get more
parallelization.
The size of your aggregates, as governed
by TXNGroupmax and TXNBytelimit, affects
the speed of operation across storage
pools.
See also: MOVEBatchsize
Storage pools, number of SELECT COUNT(STGPOOL_NAME) AS -
"Number of storage pools" FROM STGPOOLS
Storage pools, query 'Query STGpool'
Reports pool names, device class,
capacity, %utilization, migration, and
next storage pool.
Storage pools and database backup Do not use macros to schedule backup of
your storage pools and database because
they would inappropriately run in
parallel (in that the Backup server
command generates a parallel process).
Instead, do the following in this order:
1) Back up your storage pools
2) Update the volumes to change the
access to OFfsite for your
newly-created copy storage pool
volumes
3) Back up your database
4) Back up your devconfig and volume
history files (external to ADSM)
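One way to keep the four steps above strictly serial is to issue them one at a time from a wrapper script, checking each return code before proceeding. The sketch below only builds the ordered admin-command list; the pool, device-class, and volume-selection details are assumptions to be adapted per site:

```python
def nightly_commands(primary="BACKUPPOOL", copypool="COPYPOOL",
                     devclass="3590CLASS"):
    """Return the nightly offsite-cycle admin commands, in order.
    Pool and device-class names are illustrative; run each via
    'dsmadmc -id=... -pa=... "<command>"' and verify its exit
    status before issuing the next."""
    return [
        # 1) Copy the primary pool; Wait=Yes keeps it synchronous.
        f"backup stgpool {primary} {copypool} wait=yes",
        # 2) Mark the newly written copy volumes offsite (the WHERE
        #    filters shown are one plausible selection, not the only one).
        f"update volume * access=offsite wherestgpool={copypool} "
        "whereaccess=readwrite wherestatus=filling,full",
        # 3) Back up the database.
        f"backup db devclass={devclass} type=full wait=yes",
        # 4) Save device configuration and volume history.
        "backup devconfig",
        "backup volhistory",
    ]
```

Driving the sequence from an external script, rather than a server macro, preserves the required ordering.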
StorageTank Familiar name for IBM's first iteration
of its TotalStorage SAN File System,
circa 2003/11. It is a by-product of the
company's Almaden Research division,
designed to facilitate high-performance
heterogeneous file sharing and access in
a SAN environment. It required an IBM
disk array and AIX or Windows.
StorageTek 9710 StorageTek 9710.
With ADSM V2, a 3rd party product called
ADSM Enhanced Server was required to
support the 9710. Running STK's ACSLS
(which is server software for the robot)
ADSM talks to ACSLS via the Enhanced
Server code.
Starting with ADSM V3, you can run an
STK9710 with ADSM using only the IBM
drivers, OR you can use ADSM V3 and
talk to the 9710 via ACSLS.
StorageTek 9710 and 9714, labeling If having problems labelling tapes,
tapes (I/O errors) check that the library is
in "Fastload" mode, which ADSM needs.
StorageTek 9710/9714 Library Audit Make sure FAST LOAD is enabled on the
time 9710 to minimize AUDit LIBRary time (it
can cause mount processing delays if it
is disabled).
And use the Checklabel=barcode option
on the AUDit LIBRary command so that it
won't mount each tape and read the
header. The audit then takes only 1-2
minutes at most.
StorageTek 9730 As of 1998, StorageTek had available
software so that ADSM would see the
library as a 9710.
Stored Size In ADSMv3 'Query CONtent ...
Format=Detailed': The size of the
physical file, in bytes. If the file is
a logical file that is stored as part of
an aggregate, this value indicates the
size of the entire aggregate.
The inability to see the actual size of
files from the server is a major
annoyance in being able to produce
reports and examine problems. This
information SHOULD be possible to get
from the server: after all, when you do
a query from the client you certainly
see actual file sizes.
StorEdge L1000 element addresses See IBM site Technote 1052348.
StorWatch 1998 IBM product: storage resource
management software products integrated
with storage hardware.
Streaming In tape technology, refers to providing
data to a tape drive continuously such
that recording is continuous: the media
never stops moving.
This is relatively rare in reality,
except in applications such as media
copying and real-time data acquisition
(e.g., scientific experiments and field
studies).
Contrast with: Start-stop
STRMNTBRMS The BRMS maintenance task, in the backup
of Domino data on AS400/iSeries, that
handles expiration of backup data etc.
Stub file A file that replaces the original file
on a local file system when the file is
migrated to ADSM storage. A stub file
contains the information necessary to
recall a migrated file from the server
storage pool (HSM file management
overhead). This information consumes 511
bytes. Because file systems are usually
allocated in blocks larger than that,
HSM exploits the blksize-511 byte area to
store a copy of the leading data from
the (migrated) file, for convenience of
limited inspection via operating system
commands like the Unix 'file' and 'head'
commands.
See also: dsmmigundelete; Leader data
Stub file size (HSM) The size of a file that replaces the
original file on a local file system
when the file is migrated to ADSM
storage. The size specified for stub
files determines how much leader data
can be stored in the stub file. The
default for stub file size is the block
size defined for a file system minus 1
byte.
Define via 'dsmmigfs -STubsize=NNN'.
The stub contains information ADSM needs
to recall the file, plus some amount of
user data. ADSM needs 511 bytes, so
the amount of data which can also reside
in the stub is the defined stub size
minus the 511 bytes. When you do a
dsmmigundelete, ADSM simply puts back
enough data to recreate the stubs, with
0 bytes of user data (since you don't
want ADSM going out to tapes to recover
the rest of the stub). When the file
gets recalled, then migrated again, we
once again have user data that we can
leave in the stub, so the stub size
goes back to its original value.
Stub files, in restoral -RESToremigstate=Yes (default) will
restore the files only as stubs.
Stub files, recreate 'dsmmigundelete FSname'
Sub-file backups A.k.a "Adaptive differencing" and
"adaptive sub-file backup". Available as
of TSM 4.1, in the Windows client
(intended for laptop computer users),
and supported by all TSM 4.1 servers.
Operates by creating a /cache
subdirectory under the /baclient
directory. (Make sure you exclude that
from backups!)
Made possible by doing Set SUBFILE on
the TSM server.
Can control what gets backed up by using
include.subfile, exclude.subfile.
Caveats:
- Limited to 2 GB files, max.
- If the delta file grows beyond a fixed
size of the base, the file is backed
up again to create a new base, which
is a network load.
- Reduces the amount of data backed up,
but restorals are still voluminous: a
restore requires the base and the last
delta file - which leads to extra tape
mounts without collocation.
- Backups mysteriously stop when the
client subfile cache becomes
corrupted. Fix that by deleting the
entire cache directory and letting it
build a new one on the next backup.
- The stats in dsmsched.log show the
size of the original file, not the
size of the subfile that actually got
backed up.
- Only the backup-complete stats will
reveal how much data was actually sent.
See also: Adaptive differencing;
Set SUBFILE
SUBFILEBackup (-SUBFILEBackup=) V4 Windows client option for the options
file or command line, specifying whether
adaptive subfile backup is used.
(This option can also be defined on the
server.) Syntax:
SUBFILEBackup No | Yes
Default: No
SUBFILECACHEPath (-SUBFILECACHEPath=) V4 Windows client option for the options
file or command line, specifying the
path where the client cache resides for
adaptive subfile backup processing. The
cache directory houses reference files
and the small database which manages
them. If a path is not specified, TSM
creates a path called \cache under the
directory where the TSM executables
reside. The parent pathname of the
pathname specified by the subfilecachep
option must exist. For example, if
c:\temp\cache is specified, c:\temp must
already exist. Note: This option can
also be defined on the server. Syntax:
SUBFILECACHEP Path_Name
SUBFILECACHESize (-SUBFILECACHESize=) V4 Windows client option for the options
file or command line, specifying the
client cache size for adaptive subfile
backup. Note: This option can also be
defined on the server. Syntax:
SUBFILECACHES Size_in_MB
where the size can be from 1 - 1024 MB.
Default: 10 (MB)
SUbdir (-SUbdir=) Client User Options file (dsm.opt)
option or dsmc option to specify whether
directory operations should include
subdirectories, on commands: ARCHIVE,
Delete ARchive, Query ARCHIVE, Query
BACKUP, RESTORE, RETRIEVE, SELECTIVE.
Note: When restoring a single file, DO
NOT use -SUbdir=Yes, because it may
cause the directory tree to be restored
(see APAR IC21360).
Specify: Yes or No
Default: No
SUbdir, query 'dsmc Query Options' in ADSM or 'dsmc
show options' in TSM; look for "subdir".
Subquery An SQL operation where a Select is done
within a Select: the internal Select is
a Subquery. The Subquery is like a
subroutine, and as such must have the
same number and type of columnar results
as the Where condition which calls it.
The Subquery extracts a set of data from
the table it processes, from which the
higher query can select elements
according to its query.
See also: Join
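TSM's SQL subset is limited, but the shape of a correlated subquery can be demonstrated with Python's bundled sqlite3; the table below loosely mimics the VOLUMES table, with invented data:

```python
import sqlite3

# In-memory table loosely modeled on TSM's VOLUMES table (invented data).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE volumes "
            "(volume_name TEXT, stgpool_name TEXT, pct_utilized REAL)")
con.executemany("INSERT INTO volumes VALUES (?,?,?)",
                [("A00001", "TAPEPOOL", 95.0),
                 ("A00002", "TAPEPOOL", 10.0),
                 ("A00003", "COPYPOOL", 80.0)])

# Subquery: volumes less utilized than the average of their own pool.
# The inner SELECT runs per outer row, keyed on v.stgpool_name.
rows = con.execute(
    "SELECT volume_name FROM volumes v "
    "WHERE pct_utilized < (SELECT AVG(pct_utilized) FROM volumes "
    "                      WHERE stgpool_name = v.stgpool_name)"
).fetchall()
print(rows)  # → [('A00002',)]
```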
Subscription See: Enterprise Configuration and Policy
Management
SUBSTRing SQL function Format: SUBSTR( column_name,
first_position, length) = 'string'.
You can use this in SELECT or in WHERE.
The separators are always ",", and you
may need to put a blank after each
comma.
SUG Abbreviation for an APAR closure reason,
indicating that it was closed as a
Suggestion for future functionality.
Some issues in software may extend
beyond the current architecture, or into
other areas of the product, and cannot
feasibly be addressed as an isolated
work item. Instead, they will be
addressed in the longer term development
of the product, to be worked into the
overall architecture in a careful,
deliberated manner, with all parties in
the development area aware.
SUM SQL statement to yield the total of all
the rows of a given numeric column.
Example: SELECT SUM(NUM_FILES) AS -
"Number of filespace objects" FROM -
OCCUPANCY
See also: AVG; COUNT; MAX; MIN; ORDER BY
SUMMARY table SQL table added in TSM 3.7, as described
in that server's Readme file. The
activity summary table contains
statistics about each client session and
server process, saved for as many days
as specified in Set SUMmaryretention
(q.v.). It is a summary of the whole
session - which contrasts with TSM
accounting records, where there may be
multiple threads in a session and an
accounting record for each, which makes
for separate pieces of information.
Table contents:
1. START_TIME Start Time
2. END_TIME End Time
3. ACTIVITY Process or Session
Activity Name:
'EXPIRATION',
'FULL_DBBACKUP'
'MIGRATION' 'BACKUP'
'RESTORE' 'TAPE MOUNT'
'RECLAMATION'
'STGPOOL BACKUP'
'RETRIEVE'
4. NUMBER Process or Session
Number
5. ENTITY Associated user or
stgpool(s) associated
with the activity
6. COMMETH Communications Method
7. ADDRESS Network address
8. SCHEDULE_NAME Schedule Name
9. EXAMINED Number of objects (files
and/or dirs) examined by
the process/session
10. AFFECTED Number of objects
affected (moved, copied
or deleted) by the
process/session
11. FAILED Number of objects that
failed in the process or
session
12. BYTES Bytes processed
13. IDLE Seconds that the session
or process was idle
14. MEDIAW Seconds that the session
or process was waiting
for access to media
(volume mounts)
15. PROCESSES Number of processes used
16. SUCCESSFUL Is YES or NO.
As of 2003/04 isn't
useful for determining
the success of a client
operation. Corresponds
to the "Normal server
termination indicator"
in the TSM server
accounting records,
which basically says
that the session between
client and server ended
normally.
Beware that the SUMMARY table has been
the subject of many APARs and attempted
fixes, so may not be fully reliable. As
one customer put it: the Summary Table
is a notoriously dubious source of
information. It was broken again in TSM
5.1 (see APAR IC33455). For monitoring
client status, use Query EVent
Format=Detailed or the EVENTS table; or
use the TSM accounting records.
There are IBM site Technotes (1155023,
etc.) describing seeming inconsistencies
in the Summary table.
See also: Accounting;
Set SUMmaryretention
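Bearing in mind the reliability caveats above, a sketch of a typical query against the table (the activity and grouping choices are arbitrary examples):

```sql
-- Illustrative: total bytes moved per client node by BACKUP
-- sessions, using the ACTIVITY, ENTITY, and BYTES columns above.
SELECT ENTITY, SUM(BYTES) AS "Bytes backed up"
  FROM SUMMARY WHERE ACTIVITY='BACKUP'
 GROUP BY ENTITY
```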
Sun client level software Can run 2.5.1 code on the 2.6 machine
without problem.
Sun client performance Try setting "DISKMAP NO" in dsmserv.opt.
This setting can improve performance
with larger disk pools and with some
disk sub-systems. To get the best disk
storage pool performance on Sun, IBM
recommends using the raw-partitions (see
the reference manual or the help on
"define vol" and the notes on the disk
device class).
Sun system, restoring via ADSM Use Solaris jumpstart to rebuild from
ADSM backups. The ADSM client code is
loaded into the mini-root that Solaris
runs when the box is network booted.
This client code can then contact the
ADSM server and restore the directories
/ /opt /usr and so on.
Beware that mount point directories
cannot appear, in that they are
overlaid by mounts when the backup is
performed.
Sun system raw partitions When creating the partition with the
/etc/format utility, do not include
cylinder 0 (zero) in the partition
intended for use as a raw partition.
Note that Solaris 2.5.1 limits partition
size to 2 GB.
Sun third-party hardware - watch out Sun sells various third-party hardware,
such as FibreChannel HBAs. Customers
report finding that Qlogic HBAs bought
from Sun would not work with the IBMTape
driver, for example; but purchased
directly from Qlogic, the card would
work fine. Sun substituted microcode to
operate with Sun disks - not others.
SuperDLT (SDLT) New in 2000.
Capacity: 110 GB native; 220 GB with 2:1
compression.
Brings servo-positioning to DLT via
Laser Guided Magnetic Recording (LGMR)
system and Pivoting Optical Servo (POS)
system uses optical servo tracks on the
back coating of the tape: this gives DLT
better start-stop performance than its
previous incarnation, and eliminates the
need for pre-formatting tapes.
Backward read compatible with DLT 4000,
DLT 7000 and DLT 8000 drives, using
DLTtape IV media.
DLT is not an open architecture
technology - only Quantum makes it - a
factor which has caused customers to
gravitate toward LTO instead.
http://www.dltape.com/superdlt
See also: DLT
SuperDLT-2 Next generation of SDLT with 160 GB
native capacity (320 GB with 2:1
compression).
Superuser The supreme, most powerful account in an
operating system. In Unix, it is
"root"; in Windows, it is the System
account.
SWAP Secure Web Admin Proxy
Sybase backups See the product SQL-Backtrack for Sybase
from BMC Software (http://www.bmc.com).
You'll also need the TSM OBSI module.
Symbolic link A Unix file system object which serves
as an "alias" to another file by
symbolically naming the target file.
Is created by the 'ln -s' command.
The nature of the data involved in a
symbolic link means that it will not be
stored solely in the TSM database, as
directories and empty files can be: the
symbolic link will become a storage pool
object.
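The mechanics can be illustrated with a small sketch (file names are arbitrary):

```shell
# Create a target file and a symbolic link that names it.
echo "data" > target.txt
ln -s target.txt alias.txt
# readlink shows the symbolically-named target stored in the link:
readlink alias.txt        # prints: target.txt
# Reading through the link follows it to the target's data:
cat alias.txt             # prints: data
```

Because the link's content is the target's name, stored as data, TSM treats it as a storage pool object rather than a database-only entry.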
Symbolic links (Unix) and handling by Backup (incremental or selective): backs
ADSM operations up the symlink itself, and not the
target of the symbolic link, unless
SUbdir=Yes is in effect, in which case
it will back up the symbolic link plus
any files and directories that it
points to.
Restore, when symlink was to a file:
Restores the symlink, regardless of
whether the file it points to still
exists.
Restore, when symlink was to directory:
- Without the files in the directory,
and the symbolic link does not exist
in the file system, nothing is
returned.
- Along with the files in the directory
and the symbolic link does not exist
on your file system, TSM builds the
directory and puts the files in that
directory. If the subdir option is
set to yes, TSM recursively restores
all subdirectories of the directory.
- And the symbolic link already exists:
the result depends on how the
FOLlowsymbolic option is set; if it
is set to:
Yes The symbolic link is restored
and overwrites the directory.
If FOLlowsymbolic=Yes is in
effect, a symbolic link can be
used as a virtual mount point.
No TSM displays an error message.
(No is the default.)
Archive: Backs up the target of the
symlink, under the name of the symlink.
See also: ARCHSYMLinkasfile;
FOLlowsymbolic
Symbolic link restoral characteristics Symbolic links are restored with the
same owner and group they had at Backup
time; but their timestamp is that of
Restoral time rather than Backup time,
in that symbolic links have to be
regenerated rather than physically
restored.
SYMbolicdestination Client System Options file (dsm.sys)
option to specify a symbolic ADSM server
name. For SNA communication.
Default: none
SysBack Short but official name of System Backup
& Recovery for AIX (V5: 2001).
Withdrawn from marketing, as announced
2004/12/07, replaced by IBM Tivoli
Storage Manager for System Backup and
Recovery, which thereafter took the
SysBack name.
SysBack is a comprehensive system
backup, restore, and reinstallation tool
for AIX systems. Provides bare metal
restore capabilities. Any feature may
be executed from either the AIX command
line or by using the SMIT menu
interface.
System Files The pagefile, Registry, etc.
The Windows Client manual stipulates
that you should exclude System Files per
se from backups: they are separately
backed up as system objects and should
not be backed up as ordinary files. A
dsm.smp sample exclude list is provided
with the install.
System Files, list of There is no list of system files: you
simply enumerate them via a Windows
lookup, as TSM does via the Windows API
function SfcGetNextProtectedFile().
TSM 5.2's client relays such information
to you as a convenience feature, via its
Query SYSTEMInfo command.
SYSTEM OBJECT Name of filespace created in TSM backups
of the Windows system state.
"System Object" data (including the
Registry) cannot be the subject of TSM
Archive operations. Instead, you could
use MS Backup to Backup System State to
local disk, then use TSM to Archive
this.
Ref: "Determining what files get backed
up as part of your system objects"
http://www.ibm.com/support/entdocview.wss
?uid=swg21141874
System Object, restore to different The receiving machine must have the same
machine hostname, and it must have identical
hardware, as you are restoring the
Registry, which includes hardware
information. See redbook "Deploying the
Tivoli Storage Manager Client in a
Windows 2000 Environment".
System Objects See: Windows NT System Objects
System privilege, grant 'GRant AUTHority Adm_Name
CLasses=SYstem'
System Protected Under Windows 2000, Microsoft
implemented the concept of "system
protected" files. Win2K keeps a catalog
of all the files it considers "system
and boot files", and they are flagged as
"system protected". Those files are
considered part of Win2K "system state",
and are all backed up and restored as a
set. When you run backups via the
scheduler on Win2K, TSM gets the whole
Microsoft-defined "System state", which
includes the "system protected files",
plus Active Directory, plus COM+DB, plus
Registry, and a bunch of other stuff,
depending on whether it's WIN2K or Win2K
pro. When you run backups via the GUI
on Win2K, you must specifically select
SYSTEM OBJECT to get a backup of "system
state".
Ref: "TSM 3.7.3 and 4.1 Technical Guide"
redbook
System State (Windows) Windows 2000 logical grouping of the key
system files and databases which in
combination define the state of the
Windows system. Constituents:
Active Directory, Boot Files, COM+ Class
Registry, Registry, Sys Vol.
Does not include things like Removable
Storage Management database.
SYSTEMObject Windows: The designated name of the
System Objects.
In 5.2 you can exclude System Object
from backups by coding:
DOMain -SYSTEMObject
Systems Network Architecture Logical A set of rules for data to be
Unit 6.2 (SNA LU6.2) transmitted in a network. Application
programs communicate with each other
using a layer of SNA called Advanced
Program-to-Program Communication (APPC).
Discontinued as of TSM 4.2.
-TABdelimited dsmadmc option for reporting with output
being tab-delimited.
Contrast with -COMMAdelimited.
See also: -DISPLaymode
Tables, SQL 'SELECT * FROM SYSCAT.TABLES'
Tape, add to automated library 'CHECKIn LIBVolume ...'
Note that this involves a tape mount.
Tape, audit (examine its barcode 'mtlib -l /dev/lmcp0 -a -V VolName'
to assure physically in library) Causes the robot to move to the tape and
scan its barcode.
'mtlib -l /dev/lmcp0 -a -L FileName'
can be used to examine tapes en masse, by
taking the first volser on each line of
the file.
Tape, bad, handling See: Volume, bad, handling
Tape, erase There are times that you need to
actually erase a tape, either to satisfy
legal requirements or, in retiring a
tape, to obliterate data on TSM tapes
whose contents have expired or been
copied. The tapeutil/ntutil commands
have an Erase function, readily usable
from the command line or prompting.
In the Unix environment, the
'tctl -f /dev/rmt_ erase' command can do
the deed.
See: ntutil; tapeutil
Tape, identify physically in library There may be times when you are unsure
as to which is actually tape XXXXXX in the
library. Some ways to find out:
- If the library provides a means to
query its database, try to locate the
tape by cell that way. You may also be
able to tell by looking at the
statistics for the number of times the
tape has been mounted.
- Cause the tape to be mounted as you
watch, which certainly establishes
which volume the systems think it is.
You can do this from ADSM:
'AUDit Volume VolName Fix=No';
or outside of ADSM use, like:
'mtlib -l /dev/lmcp0 -m -f /dev/rmt?
-V VolName'
Tape, initialize for use with a For simple, manually-mounted tape:
storage pool 'dsmlabel -drive=/dev/mt0'
where the drive must be one which was
specifically ADSM-defined.
It will iteratively prompt for volsers
so you can do a bunch of tapes at once.
For robotic tape library:
'dsmlabel -drive=/dev/mt0
-library=/dev/lmcp0'
Tape, number of times mounted 'Query Volume ______ Format=Detailed'
"Number of Times Mounted" value (q.v.).
Tape, remove from automated library 'CHECKOut LIBVolume LibName VolName
(as in 3494) [CHECKLabel=no] [FORCE=yes]
[REMove=no]'
Tape checkin date There is no way to determine when a tape
was checked into the library: ADSM
doesn't track it in volume stats, and
libraries like the 3494 don't record it
as part of database inventory info.
Tape contention handling technique TSM really likes to fill a storage pool
tape before starting on a new one, and
sometimes this can result in contention.
For example, consider an Archiving user
whose session was waiting on a tape that
is busy as input to a BAckup STGpool
operation that would be reading from
that tape for some time. To keep the
user from waiting further, you can do
Update Volume ... Access=Readonly, which
TSM immediately recognizes, allowing the
archive session to proceed with another
output volume. Then do Update Volume
... Access=Readwrite to put the
contended volume back into its original
state, and everyone is happy.
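The sequence above amounts to a pair of administrative commands, for example (the volume name is hypothetical):

```
UPDate Volume 000123 ACCess=READOnly
   (the waiting archive session switches to another output volume)
UPDate Volume 000123 ACCess=READWrite
```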
Tape density, achieving Generally speaking, rewriting a tape
from its beginning, as relabeling does,
is the only opportunity to change a
tape's density (which is to say that it
is not possible to change to a different
density in the midst of a computer tape,
as one might on a home VCR). This has
been true of computer tapes since the
early days of open reel tapes. Actually
achieving the desired density is a
function of the application causing the
drive to switch density, which in turn
must be supported and allowed by the
hardware. In TSM terms, this is a
function of the Devclass definitions.
Operating system commands are usually
available to verify tape drive attribute
selections to assure that the
application has triggered the values you
expect. If not, check the hardware to
assure that it can inherently achieve
that value, that there is no operator
setting preventing it, and that the
media allows it. (Tape cartridges have
sensing indentations by which drives can
determine what a given tape can do.) If
your OS attributes check finds settings
not as desired, the less usual cause
could be a TSM defect.
Tape device driver ADSM relies on the Atape driver for the
Magstar family of tapes and libraries
(on AIX), but it relies on the device
driver shipped with ADSM for all others
(DLT, 8mm, 4mm, QIC, optical drives,
STK drives).
Tape drive, define for use by ADSM Note that ADSM must use its own device
drivers for most tape drives (e.g., 8mm)
except for devices such as the 3494
which supply their own drivers.
Refer to ADSM Device Configuration.
Do: 'lsdev -C -s scsi -H' to list SCSI
devices and identify their adapters.
Do the following via SMIT:
Select DEVICES.
Select ADSM Devices.
Select Tape Drive.
Select Add a Tape Drive.
Select the ADSM-SCSI-MT.
Select the adapter to which the device
is attached.
All this will generate a command like:
'mkdev -c adsmtape -t 'ADSM-SCSI-MT' -s
'scsi' -p 'scsi0' -w '60'
The resulting tape drive is what is
needed by the 'dsmlabel' command.
Tape drive, make available (online) 'UPDate DRive LibName Drive_Name
to *SM ONLine=Yes'
Tape drive, make offline to host AIX: 'rmdev -l DeviceName'
Example: rmdev -l rmt2
This desensitizes the operating system
to maintenance being done on the
attached drive, for example. Experience
shows that it is usually unnecessary to
do this, however.
Tape drive, make online to host AIX: 'mkdev -l DeviceName'
Tape drive, make unavailable (offline) 'UPDate DRive LibName Drive_Name
to *SM ONLine=No'
3494: You can also go to the Operator
Station, and in the Service Mode panel
called Availability, render the drive
offline. This will be recognized by *SM,
as reflected in msg ANR8775I and
'SHow LIBrary' command output. Note that
this operation is immediate, and would
disrupt anything operating on the drive
(the request is not queued until the
drive is free).
Tape drive, when it went offline 'SHow LIBrary' report element
"offline time/date" reflects this.
Tape drive, 3590, release from host Unix: 'tapeutil -f /dev/rmt_ release'
Windows: 'ntutil -t tape_ release'
after having done a "reserve".
Tape drive, 3590, reserve from host Unix: 'tapeutil -f /dev/rmt_ reserve'
Windows: 'ntutil -t tape_ reserve'
When done, release the drive:
Unix: 'tapeutil -f /dev/rmt_ release'
Windows: 'ntutil -t tape_ release'
Tape drive availability and ADSM If no tape drives are currently
available (as reflected in SHow LIBrary)
ADSM will wait until one becomes
available, rather than dispose of client
and administrative jobs.
Tape drive cleaning The most insidious cause of tape
processing problems (outright I/O errors
and time-consuming read/write retries)
is dirty tape drives. Tape libraries are
not air-sealed (nor are tape
cartridges): any crud that floats around
in your environment will eventually end
up in the tape drives and cartridges.
And all the mounts and dismounts will
spread the contaminants to other tapes
and drives. All tape libraries provide
for some kind of cleaning, be it
automatic or manual, usually via a
cleaning cartridge: make sure that your
library has such, that cleaning is
activated, and is being done. Cleaning
tape is necessarily abrasive, because it
is a dry cleaning method. As such the
cleaning process wears down the tape
head a bit. If that concerns you, keep
it in perspective: the objective is
reliable reading and writing, not making
the (replaceable) heads last decades.
Beyond cleaning cartridges, your shop
should periodically use a HEPA vacuum
cleaner to clean out the interior of the
library, where dust and dirt will
accumulate and be agitated by the motion
of the robotics. Another issue is the
manual handling of cartridges, where
dirty hands and miscellaneous human
detritus will get on and into
cartridges. Tapes which go offsite have
further opportunities for contamination.
Consider placing a portable air cleaner
or two alongside your library,
particularly if it is in a dusty or
high-traffic area. Computer rooms are
not Clean Rooms.
See: <Device type> cleaning (such as
"3590 cleaning")
Tape drive parameters, query Use the 'tapeutil'/'ntutil' command
"Query/Set Parameters" selection. Or:
AIX: 'lsattr -EHl rmt1' or
'mt -f /dev/rmt1 status'
Tape drive parameters, set Use the 'tapeutil'/'ntutil' command
"Query/Set Parameters" selection. But be
aware that TSM sets things the way that
it wants, so best not to interfere.
Tape drive performance See: Tape drive throughput
Tape drive status, from host 'mtlib -l /dev/lmcp0 -f /dev/rmt1 -qD'
to query by device name (-f), or
'mtlib -l /dev/lmcp0 -x 0 -qD'
to query by relative tape drive in
library (-x 0, -x 1, etc.).
(but note that the relative drive
method is unreliable).
Tape drive throughput See the "THROUGHPUT MEASUREMENT" topic
near the bottom of this doc.
See also: Migration performance; MOVe
Data performance
Tape drive "unavailable" A condition sometimes due to serial
numbers used in DEFine DRive not being
consistent with the serial numbers
actually embedded in the drive firmware.
The drive which TSM queries will respond
with a serial number different from what
is defined to TSM, and TSM naturally
balks. (Note that drive serial numbers
may change if the drive is replaced, or
odd procedures are followed in drive
maintenance.) Use of SERial=AUTODetect
in DEFine DRive is the common approach
to avoiding this issue. The SHow
LIBRary command will report drive serial
numbers (devNo).
Tape drive Vital Product Data Unix: 'tapeutil -f /dev/rmt0 vpd'
Windows: 'ntutil -t tape vpd'
Microcode level shows up as
"Revision Level".
Tape drives, in 3494, list From AIX: 'mtlib -l /dev/lmcp0 -D'
Tape drives, list available ADSM 'lsdev -C -c tape -H'
tape drives
Tape drives, list supported ADSM 'lsdev -P -c adsmtape
tape drives -F "type subclass description" -H'
Tape drives, maximum that ADSM can Devclass controls it.
ask for
Tape drives, not all being used in a See: Drives, not all in library being
used
Tape drives, where they are specified They are defined via 'DEFine DRive',
in ADSM and are associated with an
already-defined library, as in:
'DEFine DRive 8MMLIB 8mmdrive
DEVIce=/dev/mt0'.
Do 'Query DRive' to list them.
Tape ejections, phantom See: Ejections, "phantom"
Tape history, query 'Query VOLHistory'
Tape I/O error message ANR8359E Media fault ... (q.v.)
Tape labels ADSM wants tapes to have VOL1, HDR1, and
HDR2 labels. The tapes you get
"pre-labeled" from a tape vendor may
have only VOL1, HDR1; so it's always
best to label the tapes yourself,
regardless. Ref: APAR IX77477
Tape leak A term I invented to describe the
product's propensity for using a fresh
tape when a Filling tape is busy,
resulting in Filling tapes which will
probably never be used again, resulting
in a perplexing dwindling of scratch
tapes. A full discussion of this is
found in the topic 'Shrinking
(dwindling) number of available scratch
tapes ("tape leak")' near the bottom of
this document.
Tape library, list volumes Use AIX command:
'mtlib -l /dev/lmcp0 -vqI'
for fully-labeled information, or just
'mtlib -l /dev/lmcp0 -qI'
for unlabeled data fields: volser,
category code, volume attribute, volume
class (type of tape drive; equates to
device class), volume type.
(or use options -vqI for verbosity, for
more descriptive output)
The tapes reported do not include CE
tape or cleaning tapes.
Tape lifetime See: MP1
TAPE MOUNT Activity value in the TSM SUMMARY table.
Query: SELECT DATE(START_TIME) ,
DRIVE_NAME, VOLUME_NAME FROM SUMMARY
WHERE ACTIVITY='TAPE MOUNT'
Tape operator The TSM server supports sending mount
messages to a special session via:
'dsmadmc -MOUNTmode'.
See: -MOUNTmode
Tape performance See: Tape drive throughput
Tape pool, steps in defining Define tape drive(s) via SMIT. (They
need to be specially defined for ADSM:
the /dev/rmt? drives already defined in
your system are *not* eligible for use
by ADSM.)
Tape pool, 8mm, steps in defining You should first have established an 8mm
tape drive to use, via SMIT. (See "Tape
drive, define for use by ADSM".)
Define library, as in:
DEFine LIBRary 8mmlib LIBType=manual
(Note that DEVice is not coded for
manual.)
Define device class, as in:
'DEFine DEVclass 8mmclass DEVType=8mm
LIBRary=8mmlib MOUNTLimit=1
ESTCAPacity=2300M'
Define the sequential storage pool:
DEFine STGpool 8mmpool 8mmclass
DESCription="___"
Define the tape drive(s) to use:
DEFine DRive 8MMLIB 8mmdrive
DEVIce=/dev/mt0
Define specific tape volumes for pool:
DEFine Volume 8mmpool VolName
[ACCess=READWrite|READOnly|
UNAVailable|OFfsite]
You also need to label the tapes, via
'dsmlabel' (q.v.).
Tape recovery procedure See: Volume, bad, handling
Tape reliability ("tape is tape") Tape is still being used because it is
relatively cheap, un-delicate, and
capacious. But it is not the ultimate in
reliability. Unlike hermetically sealed
disk technology, the tape medium is
exposed to the environment, is pulled
and stressed, and abrades as it rubs
past transport guides and tape heads.
(By its nature, flexible magnetic media
has to be in contact with the read-write
head.) Moreover, in manufacturing, the
quality of the medium cannot be as
readily assured by inspection as can
disk platters. All this means that when
using tape you cannot unilaterally
depend upon it, and it behooves you to
have a secondary copy of important data.
See http://www.sresearch.com/library.htm
Tape security We sometimes have site managers asking
how secure *SM data tapes are, wondering
if someone may be able to harvest data
from *SM scratch tapes, and whether *SM
expiration erases the old data.
By data processing definition, tapes -
like disks or any other media (including
paper) - are supposed to be physically
secure, as in kept in a room that
non-authorized people cannot enter, and
that the people in the room are
trustworthy. That is the fundamental
protection for tapes written by any
application.
Expiration is a logical process, not
physical: nothing goes near the tape in
the process. Only the "catalog entry"
for the expired data is obliterated,
while the tape remains intact. Being an
append-only medium, there is no
potential for partial erasure of tape
contents. You can wholly write over the
tape with binary zeroes when it is empty
if you like, to obliterate prior
contents; but next use effects
obliteration anyway. Note that *SM tape
data format is unpublished: even we as
*SM administrators don't know how to
physically access it.
Tape storage pool, define See: 'DEFine STGpool'
Tape technology Newsgroups comp.arch.storage and
comp.data.administration tend to have
such discussions.
Tape volume, assign to a storage pool 'DEFine Volume Poolname VolName'
The alternative to dedicating tape
volumes to a storage pool is to define
the STGpool with "MAXSCRatch=NNN", to
use scratch volumes instead.
Tape volume, eject from library Via Unix command you can effect this by
to Convenience I/O Station changing the category code to EJECT
(X'FF10'):
'mtlib -l /dev/lmcp0 -vC -V VolName
-t FF10'
Tape volume, set Category code in Via Unix command:
library 'mtlib -l /dev/lmcp0 -vC -V VolName
-t Hexadecimal_New_Category'
Tape volumes, consolidate Use the ADSM server 'MOVe Data' command
to move data from one volume in a
storage pool to other volumes in it, as
in the case of ADSM happening to write a
few files on a new tape when the other
tape(s) in the storage pool are mostly
empty. This operation eliminates the
wasteful use of the second volume, as
in: 'MOVe Data 000994'.
TAPE_ERR4 AIX Error Log label for an entry
involving an adapter or tape drive
error. Has been seen accompanied by
SCSI_ERR2 where the issue was a cable or
bad terminator.
TapeAlert A patented technology and standard of
the American National Standards
Institute (ANSI) that defines conditions
and problems that are experienced by
tape drives. The technology enables a
server to read TapeAlert flags from a
tape drive through the SCSI interface.
The server reads the flags from Log
Sense Page 0x2E.
You will find TapeAlert summarized in
the IBM 358x Setup and Operator Guide
manuals, with flag values.
In TSM terms, TapeAlert is a software
application supported by TSM 5.2+ that
provides detailed device diagnostic
information using a standard interface
that makes it easy to detect problems
which could have an impact on backup
quality. It is a standard mechanism for
tape and library devices to report
hardware errors. From the use of
worn-out tapes, to defects in the device
hardware, TapeAlert enables TSM to
provide messages that provide
easy-to-understand warnings of errors as
they arise, and suggests a course of
action to remedy the problem. To take
advantage of TapeAlert, you need
TapeAlert-compatible tape drives or
libraries.
See also: Set TAPEAlertmsg
TAPEIOBUFS TSM 3.7 server option for MVS (only).
Specifies how many tape I/O buffers the
server can use to write to or read from
tape media. The default is 1. Syntax:
TAPEIOBUFS number_of_buffers
The number_of_buffers specifies the
number of I/O buffers that the server
can use to write to or read from a tape
media. You can specify an integer from 1
to 9, where 1 means that no overlapped
BSAM I/O is used. For a value greater
than 1, the server can use up to that
number of buffers to overlap the I/O
with BSAM.
Note: The server determines the value
based on settings for the TXNBYTELIMIT
client option and the MOVEBATCHSIZE,
MOVESIZETHRESH, TXNGROUPMAX, AND
USELARGebuffers server options. The
server uses the maximum number of
buffers it can fill before reaching the
end of the data transfer buffer or the
end of the transaction. A larger number
of I/O buffers may increase I/O
throughput but require more memory. The
memory required is determined by the
following formula:
number_of_buffers x 32K x mount limit
Performance: Boosting the number can
obviously improve throughput.
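Applying the formula above to hypothetical values (4 buffers, a device-class mount limit of 8):

```shell
# memory = number_of_buffers x 32K x mount limit
buffers=4
mountlimit=8
kb=$((buffers * 32 * mountlimit))   # KB of server memory required
echo "${kb} KB"                     # prints: 1024 KB (i.e., 1 MB)
```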
tapelog Command to view the AIX
/var/adm/ras/Atape.rmt?.dump? file.
Syntax: 'tapelog {-l DeviceName
| -f FileName}'.
Ref: IBM SCSI Tape manual, Chapter 9.
Src: /usr/lpp/Atape/samples/tapelog.c
TAPEPrompt (-TAPEPrompt=) Client User Options file (dsm.opt) or
command line option to specify whether
to wait for a tape to be mounted if
required for an interactive backup,
archive, restore, or retrieve process;
or to prompt the user for a choice. Is
not in effect for a schedule type
operation.
Specify: No or Yes
Specifying No makes operations more
transparent, but does not account for
the mount delay.
HSM: "No" must be chosen for HSM,
because of its implicit action, and
because an NFS client of an exported HSM
file system obviously will not get the
prompt.
See client message ANS4116I as with HSM
actions; ANS4117I; and ANS4118I as with
incremental backup.
Default: Yes, prompt the user when a
tape mount is required.
Note that the DEVclass MOUNTWait value
does not pertain to a wait for a tape
drive to be free.
Note: Specifying Yes does not cause the
needed volume to be identified to the
client; it merely gives you the
opportunity to decline mounting.
Tapes, label all in 3494 library The modern way is to use the LABEl
having category code of Insert LIBVolume command, to both label and
check in the tapes. The older way:
'dsmlabel -drive=/dev/XXXX
-library=/dev/lmcp0 -search -keep
[-overwrite]'
Tapes, number to restore a node SHow VOLUMEUSAGE Node_Name
Tapes, number used by a node SELECT NODE_NAME AS '_NodeName_', -
COUNT(DISTINCT VOLUME_NAME) AS -
"Number of tapes used" FROM -
VOLUMEUSAGE GROUP BY NODE_NAME
Tapes, prevent usage See: Storage pool, tape, prevent usage
Tapes in library, list Use AIX command:
(including Category codes) 'mtlib -l /dev/lmcp0 -vqI'
for fully-labeled information, or just
'mtlib -l /dev/lmcp0 -qI'
for unlabeled data fields: VolSer,
category code, volume attribute, volume
class (type of tape drive; equates to
device class), volume type.
(or use options -vqI for verbosity, for
more descriptive output)
The tapes reported do not include CE
tape or cleaning tapes.
Tapes in use for a session 'Query SEssion [SessionNumber]
Format=Detailed'
Tapes needed in a restoral See: Restoral preview
Tapes supported ADSM supports a specified repertoire
of tape drives, which must be accessed
through its own device drivers.
Exception: For IBM 1/2" tape drives,
ADSM uses the device drivers supplied
with the hardware.
Tapes used by a node See: Volume usage, by node
tapeutil 3490/3590 tape utility for Unix,
provided as part of the Magstar Device
Drivers, available at
ftp.storsys.ibm.com, under devdrvr.
For an interactive session, simply
invoke by name and follow the menu.
For a batch session, invoke with
operands as from 'tapeutil -\?'.
There is no man page, but there is
complete documentation in the manual
"IBM TotalStorage Tape Device Drivers:
Installation and User's Guide",
available from the same ftp location.
"Device Info" returns iocinfo info,
including devtype, devsubtype,
tapetype, and block size.
"Erase" will erase the full length of
the tape. Experience shows that this
operation will experience no write
problems on a bad tape though prior and
subsequent TSM writing will result in
I/O errors; so just because Erase is
happy doesn't mean the tape is fine.
"Inquiry" returns a block of info akin
to that from the AIX 'lscfg' command.
"Read and Write Tests" by default will
write 20 blocks of 204800 bytes, write
2 file marks, backspace 2 file marks,
backspace 20 records, read the written
data, and forward spacing file mark.
Src: /usr/lpp/Atape/samples/tapeutil.c
"tapeutil", for NT See: ntutil
TB Terabytes, usually being 1024 ** 4.
TCA See: Trusted Communication Agent
TCP_ADDRESS (TSM 4.2+) SQL NODES table entry for
the TCP/IP address of the client node as
of the last time that the client node
contacted the server. The field is blank
if the client software does not support
reporting this information to the
server. Corresponds to the Query Node
field "TCP/IP Address".
Derives from the GUID value
See also: GUID
TCP_NAME (TSM 4.2+) SQL NODES table entry for
the host name of the client node as of
the last time that the client node
contacted the server. The field is blank
if the client software does not support
reporting this information to the
server. Corresponds to the Query Node
field "TCP/IP Name".
TCP/IP Transmission Control Protocol/Internet
Protocol. Consists of two main
protocols: TCP, for session-oriented
(stream) connections, as used by ADSM
and TSM; and UDP, for "connectionless"
operations, as in send a packet and hope
they got it.
TCP/IP access to server, disable The 'COMMmethod NONE' server option will
prevent all communication with the
server.
TCP/IP address of server See: TCPServeraddress
TCP/IP and OS/390 (MVS) In the OS/390 environment, TCP/IP is a
separate task, not integral to the
operating system as in Unix. Thus, it is
essential that TCP/IP be up before the
*SM server is started, and should not
be brought down before the *SM server.
TCP/IP port number of client The client needs a TCP port number when
it needs to be contacted by the server,
during SCHEDMODE PROMPTED.
Default = 1501. Change via the
TCPCLIENTPort Client System Options file
(dsm.sys) option.
See: TCPPort
TCP/IP port number of client, get 'dsmc Query Options' in ADSM or 'dsmc
show options' in TSM; see
"TcpClientPortNumTcpPort" value.
TCP/IP port number of client, set TCPCLIENTPort Client System Options file
(dsm.sys) option. See: TCPCLIENTPort
TCP/IP port number of server The TCPPort value. Default = 1500.
TCP/IP port number of server, get 'Query OPTions', "TcpPort" value.
TCP/IP port number of server, set "TCPPort" definition in the server
options file.
TCP/IP window size of server, get 'Query OPTions', "TCPWindowsize" value.
TCP/IP window size of server, set "TCPWindowsize" definition in the server
options file.
TCPADMINPort, -TCPADMINPort TSM 5.1+ client command line or options
file option to specify a separate TCP/IP
port number on which the TSM server is
waiting for requests for administrative
client sessions, allowing secure
administrative sessions within a private
network, as used for firewalls.
Placement: Unix: dsm.sys, within a
server stanza. Windows: dsm.opt.
Syntax: TCPADMINPort nnnn
Default: The value of the TCPPort
option.
Note that the port may not be used for
ordinary client sessions: it is for
administrative sessions only.
TCPADMINPort TSM 5.1+ server option, corresponding to
the same-named client option, to specify
the port number on which the server
TCP/IP communication driver is to wait
for requests for sessions other than
client sessions. This includes
administrative sessions,
server-to-server sessions, SNMP subagent
sessions, storage agent sessions,
library client sessions, managed server
sessions, and event server sessions.
Perspective: Using different port
numbers for the options TCPPORT and
TCPADMINPORT enables you to create one
set of firewall rules for client
sessions and another set for the other
session types listed above. By using the
SESSIONINITIATION parameter of REGISTER
and UPDATE NODE, you can close the port
specified by TCPPORT at the firewall,
and specify nodes whose scheduled
sessions will be started from the
server. If the two port numbers are
different, separate threads will be used
to service client sessions and the
session types. If you allow the two
options to use the same port number (by
default or by explicitly setting them to
the same port number), a single server
thread will be used to service all
session requests. Client sessions which
attempt to use the port specified by
TCPADMINPORT will be terminated (if
TCPPORT and TCPADMINPORT specify
different ports). Administrative
sessions are allowed on either port, but
by default will use the port specified
by TCPADMINPORT.
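As an illustrative sketch (the option names are real; the port numbers
are arbitrary examples), a dsmserv.opt fragment separating client
sessions from administrative and server-to-server sessions might be:

```
COMMmethod    TCPIP
TCPPort       1500
TCPADMINPort  1510
```

With such a split, one firewall rule set can govern port 1500 (client
sessions) and a different, more restrictive set can govern port 1510.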
TCPBuffsize Client System Options file (dsm.sys)
option to specify the size for the ADSM
internal communications buffer, in
kilobytes. Code from 1 to 32 (KB).
Placement: Within a server stanza.
Default: 8 (KB)
General recommendation: 32
TCPBufsize Server Options file (dsmserv.opt):
Specifies the size, in kilobytes, of the
buffer used for TCP/IP send requests.
During a Restore, client data moves from
the ADSM session component to a
TCP communication driver. Syntax:
"TCPBufsize <N_KiloBytes>"
in the range 0-32 (default: 4)
Performance (particularly restorals):
This option affects whether or not the
ADSM server sends the data to the client
directly from the session buffer or
copies the data to the TCP buffer. A 32K
buffer size forces ADSM to copy data to
its communication buffer and flush the
buffer when it fills, which entails
overhead.
TCPBufsize server option, query 'Query OPTion', "TCPBufsize" value.
TCPCLIENTAddress (-TCPCLIENTAddress=) Client System Options file (dsm.sys) or
command line option for when your client
node has more than one network address
(multi-homed) and you want the *SM
server to communicate with the client
using this network address, rather than
whatever address it may have previously
stored in client communication. Note
that the address specified is the
Service IP address: the IP address used
for primary traffic to and from the
node.
The specified address can be a name or
dotted number.
Use only with SCHEDMODE PRompted.
Default: the address from which the
client last contacted the server.
See also: HLAddress; NODename
TCPCLIENTPort Client System Options file (dsm.sys)
option to specify the TCP port number
that the server should use to
communicate with the client, when
Schedule is active.
Use only with SCHEDMODE PRompted.
Default: 1501 (being TCPPort+1)
See also: LLAddress
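A hedged dsm.sys server-stanza sketch combining the prompted-schedule
options described above (the server name, address, and port values are
invented examples; only the option names themselves are real):

```
SErvername          TSMPROD
   COMMmethod       TCPip
   TCPServeraddress 192.0.2.10
   TCPPort          1500
   SCHEDMODE        PROMPTED
   TCPCLIENTPort    1501
```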
TCPNodelay AIX (only) Client System Options file
(dsm.sys) option to specify whether
small transactions should be sent
immediately or be buffered before
sending. Ordinarily, TSM buffers small
transactions until the TXNBytelimit is
reached, and then the whole buffer is
sent. Sending immediately improves
continuity and throughput, but at the
expense of more packets being sent and,
ostensibly, smaller Aggregates.
(Remember that more packets make for
more interrupts on the TSM server, which
hurts its performance.)
Default: No, buffer before sending
General recommendation: No
See also: TXNBytelimit; TXNGroupmax
TCPNodelay Server Options file (dsmserv.opt):
Specifies whether the server allows data
packets that are less than the TCP/IP
maximum transmission unit (MTU) size to
be sent out immediately over the
network, to a client (in client-server
sessions) or another server (the target
server, in server-to-server virtual
volume operations); or whether small
stuff should be buffered before sending.
Default: Yes (send immediately)
TCPNodelay, query in client 'dsmc Query Options' in ADSM or 'dsmc
show options' in TSM; see "TcpNoDelay".
TCPNodelay, query in server 'Query OPTions', see "TCPNoDelay".
TCPPort Client System Options file (dsm.sys) or
command line option to specify the port
address of a server when using TCP/IP.
(Unfortunately, the name of this option
is ambiguous and leads to confusion: it
really should have been called
TCPSERVERPort, to be as specific as the
existing TCPCLIENTPort option.)
Code within a server stanza.
Default: 1500.
Note that TCPPort+1 (1501) is used by
the *SM Client Scheduler (dsmc
schedule) when using SCHEDMODE PRompted
to listen for the "prompt" from the
Server to initiate a scheduled
operation. When you start up a client
with SCHEDMODE PRompted, it contacts the
server on TCPPORT (1500) and registers
its IP address. It then disconnects and
(only at the appointed schedule time!)
listens on port TCPPORT+1 for the server
to contact it.
TCPPort Server option. Defines the TCP/IP port
upon which the server listens for client
requests. Default: 1500.
Note that the *SM server can only have
one such port defined for clients. A
way around this is to use a front-end
which serves a different port and relays
to the *SM real port. The Unix netcat
facility is one such method.
Tip: Temporarily coding a hoked value
during a maintenance time when you need
to bring the server up for maintenance
tasks will surely keep those pesky
clients out, as they use their client
option file TCPPort value.
See also: TECPort
TCPPort server option, query 'Query OPTion'
tcpQueryAddress A name which may pop up in TSM server
problems, being a function in tcpcomm.c
to handle reverse DNS lookups, via the
gethostbyaddr system call.
The "tcpinfo" traceclass can be used in
a server trace to inspect TCP/IP DNS
performance issues.
TCPServeraddress Client System Options file (dsm.sys) or
command line option to specify the
TCP/IP address for a *SM server, as
either a name or dotted IP address
number.
Placement: Within a server stanza.
Usage: Where you have a single NIC in
the client, or don't care how outgoing
TSM client traffic is routed, specify
the server location as a network
hostname. In a multi-homed ethernet
portal environment, where the client has
multiple NICs or one NIC with multiple
portals each on a different subnet,
specify the TSM server network location
by IP address via this option to have
outgoing TSM client traffic go through a
specific subnet rather than the default
route. (You should confer with your
network people to achieve optimal
throughput. Plan and configure for it:
It is very bad form to capriciously
decide to send large amounts of data
over a subnet which may be intended for
other purposes. Keep in mind the
difference between LAN and SAN.)
Note: The servername which may be
coded here has nothing to do with the
server name established within the
server via Set SERVername, as the former
is a network address and the latter is
just a name that the server tells the
client during session initialization.
Note: There is no speed advantage to
coding 127.0.0.1 (localhost) when both
the client and server are on the same
system: communication has to go through
the local protocol stack in both cases.
Advisories: Code an IP address rather
than a hostname. This will avoid two
problems: (1) access problems when
Domain Name Service is flakey, and (2)
lack of certainty where the server
hostname is defined in DNS with multiple
IP addresses.
See also: -SERVER; Set SERVERHladdress;
Set SERVERLladdress; -TCPPort
TCPWindowsize client option Client System Options file (dsm.sys)
option to specify the size, in KB, to be
used for the TCP/IP sliding window for
the client node: the size of the buffer
used when sending or receiving data.
Code a value from 1 to 2048 (KB), but
remember that your operating system
TCP/IP buffer size must be at least as
large:
- In AIX, do not exceed the sb_max
system value, as seen with the
'no -o sb_max' command.
(Note that sb_max is expressed in
bytes and TCPWindowsize is expressed
in KB. So if "sb_max" shows as 65535,
then TCPWindowsize must be 64 or
less.)
- In HP-UX, the limit is the kernel
parameter STRMSGSZ, which is expressed
in KB.
- Solaris: max TCPWindowsize is 1024.
- Windows NT4: max supported is
64KB-1byte, so specify "63".
General recommendation: 64 (for all TSM
servers except Windows, which is 63)
The client checks to assure that the
value specified is not too high: if it
is, an error message saying so results.
You should respond by either reducing
the TSM value or increasing the opsys
value.
TCPWindowsize 0 *may* work in some
systems, meaning to use the operating
system settings.
Default: 16 (KB)
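The bytes-versus-KB arithmetic above can be sketched in shell; here the
sb_max value is hard-coded to an example 1 MB (on a live AIX host you
would instead capture the output of 'no -o sb_max'):

```shell
# Derive the largest TCPWindowsize (KB) permitted by a given sb_max (bytes).
sb_max=1048576                    # example value: 1 MB
max_window_kb=$((sb_max / 1024))  # TCPWindowsize is coded in KB
echo "Max TCPWindowsize: ${max_window_kb} KB"
# prints: Max TCPWindowsize: 1024 KB
```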
TCPWindowsize server option Specifies the amount of data to send or
receive before TCP/IP exchanges
acknowledgements with the client node in
client-server sessions. Also pertains
to the target server in server-to-server
(virtual volume) operations.
The actual window size used in a
session will be the minimum of the
server and client window sizes.
Larger window sizes may improve
performance at the expense of memory
usage.
A value of 0 causes the operating system
default to be used, avoiding conflicts.
TCPWindowsize server option, query 'Query OPTion', see "TCPWindowsize".
TCPWindowsize server option, set Definition in the server options file
(dsmserv.opt), to specify the size of
the TCP sliding window: the amount of
data to send or receive before TCP/IP
exchanges acknowledgements with the
client node. The actual window size
that is used in a session will be the
minimum size of the server and client
window sizes. Larger window sizes may
improve performance at the expense of
memory usage.
Allowed range: 0 - 2048.
0 indicates that the default window
size set for AIX should be used. Values
from 1 to 2048 indicate that the window
size is in the range of 1 KB to 2 MB.
Default: 0 which indicates
that ADSM should accept the AIX default
window size.
TDP Tivoli Data Protection: the equivalent
of the former ADSM Agents, for backing
up databases. The TDPs operate as a
middle man, between the application
database API which give it access to the
client data, and the *SM server. A TDP
install will also install the TSM API,
which it needs to communicate with the
*SM server.
Cost: The various "TDP for____" packages
are not free, unlike the basic clients:
they are separately priced and licensed
products, which must be ordered through
your normal channels, to obtain a
"Paid in Full" license for use with the
TSM server.
In operation, a TDP does not perform
locking on the database that it is
accessing, because it is just a guest
visiting the database via the API of the
application controlling the database.
Can Backup Sets be used with the TDPs?
The short answer is No: Though most TDPs
produce Backup type objects in the TSM
server storage pool, and thus the
creation of a Backup Set is possible,
the TSM API does not support Backup
Sets, which precludes the use of the
created Backup Set on the client, as
Backup Sets are intended to be used.
(Ref: IBM Technote 1109074)
See: Data Protection Agents
TDP and retries The number of retries during backup has
historically been hardcoded into the
software. This may change as the
software evolves.
TDP backups overview When initiating a TDP backup, the
tdpo.opt file is referenced. That
further involves dsm.opt, and dsm.sys in
multi-user systems. The tdpo.opt and the
dsm.opt should be in directory
/opt/tivoli/tsm/client/oracle/bin64/
(omit the "64" if running 32-bit mode).
For 64-bit operation, the dsm.sys has to
be in the
/opt/tivoli/tsm/client/api/bin64/
directory.
A recommendation is to create a separate
domain for your database backups, and
set the management class retentions to
1-0-0-0. You can also set it as the
default management class.
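A minimal tdpo.opt sketch reflecting the placement advice above (the
node name, file space name, and log path are invented examples; the
option keywords are real):

```
DSMI_ORC_CONFIG /opt/tivoli/tsm/client/oracle/bin64/dsm.opt
DSMI_LOG        /home/oracle/tdpo_logs
TDPO_NODE       ora_prod
TDPO_FS         ora_prod_fs
```

With multiple Oracle instances on one client, a separate tdpo.opt per
instance (each with its own TDPO_FS) keeps the filespaces distinct.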
TDP compression Reportedly only eliminates whitespace.
TDP for Domino (TDP Domino) Backup product for Lotus Domino mail
servers, replaced 2002/04 by Tivoli
Storage Manager for Mail (q.v.).
Data sent to the TSM server is stored in
Backup type storage pools.
Works at the database level, and thus
provides fast backup and restore of the
entire database as compared with the
document-oriented TDP for Notes. But in
order to restore a single document, you
need to restore the database to an
alternate name and copy out the document
you want. Also, any particular database
backup only consists of two TSM server
objects instead of possibly thousands
with TDP for Notes.
Summary info is stored in the TSM server
Activity Log in the form of ANE4991I
messages.
Each time a "DOMDSMC INCREMENTAL" backup
is run it should be picking up "new"
databases (as well as logged databases
that the DBIID has changed or non-logged
databases that the internal time stamp
has changed on).
Expiration: An active log backup will
never expire: you need to inactivate the
log backup, and that can only happen if
there is no active database backup that
requires that log and then the
INACTIVATELOGS command is run.
It is possible for a private folder to
be stored on the desktop rather than in
the database, as with folder types:
"Shared, desktop private on first use"
"Shared, private on first use"
Multi-session backups? Not yet, in this
product: start multiple instances of
dsmdomc per the manual.
Tracing: Turn on by adding this to the
invocation:
/TRACEFLAG=ALL /TRACEFILE=filename.txt
You cannot run Domino Server third party
products reliably through Windows
Terminal Services (or Remote Desk Top
Connection): Domino itself does not
support it. This is documented in the
IBM support knowledge base article
1083052, which can be sought at
www.ibm.com, and Lotus TechDoc 186006.
TDP for Exchange Tivoli Data Protection for Exchange.
Backup product for Microsoft Exchange
mail servers, replaced 2002/04 by Tivoli
Storage Manager for Mail (q.v.). Backs
up Exchange Server database files (.EDB,
.STM) and log files (.LOG) according to
Microsoft specifications.
Data sent to the TSM server is stored in
Backup type storage pools.
Will create only one session for each
instance that you run. If there is an
error during a backup, it will retry up
to a maximum of 4 attempts.
The level of granularity for Exchange
backups is at the Storage Group level,
meaning that separate Storage Groups can
be backed up simultaneously. Example:
start tdpexcc backup SG1 full
start tdpexcc backup SG2 full
start tdpexcc backup SG3 full
start tdpexcc backup SG4 full
Naturally, all other elements of the
backup must be sufficient for such
parallelism to be meaningful.
With some TDPs you may need to
separately install the TSM API; but for
this one the API code is included: you
do not need to install any TSM BA client
components unless you decide to use
the TSM BA client scheduler.
DSM.OPT: You don't need to put anything
in the DSM.OPT file under Exchange
Server 5.5: by default, DP will back up
the Information Store and Directory.
Scheduling: Must be a Command type
schedule, which launches the TDP on the
client machine. See the sample batch
files shipped with the TDP. See manual.
Version 2.2 released 2001/03, supporting
Exchange 2000. As a new version number,
must be purchased: cannot be downloaded.
During a backup, each page of the
database is examined for the correct
checksum to verify that the data on
the page is valid. TDP for Exchange
(actually the Exchange backup/restore
API itself) won't allow you to back up
a corrupted database.
When doing a full backup, this TDP will
"inactivate" any previous incrementals
that exist.
TDP for Exchange performs incremental
and differential backups by backing up
the full transaction log files to TSM.
They are all placed into a single TSM
backup object. During restore, the
individual log files will be extracted
from the single TSM object and be
written back to disk.
A brief history of versions:
1.1.0 1998/04
1.1.1 1999/11
2.2.0 2001/03
5.1.5 2002/10
5.2.1 2003/09
The version jumped from 2 to 5 to align
with the base TSM products.
TDP for Exchange, API level, query 'tdpexcc query tsm'
TDP for Exchange, port numbers Normal client port is 1501.
TDP for Informix For Informix database backup. Is an API
which implements the Open Group Backup
Services application program interface
(Open Group XBSA) functions. This TDP
does not provide a CLI or GUI because
such an interface is provided by
Informix. Backups and restore are driven
through Informix with a utility that
Informix provides called ON-Bar.
You can use the BA client to query the
backup data. Note that a dsmc Query
Filespace will show 0 MB because that
field is not used with the TDP. A dsmc
Query Backup will also work if you can
interpret the object naming scheme.
(Use of the B/A client query commands
typically works but is not "supported".)
Object expiration: For database backups,
general TSM policy is used. Log backups
are uniquely named... you can use an
Informix tool called onsmsync to control
their expiration.
Ref: IBM KB article "Managing Informix
logs that are saved on the TSM Server"
TDP for Lotus Domino vs. Notes TDP for Lotus Notes and TDP for Lotus
Domino are not compatible with each
other. With Lotus Domino Server R5,
Lotus provided an API solely for the
purposes of backup and restore, which is
performed at the database level. Domino
R4 did not have this...and so the
technique for backing up and restoring
on Domino R4 was very different. It was
done at the item level.
TDP for Lotus Notes >Product discontinued 2001/09/30.<
Backs up at the document level. Good
aspect: you have restore granularity
down to the document level. Bad aspect:
because each document takes one TSM
server object and because backing up or
restoring of an entire database with
many documents could take a while and
cause large TSM Server database
extents.
You can physically accomplish the task
of backing up the Lotus Notes database
using the ordinary TSM Backup/Archive
client while the Notes server is running
- but it may not be restorable, because
the database was in transition during
the backup. Hence the need for TDP.
TDP for Microsoft SQL Server All objects are stored on the TSM server
(TDP for SQL; SQL TDP; MSSQL) as Backup objects (not Archive, so an
Archive Copy Group is not required).
Stripes: A separate TSM Server session
is created for each stripe, which then
waits for the SQL Server to send data to
each stripe. The SQL Server determines
which data goes to which stripe, and
writes the data to it.
Environment variables: not used
License file: agent.lic
Options file (default): dsm.opt
located in the TDP installation
directory, or as specified by the
/TSMOPTFile=____ command line
parameter.
Watch out for blanks in the path name
when it is coded in the Object spec of a
client schedule: enter the path name
such that it ends up in double quotes in
the schedule (by enclosing the
double-quoted string in single quotes).
Return codes: Look in the tdpsql.log
and/or dsierror.log to find out the
cause. Also see return codes in the API
manual.
Notes: The TDP, as an API, does not
support things like the MS "RESTORE
... VERIFYONLY" operation.
5.2.1 will install and run with the PAID
license from the 2.2.1 product.
Retention periods: You cannot extend the
retention period of a single backup but
leave all of the others as they were:
the same management class settings apply
to all versions of a particular file.
You can change the retention period of
ALL of the current backups by changing
the management class settings or by
binding the backups to a new management
class by using the INCLUDE statement and
running a new backup.
Inactivating old backups: Deleted
databases do not "automatically" get
inactivated: it is up to you to manually
inactivate them, which you can do...
Via the CLI, use the TDPSQLC INACTIVATE
command, which is very similar to the
RESTORE command (TDPSQLC HELP
INACTIVATE). Via the GUI, go to "View",
"Inactivate Tab", and you will see a new
tab show up which allows you to choose
the database backups that you would like
to inactivate.
Expiration of data: V1 of the MSSQL
backups product performed its own
expired data deletion; but thereafter the
product conforms to standard TSM server
policies. (The BACKDEL parameter is for
deletion of temporary TSM Server objects
used in unique situations such as a
change in management class.)
Restoral times: Before 2005: The larger
the database being restored, the more
time is required, as the DB file
"container" is recreated on disk, with
pre-formatting, before its contents can
be restored. For example, a customer
reports a 22 GB db restoral taking
hours. The TDP waits around for the
MSSQL work to complete before the TDP
can proceed. (Boosting the COMMTimeout
value is advised.) This should improve
in SQL Server 2005, where SQL Server
must still allocate the file space
before doing a database restore, but the
time-consuming initialization of
database pages is no longer required.
For a discussion of data striping, see
IBM site Technote 1145253.
And, no, you cannot back up a MySQL
database with TDP for MS SQL.
See also: DIFFESTIMATE
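Illustrative command-line sketches for the operations described above
(the database name and stripe count are invented, and parameter
spellings can vary by release, so confirm with 'tdpsqlc help'):

```
tdpsqlc backup mydb full /stripes=4
tdpsqlc query tsm mydb
tdpsqlc inactivate mydb full
```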
TDP for NDMP The TSM server uses NDMP to connect to
the NAS file server to initiate,
control, and monitor a file system
backup or restore operation.
This TDP used to be an add-on,
separately priced and licensed product
for performing NDMP backup and restore
for Network Attached Storage file
servers. As of TSM 5, it is incorporated
into TSM Extended Edition.
In 'REGister Node', Type=NAS is used.
Ref: Admin Guide
See also: nasnodename; NDMP; NetApp
TDP for Oracle Operates between RMAN and the TSM server
to effect Oracle backups. All objects
are stored on the TSM server as Backup
objects (not Archive, so an Archive Copy
Group is not required) which show up as
Filespaces on the TSM server.
Error logs: dsierror.log, created by
the TSM API; tdpoerror.log, created by
the TDP proper. (tdpoerror.log is
created in the local directory; may be
$ORACLE_HOME/dbs/tdpoerror.log.)
PASSWORDAccess settings:
- In Unix, must be set to Prompt...
Oracle specifies that a 3rd party
vendor (in this case, TDP for Oracle)
cannot spawn a child process (which
in the TSM case would be the
TCA). The TDP is not an executable,
so it is not able to have a child
process. Thus for Unix, there is no
child process capability for the
dsmtca module to retrieve the
password. Therefore, the TDP Oracle
for the Unix Operating Systems must
use PASSWORDAccess Prompt.
IBM recommends that you set TDPO_NODE
in the tdpo.opt file, to be a node
name different from the computer
name.
- In Windows, must be set to Generate.
Do not set TDPO_NODE in the tdpo.opt
file.
Storage pool space estimation: TDP
Oracle uses the value passed by Oracle -
which is probably overestimated; and if
compression is turned on then this value
is grossly overestimated. You can
specify the space via TDPO_AVG_SIZE.
Backuppiece: The Rman specifications
state that only one copy of a
backuppiece will exist at one time on
Media Manager (DP for Oracle). So
Oracle/Rman first tries to delete the
backuppiece that it is about to create
on the TSM Server. Unfortunately,
Oracle/Rman also specifies that the
delete routine act as single and
seperate operation, so when Oracle tries
to delete a backuppiece that does not
exist, that is an error and DP for
Oracle returns that error. There is no
way for DP for Oracle to determine if
the deletion is a true delete of a
backuppiece or if Oracle is checking for
backuppiece existence prior to backup.
Consider: Changing the filespace name to
something other than adsmorc... In the
event that you have multiple oracle
instances on the same client, it is much
more manageable when they each have a
unique name. For example, if the
database is discontinued, you can simply
delete the filespace for that database.
(The filespace name can be set in the
tdpo.opt file). You will need to create
a unique tdpo.opt file for each
database.
See also: RMAN
TDP for Oracle and multi-stream backup Oracle can employ what it calls Channels
to effect parallel backups. The effect
within the TSM server depends upon your
TSM storage pool collocation setting.
With COLlocate=No, multi-streaming will
occur and parallel backup will occur to
your multiple tape drives.
With COLlocate=Yes, multi-streaming will
not occur: all the sessions wait for the
same tape volume.
Collocation is typically desirable for
restoral performance - but its value is
minimized as very large backup files
tend to occupy few tapes anyway. And in
a commercial database restoral, you
would often want all the db components
restored together, and all backups from
that point in time would be clustered
together on tapes anyway, where any
space taken by unrelated backups would
be on either side of the Oracle backup
data and would not much matter.
If you do want collocation for Oracle
backups, you can take the approach of
defining a separate tape storage pool
with COLlocate=No for the clients that
run multiple stream backups; or you can
employ a primary disk storage pool ahead
of tape, where a DISK type storage pool
does not collocate.
TDP for R/3 (SAP) For automatic password handling (client
option file PASSWORDAccess Generate),
the encrypted password will be stored in
the R/3 configuration file
(init<SID>.bki), and the password can be
set via the following - only if you have
not already set that encrypted client
password via the standard TSM client:
Unix: backint -p
/oracle/SID/dbs/init<SID>.utl
-f password
Windows: backint -p
<drive>:\orant\database\init<SID>.utl
-f password
Example: backint -p initSYS.utl
-f password
See also: Backint
TDP for SQL See: TDP for Microsoft SQL Server
TDP maintenance and licenses "PTF" or "fixtest" versions of the Data
Protection clients do not include a
license file, meaning that they won't
run at all without the ".lic" file: you
need to have a "Paid in Full" license or
a "Try and Buy" license. You can obtain
a Try and Buy license through your IBM
representative.
TDP schedules Are typically ACTion=Command type, to
invoke a OS environment command/batch
file written by the customer which
launches the TDP as desired.
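A hedged server-side sketch (the domain name, schedule name, timing
values, and batch file path are all invented examples), illustrating
both the ACTion=Command type and the quoting of a path containing
blanks, as noted under "TDP for Microsoft SQL Server":

```
DEFine SCHedule SQLDOM SQLFULL ACTion=Command OBJects='"C:\Program Files\Tivoli\TSM\TDPSql\sqlfull.cmd"' STARTTime=21:00 PERiod=1 PERUnits=Days
```

The single quotes cause the path to end up double-quoted in the stored
schedule, so the embedded blanks survive.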
TDPO Tivoli Data Protection for Oracle
tdpo.<Nodename> file In Unix, the TDP file in which the node
password is written, for PASSWORDAccess
Generate.
(In Windows, the password is stored in
the Registry.)
TDPO_AVG_SIZE TDP Oracle tdpo.opt option to specify
the average size of an object sent to
the TSM server, to influence where the
sent object goes first in the storage
pool hierarchy. The value should be
large enough to accommodate the largest
objects sent, but not to be so large
that no objects would go to a first
level disk storage pool (instead going
to the next level tape storage pool).
This option was discontinued in TDP
2.2.1 as being counterproductive.
TDPO_FS Tivoli Data Protection for Oracle option
to specify a file space name on the TSM
server which TDP for Oracle uses for
backup, delete, and restore operations.
Name length: 1 to 1024 characters
Default: adsmorc
tdpoerror.log TDP for Oracle error log. As of 2.2.1,
TDP Oracle no longer uses the Tivoli
Storage Manager API error log file,
dsierror.log.
tdpsdan.txt The TDP for SQL Danish language message
repository. See also: ANS0102W
Teach A tape library operation wherein the
robotic mechanism carefully explores the
internals of the library, learning what
elements (tape storage racks, tape
drives) are present, and their exact
locations in space (usually via infrared
reflector patches).
TEC Tivoli Enterprise Console; or,
Tivoli Event Console. Aka T/EC.
Tivoli Enterprise Console product is a
powerful, rules-based event management
application that integrates network,
systems, database, and application
management. It offers a centralized,
global view of your computing enterprise
while ensuring the high availability of
your application and computing
resources. It collects, processes, and
automatically responds to common
management events, such as a database
server that is not responding, a lost
network connection, or a successfully
completed batch processing job. It acts
as a central collection point for alarms
and events from a variety of sources,
including those from other Tivoli
software applications, Tivoli partner
applications, custom applications,
network management platforms, and
relational database systems.
Ref: TSM Admin Guide, "Logging Tivoli
Storage Manager Events to Receivers"
See also: TECHost; TECBegineventlogging;
TECPort; Data Protection Agents
TEC events Refers to to the event sent from a
monitored system to the Tivoli
Enterprise Console server.
TECBegineventlogging Server option to activate the Tivoli
Enterprise Console receiver during
startup. This is analogous to issuing a
BEGIN EVENTLOGGING TIVOLI on the server
console. This specifies whether event
logging for the Tivoli receiver should
begin when the server starts up. If the
TECHost option is specified,
TECBegineventlogging defaults to Yes.
Syntax: TECBegineventlogging Yes|No
Yes Specifies that event logging begins
when the server starts up and if a
TECHost option is specified.
No Specifies that event logging should
not begin when the server starts
up. To later begin event logging to
the Tivoli receiver (if the TECHOST
option has been specified), you
must issue the BEGIN EVENTLOGGING
command.
Technical Guide redbook Each new version of TSM is typically
accompanied by a Technical Guide redbook
which nicely explains all the new
features in that version. View at
http://www.redbooks.ibm.com .
In addition, in the frontmatter of the
manuals is a Summary of Changes which
enumerates the technical improvements in
that release of the software.
Technote IBM provides numerous technical articles
on its website, each called a Technote.
They are identified by number, such as
1141492. If you know a Technote number,
you may search for it at www.ibm.com.
(Put the number in quotes to limit the
search that is performed on the site.)
TECHost Server option to specify the Tivoli
Enterprise Console server host for the
Tivoli event server.
Syntax:
TECHost <HostName or IP_address>
TECPort Server option to specify the Tivoli
Enterprise Console port number on which
the Tivoli event server is
listening. This option is only required
if the Tivoli event server is on a
system that does not have a Port Mapper
service running (portmap process).
Syntax: TECPort <port number>
where the port number must be between 0
and 32767.
See also: TCPPort
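Taken together, the three TEC* options might be coded in the server options file like this (the host name and port number are illustrative, not defaults):

```
* dsmserv.opt fragment -- illustrative values only
TECHost              tec.example.com
TECPort              5529
TECBegineventlogging Yes
```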
Testflag Nomenclature for a provisional client
software developer's flag, which can be
specified as a dsm.opt option (e.g.,
"TESTFLAG NODETACH") or on the command
line, as in tracing:
'dsmc i -traceflags=_______ ...'
to cause some unusual action in the
client.
Threads See: Processes, server; SHow THReads
Threads, client The TSM client uses the
producer-consumer multithreading model.
In a standard Incremental backup:
When the producer thread gets a file
specification to be processed, it
queries the TSM server for information
about existing backups for that file
spec. The server sends the query results
back to the client. The producer thread
uses the query results to determine
which files have changed since the last
backup, then builds transactions
(representing files to be backed up) to
be processed by the consumer thread. The
consumer thread then backs up the files
in each transaction. Since it is the
consumer thread that does the actual
backup work (i.e. the transfer of the
data to the server), you see its session
with a large number of bytes received.
An idle producer thread is typically due
to it not being given any more file
specs to process, so it isn't querying
the TSM server. Once the consumer thread
is done with its work (and there are no
more file specifications to process),
then the consumer and producer threads
will close out their server sessions.
If the producer session is timed out via
the server's IDLETimeout setting, it
will re-establish itself if necessary.
The client's main thread is responsible
for giving the producer thread file
specs to process. The producer thread
doesn't close out its session after
processing each file spec for
performance reasons; if the file specs
are coming in fairly quickly, then the
overhead of stopping/restarting sessions
could impact performance. In theory, the
producer could close its session after a
certain period of inactivity.
See also: Multi-session Client
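The producer-consumer flow described above can be sketched in miniature. This is an illustrative model only, not TSM client code; run_backup and its arguments are invented for the sketch:

```python
import queue
import threading

def run_backup(filespecs, changed_files):
    """Sketch of the client's producer-consumer model.
    changed_files maps each filespec to the files deemed changed."""
    work = queue.Queue()
    backed_up = []

    def producer():
        # The producer handles one filespec at a time (queries the server
        # for existing backups) and queues the changed files as work.
        for spec in filespecs:
            for f in changed_files.get(spec, []):
                work.put(f)
        work.put(None)  # signal: no more filespecs to process

    def consumer():
        # The consumer does the actual transfer of data to the server;
        # here it just records what it "backed up".
        while True:
            item = work.get()
            if item is None:
                break
            backed_up.append(item)

    p = threading.Thread(target=producer)
    c = threading.Thread(target=consumer)
    p.start(); c.start()
    p.join(); c.join()
    return backed_up
```

As in the real client, the producer's work is cheap bookkeeping while the consumer carries the data volume, which is why the consumer session shows the large byte counts.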
Threshold for non-journaled Windows client GUI preference,
incremental backups introduced in TSM 4.2. Corresponds to
the INCRTHreshold option.
Ref: Windows client manual; TSM 4.2
Technical Guide redbook
Threshold migration The process of moving files from a local
file system to ADSM storage based on the
high and low thresholds defined for the
file system. Threshold migration is
started automatically by HSM and can be
started manually by a root user.
Contrast with demand migration and
selective migration.
Threshold migration (HSM), max number Control via the MAXThresholdproc option
of simultaneous processes in the Client System Options file
(dsm.sys). Default: 3
Threshold migration (HSM), query Via the AIX command:
'dsmmigfs Query [FileSysName]'
Threshold migration of a file system, Via Unix command: 'dsmautomig FSname'
(HSM) force
Threshold migration of a file system Control via the AIX command:
(HSM) set levels 'dsmmigfs Add|Update -hthreshold=N'
for the high threshold migration
percentage level. Use:
'dsmmigfs Add|Update -lthreshold=N'
for the low threshold migration
percentage level.
THROUGHPUTDatathreshold Server option: Specifies throughput
threshold that a client Consumer session
must achieve to prevent being cancelled
after a specified number of minutes
(plus media wait time). The time
threshold starts when a client first
sends data to the server for storage (as
opposed to setup or session housekeeping
data). Syntax:
THROUGHPUTDatathreshold Nkbpersec
where the number of KB per second
specifies the throughput that client
sessions must achieve to prevent
cancellation after
THROUGHPUTTimethreshold minutes have
elapsed. This threshold does not include
time spent waiting for media mounts.
A value of 0 prevents examining client
sessions for insufficient throughput.
Throughput is computed by adding send
and receive byte counts and dividing by
the length of the session. The length
does not include time spent waiting for
media mounts and starts at the time a
client sends data to the server for
storage.
Code: 0 - 99999999. Default: 0
Note: Interactive sessions, i.e. command
line and graphical interface clients,
will be affected by these parameters as
calculations are cumulative across
multiple operations. When a session is
cancelled for being over the throughput
time threshold and under the throughput
data threshold, the following new
message will appear:
ANR0488W Session xx for node yy ( zz )
terminated - transfer rate is less than
ww kilobytes per second and more than vv
minutes have elapsed since first data
transfer
xx = session number
yy = node name
zz = platform name
ww = transfer rate in kilobytes per
second
vv = elapsed time since first data
transfer
See also: Consumer session; SETOPT
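The cancellation test described above can be sketched as follows. The function is invented for illustration, with the two thresholds passed as arguments rather than read from the server options file:

```python
def should_cancel(sent_bytes, recv_bytes, session_secs, media_wait_secs,
                  data_threshold_kbps, time_threshold_mins):
    """Return True if a consumer session falls below the configured rate."""
    if data_threshold_kbps == 0 or time_threshold_mins == 0:
        return False  # a value of 0 disables the examination
    # Time spent waiting for media mounts is excluded from session length.
    active_secs = session_secs - media_wait_secs
    if active_secs < time_threshold_mins * 60:
        return False  # time threshold has not yet elapsed
    # Throughput = (send + receive byte counts) / session length, in KB/s.
    kbps = (sent_bytes + recv_bytes) / 1024 / active_secs
    return kbps < data_threshold_kbps
```

For example, 100 MB moved in an hour is about 28 KB/s, so a 50 KB/s data threshold with a 30-minute time threshold would cancel such a session.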
THROUGHPUTTimethreshold Server option: Specifies time threshold
for a Consumer session after which it
may be cancelled for insufficient
throughput. Syntax:
THROUGHPUTTimethreshold Nmins
where the minutes specify the threshold
for examining client sessions and
cancelling them if the throughput
threshold is not met (see the
THROUGHPUTDatathreshold option). This
threshold does not include time spent
waiting for media mounts. The time
threshold starts when a client first
sends data to the server for storage (as
opposed to setup or session housekeeping
data). A value of 0 prevents examining
client sessions for insufficient
throughput.
Code: 0 - 99999999 (minutes).
Default: 0 (which disables it)
See also: Consumer session; SETOPT
tid Thread ID.
Time of day, per server ADSM server command 'SHow TIME'
(undocumented)
Time zone See: ACCept Date
TIMEformat, client option, query 'dsmc Query Option' in ADSM or 'dsmc
show options' in TSM; see "Time Format"
value. 0 indicates that your opsys
dictates the format.
TIMEformat, client option, set Definition in the client user options
file. Specifies the format by which
time is displayed by the ADSM client.
NOTE: Not usable with AIX or Solaris, in
that they use NLS locale settings. Code:
1 for 23:00:00
2 for 23,00,00
3 for 23.00.00
4 for 12:00:00AM/PM
Default: 1
Query: 'dsmc Query Options' in ADSM or
'dsmc show options' in TSM and look at
the "Time Format" value. A value of 0
indicates that your opsys dictates the
format.
See also: DATEformat
TIMEformat, server option, query 'Query OPTion' and look at the
"TimeFormat" value.
TIMEformat, server option, set Definition in the server options file
for ADSM and old TSM.
Specifies the format by which time is
displayed by the ADSM server:
1 for 23:00:00
2 for 23,00,00
3 for 23.00.00
4 for 12:00:00AM/PM
Default: 1
This option is obsolete since TSM 3.7:
the date format is now governed by the
locale in which the server is running,
where the LANGuage server option is the
surviving control over this.
Ref: Installing the Server...
See also: DATEformat; LANGuage
Timeout values See: COMMTimeout; IDLETimeout;
MOUNTWait; THROUGHPUTTimethreshold;
Client sessions, limit time
TIMESTAMP SQL: A typename in the ADSM database.
In report form, it looks like:
2000-05-10 22:37:37.000000
Portions of it can be accessed via a
CAST(... AS ___) where ___ can be one of
DATE, DAY, DAYNAME, DAYOFWEEK,
DAYOFYEAR, DAYS, DAYSINMONTH,
DAYSINYEAR, HOUR, MINUTE, MONTH,
MONTHNAME, QUARTER, SECOND, TIME,
TIMESTAMP, WEEK, YEAR.
Sample of seeking date > 7 days old:
SELECT * FROM ADSM.FILESPACES WHERE
CAST((CURRENT_TIMESTAMP-BACKUP_END)DAY
AS DECIMAL(18,0))>7
See also: HOUR(); MINUTE(); SECOND().
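Outside of SQL, the report form shown above parses cleanly with ordinary datetime handling; a small sketch (the function name is invented):

```python
from datetime import datetime

def days_old(timestamp_str, now_str):
    """Age in whole days of a TSM TIMESTAMP report-form value,
    e.g. '2000-05-10 22:37:37.000000', relative to a given time."""
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    ts = datetime.strptime(timestamp_str, fmt)
    now = datetime.strptime(now_str, fmt)
    return (now - ts).days
```

This mirrors the CAST(...DAY...) sample above: a value more than 7 days old would satisfy the SELECT's predicate.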
Timestamp Control Mode (HSM) One of four execution modes provided by
the dsmmode command. Execution modes
allow you to change the space management
related behavior of commands that run
under dsmmode. The timestamp control
mode controls whether commands preserve
the access time for a file or set it to
the current time.
See also: execution mode
TIVGUID Another name for GUID (q.v.).
Tivoli The name of the enterprise management
software company, acquired by IBM, and
then given responsibility for the
* Storage Manager product.
Tivoli Data Protection for Exchange See: TDP for Exchange
Tivoli Storage Manager Formally called IBM Tivoli Storage
Manager, as of 2002/04.
Tivoli Storage Manager for Databases Consolidates former products as of
2002/05: Tivoli Storage Manager for
Databases: Tivoli Data Protection for
Informix, Tivoli Data Protection for
Oracle, and Tivoli Data Protection for
Microsoft SQL.
Relies on the backup application program
interfaces (APIs) provided by several
different database packages to store
backup data in the TSM server.
Microsoft SQL Server, Oracle and IBM
Informix. (A TSM backup client is also
available for IBM DB2 databases, but
this client is included with the DB2
software; it is not part of the Tivoli
Storage Manager for Databases product.)
Ref: May 2002 whitepaper "Comprehensive,
flexible backup and recovery for
relational databases".
www.tivoli.com/products/index/
storage-mgr-db/
See also: TDP for Informix
Tivoli Storage Manager for Hardware Various hardware storage subsystems
provide facilities which specifically
help make backups more efficient, such
as Flash Copy on the IBM ESS. This
provides a means for TSM to perform
backups from the snapshots, rather than
contending with the file system or
database at the operating system or
database system level. There are, of
course, ramifications and caveats.
This adjunct product is currently for
DB2 and Oracle database backups.
http://www.ibm.com/software/tivoli/
products/storage-mgr-hardware/
Tivoli Storage Manager for Mail A software module for IBM Tivoli Storage
Manager that automates the data
protection of email servers running
either Lotus Domino or Microsoft
Exchange.
This single facility replaces the two
prior, separate products as of 2002/04:
Tivoli Data Protection for Lotus Domino,
and Tivoli Data Protection for Microsoft
Exchange Server.
www.tivoli.com/products/index/
storage-mgr-mail/
Tivoli Storage Manager for System The latest incarnation and identity of
Backup and Recovery the IBM SysBack product, for AIX system
backups. See: SysBack
Tivoli.com The Tivoli web site, until 2003/02/01,
when it was absorbed into IBM.com for
corporate consistency.
\tivoli\tsm\Server\adsmdll.dll Like: C:\tivoli\tsm\Server\adsmdll.dll
At least through TSM 4.2, this is the
TSM client module on Windows.
TLM Generically, Tape Library Manager.
Product: Backup and disaster recovery
product from Connected.
TLS-NNNN Qualstar company Tape Library System
model number, where NNNN identifies the
specific model. The first N is the DLT
series identifier. The second N
specifies the number of drives in the
library. The final NN is the maximum
number of cartridges within magazines.
TME Tivoli Management Environment. An
integrated suite of systems management
applications for a distributed
client/server environment.
/tmp The Unix temporary files file system.
*SM has never wanted to back up the /tmp
file system, or any files in it, via
Incremental or Selective: there is an
implied Exclude in effect for /tmp, even
if you don't specify one. Some customers
report being able to get around this by
coding /tmp in the client DOMain option.
Likewise, HSM does not allow you to add
/tmp to its repertoire of controlled
file systems, as that doesn't make
sense.
See: ALL-LOCAL; DOMain; Raw logical
volume; Shared memory
/tmp/.8000001e.1a0e The kind of filename created by mail
reader Pine, owned by a user, containing
the PID of the pine process.
-TODate (and -FROMDate) Client option, as used with Restore and
Retrieve, to limit the operation to
Active and Inactive files up to and
including the specified date.
Used on RESTORE, RETRIEVE, QUERY ARCHIVE
and QUERY BACKUP command line commands,
usually in conjunction with -TOTime
(and -FROMTime).
The operation proceeds by the server
sending the client the full list of
files, for the client to filter out
those meeting the date requirement. A
non-query operation will then cause the
client to request the server to send the
data for each candidate file to the
client, which will then write it to the
designated location.
See also: DATEformat
Contrast with: -PITDate
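A typical invocation combining these options might look like the following (paths and dates are illustrative; the date form accepted depends on your DATEformat/locale settings):

```
dsmc restore "/home/user/*" -SUbdir=yes \
     -FROMDate=01/01/2004 -FROMTime=00:00:00 \
     -TODate=03/31/2004   -TOTime=23:59:59
```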
Total number of bytes transferred: In the summary statistics from an
Archive or Backup operation, or the
Activity Log message ANE4961I which
records the client operation stats, the
sum of all bytes transferred.
The value will be reported in a form
suiting its magnitude, as in samples:
"114.45 MB", "1.53 GB".
Note that in Unix and other systems with
simple directory structures, the number
will probably be less than the sum
reflected by including the numbers shown
on "Directory-->" lines of the report,
in that *SM stores only the name and
attributes of directories, in its
database. Note also that Retry
operations may inflate this value, if
they result in the file being re-sent to
the server, as in the case of the
beginning of a direct-to-tape backup,
when the tape is not yet mounted
(message ANS1114I). With retries, the
amount of data actually deposited in the
storage pool can be considerably less
than the transferred bytes count.
Total number of objects backed up Client Summary Statistics element:
The total number of objects updated.
These are files whose attributes, such
as file owner or file permissions, have
changed.
Total number of objects deleted: In the summary statistics from an
Archive or Backup operation, or the
Activity Log message ANE4957I which
records the client operation stats.
This is a count of the objects deleted
from the client disk file system after
being successfully sent to the server
storage pool in an Archive operation
where -DELetefiles is used. The number
is zero for all Backup commands.
Total number of objects expired Client Summary Statistics element:
Objects that have been expired either
because they no longer exist on the
TSM client, have been excluded by the
client, or have been rebound to a new
management class which retains fewer
versions.
Total number of objects failed: In the summary statistics from an
Archive or Backup operation, or the
Activity Log message ANE4961I which
records the client operation stats.
Reflects problems encountered during the
job. Refer to the dsmerror.log for
problem details.
Message ANS1802E will appear at the end
of the backup of the file system having
the problem. Message ANS1228E usually
points out the file that failed. The
failure cause most typically is files
being active during backup, as per
message ANS4037E (consider boosting your
CHAngingretries value). Or, message
ANS4005E points out a file which was
deleted before it could be backed up.
Can also Search the body of the job for
messages other than ANS1898I progress
messages. See also messages ANS4228E,
ANS4312E. Could be the inability to use
a tape that is stuck in a drive, or that
the drive is disabled.
If the number of failed = number of
examined, it is likely a client defect,
as in APAR IC41440.
Total number of objects inspected: In the summary statistics from an
Archive or Backup operation, or the
Activity Log message ANE4952I which
records the client operation stats.
Reflects the number of file system
objects eligible for inspection - which
may be reduced in Backup or Archive per
Include/Exclude specs. (But, in a
Backup, only EXCLUDE.FS or EXCLUDE.Dir
will cause file systems or directories
to not be entered: other Exclude types
will cause the contents of file systems
and directories to be fully inspected,
thus elevating this statistic.
When using journal-based backup, the
number of objects inspected may be less
than the number of objects backed up.
In Unix, the "." file in the highest
level directory is not backed up, which
is why "objects backed up" is one less
than "objects inspected".
See also: Journal-based backups &
Excludes
Total number of objects rebound Client Summary Statistics element:
Total number of objects rebound to a
different management class.
Total Storage Expert (TSE) Can co-exist with TSM; but be aware that
TSE is a Java application, and as such
is a resource hog.
TotalStorage See: IBM TotalStorage
-TOTime (and -FROMTime) Client option, as used with Restore and
Retrieve, to limit the operation to
files up to and including the specified
time.
Used on RESTORE, RETRIEVE, QUERY ARCHIVE
and QUERY BACKUP command line commands,
usually in conjunction with -TODate (and
-FROMDate) to limit the files involved.
The operation proceeds by the server
sending the client the full list of
files, for the client to filter out
those meeting the time requirement. A
non-query operation will then cause the
client to request the server to send the
data for each candidate file to the
client, which will then write it to the
designated location.
See also: TIMEformat
TPname Client System Options file (dsm.sys)
option to specify a symbolic name for
the transaction program name. For SNA.
Discontinued as of TSM 4.2.
TPNProfilename server option, query 'Query OPTion'
TRACE Server command for tracing server
operation to capture data relating to a
problem situation. You should do so
only as instructed by IBM Support,
noting that tracing can add overhead and
itself jeopardize full, stable
operation. Example:
adsm> trace enable PVR MMS
(use PVR for suspected drive problems,
MMS for suspected robotics problems.
PVR generates a LOT of output)
adsm> trace begin tsmtrace.out
...replicate your problem situation...
adsm> trace end
Capture the results, from the Activity
Log, via like:
adsm> q actlog begintime=<time>
endtime=<time> > actlog.out
Tracing client Diagnosis tool invoked in the client
environment, usually as directed by IBM.
Controlled by options which may appear
in the Client User Options File
(dsm.opt), but which more usually are
specified on the dsmc command line.
Available commands: Query Tracestatus
Available options: NOTrace,
TRACEFILE <FileName>;
TRACEFLAGS:
SERVICE
INSTR_CLIENT_DETAIL for a breakdown of
how long the ADSM client spends in
each operation (network, file i/o,
etc).
TRACEMAX NNNN (limits trace log size to
that many MB)
Ref: Trace Facility Guide manual
You can also use the undocumented
"-traceflags=service" with dsmc, and
"-traceflags=instr_client_detail".
See also: Query Trace; THROUGHPUT
MEASUREMENT section near bottom of this
document.
(See "CLIENT TRACING" section at the
bottom of this document.)
Tracing server Diagnosis tool invoked in the server
environment, usually as directed by IBM.
Controlled by options which may appear
in the Server Options File (if needed to
trace things as the server comes up),
but which which more usually are
specified in server session commands.
Available commands: TRace Begin|Disable|
ENAble|END|Flush|List|Query.
Example of tracing admin scheduling:
TR ENABLE SCHED
TR BEGIN
Ref: ADSM Trace Facility Guide manual
See also: "SERVER TRACING" section at
the bottom of this document.
Transaction Watch out for clients backing up very
large files or commercial databases,
as that constitutes a single, very
large transaction, which burdens the
Recovery Log.
See "CLIENT TRACING" section at bottom
of this document.
Transactions, minimize number Reducing the number of transactions is
helpful in reducing the overhead of
various operations. Be aware, however,
that fewer transactions can mean more
data per transaction, which makes for a
higher demand for space in the
Recovery Log. This is governed by
various client and server options:
MOVEBatchsize, MOVESizethresh,
SELFTUNETXNsize, TXNBytelimit,
TXNGroupmax.
Transmission Control Protocol/ A standard set of communication
Internet Protocol (TCP/IP) protocols that supports peer-to-peer
connectivity of functions for both local
and wide-area networks.
See: TCP/IP
Transparent Recall The process HSM uses to automatically
recall a file back to your workstation
or file server when the file is
accessed. The recall mode set for a file
and the recall mode set for a process
that accesses the file determine whether
the file is stored back on the local
file system, stored back on the local
file system only temporarily if it is
not modified, or read from ADSM storage
without storing it back on the local
file system.
See also: recall mode
Contrast with: selective recall
Travan Tape storage technology which employs a
linear, single-channel recording on
0.25" tape. Lower capacity and cost make
it suitable for remote locations and
modest uses.
Capacity: TR-5: 10 GB native.
TR-7: 20 GB native.
trcatl Tracing command for the 3494, to be run
from the host system, to see if the 3494
is functioning correctly. Syntax:
'trcatl [-a | -l LibName]'
Of course, performing mtlib query
commands is just as good to determine if
the library is accessible and
responding.
Ref: Manual: "IBM SCSI Tape Drive,
Medium Changer, and Library Device
Drivers: Programming Reference", Problem
Determination chapter (near end of book)
Tru64 Unix Compaq's name for what was the
"Digital Unix", before Compaq bought DEC
Trusted Communication Agent (TCA) The dsmtca or dsmapitca module, which
is setuid root and serves as a trusted
intermediary between the non-root client
and the TSM server when you use the
"PASSWORDAccess Generate" options.
Handles the sign-on password protocol.
The dsmtca process can be invoked as any
user, but runs as root, and thus can get
at the non-public TSM encrypted password
file, and so will have the authority to
connect.
Note, however, that neither the TCA nor
the TSM server use privileged port
numbers (0-1023) for the TCP
interaction: the TCA uses an arbitrary
port number > 1023 and the ADSM server
uses port number 1500.
Performance: There is little overhead in
the intermediation: the interaction will
take about the same amount of time as if
invoked by root directly.
In such non-privileged interactions the
TSM accounting record will record a
username in the 7th field (Client owner
name).
Trustee rights See: Novell trustee rights
tsadosp.nlm (Netware) Used to backup and restore the DOS
partition on a NetWare server
(ostensibly introduced by NetWare 5).
The TSM client does not support the
backup/restore of the DOS partition.
TSM Tivoli Storage Manager. Initial version
and release: 3.7. Predecessor product:
ADSM. Basic AIX system requirement: AIX
4.3.1, 4.3.2, 4.3.3. Not supported under
AIX 4.2.
See also: ITSM
TSM 4.1, upgrading One customer reports having upgraded his
AIX 4.3.3 system to 5.1, and that his
TSM 4.1.4 did run on AIX 5.1.
TSM 4.2 principal features - Increased platform support for
LAN-free backup
- Enhanced SAN exploitation
- Journal-based backups for Microsoft
Windows NT and Windows 2000 clients
- A backup/archive client for the IBM
eServer zSeries on Linux
The TSM 4.2 server requires Internet
Explorer 5 or higher.
TSM 4.2.x, work on AIX 5.1? Yes.
TSM 5.1.0.0 This is a base-install level. On AIX,
runs on AIX 5.1 and AIX 5.2. (The Quick
Start manual stipulates "AIX 5.1 or
above".)
TSM 5.2.0.0 This is a base-install level. On AIX,
requires AIX 5.2.
TSM 5.2.2.0 is as well, requiring new
license files.
TSM 5.2.3.0 This is an upgrade for 5.2.x.x. So if
you have a 5.2.1.x version installed,
you can download 5.2.3.0 from the IBM
site and install it.
TSM 5.3, upgrade to IBM site Technote 1193325
TSM 5.3 FAQ IBM site Technote 1193418
TSM components installed AIX: 'lslpp -l "tivoli*"'
See also: ADSM components installed
TSM Extended Edition (5698-ISX) This is the "big" TSM - actually the
common TSM base, but with more expansive
licensing. Should one license the basic
TSM or this Extended Edition? As of
2004/05, IBM doc says:
"For IBM Tivoli Storage Manager version
5.1 and later, to use a library with
greater than 3 drives or greater than 40
storage slots, IBM Tivoli Storage
Manager Extended Edition is required."
As of TSM 5, the EE includes NDMP
support, previously in the separately
licensed TDP for NDMP product.
Summary of features:
- NDMP backup for NAS
- Advanced tape library support: many
drives and slots.
- Disaster Recovery Manager
http://www.ibm.com/software/tivoli/
products/storage-mgr-extended/
http://www.ibm.com/software/info/
ecatalog/html/products/
B106003K22276G08.html
TSM for Data Retention 2004/01 enhancements to TSM 5.2 Extended
Edition to prevent critical data from
being erased or rewritten. Helps meet
additional requirements defined by
regulatory agencies for retention and
disposition of data. This is a safeguard
measure to prevent inadvertent deletion
of the data until retention policies for
the data cause it to expire as the
organization anticipated.
This product also facilitates long-term
data retention in by moving data to new
recording technology over time.
www.ibm.com/software/tivoli/products/
storage-mgr-data-reten/
Controlled by:
Set ARCHIVERETENTIONPROTECTion
But: This is extremely non-trivial,
calling for a separate, special "archive
retention protection server", which
accepts only Archive operations.
The product term for this feature is
"Deletion Hold".
Ref: 5.2.2+ Admin Guide manual; 5.2.2+
API manual
See also: Archive, long term, issues
TSM For Hardware See: Tivoli Storage Manager for Hardware
TSM GUI Preferences (Mac client) The ADSMv3 Backup/Archive GUI introduced
an Estimate function. It collects
statistics from the ADSM server, which
the client stores, by server, in the
"TSM GUI Preferences" file.
Client installation also creates this
file in the client directory.
Ref: Client manual chapter 3 "Estimating
Backup processing Time"; ADSMv3
Technical Guide redbook
See also: .adsmrc; dsm.ini
TSM manuals, report problems Send comments on manuals, printed and
online, to: pubs@tivoli.com
TSM monitoring products Tivoli Decision Support.
Servergraph (www.servergraph.com).
TSMManager (www.tsmmanager.us,
www.tsmmanager.com). Said to do a lot of
its data acquisition via SQL queries. As
of 2005/03 there is Web-TSM, for
historical tracking reports.
Storserver
Tivoli ISRM.
CNT offers a TSM Reporting Tool as part
of our TSM Consulting Services
(www.cnt.com)
CA-Vantage: http://ca.com/products/sams/
ca_vantage_tsm.htm
TSM Operations Reporting Manager (IBM).
TSMReports.com
TSM Operations Reporting Manager In beta as of 2003/05.
It creates configurable reports in
email, html, or Windows' "net send"
formats.
TSM Operational Reporting (TSMOR) TSM operational reporting is designed to
automate some of the activities that
many TSM administrators perform manually
on a daily basis, by reporting status on
a scheduled basis. It creates
configurable reports in email, html, or
Windows' "net send" formats. Its data
source is the TSM database. Made
available to the TSM community for beta
2003/05. Runs on Windows only.
By default TSMOR generates a file that
corresponds with each report which ends
with .in, wherein you may see the set of
commands that TSMOR is sending to the
server. The corresponding .out file
shows the results returned by the TSM
server. All sections are optional
in TSMOR and you can easily add new
sections to obtain information from any
table and most queries.
TSMOR includes a set of canned sections
in the default report. Some of them use
the TSM summary table to obtain
information, a table which has had
viability problems in the past, so be
aware.
TSM origins See: WDSF
TSM server version/release level Revealed in server command Query STatus.
Is not available in any SQL table via
Select.
TSM tape, read on another TSM server? The question sometimes comes up if a TSM
storage pool tape (TDP SQL or the like)
can be ejected out of one TSM library
and sent to a foreign TSM site for data
retrieval there. The answer is no: TSM
is a regimented environment, where rules
and policies prevent interloper data
from being introduced. TSM tapes are not
intended to be portable. You can employ
data-specific means to generate portable
media (such as an SQL database unload).
You might also do a TSM Export - but
that is much more elaborate.
TSM tape format Is proprietary, undefined: it is the
vendor's trade secret. You as a customer
"do not need to know".
tsm In AIX, the terminal state management
command, whose console messages may
confusingly suggest that they come from
TSM. For info: 'man tsm'.
TSM.PWD The TSMv4+ name of the encrypted
password file on the TSM client.
See: ENCryptkey; /etc/security/adsm/
TSMCLEAN.EXE A limited, modified version of the
Windows client in 4.1.2.12 to fix the
"ANS1304W Active object not found"
problem.
TSMOR See: TSM Operational Reporting
TXNBytelimit (or -TXNBytelimit) Client System Options file (dsm.sys)
or dsmc command line option to specify
the number of kilobytes (not bytes) to
buffer before the client sends a
transaction to the server (Archive or
Backup) or that the server sends to the
client (Retrieve or Restore). (Yes, it
would have been better if the option had
been named "TXNKBytelimit".)
Note that the value implies the
allocation of a holding buffer in TSM
client memory, which precedes the TCP/IP
(or other communication) buffer, and as
such is independent of your operating
system transport buffer sizes. Likewise,
the server will seek to allocate this
much space in the receiving storage
pool: if there is insufficient space,
the transaction will fail.
Placement: Must appear within a server
stanza, not at head of options file.
Whichever of TXNBytelimit (client
option, in terms of bytes) or
TXNGroupmax (server option, in terms of
files) is met first causes the
transaction to be sent to the server.
Possible values, as KB units:
TSM 3.7: 1 - 25600 (25 MB).
TSM 4.1: 300 - 2097152 (2 GB)
Default: 300 (KB)
Recommendation for Solaris: 25600
(2097152 if going to LTO tape directly).
Notes: Larger values make for more
Recovery Log space. API applications -
in particular, the TDP agents - are not
bound to this limit by design.
Larger TXN* values can result in
transient files being "missed": present
when the transaction process compiles
its list of files for that "buffer", but
gone when the actual backup is done.
See also: COMPRESSAlways; TCPNodelay;
Diagram near the bottom of this document
Ref: Installing the Clients; Setting
Processing Options. Installing the
Server...; Setting Client Options
chapter.
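For example, a dsm.sys stanza placing the option correctly (server name and address are illustrative):

```
* dsm.sys fragment: TXNBytelimit belongs inside a server stanza
SErvername  tsmprod
   COMMMethod         TCPip
   TCPServeraddress   tsm.example.com
   TXNBytelimit       25600
```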
TXNBytelimit and tape drive buffers An ADSM-L posting suggests that the
TXNBytelimit size should not be less
than the buffer size of the tape drive,
as experienced with LTO. The posting
indicates that an inadequate size will
induce a buffer flush on the drive each
time you commit the transaction in TSM's
database. IBM LTO drives have a 64 MB
buffer, suggesting a minimum
TXNBytelimit of 65536 (KB), with the
poster citing best results using 131072
(128MB).
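The arithmetic behind that suggestion is simply a MB-to-KB conversion, since the option is coded in KB units (the helper name is invented):

```python
def txnbytelimit_for_buffer(buffer_mb):
    """TXNBytelimit value (in KB) matching a drive buffer given in MB."""
    return buffer_mb * 1024

# A 64 MB IBM LTO drive buffer thus suggests TXNBytelimit 65536,
# and the poster's preferred 128 MB figure gives 131072.
```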
TXNBytelimit client option, query 'dsmc Query Option' in ADSM or 'dsmc
show options' in TSM; seek
"MaxTxnByteLimit" if ADSMv2,
"txnbytelimit" if ADSMv3 or TSM. Note
that the value reported is in bytes, as
opposed to the option being defined in
KB.
TXNGroupmax Performance-affecting definition in the
server options file.
In ADSMv2, specified the maximum number
of files (including directories)
transferred as a group between the
client and server between transaction
commits.
In ADSMv3, this concept was extended to
determining the number of small files
(logical files) which can be aggregated
into an Aggregate (physical file) on the
server, in conjunction with the client's
TXNBytelimit value in client-server
sessions. Also pertains to the target
server, in server-to-server virtual
volume operations. Specifies the number
of files that are transferred as a group
between a client and the server between
transaction commit points.
A larger TXNGroupmax results in larger
Aggregates, more efficient server
operations (as in reclamation), and less
database space consumed in cataloging.
Applies to Backup, Archive, Restore,
and Retrieve - but not HSM.
Whichever of TXNGroupmax or the client
TXNBytelimit value is met first causes
the client to send the transaction and
for the server to commit it.
Code 1 - 256.
Default: 40 (files)
Notes: Larger values make for more
Recovery Log space. Do not expect this
to affect TDP agents: for example, TDP
for Exchange sends up a maximum of 4
objects for any backup, and all objects
for a backup are contained within one
transaction.
Where very large files are involved, or
where only a single file is being backed
up, TXNGroupmax is effectively out of
the picture.
Larger TXN* values can result in
transient files being "missed": present
when the transaction process compiles
its list of files for that "buffer", but
gone when the actual backup is done.
Beware: A high value can cause severe
performance problems in some server
architectures when doing 'BAckup DB'.
See also: COMPRESSAlways; TXNBytelimit;
diagram near the bottom of this document
Ref: Installing the Server...; Server
Options chapter.
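Since whichever of TXNGroupmax or the client TXNBytelimit is met first ends the transaction, you can estimate which one governs for a given average file size. A sketch in plain shell - the option values and the 512 KB average file size here are purely illustrative assumptions:

```shell
txngroupmax=256          # server option: max files per transaction (assumed)
txnbytelimit_kb=131072   # client option: max KB per transaction (assumed)
avg_file_kb=512          # illustrative average file size
by_bytes=$(( txnbytelimit_kb / avg_file_kb ))   # files that fit the byte limit
if [ "$by_bytes" -lt "$txngroupmax" ]; then
    echo "TXNBytelimit governs: $by_bytes files per transaction"
else
    echo "TXNGroupmax governs: $txngroupmax files per transaction"
fi
```

With these numbers both limits allow 256 files, so TXNGroupmax is reported as governing; smaller average files would shift control to TXNGroupmax sooner, larger ones to TXNBytelimit.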
TXNGroupmax server option, query 'Query OPTion', seek "TxnGroupMax".
TYPE ADSM db SQL:
Column in CONTENTS table.
Values: Arch, Bkup, SpMg
Column in ARCHIVES, BACKUPS tables:
Values: DIR, FILE
(Directories are implicitly
backed up when a file is
backed up.)
Symbolic Links do not have
their own type: they are
FILE.
TYPE (volume type code) As output from 'mtlib' command, reports
the kind of tape in the 3494:
00 for 'J' tapes; 01 for 'K' tapes.
Ultrium (LTO) Designer name for an LTO technology.
The Ultrium format uses a single-reel
tape cartridge and a half-inch wide tape
(some 610 meters long; 2000 feet) for
high capacity. The external end of the
tape is attached to a strong metal
leader pin (not a flimsy plastic loop as
found in DLT).
Placement in marketplace: Ultrium
competes with DLT, and is said to be
somewhat better than SuperDLT. 3590
remains the premium technology.
Cartridge memory (LTO CM, LTO-CM) chip
is embedded in the cartridge: a
non-contacting RF module, with
non-volatile memory capacity of 4096
bytes, provides for storage and
retrieval of cartridge, data
positioning, and user specified info.
(There is no means provided for the
customer to retrieve data from the CM:
the LTO SCSI Reference manual describes
their Medium Auxiliary Memory (MAM) and
how to read and write it via SCSI cmds.
A modified tapeutil to do this can be
downloaded from http://www.mesa.nl .)
Recording method: Multi-channel linear
serpentine; 384 tracks across the half
inch of tape width, or 768 tracks per
inch. The 384 tracks are split into four
bands of 96 tracks each. Data is written
to the innermost bands first, to provide
protection to the data recorded earliest
in the process, by writing it in the
center, the most physically stable area
on the tape. Data is also verified as it
is written. Reads/writes 8 tracks at a
time: The first set of 8 tracks is
written from near the beginning of the
tape to near the end of the tape; the
head then repositions to the next set of
8 tracks for the return pass. This
process continues until all tracks are
written and the tape is full, or until
all data is written.
IBM's Ultrium drive is the 3580; the
3583 is its library.
Capacity: 100 GB, uncompressed (4x that
of Accelis)
Transfer rate: 10-20 MB/second.
Durability: The cartridge is relatively
fragile - nowhere near as rugged as
3590. In Ultrium cartridges, the leader
pin will sometimes get detached (can be
reattached); and in shipping, care must
be taken to protect the cartridges (in
their plastic cases, or a padded
container). Ultrium was originally
designed for stand-alone drives and
small libraries: the more rugged but
abandoned Accelis LTO was intended for
automated libraries.
Expected life: According to IBM doc, 100
file passes, 5000 load/unload cycles.
Performance: Customers are reporting
satisfactory performance in streaming
mode; but dissatisfying performance in
more realistic start-stop operation.
Visit: http://www.ultrium.com/
See also: 3583; Accelis; Backhitch; LTO;
LTO vs. 3590; MAM; TXNBytelimit and tape
drive buffers
Ultrium 1 (a.k.a. LTO-1, lto1) The first generation, as described in
"Ultrium", above.
Customer satisfaction: So-so... Numerous
problems. The early cartridges were not
welded: when loading, the leader pin
would push in slightly, which could
partially open the case, causing the
leader pin to get stuck, necessitating
drive surgery to extricate the
cartridge. Certainly not as good in
performance or reliability as the more
expensive 3590 or 9840 technologies.
Ultrium 2 (a.k.a. Ultrium2, LTO-2, Second generation of Ultrium/LTO.
lto2, Gen2) Native capacity: 200 GB
Compression: 2x
Data rate: 35 MB/sec native, 70 MB/sec
with 2:1 compression
Said to implement variable speed tape:
the drive adjusts its speed to match the
data flow, which improves performance.
Cartridge compatibility for the Ultrium
2 Tape Drive is as follows:
- Reads and writes Ultrium 2 format on
Ultrium 2 cartridges
- Reads and writes Ultrium 1 format on
Ultrium 1 cartridges
- Does not write Ultrium 2 format on
Ultrium 1 cartridges
- Does not write Ultrium 1 format on
Ultrium 2 cartridges
TSM support of it began at: 5.1.6.1
See also: Ultrium generations
compatibility
Ultrium and FibreChannel and Unix See first the general discussion under
"Ultrium and FibreChannel and Windows".
For RS6000/Pseries IBM is using Emulex
adapters (LP7000, LP9002, or LP9802) and
in AIX the setting is called
"max_xfer_size" with value measured in
bytes. By default the value is 0x100000
(1 MB) and is sufficient for disk
operations. For LTO tapes it must be set
to 0x1000000 (16 MB).
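On AIX the adapter attribute can be inspected and changed with lsattr/chdev. A hedged sketch: the adapter name fcs0 is an assumption (list yours with 'lsdev -Cc adapter'), and a busy adapter will need the -P flag plus a reboot for the change to take effect:

```shell
lsattr -El fcs0 -a max_xfer_size           # show the current value
chdev -l fcs0 -a max_xfer_size=0x1000000   # set 16 MB for LTO tape use
```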
Ultrium and FibreChannel and Windows The Windows Registry value MAXimumSGList
may need adjusting. There can be
problems caused by the combination of an
insufficient scatter/gather region
associated with the FC adapter and the
use of the Windows 2000 IBM Ultrium
device driver. The Scatter Gather region
is the DMA direct memory access for the
adapter. If this region is smaller than
the size of a write attempted by TSM,
the IBM Ultrium device driver will write
the TSM data into segments that fit into
the scatter gather region of the
adapter. When a restore is attempted on
this data the restore will fail because
the block of data TSM wrote during the
backup was broken into segments to make
it fit into the Scatter Gather region
associated with the FC adapter.
You may find the value set to 0x41: it
needs to be higher (try 0xFF).
(Perspective: During the TSM 4.2 era,
QLogic changed the default value of
MaximumSGList option from 0x41 to
0x21. This option mandates how large
your FCP packets can be. Where you
formerly received errors when a packet
exceeded the limit, the failure now
surfaces only at backup time, as an
inconsistency. The real issue is that
data is falsely reported as successfully
sent but is truncated. Large blocks are
used mostly for tape storage. Thus disk
operations are not affected (and no
problems are reported there). But tape
operations are affected and you can
realize this usually long after the seed
of the problem was planted.)
Increasing the value does not affect
disk operations.
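Why 0x41 is too small: each scatter/gather entry maps roughly one 4 KB page, so the largest single transfer is about (MaximumSGList - 1) * 4 KB. That relationship is as commonly described for Windows SCSI miniport drivers, so treat this arithmetic as a rule of thumb rather than a specification:

```shell
# 0x41 = 65 entries; 0xFF = 255 entries.
for sglist in 65 255; do
    echo "$sglist entries: $(( (sglist - 1) * 4 )) KB max transfer"
done
# -> 65 entries: 256 KB max transfer
# -> 255 entries: 1016 KB max transfer
```

256 KB is right at the edge of the large tape blocks TSM writes, which is why 0x41 produces truncation while 0xFF leaves ample headroom.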
Ultrium cleaning cartridge The IBM LTO Ultrium Cleaning Cartridge
is valid for 50 uses only: the
cartridge's LTO-CM chip tracks the
number of times that the cartridge is
used. If you insert a cleaning
cartridge when the drive does not need
to be cleaned or if you insert a
cleaning cartridge that has expired, the
drive will automatically eject the
cartridge.
Ref: 3580 Ultrium Tape Drive Setup,
Operator, and Service Guide
Ultrium generations compatibility Per the Ultrium FAQ:
The LTO Ultrium compatibility is defined
with two concepts demonstrating
investment protection:
1) An Ultrium drive is expected to read
data from a cartridge in its own
generation and at least the two prior
generations.
2) An Ultrium drive is expected to write
data to a cartridge in its own
generation and to a cartridge from
the immediate prior generation in the
previous generation format.
Ultrium (LTO) microcode updating Ref: TSM 5.1 Technical Guide redbook,
Appendix B.
Ultrium performance See Tivoli whitepaper "IBM LTO Ultrium
Performance Considerations".
See APAR IC33920, which cites poor Locate
performance on LTO Ultrium such that a
Read will be done instead of a Locate to
position to where the next file to be
restored is a few tape blocks away.
UNAVailable (Unavailable) Access Mode saying that the Storage Pool
or Volume cannot be accessed.
Can be manually set via UPDate STGpool
or UPDate Volume. A 'CANcel REQuest'
command produces this side effect.
Can spontaneously result from:
- A (manual) mount request times out,
unfulfilled.
- A reclamation ran into unreadable
files on the volume. (Retrying the
operation or a Move Data often works,
particularly on another drive.)
- The tape was stuck in a defective
drive and TSM gave up on it.
- Message "ANR8463E ____ volume ______
is write protected.", which is often
bogus, in that the tape cartridge
does not have Write Protect in
effect: this is a drive cartridge
sensing defect which your CE can
fix.
Tips: Changing a tape volume to
UNAVailable after it is freshly used to
back up a file system is one way to
prevent any other file systems from
being written to that volume when you
are manually performing one incremental
backup at a time, and thus achieve a
kind of file system collocation.
Msgs: ANR1410W
See: Copy Storage Pool, restore files
directly from
UNC Windows: Most commonly means Universal
Naming Convention - a network resource
name for a Share point on a computer.
The name consists of the network name of
the computer plus a name you assign to a
drive or directory in order to share it.
That name becomes the Share Point Name.
Might also be used as a shortcut for
Unicode (q.v.).
Examples of Share Point Names:
- For computer "server1", drive c:,
\\server1\c$
Note that the c$ identifies the name
of the remote drive - which is
probably entirely different from the
letter under which it is mounted on
your local computer system.
- For computer "server2", Share Point
"billing", \\server2\billing
Assuming that drive letter "g" got
assigned to this mount, you could
refer to first-level file "ReadMe" in
either of two ways:
1. \\server2\billing\ReadMe
2. g:\ReadMe
Note that UNC names may not be specified
for removable devices, such as CD-ROMs
or Zip disks.
Uni-reel Term used to describe tape cartridges
employing a single tape storage reel, as
used by 3480/3490/3490E, 3590/3590E,
SD-3 "Redwood" (Helical), DLT IV/7000.
The takeup reel is in the tape drive:
the tape end is pulled out of the
cartridge, into the drive, and onto its
takeup reel.
Compared with old-style dual-reel
cartridges, this makes much more sense
in reducing library storage space (the
dual-reel cartridges were "50% air") and
drive size, and in maximizing the amount
data that can be contained in a
cartridge.
Unicode (UNC) Unicode is a universal character
encoding standard that supports the
interchange, processing, and display of
text that is written in any of the
languages of the modern world. It is
frequently encountered in Windows work.
Because the Unicode representation of a
character may occupy several bytes, the
number of characters in a file name can
vary.
As of ADSM client version 3.1.0.5,
Unicode names may be used for Windows
files. This means that the machine name
is part of the file name.
As of version 4.2, there is support for
Unicode-enabled Client File Spaces.
As of the TSM 5.2 client, the Macintosh
client is Unicode enabled.
Important: The *SM server stores info
about each node. Once a node logs onto
the server using a Unicode-enabled
client, the node cannot log on with a
version of the client that does not
support Unicode. The server allows only
a Unicode-enabled client to restore
files from a Unicode-enabled file
space.
Ref: http://www.unicode.org/
"Forms of Unicode", http://www.ibm.com
/developerworks/library/
utfencodingforms/
See: AUTOFsrename; USEUNICODEFilenames
Unidata database, back up with TSM There's no TDP for Unidata backups.
You can use one of the following
methods, the last probably the best:
- Stop Unidata, back up the DB file
system, then re-start Unidata.
- Issue the Unidata dbpause command,
back up the DB file system, issue
dbresume.
- Mirror the DB file system, break the
mirror and back up the static copy.
All of the above may be implemented
through TSM Client Schedules, using
PRESchedulecmd and POSTSchedulecmd
scripts.
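A minimal sketch of the dbpause/dbresume approach as PRESchedulecmd/POSTSchedulecmd scripts. The /usr/ud/bin path is an assumption (check your Unidata installation), and a production script should confirm that the pause succeeded before letting the backup proceed:

```shell
#!/bin/sh
# preschedule.sh -- run via PRESchedulecmd before the scheduled backup
/usr/ud/bin/dbpause || exit 1    # quiesce Unidata; abort the backup on failure

# postschedule.sh -- run via POSTSchedulecmd after the backup completes
/usr/ud/bin/dbresume             # resume normal database operation
```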
Unique (UNIQUE) Perhaps you mean "distinct", as in
SELECT operations.
See: DISTINCT
Unix Limits In AIX: 'ulimit -a' command.
(see /etc/security/limits values, which
set a system ceiling on usage values.)
In Csh, do: 'limit' to see current
values; do 'ulimit' to unlimit.
Unknown (drive state) The drive begins in drive state unknown
as a result of being defined, as a
result of server initialization, or as a
result of having its status updated to
online. Also reported when a drive is
taken offline and put back online, where
the first use of the drive makes the
state meaningful again. If this
persists, check that your paths and
drives are online. A SCSI reserve might
be the reason, where a Windows reboot or
ntutil closure may fix it. Beyond that,
consider drive and library microcode
updates. One customer reported this state
on SDLT after cleaning failed, as in the
cleaning cartridge possibly still in the
drive, all cleaner cycles consumed.
See also IBM site Technote 1083669
See also: DRIVES
Unload DB ADSMv3 server command to unload the
server database. A subsequent
DSMSERV LOADDB reloads the database,
reorganizing and compressing it to
reduce database size.
Unload tape drive See: Dismount tape
UNLOADDB See: dsmserv unloaddb
UNLOCK Admin *SM server command to undo a previous
'LOCK Admin' to allow an administrator
to once again access the server.
Syntax:
'UNLOCK Admin Adm_Name'
UNLOCK Node *SM server command to undo a previous
'LOCK Node to allow a node to once again
access the server.
Syntax:
'UNLOCK Node Node_Name'
Unreadable Bytes; Unreadable Files Elements of Query PRocess output as a
Backup Storage Pool or Space Reclamation
is running, and subsequent ANR1214I
message. Reflects areas of the input
medium which could not be read due to a
media problem (e.g., bad spot on tape).
Query CONtent will show the problem
files remaining on the input volume.
Consider retrying the operation on a
different drive, and then recovery of
the volume.
Unregister licenses Prior to v5, there is no formal way to
"unregister" licenses. The basic method
is to remove the nodelock file from the
server directory, followed by
re-registering the licenses you do want.
You would need to do this during a quiet
time, when there are no sessions or
processes that would need to use the
licenses. In Windows, try using the
License Wizard (see the Quick Start
manual): with it you can adjust any of
your license counts up or down and it
also lets you know how you stand on
license compliance.
TSM v5 reportedly allows "deleting"
licenses by setting the number of
licenses to zero, as in:
REGister LICense FILE=./library.lic
Number=0
removes all of the library entries.
Similarly:
REGister LICense FILE=./mgsyslan.lic
Number=0
removes all of the clients.
Note that you can also *lower* the
number of licenses. If you are licensed
for 2,000 clients, you can reduce that
license to 1,000 clients:
REGister LICense FILE=./mgsyslan.lic
Number=1000
See also notes under REGister LICense.
Unregister node You mean: REMove Node
"Unsupported" - what that means You will often see statements like "Use
of an old client with a current server
level is unsupported". This basically
means that the vendor deems the old
client level not worth testing with
current server software, so they cannot
guarantee that the combination will work
- and more importantly, their official
stance means that they will not provide
support for inquiries regarding problems
encountered in trying to use such a
software combination.
UPDate Admin TSM server command to change basic
information about an administrator.
'UPDate Admin Adm_Name Adm_Passwd
[PASSExp=Expires0-9999Days]
[CONtact="Full name, etc..."]
[FORCEPwreset=No|Yes]'
where a PASSExp value of 0 means that
the password never expires.
See also: 'REGister Admin',
'GRant AUTHority'.
UPDate COpygroup (archive type) TSM server command to update a backup or
archive copy group. To allow clients to
use the updated copy group, you must
activate the policy set that contains
the copy group.
'UPDate COpygroup DomainName PolicySet
MGmtclass Type=Archive
[DESTination=PoolName]
[RETVer=N_Days|NOLimit]
[SERialization=SHRSTatic|STatic|
SHRDYnamic|DYnamic]'
For a copygroup in an active Policy Set,
the update is not in effect until an
'ACTivate POlicyset' is done (and of
course do 'VALidate POlicyset'
beforehand.)
Changing RETVer causes any
newly-archived files to pick up the new
retention value, and previously-archived
files also get the new retention value,
because of their binding to the changed
management class.
UPDate COpygroup (backup type) TSM server command to update a backup or
archive copy group. To allow clients to
use the updated copy group, you must
activate the policy set that contains
the copy group.
'UPDate COpygroup DomainName PolicySet
MGmtclass [Type=Backup]
[DESTination=Pool_Name]
[FREQuency=Ndays]
[VERExists=N_Versions|NOLimit]
[VERDeleted=N_Versions|NOLimit]
[RETExtra=N_Versions|NOLimit]
[RETOnly=N_Versions|NOLimit]
[MODE=MODified|ABSolute]
[SERialization=SHRSTatic|STatic|
SHRDYnamic|DYnamic]'
For a copygroup in an active Policy Set,
the update is not in effect until an
'ACTivate POlicyset' is done.
UPDate DEVclass Server command to update a device class
for storage pools.
It is not necessary to perform an
ACTivate POlicyset after the Update.
UPDate DEVclass (3590) 'UPDate DEVclass DevclassName
DEVType=3590 [LIBRary=LibName]
[FORMAT=DRIVE|3590B|3590C|
3590E-B|3590E-C]
[MOUNTLimit=Ndrives]
[MOUNTRetention=Nmins]
[PREFIX=TapeVolSerPrefix]
[ESTCAPacity=X]
[MOUNTWait=Nmins]'
Note that "3590" is a special, reserved
DEVType.
Changes take effect immediately: no
Activate Policyset needed.
See also: MOUNTLimit
UPDate DEVclass (file) 'UPDate DEVclass DevclassName
DEVType=FILE
[MOUNTLimit=1|Ndrives]
[MAXCAPacity=4M|maxcapacity]
[DIRectory=currentdir|dirname]'
Changes take effect immediately: no
Activate Policyset needed.
See also: MOUNTLimit
UPDate DRive TSM server command to update the
definition of a drive. Syntax:
'UPDate DRive LibName Drive_Name
[DEVIce=/dev/???]
[CLEANFREQuency=None|Asneeded|N]
[ELEMent=SCSI_Lib_Element_Addr]
[ONLine=Yes|No]'
where ONLine says whether a drive should
be considered available to *SM. The
state you set remains in effect across
TSM server restarts.
Note: Will not work (error ANR8413E) if
*SM thinks the drive is busy (as
reflected in a SHow LIBrary report).
UPDate LIBRary TSM server command to update the
definition of a library. Syntax:
'UPDate LIBRary LibName DEVIce=___
[EXTERNALManager=pathname]'
Note there is no provision for updating
the PRIVATECATegory/SCRATCHCATegory
values in an existing library: that is
not an adjustment but a major
commitment, requiring redefinition.
Note: Will not work (error ANR8450E) if
*SM thinks the library is busy (as
reflected in a SHow LIBrary report).
See also: DEFine LIBRary
UPDate LIBVolume TSM server command to change the status
of a sequential access storage volume in
an existing library. Syntax:
'UPDate LIBVolume LibName VolName
STATus=PRIvate|SCRatch'
Note that the update will cause the
Last Use value (as seen in
'Query LIBVolume') to be reset.
Where the library is a 3494, the command
won't interact with 3494 if it believes
that the volume already has the
indicated Category Code.
UPDate MGmtclass Server command to update definitions
within a management class. Syntax:
'UPDate MGmtclass DomainName SetName
ClassName
[SPACEMGTECH=AUTOmatic|
SELective|NONE]
[AUTOMIGNOnuse=Ndays]
[MIGREQUIRESBkup=Yes|No]
[MIGDESTination=poolname]
[DESCription="___"]'
Note that except for DESCription, all of
the optional parameters are Space
Management Attributes for HSM.
If this updated MGmtclass is in the
active policy set, you will need to
re-ACTivate the POlicyset for the change
to become active.
UPDate Node *SM server command to update the client
node definition. Syntax:
'UPDate Node NodeName [Password]
[FORCEPwreset=No|Yes]
[PASSExp=Expires0-9999Days]
[CLOptset=Option_Set_Name]
[CONtact=SomeoneToContact]
[DOmain=DomainName]
[COMPression=Client|Yes|No]
[ARCHDELete=Yes|No]
[BACKDELete=No|Yes]
[WHEREDOmain=domain_name]
[WHEREPLatform=platform_name]
[MAXNUMMP=number]'
where:
PASSExp value of 0 means that the
password never expires - unless
overridden by the Set PASSExp value.
Node must not be currently conducting a
session with the server, else command
fails with error ANR2150E.
Use of the DOmain parameter causes the
files to rebound to the new management
class the next time a backup is run on
the client.
See also: REGister Node; REMove Node
UPDate SCHedule, administrative Server command to update an
administrative schedule. Syntax:
'UPDate SCHedule SchedName
Type=Administrative
[CMD=CommandString]
[ACTIVE=No|Yes]
[DESCription="___"]
[PRIority=5|N]
[STARTDate=NNN] [STARTTime=NNN]
[DURation=N]
[DURunits=Minutes|Hours|Days|
INDefinite]
[PERiod=N]
[PERunits=Hours|Days|Weeks|
Months|Years|Onetime]
[DAYofweek=ANY|WEEKDay|WEEKEnd|
SUnday|Monday|TUesday|
Wednesday|THursday|
Friday|SAturday]
[EXPiration=Never|some_date]'
WARNING!!! Do not update a schedule
when that schedule is currently running,
as it may cause another instance to be
started! Example: I had an admin
schedule which started a Morning-Admin
script at 06:00. During its operation I
updated the schedule to start at 05:30 -
and found that ADSM started another
instance.
UPDate SCHedule, client Server command to update a client
schedule. Syntax:
'UPDate SCHedule DomainName SchedName
[DESCription="___"]
[ACTion=Incremental|Selective|
Archive|REStore|
RETrieve|Command|Macro]
[OPTions="___"] [OBJects="___"]
[PRIority=N]
[STARTDate=MM/DD/YY|TODAY
|TODAY+n]
[STARTTime=HH:MM:SS|NOW]
[DURation=N]
[DURunits=Hours|Minutes|Days|
INDefinite]
[PERiod=N]
[PERunits=Days|Hours|Weeks|
Months|Years|Onetime]
[DAYofweek=ANY|WEEKDay|WEEKEnd|
SUnday|Monday|TUesday|
Wednesday|THursday|
Friday|SAturday]
[EXPiration=Never|some_date]'
Advisory: Updating a client schedule
causes the discarding of all the event
records for every client connected to
this schedule and starts collecting
stats from scratch. The rationale is
that changing a schedule redefines the
rules, as though it were a new schedule,
and so the event records for the "old"
schedule are dismissed as no longer
relevant.
See also: DURation; DEFine ASSOCiation
UPDate SCRipt ADSMv3 server command to update a Server
Script. Syntax:
'UPDate SCRipt Script_Name
["Command_Line..." [Line=NNN]]
[DESCription=_____]'
The command line should be enclosed in
quotes, and can be up to 1200 chars.
The description length can be up to 255.
Note that, unfortunately, you cannot
specify that the source of the update is
a file containing script lines.
See also: DEFine SCRipt; Server Scripts
UPDate STGpool (disk) Server command to update a storage pool
definition. Syntax:
'UPDate STGpool PoolName
[DESCription="___"]
[ACCess=READWrite|READOnly|
UNAVailable]
[MAXSize=MaxFileSize]
[NEXTstgpool=PoolName]
[MIGDelay=Ndays]
[MIGContinue=Yes|No]
[HIghmig=PctVal] [LOwmig=PctVal]
[CAChe=Yes|No] [MIGPRocess=N]'
No wildcards allowed for PoolName: it
must be a unique storage pool name.
UPDate STGpool (tape) Server command to update a storage pool
definition. Syntax:
'UPDate STGpool PoolName
[DESCription="___"]
[ACCess=READWrite|READOnly|
UNAVailable]
[MAXSize=NOLimit|MaxFileSize]
[NEXTstgpool=PoolName]
[MIGDelay=Ndays]
[MIGContinue=Yes|No]
[HIghmig=PctVal] [LOwmig=PctVal]
[COLlocate=No|Yes|FIlespace]
[REClaim=N]
[MAXSCRatch=N] [REUsedelay=N]
[OVFLOcation=______]'
No wildcards allowed for PoolName: it
must be a unique storage pool name.
No choices in this command cause the
data in the storage pool to be "frozen":
objects will continue to expire per
prevailing policy values.
UPDate Volume Server command to change the access mode
for one or more volumes in random or
sequential access storage pools. Syntax:
'UPDate Volume VolName
[ACCess=READWrite|READOnly|
UNAVailable|DEStroyed|
OFfsite]
[LOcation="___"]
[WHERESTGpool=StgpoolName]
[WHEREDEVclass=DevclassName]
[WHEREACCess=READWrite,READOnly,
UNAVailable,DEStroyed,
OFfsite]
[WHERESTatus=EMPty,FILling,FULl,
OFfline,ONline,Pending]
[Preview=No|Yes]'
where VolName may be a tape volser in a
storage pool which does not use
Scratches, or may be an AIX logical
volume name for an AIX disk, in the form
"/dev/rLVNAME".
The change takes effect immediately: if,
for example, the volume is being used to
receive a storage pool migration, the
migration stops to switch to a scratch
tape. This can be a useful way to handle
multiple processes and/or sessions
waiting for the same output volume:
update the volume to READOnly and they
will both get separate scratch tapes and
then proceed in parallel. Once the
scratches are mounted you can change the
volume back to READWrite and not disturb
those tasks.
Changing an empty (scratch-destined)
"ACCess=OFfsite" copy storage pool tape
via UPDate Volume to READWrite or
UNAVailable causes it to be removed from
the copy pool storage pool and made
scratch.
Doing "ACCess=OFfsite" to a volume that
was the subject of a MOVe MEDia causes
the volume to disappear from Query MEDia
output: change it back to
ACCess=READOnly to again see it in
Query MEDia.
Note that a volume can belong to only
one storage pool.
Use 'DELete Volume' instead, to release
a volume from a storage pool.
No choices in this command cause the
data on the volume to be "frozen":
objects will continue to expire per
prevailing policy values.
Updating--> Leads the line of output from a Backup
operation because the attributes (meta
data) of the file have been found
changed while the data remains
unchanged, and thus TSM is updating the
server's saved information about the
file attributes as stored in the TSM
database...which is to say that the
attributes can be wholly stored in the
database. This departs from the method
used for other operating systems and
file systems where the nature of their
file attributes require backing up the
file afresh.
In Unix, operations such as chmod,
chown, chgrp, gunzip then gzip, and the
like will cause the file attributes to
be changed, and the file's ctime (inode
administrative change timestamp) to be
updated, while leaving the atime (access
time) and mtime (modification time)
stamps unchanged, indicating that file
content has not changed. Changing a
Unix ACL should instead be expected to
result in a file backup rather than an
update, because the amount of data is
more extensive.
A ramification of this Unix client
method is that because the old file
attributes are simply replaced in the
TSM database, a restoral will yield the
most current file attributes - which is
to say that you cannot restore the
former attributes of the file.
In Windows, the permissions are too much
to be stored in the TSM database and so
rather than an update operation, a file
backup should be expected.
See also: Directory-->; Expiring-->;
Normal File-->; Rebinding-->
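The ctime-versus-mtime behavior described above is easy to demonstrate outside TSM (GNU stat(1) syntax assumed; BSD stat differs):

```shell
f=$(mktemp)
m1=$(stat -c %Y "$f")      # mtime before the attribute change
sleep 1
chmod 600 "$f"             # attribute-only change: bumps ctime, not mtime
m2=$(stat -c %Y "$f")      # mtime after
[ "$m1" = "$m2" ] && echo "mtime unchanged: an Updating--> candidate"
rm -f "$f"
```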
UPGRADEDB See: dsmserv UPGRADEDB
UPPER SQL clause to force a character string
to be upper case. This is handy to
implement in a Select, in that storage
pool names need to be given in upper
case, and it's easy to forget that. So:
SELECT ...FROM VOLUMES WHERE
STGPOOL_NAME=UPPER('$1') AND...
URL Uniform Resource Locator: a web site
address. In ADSMv3 and TSM, a keyword in
REGister Node to specify the URL address
that is used in your Web browser to
administer the TSM client.
See: REGister Node; Set SERVERURL
USELARGebuffers Definition in the server options file,
introduced with ADSM 2.1.x.12 to allow
large files which are stored on the
server to use a larger buffer size to
help reduce CPU utilization on the
server. Syntax:
'USELARGebuffers [ Yes | No ]'.
Default: Yes
(Code 'USELARGebuffers No' to get around
the 2.1.5.12 server defect of not
restoring symbolic links.)
ADSMv3: Renamed to "LARGECOMmbuffers".
Obsoleted in TSM 5.3 because large
buffers are always used. (If present in
the file, no error message will be
issued, at least early in the
phase-out.)
USELARGETAPEBLOCK Definition in the server options file,
introduced with ADSM 2.1.x.15.
Obsoleted by ADSMv3.
This option is used to enable large tape
block support. Enabling this option
will cause larger tape blocks to be used
when writing data to tape.
NOTES: Once this option is enabled,
data written to tapes CAN NO LONGER BE
READ with the option set to NO or from
an ADSM server that does not support
this option. However, data from older
version of the ADSM server can be read
with the option enabled.
**CAUTION** Use of this option can
jeopardize ADSM database recoverability,
due to the tape incompatibility which
can be introduced for a recovery-time
server which does not have the same
option value.
USELARGETAPEBLOCK active? 'Query OPTion', look for
"UseLargeTapeBlock" being Yes or No.
User, count backup objects for day, Select count(*) from BACKUPS where -
at server NODE_NAME='UPPER_CASE_NAME' and -
FILESPACE_NAME='___' AND -
OWNER='____' AND -
date(BACKUP_DATE)='2000-01-14'
User, throttle TSM is a product with an "enterprise"
orientation, and does not provide
controls for governing individual users.
You can govern nodes and mount points,
but not the users who originate on the
nodes or use the mount points.
User exit ADSMv3: An external, customer-provided
program to which the *SM server passes
control for it to process each event log
record. The program must be of the
following type:
MVS: C, PL/I or Assembler program
Unix: C program
Windows NT: DLL program
(Sample programs are shipped with the
respective server.)
Specify in server options file, or use
command: 'BEGin EVentlogging USEREXIT'.
Warning: It is vital to appreciate that
the customer programming in the user
exit constitutes an extension of the
server. The server is rendering itself
completely vulnerable to whatever the
customer decides to do in the user exit.
The user exit program receives control
with *SM server file handles, and so it
is important to tread carefully to avoid
interfering with or even corrupting the
files being operated upon by the server
(tape volumes, database files, recovery
log files, trace files, accounting
files, etc.). In particular, DO NOT
spawn another process within the user
exit (as via system()) in that the
further process may inadvertently do
damage. All programming involved should
be strictly within the user exit itself.
(Ref: APARs IY03899 and IY03374.) And,
of course, a failure in the user exit
will cause the *SM server to fail with
it.
Ref: ADSM Version 3 Technical Guide
redbook; Admin Guide manual
See also: USEREXit
User interface See: Interface to ADSM
User Name Element of 'Query SEssion' report,
revealing the username on the client who
invoked the dsm or dsmc session. Note
that value is null until first command
is processed: for example, just entering
'dsmc' is not enough; you have to then
enter some command under the dsmc
session for the user name to appear.
Is empty if an HSM session.
USEREXit option (v.3) Server option to allow events to be sent
to a C-function named "adsmV3UserExit"
for processing. Be sure to enable
events to be saved (ENABLE EVENTLOGGING
USEREXIT ...) in addition to activating
the userexit receiver. Syntax:
USEREXit [YES | NO]
<C-compiled Module Name>
If YES is specified, event logging
begins automatically at server startup;
if NO is specified, event logging must
be started with the BEGIN EVENTLOGGING
command.
Code like:
USEREXit No
/usr/lpp/adsmserv/bin/ADSMexit
or
USEREXit Yes
/usr/lpp/adsmserv/bin/ADSMexit
See also: User exit
Users Client System Options file (dsm.sys)
option to authorize specific client
users to request services from an ADSM
server. If you don't code this option,
all client users can access the server;
if you do code it, only those specified
can get service from the server.
Default: all client system users can get
service from ADSM.
USEUNICODEFilenames Client option for Windows systems which
makes it possible to use characters
beyond the usual ASCII. Is needed for
backing up Macintosh files stored on an
NT server under AppleShare in
Services for Macintosh, due to the
extended character set used. (Tivoli's
stance is that this is not intended for
internationalization.)
Code: USEUNICODEFilenames Yes
To disable it: when using language AMENG,
the default value is YES, so you must
explicitly code USEUNICODEFilenames No
in DSM.OPT. When using a language other
than AMENG, the default is NO, so if the
option is absent from DSM.OPT no action
is needed; but if you have coded
USEUNICODEFilenames YES, you must
either change it to NO or remove it
from DSM.OPT.
Starting with the 4.2 client for Windows
NT-based operating systems,
USEUNICODEFilenames is no longer
relevant (it is tolerated but doesn't
do anything). If you have mixed
character-set file names, you should
migrate your non-unicode file systems to
unicode file systems (see chapter 1
in the client book), so that you can
support these files. This is
regardless of the TSM client LANGUAGE
setting.
See also: Unicode
USN Update Sequence Number, as pertains to
Windows.
/usr filling Can be caused if you start the ADSM
server via 'nohup dsmserv' rather than
invoking rc.adsmserv: the absence of the
"quiet" option to dsmserv causes console
messages (the whole Activity Log) to be
reflected in the nohup.out file!
/usr/adm/ras/ See: /var/adm/ras/
/usr/tivoli/tsm/server/bin/db.dsm The introductory Database space which is
planted by the AIX TSM install.
/usr/tivoli/tsm/server/bin/log.dsm The introductory Recovery Log space
which is planted by the AIX TSM install.
%Util (ADSMv2 server) See: Pct Util
Utility files Located in /usr/lpp/adsmserv/ezadsm/
V2archive (-V2archive) TSM 4.2 client option to force archiving
to operate as it did in ADSMv2: archive
only files, omitting directories
encountered along the way.
Performance implications: Can impair
Retrieve performance, as surrogate
directories have to be fabricated when
retrieving files in a set of
directories; lessens the amount of
Expiration work the server has to do.
Contrast with FILESOnly.
Ref: TSM 4.2 Technical Guide
See also: Restore Order
VALidate POlicyset Command to verify that a policy set is
complete and valid before you activate
it with 'ACTivate POlicyset'. It
examines the management class and copy
group definitions in the policy set and
reports on conditions that you need to
consider before you activate the policy
set. Syntax:
'VALidate POlicyset DomainName
PolicysetName'
There must be a default management class
defined for the Policy Set.
See also: ACTivate POlicyset
VALIdateprotocol REGister/UPDate Node keyword to specify
whether a cyclic redundancy check (CRC)
is performed to validate the data sent
between the client and server.
By default, no validation is done.
Choice Dataonly causes validation to be
performed only on file data that is sent
between the client and server. This does
not include the file metadata. This mode
impacts performance because additional
overhead is required to calculate and
compare CRC values between the client
and the server.
Choice All specifies that data
validation be performed on all client
file data, client file metadata, and TSM
server metadata that is sent between the
client and server. This mode impacts
performance as additional overhead is
required to calculate and compare CRC
values between the client and the
server.
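The CRC comparison idea can be illustrated with a small sketch. (Python is used purely for illustration; TSM's actual CRC algorithm and wire protocol are internal, so the use of zlib's CRC-32 here is only an assumption for demonstration.)

```python
import zlib

def crc_of(data: bytes) -> int:
    # Compute a CRC over a buffer. zlib's CRC-32 is used here only
    # to illustrate the concept; it is not necessarily TSM's algorithm.
    return zlib.crc32(data) & 0xFFFFFFFF

def transfer_ok(sent: bytes, received: bytes) -> bool:
    # The sender computes a CRC over the data; the receiver recomputes
    # it over what arrived. A mismatch reveals in-transit corruption -
    # at the cost of the extra computation the entry above warns about.
    return crc_of(sent) == crc_of(received)

payload = b"client file data"
assert transfer_ok(payload, payload)             # intact transfer
assert not transfer_ok(payload, b"clobbered!!")  # corruption detected
```

The performance cost noted above comes from both ends having to walk every byte of the data to compute and compare these values.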
/var and backups It is only an unfounded rumor that /var
is automatically excluded from Unix
backups. (Only /tmp is automatically
excluded.)
/var/adm/ras/ The AIX directory in which error and
device logging occurs.
See manual "IBM SCSI Tape Drive,
Medium Changer, and Library Device
Drivers" (GC35-0154), the "Device and
Volume Information Logging" chapter.
/var/adm/ras/Atape.rmt?.dump? Device information, logged in this file
to supplement system error logging.
Employ the 'tapelog' (q.v.) command to
view.
/var/adm/ras/atldd_atl.log Binary log produced by the atldd (3494)
device driver module (/etc/lmcpd)
through the day as it services requests,
to log nitty-gritty details at its
level.
How to control it: no doc provided by
IBM
How to list it: no doc provided by IBM
VARy ADSM server command to make a random
access volume (disk) available or
unavailable to the server for use as a
database, recovery log, or storage pool
volume. Syntax:
'VARy ONline|OFfline VolName'
See also: UPDate DRive
VAult DRM media state for volumes which
contain data and which are in an
(offsite) vault. Their next state should
be VAULTRetrieve.
See also: COUrier; COURIERRetrieve;
MOuntable; NOTMOuntable; VAULTRetrieve
Vault Retrieve Status (DRM), cannot change to Onsite Retrieve
Try 'MOVe DRMedia * WHERESTate=VAULTRetrieve' to put them
into COURIERRetrieve status, and
'MOVe DRMedia *
WHERESTate=COURIERRetrieve' to put them
back into scratch status.
Or the volumes could be DB Backups:
pursue 'DELete VOLHistory ...
Type=DBBackup';
or try deleting the Sequential Volume
History.
VAULTRetrieve DRM media state for volumes now empty of
data, which are offsite, and can be
retrieved. Their next state should be
COURIERRetrieve.
See also: COUrier; COURIERRetrieve;
MOuntable; NOTMOuntable; VAult
VCR data The Volume Control Region (sometimes
called Vital Cartridge Records) of a
tape cartridge, such as a 3590 tape.
This data is used to perform fast
locates to file positions on the
cartridge. Essentially, the VCR record
is in a reserved area located at the
beginning of the tape, before the label,
and records:
- Device block ID map (incl. end of
file / data marker(s));
- Media statistics (soft and hard I/O
errors, etc.);
- Format identification (128 (3590B) or
256 track (3590E))
When a 3590 tape cartridge is mounted
for OUTPUT, the VCR region must be in
the same recording format used by the
drive on which the tape is currently
mounted. Thus, when a tape is to be
rewritten in a format different from the
currently written format (128-track vs
256-track,) the VCR region of the tape
is rewritten when the first WRITE (at
beginning of the tape) is issued. This
activity results in the following
increases in processing time prior to
start of writing by the job: 42 seconds
when rewriting from 128-track to
256-track; 40 seconds when rewriting
from 256-track to 128-track.
Loss of this data causes the locate
performance for read or append
operations to become degraded. The VCR
data can be lost because of 3590
hardware problems, including unexpected
power-offs during the load or unload
process while Associated Write
Protection is set. (Can also be caused
by faulty microcode.) Subsequent locate
and space operations to the volume will
operate at low speed until new records
are written and the VCR is rebuilt.
The VCR data is rebuilt when the tape
is empty and is rewritten from its
beginning, which is what happens when a
tape is (re)labeled. Or you can use the
ASSISTVCRRECovery server option (active
by default). Or you can use the
'tapeutil'/'ntutil' command option
"Space to End-of-Data". Or you can check
the tape out of the library, back in as
private, and then Audit it with FIX=No.
Another way to handle the lost VCR
problem, per IBM and the tape vendor, is
to send the tape(s) back to the vendor
for replacement.
Call 1-800-IBM-SERV and request the
latest microcode for your device.
Msgs: ANR8820W
See also: ASSISTVCRRECovery; ANR8776W.
Ref: Manual "3590 Hardware Reference"
(see Volume Control Region in the index,
or search for "VCR" in the PDF image);
Redbook "IBM Magstar Tape Products
Family: A Practical Guide", 2.1.1.2
Predictive Failure Analysis
Verbose Client User Options file (dsm.opt)
option to specify that you want
processing information to appear as
tasks are performed. For example, will
assure that all output goes to the
terminal during a Backup or Restore.
But be aware that a lot of terminal I/O
comes at a cost.
Opposite of Quiet.
Default: is Verbose
Note that Verbose may not reveal the
file at play in a client failure: you
would then have to resort to a client
trace.
VERDeleted Backup Copy Group operand defining the
maximum number of backup versions kept
of deleted files: when the file is
deleted from the client and the next
Backup is run on the client, the number
of Inactive files will drop to this
number, by deleting the oldest Inactive
versions.
If you want to be able to restore old
Inactive versions, specify a large value
for this number (like 9999) and let
RETExtra control how much is stored.
See also: RETExtra; RETOnly; VERExists
VERDeleted, query 'Query COpygroup', look for
"Versions Data Deleted".
VERExists Backup Copy Group operand defining the
maximum number of backup versions - the
singular Active version and all Inactive
versions - that the server will keep:
the excess number will be deleted,
oldest first.
VERExists has its greatest value in
limiting backup versions where there are
more than one per day, where the
RETExtra can only control by days of
age.
For files still present on the client,
Inactive versions will be discarded by
either the RETExtra versions count or
the VERExists retention period -
whichever comes first.
Note that if you change the value for a
prevailing definition, there will be no
effect upon existing versions until
another backup is done.
See also: RETExtra; RETOnly; VERDeleted
VERExists, query 'Query COpygroup', look for
"Versions Data Exists".
Veritas *SM competitor. Users say:
Although Veritas backups are a lot
faster, due to the multiplexing of
files, it requires more management and
control to keep track of tapes and such.
Veritas is also old-style: it reportedly
requires Weekly Fulls + Incrementals.
Veritas Quick I/O A file system developed to combine the
performance advantages of raw volumes
with the maintenance advantages of a
journaled file system (the ability to
use standard Unix file system commands
to manage it). To use Quick I/O, the
base file system must be constructed
with a VERITAS file system format (VxFS
is in the correct format): that is,
Quick I/O is an extension to VxFS.
Its applicability in a TSM setting is
yet to be determined.
Presentation:
eval.veritas.com/downloads/pro/qiowp.pdf
Versatile Storage Server 1998 IBM product: Centralized, shared
disk storage solution to support
multiple Unix, Windows NT, and AS/400
servers.
Version (1) A three-part designation for an
instance of the API, consisting of the
version, release, and level. (2) The
maximum number of different backup
copies of files retained for files. The
following backup copy group attributes
define version criteria: Versions Data
Exists and Versions Data Deleted.
Version, client 'dsmc q sch', and see version number
on first header line.
Is displayed when a client command line
session is entered.
From a server session (dsmadmc) you can
do: 'Query STatus'.
Version of ADSM/TSM Is displayed when a client command line
session is entered.
From a server session (dsmadmc) you can
do: 'Query STatus'.
Versions Data Deleted Backup copy group attribute reflecting
the specification "VERDeleted" (q.v.).
Versions Data Exists Backup copy group attribute reflecting
the specification "VERExists" (q.v.).
Versions-based file expiration During Backup, the storing of a file
triggers server evaluation of the number
of Inactive versions of the file versus
the number of versions which retention
rules say should be kept. If the
incoming file version conceptually
pushes the oldest Inactive file out of
the set, that bumped version is marked
as expired, such that the next EXPIre
Inventory will actually delete it from
the *SM database.
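The bookkeeping described above can be sketched as follows (illustrative Python, not TSM internals; the function and variable names are invented for the example):

```python
def store_backup(versions, new_version, verexists):
    """Add a new backup version and return (kept, expired).
    'versions' is newest-first: versions[0] is the Active version,
    the rest are Inactive. If the total exceeds the VERExists
    limit, the oldest Inactive versions are bumped out - in real
    TSM they are marked expired, to be deleted by the next
    EXPIre Inventory."""
    versions = [new_version] + versions
    kept, expired = versions[:verexists], versions[verexists:]
    return kept, expired

# Three versions retained (VERExists=3); a fourth backup arrives:
kept, expired = store_backup(["v3", "v2", "v1"], "v4", verexists=3)
assert kept == ["v4", "v3", "v2"]
assert expired == ["v1"]   # oldest Inactive version bumped out
```

Note that this versions-based pruning happens in addition to the age-based pruning governed by RETExtra and RETOnly: whichever rule triggers first removes the version.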
Virtual Mount Point Essentially, the definition of a
subdirectory within a managed file
system so that TSM treats it as a
separate file system. For example, if
you have natural file system /a, all
backups of /a will include /a/b. When
you define /a/b as a virtual mount
point, /a/b ceases to be treated as part
of /a, and instead becomes an entity
unto itself, which needs to be backed up
separately: a backup of /a will no
longer back up what is in /a/b; indeed,
everything that had been backed up under
/a/b will Expire.
Referring specifically to the virtual
mount point is more efficient than
defining the file system via a Domain
option and then using an Include-exclude
file to exclude all files above that
subdirectory. The specified Virtual
Mount Point will show up in server
'Query FIlespace' and client 'dsmc Query
Filespace' as the Filespace Name: that
full virtual mount point name will
become the file space name, not just the
subdirectory.
See also: FOLlowsymbolic;
VIRTUALMountpoint
Version numbering The parts of the V.R.P.F version number
are as follows:
V Version number
R Release number
P PTF level
F Fixtest/patch level
(Sometimes called "VRML" - Version,
Release, Maintenance Level.)
Major TSM releases will have new version
and/or release numbers, i.e. "4.2",
"5.1", etc. - requiring new licensing
and $$$. The first set of codes for a
release will have '0' for the PTF and
fixtest/patch levels. Between releases,
scheduled maintenance will be issued in
the form of a PTF: e.g., 5.1.1.0 and
5.1.5.0 are PTFs for the 5.1 release.
Major releases and PTFs go through a
full testing process. PTFs are
available in the "maintenance"
subdirectory of the FTP site.
Between PTFs, fixtests are sporadically
made available to address high impact
problems found between PTFs that cannot
wait until the next formal PTF or
release. These usually undergo very
little regression testing. Such updates
are
available in the "patches" subdirectory
of the website.
In general, it is good practice to test
out any new software on noncritical
systems before rolling out to
production. This is especially true for
fixtests due to the limited testing that
they receive.
Ref: http://www.ibm.com/software/
sysmgmt/products/support/Tivoli_
Software_Maintenance_and_Release_
Strategy.html
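The V.R.P.F scheme lends itself to mechanical parsing, e.g. when a script must decide whether an installed level is a base release, a PTF, or a fixtest. A small sketch (illustrative only; the function names are invented):

```python
def parse_vrpf(s):
    """Split a V.R.P.F level string into its four parts."""
    v, r, p, f = (int(x) for x in s.split("."))
    return {"version": v, "release": r, "ptf": p, "fixtest": f}

def is_ptf(s):
    """Per the scheme above, a PTF level has a nonzero PTF field
    and a zero fixtest/patch field (e.g. 5.1.5.0)."""
    lvl = parse_vrpf(s)
    return lvl["ptf"] > 0 and lvl["fixtest"] == 0

assert parse_vrpf("5.1.5.0") == {"version": 5, "release": 1,
                                 "ptf": 5, "fixtest": 0}
assert is_ptf("5.1.1.0")        # scheduled maintenance PTF
assert not is_ptf("5.1.0.0")    # first code set of the 5.1 release
```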
Virtual Root User What you become if you use the -NODename
option to go at files belonging to
another node, and correctly provide
the password for that node.
Virtual tape Utilizing disk drives to mimic a tape
library, for backup and recovery
purposes, where the disk storage may be
an end in itself, or serve as a large
buffer to a tape library.
Virtual Tape Server IBM product integrated into an IBM 3494
Tape Library. The host system perceives
it as an IBM 3494 Tape Library with 32
virtual 3490E tape drives and up to
100,000 virtual 3490E cartridges. A
large disk array as a front-end serves
as both a buffer and a cache.
The target environment is mainframes,
where there are lots of "tape volumes",
so many of which are just partly full,
which is expensive on today's tape
cartridges. The objective is to fill
tape cartridges by mapping real volume
usage to virtual volumes. The concept
has no real applicability to a TSM
environment.
See also: Enhanced Virtual Tape Server
Virtual Technology Uses a buffer to intelligently manage
and store data in such a way that the
tape and disk media are completely
filled. This is in reaction to how
traditional storage leaves a large
amount of space empty within its media.
Virtual Volumes ADSMv3 server-to-server electronic
vaulting feature for storing data on
another server. Virtual volumes can be
any of the following:
* Database backups
* Storage pool backups
* Data that is backed up, archived, or
space managed from client nodes
* Client data migrated from storage
pools on the source server
* Any data that can be moved by EXPORT
and IMPORT commands
* DRM plan files
On your source server, the data is
stored in the primary storage pools. The
data is then copied to virtual volumes
in your server-to-server copy storage
pool, where they are accepted as archive
objects. Since these archive objects
belong to a node of type=server, all the
normal archive copy group parameters are
ignored, except the storage destination.
Use 'Query VOLHistory' to see them, and
manage as you would any volhistory
volume.
Ref: Admin Guide, chapter 13
Ref: Redbook: ADSM Server-to-Server
Implementation and Operation
(SG24-5244).
Virtual Volumes performance If AIX, consider using the TCPNodelay
client option to send small transactions
right away, before filling the TCP/IP
buffer.
Virtual Volumes retention In both cases (primary and copy) the
data retention is always managed by the
source server's normal management class
and copy group parameters. Thus the
remote, target server will retain the
virtual volumes *forever*, regardless of
the retention value, until told
otherwise by the source server. You may
explicitly specify "nolimit", for
certainty.
The retention of the source server's
data is managed by the source server.
The reclamation of the virtual volumes
is also managed by the source server.
The remote, target server retains the
virtual volumes until after they have
been reclaimed by the source server.
Then, run EXPIre Inventory on the target
server: this will show up as deleting
archives (because the virtual volumes
come in as archive data). Later, the
target server may reclaim the real
physical volume that previously stored
many virtual volumes.
There may be a grace retention period in
effect per the management class of the
target server, as perhaps for DR needs.
Sometimes, due to inter-server
communications problems, etc., there may
be a discrepancy between the two TSM
servers. You should occasionally
schedule "REConcile Volumes" on the
source server to syncronize their views
on which volumes need to be retained.
VIRTUALMountpoint Unix Client System Options file
(dsm.sys) option to define a virtual
mount point *within* a file system, to
Backup from there rather than the head
of the file system. The assigned name
then becomes its own Filespace.
Then you can refer to it on a DOMain
statement in the Client User Options
file (dsm.opt) or as a 'dsmc' command
operand ('dsmc i VMPoint').
'dsmc Query Filespace' will show a last
backup timestamp, same as for a real
file system.
VIRTUALMountpoint must be coded within a
SErvername stanza.
Note that the specified Virtual Mount
Point must always be present: if you
code it in dsm.sys "just in case" and
it is not actually present, anyone who
invokes 'dsmc' on the client will get
ANS4931S error messages.
Note that ADSMv3 supports coding the
virtual mount point name as a symbolic
environment variable, which thus allows
you great flexibility in scripts which
perform ADSM functions. For example:
VIRTUALMountpoint ${VMP1}
VIRTUALMountpoint ${VMP2}
VIRTUALMountpoint ${VMP3}
allows you to set environment variables
$VMP1,2,3 as needed in a script which
does incremental backups.
VIRTUALMountpoint can be used to make it
possible to use TSM with file system
types not supported/newer than your TSM
software: you simply code the file system
name as a virtual mount point and then
proceed as normal. For example, you have
a TSM 4.x client on Linux and want to
back up an EXT3 file system - a type
which came into being some time after
TSM 4.x. This technique works with the
command line client for Backups, but not
the GUI for Backups: but both the CLI
and GUI can be used for Restorals.
See also: Virtual Mount Point
VIRTUALMountpoint, Windows There is no VIRTUALMountpoint outside of
Unix. You can implement the equivalent
of a virtual mount point on Windows by
creating a local share and then backing
the share up as a separate filespace.
For example, on an Windows machine named
WMACH1, you could create virtual mount
points on directories c:\bigdir1 and
c:\bigdir2 as follows:
NET SHARE bigdir1=c:\bigdir1
NET SHARE bigdir2=c:\bigdir2
You could then backup these shares as
separate filespaces as follows:
dsmc incr \\WMACH1\bigdir1
\\WMACH1\bigdir2
The filespaces on the server (assuming
you are using client PTF 5 or later)
will be \\wmach1\bigdir1 and
\\wmach1\bigdir2.
One thing to keep in mind is that even
though these shares are local, Windows
still sees them as being remote, so you
would need to explicitly add them to
your domain statement (they wouldn't get
picked up by ALL-LOCAL). Also, the
client scheduler service would have to
run as a domain authorised account
because Windows thinks the shares are
domain resources so the Local System
account won't have access to them.
Note that the above examples use the UNC
name directly. You could just as easily
map these shares to a drive letter and
get the same result. Also note that
local shares are only supported on
Windows NT and later: they aren't
allowed on Win9x.
Another approach, suggested by Manuel
Panea-Doblado: Use the Windows 'subst'
command, like:
subst u: c:\bigdir1
subst v: c:\bigdir2
It has the advantage over 'NET SHARE'
that the substituted drives are not
taken by Windows to be remote, so it is
not necessary to include them in the TSM
domain (but the original C: has to be
excluded) and no domain authorised
account is needed. Also, they are easily
found in the TSM Backup or Restore
window as normal local drives.
VIRTUALMountpoint's, query 'dsmc Query Options' in ADSM or 'dsmc
show options' in TSM; see
"FileSpaceList". If no Virtual Mount
Points defined, then no FileSpaceList
entry in the report.
VIRTUALNodename, -VIRTUALNodename= ADSM v3.1+ Client User Options File
(dsm.opt) setting or command line
option. Similar to NODename, but
intended for use in gaining access to a
node's files from another node for
getting files from the server (Restore
or Retrieve processing, not for sending
them to the server (not for Backup or
Archive).
Results in the following prompt if not
also accompanied by -PASsword=____:
Please enter password for node "____":
(which, in Unix, is written to Stderr).
Until the password is entered, the
Session State will be IdleW.
Query SEssion will make it look like the
session is coming from the virtual node
rather than the actual one; but the
ANR0406I Activity Log session start
message will, while also showing the
session starting for the virtual node,
report a network (IP) address which is
that of the real node.
Can be used to gain backup/restore
authority based on the UNIX read/write
permissions, to create a userid with
more backup/restore authority than the
average user, but less authority than
the ROOT user. Accomplish by:
- Create all new *SM nodenames for
the UNIX processors, differing from
the standard hostname.
- Add the NODename option to the dsm.sys
file to specify the new nodename.
- Create a second dsm.opt file named
dsm.opt2 that contains the
VIRTUALNodename option.
- Modify the .profile files of specific
users to export a DSM_CONFIG variable
containing the path to the new
dsm.opt2 file.
- Give the dsm.opt2 permissions of
.rw-r-----, an owner of ROOT, and a
group of ADSM. By doing this, a user
must be granted access to the ADSM
group before he can access the
dsm.opt2 file, giving him higher
backup/restore authority.
Note that you would also need access to
that client's node password, which you
would specify via -PASsword=____.
Contrast with -FROMNode, which is used
to gain access to another user's files.
Windows considerations: Drive letters
(c:, d:, ...) get translated to their
UNC names based on the machinename of
the machine that is doing the backups
and restores. Say I have a machinename
'Backup' and I've done backups on it of
the c drive. Well the unc filespace
name is \\Backup\c$. Now if I go to a
server named 'Restore' and try to
restore and I use the syntax 'dsmc
restore c:\*.*' it will use the UNC
name \\Restore\c$.
See also: GUID; NODename
-VIRTUALNodename vs. -FROMNode -FROMNode and -FROMOwner are part of the
facility for users sharing server-stored
files, defined by filename via Set
Access. -VIRTUALNodename is for gaining
access to all of a node's server-stored
files while at another node where that
TSM client supports the same file system
type as that of the source node.
VMware VMware is a product that allows one or
more Windows and Linux operating system
instances to run as virtual machines on
the x86 platform.
Backup and Restore works fine with TSM.
You have to specify the
/vmfs/<filesystemname>/* directory
explicitly, because TSM doesn't
recognize it as a filesystem during its
scan of all-local filesystems.
Particularly see Redbooks Technote
"VMware Backup Considerations with IBM
Tivoli Storage Manager".
Performance: In some of their doc: "If
you are creating a new virtual machine
with a Windows Server 2003, Windows XP
or Windows 2000 guest operating system,
you can choose to install the vmxnet
networking driver for better networking
performance over the default vlance
networking driver."
See also: ____.dsk
VOLHISTORY TSM database table containing data for
the Volume History. Some fields:
BACKUP_OPERATION For a database backup,
is this volume's number within a set of
full + incrementals, where the number
is 0 for a full, and 1,2,... for the
incrementals which follow it. For other
uses, this field is empty.
BACKUP_SERIES For a database backup, is
the number for this set of full +
incrementals, since the server was
established. For other uses, this field
is empty.
TYPE One of: BACKUPFULL, BACKUPINCR,
STGDELETE, STGNEW, STGREUSE. (See
separate entries explaining each.)
VOLUME_SEQ Used for database backups,
where the amount of data being backed
up may exceed small tapes, so keeping
track of volume sequence number within
backup is necessary. For other uses,
this field is empty.
See also: VOLUMEHistory
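As an illustration of how these fields relate, the sketch below picks out the volumes needed for one database backup series. (The tuple layout and names are invented for the example, not the actual table schema.)

```python
def volumes_for_series(rows, series):
    """Given volume-history rows as (type, backup_series,
    backup_operation, volume_seq, volser) tuples, return the
    volsers needed to restore one full+incremental database
    backup series, ordered by operation (0 = full, 1,2,... =
    incrementals) and then by volume sequence within each."""
    picked = [r for r in rows
              if r[0] in ("BACKUPFULL", "BACKUPINCR") and r[1] == series]
    picked.sort(key=lambda r: (r[2], r[3]))
    return [r[4] for r in picked]

rows = [
    ("BACKUPFULL", 7, 0, 1, "A00001"),
    ("BACKUPFULL", 7, 0, 2, "A00002"),   # full spilled to a 2nd volume
    ("BACKUPINCR", 7, 1, 1, "A00003"),
    ("STGREUSE",   None, None, None, "B00001"),  # not a DB backup row
]
assert volumes_for_series(rows, 7) == ["A00001", "A00002", "A00003"]
```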
-VOLinformation Client option, as used with Archive and
Backup, to specify that only root-level
information is to go, and pertains only
when you operate upon non-root files -
that is, files in subdirectories.
Volume A storage medium.
Volumes are assigned to Storage Pools,
not Libraries. (It is Drives which are
assigned to Libraries.)
Volume, annotate Update the Volume Location field.
See: Volume Location
Volume, bad, handling If in primary storage pool: you should
be able to do a Restore Volume from the
copy pool. But rather than embarking
upon that (or if you have no copy pool)
attempt a MOVe Data, trying on as many
drives as you have, cleaning the drives
first, to try your best to get the data
over. This endeavor will serve to get
that volume's data onto a good volume
without having to mount probably a lot
of copy storage pool tapes. For what
then remains on the volume, do a Restore
Volume if possible. As a last recourse,
do 'AUDit Volume ... Fix=Yes'. You can
also do 'Query CONTent' on the tape to
list its files and check to see if they
are still on the client such that they
will back up again if the tape is
deleted.
If a copy storage pool volume: Perform
MOVe Data to get off as much data as
possible, as above. Considering the
remaining stuff on the tape hopeless, do
'DELete Volume' to dispose of the rest,
then perform 'BAckup STGpool' to
recreate the copy storage pool content.
The slackest approach to any bad volume
situation is to leave the volume
read-only until the data on it expires,
hoping no one needs its data, and then
check the tape out to eliminate it. You
need luck with this approach.
Note that there has been a server defect
which causes Query CONTent to say that
the tape is empty, but other operations
to believe that there is data on it. The
'AUDit Volume ... Fix=Yes' operation
will fix this. If it's an offsite
volume, try changing its access to
READOnly before the Audit: you should
not have to check the tape in, as TSM
will detect the "missing or incorrect
information", and updates the database
without calling for a mount; you can
then set the status back to OFFSITE,
rendering the tape either EMPTY or
PENDING, and let it come back whenever
you retrieve tapes from offsite.
See also: dsmserv AUDITDB
Volume, define 'DEFine Volume PoolName VolName
[ACCess=READWrite|READOnly|
UNAVailable|OFfsite]
[LOcation="___"]'
where VolName may be a tape volser in a
storage pool which does not use
Scratches, or may be an AIX logical
volume name for an AIX disk, in the form
"/dev/rLVNAME".
Note that a volume can belong to only
one storage pool.
Volume, delete from Library Manager database
A destroyed tape which the 3494 spits out will remain in the 3494 library's
database indefinitely, with a category
code x'FFFA'. To get rid of that
useless entry, use the FFFB
(Purge Volume) category code, as in:
'mtlib -l /dev/lmcp0 -vC -V VolName
-t FFFB'
See also: Purge Volume category
Volume, delete from storage pool 'DELete Volume...'
Volume, disk, fix problem with See: dsmserv AUDITDB
Volume, make available 'UPDate Volume VolName ACCess=READWrite'
Volume, make unavailable 'UPDate Volume VolName
ACCess=UNAVailable'
as when you want to keep other file
systems off a tape freshly used to
back up one file system, thus effecting
a kind of file system collocation.
After such backups are finished, you
would change the Access Mode back to
READWrite.
Volume, maximum size Per 2004/05/27 IBM TechNote 1170255
("Maximum capacity of an ITSM disk
volume"), the maximum size of an ITSM
disk volume is 8 Terabytes (TB).
See also: File size, maximum
Volume, node content 'Query CONtent VolName ...'
Volume, restore from Copy Pool 'RESTORE Volume' (q.v.)
Volume, update See: UPDate Volume
Volume attributes 3494 library codes, reported as
numerical values in the 2nd column of
volser report 'mtlib -l /dev/lmcp0 -qI',
or interpreted by the detailed report
'mtlib -l /dev/lmcp0 -vqI'.
Codes and meanings are defined in the
/usr/include/sys/mtlibio.h header:
80 Volume present in library, but not
accessible. (Tape probably stuck in
drive - Int Req situation.)
40 Volume present in library, is
currently mounted. (A scratch
with a 40 attribute usually
indicates a volume still mounted
after its reclamation.)
20 Eject pending.
10 Ejection underway.
08 Misplaced - missing.
04 Unreadable label or unlabeled.
02 Used during manual mode.
01 Manually ejected.
00 Volume present in library, not
mounted.
89 Volume not in library or misplaced
in library. Usually accompanied by
Category Code FFFA, saying that the
volume was Manually Ejected, as
when the volume is defective and is
out to be replaced.
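The attribute values behave as bit flags, so a value can be decoded by masking, as in the sketch below. (Illustrative only; the reading of 89 as the combination 0x80|0x08|0x01 is an assumption based on the codes listed above.)

```python
# Bit meanings from /usr/include/sys/mtlibio.h, per the entry above.
ATTR_BITS = {
    0x80: "present but not accessible",
    0x40: "currently mounted",
    0x20: "eject pending",
    0x10: "ejection underway",
    0x08: "misplaced - missing",
    0x04: "unreadable label or unlabeled",
    0x02: "used during manual mode",
    0x01: "manually ejected",
}

def decode_attr(attr):
    """Decode an mtlib volume-attribute byte into its set flags.
    0x00 means present in library and not mounted."""
    if attr == 0x00:
        return ["present, not mounted"]
    return [name for bit, name in ATTR_BITS.items() if attr & bit]

assert decode_attr(0x40) == ["currently mounted"]
# 0x89 = 0x80 | 0x08 | 0x01 (not in library / misplaced):
assert "misplaced - missing" in decode_attr(0x89)
assert "manually ejected" in decode_attr(0x89)
```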
Volume Categories 3494 Library Manager category codes
numbering from 0000 to FFFF hex for
logically grouping tape volumes.
Private and Scratch category codes are
established via 'DEFine LIBRary'.
0000 Null
0001-FEFF General programming use
(decimal 1-65279)
0013 The scratch category in MVS
for 3590 cartridges.
012C (Decimal 300) Default PRIVATE
category number for ADSM, for
both 3490s and 3590s.
012D (Decimal 301) Default SCRATCH
category number for ADSM in
managing 3490 tapes.
012E (Decimal 302) Default SCRATCH
category number for ADSM in
managing 3590 tapes (always 1
more than 3490 scratch
category value).
FF00-FFFE Reserved for hardware funcs.
FF00 INSERT
FF01-FF0F Reserved.
FF10 Convenience Eject.
FF11 Bulk Eject
FF12-FF18 Reserved.
FFF6 CE cartridge.
FFF9 Service Volume (CE use)
FFFA Manually Ejected. Tape was
previously in the inventory
is not found: the 3494 thinks
that someone reached in and
removed it (typical in
getting out a damaged tape).
FFFB-FFFD Reserved.
FFFB Purge Volume. Used to delete
an LM database entry, as for
a Manually Ejected (FFFA)
volume.
FFFD 3590 cleaner cartridge
FFFE 3490 or 3490E Cleaner Volume
FFFF Volser-specific.
See separate definitions of each
category by name.
Note that category codes are things
stored in the 3494 database: they are
not contained in the tape, and a tape is
not mounted to change the category code.
Ref: 3494 Operator Guide manual;
Magstar Tape Products Family redbook
Appendix A; Device Drivers manual, LMCP
chapter, Volume Categories.
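A small lookup sketch ties the decimal values reported by 'Query LIBRary' to the hex codes above (illustrative only; the table is abridged and the function name is invented):

```python
# Abridged from the category list above. Keys are hex; note that
# ADSM's defaults are usually quoted in decimal (300/301/302).
CATEGORIES = {
    0x0000: "Null",
    0x012C: "ADSM default PRIVATE (decimal 300)",
    0x012D: "ADSM default SCRATCH, 3490 (decimal 301)",
    0x012E: "ADSM default SCRATCH, 3590 (decimal 302)",
    0xFF00: "INSERT",
    0xFF10: "Convenience Eject",
    0xFF11: "Bulk Eject",
    0xFFFA: "Manually Ejected",
    0xFFFB: "Purge Volume",
    0xFFFF: "Volser-specific",
}

def describe_category(code):
    # 0x0001-0xFEFF is the general programming range; anything
    # else not listed is reserved for hardware functions.
    if 0x0001 <= code <= 0xFEFF and code not in CATEGORIES:
        return "general programming use"
    return CATEGORIES.get(code, "reserved/hardware")

assert describe_category(300) == "ADSM default PRIVATE (decimal 300)"
assert describe_category(0xFFFA) == "Manually Ejected"
assert describe_category(0x0042) == "general programming use"
```

Note the decimal/hex duality: 'Query LIBRary' reports 302 where mtlib and the 3494 documentation would show 012E.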
Volume Categories, query 'Query LIBRary' will reveal the decimal
category codes assigned to SCRATCH and
PRIVATE. Or you can use the AIX command
'mtlib -l /dev/lmcp0 -qV -V VolName'.
Volume class 3494 library volume class, reported as
numerical values in the 3rd column of
volser report 'mtlib -l /dev/lmcp0 -qI'.
Is "10" for 3590 tape drives.
'mtlib -l /dev/lmcp0 -vqI'.
Volume contents, list files 'Query CONtent VolName [COUnt=N]
[NODE=NodeName] [FIlespace=???]
[Type=ANY|Backup|Archive|
SPacemanaged]
[DAmaged=ANY|Yes|No]
[COPied=ANY|Yes|No]
[Format=Detailed]'
A positive COUnt value shows the first N
files on the volume; a negative COUnt
value shows the last N files on the
volume. The reported Segment Number
reveals whether the file spans volume
(where "1/1" says it's wholly contained
on the volume).
Use "F=D" to reveal the file sizes.
Performance: The more files on the tape,
the longer the query takes.
Volume History (file) Needed principally for DSMSERV to look
up DBBackup tapes when doing a database
recovery.
Volume history, file to contain, "VOLUMEHistory" definition in the
define server options file (dsmserv.opt).
Volume history, file to contain, 'Query OPTion', look for "VolumeHistory"
query
Volume history, query 'Query VOLHistory'
Volume history backup file missing You may find yourself in the tight spot
or no DBBACKUP entries of approaching a database restoral but
the volume history backup file (usually
                                      /var/adsmserv/volumehistory.backup) is
absent, not up to date, or lacking
DBBACKUP entries. What you can do is
"play fish": ask it for DBBackup info
about each volume in turn until you find
all the actual backup tapes, as in:
'DSMSERV DISPlay DBBackupvolumes
DEVclass=OURLIBR.DEVC_3590
VOLumenames=VolName[,VolName...]'
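The "play fish" probing above can be scripted as a loop. This is only a
sketch: "probe" is a stand-in for the real 'DSMSERV DISPlay DBBackupvolumes
DEVclass=... VOLumenames=...' invocation, assumed to succeed only for genuine
database backup volumes.

```shell
# Sketch: try each candidate volser in turn and report the ones the probe
# accepts as DBBackup volumes. "$probe" stands in for the real DSMSERV
# DISPlay DBBackupvolumes command; here it is any command taking one volser.
find_dbbackup_vols() {
  probe=$1; shift
  for vol in "$@"; do
    if "$probe" "$vol" >/dev/null 2>&1; then
      echo "$vol"
    fi
  done
}
```

With a stub probe that accepts only volser A00010, the function echoes just
that volser out of the candidate list.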
Volume history backup file name, VOLUMEHistory option in the server
define options file.
Volume history backup file name, 'Query OPTion', look for
query "VolumeHistory".
This is the file named on the server
options file VOLUMEHISTORY keyword and
is the target of the 'Backup VOLHistory'
command.
Volume in 3494, last usage date 'mtlib -l /dev/lmcp0 -qE -uFs
-V VolName'
Volume in drive, report 'Query Volume'
"Volume is queued for demount." Volume status from mtlib command query
of a volume, typically seen where the
tape is stuck in a tape drive.
Volume Location Element of Query Volume or Query
VOLHistory output, being an annotation
of where the volume is, as affected by
the MOVe MEDia command spec
OVFLOcation=____, and the UPDate Volume
spec LOcation=____.
Specify up to 255 chars, using quotes.
For TSM to fill in the location, the
Volume Type (q.v.) must be one of:
BACKUPFULL, BACKUPINCR, DBDUMP, EXPORT,
REMOTE, RPFILE.
For type REMOTE, the location is the
server name of the library client which
owns the volume.
For type RPFILE, the location is the
server name defined in the Prepare
command's DEVclass parameter.
See also: Volume Type
Volume names must be unique The TSM server represents a single
namespace, and all volumes within that
one server must be unique. Thus, even if
tape volumes are to be contained in
different libraries, they must have
unique volume names. Regardless of TSM
requirements, it is in general a Very
Bad Idea for volumes to not have unique
labels, given the propensity for
portable media in particular to end up
in places unintended, as the receiving
system could recognize the volume name
and blindly write over it, not
understanding that it doesn't belong in
the place it ended up.
Volume States 3494 Library Manager state for a given
tape volume. Possible states:
Inaccessible The accessor can't reach
the volume, perhaps stuck
in a tape drive, or
gripper had slippery
fingers removing from
cell so that it would not
come out - or perhaps was
even dropped. There may be
an Int Req on the 3494,
but maybe not. The tape
may actually be outside
the library: the operator
may have manually removed
it from a drive and left
it on a desk somewhere.
Misplaced Lost it in the box.
Mounted Currently on a drive or
being mounted.
Unreadable Vision system can't read
external label.
Manual Mode Volume was manually
handled.
Volume Status Output column in 'Query Volume' report.
Possible values:
Online For a disk, which is online.
Offline For a disk, which is offline.
Empty The volume is, um, empty.
Pending The volume is empty (all its
files have been removed) but
the REUsedelay is still
ticking.
Filling The volume is being written to
as needed, and will continue to
be until it is full.
Full Reflects a volume which is
either full now, or was and is
now no longer written to as
its Pct Util drops toward the
reclamation level. (Full tapes
are always decreasing in
content.)
Note: Does not specially report a volume
which is marked "DEStroyed": it will
show up in the report as Filling or the
like.
Volume Type                           Element of Query VOLHistory output,
                                      identifying the kind of volume the
                                      history record describes.
                                      Standard TSM values:
BACKUPFULL Full database backup volume.
BACKUPINCR Incremental database backup
volume.
BACKUPSET Client backup set volume.
DBDUMP Online database dump.
DBSNAPSHOT Snapshot db backup volume.
EXPORT A volume from an Export.
REMOTE The volume is owned by a
library client rather than
by this TSM server, where
the owner is identified in
the Volume Location field.
RPFILE A DRM Recovery Plan File
volume, created assuming
full and incremental
database backups.
RPFSnapshot Recovery plan file object
volume created assuming
snapshot database backups.
STGDELETE Deleted sequential access
storage pool volume.
STGNEW Added sequential access
storage pool volume.
STGREUSE Reused sequential access
storage pool volume.
See also: Volume Location
Volume usage, by node 'SHow VOLUMEUSAGE NodeName'
...or...
SELECT NODE_NAME,VOLUME_NAME FROM -
VOLUME_USAGE WHERE -
NODE_NAME='UPPER_CASE_NAME'
Volume utilization 'Query Volume'
VOLUMEHistory Server option specifying the name of a
file that should automatically be
updated when sequential volume history
information is changed in the server. By
coding this option you do not have to
perform 'BAckup VOLHistory' commands, in
that the server does this automatically.
D/R: This sequential file is essential
to TSM database recovery, for that task
to identify the BACKUPFULL and
BACKUPINCR Volume Types to be used in
the recovery. The information obviously
needs to be current. Sending a copy of
that file offsite via traditional D/R
means (i.e., daily) is rather
ineffective, in that the file changes so
                                      frequently. (Consider also that DB
backups don't always occur on a
schedule: if you have DBBackuptrigger in
effect, a backup could occur at any
time.) It would make more sense to do
something like FTP it to a relatively
remote site system by a local program
which detects when the file has been
changed, or copy it to an AFS file
system served remotely, or even copy it
to a drive in an adjoining fireproof
enclosure. If using TSM DR, note that
the file is part of the DRplan file.
Default: none
Ref: Installing the Server... Appendix A
"Maintaining VOlume History Backup
Files"
See: DELete VOLHistory; Query VOLHistory
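The copy-offsite-on-change idea above can be sketched as a small script run
periodically (e.g., from cron). The file names are placeholders, and the
local 'cp' stands in for whatever transport (FTP, scp, AFS) your site uses:

```shell
# Minimal sketch: propagate the volume history file to a safe place whenever
# its content changes. Paths and the copy method are assumptions; in real
# life the destination would be a remote system.
sync_volhist() {
  src=$1 dst=$2
  if [ ! -f "$dst" ] || ! cmp -s "$src" "$dst"; then
    cp "$src" "$dst"        # stand-in for scp/ftp to the offsite system
    echo "copied"
  else
    echo "unchanged"
  fi
}
```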
volumehistory.backup See: Volume history backup file name...
VOLUMES SQL: Storage pool volumes table.
Columns:
VOLUME_NAME Volume name
STGPOOL_NAME Storage Pool name
DEVCLASS_NAME Device Class name
EST_CAPACITY_MB Estimated capacity (MB)
PCT_UTILIZED Percent utilization
STATUS Volume status
ACCESS Access mode
PCT_RECLAIM Percent reclaimable
space
SCRATCH Whether volume is
assigned to scratch pool
ERROR_STATE If in error state.
NUM_SIDES Number of writable sides
TIMES_MOUNTED Number of times mounted
WRITE_PASS Write pass number
LAST_WRITE_DATE Date last written
LAST_READ_DATE Date last read
PENDING_DATE If Pending, when it
became so (timestamp)
WRITE_ERRORS Number of write errors
(reset when leaves
stgpool, returns to
scratch)
READ_ERRORS Number of read errors
(reset when leaves
stgpool, returns to
scratch)
LOCATION Text field for noting
where volume is
CHG_TIME When this volume record
last updated by admin.
(YYYY-MM-DD
HH:MM:SS.000000)
CHG_ADMIN Identity of that admin.
Volumes, last write date SELECT volumes.volume_name, -
volumes.last_write_date FROM -
                                      STGPOOLS,VOLUMES WHERE -
                                      STGPOOLS.STGPOOL_NAME='______' AND -
                                      VOLUMES.STGPOOL_NAME=STGPOOLS.STGPOOL_NAME
Alternately:
SELECT VOLUME_NAME, LAST_WRITE_DATE -
FROM VOLUMES WHERE STGPOOL_NAME IN -
('BACKUPSTK1','BACKUPSTK2') ORDER BY -
LAST_WRITE_DATE
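When run from a script via dsmadmc's -COMMAdelimited option, output like the
above is easy to post-process with ordinary Unix tools. The sample data here
is made up for illustration; the sort itself is standard:

```shell
# Post-process '-COMMAdelimited' SELECT output (volser,last_write_date) with
# plain Unix tools. The lines below are a fabricated stand-in for dsmadmc
# output; ISO-style timestamps sort correctly as text.
sample='A00012,2004-11-03 22:10:41.000000
A00007,2004-12-01 03:55:09.000000
A00003,2004-10-17 11:02:33.000000'

echo "$sample" | sort -t, -k2        # oldest write date first
```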
Volumes, list 'Query LIBVolume' will display all the
volumes in a library, whether they be in
Scratch state or be assigned to a
storage pool.
'Query Volume' is used to display
volumes which are in storage pools.
'Query VOLHistory' reports a
chronological history of volume usage.
Volumes, list by Pct Util SELECT * FROM VOLUMES -
ORDER BY PCT_UTILIZED
(By virtue of saying that you want
volumes listed by Pct Util, you are
implicitly saying that you want to
report volumes that are assigned to
storage pools, not necessarily all
volumes in a library.)
Volumes, number of SELECT COUNT(VOLUME_NAME) as -
"Number of volumes" FROM VOLUMES
Volumes for restoral, determine See: Restoral preview
Volumes in library, list Use AIX command:
'mtlib -l /dev/lmcp0 -vqI'
for fully-labeled information, or just
'mtlib -l /dev/lmcp0 -qI'
for unlabeled data fields: volser,
category code, volume attribute, volume
class (type of tape drive; equates to
device class), volume type.
                                      (Does not include CE tape or cleaning
tapes)
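The unlabeled 'mtlib -qI' fields described above lend themselves to awk
filtering, as when hunting for all volsers in a given category. The sample
lines are fabricated to match the field order described (volser, category,
attribute, class, type):

```shell
# Pick out the volsers in a given category code from 'mtlib -l /dev/lmcp0
# -qI' style output read on stdin. Sample input is fabricated; field order
# assumed to be: volser, category, attribute, class, type.
vols_in_category() {
  awk -v c="$1" '$2 == c { print $1 }'
}

printf '%s\n' \
  'A00001 012E 00 10 00' \
  'A00002 012C 00 10 00' \
  'A00003 012E 00 10 00' | vols_in_category 012E
```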
Volumes in storage pool, define 'DEFine Volume PoolName VolName
[ACCess=READWrite|READOnly|
UNAVailable|OFfsite]
[LOcation="___"]'
where VolName may be a tape volser in a
storage pool which does not use
Scratches, or may be an AIX logical
volume name for an AIX disk, in the form
"/dev/rLVNAME".
Note that a volume can belong to only
one storage pool.
Volumes in storage pool, query 'Query Volume STGpool=PoolName'
Volumes in use for a session 'Query SEssion [SessionNumber]
Format=Detailed'
Volumes not in a storage pool, list SELECT * FROM VOLUMES WHERE -
ACCESS <> 'READWRITE' AND -
ACCESS <> 'OFFSITE'
Volumes not Read-Write or Offsite SELECT * FROM VOLUMES WHERE -
ACCESS <> 'READWRITE' AND -
ACCESS <> 'OFFSITE'
Volumes used last night Can be determined from one of:
- Activity Log mount messages (ANR8337I)
which you can search for via Query
ACTlog
- SELECT * FROM SUMMARY WHERE
ACTIVITY='TAPE MOUNT' AND START_TIME>
(CURRENT_TIMESTAMP - (12 hours))
- SELECT * FROM VOLUMES WHERE
LAST_WRITE_DATE>
(CURRENT_TIMESTAMP - (12 hours))
- Inspect storage pool volumes for
"Date Last Written".
Volumes used by server Do 'Query Volume' if the ADSM server
is up. If down, you can find that
information in the file specified on
the "VOLUMEHistory" definition in the
server options file (dsmserv.opt).
VOLUMEUSAGE TSM database table, with columns:
NODE_NAME, COPY_TYPE, FILESPACE_NAME,
STGPOOL_NAME, VOLUME_NAME
Includes primary *and* copy storage pool
volumes (unlike SHow VOLUMEUSAGE cmd).
Expect a report on this table to be
prolonged and slow!
See also: SHow VOLUMEUSAGE
VRML See: Version numbering
vscsi The /dev/vscsiN special files which
provide device driver access to the
SCSI devices on the SCSI/2
Differential Fast/Wide adapters which
are identified by "ascsiN" names.
An adapter which supports both internal
and external SCSI chains will have two
vscsi entries: vscsi0 for the internal
chain, and vscsi1 for the external
chain.
VSS Windows Volume Shadowcopy Service.
A Volume Shadow Copy of a storage volume
is a point-in-time copy of the original
entity. The Volume Shadow Copy is
typically used by a backup application
                                      so that it can back up files that are
made to appear static, even though they
are really changing.
TSM supports VSS as of v5.2, on Windows
Server 2003. TSM uses VSS to back up all
Windows Server 2003 system state
components as a single object, to
provide a consistent point-in-time
snapshot of the system state. System
services components can be backed up
individually.
VTS Virtual Tape Server. Beware using this
with ADSM because of its large amount of
recalls for logical volumes back to the
volume cache. Any application that
writes single-file tape images that fill
the tape volume is non-optimal for VTS.
The whole point of a VTS is to accept
small files that were originally
directed to tape, store them on disk as
"virtual" tape images, then let the
virtual tape images get migrated off and
stacked together on a large real tape.
When you want to access something on a
"Virtual" tape volume again, the VTS has
to stage the data from the tape back to
the disk before you can use it. If you
are talking small application files, the
VTS works wonderfully. But because TSM
(and any similar application) writes ONE
large file that fills the "virtual"
tape, the WHOLE TSM "volume" gets
written to disk, then staged out to
tape. So far so good. But when you
want to do a restore of a particular
file, the WHOLE volume gets staged back
in, not just the piece you want. So it's
a performance issue.
Ref: Several redbooks (search on "VTS"
and "virtual tape")
VXA Unique tape storage technology developed
by Ecrix (became part of Exabyte). Reads
and writes data in packets. Operates at
variable speeds, so can match the data
transfer rate of the host, and doesn't
have to stop and wait if data is
incoming at a slower rate, reducing wear
on drives and media. The heads can read
data from any physical location on the
tape, without having to follow tracks
from beginning to end.
VxFS Veritas File System
Is the native file system type for HP-UX
10.x and up. Note that there is also an
installable VxFS from Veritas, as may
sometimes be deployed on Solaris
systems. (And Solaris HSM needs to be
implemented on VxFS.) *SM supports the
native version. As of TSM 5.2, VxFS is
supported on AIX.
Waiting for mount of input volume... Status value in Query PRocess. If it
Waiting for mount of output volume... remains that way for an undue amount of
time, the drive may be undergoing
automatic cleaning (the AIX Error Log
would have a TAPE_DRIVE_CLEANING
record). Or the drive may be having a
mechanical problem (e.g., LTO "mount
ready" sensor). Or there may a problem
getting at the volume, in which case a
'Query REQuest' may show the reason,
such as "...CHECKIN LIBVOLUME
required". Also could be a bad tape, as
in subsequent "ANR8359E Media fault"
message.
Waybackups My term for backups which go way back.
Sites reasonably prefer to call recent
backups "backups", as contemporary,
pertinent data - data you would want to
restore to get current operations going
again. Another term for the current data
may be "Business Backup". Data from six
months ago is obviously much older than
you'd want to restore for current
operations: some might refer to such old
data as archival - but that conflicts
with the TSM Archive designation.
So "waybackups" is a good name for quite
old backups.
WCI Web client interface; as opposed to the
CLI or GUI.
WDfM Withdrawal from Marketing. IBM term
for product obsolescence. Their words:
"During the life of a product, initial
versions and releases may be replaced
by subsequent versions and releases
that will be delivered to customers
placing new orders, as well as existing
customers who are covered by first-year
support, or have acquired
subsequent-year support for the
product. When all of the functions of
a product have become obsolete, or have
been absorbed by one or more other
products, the product will be
withdrawn. This means the deletion of
the product's Program Identification
Number and its associated Feature
Numbers from sales manuals and price
lists, so the product can no longer be
ordered. Typically, an announcement of
product withdrawal will precede the
effective date, or last-order date, by
90 days."
See also: EOS
WDSF (WDSF/VM) Workstation DataSave Facility, the
predecessor product to ADSM. WDSF
consisted of 2 parts: a host server,
which ran on a VM mainframe, and client
software that ran on a Macintosh, IBM PC
or compatible, or Unix workstation.
Files were backed up over the network
and saved in a large disk pool on the
mainframe. When the disk pool got full,
older files were moved off to magnetic
tape. The data stored on the mainframe
was keyed by machine type (Macintosh,
DOS, OS/2, etc.), userid (machine name),
and disk volume name. Data backed up
from one machine type could be restored
to another machine type. Data backed up
by a particular userid was inaccessible
by another user unless specific
privileges have been granted by the
owner of the data.
WDSF utilized protocols which could be
more generally applied, as with the
OS/390 LAN Server.
When ADSM came along, it supported WDSF
clients.
Related to MVS DFDSM and Workstation LAN
File Services/VM (WLFS/VM or WLFS)
Web access to TSM It is very important to understand that
with any web-based access to the TSM
server, THERE IS NO SESSION!! Web
applications, by definition, are
"stateless": they contact the web server
only when they seek service, and do not
conduct a continuous session. (That is a
huge weakness in web applications,
particularly in web mail apps using
IMAP, where IMAP is intended to be a
session service, and so reinvoking it
every time you need something results in
greatly increased overhead.) If you do a
Query SEssion in the server, you will
not see web-based sessions unless you
happen to instantaneously catch one of
the contacts occurring. But see
"Web Client, phantom sessions" for a
cause of persistent sessions.
Web Admin (webadmin) Administration-via-web client.
Install/upgrade nuances: This may be a
separate component (in Linux TSM, et al)
so you may need to additionally attend
to it when installing or upgrading the
TSM B/A client.
You need to have these lines in your
server's dsmserv.opt file:
COMMMethod TCPIP
COMMMethod HTTP
HTTPPort 1580
(note that the httpport 1580 is the
default). You also need to run the
command DSMSERV RUNFILE DSMSERV.IDL to
initialize web definitions (web-based
operations, graphics, and online help)
in your TSM database for web admin
access. See the Quick Start and/or Admin
Ref manual for details.
After prep, restart the *SM server, then
point your web browser to:
http://server:1580
where "server" is the network address of
your *SM server.
OBSOLETE in 5.3: As of TSM 5.3, the Web
Admin is gone, replaced by ISC/AC.
See: Administration Center; ISC
Web Admin performance issues TSM 5.2 added the ability to do DNS
lookups in the Web Administrator (C
gethostbyname() calls). In some sites
this may result in degraded performance.
You can update your TSM dsmserv.opt file
to include the (undocumented) server
option : DNSLOOKUP NO
(and then restart your TSM server).
See also: DNSLOOKUP
Web Authentication timeout *SM server: 'SET WEBauthtimeouts value'
Default: 10 (minutes)
0 means to never time out.
Or in the Web page under
Object View/Server/Server Status chose
"option", and therein select
"SET WEB AUTHENTICATION TIME".
Web Client (WebClient) ADSMv3+ facility for performing client
actions on your client system via a web
browser. (The session must point at the
client system, not the TSM server.)
The web GUI uses Java heavily, so have
all the java options turned on in your
web browser. Specifically it requires a
Java 2 plug-in. (Note: Once the plug-in
is downloaded and installed there are
likely some older Java applets that
don't work with the plug-in: case in
point is the command line applet in the
Web Admin; so to switch between the two
you have to alter the browser settings
                                      to de-select the Java 2 plug-in when
you want to run an applet that does not
like Java 2.)
With the web client there are two
services: the client acceptor and the
remote client agent. The only one that
you should start is the client acceptor.
The client acceptor will then start the
remote client agent when it is needed.
If you manually started the remote
client agent or restarted the client
acceptor and the remote client agent is
still up, you must stop both services
and then just start the client
acceptor.
PASSWORDAccess Generate is required
for use.
Cannot be used to perform restores
across nodes.
As of 1998 the WebClient automatically
determines the locale of the machine on
which it is run and displays the
interface in that locale. There is no
option in the WebClient to override this
behavior. Under Windows NT you could
change the regional settings of the
operating system to an English locale.
As of TSM 3.7, the Web Client is
hereafter known as:
Enterprise Management Agent.
Beginning with TSM 4.1 and the use of
Microsoft Installer, the Web Client is
not automatically configured at package
installation time: configure via
dsmcutil or run the setup wizards from
the Backup/Archive GUI.
See also: dsmwebcl.log
Web Client, administrative Introduced in ADSM Version 3.1.2 .
Authentication and security is assured
via SSL (Secure Sockets Layer) and
Certificates.
Runs on server port 1580, by default,
alterable via the HTTPPort option.
Requires a modern web browser, with
frames and Java enabled. Do "View Source"
                                      and at the bottom there is usually text
                                      (intended to intercept a frames-off
                                      session) which summarizes requirements.
Warning: The admin web client has a
history of being behind, relative to the
facilities available in the server. This
can cause Define Devclass and like
operations to fail, for new device
types, though the Define works perfectly
in the CLI. Suffice to say that the web
interfaces to TSM are unsatisfactory.
Ref: ADSMv3 Technical Guide
Web Client, command line won't appear Refer to the manual about co-requisite
software: Java is needed for this to
work, and with the MS-Sun rift, Java is
not included with IE. You need to
download the Java Runtime Environment
from the Sun site, or msjavx86.exe from
Microsoft Site.
Web Client, phantom sessions When a user just closes the browser
window, and does not log out, this can
result in phantom TSM sessions left
around, like:
1,949 HTTP Run 0 S 0 0
Admin WebBrowser
The problem is most often seen where the
client is using Internet Explorer 5.0.
They can be cancelled.
Phantom sessions can also result from
non-*SM network programs which contact
this *SM port, such as security program
which perform port scanning. These may
result in a "ghost" session which cannot
be cancelled.
Web interfaces, in general Web interfaces are just a universal
convenience, not an equivalent to
                                      conventional host-based GUI programs.
Web interfaces are not sophisticated,
are not full-featured, and are not
high-performance. Worst of all is the
lack of web standards, resulting in
                                      operability too often being hit-or-miss.
The less you expect of them, the better
off you'll be.
WEBPorts TSM 4.1 client option to allow
specifying the TCP/IP ports needed by
the TSM Web Client, as when using a
firewall. This enables the use of the
Web client outside a firewall by
specifying the TCP/IP port number used
by the TSM Client Acceptor daemon and
the TSM Remote Web client agent service
for communications with the Web GUI.
Syntax: WEBPorts Cadport Agentport
Cadport Specifies the required TSM
Client Acceptor daemon port number. If
a value is not specified, the default,
zero (0), causes TCP/IP to randomly
assign a free port number.
Agentport Specifies the required TSM
Web client agent service port number.
If a value is not specified, the
default, zero (0), causes TCP/IP to
randomly assign a free port number.
See also: Firewall
WEBshell ADSM client interface based on Web
methods. Is part of the standard
installation the ADSM client. Must
reside on the node whose files are to
be managed. The account you specify
with mkwspswd must match the account
on AIX.
As of 1997/03 does not support the
restoration of inactive files.
Weekday in Select See: DAYNAME
Weekdays schedule, change the days Customers sometimes ask if they can
redefine the days which constitute
"weekdays" in client schedules, which
the product defines as Monday - Friday
(DAYofweek=WEEKDay), as in perhaps being
Sunday - Friday. The answer is no.
You have to work around that, as in
defining two schedules: WEEKDay and
SUnday.
Wildcard characters Special characters which, when used in
a server command, can be used to operate
upon multiple objects or seek objects
when you know part of their name...
* An asterisk matches zero or more
characters;
? A question mark matches exactly
one character;
% A percent sign matches exactly one
character (same as question mark).
... Ellipsis, matches zero or more
directories.
[] Brackets enclose a character class
specification of individual
characters and/or an "a-z" range,
to match any one character in the
spec.
In a Unix environment, at least, you
should then either put the whole object
specification in quotes, or put a
backslash (\) before each wildcard
character, to keep the shell from
expanding the wildcards. Allowing the
TSM client to expand the wildcards
itself gives the client a better
opportunity to govern order of
processing, which may reduce the time
required for the operation, as in
dsmrecall operations. Otherwise, if the
shell were allowed to expand the
wildcards and thus pass the TSM client a
list of filenames, the client would deem
that it is being told to operate on the
sequence of files in the order given.
Ref: Client manual, "Including and
excluding groups of files"
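The quoting point above is easy to demonstrate in any Unix shell. With files
a.txt and b.txt present, the unquoted pattern is expanded by the shell into a
file list, while the quoted form reaches the command (here, echo standing in
for the TSM client) intact:

```shell
# Demonstrate shell wildcard expansion vs. quoting, with 'echo' standing in
# for the TSM client command. Runs in a scratch directory.
dir=$(mktemp -d); cd "$dir" || exit 1
touch a.txt b.txt

echo *.txt       # shell expands: the command sees "a.txt b.txt"
echo '*.txt'     # quoted: the command sees the literal pattern "*.txt"
```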
Win31/DOS client Available in ADSM v.2, but not v.3.
Windows, copy files w/o date changes Download robocopy from Microsoft, or use
xcopy from a win2003 or XP machine.
Windows 2000 restore, replacing files In Windows 2000, TSM supports "replace
on boot" files; So if you choose
"replace even if readonly" as a client
option, TSM will restore those locked
files. Windows NT will not do this:
that's why when you are restoring WinNT
you put Windows in a different
directory, but with Win2K you don't.
Windows 2003 backup method TSM 5.2's method of backing up Windows
2003 systems departs from prior
approaches: For Windows 2003, TSM 5.2
uses the Microsoft VSS (Volume
Shadowcopy Service) to back up system
state and system services, versus the
"legacy" methods used in TSM 5.1 and
earlier to back up what we called the
"system object". In addition, the
Windows 2003 system state/service
backups use a different transaction
protocol that doesn't pin the TSM server
Recovery Log for extensive periods of
time, as might the "System Object"
backup method. This support required
changes on the server side as well, and
thus the co-requirement for a 5.2
server.
Shadow copy data is saved in a folder
called "System volume information",
which is a hidden system directory.
You can use the 5.1.6.x client and
system object backup method for backing
up the Windows 2003 to a TSM 5.1 server.
Windows Active Directory (AD) restore You need to do AD restores in AD repair
mode.
Windows Active Directory, restore You must use an authoritative restore to
individual objects restore individual objects. This is
done through MS ntdsutil.
Windows Active Directory support Introduced primarily to fulfill
requirements for Microsoft Windows
Certification of TSM.
Windows as a TSM server? Historically, less desirable than AIX...
Having to do a Chkdsk on Windows,
particularly on a large disk, can be
very painful. Windows I/O throughput is
historically substandard compared to
AIX. And you always have the perpetual
Windows security problems.
Windows client schedule, start On Win9x systems, you start the
scheduler by issuing a 'dsmc sched'
command at a DOS prompt.
Windows client user                   By default, TSM services (including the
                                      Jbb service) always run under the local
                                      system account.
Interactive backup processes run under
the currently logged in account.
Windows Clustering advice When running Windows 2K Clustering, and
you want to launch the second instance
of TSM (Server2) on the second node of
the cluster, make sure you specify the
second instance. When starting the
server through "DSMSERV", it looks like
it will try and launch the first
instance it comes across in the
registry, which in this case is instance
#1 (Server1). The disk resources for
Server1 will not be seen by the second
node in the cluster, so won't be able to
see the dsmserv.dsk. A solution is to
run "DSMSERV -k Server2" to specify the
second instance of TSM in the registry.
Windows file names See: File names as stored in server
Windows Handles See IBM site Solution swg21112140.
As pointed out by one customer, the
ramifications of the numerous Handles is
memory loss: each event/handle requires
64 bytes of non-paged pool memory,
hidden in the kernel-mode usage, rather
than being visible in the non-paged pool
memory of the user-mode DSMSVC.EXE
process. (For anyone familiar with
POOLMON.EXE, try enable pool tagging and
monitoring the usage against the Even
pool tag as you increase the BUFPOOLSIZE
parameter in TSM.)
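At 64 bytes apiece, the non-paged pool cost of a given handle count is simple
arithmetic; a rough sketch, for illustration only:

```shell
# Rough non-paged-pool cost of N event handles at 64 bytes each, per the
# figure quoted above, expressed in KB. Illustration only.
handle_pool_kb() { echo $(( $1 * 64 / 1024 )); }

handle_pool_kb 16384     # 16K handles -> 1024 KB of non-paged pool
```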
Windows include/exclude list, standard The product supplies a minimum
recommended include-exclude list, in the
dsm.smp file, located in the config
folder in the install directory.
Ref: Windows B/A manual,
"Excluding system files"
Windows NT, exclude all of a drive exclude.dir d:\
or the combo:
                                      exclude d:\...\*
                                      exclude.dir d:\...\*
Windows NT, HSM for See: HSM, for Windows NT
Windows NT, install considerations    The correct method to install the service is
(installing scheduler) to run the following command in a DOS
window after running the SETUP.EXE
program and after you have customized
your DSM.OPT file:
dsmcutil install /name:"ADSM Central
Scheduler Service" /node:NODENAME
/password:PASSWORD /autostart:yes.
After installing ADSM (or any other
service) it's imperative that the
machine be re-booted. Then check under
SERVICES to make certain those items are
in STARTED and AUTOMATIC modes. Also, if
you haven't already done so, make sure
you "named" the individual drives on the
client machine, i.e., MACHINE1_C for
the c:\ drive, etc.
Ref: text file in the baclient
directory, named dsmcutil.txt.
Windows NT, restore from Windows95? Cross client restore is supported
                                      between NT and Windows 95.
Windows NT, scheduler, running Running the scheduler via an NT service
is the recommended method for Windows
NT. However, there is nothing preventing
you from running DSMC SCHEDULE from the
command line if you wish. Remember,
though, if you use this latter method,
the scheduler will terminate if you log
off the machine.
Windows NT, skip access lists in See: SKIPNTPermissions
backup
Windows NT, workstation vs. server From the MS Windows NT 4 Core
licenses Technologies Handbook (1998): "Windows
NT Workstation has a limit of 10
incoming sessions. For Windows NT
Server, the number of concurrent
                                      incoming sessions is limited only by
the number of client access licences".
Windows NT, won't back up disks Has been seen when the disks have no
labels.
Windows NT Application Log In TSM terms, this log acts as an event
receiver. The logging of events to
receivers is controlled by the admin
commands DISABLE EVENTS and ENABLE
EVENTS. To disable all logging to the NT
event log, the command would be
something like:
DISABLE EVENTS NTEVENTLOG ALL
Windows NT See: NT ...
Windows NT, active user profile       The active user profile is backed up as
part of the registry on NT
(HKEY_CURRENT_USER hive) and must be
                                      restored as part of the registry. This is
also true of the profiles of any other
logged user (multiple non-interactive
accounts may be logged on at the same
time via services). All logged on user
profiles are loaded into the HKEY_USERS
hive, and as stated above, the currently
logged on user (the one which is running
the tsm client process) is loaded into
HKEY_CURRENT_USER.
Ref: redbook SG24-2231: Windows NT
Backup and Recovery with ADSM.
Windows NT, back up server specific First, you must have TSA410.NLM Version
info 4.14 or later (it exposes the server
specific information to backup
products). Second, you must either be
running a backup against ALL-LOCAL or
use option SERVERSPECIFICINFO.
Windows NT 4.0 and System State Microsoft does not support the "system
state" concept in it, and so TSM cannot
provide such state support. On NT 4.0,
all you see is the registry and event
log in the "SYSTEM OBJECT" file space.
TSM backs up regular files, Registry,
and Event Log.
See also: System State (Windows)
Windows NT Backup of System State Does not include boot.ini: it does
include ntldr and ntdetect.com. TSM does
                                      not back up boot.ini as a regular file.
It does not back up any files that are
included as part of the System Object.
Renaming the System Object on TSM after
the backup does work - I've tested this
out. This is documented in the TSM with
W2K redpiece. One reason to use the NT
backup method is that it gives you the
ability to archive the system state
which you cannot do at all with TSM.
Windows NT backup unable to access In performing Backups of NT file systems
files (access permissions problem) you may encounter "Access denied". The
problem is that some application has
opened the file with an attribute called
"DENY_READ". This attribute says that
the application will have *exclusive*
access to this file, and that no other
applications (including ADSM) will be
able to open this file, even if it is
just for read access.
Windows NT client Supported through TSM 5.1.7.3 only.
Not supported in TSM 5.2 or beyond.
Windows NT client directory Is saclient.
Windows NT client schedule, start On NT, you should install and start the
schedule as a service:
c:\program files\tivoli\tsm\server\
baclient > dsmcutil install
/name:"TSM Scheduler"
/node:yournodename
/password:yourpassword /autostart:yes
(same for ADSM, but \ibm\adsm\baclient)
Windows NT directories and *SM storage NT directories are data-rich, and as
pools such cannot be stored in the *SM
database as AIX directories can; so you
will find NT directories being stored in
a *SM server storage pool (which can be
specified via DIRMc).
Windows NT HSM HSM functionality is built into Windows
2000: RSM (Removable Storage Management)
Windows NT install location Originally was directory "win32app".
Later, directory "program files".
Windows NT name limits File name limit: 256 characters.
Path name limit: 1024 characters.
Windows NT Registry, back up Back up the whole Registry via
'dsmc regback entire', or one user's
entries via 'DSMC REGBACK USER CURUSER'.
Possibly invoke via a BAT file that is
run from the Startup folder, so the
registry is backed up each time you run
NT.
ADSM does not back up the registry to the
default management class: it instead
looks for the management class with the
longest retention (RETOnly), which is
typically a tape management class. So
you may see unexpected tape mounts.
The registry backup is two steps:
1. Copies the registry contents to the
directory c:\adsm.sys\... (The TSM
client employs an MS API to get this
data from the Registry.)
2. Perform a backup of this directory
(during regular incremental backup)
For a restore it's the other way round:
1. Restore adsm.sys
2. Copy the contents back into registry
Windows NT Registry, back up? There is an NT-only option called
"BACKUPReg" to control incremental
backup of the NT Registry. The
default is Yes; can otherwise be
specified as No.
Windows NT Registry, restore See: REGREST
Windows NT Registry permissions The NT default for the registry is Admin
and System Full Control. If you want
a specific user or group to access the
registry, you must run "regedt32" and
give them authority. You can do this
through the "Permissions" option under
the NT "Security" menu.
Windows NT Registry restorals and SID Doing a fresh install of NT generates a
new machine SID. An auto-login as
administrator, generates a new user SID
based on the machine SID, which causes
the restore to fail because the original
SID differs from the current one.
Inserting the original one into the
Registry will allow the restoral to
proceed.
Windows NT System State backup Can be performed with the NTBackup
program that comes with Windows 2000.
Then you could send that output to TSM.
There is also the Backupexec product,
which can use ADSM/TSM as a virtual tape
device.
Windows performance See chapter 3 of the IBM redbook
Tivoli Enterprise Performance Tuning
Guide (SG24-5392)
Windows permissions See: SKIPNTPermissions
Windows restorals and security NTFS object security information is
stored with the object on the server and
will be restored when the individual
NTFS object is restored.
Share level security (may be set on all
types of file systems) is stored in the
registry and currently is only backed up
as part of the registry so the only way
to get it back is to restore a previous
copy of the registry.
Doing a directory only restore will
bring back the directory NTFS security
acl's but it will not bring back the
directory share level security.
(Backing up the individual directory
share information with the directory is
a well known requirement which
Development has contemplated.)
Windows restoral of system objects Per TSM documentation: "Restore of
inactive copies of System Objects are
not supported.". As of 2000, the
ability to restore inactive system
objects is being considered.
Additionally, system objects cannot be
restored to an alternate destination - a
restriction documented in the Readme.
Windows return codes Are documented in the WINERROR.H file.
System errors are documented at:
http://premium.microsoft.com/msdn/
library/sdkdoc/winbase/errlist_7fhu.htm
for Microsoft Preferred Members.
Search http://msdn.microsoft.com/
for the errno.
Windows scheduler The ID you use for backups needs to have
the "Manage auditing and security log"
right, in addition to the "Backup files
and directories" and "Restore files and
directories" rights.
It is recommended that you use the
System account.
See also: Schedule Service
Windows scheduler stops Look for causes in dsmerror.log,
dsmsched.log, server Activity log, the
NT event viewer, and Dr. Watson errors.
(Having BACKUPRegistry No in the options
file can cause the failure.)
Windows security See: SKIPNTSECURITYCRC
Windows System Object The Tivoli name for a collection of
Windows objects (files and databases)
which should be consistent for the
system to be restorable to a state of
viability. Included in System Object is
the Windows System State, which is
Microsoft's essential collection, to
which Tivoli adds things such as the
Removable Storage Management database.
First supported with the V3.7.2 client.
Beware bloat: Windows System Object
can be prolific. A new Win2k with no
applications on it can have system
objects that are 250 MB in size and consist
of 2,000 to 3,000 files; and on larger
systems they can be up to 1 GB in size
and have several thousand files in them.
Most customers do not restore system
files that are more than a few days old,
so best practice is to keep 10 days or
so and we can do this with TSM by
excluding them from the daily backups
and including them to their own
management class with a 10 day
retention. This saves a lot of space in
tape and helps reduce the TSM database
size. If you have a reason to keep them
for 30 or 60 days you can, but it may be
just a waste of space.
Windows System Object, avoid By default, the client option
backing up "DOMain ALL-LOCAL" is in effect, which
in the Windows case causes backup of
System Objects. To avoid backing up
System Objects, explicitly specify
DOMain values.
Windows System Object, backup query dsmc Query SYSTEMOBJECT (q.v.)
Windows System Object, restoring The Windows MACHINE ID (not just the TSM
nodename) must be the same on the
restoral machine as it was on the backup
machine.
Windows system freezes on a file Probably a damaged file. Run 'chkdsk'.
Windows Web Client See: Web Client
Windows95, restore from NT? Cross client restore is supported
between NT and Windows 95.
Windows95 client crudeness As of 9/1998: The ADSM scheduler in
Win95 is *NOT* a Windows friendly app.
It runs in a Dos box, as a command line
app. Windows doesn't know what's
running in that DOS session, so it will
not kill it without asking the user.
You can change the properties of the DOS
window so that the "Warn if still
active" option is unchecked:
- Find the conagent.exe at the
following path: C:\windows\system\
- right click on it
- go to properties
- click on the Misc. tab
- mid way down on the right you will
see the "warn if still active" check
box: uncheck it.
Windows95 GUI info The ADSM admin GUI "properties" are
stored on the client machine, not on the
adsm server. These are stored in the
Registry, under HKEY_USERS\.Default\
Software\IBM\ADSM\CurrentVersion\
AdminClient\xxx
where xxx is the ADSM server name.
Windows2000 Supported as of 3.7.2.
WINS database, back up Do not attempt to back this up directly:
set up WINS to make an automatic backup
of its DB in WINS manager (usually in
the system32/wins/backup folder) and
then use TSM incremental backup to back
up that copy. You MUST back up the WINS
DB manually once for the automated
process to take place.
Ref: http://support.microsoft.com/
support/kb/articles/Q235/6/09.ASP
Wizards One of the aids provided with the
Windows server: see the server Quick
Start manual.
See also: License Wizard
WMI Windows Management Instrumentation
repository, in Windows 2000 and XP.
Ref: 5.2 Windows B/A manual
WORM Write-Once, Read-Many: media which is
intended to be written once, and serve
as an immutable copy of its contained
data, to satisfy regulatory and internal
audit requirements. Originally pertained
to permanently recorded optical media;
but as of mid 2004, IBM offers a WORM
tape, for the 3592 tape drive, in 60 GB
and 300 GB capacities: the drive detects
such special media and allows only
appending to the tape, until it fills.
Write-protected error message ANR8463E is logged when a volume which
*SM believes is writable and which is
mounted for writing, is reported to be
in write-protect state by the drive.
This may be false information, a fault
of the drive hardware/microcode.
Write-protection of media Physical media has historically had a
manually settable protection mechanism
which is honored by the drive. The
nearly universal convention is that a
"void", or indentation, uncovered in the
media carrier (cartridge) informs the
drive that the media is to be considered
read-only, and not writable. With
open-reel magnetic tapes (3420), removal
of the plastic ring in the hub exposed a
circular void which told the drive that
the media was read-only. Floppy
diskettes have a slideable notch which
could expose a square hole through the
cartridge to indicate that it was
read-only. 8mm tapes have a red slider
which causes a hole to be exposed in the
underside of the cartridge. 3480, 3490,
and 3590 tapes have a thumbwheel (File
Protect Selector) which, when turned,
causes a flat spot to be exposed - a
relative void. Some media, like Jaz
cartridges, have no manually settable
protection indicator: a vendor-specific
program has to be run to designate that
the media is read-only...and that's only
effective if you are using the
vendor-supplied device driver to write
to the cartridge, which need not always
be the case.
See also: 3590, write-protected?

XBSA X/Open Backup Services API: a set of
function definitions, data structures,
and return codes that the Open Group
developed to present a standardized
interface between applications that need
to perform backup or archive operations,
and the enterprise solutions that
provide these services. The TSM API
supports this via its provided libXApi.a
along with its own traditional
interface.
Used by AFS 3.6 to back up to 3rd party
facilities, such as TSM, without having
to use old buta.
http://www.opengroup.org/pubs/catalog/
c425.htm
See also: AFS; buta
XFS SGI IRIX file system type. Supported
through TSM 5.1 client.
XL Generic abbreviation for eXtended Length
tapes, like the 3590 Extended High
Performance Cartridge Tape

y.. (y + 2dots) y-umlaut: is usually the Windows font
character 0xFF.
YEARS See: DAYS

Zero-length files Like directories, these are stored only
in the database, and take no storage
pool space.
See also: FILE_SIZE
Zero-length files, back up? TSM provides no CLI way to specify that
zero-length files should not be backed
up. This is unfortunate, in that the
pointless backup of an empty file
displaces a viable backup version
within the fixed number of backup
copies and can thus cause all viable
backup copies to rotate out of the
repertoire of copies.
The v3 client provides several methods,
however... Its GUI provides a nifty
Find Files function (the "magnifying
glass" selection) which allows you to
filter files by size, which satisfies
the requirement. Or you could use the
Unix 'find' command to traverse the file
system and then do a 'dsmc i' on each
file (allowed in v3); however, you
sacrifice a single summary report in
doing this.
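The 'find'-based workaround above can equally be scripted in other languages. Below is a minimal Python sketch (my own illustration, not a TSM facility: the function name and traversal logic are assumptions) that builds the list of non-empty files which one could then feed to 'dsmc i' one at a time:

```python
import os

def nonempty_files(root):
    """Walk 'root' and yield only files whose size is greater than zero,
    suitable for feeding to 'dsmc i' one file at a time (v3+ clients)."""
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getsize(path) > 0:
                    yield path
            except OSError:
                pass  # file vanished or is unreadable: skip it
```

As with the 'find' method, invoking dsmc per file sacrifices the single summary report that one invocation would give.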
Zip drives (Iomega) Can be used for ADSMv3 server storage
pools, via 'DEFine DEVclass
... DEVType=REMOVABLEfile'.

ACCOUNTING RECORD FORMAT (dsmaccnt.log):

There are 29 fields, which are delimited by commas (,) - intended to facilitate
importing the records into a spreadsheet. Each record ends with a new-line
character. The following describes the record format, based upon information in
the Admin Guide, with supplementary information based upon observations. (The
accounting record format changes little over the years.)

Field Contents
1 Server product version. Through ADSMv2, this integer was all there
was to distinguish servers, and was called "Product level". As of
ADSM 3.1, this field's purpose was changed to be "Product version",
and fields pair 30 and 31 were added (q.v.).
Example: "3", as in TSM 3.7.2.
2 Server product sublevel. Example: "15".
3 Product name, "ADSM". (This has not changed, though the product has
transitioned to "TSM" and then "ITSM".)
4 Date of accounting (mm/dd/yyyy) - which is to say, when the session
ended. Corresponds to session-end ANR0403I message date in the TSM
server Activity Log. Has leading zeroes (e.g., 06/23/2004). Note
that the format of this field is immutable, and not affected by
locale settings, such as DATEformat. See also field 21.
5 Time of accounting (hh:mm:ss) - which is to say, when the session
ended. Corresponds to session-end ANR0403I message time in the TSM
server Activity Log. Has leading zeroes (e.g., 06:44:17).
See also field 21.
6 Node name of *SM client. Always upper case. Example: "SERVER1".
7 Client owner name (populated in Unix). Will contain a Unix username
where the session is associated with a user, most commonly with
Archive/Retrieve operations. Otherwise is null.
See: Trusted Communication Agent
8 Client Platform (operating system). Examples: "AIX", "IRIX",
"Linux", "Linux86", "SUN SOLARIS", "WinNT".
Will also reflect name of API program rather than operating system.
9 Authentication method used. Example: "1".
10 Communication method used for the session. Example: "Tcp/Ip".
(There is no further definition of this field provided anywhere.)
11 Normal server termination indicator (Normal="1", Abnormal="0").
Is Abnormal (0) when: the session is terminated by the client, as
via Ctrl-C (ANR0480W); or the session is terminated by the server
due to exceeding IDLETimeout, as when a user just leaves a dsm or
dsmc session idle (ANR0482W); or the client exceeded the COMMTimeout
value (ANR0481W). A schedule involving a failed status does not
seem to cause an Abnormal termination.
12 Archive: Number of archive database objects inserted during the
session. Example: "341".
13 Archive: Amount of data (size), in kilobytes, sent by the client to
the server. Example: "1944135".
14 Retrieve: Number of objects retrieved during the session.
15 Retrieve: Amount of data (size), in kilobytes, retrieved.
16 Backup: Number of backup database objects inserted during the
session. Example: "14408".
17 Backup: Amount of backup file data (size), in kilobytes, sent by the
client to the server, destined for a server storage pool.
This field is also known as the Backup Thread field.
Example: "32177666". See also field 20 comments.
18 Restore: Number of backup database objects retrieved during the
session.
19 Restore: Amount of data (size), in kilobytes, retrieved.
20 Session KB: Amount of data, in kilobytes, communicated between the
client node and the server, in both directions, during the session.
Includes overhead as well as storage pool data. The number
corresponds to message ANE4961I total bytes transferred.
Example: "32229930".
Notes: If, in a Backup, the value in this field is much greater than
the value in field 17, and field 17 is not zero, then the record
reflects a Consumer session which probably involved a lot of retries
on busy files. If field 17 is zero and this field 20 has a high
number, the value largely reflects an unqualified Incremental Backup
where there was a large inventory list of Active files which the
server sent to the client at the beginning of the session.
21 Duration of the session, in seconds. Example: "18838".
You might subtract this from the field 4,5 values to determine when
the session started - which should correspond to the ANR0406I
session started message in the TSM Activity Log.
22 Amount of idle wait time during the session, in seconds.
See quickfact on: Idle wait
23 Amount of communications wait time during the session, in seconds.
See quickfact on: Communications Wait
24 Amount of media wait time during the session, in seconds.
25 Client session type indicator character: (per IC18252)
1 - General Backup/Archive client session (same as 5)
2 - Open registration
3 - Password update session
4 - General Backup/Archive client session (same as 1, but commonly
recorded rather than "1").
Note that, in Backup, there is no indicator value to
distinguish Consumer sessions from Producer sessions: you can
only infer a Producer session by it having a "4" indicator,
and fields 16 and 17 being zero.
5 - Client scheduled session
6 - Admin console session
7 - Admin general session
8 - Admin password update session
9 - Export/import session (used internally)
10 - Admin -MOUNTmode session ('dsmadmc -MOUNTmode')
16 - Server-to-server library sharing session.
19 - Session proxied through storage agent.
26 HSM: Number of space-managed database objects inserted during the
session.
27 HSM: Amount of space-managed data (size), in kilobytes, sent by the
client to the server.
28 HSM: Number of space-managed database objects retrieved during the
session.
29 HSM: Amount of space-managed data (size), in kilobytes, recalled.
30 Product release (new with ADSM 3.1). See also fields 1, 31.
Example: "7", as in TSM 3.7.2
31 Product level (new with ADSM 3.1). See also fields 1, 30.
Example: "2", as in TSM 3.7.2
Notes: - Accounting is by node transaction. The filespace is not recorded, so
you cannot produce reports by filespace.
- The session number is not recorded! (It is available in the server
database SUMMARY table.)
- The amount of data includes retries, as when the client's sending of
data to the server is interrupted because a tape has to be mounted.
- The server's view of session activity has proven to be more consistent
with reality than the client's view. Thus, accounting records tend to
be a better source of session timings than client session summary
statistics (reflected in the client log and ANE messages given to the
server for its Activity Log).
- The session overall data rates can be computed from the field 21
duration and the session type KB values, to yield values like the
"Aggregate data transfer rate" from session summary statistics. But
the equivalent of "Network data transfer rate" cannot be, as there is
no network transfer time directly recorded.
- There is no explicit record of Delete ARchive activity.
- In MVS (OS/390) the recording occurs in SMF records: SMF record type,
subtype 14.
- In TSM backups, there will be a "control session" and a "data
session", where the control session envelops the data session. The
two are recorded separately, with the data session obviously recorded
just before the control session ends and is recorded.
See "Consumer session" and "Producer session".
- As of TSM 3.7, clients perform backups in a multi-threaded manner
such that a single backup job will be recorded across multiple
accounting records. See: Multi-session Client
- The server does not provide any means for cutting off this file, which
will grow endlessly unless you do something.
- Servergraph.com sells software which allows viewing accounting
information in graphical form.
- A pertinent Technote: "Why ITSM accounting record can differ from
Summary records in "Activity.Summary" table" (swg21155024), which
notes that the Summary table is exactly that - a summary of each
session - whereas there may be multiple accounting records for the
same session, one record per thread in a multi-threaded session.
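Because the records are plain comma-delimited text, they are easy to post-process outside a spreadsheet. The following is a minimal Python sketch (my own illustration; the function name and file path are assumptions) that reads a dsmaccnt.log and computes each record's aggregate transfer rate from fields 20 and 21, as described above:

```python
import csv

# 0-based positions of the fields used (the documentation numbers them 1-29).
F_NODE, F_SESS_KB, F_DURATION = 5, 19, 20

def session_rates(path):
    """Yield (node, session_kb, seconds, kb_per_sec) for each record
    in a dsmaccnt.log accounting file."""
    with open(path, newline="") as f:
        for rec in csv.reader(f):
            if len(rec) < 29:           # skip short/malformed lines
                continue
            node = rec[F_NODE]          # field 6: node name
            kb = int(rec[F_SESS_KB])    # field 20: session KB
            secs = int(rec[F_DURATION]) # field 21: duration, seconds
            yield node, kb, secs, (kb / secs if secs else 0.0)
```

Keep in mind (per the notes above) that a multi-threaded backup produces several accounting records per session, so each computed rate is per thread, not per job.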

API NOTES:

The manual "Using the API Interface" notes:
Only the API can restore or retrieve objects that have been backed up or
archived with API calls.

The API is said not to work with TCPNODELAY in the client system options
file.
ADSM/TSM UNNUMBERED MESSAGES (SEEN IN ACTIVITY LOG, ETC.):

cl_ipc_write: Sending message to socket 50004 failed on "Error 0"
cl_inform: Sending message to socket 50004 failed on "Error 0"
acs_ipc_write: FATAL_ERROR! cl_ipc_write() failed.
Indicates a communications problem somewhere between TSM, ssi, and ACSLS:
- Make sure that ACSLS is up and you can talk with it via cmd_proc.
- Then make sure that the ssi and mini-el tasks are up. Make sure they
can talk with ACSLS by running lbtest.
- If they are OK, then stop and restart TSM.
- If you still get those errors, call support.

domdsmc[977]: 33202 Segmentation fault(core dumped)
Did you run the dominstall program after installation? It sets up all
the necessary links and creates .profile entries etc., needed for proper
operation.

dsmserv: Command not found or ksh: rc.adsmserv: not found.
Take a look at your current Path setting...
For security reasons, root (and most other system accounts) should NOT
have a "." entry in the Path, to prevent someone from planting a command
of the same name as one that is an established name, but in an oddball
directory, like /tmp. Imagine what could happen if root had cd'ed into
/tmp and had "." in its Path, and happened to invoke some imposter
command instead of the real command. That's a classic hacker ploy. Get
in the habit of doing ./CMDNAME form invocation to explicitly invoke
something in the current directory. And make sure none of your
non-ordinary accounts have a "." in their path.

dsmserv.dsk file not found
Seen after installing TSM where SMIT was allowed to extend the file
systems: The expansion is apparently not enough, and in any case the
install is only partial. Among other things, the dsmserv.dsk file
doesn't get created. Your best bet is to remove the software using smit,
then rm the /usr/Tivoli directory structure. Then expand /usr yourself
first, then install the server software.

Message number ____ not available for language EN_US
These errors are generally seen when the TSM messages filesets are not
at the same level as the TSM Server. As a result, certain messages do
not exist in the message repository and cannot be displayed within TSM.
In AIX, issue the 'lslpp -l tivoli.tsm.*' command to list all of the TSM
filesets currently installed. Ensure that the messages filesets are at
least at the same maintenance level as the server runtime fileset.

Out of Memory
Error seen in the TSM 32-bit Server. Consider using the 64-bit version;
See: http://www.ibm.com/support/docview.wss?uid=swg21154955

Server or DNS can not be found
As when trying to use the web admin. See "Web Admin" for specification
of proper server options to facilitate web admin access. Also, try
doing a Reload/Refresh in your web browser to see if browser caching is
the problem, and try specifying the server as an IP address instead of a
server name, to see if it is a DNS problem preventing access. Check
your server Activity Log to see what problem may have occurred when it
tried to start HTTP services, where it will report the HTTPPort when
successful. Do 'netstat -l' or use 'lsof' to look for the server
listening at the HTTP port.
Waiting for multiple mount points in device class ____ (___ seconds)
Output from Query PRocess, as when a Move Data is trying to run.
See: Drives, not all in library being used

0506-511 Filesystem helper unknown vfs
Your /etc/vfs file was probably zeroed by an errant install.
Restore from your backup copy.

>>>>>> Process Interrupted!! Severing connection. <<<<<<
Appears in the schedule.log. Often accompanied by "ANS4017E Session
rejected: TCP/IP connection failure". Can occur due to server Halt or
simply a reboot of the server computer system. On the client side, the
client process was terminated, as via Ctrl-C at the terminal or via a
'kill' command on Unix, for example.

>>>>>> Restore Processing Interrupted!! <<<<<<
Seen in conjunction with ANS1028S and ANS4000E.
May relate to needed restoral objects not available on the server.

*SM NUMBERED MESSAGES:

The general format of such messages is: PPPnnnnT
where: PPP is a 3-letter prefix
nnnn is a 4-digit number
T is the message type: E Error I Information
K Kernel message from HSM
S Server error W Warning

ACD-----(TDP for Lotus Domino)--------------------------------------------------
Refer to the TDP for Lotus Domino manual. Don't overlook the dsmerror.log and
dsmsched.log files as additional sources of information.
ACD0106E ReadIndex: Message index not found for message 131.
Usually indicates that your TSM/ADSM API message file is out of sync
with your version of the TSM/ADSM API runtime. Specifically, files
DSCAMENG.TXT and ADSMV3.DLL (and/or TSMAPI.DLL). Try re-installing
the latest version of the TSM API or re-installing TDP for Domino (which
will also re-install the latest version of the TSM API files as long as
the NT Registry is showing the older version is installed).
ACD0200E File (<NULL>) could not be opened for reading.
This might be because some type of Domino maintenance activity is
happening within the backup window. TDP for Domino scans all of the
databases to gather info such as pathnames at the onset of the backup.
Something may have interfered with the acquisition of such info then, or
something has changed by the time TDP for Domino opens the object for
backup. The most obvious cause would be the disappearance of the
object.
ACD0202E Read failure on file (d:\notes\data\applications\What\Ever.nsf).
Looks like the Notes database has a bad page. Employ whatever Notes
utility is appropriate to identify and/or correct the page error. You
can try something like a compact or a Notes Admin "copy" to have it read
through the entire database, page at a time. You could also attempt a
normal file level copy to see if that finds a problem.
ACD5130E Could not initialize the connection to Lotus Domino properly. error=416
See the Lotus API documentation for error numbers. 416 is "Too many
concurrent users of the Notes API package." Look for other applications
running against the Domino server. It could be that some of the sessions
are not getting cleaned up. There might be some way to find out who
owns all of the current "connections" to the Domino Server. Confer with
your Domino administrator. Stopping and restarting your Domino server
will probably do the trick - if you have the luxury of doing so.
ACD5130E Could not initialize the connection to Lotus Domino properly,
error=4103
Occurs when the PATH environment variable puts TDP Domino behind other
paths when it needs to be at the head of the list. Such an error has
also occurred through user tinkering, as in having moved the old version
module, nnotes.dll, into the Winnt folder, which is in the PATH.
ACD5207I TDP for Lotus Domino: Incremental database backup from server ______
complete. Total Domino databases backed up: ___ Total bytes transferred:
______ Elapsed processing time: ____ Secs Throughput rate: ____ Kb/Sec
Summary results from the backup. Note that this message fails to include
info about compression results, leaving you to sum the "Written:" values
for each of the involved backups and compare against the "Total bytes
transferred:" value.
ACD5901E The '-INTO=filename' parameter requires a complete filename.
You entered something like: domdsmc restore "*" /subdir=yes /into=/tmp/dominotest
to restore multiple databases; but /into specifies a single object
target, which is inconsistent. Code like /into=/tmp/dominotest/=
to tell TDP both that the destination is a directory (via /) and that
the original filenames should be used (via trailing =).
Unnumbered messages from Domino:
This database is currently being used by someone else. In order to share a
Notes database, all users must use a Domino Server instead of a File Server.
This normally means that Data Protection for Domino is picking up the
wrong NOTES.INI file. To help resolve this issue...
1. Make sure that NOTESINIPATH setting is pointing to the correct
directory containing the NOTES.INI file of the active Domino Server.
2. Make sure there is not a NOTES.INI file in the \WINNT or
\WINNT\SYSTEM32 directory. The Domino Server API code looks there
first, even if you specify the directory where you want it to look.
3. If the above two things do not resolve the issue, search for all
occurrences of the NOTES.INI file on your machine and find out if it
is picking that one up first. You could do this by temporarily
renaming all of them EXCEPT the one that your active Domino Server is
using.

ACN-----(TDP for Microsoft Exchange Server)-------------------------------------
Refer to the TDP for Microsoft Exchange Server manual. Don't overlook the
dsmerror.log and dsmsched.log files as additional sources of information.
See also the Storage Manager for Mail manual topic:
"Determining if the problem resides on Tivoli Storage Manager or Exchange".
Results codes can be found in the SDKs ESEBKMSG.H file, available at
http://sdks.icarusindie.com:2004/index.php .
ACN3521 Exchange Application Client: FULL backup of ____ from server ____
failed, rc = 310.
A return code 310 means that the Exchange Server reported a problem.
Examine the TDP for Exchange log file on the Exchange Server having the
problem. This is normally located in the installation directory for TDP
for Exchange (unless you changed it) and is called, by default,
excdsm.log. If it doesn't reveal anything meaningful, you should call
Tivoli Support so they can get a trace to diagnose the problem.
ACN4215E : Failed to open file during restore
In a Windows environment, this error occurs when the Windows function
CreateFile() fails. TDP is trying to open the file to be restored for
GENERIC_WRITE and FILE_FLAG_SEQUENTIAL_SCAN. You will have to run a
trace in order to determine the name of the file being opened.
This message usually means that TDP for Exchange is trying to open a
file (for write) through an admin share on the local drive containing
the Exchange database or log files. Has been seen when the internal
admin shares for the drives that contain the database or log files have
been removed, as may be done by third party security packages. Make sure
that your C$, D$, E$, ... or wherever your Exchange database and log
files reside actually exist as admin shares (issue the NET SHARE command
to find out). If they do exist, use your Windows tools (like Computer
Management on Windows 2000) to find out the permissions on those shares.
Make sure the userid that you are running the restore with has
permission to write to those shares.
ACN5237E Unable to communicate with the Microsoft Exchange Server.
Perhaps you are running this under a userid that does not have enough
permission.
If you are using the command line agent, be sure to use the /EXCSERVer
option to specify the virtual name of the exchange server. Make sure the
Exchange server is running on the same machine as the agent.
ACN5304E, Unable to open service to determine if running or not
When using the CLI to back up Exchange, for example. It is looking at
the Information Store service. Sometimes this occurs when you are
running under a Windows userid that does not have Administrator
authority to view the registry. For example, this may occur when you are
running from a scheduler service that is being run under a different
userid. This message may also occur when performing the operation as
local admin, rather than domain admin.
ACN5798E MS Exchange API HRESEBACKUPRESTOREGETREGISTERED() failed with HRESULT:
0xc7ff07d7 -
Perhaps you have a clustered environment: then you need to run
TDPEXCC/TDPEXC with /EXCSERVER=Virtual-Exchange-Server-Name .
ACN5798E MS Exchange API HRESEBACKUPSETUP() failed with HRESULT: 0xc80001f9 -
Backup is already active.
Pretty much what it says. Check. The Exchange Server thinks that an
Exchange backup is already being run on that storage group. That could
mean either that a backup is actually still running on that storage
group or that a previous backup has hung or prematurely ended. (If a
previous backup has hung or prematurely ended, the Exchange server is
supposed to automatically detect the situation and clear the status so
that a new backup can be started.)
ACN5798E MS Exchange API HRESEBACKUPSETUP() failed with HRESULT: 0xc800020e -
Retrying failed backups...
This occurs with a TDP for Exchange incremental backup, but a full
backup runs fine. The Microsoft ESEBKMSG.H file shows this being
MessageId hrInvalidBackup, with MessageText "An incremental backup
cannot be performed when circular logging is enabled." In some cases,
Exchange is holding on to stale status info and needs a restart.

ACO-----(TDP for Microsoft SQL Server)------------------------------------------


Refer to the TDP for Microsoft SQL Server manual. Don't overlook the
dsmerror.log and dsmsched.log files as additional sources of information.
In the messages, you may see things like "HRESULT:0x800455f3". This comes from
the MSSQL server, where the 0x8004 is common, and the last four hex numerals are
the MSSQL error number. Translate the last four hex numerals to decimal (in this
case, 55f3 -> 22003).
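That hex-to-decimal step can be done with a shell one-liner; a minimal sketch, reusing the example HRESULT from above:

```shell
# Mask off the low 16 bits of the HRESULT (the MSSQL error number)
# and print them in decimal: 0x55f3 -> 22003.
hresult=0x800455f3
printf 'MSSQL error number: %d\n' $(( hresult & 0xFFFF ))
```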
ACO0004E An unknown error has been detected.
Usually seen in an upgrade, most likely a corrupted installation or
configuration issue. The best course is to uninstall the old and
reinstall the new.
ACO0057E The checksum in the license file
(C:\Program Files\Tivoli\TSM\TDPSql\sqlclient.lic)
does not match the license string text.
The wrong license file (sqlclient.lic) is installed in the TDPSql
directory. TDP For SQL 2.2.0 needs to be installed before 2.2.1 or
5.1.5. Fix by planting the 2.2.0 license file.
ACO0151E Restore failed [Microsoft] ODBC SQL Server Driver...The file cannot be
used by RESTORE..Consider using With MOVE Option to identify a valid
Location.
Look at the /RELOCATE and /TO options (or right-click while the backup
to restore is selected): this allows you to move the database to a
different physical location.
ACO2637E: Error flushing temp logfile while pruning C:\TSM\agentsql\SQLDSM.LOG.
Check for abnormalities: insufficient space on the disk where the log
file is; that the user has the correct permissions to write to the
directory.
ACO4210E -- Failed to connect to SQL server.
Try using the following options on the "sqldsmc" command to specify a
valid SQL Server user id and password:
/SQLUSer:username (default: sa)
/SQLPWD:password (default: "")
or use the following option when using a trusted connection (Windows NT
is used to authenticate the user.): /SQLSECure
If the GUI client works fine but the command line client is encountering
this message, it indicates that someone at one point provided the
correct sqluser and pswd info to the GUI (which then got stored in the
Registry). Both the command line and GUI will use "sa" as the default
sql userid if none is provided, but if that is incorrect, the GUI will
prompt the user to enter a different one and then stores that in the
Registry. If you supplied sqluser and sqlpswd on the command line, they
seem not to be the correct ones. The command line does not currently
look for stored userid/pswd values in the Registry.
Rebooting the NT system is also reported to eliminate the message.
ACO5091E PASSWORDACCESS is Generate. Either the stored password is incorrect or
there is no stored password. If you do not have a stored password, use
the -TSMPassword=xxx option to set and store your password.
Try this:
1. Add "CLUSTERNODE YES" to the DSM.OPT file.
2. Reset the password on the TSM Server for the node you are using for
DP for SQL.
3. Issue a command that connects to the TSM Server specifying the
/TSMPASSWORD= option. For example:
TDPSQLC QUERY TSM /TSMPassword=password
4. Retry the command that was failing.
ACO5400E The Virtual Device Interface is not registered with the Common Object
Model.
This is an OS/SQL server issue, not TSM. Has been seen when people have
upgraded to Windows SP4... and then their SQL backups would hang or
fail. This has been fixed by upgrading to SQL 2000 SP3a. Verify that the
ID you are running from has SQL sysadmin authority as well as Full
permission properties to the registry. See also KB article Q323602
(support.microsoft.com/default.aspx?scid=kb;en-us;323602&Product=sql2k).
ACO5422E Received the following from the MS SQL server:
Message text not available. HRESULT:0x800455f3
One customer reported experiencing this during backups. There is no
certain answer on this: you may have to check the MSSQL server log.
Check that you have the correct permissions to run a backup (SYSADMIN)
and that you have proper authority to access the registry.
ACO5424E Could not connect to SQL server; SQL server returned:
[Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user
'NT AUTHORITY\SYSTEM'. Microsoft SQL-DMO (ODBC SQLState: 28000)
(HRESULT:0x80044818)
Try changing the "Login As" parameter for the scheduler service to be
the user account that you successfully ran a backup with. Has also been
seen when the name of the SQL database was changed by its administrator
without communicating the change.
ACO5436E A failure occurred on stripe number (1), rc = 418
The API manual says the error is: DSM_RC_OPT_CLIENT_DOES_NOT_WANT
(client doesn't want this value from the server). It generally indicates
that there is a TSM API error. Check the API's DSIERROR.LOG file for
indications of the problem, and the TSM server Activity Log.
ACO5456W The backup is corrupt and is not fully restorable.
A single Data Protection for SQL backup is made up of multiple objects
on the TSM Server. This error indicates that the number of objects
which make up the specific object you are trying to restore is not
correct. This is abnormal, perhaps occurring if there have been
deleted/corrupted/lost TSM Server objects due to bad/lost tapes - which
you need to investigate in your TSM server. If all else fails, perform a
trace to illuminate the objects.
MS-SQL error log msgs:
unable to open English message repository 'dscenu.txt'
Seen after applying Windows patches. Looks like the Path is screwed up.
The ACO0004E msg handling is the best approach to fixing it. Or...:
http://www.ibm.com/support/docview.wss?uid=swg1IC39325 may help...
"Go to control panel on the Windows system and double click on the
system icon. From the System Properties Window select the Advanced tab
and Environment Variables. Under System Variables locate the PATH
variable on the left side of the window. Add the following to the end of
the variable value: ;p:\Program Files\Tivoli\TSM\baclient\"

ANE-----(client events logged to server)----------------------------------------


The ANE messages originate from the backup-archive client and are sent to the
server for distribution to various event logging receivers: as such, they
carry the same information as their original ANS message counterparts. Such
messages appear in the server Activity Log, such as session summary statistics.
Note that session summary statistics identify the node involved, but not the
filespaces.
ANE4018E ...: file name too long
In the TSM 4 era, APAR IC27346 addressed such a problem with the Windows
client. After that, it was seldom reported by customers...and, where
reported, involved a path length which, confusingly, was not long.
It might be seen where a counterproductive user of the system creates a
file or directory where the object name has embedded characters which
are the same as the operating system directory separator character (/ or
\). The way to check for this is to do 'ls' or 'dir' on each alleged
directory portion of the path, and identify the inconsistency. For
example, in name /a/b/c, do 'ls /a' and see if b is in there; if so,
then do 'ls /a/b' and see if c is in there.
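That component-by-component check can be scripted; a minimal sketch (the path /a/b/c is just the illustrative name from above):

```shell
# Walk a path one component at a time; report the first component that
# does not appear in a directory listing of its parent.
path="/a/b/c"
cur=""
for part in $(echo "$path" | tr '/' ' '); do
    if ls "${cur:-/}" | grep -qx "$part"; then
        cur="$cur/$part"
    else
        echo "'$part' not listed under '${cur:-/}' - inspect for embedded separators"
        break
    fi
done
```

Note that grep -qx demands an exact, whole-line name match, which is what exposes an entry whose name embeds a separator character: the mangled entry will not match the expected component name.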
ANE4005E (Session: 4278, Node: _____) Error processing '<FileName>': file not
found
Backup processing tries to accomplish its work using the freshest
possible information about the files present in the directory it is
currently processing. However, files can still come and go quite
rapidly, and in this case the file it was about to back up was removed
from the directory before it could be backed up.
ANE4007E (Session: 4279, Node: _____) Error processing
'\\mokizd\c$\WINNT\system32\dhcp\dhcp.mdb': access to the object is
denied
You attempted to back up something that is active. For example, DHCP is
a live service. You would need to deactivate it first. (Note that
WinNT backs up the DHCP database automatically every hour, so it should
be excluded from ADSM backups.)
See also: ANS4007E
ANE4028E Error processing 'filespace-name path-name file-name': cannot create
file/directory entry
The directory path for files being restored or retrieved cannot be
created. The file is skipped.
Ensure that you have the proper authorization to create the directory
for file being restored or retrieved.
ANE4961I (Session: NNNN, Node: _NAME_) Total number of bytes transferred: ____
Is the total amount of data communicated between the client node and the
server, in both directions, during the session. Includes overhead as
well as storage pool data. Corresponds to accounting field 20.
ANE4991I <free form text>
Activity Log message typically resulting from using the API (most
prominently, TDPs) to log an informational message in the server via
dsmLogEvent. You can suppress such messages by doing
'DISAble EVents ACTLOG ANE4991 client_node_name' or
'DISAble EVents ACTLOG ANE4991 *'
ANE4993E General notes...
ANE4993E is the server-side message corresponding to the ANS4993E
message in the client. The message is free-form, to log any messages
generated by the application which is running in conjunction with the
TSM application. Typically, this message is logged by a non-TSM API
via it calling dsmLogEventEx(), documented in the TSM API manual.
For specifics on the content of the message you will have to refer to
the documentation accompanying the non-TSM application.
ANE4993E (Session: _____, Node: _____) TDP MSExchgV2 NT ACN3502 TDP for
Microsoft Exchange: full backup of Information Store from server ____
failed, rc = 418.
Try changing the buffer size (/BUFFERSIze) to 64K instead of the default
and the number (/BUFFers) to 4. This will likely improve performance and
eliminate the fragmentation issues with getting 1 MB buffers.
ANE4993E (Session: _____, Node: _____) TDP MSSQL ACO3002 Data Protection for
SQL: log backup of database ________ from server ________ failed, rc =
1914.
RC=1914 means a SQL Server API error. Take a look at the DP for SQL log
file on the SQL Server machine itself to find out the cause of the
error. If that does not help, take a look at the SQL Server error logs.
It could be that the abacushd database does not allow log type backups.
ANE4993E (Session: _____, Node: _____) TDP MSSQL Win32 ACO3002 Data Protection
for SQL: full backup of database ______ from server ______ failed,
rc = 1912. (SESSION: _____)
You need to examine the log file or output of Data Protection for SQL.
RC=1912 indicates an error in the creation of a VDS. The TDP attempted
to establish a VDI session connection for a backup or restore, and
session creation failed. Look for a cause in the SQL server log or VDI
error log. A simple cause is a permissions problem.

ANR-----(server messages)-----------------------------------------
A number in parentheses alongside a module name, such as "asalloc.c(5006)", is
not significant: it can be expected to be the source code line number, as via the
ANSI C __LINE__ definition; so it will vary from one maintenance level to
another.
Module names beginning with "sm" (e.g., smnode.c) seem to reflect software
involved in TSM server commands.
ANR0000W Unable to open default locale message catalog, /usr/lib/nls/msg/C/.
Ostensibly, your environment variable LANG is set to C rather than
en_US.
ANR0000W Message repository for language AMENG does not match program level.
The server message repository file is built for a certain level of
dsmserv (the "program") and the two must be matched to work properly.
"Loose" management of TSM filesets or casual copying of application
files at a site can result in inconsistencies. This one does not keep
the server from coming up, but is a big clue that things are out of
whack, and should not be ignored.
ANR0000E Unable to open language AMENG for message formatting.
Usually happens when you try to execute the server from a directory
other than where the server code is located. Try "cd"ing to
/usr/lpp/adsmserv/bin and issuing it there; or under Csh do
'setenv DSM_DIR /usr/lpp/adsmserv/bin'. If you ARE running it from
there, check to make sure the file exists (dsmameng.txt) and that you
have sufficient permission to it.
ANR0102E asalloc.c(5006): Error 1 inserting row in table "AS.Segments".
A problem within the database, involving other than a disk storage pool,
perhaps precipitated by a shortage of space in the database volumes or
an abrupt shutdown of the server while it had a problem, such as a
failing tape dismount. You may have to do a 'dsmserv Auditdb' offline,
or an 'AUDit DB' online. Some customers have found that restarting the
server helps. Or you may be able to locate an involved tape and remove
it from the system to clear that info from the database. Also found to
be caused in Reclamation: the currently open copypool volume, upon being
reclaimed, went "empty" rather than "pending". It never went into
scratch status, but rather, was reused, and marked as "STGREUSE" in the
volhistory. A resolution to this is to mark the "STGREUSE" volume
ACCess=READOnly, which may allow writing to the copypool again, so you
can run a Move Data against the R/O volume, making sure it goes to
scratch status.
You might avoid such messes by utilizing *SM MIRRORWrite on your *SM
Database and Recovery Log, with Sequential writing.
May be caused by APAR IC36975, involving the COPYSTGPOOL= simultaneous
write feature. See also: REPAIR STGVOL
ANR0102E dsalloc.c(980): Error 1 inserting row in table "DS.Segments".
A problem within the database, involving a disk storage pool, seen in an
abrupt shutdown of the server while it had a problem, such as a failing
tape dismount. Fixing it with least disruption requires working with
TSM Support. They may advise migrating all disk storage pool data to
tape, shutting down the server, and then doing 'DSMSERV AUDITDB
DISKSTORAGE FIX=YES'.
You might avoid such messes by utilizing *SM MIRRORWrite on your *SM
Database and Recovery Log, with Sequential writing.
May be caused by APAR IC36975, involving the COPYSTGPOOL= simultaneous
write feature. See also: REPAIR STGVOL
ANR0104E astxn.c(1159): Error 2 deleting row from table "AS.Volume.Assignment"
followed by ANR0865E expiration processing failed - internal server
error.
Solution: Recover the database. Use RESTOREDB if you have recent
BAckup DB tapes; else you will have to perform a salvage operation
using DUMPDB/LOADDB.
But you may be able to employ the following instead:
1. Create a listing with command 'Q VOL * STA=Pending F=D'
2. Search for the string "Date Became Pending".
3. Compare the date with the "Reuse delay" parameter of your storage
pool(s).
i.e.: if today is 11/09/98 and the reuse delay parm is 9 days, then look at
tapes 10/31/98 and below.
4. Execute --> MOVe Data volser
After the move-command(s) was successfully completed or ended with
message "no data on volume"
5. Execute --> DELete Volume volser
to remove this volume(s) from storagepool.
ANR0104E ASVOLUT(2202): Error 2 deleting row from table AS.Volume.Assignment
You have a tape volume in your storage pool that incorrectly has the
reuse delay flag set in the data base. To correct this problem issue
the following commands:
1. Q STG F=D The purpose of this command is to find the reuse delay for
the storage pool.
2. Q VOL F=D STATUS=PENDING With this command you will get a list of
all of the storage pool volumes that currently has the reuse delay
flag set.
The volume that has the error will be the volume on that list that
the reuse delay has expired. Perform a MOVe Data on that volume.
Thereafter, the volume should be in the correct state and you can issue
the command DELete Volume if needed.
ANR0104E imaudit.c(3797): Error 2 deleting row from table "Expiring.Objects".
As seen after message "ANR4206I AUDITDB: Object entry for expiring
object 0.0 not found - expiring object entry will be deleted.".
The "object 0.0" says that Expiring.Object table contains an entry which
does not have a corresponding Object Ids entry, and in consequence it
tries to delete this entry but fails with 'key not found'. That is,
there is an inconsistency in the database which can not be corrected
by dump/load/audit.
ANR0106E imarqry.c(4481): Unexpected error 2 fetching row in table
"Archive.Objects".
The table may be one of several (Archive.Descriptions, etc.).
This was a defect in TSM 5.2, per APAR IC39132, caused by a
timing/locking issue within the expiration process (hence, results may
vary in each run).
ANR0110E An unexpected system date has been detected; the server is disabled.
Use the ACCEPT DATE command to establish the current date as valid.
So you need to check out your system clock, and perhaps your NTP
service. In smaller systems, the problem may be a depleted battery on
the motherboard. Whereas this message typically occurs at TSM server
start-up, and causes start-up to fail, it leaves no opportunity to
perform the ACCEPT. The way around this is to create a little TSM macro
file in the server directory called like "accept_date" containing the
lines ACCEPT DATE and COMMIT, and then do
'dsmserv runfile accept_date', which just performs that task and halts
the server, whereafter a normal restart should work.
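Creating that macro file can be sketched as below (accept_date is just the example name from the text; the dsmserv step is shown as a comment because it must be run in the server directory against a real server):

```shell
# Build the two-line TSM macro that accepts the odd date and commits it.
cat > accept_date <<'EOF'
ACCEPT DATE
COMMIT
EOF
# Then, in the server directory:
#   dsmserv runfile accept_date
# dsmserv performs the macro and halts; restart the server normally after.
```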
ANR0207E Page address mismatch detected on database volume _____,logical page
xxxxxx (physical page xxxxxx); actual: 0.
ANR9999D iccopy.c(nnnn): ThreadId<nn> Unable to read from db volume.
You have a corrupted database. Were you configured with MIRRORWrite DB
Sequential as the product recommends, to guard against such problems?
If not, you'll probably have to restore your DB - after investigating
the cause of the problem (overfull db? opsys crash? disk problem?) so
as to keep it from occurring again. Refer to IBM Technote
http://www.ibm.com/support/entdocview.wss?uid=swg21155009
ANR0212E Unable to read disk definition file: dsmserv.dsk
It may be absent, or its permissions may be wrong. One user reported
finding this happening when the Recovery Log was full.
ANR0252E Error writing logical page 141410 (physical page 141666) to
database volume adsm-db1.mirror.
See: ANR7838S
ANR0259E Unable to read complete restart/checkpoint information from any
database or recovery log volume.
Typically seen in an install, where the recovery log and database
volumes have been only formatted with dsmfmt and then the server is
started by just invoking 'dsmserv'. There needs to be information in
the db about the environment as the server last saw it. In a fresh
install, though, the db is blank. You need to instead run
'dsmserv format' to initialize the database. See the Admin Guide.
Seen during a server restart: Suggests disk damage.
If experienced during a 'dsmserv restore', it would seem to be the case
that a fresh database and/or recovery log is being supplied for the
restoral - but the fresh areas have not been formatted. Do 'dsmserv
format' first. Watch out for your dsmserv.dsk not reflecting the
reality of the db and recovlog disks you set up: if necessary, move
dsmserv.dsk out of the way and manually define your disks when the
server comes up. Try using command option TODATE=, to do a point in
time restore with a volume history file, to keep dsmserv from trying to
read the Recovery Log to do a roll forward restore.
ANR0355I Recovery log undo pass in progress.
As seen in TSM server start-up. The larger and more full your Recovery
Log, the longer this will take, so be patient. If concerned whether
it's running, use your opsys monitoring tools to check for lots of I/O
activity on the Recovery Log and Database.
ANR0361E Database initialization failed: error initializing database page
allocator.
Explanation: During server initialization, the server database
initialization fails because the page allocator cannot be started.
One thing to check is that the ADSM server files are as the dsmserv.dsk
file thinks they are. Part of the ADSM database may be corrupted.
Consider running an AUDITDB.
ANR0362W Database usage exceeds 86 % of its assigned capacity.
Periodic warning message when the database is getting full, added by
APAR IC08768. Messages begin when database utilization exceeds 80% of
its capacity, and the message is re-issued as database utilization
increases by 2% thereafter.
ANR0406I Session NNNN started for node <NodeName> (Opsys_Type) (Tcp/Ip
<Client_IP_address>(Client_Port_Number)).
Appears in the Activity Log when a client session is initiated, as when
the 'dsmc' command is issued, with a subcommand. (Note that the
originating process will differ, depending upon the issuer: if the
superuser invokes 'dsmc', the client port origin of the session will be
the dsmc command itself; but if 'dsmc' is invoked by an unprivileged
user, the client port origin of the session will be the dsmtca process.)
Note that with client schedules v3 and above, there will be two ANR0406I
session messages: the first (outer) is the data movement session itself;
the second (inner) is one by which the client sends ANE client event log
messages to the server resulting from the session.
ANR0423W Session ____ for administrator ________ (<PlatformType>) refused -
administrator name not registered.
The administrator name position may contain strange characters, which
should make you suspect Unicode. They look wacky to you because your
dsmadmc session to query the Activity Log is not using the same
character code page as the platform which is initiating the timed
sessions, so your admin session can't make sense of the name. Probably a
Windows client, but could be another platform, now that Unicode support has
been extended. You might try dsmadmc from such a Unicoded client system.
See if there are accompanying ANR messages indicating the source of the
sessions, and get in touch with the client owner. If no ANR msgs and
the server is AIX, you can employ the AIX iptrace/ipreport command set
to relatively easily see where this traffic is coming from.
ANR0428W Session ____ for node ________ (<PlatformType>) refused - client is
down-level with this server version.
The node had been accessed with a higher level client than you are now
trying to use, and that higher-level access caused the server to "latch"
the requirement that the client always access its filespaces using at
least that level, due to data formatting and/or control information
established by that higher level - which lower level clients cannot
understand, and so sessions from them must be prevented.
This is very insidious where a session was conducted for a nodename from
a platform type other than that of the true, owning client. This bogus
access can cause not just a platform reattribution, but also latching of
the level peculiarities possessed by that interloper client; so when you
next try to conduct a session from the true client, you are locked out
with this error message. This is often due to a bug involving Unicode
vs. non-unicode clients: the rogue session probably resulted in the
client being marked as unicode-enabled. In the IBM/TSM support site,
search on ANR0428W and there view the instructions for patching the
database entries for the damaged client. By all means, feel free to
call vendor support for assistance in this process.
See also: ANS1357S
ANR0440W Protocol error on session 70919 for node _Name_ (_OSType_) invalid verb
header received.
The invalid protocol stream and lack of node identifiability suggests
that some random system on the net is trying to connect to your *SM
server port (default: 1500), but not in *SM terms. Someone out there may
have erroneously set up some kind of mail client or the like to
periodically poll what it thinks is a mail server for new mail, using
your port number. First assure that you have not configured your server
to use a port number well-known for some other service. Then you'll
have to use some kind of network/packet trace or the like to determine
where it's coming from, as in using the Unix 'tcpdump' command or AIX
'iptrace'. (You should be prepared to do this in any case: any site on
the net could be subject to Denial Of Service attacks, and needs to be
ready to find out where they are coming from, so as to have their router
filter out traffic from that IP address.)
A closer-to-home possibility could be some spud in your company fumbling
to set up a *SM client to connect to your *SM server.
If an established client, may be mangled communication from it. Try
conducting other kinds of sessions from the HP (telnet, ftp) to see if
it can do those without manglement, then try again with ADSM, first with
queries like 'dsmc q fi', to try to narrow down where the anomaly is.
ANR0444W Protocol error on session NNNNN for node NODENAME (CLIENT_TYPE) -
out-of-sequence verb (type Confirm) received.
^^^^^^^^^^^^
Has been seen with client type TDP Domino NT when the Domino server was
short on memory.
ANR0444W Protocol error on session NNNNN for node NODENAME (CLIENT_TYPE) -
out-of-sequence verb (type SignOff) received.
^^^^^^^^^^^^
The specified client is attempting to establish a session, but no
password has been established. (PASSWORDAccess Generate)
Accompanied by message ANR0484W.
In Windows, can be caused by invalid Password stored in Registry.
Also look in dsmerror.log for "ANS1838E Error opening user specified
options file C:\program files\utilities\adsm\baclient\dsm.opt".
To fix: As root/Administrator, simply run a client-server command like
'dsmc q sch' to re-establish the password.
ANR0444W Protocol error on session NNNNN for node NODENAME (CLIENT_TYPE) -
out-of-sequence verb (type (Unknown)) received.
^^^^^^^^^^^^^
Seen with defective TCP/IP software, as within MVS, caused by a buffer
overflow or other problem.
ANR0480W Session 232 for node _Name_ (_OSType_) terminated - connection with
client severed.
The client dsmerror.log typically contains:
ANS1809W Session is lost; initializing session reopen procedure.
ANS1810E TSM session has been reestablished.
In combination, this would indicate that neither the server nor client
knows why the connection was severed - which says that something in
between the two did it.
May be that firewall software is in between, and it may have its own
"idle timeout" value: if it believes that a session is doing nothing,
the firewall terminates the session. (The TSM client may well be busy
trawling through a file system seeking the next backup candidate, or a
TDP may be waiting on a response from Oracle, etc.)
A network cause is where a network switch was rebooted and
autonegotiation failed to agree on link characteristics (like half-
vs. full-duplex). In this case, avoid using autonegotiation. Otherwise,
look for evidence on the client (dsmerror.log, OS logs).
In API coding, the programmer omitted the session termination step, or
the API program failed and exited prematurely.
Has also been reported where a defective storage pool was put offline
and its replacement was defined - but the administrator neglected to do
ACTivate POlicyset. May be accompanied on client by dsmerror.log entry
"Txn Producer thread, fatal error, signal 11".
ANR0481W Session NNN for node _Name_ (_OSType_) terminated - client did not
respond within 60 seconds.
The COMMTimeout value is too low. The AIX server default is a puny 60
seconds. Boost it.
ANR0482W Session <SessionNumber> for node <NodeName> (<ClientPlatform>)
terminated - idle for more than N minutes.
If the messages reflects 15 minutes, your server still has the product
default IDLETimeout of 15 minutes, which is much too small. Boost it to
60: you need a large value to accommodate clients rummaging around in a
large, relatively inactive file system looking for changed files to back
up. If you have a good-sized value already, investigate why your client
is idle so long...which could occur if you need a password on the
command line and neglected to specify one, which causes the *SM client
to prompt and wait indefinitely.
ANR0484W Session 123 for node _Name_ (_OSType_) terminated - protocol violation
detected.
Usual cause: The specified client is attempting to establish a session,
but no password has been established. When you registered the client on
the server you established a password, which must be used when the
client session is invoked, either implicitly with
"PASSWORDAccess Generate", or explicitly with "PASSWORDAccess Prompt".
If the IP address in the message is not that of your workstation, it
might be that some other machine is using that name, or possibly that an
old server has been reactivated, which has old info about client IP
addresses.
Do 'dsmc q sch', a basic client-server query which goes through all the
password and network stuff that backup does, and will prompt for and
establish the password in the client area if "PASSWORDAccess Generate"
is in effect. If such a password is in effect, a good response will
verify the client-server interaction.
Another cause: A Client Schedule is defined with ACTion=Macro and the
macro whose file name is coded in OBJects= contains administrative
commands instead of client commands. (Use Administrative Schedules for
executing administrative commands.)
Another cause: An ADSMv2 HSM defect in which it caused the password
entry in /etc/security/adsm to be obliterated.
Accompanied by message ANR0444W.
ANR0487W Session ____ for node ____ (<OpsysType>) terminated - preempted by
another operation.
As in performing a Backup, and someone initiated a Restore or Retrieve,
which has a higher priority. A Backup will not necessarily be
terminated by such an event: a v3+ client will stick around and try to
pick up where it left off, as is evidenced by the following msg in its
backup log: ANS1809E Session is lost; initializing session reopen
procedure. (However, a v2 client will suffer "ANS4017E Session
rejected: TCP/IP connection failure" and start its backup all over
again.)
ANR0492I All drives in use. Session NNNN for node NODENAME (AIX) being
preempted by higher priority operation.
Has been seen happen to a Backup operation, as when another Backup
needs to get its data from HSM space. In that HSM retrieval is a
higher-priority operation, it unfortunately shoves the important Backup out of
the way.
If the Backup interrupted was performed via ADSM scheduling, it will
resume *if* you coded a good Duration (window) value such that it has
an opportunity to restart itself thereafter.
ANR0511I Session ____ opened output volume ______.
Later followed by: ANR0514I Session ____ closed volume ______. These
messages may not be long apart, reflecting the current design of ITSM,
that the volume is closed after each transaction.
ANR0520W Transaction failed for session ____ for node ____ (<Platform>) -
storage pool ____ is not defined.
Is this a Lanfree backup through the TSM storage agent (dsmsta), and the
storage pool had been deleted and redefined? Try recycling the storage
agent, to cause the agent to re-find the stgpool. (Realize that TSM uses
internal identifiers for storage pools, which are reset when deleting
and redefining the storage pool. Stopping and restarting dsmsta resynchs
the storage agent with the TSM server, allowing the storage agent to
find the storage pool. This prevents the ANR0520W message, and clears up
the ANS1329S message which is the result of not finding the storage
pool.)
ANR0521W Transaction failed for session NN for node ____ (<OpSys>) - object
excluded by size in storage pool ________ and all successor pools.
Most likely: The stgpool MAXSize value was configured to not allow a
file that big to be stored - and no successor storage pools to the one
specified can accept the large file. Or: You have a disk storage pool
with a single volume or multiple volumes whose combined size is too
small to accommodate the huge incoming file. Normally, however, you
would have a tape storage pool below the disk pool, and the tape pool
would have "infinite" capacity, so incoming file size would not be an
issue. But perhaps migration to the tape pool is defeated, or that pool
is read-only or is depleted of tapes. Check it out. See also: ANS1310E
Note that the message unhelpfully fails to identify the too-big object,
which thwarts communication with the client administrator.
ANR0522W Transaction failed for session ____ for node ____ - no space available
in storage pool ______ and all successor pools.
Make sure that your policy set is activated.
Are your storage pools read/write?
Is the object being stored larger than the space available in the
storage pools.
Does your Storage Pool MAXSize value prohibit the store operation?
Are your storage pool volumes full and your Stgpool MAXSCRatch value is
insufficient?
ADSM wants to store associated directories in either the management
class specified by DIRMc or the class with the longest retention period:
is there space there?
Are your tape library and drives working? (You might see a lot of mount
requests denied.) Do 'SHow LIBrary' to check their status.
In general, there should be associated error messages in your server
Activity Log which would indicate the true problem.
Note that a large file that was spanning from the end of the last volume
available in the storage pool, which cannot be completed for lack of
further volumes, has to be logically expunged from the volume where its
writing left off.
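The checklist above can be run as a quick diagnostic sequence from an
administrative session (the pool name is a placeholder for your own):

```
Query STGpool BACKUPPOOL F=D      (Access Read/Write? Pct Util? MAXSCRatch
                                   reached? Next Storage Pool defined?)
Query Volume STGpool=BACKUPPOOL   (any volumes Read-Only or Unavailable?)
SHow LIBrary                      (library and drive status)
Query ACtlog BEGINDate=TODAY      (look for accompanying error messages)
```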
ANR0534W Transaction failed for session nnnn for node xxxx - size estimate
exceeded and server is unable to obtain storage
The delta between the low Pct Migr and the high Pct Util represents the
amount of space that has been reserved for other clients that are
concurrently being backed up: This is bitfile (Aggregate) space that has
been allocated in the storage pool for transactions that are currently
in flight.
Has been seen where client compression is turned on, and client has
large or many compressed files: *SM is fooled as compression increases
the size of an already-compressed file. (Remember that compression may
be turned on in the client or via a Client Option Set or mandated on the
server node definition.)
Prior to a client sending a file, the space (same as allocated on
client) is allocated in the TSM server's disk storage pool. If caching
is active in the disk storage pool, and files need to be removed to make
space, they are - up to the limit indicated by the incoming storage
space request. But if the file grows in compression (client has
COMPRESSIon=Yes and COMPRESSAlways=Yes), the cleared space is
insufficient to contain the incoming data.
Look also for a filled storage pool.
Watch out when using TDPs: their size calculation may be inaccurate,
which causes problems when the stgpool uses caching.
Some customers report this error occurring when Migration is running and
a backup is performed.
See also IBM site TechNote 1156827.
ANR0535W Transaction failed for session nnnn for node xxxx (OS_Type) -
insufficient mount points available to satisfy the request.
As when a client backup session has been running, then along comes a
BAckup STGpool that needs a drive, which denies it to the client
session. The client session, however, does not terminate: only the
current data-sending transaction failed...the client will emit message
ANS4312E Server media mount not possible, then wait for the mount with
message ANS4118I Waiting for mount of offline media.
See: Drives, not all in library being used
ANR0538I A resource waiter has been aborted.
Resource refers to a TSM server lock or synchronization object. The
server terminated the wait for such a resource because it has been too
long (relative to the server RESOURCETimeout value). This could cause a
process or session to fail. This situation may be due to a server
deadlock. May be accompanied by messages which illuminate the
situation, such as ANR4513E.
ANR0540W Retrieve or restore failed for session NNN for node ____ (AIX) - data
integrity error detected.
Accompanied by: ANR9999D smnqr.c(1132): Bitfile 11975366 not found for
retrieval. Has been seen when the volume containing the data is in a
Destroyed state.
ANR0548W Retrieve or restore failed for session session number for node ____
(<Platform>) processing filespace ____ for file ____ stored as
Backup|Archive - data integrity error detected.
Formerly ANR9999D SMNQR(1132): BITFILE XXXXX NOT FOUND FOR RETRIEVAL,
replaced with more descriptive message per APAR IY09212, 2000/03/21.
The server ends a file retrieval operation for the specified session
because an internal database integrity error has been encountered on the
server. May be accompanied by msg ANR1424W, telling of a volume
unavailable because its access mode is "destroyed". Otherwise, you
should re-try the restore or retrieve operation and if the file is also
backed up in a copy storage pool, the operation will attempt to read the
file from the alternate location. If you don't have a copy storage pool
(shame!) then you could try Move Data several times over several drives
to see if it might finally copy the bad file. You can use the Query
CONtent command with the COPied= operand to check for files also being
in a copy storage pool.
Nearby this message in the Activity Log you should see some indications
of I/O errors or other problems, naming a specific volume, which is the
one containing the file in trouble. If that volume is not Destroyed, do
'Query CONtent VolName ... DAmaged=Yes' and see if any Damaged files on
it. See "Damaged" for handling info.
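A minimal checking sequence for the above, with VolName standing in for
the volume named in the accompanying error messages:

```
Query Volume VolName F=D           (is the access mode "destroyed"?)
Query CONtent VolName DAmaged=Yes  (any files flagged as damaged?)
Query CONtent VolName COPied=No    (files lacking a copy stgpool backup?)
```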
ANR0566W Retrieve or restore failed for session ____ for node _____ (OpSys) -
file was deleted from data storage during retrieval.
This can be due to the file being in the progress of migration to a
lower level storage pool in a hierarchy at the time that its retrieval
is being requested by a client. Until TSM can be redesigned to
accommodate this circumstance, compensate by avoiding a migration or by
activating caching on the migrating-from storage pool.
ANR0567W Retrieve or restore failed for session ____ for node ____ (<Platform>)
insufficient mount points available to satisfy the request.
See: "Drives, not all in library being used"
ANR0670W EXPORT SERVER: Transaction failed - storage media inaccessible.
This is a conclusion message: There should be an accompanying message,
such as ANR1420W, explaining the problem.
ANR0692E EXPORT NODE: Out of space on sequential media, scratch media could
not be mounted.
Seen when Exporting a filespace whose size is such that multiple tapes
will be required to contain it, but you specified too few explicit
VOLumenames instead of using Scratch=Yes.
ANR0812I Inventory file expiration process 330 completed: examined 125455
objects, deleting 1 backup objects, 0 archive objects,
0 DB backup volumes, and 0 recovery plan files.
0 errors were encountered.
The "examined ___ objects" refers to the number of Inactive filespace
objects that were examined. The "deleting ___ backup objects" number
is typically much less than the number examined, thus indicating that
expiration is not looking at just the Inactive objects older than the
expiration periods of their respective management classes. The "DB
backup volumes" and "recovery plan files" values pertain to DRM, the
former value according to the 'Set DRMDBBackupexpiredays __' spec and
the latter per the 'Set DRMRPFEXpiredays __' spec.
If errors were encountered, examine preceding messages in your Activity
Log (do not invoke Expire Inventory with the Quiet option for this).
Typically accompanied by: ANR9999D imexp.c(1350): Error 8 deleting
expired object (011531735) - deletion will be re-tried.
This message with this return code generally means that ADSM could not
find the inventory information for this file. When you get this
message, inventory expiration will take longer while we search for any
other references in this data base to this object and remove them. You
should not see the same message for this same object number the next
time you run inventory expiration unless we are unable to remove the
other references to this object.
ANR0836W No query restore processing session Session_ID for node Node_Name and
Filespace_Name failed to retrieve file High_Level_File_Name
Low_Level_File_Name - file being skipped.
No Query Restore processing failed to retrieve the specified file
because of an error: the file will be skipped.
This may simply be that you were doing a wildcard restore, which is
considered a "no query restore", and TSM is simply reporting files that
are unavailable. However, there may be more serious, underlying causes.
Were there accompanying messages in the Activity Log to explain where
the server expected to find the file, or what problem it had? This could
be a situation like a volume in Destroyed or Unavailable status, or a
similar unavailability problem. If your database is not huge, a 'Select
Volume_Name From Contents Where File_Name=______' may be feasible to
determine where the file backup lives; or you could more basically do a
Query Volume looking for anomalous status values.
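The Contents-table lookup suggested above might look like the following,
where the file name is of course a placeholder; note that unindexed
selects against Contents can run a very long time on a large database:

```
SELECT volume_name, node_name, filespace_name FROM contents
  WHERE file_name='/home/user/lostfile.dat'
```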
ANR0874E Backup object 0.43293636 not found during inventory processing.
Occurs during EXPIRE INVENTORY. Surrounding Activity Log messages may
explain the problem. You may do 'SHow BFObject <ObjectID>' and 'SHow
INVObject <ObjectID>' to identify the object. You may be able to audit
the volume on which the file occurs to relatively simply resolve the
problem; or you may have to contact vendor support (and possibly run
an Audit DB per their specs).
ANR0905Q Options file dsmserv.opt not found
Did you start the server from the server directory? Is the file in the
server directory? What is different about the way that you are starting
the server this time from all the preceding times that it was
successfully started?
ANR0981E The server database must be restored before the server can be started.
The server believes that the RESTORE DB operation was incomplete. If
the Restore operation reported success, then suspect your disk
subsystem, which may have gone defective: Get your operating system and
hardware people involved. Have them look for error logs. They can use
utilities and/or diagnostics to test the integrity of the disk subsystem.
If necessary, and possible, try using a different type of disk
subsystem.
ANR0985I Process NNN for AUDIT LIBRARY running in the BACKGROUND completed with
completion state FAILURE
If you're lucky, those are new volumes that were inserted without having
been labeled. Otherwise they may be old volumes which, in the classic
shared library scenario, were overwritten by the ogre you're sharing the
library with. Might also occur if the SCSI address or Element address
of the drive was changed. Some customers report upgrading the tape
device driver (e.g., Atape) and the problem went away.
ANR0985I Process NNN for LABEL LIBVOLUME running in the BACKGROUND completed
with completion state SUCCESS at 10:19:40.
Beware that though it says Success, some volumes may have failed to
initialize. Look for ANR8806E and accompanying failure messages in the
Activity Log.
ANR0986I Process 61 for SPACE RECLAMATION running in the BACKGROUND processed
136 items for a total of 535,063,780 bytes with a completion state of
FAILURE at 14:14:36.
Simply means that the task was interrupted and stopped: data transferred
up to that point is on its new volumes and is just fine. The failure can
be caused by the Reclamation process being preempted by a higher
priority process, such as an HSM Recall: check your Activity Log.
Accompanied by messages ANR1080W, ANR1440I.
ANR1025W Migration process ___ terminated for storage pool ______ - insufficient
space in subordinate storage pool.
Migration is trying to move data into the next level down in your
storage pool hierarchy, but there isn't enough space in that next level
for the data movement to occur. A classic cause of this is an inadequate
MAXSCRatch value on the destination stgpool, or insufficient scratches
in the library. A less common cause is the lower stgpools not being
read/write.
ANR1081W Space reclamation terminated for volume ______ - storage media
inaccessible.
Seen on a 3494 when the robot actuator hand is losing its gripping
power, failing to pull tapes out of cells. On a 3494 you should also be
seeing an Intervention Required on its operator station. You'll have to
move the problem tape from that cell to cell 1 for the robot to clear
the bad tape status condition, then change the *SM Unavailable status.
ANR1082W Space reclamation terminated for volume ____ - insufficient number of
mount points available for removable media.
See handling under similar ANR1134W.
ANR1086W Space reclamation terminated for volume ______ - insufficient space in
storage pool.
Seen when Stgpool MAXSCRatch is inadequate. See the "MAXSCRatch" topic
to fully understand the requirements.
ANR1117W Error initiating migration for storage pool RECLAIMPOOL - internal
server error detected.
Seen accompanied by:
ANR9999D asutil.c(220): Pool id 4 not sequential-archival strategy.
ANR9999D afmigr.c(644): Error locating pool descriptor for pool id 4.
You have defined a stg pool of DEVCLASS DISK for a RECLAIMSTGPOOL of a
primary tape pool (presumably because that tape pool has only one tape
drive available to it). It won't work. ADSM insists that
RECLAIMSTGPOOLs must be from the FILE device class.
ANR1134W Migration terminated for storage pool ________ - insufficient number of
mount points available for removable media.
You did actually create the tape device in the operating system and then
define it to *SM, yes? Remember that with "rm" drives, the operating
system will already have its own, usual driver in place to handle the
drive as an rmt device: you have to dissociate that so that the "mt"
device driver can control the drive.
In a manual library, you likely need to mount the tape, dude.
Otherwise do Query DRive to assure that your drives are online and
available and not already in use. Check your storage pool definitions
to assure that hierarchical migration actually has somewhere to go.
Assure that your MAXSCRatch value is appropriately high.
See also "Insufficient mount points, 3590" in the CONDITIONS section,
further down in this document.
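A quick way to check the mount point prerequisites described above (the
pool name is a placeholder):

```
Query DRive F=D              (all drives On-Line?)
Query MOunt                  (what is currently mounted and in use)
Query PATH F=D               (TSM 5.x: server-to-drive paths online?)
Query STGpool TAPEPOOL F=D   (MAXSCRatch high enough? Next pool sane?)
```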
ANR1142I Moving data for collocation cluster 3 of 10 on volume ______.
A tape reclamation is in progress.
ANR1144W Move data process terminated for volume ______ - storage media
inaccessible.
Seen on 3590E drives where the tape was being mounted but, because the
3590E has two springs on the mouth flap instead of one on the 3590B
drives, the gripper could not push the tape into the drive mouth with
sufficient force to get the drive to take it in. So the tape remains
trapped in limbo.
See also: ANR8447E
ANR1149W Move data process terminated for volume ______ - insufficient space in
target storage pool.
Usually, because you are down to your last tape, either in the scratch
pool or defined to a storage pool.
ANR1163W Offsite volume ______ still contains files which could not be moved.
Indicates that a reclamation or MOVe Data was attempted on an offsite
volume, but it still contained data. When MOVe Data or reclamation is
performed for an offsite volume, files are obtained from a primary
storage pool or possibly from another copy storage pool. Message
ANR1163W is issued when residual files are left on the offsite volume
after this move is completed. This typically occurs when the server
cannot copy files from another storage pool because they reside on
volumes that are unavailable or offline. Another possibility is that
files in the source storage pool are marked as damaged and therefore do
not get moved. Check your activity log for messages indicating the
reason why the files were not moved. I would not expect an
'AUDit Volume ... Fix=Yes' to correct the problem, as you are indirectly
using proxy volumes; but you might give it a shot. It might also be the
case that no onsite copy was made of some of the data that went offsite.
Do a Query CONtent on the subject volume and research from there.
ANR1171W Unable to move files associated with node ____, filespace ____ fsId _
on volume ______ due to restore in progress.
Look for an active restore via 'Query SEssion', or a held-off
restartable restore via 'Query RESTore'.
ANR1173E Space reclamation for offsite volume(s) cannot copy file in storage
pool storage ______: Node ____, Type ____, File space ____, fsId ____,
File name _____.
Running expiration or migration at the same time as your offsite volume
reclamation can cause this condition.
Can also occur if you recently switched your offsite pool from
non-collocated to collocated? During offsite reclamation for a
collocated pool, the server checks the clustering information for the
objects on the volume being reclaimed. Clustering information is
typically based upon node name, and filespace. Non-collocation means
that multiple nodes can be mixed on a given storage pool volume. During
offsite reclamation processing, the server moves only one node's data at
a time: the objects for other nodes will be skipped and the ANR1173E
message will appear. Multiple passes are required to fully clear the
offsite volume. To deal with this you can do one of:
- Increase the MOVEBatchsize server option value.
- Issue MOVe Data or MOVe NODEdata, depending upon how many nodes are
on that volume and/or the nature of the storage pool.
When a primary disk pool is involved, an additional step is to have a
primary tape pool to which it can migrate. The logic for offsite volume
reclamation is slightly different when the copy resides in a tape
(sequential) pool instead of on disk: the server retries the reclamation
(reprocesses the volume) more when the copies are on tape.
ANR1216E BACKUP STGPOOL: Process ___ terminated - storage media
inaccessible. (SESSION: ____, PROCESS: ___)
Seen where a 3590 tape drive problem (which causes an Int Req condition
on a 3494 library) incites the library, unto itself, to eject the tape,
leaving TSM to think the tape is still in there. The 3494 display panel
will have an Int Req message like: "Damaged volser (001018) ejected to
the convenience I/O stations (03-01-2005 22:12:55)". In this instance,
the tape leader block was missing - snapped off in the drive. You need
to at least put the drive offline, get it repaired, and then check the
storage pool tape out (without eject), then check it in as Private.
ANR1221E command: Process <process_id> terminated - insufficient space in target
copy storage pool.
Assure that there are scratch volumes available in the copy storage
pool, or at least enough space left on pre-existing tapes that the
BAckup STGpool can proceed. (Remember that though there may be space
left on existing volumes, if the next file to be backed up is too large
to fit, insufficient space results. Msg ANR1405W should appear if no
scratch volume available.)
Check your MAXSCRatch value to assure that you are not artificially
limiting how many tapes may be used. Beware not fully understanding
what MAXSCRatch really means (see the definition in this doc), and
setting the value too low: if in doubt, boost it - which may cause the
problem to disappear. You may intentionally have MAXSCRatch=0 so that
operations will use only volumes specifically Defined to that storage
pool, which is fine.
If using a tape library, assure that your scratches have the proper
category codes to be used by the Devclass.
Unlikely, but: Do 'Query STGpool <PoolName> F=D' and assure "Access:
Read/Write".
The destination storage pool may numerically have enough space, but it
is necessary that its volumes are on-site for the backup to occur.
If the empty volumes are Defined to the storage pool (rather than using
Scratches), assure that your free volumes are read-write and not
Unavailable or Offsite.
As always, make sure you are doing regular Expirations to assure free
tapes.
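The scratch-availability checks above can be sketched as follows (the
copy pool name is a placeholder, and the Status value in the Libvolumes
table is assumed to be 'Scratch'):

```
Query STGpool COPYPOOL F=D   (MAXSCRatch vs. number of scratches used)
Query LIBVolume              (Status column: any Scratch volumes left?)
SELECT COUNT(*) FROM libvolumes WHERE status='Scratch'
Query Volume STGpool=COPYPOOL ACCess=READWrite
```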
ANR1315E Vary-on failed for disk volume ...... invalid label block
Could occur, for example, if something failed to mount in your Unix
system, or the volume or file system or file is missing.
Can also occur if you rename the volume in ADSM or the operating system,
but not both, making for inconsistency.
See "Raw logical volume" entry for notes on how *SM uses such a volume.
ANR1339W Session SESSION_ID underestimated size allocation request - _NN_ MB
more space was allocated to allow the operation to continue.
New msg with APAR IC37437 to deal with performance degradation involved
in recomputing server DISK storage pool space estimation as files move
from client to server or within server. The server will now use
subsequently larger allocation sizes to more efficiently and optimally
store the file in the DISK type storage pool. This was supposed to be
fully incorporated into 5.2.2.0; but swg21165187 cites a subsequent
"fudge" - which customers report doesn't help on clients < 5.2.2.0.
ANR1341I Scratch volume _______ has been deleted from storage pool ________.
The corollary of message ANR1340I.
Because the REUsedelay period has expired, or a DELete Volume was
performed, or Reclamation emptied and returned it to scratch status.
Note that DELete VOLHistory does not cause this message, because such
volumes are not in storage pools. A less common reason for this message
is that a scratch volume is mounted for an operation, but the operation
does not proceed: nothing was written to the unused tape, and so it goes
right back to scratch.
ANR1342I Scratch volume 000122 is now pending - volume will be deleted from
storage pool ________ after the reuse delay period for this storage
pool has elapsed.
Will be followed after that many days by "ANR1341I Scratch volume
______ has been deleted from storage pool ________."
ANR1343I Unable to delete scratch volume
Reclamation is still running: volumes which it has emptied remain locked
until the reclamation ends.
ANR1401W Mount request denied for volume ______ - mount failed.
This is a summary message: the actual problem should be spelled out in
preceding messages, such as ANR1144W storage media inaccessible.
Note: In a shared library environment, beware restarting the TSM server
application on the library manager TSM server system but not restarting
the TSM server application on the other system. In a shared environment,
both TSM server applications have to be restarted if one is.
LTO2 had an early microcode problem wherein a tape volume would fill and
the drive would not reset its EOT (end-of-tape) flag. Subsequently, when
a scratch volume is being mounted in the same drive and TSM tries to
write the BOT (beginning-of-tape) information, the drive returns to the
driver (and then TSM) that it is at the EOT (end-of-tape) and therefore
TSM is unable to write to the tape. This is an LTO II drive microcode
problem which is fixed at microcode 37E1 level and above.
ANR1402W Mount request denied for volume 000003 - volume unavailable.
As when doing 'DSMSERV DISPlay DBBackupvolumes
DEVclass=OURLIBR.DEVC_3590 VOL=000003'. The volume probably has a
category code which is not one defined as belonging to this library.
ANR1405W Scratch volume mount request denied - no scratch volume available.
Could mean exactly that. Had you checked in tapes of the right type,
as Scratch? Is your MAXSCRatch value realistic?
For 3995 (optical storage): You checked in your scratch volumes as type
OPTICAL whereas your Devclass Device Type is WORM, or vice-versa.
ANR1410W Access mode for volume 000081 now set to "unavailable".
Seen when a tape was to be mounted on a library drive, but the Load
failed.
ANR1411W Access mode for volume ______ now set to "read-only" due to write
error.
Often accompanied by server msg "ANR8359E Media fault detected ..." and
client msg "ANS4301E Server detected system error".
Most usually the result of dirty tape heads...which can occur if a
manual library has not been manually cleaned or in an automatic library
the automatic cleaning has been disabled or cleaning cartridges have
been exhausted. Could also be a dirty or defective tape.
If using DLT8000 drives, you must use DLT type IV or better cartridges.
Make sure you use the TSM driver software to control the drive, rather
than the driver from the operating system.
Seen particularly in Compaq servers with DLT8000 drives, due to problems
with the standard NT SCSI adapter drivers when the Compaq SCSI
Controller drivers from the SSD were NOT used. The standard NT SCSI
drivers seem to have problems communicating with DLT8000 drives; but
DLT4000 and DLT7000 worked fine.
Note that 'Query Volume' will show "In Error State?: Yes" for such a
volume, and that a Copy Storage Pool volume whose data has all been
removed will remain in Pending state indefinitely because of the error:
only a 'DELete Volume' will release it.
ANR1420W Read access denied for volume ______ - volume access mode =
"unavailable".
Perhaps seen in an Export operation (with msg ANR0670W). Volumes which
are to participate in such operations need to have an access mode which
permits them to be read.
ANR1423W Scratch volume 123456 is empty but will not be deleted - volume access
mode is "offsite".
The volume's contents have evaporated through expirations and the like
such that the volume is finally empty. But because it is offsite, it
cannot be re-used. It has to be brought back onsite, and then its
status can be changed to read-only so that it will be deleted.
Or you could strong-arm the process by performing:
UPD VOL * ACC=READW WHERESTG=____ WHERESTATUS=EMPTY WHEREACC=OFfsite
ANR1425W Scratch volume 000002 is empty but will not be deleted - volume
state is "mountablenotinlib".
Disaster Recovery state. Do 'MOVe MEDia WHEREState=MOUNTABLENotinlib'
to delete the scratch empty volume.
ANR1440I All drives in use. Process 61 being preempted by higher priority
operation.
As when you are doing a Reclamation and an HSM Recall, Retrieve, or
Restore process has arisen.
ANR1469E DEFINE SCRIPT: Command script _____, Line ____ is an INVALID command
Commonly, a server script employs a GOTO, and the target label does not
have a trailing colon, or the label is more than 30 chars long.
ANR1639I Attributes changed for node <TSM_Nodename>: TCP Address from to
<IP_Address>. (SESSION: ____)
Possibly because the client nodename or IP address differs from their
values in the prior session with the client (which may happen in a
cluster failover). Or maybe the client is employing a Nodename spec
which does not match a reverse lookup on its IP address. See: GUID
ANR2020E UPDATE SCHEDULE: Invalid parameter - <Whatever>'
Typically because you flubbed the quoting of the OBjects parameter.
See the guidelines in the Admin Ref manual under DEFine SCHedule.
ANR2034E QUERY FILESPACE: No match found using this criteria.
And you expected a match for the filespace name you typed. It is
probably the case that you are unwittingly dealing with a Unicode
filespace, where the name created by the PC is itself Unicoded. You can
deal with this in one of two ways:
1. Use "NAMEType=UNIcode" and enter the filespace name accordingly.
2. Simply do 'Query FIlespace' with no operands and find the filespace
in the full output.
ANR2099I Administrative userid __________ defined for OWNER access to node ____
It is a new feature in server level 3.1.2.1, part of the support for the
web B/A client, which is an administrator client. The new admin id lets
the client owner be a limited power admin to run the web B/A client for
his own machine. You can suppress the new admin id's by adding
'USER=NONE' to the 'REGister Node' command.
ANR2100I Activity log process has started.
This is the entry in the activity log written when the server starts
logging to it (at server restart).
ANR2102I Activity log pruning started: removing entries prior to MM/DD/YYYY
HH:MM:SS
ANR2103I Activity log pruning completed: NNN records removed.
The above 2 messages result from Activity Log pruning, as controlled
by the 'Set ACTlogretention N_Days' value. Changing the value
downward tends to kick off the space reclamation in the ADSM server
database, where the Activity Log lives.
ANR2111W BACKUP STGPOOL: No data to process.
When attempting to do a 'BAckup STGpool' from a primary disk pool to a
copy storage pool. This is not abnormal, and typically occurs when the
data that had been in the disk pool had already been migrated to the next
pool in its defined hierarchy such that there is no longer any new data
to be backed up from the disk pool. Do a 'Query STGpool' to see - and
don't get thrown off by cached data in the disk pool.
ANR2152E REMOVE NODE: Inventory references still exist for node ____.
You attempted REMove Node <NodeName>, but that could not be fulfilled
because filespaces still exist for the node: they must be removed,
first, as via DELete FIlespace.
ANR8212W Unable to resolve address for <Hostname>. (SESSION: ___)
Seeming DNS issue. If no one has changed hostnames in the client options
files, then suspect the DNS service which provides lookup service to
that TSM server system. Make sure no one has changed the
/etc/resolv.conf on that system. Use the 'host' and/or 'dig' commands to
check things out. If no cause can be found, try putting the full
hostname into the options file - including the domain. If still
nothing, try putting the IPaddress rather than host network name into
the options file. Only as a final resort would I recommend employing the
DNSLOOKUP NO server option choice. That is, TSM is calling attention to
systemic problems in your shop, which should be fixed rather than
circumvented.
ANR2321W Audit volume process terminated for volume ______ - storage media
inaccessible.
Seen in libraries where the tape is not in a place where the library can
obtain and mount it. Expect TSM to mark the volume Unavailable upon
finding it inaccessible. Problem can be a defective drive.
ANR2361E BACKUP DB A full database backup is required.
1) You are running BAckup DB for the first time, and don't have a full
backup to start with.
2) You have already taken 32 incremental backups. The maximum number
of incremental backups that ADSM allows between full backups is 32.
ANR2362E command: DATABASE BACKUP IS NOT CURRENTLY POSSIBLE - COMPRESSED LOG
RECORDS EXIST IN THE CURRENT TRANSACTION CHECKPOINT.
A BAckup DB command was issued but a database backup cannot be started.
Log compression has recently taken place, and the compressed log records
are still part of the current transaction checkpoint. After these log
records are no longer part of the current checkpoint a backup can take
place. Reissue the command at a later time.
What this is saying is that there is something running which has
database work-in-progress tied up. You can wait, or see what process or
session is causing this, and possibly cancel it if it persists in
blocking server db backups. A server restart will certainly clear
things.
ANR2391E BACKUP DEVCONFIG: Server could not write device configuration
information to <Windows_Networked_Drive>.
The TSM "service" process, which runs under a "service context" cannot
see networked drives - a Microsoft issue. Drives are networked and
visible in a "user" context.
ANR2404E DEFINE DBVOLUME: Volume /dev/rdsk/c0t4d0s0 is not available. or:
Anr2404e volume /dev/rdsk/c0t1d0s1 not available return code 14.
See "Raw Logical volume in Sun/Solaris".
ANR2404E - DEFINE VOLUME: Volume [volname] is not available.
Can occur in AIX 4.2 after upgrading from AIX 4.1 such that server
modules "dsmserv.42" and "dsmfmt.42" must be put in place of "dsmserv"
and "dsmfmt" to allow the use of files greater than 2GB in size.
Ref: APAR IX75955.
ANR2411E MOVE DATA: Unable to access associated volume ______ - access mode is
set to "unavailable".
Move Data was invoked on a volume; but the first file on the volume is
spanned from a prior volume whose access mode prevents access to it.
ANR2420E DELETE VOLUME: Space reclamation operation already in progress for
volume ______.
The reclamation process that worked on the volume you are trying to
delete is still running. Wait for it to finish, or cancel it.
Example: You kicked off a reclamation at 08:00. It tried to reclaim
volume 001931, but that volume had some bad files on it, so the volume
could not be emptied. You notice this at 11:00 and at that time do a
Move Data 001931. That yields the ANR2420E message, as the reclamation
process from 08:00, which worked on 001931, is still running.
ANR2434E DELETE DBVOLUME: Insufficient space on other database volumes to delete
volume /var/adsmserv/db.dsm.
Occurs when 'Query DB' Maximum Extension shows no further space left for
'EXTend DB': the 'DELete DBVolume' wants that space. So do a 'REDuce DB'
to placate ADSM, then 'EXTend DB' after the DELete DBVolume.
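The sequence can be sketched as a server macro. The 500 MB amounts here
are assumptions - match them to the size of the volume being deleted:

```
/* hypothetical amounts - match the size of the volume being removed */
REDuce DB 500
DELete DBVolume /var/adsmserv/db.dsm
EXTend DB 500
```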
ANR2438E <Command>: Insufficient database space would be available following a
reduction by the requested amount.
You are attempting to perform REDuce DB; but the database does not have
enough free space to reduce by the amount specified. See the "REDuce
DB" entry.
ANR2445E DELETE LOGVOLUME: Insufficient space on other recovery log volumes to
delete volume __________
To delete a log volume, Query LOG needs to show a Maximum Extension
value at least as large as the volume being deleted. Do a REDuce LOG
as needed.
ANR2452E DEFINE LOGVOLUME: Maximum recovery log capacity exceeded.
Per APAR IC15376, the recovery log should not exceed 5 GB (5440 MB).
ANR2561I Schedule prompter contacting ____ (session NNNN) to start a scheduled
operation.
As seen with client option "SCHEDMODe PRompted" for ordinary client
schedules, or per the server 'DEFine CLIENTAction' command, for the
server to communicate with the designated client to start a schedule.
ANR2576W An attempt was made to update an event record for a scheduled operation
which has already been executed - multiple client schedulers may be
active for node ________.
Check to see if you have any restartable restores going on for the
client in question: 'Query RESTore'. Cancel the session number if you
do, and see if that makes the problem go away.
ANR2579E Schedule A in domain B for node C failed (return code __).
Seen when a POSTSchedule or PRESchedule command failed: the only
indication that the command wasn't successful is a non-zero return code
in the schedule log. (In Unix, return code 1 typically accompanies
"Command not found".)
Or could be a problem with one file system.
Or a stale NFS file handle (as in mount no longer available).
Examine the dsmsched.log and dsmerror.log files.
May be accompanied by msg ANR1512E (q.v.).
ANR2622E DEFINE ASSOCIATION: No new node associations added
Are the nodes you attempted to associate in the same domain?
ANR2716E SCHEDULE PROMPTER WAS NOT ABLE TO CONTACT CLIENT node_name USING TYPE
address_type (high_address low_address)
The address_type value is usually 1, indicating TCP/IP.
The *SM server cannot connect to the client, possibly because:
- The scheduler process is not running on the client.
- The scheduler process cannot run because it is in a stopped state or
has an unfavorably low dispatching priority relative to other
processes demanding service from the operating system.
- Maybe there is a time shift between the client and server. (Remember
that the client schedule process only opens and listens to port 1501
during the schedule period.)
- Port 1501 is not open (on a firewall, if any).
- The client network address is mistyped in dsm.opt, so the server
tries to contact the client through the wrong address.
- If the scheduler is running under a specific NT account, check that
the account is not locked.
- The scheduler appears to be running, but it actually hangs.
- The client was booting at the scheduled time.
- There is a firewall between the server and the client, and port
numbers cannot be inherited.
That the web connection works is no guarantee that the scheduler works:
those services operate independently. If the client can connect to the
server, but not vice versa, that mostly points to one of the
above-mentioned reasons. Try ruling out each possibility above to find
the source of the problem.
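As a quick first check of the list above, a connectivity probe from the
server host can rule out firewall and address problems. A minimal
sketch, assuming a bash shell with /dev/tcp support and the GNU
'timeout' command; the host and port arguments are placeholders for
your client's address and prompted-mode port (normally 1501):

```shell
#!/bin/bash
# Probe whether a client's scheduler port accepts TCP connections
# from the TSM server host. Illustrative helper, not a TSM command.
check_sched_port() {
    host=$1 port=$2
    # bash's /dev/tcp pseudo-device attempts a TCP connect
    if timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "port $port on $host is reachable"
        return 0
    else
        echo "cannot connect to $host:$port - scheduler down, firewall, or wrong address"
        return 1
    fi
}
```

Remember that in prompted mode the client only listens on port 1501
during its schedule window, so a failed probe outside that window is not
by itself conclusive.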
ANR2812W License Audit completed - WARNING: Server is NOT in compliance with
license terms
You have more client demand for server sessions than you have licenses.
You have to (possibly acquire and) register more licenses.
ANR2841W Server is not in compliance with license terms.
As the vendor description says, begin by doing Query LICense to see
what's amiss. It may simply be that an "in use" number is greater than
the corresponding "licensed" value. For example, you defined a library,
overlooking the need to register a license for library use. You can
also do AUDit LICenses to reveal what's wrong. Do not just do 'REGister
LICense FILE=*lic*', as that may result in excessive licensing: find out
what the problem is, and treat that. A simple cause is that the named
FILE may not exist in the server directory.
May be caused by having attempted a 'MOVe DRMedia' where DRM is not
licensed: the attempt causes conflicting Query LICense state:
Is disaster recovery manager in use ?: Yes
Is disaster recovery manager licensed ?: No
That can be cleared by doing:
- Issue a Query DRMedia.
- For all the volumes that have a NotMountable state, perform the
following:
1. Update volume's access mode to offsite:
UPD VOL volumename ACCESS=OFFSITE
2. Update volume's ORM state to NULL:
UPD VOL volumename ORMSTATE=""
3. Update volume's access mode to readwrite:
UPD VOL volumename ACCESS=READWRITE
- After all volumes have been updated, perform the following:
1. Issue QUERY DRMEDIA to make sure all volumes are now in
"Mountable" state.
2. Issue AUDIT LICense
3. Issue Query LICense
The output of the Query LICense should now show that DRM is NOT in use.
An insidious cause is your computer's clock being wrong. See "REGister
LICense".
See also: ANR2812W
ANR2909E The SQL statement is incomplete; additional tokens are required.
Commonly occurs where you were so absorbed in coding the handling of SQL
column values that you forgot to add the "FROM <TableName>" at the end
of the SQL query.
ANR2914E SQL identifier token '<Whatever>' is too long; name or component
exceeds 18 characters.
*SM's SQL processing places this limit on identifiers. This error is
most commonly encountered where you entered a string - such as a storage
pool name - into a Select, neglecting to enclose it in single quotes.
ANR2938E The column '_____' is not allowed in this context; it must either be
named in the GROUP BY clause or be nested within an aggregate function.
Usually because you coded a SELECT with a bare column name plus a Sum()
of another column: the two conflict in that Sum() asks for one total
aggregated across all rows, while the bare column name asks for a value
from every row. Either name that column in a GROUP BY clause or remove
it.
ANR2940E The reference 'FSERV.STGP_COPY' is an unknown SQL column name.
In a Select, you probably provided an object name (volume name, stgpool
name) without enclosing single quotes such that the SQL processor
thought it to be a column name.
ANR2958E SQL temporary table storage has been exhausted.
A Select has invoked the SQL processor, which for this query needs to
use work space within the ADSM database...but it needs contiguous space,
probably at the end, and there isn't enough.
Per APAR IY08737:
Documentation in the Admin Reference, Admin Guide, and the Message
manual need to be updated to indicate the temporary table is created in
the Maximum Reduction location of the DB and large Query's, Select and
Web Administrative Interface commands will fail with ANR2958E SQL table
storage exhausted if the Maximum Reduction is 0 or is not large enough
to fulfill the Query or Select command.
Otherwise, consider adding a volume to the database for the duration of
the Select processing, thereafter Delete it.
ANR3354W Locally defined administrative schedule XXXX is active and cannot be
replaced with a definition from the configuration manager.
An active admin schedule was not recorded as a managed schedule (i.e.,
was locally defined) and therefore could not be replaced during refresh
processing. To verify that the schedule is locally defined, issue the
Query Schedule T=A F=D command and look at the "Managing Profile" field;
if this field is empty, the schedule is locally defined rather than
managed. Why is the schedule treated as locally defined even though it
was created during refresh processing? Perhaps the refresh processing
failed after creating the schedule; in this situation the new schedule
will not be marked as managed and therefore will be treated as locally
defined. Another possibility is that after the schedule was created,
you deleted the subscription to the managing profile, leaving behind the
administrative schedule which would now be treated as locally defined.
ANR4306I AUDITDB: Processed NNNN database entries (cumulative).
Progress message during a 'dsmserv auditdb'. Customers have reported
that this message may repeat with the same number of entries - but will
ultimately go on and finish. Be reasonably patient. There are some
circumstances where this will go on indefinitely, however.
ANR4513E A database lock conflict was encountered
May be accompanied by ANR0538I.
A lock conflict indicates that multiple processes and/or sessions are
contending for the same TSM db area. Do Query SEssion and Query PRocess
to look for such. Some things, like Expiration and Delete Volume, are
heavy db updaters, where it's necessary to keep too much else from
happening.
ANR4556W Warning: the database backup operation did not free sufficient recovery
log space to lower utilization below the database backup trigger. The
recovery log size may need to be increased.
Well, the DBBackuptrigger event occurred because something was putting a
lot of demand on the Recovery Log; that workload is likely still running
while the triggered DB backup runs, making Recovery Log relief
problematic.
ANR4571E Database backup/restore terminated - insufficient number of mount
points available for removable media.
See: "Drives, not all in library being used"
ANR4639I Restored nnnnn of nnnnn database pages.
Message emitted every 30 seconds during a 'dsmserv restore db', to show
restoral progress.
ANR4706W Unable to open file CommandLineBeanInfo.class to satisfy web session 31
From the Messages manual:
Explanation: A web browser requested a file and the server could not
find the file on the local file system. Special note, some files
requested from the server are not valid and are caused by a browser
error. For example, a request for CommandLineBeanInfo.class is not a
valid file request. However, a request of a GIF image or HTML page
should not produce this error.
From APAR: IX86373
The problem is that Internet Explorer thinks that the ADSM CommandLine
applet is a JavaBean and requests a file that does not exist. This error
can be ignored. As for resolving the error message created by Internet
Explorer, this is a problem with the browser not with ADSM. The applet
is not a JavaBean. Please note, error messages about missing GIF images
or HTML files should not be ignored. The user should check that the
file exists. If the file does exist, verify the permissions of the
file.
ANR5014E Unable to open disk 0A09 - error 12 received from DISKID.
Return code 12 from DISKID means that the volume is not reserved.
Before the volume was defined to ADSM, it had to have been reserved.
The DSMINST and DSMMDISK execs both prepare volumes for use, but it
sounds like something has happened to the volume, and it is no longer
reserved.
ANR5099E Unable to initialize TCP/IP driver - error binding acceptor socket 0
(rc = 0)
In OS/390 (MVS), TCP/IP runs as a task separate from the operating
system itself. Perhaps someone restarted TCP/IP, leaving all dependent
applications stranded: TSM needs to be restarted to pick up on its
communications. If this occurred right after an IPL, look for your OS
people having "planted" an OS change which took effect with the IPL,
which now keeps *SM from working (maybe OS definitions, TCP/IP software,
execution libraries, etc.). Make sure that the TSM server started task
userid has an OMVS segment with a UID=0 and GID=0.
ANR6913W PREPARE: No volumes with backup data exist in copy storage pool ____.
Have you run Set DRMCOPYstgpool to specify that the copy storage pool is
to be managed by TSM? (Check with Query DRMSTatus.)
ANR7804I An ADSM server is already running from this directory.
The ADSM server has attempted to open the adsmserv.lock file in the
current directory but failed to do so because the file indicates that a
previously started server already has the file open.
Examine the contents of the adsmserv.lock file. The PID for the server
that is or was running is recorded in this file. Two ADSM servers
cannot be started from the same directory. You may remove the
adsmserv.lock file and attempt to start the server ONLY if a 'ps -e'
does not show the PID to be dsmserv.
If there is no adsmserv.lock file, then the more trivial cause is that
you inadvertently invoked dsmserv and you are not root (superuser).
ANR7807W Unable to get information for file _______. A file or directory in the
path name does not exist.
Typically seen in server restart where the administrator has renamed or
otherwise relocated the Recovery Log volume(s). The server has the
Recovery Log pathnames recorded in its database, and expects them to be
present at start-up. You need to reinstate the Recovery Log volumes in
your operating system file system.
ANR7823S Internal error LOGSEG871 detected.
ANR7837S Internal error LOGSEG871 detected.
Usually indicates that your Recovery Log is full during server restart.
You should have taken architectural steps, per the Admin Guide manual,
to prevent this; but now it is too late (but read on). If not at the
maximum Recovery Log size, refer to the Admin Guide to allocate more
space to your Recovery Log: use dsmfmt to create a new Recovery Log
volume; run 'dsmserv extend' specifying the added log volume; restart
your server. If you believe you are at the maximum Recovery Log size,
you still might be slightly under the absolute to-the-megabyte maximum
size, in which case you could allocate that many more megabytes and
hopefully get the server to start. If running in Rollforward mode, you
might instead run in Normal mode, but at the risk of transaction loss.
ANR7838S Server operation terminated - internal error BUF087 detected.
Will be accompanied by Activity Log messages like:
ANR9999D blkdisk.c(1198): Error writing to disk adsm-db1.primary.
ANR0252E Error writing logical page 141410 (physical page 141666) to
database volume adsm-db1.mirror.
Explanation (writev error): A file cannot be larger than the value
set by ulimit.
The server was started with Unix resource limits inadequate to encompass
the size of files which the server must deal with. Look first at the
file-size limit in the shell used to start the server ('ulimit' in
sh/ksh), which can be lower than the operating system limits defined for
the username (as per AIX /etc/security/limits) - those may also need
ceiling boosts. The TSM server is normally run as root: in some systems,
root may not have had ceiling limits of sufficient size. Sometimes the
TSM server is run as other than root: non-root users are typically given
much smaller Unix resource limits than root - much too small to run the
TSM server - and need adjustment.
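A minimal pre-start sanity check, assuming the server is launched from a
Bourne-family shell script; the "unlimited" target is an assumption -
your site may instead set a specific large ceiling:

```shell
# Verify the file-size ulimit before launching dsmserv, since a low
# limit can trigger BUF087. 'ulimit -f' reports 512-byte blocks (or
# the word "unlimited").
fsize_limit=$(ulimit -f)
if [ "$fsize_limit" != "unlimited" ]; then
    echo "WARNING: file-size ulimit is $fsize_limit 512-byte blocks" >&2
    # Attempt to raise the soft limit; fails if it exceeds the hard
    # limit set for this user (see /etc/security/limits on AIX).
    ulimit -f unlimited 2>/dev/null || \
        echo "could not raise fsize limit - check the user's hard limits" >&2
fi
```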
ANR7860W INSUFFICIENT SPACE AVAILABLE FOR FILE file name.
Usually when DEFine SPACETrigger is in effect and you have not left
sufficient space for the expansion you specified to actually occur.
ANR8208W TCP/IP driver unable to initialize due to error in BINDing to Port
1500, reason code _____
(The reason code is the return code from the TCP/IP bind API, which is
to say the operating system error number, so see your particular
operating system reference for the meaning.)
The Tivoli message description has good advice: use the 'netstat'
command (or perhaps 'lsof') to look for another process which has
control of that port, and that your server options file does not specify
more than one use of that port number.
It may be that you are attempting to start another instance of the
server. Make sure that the server process is gone before trying to
restart.
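A sketch of the suggested netstat check. The parsing function below
assumes 'netstat -an'-style output (BSD or Linux column layout); the
commented usage line is illustrative:

```shell
# Report whether anything is already listening on the server's TCP
# port (1500 by default). Reads netstat-style lines on stdin.
port_in_use() {
    # $1 = port number; matches ":1500 " or ".1500 " before LISTEN
    grep -E "[:.]$1[[:space:]].*LISTEN" >/dev/null
}
# Live usage sketch (assumes netstat is installed):
#   netstat -an | port_in_use 1500 && echo "port 1500 already taken"
```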
ANR8209E Unable to establish TCP/IP session with 127.0.0.1 - connection refused
Your dsm.opt has 127.0.0.1 set for TCPCLIENTAddress, possibly put there
by the GUI preference editor. Remove the unnecessary, incorrect option
and restart the client scheduler.
ANR8212W Unable to resolve address for ____. (SESSION: ___)
That sounds like a DNS lookup problem. If no one has changed the
options files, then suspect the DNS service which provides lookup
service to that TSM server system. Make sure no one has changed the
/etc/resolv.conf on that system. In Unix, use the 'host' and/or 'dig'
commands to check things out. If no cause can be found, try putting the
full hostname into the options file - including the domain. If still
nothing, try putting the IPaddress rather than host network name into
the options file. One customer encountered a weird hostname showing up,
with a newline in it: upon restarting the CAD service on that client,
the error went away (though someone may simply have undone a bad change
there).
See also: DNSLOOKUP
(May be accompanied by msg ANR2716E.)
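A small resolution check, sketched for a Linux server host using
getent (substitute the 'host' or 'dig' commands on other platforms);
the function name and usage are illustrative:

```shell
# Check that a node's network name resolves on this host, the same
# way most resolver libraries would (hosts file, then DNS).
resolve_check() {
    name=$1
    if getent hosts "$name" >/dev/null 2>&1; then
        getent hosts "$name"      # print the resolved address(es)
    else
        echo "cannot resolve '$name' - check /etc/resolv.conf and DNS" >&2
        return 1
    fi
}
# Usage sketch, with a hypothetical node name:
#   resolve_check client.example.com
```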
ANR8214E Session open with 111.222.333.444 failed due to connection refusal.
The TSM server is attempting to contact the TSM client on the system
identified by the given IP address, to initiate a scheduled session, as
with client option "SCHEDMODe PRompted" in effect, but could not.
Is the client system up, connected to the network, and the network
working (can you 'ping' it?)? If so, is the client schedule process (or
the CAD) not present, or stopped? Has the client been told to use an
unexpected port number? Is there a network problem, or a client problem
in performing data communications?
ANR8216W Error sending data on socket <number>. Reason 32.
The session was interrupted from the client end such that the connection
broke. For example, in an administrative session, the person performed a
Ctrl-C keyboard action to get out of the session, as in abandoning a
long-running Select operation at its "'C' to cancel" prompt, which
otherwise can take hours to terminate by itself. (The 32 is the Unix
errno: EPIPE, Broken
pipe.) The client dsmerror.log may contain "ANS1074W ***User Abort***".
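Several messages in this section quote a bare Unix errno. A tiny lookup
helper covering just the values discussed in this document (values as on
AIX, which most of these entries reference; EIDRM and ETIMEDOUT have
different numbers on Linux):

```shell
# Map an errno number from a TSM message to its name and the meaning
# these QuickFacts entries give it. Not exhaustive - see your
# platform's errno.h for anything else.
errno_name() {
    case $1 in
        22) echo "EINVAL - invalid argument (often a downlevel device driver)";;
        32) echo "EPIPE - broken pipe (peer closed the connection)";;
        36) echo "EIDRM - identifier removed (message queue deleted; AIX value)";;
        78) echo "ETIMEDOUT - request timed out (AIX value)";;
         *) echo "errno $1 - see your platform's errno.h";;
    esac
}
```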
ANR8220W TCP/IP driver is unable to set the window size to 655360 for client
_____________. The default value will be used.
The TCP window size defined in the client operating system's TCP/IP
configuration is smaller than the TCPWindowsize defined as the ADSM
client option. In AIX, use the 'no -a' command to see the operating
system ceiling (sb_max value).
ANR8263W End of tape detected on <device type> volume ______ in drive _____ of
library _____.
The server has detected end of tape for the specified volume. The volume
reached the end of tape before arriving at the estimated capacity value
specified in the device class.
The current process stops writing to the specified volume. The status of
the volume is set to read-only. The server accesses another volume if
more data must be stored.
Reduce the estimated capacity in the device class. You can issue the
Query Volume command to view the actual capacity of the volume after it
is full. Use the UPDate DEVclass command to change the estimated
capacity for the device class. Do not use the UPDate Volume command to
change the access mode.
ANR8290W Error sending data through shared memory. Reason <errno>.
The reason code is from the "msgrcv" system call, so it can be found
in /usr/include/sys/errno.h. In particular, 36 is EIDRM, which means
that someone (either the server or someone else using the ipcrm
command) has deleted the message queue which was being used for this
session. Look at the server activity log to see if this session was
canceled or timed out or what.
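A sketch of the inspection step: extract message-queue ids from
'ipcs -q'-style output, so an operator can decide whether an
'ipcrm -q <id>' is warranted. The column layout assumed here matches
util-linux 'ipcs -q' (key, msqid, owner, ...); adjust the field number
for other platforms:

```shell
# Print the msqid (second column) of each message queue listed in
# 'ipcs -q' output, skipping header lines (data lines begin with a
# hex key such as 0x00000000).
list_msqids() {
    awk '$1 ~ /^0x/ { print $2 }'
}
# Live usage sketch:
#   ipcs -q | list_msqids
```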
ANR8300E I/O error on library STK1 (OP=8401C058, CC=205, KEY=FF, ASC=FF,
ASCQ=FF, SENSE=**NONE**, Description=SCSI adapter failure)...
With DLT. This is actually a library failure, not a drive failure.
Do you have "fast load" enabled on the library? This is necessary to
run it with ADSM.
The OP code represents the IOCTL that the server issues to the device
driver. The values of opcodes are platform specific. You can decode
using the symbol table (option 2) in the device driver test tools,
i.e. mttest, optest and lbtest. You can also refer to the codes listed
in IBM Technote http://www.ibm.com/support/docview.wss?uid=swg21155888
("Decoding Opcodes for the TSM Device Driver IOCTLs").
ANR8300E I/O error on library STKLIB (OP=00006C02, CC=207, KEY=FF, ASC=FF,
ASCQ=FF, SENSE=**NONE**, Description=Device is not in a state capable
of performing request).
Seen with a computer word-length mis-match, as in the customer having
installed AIX 5.2 with a 32 bit kernel, but 64-bit TSM software. The
problem disappears with matching 64-bit AIX in place.
The problem may also be exhibited with message ANR8418E.
ANR8301E I/O error on library ______ (OP=________, SENSE=N/A, CC=________).
Notes: The OP and CC values are hexadecimal. OP is the operation code,
where the last four hex digits are 6Dxx (e.g., 6D32), where 6D is the
hex representation of ASCII letter 'm', which is part of the
"#define MTIO___" for the operation, as found in the
/usr/include/sys/mtlibio.h header file that is installed with the atldd
device driver, and the last two hex digits identify the specific MTIO___
operation:
31 MTIOCLM Mount a volume on a specified drive.
32 MTIOCLDM Demount a volume on a specified drive.
33 MTIOCLC Cancel a queued asynchronous library operation.
34 MTIOCLSVC Change the category of a specified volume.
37 MTIOCLQ Return information about the tape library and its
contents.
38 MTIOCLQMID Query the status of the operation for a given message
ID.
39 MTIOCLSDC Assign a category to the automatic cartridge loader for
a specified device.
3A MTIOCLEW Waits for an asynchronous library event to occur.
40 MTIOCLRC Release a previously reserved category.
41 MTIOCLRSC Reserve one or more categories.
42 MTIOCLSCA Set a category attribute.
The CC value is an MTCC_* condition code returned from an I/O control
request. You can look them up in the manual "IBM TotalStorage Tape
Device Drivers: Programming Reference", topic "Error Description for the
Library I/O Control Requests", near the back of the manual. They are
also listed in the /usr/include/sys/mtlibio.h header file that is
installed with the atldd device driver.
Also refer to IBM site Technote 1171360 "How to understand 3494 error
message ANR8301E".
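The table above can be turned into a small decoder for the OP field; a
sketch (descriptions abbreviated from the table):

```shell
# Decode the MTIO operation from an ANR8301E OP value such as
# 005C6D37: the low-order byte selects the MTIO___ ioctl per
# /usr/include/sys/mtlibio.h.
decode_mtio_op() {
    case $(echo "$1" | tr 'a-f' 'A-F') in
        *6D31) echo "MTIOCLM - mount a volume";;
        *6D32) echo "MTIOCLDM - demount a volume";;
        *6D33) echo "MTIOCLC - cancel queued library operation";;
        *6D34) echo "MTIOCLSVC - change volume category";;
        *6D37) echo "MTIOCLQ - query library information";;
        *6D38) echo "MTIOCLQMID - query operation status by message id";;
        *6D39) echo "MTIOCLSDC - set loader category for a device";;
        *6D3A) echo "MTIOCLEW - wait for library event";;
        *6D40) echo "MTIOCLRC - release reserved category";;
        *6D41) echo "MTIOCLRSC - reserve categories";;
        *6D42) echo "MTIOCLSCA - set category attribute";;
        *) echo "unknown MTIO operation in OP=$1";;
    esac
}
```

For example, 'decode_mtio_op 005C6D37' identifies the query (MTIOCLQ)
operation discussed in the next entry.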
ANR8301E I/O error on library ______ (OP=005C6D37, SENSE=N/A, CC=00000023).
CC is the I/O completion code (some of which are documented in Appendix
B of the Messages manual), where 23 indicates that the device does not
exist in the library.
Seen when the mtlib status command shows a drive as "Device available to
Library.", but ADSM finds itself unable to actually use the drive (an
ADSM server 'SHow LIBrary' has it avail=0). Attempted mtlib will show:
mtlib -l /dev/lmcp0 -m -f /dev/rmt3 -V 000081
Mount operation Error - Device is not in library.
If the drive just underwent a card pack (electronics) replacement, the
classic cause of the problem is that the CE set the wrong drive serial
number into the new card pack: getting the same serial number as that of
another drive really confuses the Library Manager and causes mounts to
fail like this. In AIX, verify by first doing 'lscfg -vl rmt\*' and
then compare the true serial numbers from that report with what you get
from 'mtlib -l /dev/lmcp0 -D'. Have IBM fix any inconsistencies.
Look at the 3590 control panel: make sure that the drive is functional,
and that the appropriate Path is Online.
There was a defect in 3.1.x and 3.7.x Server code, affecting drives in a
3494 library, where an UPDate DRive command without a DEVIce
specification can cause the device number to be lost to the server.
(See APAR IC27477)
ANR8301E I/O error on library DAFFY (OP=004C6D31, SENSE=00.00.00.67).
ITSM volumes in the 3494 library have the wrong Category Codes. The 67
indicates no volumes of the required category in the library. You can
verify this via the 'mtlib' command. The most egregious cause of
something like this is a "teach" operation having been performed on the
3494 - which resets Category Codes to 0xFF00 (Insert category). Correct
Category Codes using the 'mtlib' or 'tapeutil' command. (IBM doc
indicates that AUDit LIBRary does *not* fix category codes.)
ANR8302E I/O error on drive ________ (/dev/rmt_) (OP=OFFL, CC=0, KEY=02,
ASC=3A, ASCQ=00)
This message with OP=OFFL is seen at ADSM start-up, when it is issuing
an MTOFFL ioctl command to rewind any tape that may have been left in
the drive and unload it, such that the library will put it away.
ANR8302E I/O error on drive ____ (____) (OP=READ, Error Number=23, CC=403,
KEY=08, ASC=14, ASCQ=01, SENSE=..., Description=Media failure).
Is this a case of a dirty read/write head? Is the same tape okay if
used in another drive? Try using tapeutil or the like to exercise the
tape. See topic "Tape drive cleaning".
ANR8302E I/O error on drive ________ (/dev/rmt_) (OP=WRITE, CC=0, KEY=03,
ASC=0C, ASCQ=00)
The ASC=0C indicates a failed Write. In a LABEl LIBVolume operation,
this may reflect the tape already having an internal label, but
OVERWRITE=YES was not specified as part of the command.
During a Backup, the situation is handled same as writing a tape that
filled: a new tape will be mounted to continue the operation
uninterrupted.
ANR8302E I/O error on drive ________ (/dev/rmt_) (OP=READ, CC=0, KEY=20,
ASC=00, ASCQ=00, SENSE=F0.00.20.FF.FF.D8.50.48. ..., Description=An
undetermined error has occurred). Refer to Appendix B in the
'Messages' manual for recommended action.
Seen to result from a 'CHECKIn LIBVol' being done, but the tape has no
label. Run a dsmlabel on it.
Note that with a 3494 the tape will be spit out, so expect to find it
in the Convenience I/O Station.
Accompanied by message ANR8353E.
ANR8302E I/O error on drive 8MM (/dev/mt0) (OP=SETMODE, CC=207, KEY=05, ASC=26,
ASCQ=00, SENSE=70.00.05.00.00.00.00.18.00.00.00.00.26.00.00.80.00-
.04.00.01.00.00.00.12.BB.5E.00.00.D0.00.00.00.,
Description=Device is not in a state capable of performing request).
You probably used the drive with an operating system command, which
fouled up the settings as compared to what was originally defined via
SMIT in establishing the drive as an ADSM device. Remember that the
"/dev/mt" name indicates that you are using an ADSM driver to access the
device. Repeat the SMIT steps to re-establish the settings.
ANR8302E I/O error on drive DRIVE1 (mt0.0.0.3) (OP=WRITE, Error Number=121,
CC=0, KEY=00, ASC=00, ASCQ=00, SENSE=**NONE**, Description=An
undetermined error has occurred). Refer to Appendix D in the
'Messages' manual for recommended action.
As seen on a SCSI-attached library (e.g., 3583) can indicate use of a
faulty device driver. For example, use of Adaptec 29160 SCSI cards, but
with the Adaptec driver instead of the IBM driver. Refer to the
IBMUltrium.Win2k.Readme.txt or similar file for guidance.
ANR8302E I/O error on drive TAPE0 (MT0.0.0.2) (OP=READ, Error Number=1235,CC=0,
KEY=2B, ASC=4B, ASCQ=00, ...
ASC=4B+ASCQ=00 is a Data Phase Error, meaning that an error occurred
during the Data Phase of a SCSI operation; as when a SCSI target device
receives a zero-length data frame, or too many parity errors have
occurred during the Data-In and Data-Out phases of an operation.
Could be caused by a bad (SCSI) cable (closely inspect the pins; try a
different cable) or faulty SCSI termination or exceeding the SCSI chain
length. If possible, try another drive. If a dsmserv restore, assure
that the device config you are using accurately describes the library
and drives. The library/drive combination may be seldom-used equipment,
in questionable condition: initiate testing at the OS level, writing to
a tape with a command like tar, then try to read the data back. If that
works, try using a command like tapeutil to get raw data blocks off the
tape that had an error: a failure there may point to the tape. For
FibreChannel HBA in Windows, check the MAXSGLIST spec (MAXimumSGList
parameter in the Windows registry).
And, as always, assure that the drive has been cleaned.
ANR8304E Time out error on drive ____ in library ____
May be a mechanically faulty tape keeping a mount from succeeding:
examine the tape cartridge.
ANR8308I <ReqNo>: <Devtype> volume ______ is required for use in library ______;
CHECKIN LIBVOLUME required within __ minutes.
A volume required for an operation (Backup Stgpool input, etc.) but is
not in the library. You have as many minutes as defined on your
Devclass MOUNTWait value to do the CHECKIn LIBVolume. Consider doing
CHECKLabel=No if your library is unchanging and you believe that
checking the volume label is superfluous: remember that a mount can take
considerable time.
ANR8310E An I/O error occurred while accessing library ____.
That could be anything. I recommend inspecting the tape involved in the
operation causing the error: in one case with a 3590 tape, I found that
its leader block had been flipped over (ostensibly by a human), making
it impossible for the drive to get ahold of the end of the tape to pull
it into the drive.
ANR8311E AN I/O ERROR OCCURRED WHILE ACCESSING DRIVE <DriveName> FOR
<low-level operation> OPERATION, ERRNO = <DriveErrno>.
Tivoli says: Ensure that the DEVICE parameter associated with the drive
is identified correctly in the DEFine DRive command, and that the drive
is currently powered on and ready. Otherwise...
Errno 22 (AIX: EINVAL) tends to indicate that your tape device driver
(e.g., Atape) is downlevel relative to the drive and needs to be
upgraded to understand what the drive is saying.
Errno 78 (AIX: ETIMEDOUT) A variation on errno 22. TSM is making
requests to these devices which they cannot satisfy.
Some customers who encountered this reported resolution by updating a
device driver or fixing a hardware component. Examples:
- Replaced SCSI cables
- Replaced or updated cardpack in drives
- Upgraded Atape driver
- Upgraded drive and SAN switch firmware levels
- Applied fix for AIX APAR IY10452 and upgraded SDG firmware
Can be because there is a tape in the drive when ADSM wants to mount
one there, and ADSM doesn't remember having mounted that one. This
can occur when the opsys was shut down with ADSM still up and a tape
left mounted per MOUNTRetention; or the library might be shared by
multiple ADSM servers without mediation. This condition will typically
be accompanied by message ANR8455E.
In some libraries, this can occur during AUDIT LIBRARY when there is a
cleaning tape in library: that tape will be loaded into the drive and
audited, generating a read error message.
From a developer's look at the logic: Basically, it's a burp from the
tape drives that gets propagated into our lowest level of code. At that
point, we're checking the return code off of the tape, recognize that
things aren't so whippy and bail out of that transaction.
Might be caused by an ill-behaving device on the SCSI chain, interfering
with the quality or content of the signals from an active device. If
encountered with 3590 drives or other drives having two SCSI ports on
the back, consider trying the alternate port to see if that eliminates
the problem. If multiple drives on the chain, try reducing the chain to
one device, and alternate among them to find the faulty one.
ANR8314E LIBRARY ________ IS FULL.
Well, maybe it is the case that the library is full. Issue an mtlib or
like command to get storage cell statistics from the library. Smaller
units, like the 3581 Ultrium Tape Autoloader, will have a row or matrix
of indicators for each cell that is occupied. The library may have a
false indication of its inventory, and a re-inventorying operation, as
may be performed at power-on time, may correct that. If the false
indication persists, the unit has a problem requiring service. Note
that some libraries have reserved cells and/or areas configured for
input-output operations, so the cells you see not being used may be
off-limits.
If you added a frame to your linear library or column to your silo
library and find it not being used: if a library manager is in effect,
assure that you told it that there is such a new area to be used. Assure
that a Teach operation was executed for the library to learn the
physical position of the new space. If the problem persists, it may be
that the library firmware level is not high enough for it to understand
the new library extension.
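Where mtlib is available, the cell check mentioned above can be done
mechanically. The sketch below runs against canned sample output (the
field names shown are illustrative, not the exact mtlib report layout),
so only the parsing idea is demonstrated; on a live system you would
pipe in the real 'mtlib -l /dev/lmcp0 -qL' report.

```shell
# Canned sample report; on a live system use:  mtlib -l /dev/lmcp0 -qL
# (field names here are illustrative, not the exact mtlib layout)
sample_output="number of cells: 240
available cells: 3
volumes in library: 237"

# Pull out the free-cell count and compare against what ANR8314E claims
avail=$(printf '%s\n' "$sample_output" | awk -F': ' '/available cells/ {print $2}')
if [ "$avail" -eq 0 ]; then
    echo "library agrees: it really is full"
else
    echo "library reports $avail free cells - suspect a false inventory"
fi
```

If the library reports free cells while TSM insists it is full, that
points at the false-inventory case described above.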
ANR8341I END-OF-VOLUME REACHED FOR <Device_Type> VOLUME <Volume_Name>.
The server has detected an end-of-volume condition for the given volume.
The volume is marked full. If more data must be stored, the server will
access another volume for it. (TSM will span a large file onto another
volume as another Segment.) This is a common, expected msg.
If this happens and the volume does not show full when you query it, it
may be the case that TSM is in the process of spanning a large file onto
another volume: a very large file will likely span volumes, and TSM has
traditionally not updated volume statistics until the aggregate or large
file has been completely written (to its final volume). If the volume
still shows Filling after the storage pool operation has completed, this
might be a problem induced by bad tape drive microcode.
ANR8353E 010: I/O error reading label of volume in drive ________ (/dev/rmt_).
With message ANR8302E, can result from a 'CHECKIn LIBVol' being done,
but the tape has no label. Run a dsmlabel on it.
If you know that the volume was in a stgpool, then it is likely that
something other than TSM wrote over the tape, as in a shared library
with inadequate controls. (Various OS utilities such as Unix 'tar' write
their tapes without labels.) You can try reading the tape on all the
drives in your library, as a valiant attempt. You can employ a utility
to print the first few blocks of the tape to get a sense of what wrote
over it. Bad drive microcode may also be at fault, as in writing an EOT
mark at BOT.
Note that for a 3494 the tape will be spit out of the robot, so expect
to find it in the Convenience I/O Station.
ANR8355E I/O error reading label for volume ______ in drive ______ (____)
The internal volume label cannot be read...
First: you're sure that this is a tape which has been in a TSM storage
pool, right?
Is this a new scratch tape in the TSM storage pool? You need to label
new tapes, either via 'LABEl LIBVolume' or the dsmlabel command. And you
should not trust "pre-labeled" tapes from a tape vendor to actually be
labeled correctly.
Or maybe your library is shared with other systems, and they took the
liberty of using a tape you thought was yours. This most typically occurs
in mainframe environments where the tape library is shared. If this is
the case, tread carefully, as another application may now have viable
company data on that tape. Also consider that this may have happened to
more than just this one tape.
Do you have level-adequate device drivers on your system to handle the
drive? Do your drive and library have the appropriate microcode level to
handle the media type being used?
If your tape technology is troublesome, you may be victim to a cranky
tape drive: it can be that the drive loses contact, rewinds the tape,
and then declares itself ready for more TSM data, resulting in the front
of the tape being written over.
Did you upgrade your server, and is this message only appearing with
tapes written with the new server code...tapes which had been in the
scratch pool? Maybe it's unhappy with the label content on those
previously-labeled tapes, such that running a 'dsmlabel' or 'LABEl
LIBVolume' with the new software may make the new server level happy
with them. Or maybe you need to upgrade your Atape level.
Possible TSM defect (2000/11): This problem happens when a process is
writing to tape and that tape reaches its EOT mark before all the data
that needs to be written has gone out. The tape is ejected, and generates
this error. A new tape is mounted and the operation continues normally.
The error is being generated erroneously. Apply maintenance.
Other analysis steps...
Via your operating system, use a utility (in MVS, the 'tapeedit' or
'ditto' commands; in Unix, the 'dd' command) to try to capture and
examine what data *is* at the beginning of the tape (making sure that no
label processing is attempted). If there is readable data there and
it's not a tape label, then the tape was written over: the data content
may give you a clue as to what did it. The tape is history. If it's a
Primary Storage Pool tape, try to use your Copy Storage Pool to restore
the volume.
If even the OS utility is having trouble reading the tape, try
physically examining the tape, including the surface at the beginning to
see if there is anything there that can be manipulated. Then try on all
drives available to you, to see if one can manage to read it.
In any case, you're stuck. Do a Query CONtent to see what's on the tape
and if it can be recreated or ignored as lost. And by all means research
the problem to see how it happened, so as to prevent recurrence.
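The dd/od step described above can be sketched as follows. An ordinary
scratch file stands in for the tape device here, so the pipeline can be
shown end to end; on the real system you would read /dev/rmt_ (device
name assumed) with the tape at load point and no label processing.

```shell
# A scratch file stands in for the tape device (/dev/rmt_ on AIX)
tmpfile=$(mktemp)
printf 'VOL1ABC123' > "$tmpfile"    # fake data at "beginning of tape"

# Read just the first block and render it printably - no label processing
firstblock=$(dd if="$tmpfile" bs=32768 count=1 2>/dev/null | od -c | head -n 2)
echo "$firstblock"
rm -f "$tmpfile"
```

A "VOL1" at the front would indicate an intact label; other readable
content may hint at what overwrote the tape.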
ANR8359E Media fault detected on <Devtype> volume ______ in drive ____ (____) of
library ____.
Check your operating system error log for indications as to whether the
problem is the tape (media surface defect; dirt on media) or the drive
(head may need cleaning). If it looks like media, follow the advisory
in the message manual.
ANR8376I Mount point reserved in device class ______, status: RESERVED.
A Reserve also happens when a process needs multiple devices (eg two
tape drives) in *different* device classes and only one is available...
which is then reserved, while the process waits for another one to
become available, making sure the reserved drive is still available for
use when the other resource becomes available. Whereas most customers
will be operating with one library/drive type and a single device class,
this case would probably be uncommon.
You can do 'SHow MP' and possibly 'SHow LIBrary' to verify this status.
Do 'Query REQuest' to see if anything is outstanding. Your only quick
recourse may be to restart the TSM server.
Identify the drive involved and look back in the TSM server Activity Log
to try to determine the circumstances under which the condition occurred
and possible allied software components involved (library client,
storage agent) to help avoid the situation and perhaps lead to
correction.
See also: Reserve
ANR8381E LTO volume ______ could not be mounted in drive ____ (/dev/rmt_)
May be accompanied by messages:
ANR8945W Scratch volume mount failed 659ACP.
ANR1404W Scratch volume mount request denied - mount failed.
ANR8779E Unable to open drive /dev/rmt2, error number=46.
Check the obvious first - that the drive is ready and usable.
If the problem occurs just with that one volume, over multiple drives,
it may be the tape. Again, check the obvious first: that the
write-protect flipper on the cartridge is not set to prevent writing.
Look back in your Activity Log for indications of issues with the vol.
If the volume is, for some reason, assigned to a storage pool, do a
'Query Content ... F=D' to see if the volume, which as a scratch should
not contain any data, in fact does. If it does, do an Audit Volume to
try to fix its state. If not, do a Label Libvolume to fix any label
problem and reset the volume's state.
ANR8413E <Command>: DRIVE _____ IS CURRENTLY IN USE.
The drive is apparently in a Busy condition. The drive's front panel
should indicate what its situation is: perhaps it got left in an odd
state by the CE (in Service mode, etc.). If it's a Magstar drive, you
can use the mtlib command or the like to query its status. You should
additionally check the state of your operating system definition of the
drive. For example, a drive in AIX which shows Defined is unusable: it
has to be in an Available state. Similarly, the tapeutil command can be
used outside of TSM to test the drive. Follow "the chain" outbound from
the operating system and host to the drive and find where things are
awry. Don't overlook a bad cable or two drives with the same SCSI
address. Also important is knowing when this started happening, to track
it to an event or busy fingers. If the drive is physically unoccupied,
a power cycle may clear erroneous state.
ANR8418E DEFINE PATH: An I/O error occurred while accessing library STKLIB.
May be a computer word-length mismatch. See ANR8300E.
ANR8419E DEFINE DRIVE: the drive or element conflict with existing drive in
library.
Maybe because you defined the wrong drive as the SMC. Check the body of
documentation concerning your mini library and its configuration for
TSM, particularly against what you see for elements in OS queries.
In a 3570, this was caused by the configuration of hardware setting in
the panel as SPLIT instead of BASE.
ANR8420E DEFINE DRIVE: An I/O error occurred while accessing drive ________
Typically seen when the drive you are trying to define is already in
use, either actively or because of an IDLE or DISMOUNTING type tape
still mounted. For example, you are using the drive for a Unix tar
operation, or the drive is shared with another server.
More problematic is when you go to define a physical drive that is
already defined and in use by TSM: you should not do DEFine DRive more
than once for a single physical tape drive, or you will encounter
conflicts during TSM operations which can result in the drives being put
offline.
A trivial cause is that the drive is not available, as for example being
in a Defined state in AIX, rather than Available.
I encountered this when doing a 'DSMSERV DISPlay DBBackupvolumes
DEVclass=OURLIBR.DEVC_3590 VOL=000001' and drive 301 was already in
use, busy with a tar operation. Can also occur if you invoke the next
command too soon, such that the dismount of the prior volume has not
yet finished. If you are attempting to do this in order to share drives
across *SM servers, be aware that *SM may not release the drive unless
you render it offline, which can make such sharing prohibitive.
If encountered in a server migration to a new platform (of the same OS),
via Restore DB, this may be remedied by doing DELete DRive, then DEFine
DRive, then DEFine PATH for the drive.
ANR8426E CHECKIN LIBVOLUME for volume ______ in library ________ failed.
Possibly, no tape drive available if CHECKLabel=no was not chosen.
Or you attempted to check in a tape whose Category Code is not FF00
(Insert category), as a physical insertion or Checkout would make it.
(This safeguards against the inadvertent adoption of another ADSM
server's tapes when multiple ADSM servers share a library. Consider
using the 'mtlib' command to change the category to FF00, then repeat
the Checkin.)
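A hedged sketch of that category-change step: the exact mtlib option
letters should be verified against your mtlib level and the IBM tape
device drivers documentation, and the volser and lmcp device name below
are made-up examples. The command is echoed rather than executed here.

```shell
# Illustrative volser; /dev/lmcp0 is the usual 3494 LMCP device on AIX.
VOLSER=ABC123
# -C = change category, -V = volser, -t = target category (verify these
# flags against your mtlib documentation before running for real)
cmd="mtlib -l /dev/lmcp0 -C -V $VOLSER -t ff00"
echo "$cmd"
```

Once the volume is back in the FF00 (Insert) category, repeat the
CHECKIn.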
ANR8442E CHECKOUT LIBVOLUME: Volume ______ in library ________ is currently
in use.
You performed a CHECKOut, but the volume is either mounted or
dismounting. If mounted, try a DISMount Volume, if no process is
using it. If still not obvious what the problem is, do a direct inquiry
of your library to see what the state of the volume is, as in the 3494
action 'mtlib -l /dev/lmcp0 -vqV -V VolName'.
ANR8443E : Volume ______ in library _______ cannot be assigned a status of
SCRATCH.
You attempted 'CHECKIn LIBVolume ... STATus=SCRatch' or 'UPDate
LIBVolume ... STATus=SCRatch', but the volume is already known by TSM to
be in one of its storage pools. You really meant to use STATus=PRIvate.
ANR8444E DEFINE DRIVE: Library ______ is currently unavailable.
Most likely because you did a DEFine DRive without first having done the
prerequisite DEFine PATH.
ANR8444E Internal Operation: Library _______ is currently unavailable.
As seen with a 3494 library. Possibly, you communicate with the 3494
over ethernet and either something happened to your /etc/ibmatl.conf or
the network address specified in the file is no longer valid: it is best
to specify an IP address rather than a network name, to avoid name
service and reverse lookup problems.
Make sure your lmcpd daemon is running.
Go to the 3494 and make sure that its Library Manager is up, that the
library is in an Online, Automated Operation state, and that the host
from which you are trying to reach it is still allowed in its LAN Hosts
list.
If a TSM 5.1+ system, you may have a Path problem: do Query PATH and
check for problems.
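For reference, an /etc/ibmatl.conf entry has this general shape (values
are illustrative; see the IBM tape device drivers documentation for the
exact field definitions):

```
# symbolic_name   address_or_tty   identifier
3494lib           192.0.2.10       L01
```

Per the advice above, an IP address in the second field sidesteps name
service and reverse lookup problems.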
ANR8444E UPDATE DRIVE: Library ______ is currently unavailable.
Well, check the physical library and the paths to it.
ANR8447E No drives are currently available in library <LibName>
Simple cause: You let the Devclass MOUNTLimit default to 1, such that a
multi-tape operation like MOVe Data cannot proceed; or maybe you
specified a MOUNTLimit with a value greater than the number of drives
available.
Can be caused when you try to CHECKIn a 3590 tape without the
parameter "DEVType=3590": it thinks you want other than a 3590 drive
and only 3590 drives are available.
Or you defined your drives in Devclass to be a type other than that
which can be used for your current tape types: you need to redefine the
drives.
Maybe the volume to be mounted is of a type or format which cannot be
processed by the available drive. This is a customer configuration issue
in having contradictory formats and capabilities.
Do 'SHow LIBrary' to see what *SM thinks of the drives. Supplement with
'mtlib' display of drive state.
In a tape drives upgrade (e.g., 3590E->3590H) you may have to delete all
the drives and paths and recreate them.
In a library shared by multiple servers, you need to define the number
of drives actually allocated to each server; otherwise, if you let
DRIVES prevail, each server may think it should have access to all the
drives in the library.
Check the state of the drives in your opsys: in AIX, via lscfg and
lsdev - where they should have a state of Available (not Defined).
If a 3494 library, use the 'mtlib' command to check the state of the
library and drive: the drives need to be online and available to the
library.
Do Query Path and check for drives in an Offline state.
There may be an accompanying "ANR8376I Mount point reserved" message.
If the TSM server is Windows, rebooting to remap the drives may help.
One 3466 customer reports that after a 3466 upgrade, tapes had a
device class of 'funny'/"Unknown": the tapes could be read, but not
mounted for writing. He had to do an UPDate Volume ACCess=READOnly on
all old tapes and add some new ones.
See also explanation for ANR1144W.
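The AIX drive-state check above can be automated. The sketch below
parses canned 'lsdev -Cc tape' style output (illustrative lines) to flag
any drive not in the Available state; on AIX, substitute the real
command's output.

```shell
# Canned 'lsdev -Cc tape' style lines (illustrative); on AIX, use the
# real command:  lsdev -Cc tape
lsdev_output="rmt0 Available 10-60-00-0,0 IBM 3590 Tape Drive
rmt1 Defined   10-60-00-1,0 IBM 3590 Tape Drive"

# Any state other than Available (e.g. Defined) makes the drive unusable
bad=$(printf '%s\n' "$lsdev_output" | awk '$2 != "Available" {print $1}')
echo "drives not Available: ${bad:-none}"
```

Any drive flagged here needs attention in the operating system (cfgmgr,
cabling, power) before TSM can use it.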
ANR8448E Scratch volume ______ from library ________ rejected - volume name is
already in use
Most likely the volume is still recorded in the volume history as part
of a database Backup Series - for example the Full that the subsequent
Incremental(s) depend upon - and so cannot be reused as scratch.
Use Query VOLHistory to inspect it in context. Then invoke DELete
VOLHistory so that one whole, old Series is deleted.
ANR8452E Initialization failed for 349X library ____; will retry in 2 minute(s).
Well, the obvious thing to do when your system, which has been accessing
the 3494 library fine up until now, has a problem doing so is to at
least check the status of the library via the 'mtlib' command, if not
visit the library and check its status. Your operators may have opened
the library to deal with a tape problem, and failed to put it back into
Automated Operation mode, or the CE may be working on it, etc. Or there
may be a network problem. Do the basic continuity checks along the line
to isolate the problem. Customers with major libraries should have
active monitoring of them: don't wait for TSM to tell you, indirectly,
about problems with your library.
ANR8455E Volume _______ could not be located during audit of library ________
Typically occurs when a tape is idle in a drive when the opsys is shut
down without shutting down ADSM beforehand, thus causing the tape to
be trapped in the drive. When ADSM is restarted it cannot find the
tape in the library storage cells and thus this message.
Expect to see message ANR8311E during attempted use of the drive that
has the tape trapped in it.
ANR8463E <MediaType> volume ______ is write protected.
May indicate exactly what it says: the cartridge is set to prevent
writing. But in practice, this message is often seen on tapes which
have just recently been written and which, on this mount, experience
the condition despite the cartridge allowing writing. This is
apparently caused by invalid sensing by the tape drive, perhaps due
to faulty microcode. Note that TSM does not set the volume's Access to
Readonly - and it may keep trying to use that one volume.
ANR8469E Dismount of 3590 volume ______ from drive <Its_TSM_name> (/dev/rmt_) in
library ________ failed.
This may not be accompanied by an I/O error indication.
The AIX Error Log may contain an lmcpd SYSLOG message entry saying
"ERROR on <Libname>, ERA 6D Library Drive Not Unloaded"
and there may be TAPE_ERR4 entries. Further, when you go to manually
unload the drive there may not be any error code on the drive panel,
and the manual unload will work fine.
This set of circumstances suggests that the drive hardware is working
fine, but that there may be a fault in the "card pack" (electronics).
ANR8500E No paths are defined for library ______ in device configuration
information file.
Seen in one customer's attempt to perform a 'dsmserv restore db', though
the devconfig file was fine. The cause turned out to be another systems
guy having changed the network address of the 3494 without telling
anyone: TSM could not get to the library through lmcpd and the
now-inaccurate /etc/ibmatl.conf file.
ANR8555E An error (<Error Code, Error String>) occurred during a read operation
from disk <DiskName>.
The error number is from your operating system. Pursue that. In the
case of Windows, refer to a good error codes web page, like
http://techsupt.winbatch.com/webcgi/webbatch.exe?techsupt/
tsleft.web+WinBatch/Error~Codes+Windows~System~Errors.txt
ANR8749E Library order sequence check on library ________.
ADSM thinks that mount/dismount is pending, in progress, or done for
the volume, typically because ADSM thinks that the tape was left
mounted in the drive by some other action.
ANR8775I Drive ________ (/dev/rmt_) unavailable at library manager.
Spontaneous cause: Intervention Required condition; library panel
message that the drive has failed and requires service.
Manual cause: When we go to the 3494 console and use the Availability
menu to change a bad drive's status to Unavailable until it is repaired,
ADSM will sense this when it goes to use a drive. A 'SHow LIBrary' will
show "avail=0" for that drive.
ANR8776W Media in drive _______ (/dev/rmt2) contains lost VCR data; performance
may be degraded.
(This can be an individual tape problem; but if it happens on all
tapes, suspect a recent application of faulty 3590 microcode.)
Best to do a 'MOVe Data' to get data off the tape, then do a dsmlabel to
reinitialize the tape.
ANR8779E UNABLE TO OPEN DRIVE <DriveName>, ERROR NUMBER=<OS error number from
the open attempt>
The drive cannot be opened by *SM. In AIX, error number is the value
of errno returned by the operating system. In Windows, it is the
Windows Error Code (aka Exit Code, or Microsoft system error). In OS/2,
it is the value of the return code from the call to DosOpen.
Unix: Errno 2 is "No such file or directory". In the trivial case, this
may simply reflect having issued a DEFine DRive or DEFine PATH
command like "DEVIce=rmt1" instead of "DEVIce=/dev/rmt1" (there is
no 'rmt1' in your current directory). It can also indicate that
the drive name you are attempting to use in TSM does not match the
one defined in the operating system. In AIX, consider doing like
'rmdev -dl fscsi0', then 'cfgmgr ...'.
Unix: Errno 6 is "no such device", indicating a problem in the
operating system's configuration with the device. Has the
/dev/rmt_ definition disappeared from your system (as in someone
doing an rmdev, or a cfgmgr with drives powered off)? On AIX, use
lsdev to assure that the drive has state Available, not just
Defined. You can double check it by using tapeutil/ntutil or the
like to try to open the drive as well.
Errno 16 is Resource Busy - Device Busy. Use an OS command, such
as lsdev in AIX, to check device status, and visit the drive's
front panel if necessary. In all cases, make sure that the drive
is properly cabled, has the right SCSI or like address, and is in
a proper state to be used by a host application. If the drive is
physically connected to more than one computer system, make sure
it's not in use by another system.
Errno 46 is Device Not Available, as in the tape drive being in an
Offline state. If AIX, use 'errpt' cmd to seek error detail. If
no entries reflected in error log, perhaps the device is not
defined to AIX: does 'lsdev -Cl rmt_' show Available? If a new
SCSI device, assure element numbers correct.
Errno 47 means that the media is write-protected.
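The AIX errno values above can be tabulated for quick reference. A tiny
lookup function, as a sketch: the symbolic names are per AIX's
<sys/errno.h>, and AIX errno numbering differs from Linux, so this table
is keyed to the values the message reports on an AIX server.

```shell
# Lookup for the AIX errno values discussed above. Symbolic names are
# per AIX's <sys/errno.h>; AIX numbering differs from Linux.
explain_errno() {
    case "$1" in
        2)  echo "ENOENT - no such file or directory" ;;
        6)  echo "ENXIO - no such device" ;;
        16) echo "EBUSY - device or resource busy" ;;
        46) echo "ENOTREADY - device not available" ;;
        47) echo "EWRPROTECT - media is write-protected" ;;
        *)  echo "not in this table - see /usr/include/sys/errno.h on the TSM server host" ;;
    esac
}
explain_errno 46
```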
ANR8782E Volume ______ could not be accessed by library ..."
This message is issued when an ERA code is received from the library
manager indicating that it can't access the volume (ERA 64, 67, 6B, or
75). Use the 'mtlib' command to check for the volume being in the
library's inventory, or use the library's console.
ANR8806E Could not write volume label <VolName> on the tape in library _______.
Seen on LABEl LIBVolume. The tape drive failed to write the label on
the volume. May be due to a defect on the tape. Or, more
interestingly, can be caused by a gross physical defect in the mounting
itself, as can be verified via a command external to *SM, such as doing
the mount via the 'mtlib' command, which may yield the error message:
"Mount operation Error - Internal error.". If this is an unused tape,
consider doing a 'CHECKOut LIBVolume REMove=No', getting it mounted on
an available drive, then running the 'tapeutil' command operations
"Read and Write Tests" and "Erase" - which will give the tape a workout
and *maybe* overcome tight winding or like problems (but don't get your
hopes up, even if these exercises seem to work). If those exercises
look good, do a LABEl LIBVolume and give the tape a try in a
non-critical way.
ANR8813W Unable to read the barcode of cartridge in slot element 48 in
library _________.
This has been seen with downlevel drive microcode, particularly in the
3575 libraries, which have been notorious.
ANR8820W REPAIRING VCR DATA FOR VOLUME volume name IN DRIVE drive name; MOUNT
MAY BE DELAYED.
The Volume Control Region of the cartridge in the drive is lost or
corrupted, which results in the inability of the drive to do fast
locates to file positions on the cartridge. The VCR is being rebuilt
during the volume mount process in order to avoid performance
degradation on this and future mounts of the volume. There may be a
long delay because the VCR is rebuilt by spacing the tape forward to the
end-of-data.
Solution: See "VCR data" topic.
ANR8824E I/O Error on library _____; request 0F0DF5AE for operation 004C6D31 to
the 3494 Library Manager has been lost.
Explanation: A command for the operation was issued to the Library
Manager and a response was not received within the maximum timeout
period.
System Action: The operation and the transaction fails.
User Response: Verify that communications with the library is
operational, that it is online and ready for commands. If the problem
persists, provide your service representative with the 3494 Library
Manager transaction logs and the request id from the failed operation.
Note: If *SM was trying to perform a dismount, this may result in
subsequent ANR8469E Dismount ... failed messages appearing repeatedly:
an mtlib dismount would need to be performed. The AIX Error Log will
also have "Resource Name: lmcpd" errors.
ANR8830E Internal 3590 drive diagnostics detect excessive media failures for
volume XXXXX. Access mode is now set to read-only.
Accompanied by: ANR8831W Because of media errors for volume XXXXX, data
should be removed as soon as possible.
Reflects a MIM error (q.v.).
ANR8834E Library volume <Volname> is still present in library <Libname> drive
<Drivename> (<OSdevname>), and must be removed manually.
As seen during an AUDit LIBRary.
If this occurs for all the drives in the library, and particularly when
the Volname is reported as "**UNKNOWN**" and library inspection shows no
volumes mounted, then there is a serious problem with the library or in
getting valid information from it. This may be due to miscommunication
deriving from drivers in the operating system and/or microcode in the
library manager, which upgrades might fix. Consider utilizing an OS
command or interface provided by the library vendor (such as 'mtlib' in
the case of IBM Magstar) to query the library outside of TSM to try to
obtain the same type of information. Likewise do the same at the
library's control display, if it has one. If the problem persists,
power cycle the library at an opportune time and have it inventory
itself.
ANR8840E Unable to open device /dev/smc0 with error 50.
Seen in a 3584 tape library, or similar SCSI library. 50 is the Unix
errno: ENOCONNECT - Cannot Establish Connection. / No connection.
Has been experienced when the first drive was removed from the library
for repair, while leaving the library available for service. But it
turns out that the first drive, /dev/smc0, is the "master drive" for the
library, and with it removed, the library does not function.
Less obvious cause: Where Atldd and Atape are being used to control the
library and drives, old versions may be in play. If so, upgrade your
Atape and Atldd driver levels.
ANR8847E No 3590-type drives are currently available in library _____.
As during an attempted Label Libvolume. This command will not wait for
a drive to become available, even if one or more drives have Idle tapes
or are in a Dismounting state. Try again after a dismount has left at
least one drive available. It might alternately be the case that a
drive or two is offline: do 'Query DRive'.
ANR8848W Drive _______ of library _______ is inaccessible; server has begun
polling drive.
Look for operating system error log entries for the drive, detailing
problems with it. And there's no substitute for inspecting the drive.
On a 3494, do like: mtlib -l /dev/lmcp0 -f /dev/rmt2 -qD
to check the status of the drive, which may show "Device not available
to Library.". The 3494 Library Manager PC should now contain error
indications that the CE can examine to determine what went wrong. You
could try, from the LM's control panel, making the drive available to
the library again - and perhaps from that action get an indication of
its problem. It may result in an Intervention Required condition at the
Library Manager panel, which would help identify the problem. You could
try doing a Reset Drive from the drive's panel.
This can result from doing 'rmdev -l rmt_' in AIX to take the drive
offline without doing 'UPDate DRive ... ONLine=No' and then *SM tries to
use the drive.
Or perhaps you did not do 'UPDate DRive ... ONLine=No' before using the
drive for non-TSM purposes. (TSM will eventually give up on the drive
with message ANR8471E, and 'Query DRive' will show it as Unavailable.)
Also seen in sharing 3590 drives in a 3494 library (auto-sharing). When
one server obtains the use of a drive, other servers requiring the use
of that drive will find that the drive is locked, and begin to poll the
drive. When the drive becomes free again, the following message results:
ANR8839W Drive _______ of library _______ is accessible. The pending
operation will then proceed on the second server. Note however, that
this behavior is governed by the MOUNTWAIT parameter; if this value is
set too low, the pending transaction will time out before the server
with ownership of the drive releases it.
ANR8914I Drive ____ (____) in library ________ needs to be cleaned.
The drive has returned indications that it needs to be cleaned. In an
automatic library, the library manager will take care of this; but TSM
needs to relinquish the drive.
ANR8939E The adapter for tape drive ________ cannot handle the block size
needed to use the volume.
In Windows, it may be due to the Registry value MAXimumSGList being too
low. See: Ultrium and FibreChannel and ...
ANR8972E Unrecoverable drive failures on drive RMTxxx; drive is now taken
offline.
Reflects a SIM error (q.v.).
ANR9613W Error loading ./dsmlicense for Licensing function: Exec format error.
Seen when boosting the TSM server level in AIX. "Exec format error"
occurs when the system goes to load a compiled program and can't
recognize its format. This can be due to a downlevel C library (fileset
xlC.rte) or, more likely, a bad mix of 32-bit vs. 64-bit software.
Seen in 64-bit AIX TSM as follows: Fileset tivoli.tsm.license.cert is
common between the 32-bit and 64-bit versions of TSM; but
tivoli.tsm.license.rte is specific to 32-bit and
tivoli.tsm.license.aix5.rte64 is specific to 64-bit, and both of those
resolve to /usr/tivoli/tsm/server/bin/dsmlicense. It may be the case
that you are running 64-bit AIX, but that both the 32-bit and 64-bit TSM
were installed, where the 64-bit server and license fileset were
installed first, but then the 32-bit server and license fileset
thereafter, so the 64-bit modules were supplanted by the undesired
32-bit version. Then server maintenance was applied, which resulted in
the 32-bit tivoli.tsm.server.rte being installed first, then the
tivoli.tsm.server.aix5.rte64 server. So then you have a 64-bit dsmserv
module but a 32-bit dsmlicense module. No good. You can verify which
version you have by comparing the mtime timestamps of directories and
files in /usr/lpp/tivoli.tsm.license.rte and
/usr/lpp/tivoli.tsm.license.aix5.rte64 vs the ctime (ls -lc) on your
/usr/tivoli/tsm/server/bin/dsmlicense and dsmserv.
ANR9627E CANNOT ACCESS NODE LICENSE LOCK FILE: file name.
Results when the server cannot get at the nodelock file, or the file
system containing it is full, as when doing REGister LICense.
ANR9716E Device '/dev/lmcp0' is not recognized as a supported library type.
May occur when doing 'dsmlabel'. Indicates that the 3494 is in an
Offline state. Go to its operator station and make it Online.
ANR9718E Device '/dev/rmt0' is not recognized as a supported drive type.
Typically, you did not define the tape driver to ADSM, as it
requires. ADSM uses its own tape drivers: you cannot use those supplied
with AIX! Use SMIT to define the drivers, according to the ADSM Device
Configuration manual. The resultant device is typically /dev/mt0 and
/dev/mt0.1 .
ANR9725E The volume in drive '/dev/rmt?' is already labeled (VVVVVV).
You tried to use 'dsmlabel' to label a tape which was pre-labeled by
the vendor.
ANR9798E DELETE DRIVE: One or more paths are still defined for drive ____ in
library ____.
In response to 'delete drive <LibrName> <DriveName>': Do 'show path',
which will probably reveal a redundant drive. Do 'delete drivemapping
<ServerName> <LibrName> <DriveName>' on it.
ANR9969E Unable to open volume F:\TSMDB\SERVER1\LOG08.DSM. The most likely
reason is that another TSM server is running and has the volume
allocated.
Seen in the restart of the *SM server on a Windows machine, after some
odd event. Something is holding a lock. Rebooting the machine usually
clears the problem.
ANR9999D ...
Messages vary. This is a catch-all message number which the developers
use for internal server errors rather than create and document separate
message numbers for various, hopefully rare and unusual, conditions.
The DISAble EVents command's "SEVERE" operand can disable these.
Note that the number in parentheses, like in "ANR9999D dfmigr.c(3224)",
can be expected to be the source code line number, as via the ANSI C
__LINE__ definition.
Few customers look up ANR9999D in the Messages manual to gain
perspective on its intent...which is to provide diagnostic information
which may help you find an APAR which has already been created to
address the circumstance, or to provide diagnostic information to TSM
Support when it is a new problem. The content of the message is
intended more to assist the TSM Support person in handling the problem
rather than directing the customer in a course of action. Thus, if such
a message talked of performing an AUDITDB, I would not infer that it is
telling the customer to do so, but rather that, after looking at the
full picture, an AUDITDB may be a course of action. Keep in mind that
taking action yourself may be "playing doctor", and could result in
irreparable damage to your TSM system. If research on the IBM site does
not turn up an obvious solution to the situation, contact TSM Support
rather than undertaking actions yourself.
In the server, you can do 'Set CONTEXTmessaging ON' to get more info
when they occur.
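As an aside on working with these messages: when scanning an Activity Log extract, the module(line) reference can be pulled out mechanically. A minimal sketch in Python (the regular expression is my own guess at the common message shape, not anything IBM documents):

```python
import re

# Matches the "module.c(1234)" reference that typically follows ANR9999D,
# with or without a space before the parenthesis.
_REF = re.compile(r"ANR9999D\s+(\w+\.c)\s*\((\d+)\)")

def parse_anr9999d(message):
    """Return (source_module, line_number) from an ANR9999D message,
    or None if the message carries no such reference."""
    m = _REF.search(message)
    if m is None:
        return None
    return m.group(1), int(m.group(2))
```

For example, parse_anr9999d("ANR9999D dfmigr.c(3224): ...") yields ("dfmigr.c", 3224). Entries citing only an internal module name, such as "AFMIGR(500)", will not match this pattern.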
ANR9999D adminit.c (982) Insufficient log space to update Administrator -
.Attributes
Refer to server commands manual, "DSMSERV EXTEND LOG".
ANR9999D admstart.c(2191): Error 21 from lvmAddVol.
You did 'dsmserv extend log ...' but the volume specified is one
previously used by *SM. You cannot extend the Assigned Capacity of an
existing volume in a stand-alone manner: you have to do that online.
In stand-alone mode you can only add a new volume. The approach you
can take is to temporarily create a File type volume (as in /tmp) -
enough to get your system up - and once up, do an EXTend LOG and then
delete the temporary volume (DELete LOGVolume).
ANR9999D AFMIGR(500): Error checking pending volumes for completion of reuse
See "ANR0104E ASVOLUT(2202)".
ANR9999D afmigr.c(2574): Reconstruction of aggregates is disabled. Run audit
reclaim utilities to re-enable reconstruction of aggregates.
Instructions about this problem are in the 3.1 server README.SRV file.
ANR9999D AFMIGR(2619)
The older README's contain information about this. You start with
running an "AUDIT RECLAIM" command, then you "select * from
RECLAIM_ANALYSIS" and if this table is empty you "cleanup reclaim" and
this reactivates reconstruction processing.
ANR9999D asalloc.c(1195): Missing allocation storage pool.
Reportedly seen during reclamation on version 3.1.1.3 (AIX).
Upgrading to version 3.1.1.5 fixed the problem.
ANR9999D ASRTRV(494): End reached prematurely on volume ____
This message indicates that the database information for a particular
file was not consistent with the actual file data on your storage pool
volume. Using metadata from the database, the backup operation tried
to read a certain number of bytes from a volume, but encountered
end-of-volume before that number of bytes had been read. The root
cause of the problem is likely faulty drive microcode, one case being
the drive dropping tension after a long idle, but failing to verify
position after starting the next operation, wherein the tape had
slipped back a bit, thus causing new data to write over old. Tucson
calls it the "Chopped Block" problem.
Audit Volume Fix=Yes should eliminate this inconsistency by deleting
the problem file from the database.
ANR9999D asutil.c(210): Pool id 6 not found.
ANR9999D asutil.c(215): Pool id 27 not found.
A storage pool shows up numerically during *SM start-up or reclamation.
This message may be caused by the disappearance (unusual removal) of a
storage pool which *SM knew to be valid. Maybe you had some copy groups
(under management classes) that pointed to these storage pools. (*SM may
let you delete a storage pool even though you have copy groups using
it/them.) May be able to fix by updating the copy groups and activate
the policy set again and these should clear. One user reports being
able to accidentally fix this by going into the admin graphical
interface (dsmadm); when he went to exit from the reclamation tab of the
3570 pool it asked if he wanted to save changes. He did not think he had
changed anything, but went ahead & let it save. That appeared to fix
the problem. Otherwise, run an AUDITDB to reconcile the database with
reality. (One customer reports that running 'dsmserv auditdb storage'
was efficacious.)
ANR9999D asvol.c(1043): ThreadId<ThreadNumber> NumEmptyVols went negative for
pool -2.
One of the storage pool descriptor records has a field which records the
number of empty volumes in a pool. Whenever a volume is deleted (as when
it goes empty or is explicitly deleted), this count is decremented. Logic checks
whether the count is already zero and, if so, this message is emitted.
The overall cause is a server defect.
ANR9999D asvolmnt.c(1586): Unknown result code (30) from pvrOpen.
Seen where a storage pool volume is physically in the library, but is
not in a TSM checked-in state. Do a Checkin.
ANR9999D BFCREATE(768): Bitfile aggregate 0.7514479 not found in any storage
pool.
ANR9999D BFCREATE(781): Bitfile aggregate 0.7514479 not found for delete.
ANR9999D BFCREATE(712): Inconsistent content for alias aggregates 0.14869371 and
0.7514479.
As encountered in a Delete Filespace. Accompanied by message ANR0859E
Data storage object erasure failure, DELETE FILESPACE process aborted.
You may have to contact Support for resolution. Possible cause is that
a storage pool volume went away, leaving the database entries for the
filespace objects orphaned. Run a Select on the Backups and Archives
tables, seeking the Object_Id that is the lower portion of the bitfile
number (7514479) and try to track the object name to the volume it is
supposed to be on. If the volume is inadvertently out of the picture
and can be reinstated, you would be in luck; else you may have to do an
Audit.
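Bitfile numbers like 0.7514479 are in dotted high.low form, and as noted above it is the low half that serves as the OBJECT_ID to hunt for. A trivial sketch of the split and the resulting Select (the helper names are mine, purely illustrative):

```python
def split_bitfile_id(bitfile):
    """Split a dotted bitfile number like '0.7514479' into its
    (high, low) integer halves; the low half is the OBJECT_ID."""
    hi, lo = bitfile.split(".")
    return int(hi), int(lo)

def object_id_query(table, bitfile):
    """Build the Select to run against the Backups or Archives table."""
    _, object_id = split_bitfile_id(bitfile)
    return "SELECT * FROM %s WHERE OBJECT_ID=%d" % (table, object_id)
```

Usage: object_id_query("BACKUPS", "0.7514479") yields the Select on OBJECT_ID=7514479.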
ANR9999D bfcreate.c(1906): ThreadId<18> Destination switched from BACKUPPOOL to
MAGPOOL in the middle of a transaction. [The pool names don't matter.]
Typically seen when the TSM server is upgraded from v4 to v5 without
regard for the levels of existing clients, and a v3 client (possibly
employing DIRMc) attempts to then back up data to the upgraded server.
The server upgrade finally put the server out of the reach of the
antiquated client. IBM stipulates what client-server level mixes will
work (and those would be reasonably contemporary mixes): outside that
range, the mix is untested and unsupported - and obviously may not work.
ANR9999D bfutil.c(3276): ThreadId<51> Unnexpected error obtaining AUX bitfile
information. Callchain of previous message: 0x0000000100017c74 outDiagf
<- 0x00000001002082 cc bfIsAuxFile <- 0x00000001003720f8 DoBackQry <-
0x0000 000100380ccc SmNodeSession <- 0x0000000100434f84
HandleNodeSession <- 0x000000010043ad68 smExecuteSession <- 0x0
00000010042d920 SessionThread <- 0x0000000100007fd0 StartThread <-
0x09000000004e9244 _pthread_body <- (SESSION: 2439)
How ugly is that? Probably accompanied by ANR0538I. One customer
encountered this where database activity was too intense: spreading out
the workload relieved the situation.
ANR9999S Bitfile not found
Seen preceded by "Invalid Object header state in Retrieve Operation"
Could be when you have done a DELete Volume on a primary stgpool volume,
and/or when the primary tape is bad (i.e, unavailable or destroyed), and
some tapes in the backup stgp group are bad also.
ANR9999D dballoc.c(802): Sequence number mismatch for SMP page addr 417792;
HeaderSmpNum = -1, Expected = 408.
Database is corrupted, often involving a rude termination of the
server. MVS users have reported seeing this when deleting filespaces,
and believe that restarting the server right after a deletion helps
them.
ANR9999D DFQRY(449) Missing row for bitfile 0.29693264.
Has been seen when attempting to delete a volume but encountering message
"Volume still contains data", and cannot be deleted. The only reported
solution is to 'DSMSERV AUDITDB FIX=YES DISKSTORAGE'
ANR9999D dsalloc.c(1899): Error writing to volume /dev/rstorage_pool: execRc=-1,
summaryRc=-1.
Encountered when writing to this raw logical volume. Because of the
write error the volume is set to readonly (message ANR1411W).
ANR9999D dsvol.c(501): Error 2 creating bit vector DSKV0000010123 for disk
/dev/rlv-hsm-stgpvol2.
Experienced when performing a DEFine Volume to add a raw logical volume
to a storage pool. The error code 2 translates to BVRC_TOO_LARGE: the
*SM server does not allow the definition of volumes which are larger
than the largest size which the operating system supports for a file.
(TSM itself does not support a volume larger than 1 TB.)
Your recourse is to instead define smaller volumes.
ANR9999D Error reading from standard input; console input daemon terminated
Perhaps you didn't start the server from the server directory, and the
server cannot find some config files.
ANR9999D icrest.c(2076): ThreadId<0> Rc=33 reading header record.
Contrary to what an errno 33 is supposed to mean, this indicates that no
drives were available to mount the needed tape volume, as during a
dsmserve restore db.
ANR9999D icstream.c(1047): ThreadId<9> Invalid record header found in input
stream, magic=_____
You may be trying to restore a TSM db from an older TSM level to
5.1.x.x, which cannot be done. See APAR IC33690.
ANR9999D icvolhst.c(4329): Error Writing to output file
You have too little remaining disk space, as in either the file system
filling or because disk quotas prevent you from using more. In Unix,
use 'df' and/or 'du' commands to examine file system capacity, and
consider deploying the public domain 'lsof' command to see open files.
ANR9999D imexp.c(3405): Error comparing deletion date for object 0 5045722.
If you are using server-to-server functions, it may be that the times
on the two servers are not synchronized.
ANR9999D imutil.c(1296): Error deleting object (0 2361348)
As seen in a Delete Volume operation for a volume containing Archive
data. The second number in the pair is the OBJECT_ID in the Archives
database table: you could perform a Select to identify the name of the
file involved; and *maybe* you could perform a Delete Archive operation
from the owning client system to get the problem file out of the
database.
ANR9999D imutil.c(2555): Lock acquisition (ixLock) failed for Inventory node 17.
This has been seen to occur when you run a Query OCCupancy while an
Import is running.
ANR9999D imutil.c(5570): ThreadId<100> Bitfile id 0.496165168 not found.
Seen by a customer in an EXPIre Inventory. Indicates an inconsistency
in the database. Possible fix: do a Select search on the Backups or
Archives tables on that OBJECT_ID to identify the filespace object, then
via the Contents table identify the volume it is on, then do an
'AUDit Volume Fix=Yes'.
ANR9999D Invalid attempt to free memory (invalid header); called from 10020e2e4
(aftxn.c (643)).
Seen when migration can't work because its target storage pool is
not writable. May be accompanied by msg "ANR1025W Migration process 5
terminated for storage pool <SomeStgpoolName> - insufficient space in
subordinate storage pool. (PROCESS: 5)".
ANR9999D LOGSEG(415) Log space has been over committed - OR ...
ANR9999D logseg.c(498): Log space has been overcommitted (no empty segment
found) - base LSN = 577969.0.0.
Accompanied by: ANR7837S Internal error LOGSEG871 detected. (q.v.)
ANR9999D lvminit.c(1915): ThreadId<0> The capacity of disk '/some/name' has
changed; old capacity 77056 - new capacity 102656.
ANR9999D lvminit.c(1671): ThreadId<0> Unable to add disk '/some/name'
Message set seen in a case where the TSM Database and Recovery Log
volumes were all lost after a TSM shutdown, and the customer responded
by creating new volumes and doing 'dsmfmt -log path size' (only), then
attempted to perform a 'dsmserv restore db [preview=yes]'.
The first message is informational: the second message indicates that
TSM cannot proceed with the volume.
Look out for the former dsmserv.dsk file still being in place, which
may identify the old log and database volumes rather than new ones.
Further, your device configuration server file may name file system
objects which were on the lost disks, and need reworking to reflect
your re-established server object on the replacement disks.
ANR9999D lvminit.c(1872): The capacity of disk '/dev/rtsmvglv11' has changed;
old capacity 983040 - new capacity 999424.
Oh, you're using those dangerous Raw Logical Volumes, where there's no
file system in the logical volume to clue someone in that it's in use
for something, and it appears that someone did a 'chlv', 'extendlv', or
like command to change the size of the logical volume - which speaks to
deficient site administration practices. To deal with this, you first
have to discover who did it, why, and what may be set up to start using
this "empty" space. (If you don't head off this big truck which may be
coming at you, your attempts to recover may be futile.) Then you may
have to restore your db - depending upon what was on that volume - or
use prep routines to replace an empty volume.
ANR9999D lvminst.c(323): ThreadId<0> Error creating Logical Partition Table
for LOG volume ________.
Seen when setting up to restore the *SM database onto another server
machine, and the Recovery Log size exceeds the architectural maximum.
ANR9999D mmsflag.c(4551): Operation 004C6D32 failed with Command Reject.
Accompanied by:
ANR8301E I/O error on library DAFFY (OP=004C6D32, SENSE=00.00.00.27).
Probably: The tape label could not be read. Try another drive to see if
it is a drive problem, rather than volume problem. Look for hardware
error indications. If it is a scratch tape, you could try relabeling the
tape and try mounting it again.
ANR9999D Monitor mutex acquisition failed; thread 0 (tid 537551472).
The BUFPoolsize is too large.
ANR9999D pvrfil64.c(1056): ThreadId<30> Error writing FILE volume
V:\TSM\RECLAIMPOOL1\00000DAA.BFS.
Is there space in the filesystem? The MAXCAPACITY of the device
class...is there that much space available?
ANR9999D pvrntp.c(1838): Error writing EOT to NTP volume xxxxxx
Encountered when someone opens the cap door and ejects a cartridge that
is in the process of being written.
ANR9999D pvrgts.c(4059): ThreadId<9> Invalid block header read from volume
______. (magic=5A4D, ver=20048, Hdr blk=5 <expected 0>, db=0
<262144,262144,0>)ANR9999D icrest.c(2076): ThreadId<0> Rc=30 reading
header record.
During a 'dsmserv restore db volumenames=______ devclass=____': If also
accompanied by messages (ANR8326I, ANR8335I) which talk of a device
class (GENERICTAPE) which differs from what is specified on the command
line, it may indicate that dsmserv could not read a possibly incorrect
devconfig file to ascertain the actual tape drive type, and as a result
may be misinterpreting the contents of the tape.
ANR9999D pvrserv.c(650): Error positioning SERVER volume ___________ to MM/DD/YY
HH:MM:SS 1:0.
May be accompanied by:
ANR9999D icrest.c(2076): ThreadId<0> Rc=30 reading header record.
Encountered during an Import: The label prefix in the device classes
has to be the same on both the source server and the target server.
See APAR IC26603.
Encountered during 'dsmserv restore db': As when refreshing a test
version of a TSM server from your production version. Can be caused by
use of incorrect device configuration file, and the failure to format
the log and database volumes, and perhaps using a different server
options file than was on the source server (where it may have specified
a different devconfig file). Also seen where the new server was set up
with raw logical disk volumes whereas the original server was actually
set up with its disk as filesystems. Another customer reports
encountering this on a new Windows system where the tape drivers were
not installed (they are not installed by default for a WIN2K server
instance).
ANR9999D smadmin.c(2649): IMPORT: Error - Authorization Rule aleady exists.
Remember that the Import default is Replacedefs=No. An Authorization
rule is a specification that allows another user to either restore or
retrieve a user's objects from ADSM storage, and seems to already be
defined in your destination server. You'll either have to resolve the
conflict or allow replacement.
ANR9999D smexec.c(976): Session NOT allowed in standalone mode.
Some clients are attempting to connect to your server while you have it
in some kind of recovery mode.
ANR9999D smexec.c(1171): Session NNNN with client _____ (WinNT) rejected -
server does not support the UNICODE mode that the client was
requesting.
Maybe: ADSMv3 Windows NT client and server at version 2 and the nodename
parm was not used in the dsm.opt file.
ANR9999D sminit.c(656): ThreadId<20> SM Failed to Initialize - Time Out.
Seen when in a stand-alone dsmserv operation or doing UPGRADEDB, and a
client attempted to initiate a session, when the server is not in a
position to accept sessions; thus this Session Manager Initiation
message. Might also be the result of a hacker doing port scanning at
that time, or the result of anti-virus software in action. See "Server,
prevent all access" for blocking clients during such activities.
ANR9999D smnode.c(6786): Bitfile not found for BackMigr, session NNNN, client
<NodeName> (<OpsysType>), bitfile 0.201155604.
Pursue as in other bitfile issues documented herein.
ANR9999D smnode.c(5323): Error validating inserts for event 14995.
Seen when using the TDP for MS SQL to view data stored on the server by
a different level of that TDP. Backups made with TDP for MS SQL
Version 1 CANNOT be queried or restored using Version 2 nor can backups
made with Version 2 be queried or restored using Version 1: you must
keep TDP for MS SQL Version 1 for as long as you have Version 1 backups
that may need to be restored. See User's Guide topic "Version
Migration/Coexistence Considerations".
ANR9999D smnode.c(7091): ThreadId<594> Error receiving EventLog Verb - invalid
data type, 24944, received for event number 4964 from node (WinNT)____.
Most likely, something screwy in the install of TSM on the named client, resulting
in inconsistencies in what the client is sending to the server. (For
example, imagine a client administrator who never follows instructions
when installing software, and leaves the client scheduler running while
he upgrades the TSM software underneath it, and doesn't reboot
afterward. Or a TSM upgrade was interrupted before it completed, and
the client admin went ahead and started the client scheduler anyway.)
The best course is probably to reinstall TSM on that client, by the
book, and reboot after doing it.
ANR9999D smnqr.c(1132): Bitfile 61238278 not found for retrieval.
Do 'SELECT * FROM BACKUPS WHERE NODE_NAME='UPPER_CASE_NAME' AND
OBJECT_ID=61238278' to get HL_NAME and LL_NAME of the file.
then do 'SELECT * FROM CONTENTS WHERE FILE_NAME='{HL_NAME} {LL_NAME}''
to see whether the file exists on any volume. (Note that a Contents
search is time-consuming.)
It might be that there's a damaged volume that should be audited; or a
reclamation might dispose of the entry if it's old; or you may have to
Audit your database.
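The two Selects above can be generated from the node name and object id; a sketch (the function names are illustrative only; note that node names are stored in upper case, and that the Contents FILE_NAME is the HL_NAME and LL_NAME joined by a space, per the entry above):

```python
def backups_query(node_name, object_id):
    """First step: find the HL_NAME and LL_NAME of the problem file.
    Node names are stored in upper case in the TSM database."""
    return ("SELECT * FROM BACKUPS WHERE NODE_NAME='%s' AND OBJECT_ID=%d"
            % (node_name.upper(), object_id))

def contents_query(hl_name, ll_name):
    """Second step: see whether that file exists on any volume.
    (Remember that a Contents search is time-consuming.)"""
    return ("SELECT * FROM CONTENTS WHERE FILE_NAME='%s %s'"
            % (hl_name, ll_name))
```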
ANR9999D ssrecons.c(2210): Invalid magic number found in frame header
May be seen with optical media: "Invalid magic number" error messages
may be triggered because of not tracking the sides of double-sided
media. The error will typically arise during reclamation or
reconstruction.
ANR9999D tcpcomm.c(1567): SessionThread: return code from setsockopt is 22
Seen under Solaris server. Probably related to TCP buffer sizing in
Solaris. See http://www.sean.de/Solaris/tune.html (search for EINVAL,
which is the errno name for the 22 number). Clicking on the SUN TCP/IP
Admin Guide link there takes you to the gospel, at http://docs.sun.com/
ab2/coll.47.4/NETCOM/@Ab2PageView/1787?DwebQuery=ndd#FirstHit where it
says "Attempts to use larger buffers fail with EINVAL". You might try
changing your client and server TCPWindowsize options to see what might
improve things; or perhaps it may be a Solaris adjustment. Confer with
your Solaris people on this.
ANR9999D xibf.c(664): Return code 87 encountered in writing object 0.9041218 to
export stream.
As seen during a server Export operation. It *might* be simply the
result of a tape drive that needs cleaning. Otherwise, it could be a
storage pool file that has a length issue, for example: pushing the file
out of existence with repeated 'dsmc s' Selective backups could end the
problem. You can identify the file via 'SHow INVObject 0 9041218', as in
this instance.
ANR9999D smlshare.c(2174): ThreadId<81> Server-to-Server protocol error. unknown
verbType=20992.
Unrecognized verbs are your big clue to a level mismatch, as in a higher
level server using verbs which a lower level server is not programmed to
recognize, as in trying to mix a 5.2 and 5.1 server. Pay attention to
required levels and Readme files.

ANS-----(client messages)-----------------------------------------
ANS0101E NLInit: Unable to open message repository
'/usr/tivoli/tsm/client/ba/bin/<Lang>/dsmclientV3.cat'.
A common cause is the permissions on the file or its containing
directory preventing access by the invoker. Another cause is the
DSM_DIR environment variable being used, but pointing to the wrong
directory. In rare cases, the TSM client install package is faulty and
fails to create the <Lang> directory. Another obscure cause is when the
product changes names (e.g., ADSM -> TSM) and new names are used in the
path of installs, but the new installer doesn't uninstall the prior
version, making for a mixed and sometimes conflicting environment.
ANS0102W Unable to open the message repository tdpsdan.txt. The American English
repository will be used instead.
Products such as TDP key on the Locale settings of the machine in which
they are running. The operating system Locale setting may not be one
that the given product supports (Danish, in this case) and so it reverts
to English. To avoid the error message, follow the instructions in the
doc (README): set the LANGUAGE environment variable to "ENU" using
the GUI, or from the command line via 'TDPSQLC SET LANGUAGE=ENU'.
ANS0105E ReadIndex: Error trying to read index for message [message number] from
repository dscameng.txt
Typically, a permissions problem, probably the result of someone
meddling with the client file system.
NT: See if the file dscameng.txt is missing from the baclient directory.
ANS0106E ReadIndex: Message index not found for message _______
Check for the message repository file being in the standard directory
for your operating system. If so, use DSM_DIR to point to it and, if
you get the same ANS0106E error with that, it indicates that the
repository itself is defective. You might try a higher client level to
get a clean copy.
ANS0237E (RC2033) On dsmInit, the node is not allowed when
PASSWORDAccess=GENERATE.
Seen when invoking TDPs or buta. You failed to observe the instruction
in the manual, specifying that the PASSWORDAccess option should be
"prompt" in the Client System Options File.
ANS0239E
As seen using Notes Connect Agent, can be caused by someone having
named a folder in their Notes mailbox a wildcard character such as a
"*". The only way to fix it is to see what mailbox was being backed
up and then look at the folders in that person's mailbox and have them
rename the wildcard to something else.
ANS0263E (RC2230) Either the dsm.sys file was not found, or the Inclexcl file
specified in dsm.sys was not found.
It may not have been found because it's not where it should be: the
dsm.sys that TDP looks for should be in the api/oracle/bin directory
(older TDP) or /opt/tivoli/tsm/client/api/bin - not the dsm.sys that the
standard client software uses. In that TDP uses the *SM API, you can
set environment variable DSMI_DIR to the name of the directory which
contains your dsm.sys file.
If setting up a 64-bit client, assure that no 32-bit stuff is
inadvertently in the mix.
ANS0326E Node has exceeded max tape mounts allowed. (An API message)
The Messages manual fails to come right out and say that the node's
server-defined MAXNUMMP value has been exceeded, perhaps because of an
unusual number of client sessions. If warranted, have the server admin
perform an UPDate Node to boost the value.
ANS0500-0599 These are TDP For Oracle messages
ANS0599 TDP for Oracle: (2106): 05/09/2001:20:12:35 =>(ssrspsfp1-ora)
sbtclose(): oer = 7023, errno = 41.
Errno 41 from TDP means you have exceeded the maximum number of mount
points allowed for your node. Check the value of
"Maximum Mount Points Allowed" for your node on the server by issuing 'q
node <NodeName> f=d' from admin command line. Don't start more sessions
than that value or change that value.
ANS0944E dsmnotes error(s) occurred
Is basically telling you that something is missing or corrupted inside
the database. If the problem is not too severe, it will give you this
warning. But when the database is badly corrupted, it will not give you
any warning, it will just hang. There is no way for the Notes agent to
detect if the database is corrupted or not before the backup happens.
You should use other Notes tools to check and fix the corrupted database
before doing a backup using the Notes agent.
ANS1005E TCP/IP read error on socket = <SocketNumber>, errno = 73, reason: 'A
connection with a remote socket was reset by that socket.'
The 73 is AIX errno ECONNRESET: Connection reset by peer. The client
detected this, and its peer is the TSM server: check the TSM server
Activity Log for that clock time for an indication of why the server
terminated the session. It may be that your TSM server implementation
is relatively new and still has default configuration values, where its
timeout specs need boosting (particularly, COMMTimeout)?
If both ends of the session see it simply disappear (the server did not
cause its demise) then something in between caused it: network
equipment, OS TCP/IP protocol stack. One possibility is value conflicts
with the router/switch, as where autonegotiation of settings is
involved. If no good reason is apparent, your TCPWindowsize values may
be conflicting with your operating system network sizes.
See also: TCPWindowsize client option; ANR0480W.
ANS1005E TCP/IP read error on socket = <SocketNumber>, errno = 104, reason :
'Connection reset by peer'.
The 104 is Linux errno ECONNRESET. Treat same as above.
ANS1005E TCP/IP read error on socket = <SocketNumber>, errno = 232, reason :
'Connection reset by peer'.
The 232 is HP-UX errno ECONNRESET. Treat same as above.
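The ANS1005E variants above differ only in the platform's numeric value for ECONNRESET. A small tabulation sketch (values as cited in the entries above; on an actual client, the platform's errno.h, or Python's standard errno module, is the authoritative source):

```python
# ECONNRESET ("Connection reset by peer") values per the entries above.
ECONNRESET_BY_PLATFORM = {
    "AIX": 73,
    "Linux": 104,
    "HP-UX": 232,
}

def is_connection_reset(platform, code):
    """True if the given errno on the given platform is ECONNRESET."""
    return ECONNRESET_BY_PLATFORM.get(platform) == code
```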
ANS1005E TCP/IP read error on socket = <SocketNumber>, errno = 10053, reason :
'An established connection was aborted by the software in your host
machine.'.
The 10053 TCP/IP errno indicates that session was terminated at the
other end of the connection, which is to say by the TSM server. Check
its Activity Log for reason.
The client may attempt reopen, msg ANS1809W.
ANS1005E TCP/IP read error on socket = <SocketNumber>, errno = 10054, reason :
'Unknown error'.
The 10054 TCP/IP errno is probably from Winsock, reflecting a connection
being reset by peer, which in TSM terms means that the server terminated
the session with the client. Thus, the place to look is in the server
Activity Log, which should explain why it did so. It may be that server
timeout values are too low. If the server log also shows a mystery
disconnect, that indicates you are having networking problems.
ANS1005E TCP/IP read error on socket = <SocketNumber>, errno = 10054, reason :
'An established connection was aborted by the software in your host
machine.'
The TSM 5.2 server has the ability to prevent clients from initiating
both manual or scheduled sessions by setting the node's
SESSIONINITiation parameter to SERVEROnly. If you have the correct
HLAddress (IP addr) and LLAddress (port) specified and you get this
error, either when attempting to connect manually or via a client
polling schedule, then probably the parameter is set to the SERVEROnly
value. A value of Clientorserver is necessary for the client to be able
to spontaneously contact the server, as in a human-invoked session.
ANS1017E Session rejected: TCP/IP connection failure
See: ANS4017E
ANS1025E Session rejected: Authentication failure
May occur in the TSM 5.2.2.0 Windows client, if the password already
exists in the Registry key of this node:
HKEY_LOCAL_MACHINE\SOFTWARE\IBM\ADSM\CurrentVersion\BackupClient\
Nodes\<Node_Name>
The workaround is to delete the registry key before authentication: the
key will be rebuilt during authentication, as Administrator initiates a
TSM client-server operation. (In rare occasions, the supposed Admin user
does not actually have the permissions expected, so check that.)
An accompanying Error 2 in Windows means File Not Found and may suggest
that the HKEY value is being subverted by an environment variable or
client option like PASSWORDDIR.
ANS1026E (RC136) Session rejected: Communications protocol error
Has been seen in restoring large files in the doofy old MVS TCP/IP
environment, with a buffer overflow in TCP/IP happening at every
maximum sequence number: instead of wrapping back to zero, the TCP
session dropped.
Also seen with "bad" NIC drivers; examine your log for TCP communication
errors, or watch your switch: the NIC may be renegotiating (often) the
speed & duplex settings, which may be avoided by defeating
autonegotiation.
May accompany dsmerror.log msg "sessRecvVerb(): Invalid verb received."
See also ANR0484W
ANS1028S Internal program error. Please see your service representative.
Seen when Retrieving a file...waiting for a tape mount, the "Retrieving"
message appears, then ">>>>>> Retrieve Processing Interrupted!! <<<<<<".
See the dsmerror.log for supplementary info; and/or the server Activity
Log. There is often nothing reflecting a problem in the Activity Log,
which would indicate the client reacting to something within its
environment.
Can be caused by having Archived a file with an ADSMv2 client and then
trying to retrieve it with an ADSMv3 client.
If no Activity Log indications of a problem, the problem may be the
result of the client system operating system level having been boosted,
as seen when IRIX v5 started numbering errno values at 1000.
But another, much more trivial cause is the user having run out of disk
quota during a Retrieve or Restore.
During a backup, watch out for running out of disk space (or disk quota)
for the client logs.
ANS1029E Communications have been dropped.
Could be caused by having the PASSWORDAccess Generate option, and the
NODename option as well. If so, eliminate the latter.
ANS1030E System ran out of memory. Process ended.
A poorly phrased message which misleads most customers: This is not a
real memory issue, as modern computers use virtual memory; and it is
almost never the case that the system ran out of virtual memory, but
rather that your client process ran out of its allotment of virtual
storage, typically during an ordinary incremental backup, which
accumulates and sorts filenames in virtual memory. See the TSM message
description. Tends to be seen mostly on personal computers.
If a Mac OS 7-9 system, boost the application memory size in the Get
Info box.
If a Unix system (including Mac OS X, which is Unix):
- The problem usually is that you exceeded the Unix Resource Limits
values for memory utilization, defined for either your account, or
any process in the system.
Verify with 'ulimit -a' (sh/ksh/bash), 'limit' (csh/tcsh), or the equivalent for your shell.
In AIX, also check /etc/security/limits definitions and make sure
that root's memory utilization is not artificially constrained. (A
value of zero or -1 implies "unlimited".)
- In Solaris, this can be a consequence of using option
"LARGECOMmbuffers Yes", and happens principally for non-root users.
The fix for this problem is to do the following:
1. Become root
2. Append the following line to the file /etc/system:
set shmsys:shminfo_shmmax=2097152
If your /etc/system already contains such a line make sure its
value is at least 1500000.
3. Reboot the system by issuing "reboot"
- Check the Backup log to see where the thing ended: this may be the
case of a circular symlink causing ADSM to go in circles until
virtual memory is exhausted. You might also do a
'find DIRNAME -type l -ls' to inspect symlinks in suspected
directory.
- In AIX 4.3.3 there is an interesting JFS architectural situation
involving exhaustion of the .indirect segment for the file system
relative to files >= 32 KB.
See IBM site item swg21162093 and pTechnote0777.
If a Windows system:
- Close all unneeded applications and services, to free memory.
- Change LARGECOMmbuffers and/or MEMORYEFficientbackup (q.v.).
- Your system virtual memory may simply be inadequate: backing up 100
GB of small files via standard Incremental requires a lot of memory
for filename matching during the backup.
- See also notes under ANS9999E.
(Where a system's virtual storage is actually exhausted, there would be
major, obvious manifestations in the system. In the case of AIX, it
would issue SIGDANGER signals to all processes warning of impending
virtual storage exhaustion such that they could end gracefully before
AIX was forced to do SIGKILLs to contend with the problem. In the case
of Solaris, where /tmp is often defined as virtual storage, various
processes would fail in writing to /tmp. If Unix did have to kill off
processes, this would be evident in your Unix process accounting
records (check there). In addition, your AIX Error Log (errpt) should
contain PGSP_KILL entries around the time of the problem. If this was
not in evidence, then it suggests that it was the case that your
process exceeded its Unix Limits value for memory utilization.)
ANS1033E An invalid TCP/IP host name was specified
Check permissions on /etc/hosts, /etc/resolv.conf, /etc/nsswitch.conf
and like network configuration files to assure that they are publicly
readable. Check that you have DNS service.
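The readability check can be scripted; a minimal sketch over the files
named above:

```shell
# A world-unreadable copy of any of these files can yield ANS1033E
# for non-root users; report each one's status.
for f in /etc/hosts /etc/resolv.conf /etc/nsswitch.conf; do
    if [ -r "$f" ]; then
        echo "$f: readable"
    else
        echo "$f: missing or not readable"
    fi
done
```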
ANS1035S Options file '/usr/tivoli/tsm/client/ba/bin/dsm.sys' not found
Multi-user systems (Unix) require both a dsm.sys and dsm.opt: make sure
you have both. In using the *SM API, set the DSMI_CONFIG environment
variable to the full-path name of your dsm.opt file. Check any DSM_DIR.
Might also happen if the file permissions don't allow the invoking user
to use the file.
See also ANS0263E.
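For API users, the environment setup might look like the following sketch
(the paths are the conventional AIX/Linux client install locations, an
assumption - adjust for your system; DSMI_DIR is the companion API
variable):

```shell
# Point the TSM API at its configuration; DSM_DIR is where the B/A
# client looks for dsm.sys. These paths are typical defaults only.
export DSMI_CONFIG=/usr/tivoli/tsm/client/ba/bin/dsm.opt
export DSMI_DIR=/usr/tivoli/tsm/client/api/bin
export DSM_DIR=/usr/tivoli/tsm/client/ba/bin
# Both option files must exist and be readable by the invoking user:
# ls -l "$DSMI_CONFIG" "$DSM_DIR/dsm.sys"
```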
ANS1036S Invalid option '_OptionName_' found in options file 'file-name' at line
number : _____
The client option is not considered valid by the software.
Do 'dsmc Query Option' to check client options. Refer to the B/A client
manual for details on coding options, as supplemented by any README
documentation supplied with the specific release level software.
Check to see if the option is in the right file (dsm.sys vs. dsm.opt),
or if it needs to be within a server stanza.
If using a TDP, assure that you are using its options file, not the one
used by the Backup/Archive client.
In TSM 5.1, if the Windows Share name contains a dollar-sign on an
Include (like include \\remote\test$\*), this was a defect, fixed by
IC36467.
ANS1038S Invalid option specified
In Unix, typically because you coded a dsm.sys option in dsm.opt.
ANS1063E Invalid path specification
Accompanied by like "ANS1228E Sending of object 'F:\*' failed"
or "ANS1228E Sending of object '\\something\g$' failed".
If an incremental backup, the account that is running the scheduler
service (SERVICE) does not have full rights/permissions to that drive,
as in SYSTEM account permissions no longer set for the drive. (If it
works manually, but not as a scheduled event, it is almost always a
permissions issue.) If Windows, and the failure had been on a drive
letter, try \\servername\share_name instead. If the problem persists,
you can try resetting the password for the domain admin ID under which
the problem child's TSM scheduler service is running.
(Double-click on the TSM Scheduler service listing, and switch to
the "Log On As" tab.)
If performing a 'dsmc Backup Image', you probably specified /dev/____
instead of a file system name when the logical volume is defined as a
file system and is mounted. Or you specified the /dev/ character device
name for a device rather than its block device name.
ANS1068E Device is not local
In doing a Backup Image on AIX, you probably specified like /dev/hd2 but
/etc/filesystems does not contain that spec for the logical volume which
contains a file system.
ANS1071E Invalid domain name entered: '_________'
May result from doing like 'dsmc i /home/ians/projects/hsm*/* -su=yes':
you cannot use wildcards in directory / folder names.
If not using wildcards, and you are specifying like 'dsmc i /etc',
instead try 'dsmc i /etc/': TSM is rather dogmatic about specs, and
expects that an object specified without a trailing slash is a file
rather than a directory, and here /etc is a subdirectory of / rather
than its own file system. (TSM will recognize a true file system without
a trailing slash spec.)
ANS1073E File space correspondence for domain 'domain-name' is not known.
The number defining the correspondence between drive letter or file
system (domain name) and volume label is not known to the server.
This might be caused by the specified name not being recognized by ADSM
as a Domain (filespace) because it specifies the filespace name as a
stem and is followed by a directory name which causes ADSM to think that
the whole thing is the domain name. The solution in this case is to set
off the filespace portion of the name with braces.
ANS1074I *** User Abort ***
May appear only in the dsmerror.log - with no complementary server
Activity Log error indications!
Some customers report experiencing this client-side message when the
server disk storage pool runs out of space, or lack of mount points.
The 4.1.2 client level had a defect in failing to emit other messages
describing the actual problem.
In TSM5, setting client option RESOURceutilization to 1 may prevent the
intermittent error, by preventing thread switching.
ANS1075E *** Program memory exhausted ***
*SM thinks: The program has exhausted all available storage.
*SM recommendation: Free any unnecessary programs, for example,
terminate and stay resident programs (TSRs), that are running and retry
the operation. Reducing the scope of queries and the amount of data
returned can also solve the problem.
If a Unix system, check Unix Limits values. Assure that the system is
not running out of virtual storage. If AIX, and still using ADSM, you
may be in need of more than the single memory segment that AIX allows by
default. (AIX TSM employs the Large Program Support conventions to
avoid this situation, as verified by Richard Cowen.) You can modify the
ADSM server module to use LPS, as follows:
The amount of memory that the process needs may exceed the size of one
data segment (256 MB), which is the default number of segments a
process may use. The process is in this case killed by the system.
The work-around for this is to enable the program to be able to use
more than one data segment by enabling Large Program Support, using the
following commands:
cd /usr/lpp/adsm/bin
cp -p <Pgm_Name> <Pgm_Name>.orig
/usr/bin/echo '\0200\0\0\0' |
dd of=<Pgm_Name> bs=4 count=1 seek=19 conv=notrunc
which causes the XCOFF o_maxdata field (see <aouthdr.h>) to be updated.
This allows the program to use the maximum of 8 data segments (2 GB).
Choose the string to use for a given number of data segments from
the following table:
# segments vm size string
------------------------------------------------
8 2 GB '\0200\0\0\0'
6 1.5 GB '\0140\0\0\0'
4 1 GB '\0100\0\0\0'
2 512 MB '\0040\0\0\0'
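To see what that echo|dd pipeline actually does, here is a sketch run
against a throwaway scratch file rather than a real TSM binary (printf
stands in for AIX's escape-interpreting /usr/bin/echo, for portability):

```shell
# Write 0x80000000 (octal \200 \0 \0 \0) into 4-byte word 19 - byte
# offset 76 - of a scratch file, exactly where the XCOFF o_maxdata
# field sits in the real binary.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=4 count=32 2>/dev/null       # 128-byte dummy
printf '\200\0\0\0' | dd of="$f" bs=4 count=1 seek=19 conv=notrunc 2>/dev/null
patched=$(od -An -tx1 -j76 -N4 "$f" | tr -d ' ')
echo "$patched"                                         # → 80000000
rm -f "$f"
```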
ANS1076E *** Directory path not found *** (same as ANS4078E)
In the dsmsched log, the msg follows the problematic name.
The command was given a parameter which it took to be a file system
object name, went looking for it, but could not find it.
The most trivial cause is a misspelling, or that the DOMain or command
line specifies a subdirectory which was removed from the system.
Could be something as silly as forgetting the hyphen in front of a
command line option, like "description=" instead of "-description="
or "archmc=" instead of "-archmc=" such that it looks like a filespec
instead of an option.
You may have incorrectly specified the object to be backed up, as in
perhaps something like "/filesys/.../*" in a client schedule.
May be caused, in Unix, by an invalid symlink, such that you have to fix
it and repeat the operation which stumbled onto it.
Another possibility is that the file system type is odd...one that the
client is not programmed to recognize and handle (such as a newer file
system type in Linux, being tried from an older client).
Netware: Might be a Rights issue. Watch out for the situation where the
NWUSER is logging into the server and not the tree: in certain
applications, if you specify the server name instead of the NDS tree, it
will default to Bindery login. If Netware server Bindery Context is not
enabled, the volume might not be recognized since the needed
authentication did not occur. Less likely: try running a vrepair on the
affected volumes and then retry incremental backup.
A bizarre cause of this error was a user employing the Selective backup
command on the content of his Include lines.
ANS1078S *** Unknown system error <Error_code>; program ending ***
"An unknown and unexpected error-code occurred within the client
program. This is a programming failure and the client program ends."
In Unix: By reason, the "system error (nnn)" should reflect the Unix
errno global variable value returned by a system subroutine. However,
you may find that the errnos in your /usr/include/sys/errno.h do not go
up that high - which indicates that the errno value is garbage: perhaps
a stale value already present when the TSM client module called the
system subroutine, such that upon return the client believes the errno
value is meaningful, and reacts.
Might be due to running an old client on a new opsys, where the opsys
has error codes that are newer than when the client program was written.
Upgrading the client usually eliminates such errors. If not, then it is
purely a product defect and should be reported to the vendor.
Circumvention: Well, the error is there, and you need to get work done
despite it. Consider changing variables in a controlled manner seeking
one which helps, such as TXNBytelimit.
ANS1079E No file specification entered.
In creating a client schedule to Archive files, you may have forgotten
to specify the files to be Archived in the OBJects parameter of the
DEFine SCHedule command? That is, unlike the Incremental command,
Archive does not assume file objects; and the OBJects parameter is
required when ACTion=Archive. Note that your Include-Exclude list will
also be observed when the archive operation is actually performed, so
you can specify an alternate management class on an Include statement.
ANS1081E Invalid search file specification '/usr/stuff/*/fonts.info' entered
The given spec string contains invalid characters or a wildcard in the
file system name (Unix) or drive name (Windows). The most likely cause
is attempting to use wildcards for directories, particularly where the
restoral is "in place", rather than to an alternate place... As the Unix
Client manual says: "In a command, you can use wildcard characters in
the file name or file extension only. You cannot use them to specify
destination files, file systems, or directories." Alternately, it may
be that you invoked 'dsmc' to enter interactive mode, and then entered
the filespec in quotes, which might cause a client at a given
maintenance level to take the wildcards as literal characters instead of
as wildcards: quotes are for the OS command level, where you need to
keep the shell from expanding wildcards - don't use quotes in dsmc
interactive mode. Another possibility is there being multiple filespaces
with common name ingredients such that you need to explicitly delineate
the filespace portion with braces: {/usr/stuff}/*/fonts.info.
ANS1082E Invalid destination file specification '/usr/here' entered
You attempted to perform a file system restoral like:
dsmc restore -su=yes '/adsmbkup/usr/there/*' /usr/here
where the destination is a new file system, and got this rejection, as
seen under the AIX TSM 3.7.2 client. The problem is that in the absence
of a trailing slash, TSM thinks that the destination is a file rather
than a directory; that is, you told it that the restoral was a "many to
one". What you have to do is specify the destination as "/usr/here/"
and it will work.
ANS1086E File not found during Backup, Archive or Migrate processing
The file was probably transient, and went away between the time that TSM
got a list of files to process to the time that it got to this file.
If you believe that this file should not have been included for
processing, see: Include-Exclude "not working".
ANS1092E No files matching search criteria were found [same as ANS4095E]
ANS1102E Excessive number of command line arguments passed to the program!
(Might also be seen/reported as "too many arguments passed to the
program.")
May be accompanied by "ANS1133W An expression might contain a wildcard
not enclosed in quotes".
The dsmc client command has a self-imposed limit on the number of file
specifications that may be passed on the command line. (The intention is
to "protect the customer from himself", as in inadvertent "runaway"
situations where a wildcard might supply a large number of filenames to
an operation.) Limits:
Query: 1 Restore: 2 Retrieve: 2
Archive: 20 Delete: 20 Selective: 20
It is with Archive, Delete, and Selective that you typically seek to
pass a large number of file names. If they are unique names, you are
forced to specify only up to 20 per command invocation. If they have
common elements, you may be able to use wildcards. In a Unix
environment, at least, you should then either put the whole file
specification in quotes, or put a backslash (\) before each wildcard
character, to keep the shell from expanding the wildcards.
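The effect of quoting can be demonstrated without invoking dsmc at all;
the scratch files below are illustrative:

```shell
# Unquoted, the shell expands the wildcard into one operand per file,
# which is how a pattern can exceed dsmc's 20-operand limit; quoted,
# dsmc would receive the single pattern and expand it internally.
d=$(mktemp -d)
touch "$d/a.dat" "$d/b.dat" "$d/c.dat"
set -- $d/*.dat;   unquoted=$#    # 3 operands after shell expansion
set -- "$d/*.dat"; quoted=$#      # 1 literal pattern operand
echo "unquoted=$unquoted quoted=$quoted"
rm -rf "$d"
```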
One user inadvertently got this error by forgetting to put '#' before
comments in the dsmc sched line in the Unix /etc/inittab file.
See also: Continuation and quoting; "dsmc command line limits";
-REMOVEOPerandlimit; Wildcard characters
ANS1103E Invalid management class entered
In Archive: The management class for this file does not have an archive
Copy Group, and so the file cannot be archived. This can be caused by
having defined a management class, but not having done the
ACTivate POlicyset command to have it participate in the Policy Set.
ANS1105E The management class for this file does not have a valid backup copy
group. This file will not be backed up.
Check your server definitions, and review administrative changes.
Remember that when you do a backup, you're doing more than backing up
current files - the client is also telling the server what files no
longer exist on the client, such that those objects which have existed
in server storage can now be marked for expiration. Those files in
server storage were associated with a given management class. If you
delete that management class definition and the files are still in
server storage, you might run into this situation. If this is the case,
recreate the old management class definition.
ANS1107E Invalid option/value: '-PITDate'
Lazy programming fails to be specific. The problem is typically that
the date format employed in the value is inconsistent with prevailing
date format options.
ANS1107E '-Clusternode='yes'' invalid option / value pair in dsm.opt file.
That is the format for command-line options: in the options file,
options should be specified like: CLUSTERnode Yes
ANS1108E Invalid option (-POSTSchedulecmd) for the INCREMENTAL command
Or similar. Commonly, you tried to specify an option on the command
line when it is legal only in the options file, per the manual.
ANS1115W File '_____' excluded by Include/Exclude list
When running an Archive or Selective operation against a file spec which
contains files that are excluded, this message should be issued for each
excluded file; and if the operation is via schedule or batch, the return
code should be 4. (This should occur regardless of whether QUIET is in
effect.) The rationale is that Archive and Selective operations are
explicit requests for objects to be sent to TSM storage, and that if
they are not sent because of an Exclude, you very much should be made
aware of that...particularly with the preservational intent of Archive.
In contrast, the message would not appear for an Incremental type backup,
where the file set is implicit: Incremental is not an operation in which
files are explicitly required to go to the server.
ANS1115W File '/tmp/whatever' excluded by Include/Exclude list
In Unix, /tmp is defined by Tivoli to not be backed up, so even if you
do not have /tmp excluded in your inclexcl, it does not want to back up
anything in /tmp, whether by Incremental or Selective backup. See: /tmp.
ANS1128S Invalid Management Class assigned to directories. Please see the
error log.
Are you using DIRMc, but it refers to a Management Class which doesn't
have a backup copy group assigned to it?
ANS1134E Drive \\MachineX\d$ is an invalid drive
Also known as the "Invalid drive specification D:" problem.
The simplest cause is that the system or invoker did not have
permission to use the drive. Perhaps the TSM scheduler is running as a
system account and mapped drives are not available: Try changing the
service to run using a local administrator account, and confirm that the
user account has the mapped drive in its profile.
The message may be seen with hard drives other than "C", which can
indicate attempting to operate on a remote drive, which may not be
resolvable because of UNC or other issues. If, on machine MachineX, you
define drive D as shared with the name "D$", then TSM will be able
to back it up.
ANS1149E No domain available for incremental backup
Sounds like the Client User Options file (dsm.opt) got changed so that
it is now lacking a DOMain statement, as for ALL-LOCAL.
ANS1194E Specifying the schedule log
'/usr/tivoli/tsm/client/ba/bin/dsmsched.log' as a symbolic link is not
allowed.
May be followed by "ANS1190E Symbolic link
'/usr/tivoli/tsm/client/ba/bin/dsmsched.log' to '' was successfully
deleted", which suggests that one of the subdirectories in the path is a
symbolic link. (Some software will examine each path element in turn,
not just the final file name.) Also seen with file
/var/log/adsmclient/adsmclient.log where /var/log/adsmclient is
erroneously a file rather than a directory.
ANS1228E Sending of object '____' failed
During an Archive or Backup, the client tried to send the server
either the file itself, for addition to the server storage pool, or
information about the file (attributes update, expire the file), but
that interaction with the server failed due to something invalid. This
will typically occur every time the client job is run.
There may be accompanying messages to explain why, as in:
ANS1063E (lack of permissions); ANS1086E (file not found); ANS1310E
(object too large). Another cause, in Windows: path length exceeding
the maximum of 259 characters. A very ugly case we've seen is where the
files named in the message had been deleted from the client some time
ago, meaning that the Sending action involves the client trying to tell
the server to expire a file whose name is in the list of Active files
which the client obtained from the server when an Incremental Backup
started. The file name in such a case may contain "tough" characters -
probably binary, and most likely binary zeroes. The TSM software should
be programmed to be able to deal with bogus characters in file names, so
this failure should be considered a defect. Consider trying a backup
with -INCRBYDate to avoid filename passing between client and server.
Keep in mind that many message-issuance routines expect normalcy in the
strings they handle, and neither look for nor deal with inadvertent
non-displayable, binary characters. That kind of thing can always throw
off an investigation: what you see is not the reality.
If accompanied by "fsCheckAdd: unable to update filespace on server" in
the error log, it may be that database locks were in effect, as when a
serious operation like Export Node is happening the same time as a
Backup. Be careful to not have conflicting things running.
With .NET, note that *.cch.* files are temporaries: consider excluding
them from backup.
ANS1228E Sending of object 'c:\adsm.sys\EVENTLOG' failed
ANS1228E Sending of object 'c:\adsm.sys\IIS' failed
ANS1228E Sending of object 'c:\adsm.sys\WMI' failed
Accompanied by ANS4005E messages. The product has traditionally been
doing "exclude c:\adsm.sys\...\*" - but it should have been doing
"exclude.dir c:\adsm.sys", to avoid race issues. Amend your exclude list
to have the exclude.dir. See APAR IC40016.
ANS1228E Sending of object '/intermail/mss_db/mbox/18/db' failed Read Failure
The file to be backed up could not be read. Do the environmental
problem analysis to find out what the problem *actually* is: an OS error
log may reveal that there is a disk situation at play - which might be
something as resolvable as a loose cable or failed power supply, meaning
that the disk and its data may be intact but currently unreachable.
ANS1230E Stale NFS File Handle
See ANS4010E
ANS1245E (RC122) Format unknown [sometimes regarded as "unknown format"]
See: ANS4245E
ANS1256E Cannot make file/directory.
If a Windows machine, possibly one of the following:
1. "Permission Folder" was deleted by mistake
2. Illegal characters (maybe "?") used in file/dir name
3. File name (including directory path) exceeds 255 characters
You can check this by going to the subdirectory noted in the error
messages, then right-clicking, selecting permissions, and attempting to
reapply the permissions on this directory. Windows should give you
an error at this point saying that it can't apply valid security
permissions to a certain file(s). These are your offending
files... either rename / delete.
Of course, as an expedient you can Exclude the offending objects.
ANS1262E Password is not updated. Either an invalid current password was
supplied or the new password does not fulfill the server password
requirements.
In addition to what the Messages manual advises, do Query Status on the
TSM server and check the Minimum Password Length value.
See: Set MINPwlength
ANS1287E Volume can not be locked.
ANS1287E Volume could not be locked
Seen during attempted restoral of an image backup. Some causes:
- The drive partition was inadvertently left open.
- The dsm.opt file directs log files to that volume.
- The TSM Journal service is running against that drive.
- Having an Internet Explorer window open to that drive.
- Remote users accessing that drive via IE across a network connection.
If despite all Windows closed it persists, do a quick format of the
drive, which has been seen to clear it: a similar message may appear,
but you get the option to go ahead and format anyway.
ANS1301E Server detected system error
See: ANS4301E Server detected system error
ANS1304W Active object not found
The most common, contemporary cause is that a prior Backup stored the
object in TSM server storage as a given type (e.g., regular file), but
in the most recent Backup the object is a different type (e.g.,
directory), and this confuses things. Seen mostly in Netware.
As of 2001/02, this is being seen as a result of a debacle in the
misprogramming of the 4.1.2 client series in the handling of
international characters in filenames - including the lowly question
mark (?). This is also known as "the umlaut problem".
This problem occurred when you migrate from a V2 client to a v3.1.0.5
(or 3.1.0.6) client and when you have international characters in
filenames or directories.
The problem is addressed, but may not be fully fixed, by APAR
IC21764. You should also have: USEUNICODEFilenames No.
A fix is provided in the TSM 4.1.2.12 client. See its README
(IP22151_12_TSMCLEAN_README.1ST).
ANS1309W (RC9) requested data is offline
A nonsensical error encountered during a TDP Exchange *backup* - not a
restoral as the TSM error message explanation suggests. (Consider the
misleading message a programming defect.) The standard fix is to, in
one way or another, set the TDP Mountwait to Yes (/Mountwait=Yes or
options file change). Issue the TDPEXCC QUERY TDP command to verify.
Also check your server stgpool MAXSize value to assure that it allows
the storage of a large incoming blob, and that there are volumes
available for the management class that the client is using.
ANS1310E Object too large for server limits
The object is too large. The configuration of the server does not
accommodate, or allow, such a large object in the storage pool. The file
is skipped. The message is apparently referring to the Stgpool MAXSize
value, if not the physical capacity of the storage pool.
Expect it to be accompanied by: ANS1228E. Probable server message:
ANR0521W. Note that this condition may result when client compression
is enabled and an already-compressed file is sent - and the secondary
compression attempt causes it to expand.
ANS1311E Server out of data storage space
See: ANS1329S
ANS1312E Server media mount not possible. [Same as ANS4312E]
(Note that there may be no "timeout" reflected as the message
description suggests.)
Maybe no tapes available: check your MAXSCRatch value and the number of
tapes available on the server.
Maybe no mount points available: Is the server already busy servicing
other clients such that all drives are in use? Also check your
Storage Pool MOUNTLimit and Node MAXNUMMP values. (Some customers have
reported finding after an upgrade to 5.1.7 that their prevailing
MAXNUMMP value of 2 won't work: they boosted to 4 and msg disappeared.)
It might be that the server's drives are all busy with higher priority
operations and your operation (restore is higher than backup, etc.). You
may simply have to adjust the day's scheduling of server administration
processes vs. client sessions to avoid contention for serial resources
such as tape drives.
See also "Insufficient mount points, 3590" in the CONDITIONS section,
further down in this document.
Starting in TSM 3.7 this can be caused by the REGister Node parameter
MAXNUMMP being zero in a backup/archive operation.
Do 'dsmc q mgm' to see if the client is using the appropriate management
class, and pursue the management class definitions in the server to see
if they lead to a faulty devclass definition such that no volumes of
that kind exist to be mounted.
If collocating by node and backing up directly to tape, the server will
want to append to the last tape that it was filling, but if that is in a
peculiar state it may immediately quit rather than going on to a scratch
or, if no scratches left, to append to any other node volume.
One uncommon cause is that the licensing is incomplete (probably lack of
Advanced Device Support license, as for using a 3494).
Watch out for Query DRive reporting GENERICTAPE instead of DLT, for
example. When backup is attempted, *SM errors out with this error
message. Has been seen to occur after powering off server and tape
drives, so review shutdown procedures.
Check the server Activity Log or dsmerror.log for indications. Possibly
it could not talk to the library to get a tape mounted.
ANS1315E (RC15) Unexpected Retry request
This lazy message does not begin to suggest that the problem is on the
server, in being unable to store the data which the client is trying to
send. Refer to the server Activity Log for the reason. One customer
found the cause to be tape write errors.
ANS1317E The server does not have enough database space to continue the current
operation
Well, your TSM server administrator should be monitoring the server
database over time, and has neglected that administration task such that
the server database has filled. Contact the admin.
ANS1328E An error occurred generating delta file for ______, return code 4539.
Probably, your subfile backup cache is full.
ANS1329S Server out of data storage space [same as ANS1311E]
"Out of space": Typically, your server storage pools are full: you have
exhausted all tapes in that storage pool and need to either add more
tapes or perhaps lower your migration threshold or retention periods.
(Also assure that you are running Expiration regularly, to make space.)
Trivial cause: the destination storage pool is marked Readonly rather
than Readwrite.
Or could be that your Stgpool MAXSCRatch value is insufficient.
Or perhaps you believe you have plenty of free tapes - but are they
perhaps assigned to a different storage pool?
Did you do a 'VALidate POlicyset' as part of activating a policy change,
to check that your changes are consistent and logically correct?
Follow the server policy definitions (as used by the client) downward to
see if they lead to usable space. It's easy to forget to define volumes
or a scratch pool.
Is the incoming data larger than the size of any of the storage pools in
the hierarchy, or over the storage pool MAXSize?
Another cause is in the server not being properly licensed for the
number of clients in play.
Has also been seen with a symlink which points to nothing.
Might also be a Backup or Archive on a file system with complex
directory entries (e.g., NTFS) such that they by default go to the
storage pool with the longest retention, but that storage pool (probably
different from where your data would go) cannot be written to. Look into
ARCHMc and DIRMc.
See also: "Storage pool space and transactions"
TDP for SQL: Take a look at the following options in the User's Guide:
/LOGESTimate=numpercent /DIFFESTimate=numpercent
In some situations, the intial size estimate which the SQL server
relates to TDP is too low.
Further: At the start of the backup for the file, the server reserves
enough space in the storage pool to hold the file based on the client's
estimate. If storage pool caching is turned on then cached entries have
to be released. If the system can not reserve enough space for the file
in the storage pool then it is stored on the next storage pool that has
room for the file. Normally, at least one storage pool is defined with
no size limit, so this normally works. Then the file is transmitted to
the server. If it is not compressed or reduces in size with compression
it is stored in the reserved space and all is okay. If the file grows in
size with compression and COMPRESSAlways=No then the client will stop
sending the file and retransmit without compression and all is ok. But
with COMPRESSAlways=Yes the file will be transmitted until the reserved
space is used up. After that time the "server out of data storage space"
message is issued if there is no free space in the storage pool. Without
caching there is normally free space, but with caching the storage pool
is full by design. It would be nice if the client could wait for the
server to find more space in the storage pool or one of the next storage
pools and then continue the backup.
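The expansion effect described above is easy to reproduce with any
compressor; a sketch using gzip on incompressible data:

```shell
# Compressing already-compressed (here: random, incompressible) data
# grows it slightly - the situation COMPRESSAlways=Yes forces the
# client to push through to the server's reserved space.
f=$(mktemp)
head -c 100000 /dev/urandom > "$f"
gzip -c "$f" > "$f.gz"
orig=$(wc -c < "$f"); comp=$(wc -c < "$f.gz")
echo "original=$orig compressed=$comp"   # compressed exceeds original
rm -f "$f" "$f.gz"
```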
See also: ANR0520W; ANS4329S Server out of data storage space.
ANS1351E Session Rejected: All server sessions are currently in use
May be just that: issue 'Query SEssion' server command and see what's
using them, and review the Activity Log for background. If there are no
sessions, maybe you have "DISABLESCHEDS YES" in your server options
file. Beyond that, consider boosting the "MAXSessions" definition in the
server options file.
ANS1353E Session rejected: Unknown or incorrect ID entered
Can occur when your operating system hostname is not a simple name: is
like "myhost.mycompany.com" instead of simply "myhost".
See also: dsmc SET Access
ANS1357S Session rejected: Downlevel client code version
The server version and your client version do not match such that
sessions cannot proceed. The client code is downlevel relative to the
server. Possibly, the server administrator upgraded the server level
and you weren't advised that it was going to happen; or maybe they did
some rotation among multiple servers. Maybe there are multiple levels
of ADSM/TSM on your client system (as can happen with different versions
installing in different directories) and you invoked the wrong one.
Maybe your client configuration is not now pointing to the right server.
See also: ANR0428W
ANS1327E The snapshot operation for 'C:____' failed. Error code: 673.
Go to www.ibm.com and search on: +ANS1327E +673
Topic "TSM Client v5.2 Open File Support"
(http://www.ibm.com/support/docview.wss?uid=swg21121552) which says:
"There is a known limitation in Microsoft Terminal Services server on
Windows 2000 that prevents the OFS feature from working over a Microsoft
Terminal Services session."
ANS1369E Session Rejected: The session was canceled by the server administrator.
This should be due to 'CANcel SEssion' on the server. Might also be due
to THROUGHPUTTimethreshold or THROUGHPUTDatathreshold in effect.
ANS1410E Can not reach the network path - or -
ANS1410E Unable to access network path
In a Backup, it may mean that the System account doesn't have access to
a drive.
In a Restore on Windows NT, you probably specified restoring a file to a
machine other than the one which did the backup, but using the same file
path name. As of version 3.1.0.5 of the client, ADSM now uses UNC names
for the files. This means that the machine name is part of the file
name. If you specified "original location", then ADSM tried to restore
the file to "node_one" because "node_one" is part of the file name
(i.e. \\node_one\c$\mydir\myfile.doc). Instead, try choosing another
location. The dialog allows you to select a drive and directory to
restore to, which will be the local drive and directory on machine
"node_two". Also check the filespace name on the server: it may need
renaming to accommodate the current client machine and disk names, or
vice versa.
ANS1435E An Error Occurred saving the Key.
Accompanied by: ANS1428E Registry Backup function failed.
and maybe ANS4036E.
Make sure there is sufficient space on your system drive to hold the
staged registry files. Also check for a TSM temporary file left over
from the previous backup: it tries to delete such temp files, but if the
temp file has a SHR attribute, that will prevent deletion. If all else
fails, run the backup with client tracing to reveal the problem in
detail. Other things to check:
- Verify that all the .exe, .dll, and dsc*.txt files in your
..\tsm\baclient directory have the same timestamp on them (or at least
within a couple of seconds of each other).
- Verify that adsm32.dll, adsmv3.dll, dsmntapi.dll, dsmutil.dll,
dsmw2k.dll (if Windows 2000), and tsmapi.dll all have the same
timestamp as the files above.
- Verify that if you run DSMC SCHEDULE in the foreground (while logged
on) it works okay.
- Assuming that all of the above check out okay, try configuring the
scheduler service to use the Local System account. Also, don't do
anything else fancy; just use dsm.opt located in ..\tsm\baclient. Make
it as basic as possible. For now, don't bother with any kind of pre-
or post-schedule commands, include/exclude lists, or any other options
not necessary to test the basic function. For example:
COMMMethod tcpip
TCPServeraddress your.tsm.server.address
PASSWORDAccess GENERATE
NODename yournodename
SCHEDMODe PRompted
"NODename" and "SCHEDMODe" are not necessary if you are already using
the default values of the local machine name and "polling",
respectively.
If this works, then the problem may indeed be related to the particular
account being used, or something else in the configuration.
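The timestamp checks above can be scripted. A minimal sketch, assuming a Python interpreter is available on the client machine (the directory path in the usage comment is only a placeholder for your baclient directory):

```python
import glob
import os

def max_mtime_spread(directory, patterns=("*.exe", "*.dll", "dsc*.txt")):
    """Return the spread, in seconds, between the oldest and newest
    modification times of files matching the given patterns.
    A spread of more than a couple of seconds suggests a mixed-level
    client install."""
    mtimes = []
    for pat in patterns:
        for path in glob.glob(os.path.join(directory, pat)):
            mtimes.append(os.path.getmtime(path))
    if not mtimes:
        return 0.0
    return max(mtimes) - min(mtimes)

# Example (path is illustrative):
# print(max_mtime_spread(r"C:\Program Files\tivoli\tsm\baclient"))
```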
ANS1448E An error occurred while attempting to access NTFS security information
To backup NTFS files, the user also needs the "Manage Auditing and
Security log" user right.
May be accompanied by ANS1228E (q.v.).
ANS1449E A required NT privilege is not held
The user running the backup doesn't have access to the root of the
volume being backed up. If the scheduler is running the backup, you
have to give the SYSTEM id (or whatever id the scheduler is running
under) access to the volume root.
ANS1474E An error occurred using the Shared Memory protocol
This is a blanket message which tells you only that a session using that
protocol could not be established, but does not say why. During
client/server communications the server can close a shared memory
protocol session before the client is ready for it to close. As a
result, the client may still be expecting a message when the session is
closed, and so issues message ANS1474E. (But the server code should
have been fixed to keep this from happening.)
Perhaps you are not adhering to the rules for using Shared Memory
communication. Look at the server Activity Log for indications.
ANS1485E Schedule log pruning failed.
Like other permissions problems, this plagues NT systems. Get the
current schedule log out of the way and let ADSM create a fresh one.
ANS1497W Duplicate include/exclude option 'EXCLUDE *:\...\pagefile.sys' found
while processing the client options passed by the server.
Do 'dsmc Query Inclexcl' to check for such duplication. If not there,
then be aware that TSM respects the entries in registry subkey
HKLM\System\CurrentControlSet\Control\BackupRestore\FilesNotToBackup
and that pagefile.sys should be in this list (unless removed manually or
with some other tool). So if you have an include/exclude list that has
an exclude for this file, and it is in FilesNotToBackup, then that is
the source of the redundancy.
ANS1503E Valid password not available for server '________'.
Seen when trying to establish a PASSWORDAccess GENERATE type client
password via a dsmc operation. May be due to PASSWORDDIR being present
in the dsm.sys options file, but specifying a regular file rather than a
directory, or the directory not existing. Have a good look at the file
system object that your PASSWORDDIR specifies, and make sure that you
are running the dsmc as root.
ANS1505E Trusted Communication Agent has terminated unexpectedly.
Look for the dsmtca module (in /usr/lpp/adsm/bin, or perhaps /usr/adsm)
having incorrect permissions, or zero length.
ANS1512E Scheduled event '____' failed. Return code = __.
Known "Return code" values:
1 May be accompanied by error like
GetHostnameOrNumber(): gethostbyname(): errno = 11004.
TcpOpen: Could not resolve host name.
The common cause is a faulty customer POSTSchedule or PRESchedule
command. IBM topic on this:
http://www.ibm.com/support/entdocview.wss?uid=swg21108971
May be accompanied by msg ANR2579E (q.v.).
4 Often caused by lack of proper volume label on PC type file
system. See also: ANS4036E
12 Can result when *SM tries to backup/archive a file which has
exclusive open. This may be due to a false indication from the
operating environment, such as Novell NetWare, where a service
pack update may be called for. An ANS9999E error in the
dsmerror.log may point out a problem file system object, which in
turn incites Severe error ANS1028S at the conclusion of a
scheduled backup, which results in return code 12.
Circumvention: Exclude problem files.
127 Typical in a client schedule having been defined with
ACTion=Command OBJects='Somecmd ...' where Somecmd is a command
name which is not in the Path which was in effect when the client
schedule process was started. If there may be any doubt about
command findability within Path, then by all means code the
command with a full path specification.
402 General "error processing request" code indicating that errors
occurred in processing the command. You need to look in the
dsierror.log and the like for reasons.
1837 Means you have all objects excluded from backup, as seen in an
Exchange backup where DSM.OPT has a goofy construct like
EXCLUDE "*\...\*" .
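The full-path advice for return code 127, as a sketch of a server schedule definition (the domain, schedule, and command names here are hypothetical):

```
DEFine SCHedule STANDARD NIGHTLY_CMD ACTion=Command OBJects='/usr/local/bin/nightly.sh' STARTTime=21:00
```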
ANS1809E Session is lost; initializing session reopen procedure.
Seen as an NT message, accompanied by preceding messages:
TcpRead(): recv(): errno = 10054
sessRecvVerb: Error -50 from call to 'readRtn'.
Seen as an AIX message, repeatedly during a session. The most innocuous
cause is preemption, where a higher priority process (e.g., Restore)
needs a tape drive which is in use by a lower priority process (e.g.,
Backup). Another common cause is a too-low IDLETimeout value (server
msg IC43445). Alternately, may indicate that you are having local
network problems, likely resulting from an intrinsic error in your
network configuration. Or, you are going through a firewall, with its
own timeout values, which conflict with those between the TSM client and
server, which can cause the session to be cut off and have to restart as
client communication idles while the client searches for candidate
backup files in the file system. Employ the traceroute command, ping -R,
or the like to determine what network elements you are going through.
One customer reported changing TCPServeraddress from a network name to a
numeric IP address to circumvent the problem - but a DNS change like
this should not cure such recurring errors.
See possible explanations under "ANS4017E" - could be a COMMTimeout
value problem.
See also ANS1005E
ANS1810E ITSM session has been reestablished.
Possibly, a networking problem caused the session to be interrupted,
and the client is re-establishing it. The server Activity Log will have
ANR0406I for the session (re)starting. Might be due to an overly
optimistic MAXNUMMP spec for the client node.
ANS1834S Unable to write to '/etc/security/adsm' for storing password.
As the message manual advises, check access permissions and disk space.
/etc/security/adsm should exist, be a directory, and be writable by
root. Are you running the TSM operation as root? (The first execution
after installing a client should be run as root, where PASSWORDAccess
Generate is in effect, to establish the client password in encrypted
form.)
ANS1840E File 'C:\adsm.sys\Registry\VEVPIL01\Users' changed during processing.
File skipped.
It is best to set SERialization in the backup copy group to be
SHARED STATIC, to avoid this error condition.
ANS1865E session rejected: Named pipes connection failure.
The Windows client is attempting to enter into a session with the
Windows server, via the prescribed Named Pipes communication method, but
cannot start the session. The first thing to check is that the server
is actually running and is viable. Also check that the file object
identified by the client NAMedpipename still exists, and is the same as
expected by the server. You can start the server (dsmserv process)
directly from your console, then you can see the messages and what is
happening on your server. Also look for supplementary error indications
in the client and server error logs. (Consider that your Windows system
may have been compromised by one of the innumerable Microsoft
programming gaffes - beware overnight operators taking liberties with
server PCs.) And, Named Pipes are just one client-server communication
choice: you could switch to another method, like TCP/IP.
ANS1874E Login denied to NetWare Target Service Agent '______'.
When logging in to Novell Netware, use a fully qualified NDS ID. For
example, you might use .TSM.BACKUP.BCIT as your user ID. Note that the
leading period needs to be there. Or: An increased number of client
threads consumes more Netware connections, so increasing the number of
available connections for the TSM/Novell ID in Nwadmin may fix it.
See also Novell Knowledge Base Technical Information Document 2944976.
One NOS engineer found: "A specific TSANDS is required in order to get
the Mainframe to login to a 5.1 server to perform backups. If I use
other than the 9/8/2000 TSANDS the server will not allow an unattended
login."
ANS1879E Netware NDS Error on restore processing:
Object .o=organization.ou=organizational_unit.cn=context_name
TSA Error FFFDFE83 - 603 User has no rights to the named object.
The NDS user ID that has been assigned to the client doing the restore
does not have the proper NDS rights assigned. Check the user's effective
rights to make sure that it has supervisor object and property trustee
rights.
ANS1899I ***** Examined 2,689,000 files ***** [sample]
Usually seen during a Restore (can also be in a Retrieve), where *SM is
reviewing the server list of files which may be candidates in servicing
the specifications of the restoral being performed. Expect to see low
CPU utilization for the TSM server, if this is the one demand upon it,
and high I/O (vmstat pi/po) there, and TSM db Cache Hit Pct dropping
(reflecting a lot of unique lookups). Expect the client to slow down,
as more of its memory is consumed, and paging increases. The message
will be prominent where the filespace involved has a very large number
of files (millions). Updating the options file to include "TESTFLAG
DISABLENQR" may be appropriate, to cause Classic Restore operation
instead of No Query Restore. (See notes on this elsewhere in this
document.)
ANS1931E An error saving one or more eventlogs.
May be accompanied by: ANS1228E Sending of object 'C:' failed.
A Windows Event Log could not be backed up. Most commonly, you don't
have access to the C: drive, because of permissions problems. (Someone
may have changed them.) If not that, check for having run out of space
on the C: drive. Check the dsmerror.log for indications, and the
Windows xx Event Logs themselves. Could be the result of a *SM defect:
upgrading the client level may fix.
More extreme: Try deleting the c:\adsm.sys directory, then see if the
event log backup still fails. If not, then add the following lines to
your dsm.opt file:
    tracefile c:\trace.txt
    traceflags eventlog
Then re-run backup of *just* the event log, then examine the trace.txt
file.
ANS1950E Backup via Microsoft Volume Shadow Copy failed. See error log for more
detail.
Well, that Windows service may have failed and need restarting; or you
may need to reboot. The "error log" referred to is the Windows event
log; but don't overlook the dsmerror.log as a source of hints.
ANS2048E Named stream of object '\\server\share\full\path\to\file' is corrupt.
May be reported as "File has a corrupt named stream".
Seen during a Windows restoral. As explained by APAR IC33922:
"NTFS file systems support multiple data streams in a file. The part of
the file that you normally see via Windows Explorer or the DIR command
is the unnamed (default) stream. However, some applications also write
one or more named (secondary) streams to a file. For example, an
application that creates bitmap images might store the main image in
the file's default stream, and a "thumbnail" image in a named stream
(that is part of the same file). This APAR concerns itself with named
streams. Because named streams are supported only on NTFS file
system, this APAR affects only the Windows NT-based platforms (NT 4.0,
2000, and XP). The Windows 9x-based family (98/Me) are unaffected.
When the default stream (the "main" part of the file) is restored
correctly (no TSM warning and error messages) and the named streams are
not restored correctly (ANS2048E) the TSM client shouldn't stop."
Circumvention, should the restoral stop: Use 'testflag continuerestore'
to skip the 'bad' file.
ANS2604S The Web client agent was unable to authenticate with the server
Requires an administrative account with owner privileges to the node.
ANS2609S TCP IP Communication failure between the browser and the client machine
Cause just after installation:
Did you install the web client via the wizard? The initial install
doesn't do it by default. Go to Utilities/Setup Wizard from the menu
bar and install, or check your services panel to see if this service is
installed and started (and set to automatic).
Causes during ongoing operations:
- The LAN connection to the TSM client machine went down.
- You are trying to connect to the TSM client machine using the wrong
port number.
- The Client Acceptor Daemon on the TSM client machine is not up and
running and accepting connections.
ANS2820E An interrupt has occurred. The current operation will end and the
client will shut down.
Mystery message in TSM 5.3. Reported to occur in the dsmerror.log and
dsmsched.log, when the scheduler concludes.
ANS3408W The volume /xxx/xxxx contains bad blocks
Seek it in the Messages manual as ANS13408W(!).
ANS3603E Error creating directory structure
Do not try to restore files via "~USERNAME" form.
ANS4001E Error processing '____': file space not known to server
May be a conflict with lower/upper case. Do Query Filespace to see
what's actually there vs. what you're specifying.
ANS4005E Error processing "<Filename>": File not found
In Novell Netware, usually caused by downlevel TSANDS and/or TSA600
NLM's.
ANS4007E Error processing '<FileName>': access to the object is denied.
In Unix, it may simply be that you are not the owner of an Archived
object being Retrieved, or perhaps you are trying to overwrite a
destination file to which you lack write permission.
If Archiving Files in Windows without being administrator, the user
needs the SE_SECURITY_NAME privilege. This privilege is granted
through the "Manage Auditing and Security Log" right. If the
SE_SECURITY_NAME privilege is not held, GetFileSecurity() (a Windows
function) issues a return code of 1314, which is what ADSM reports in
the dsmerror.log messages you are seeing. At this point there are
several options:
1) Grant the "Manage auditing and security log" right.
2) Code SKIPNTPermissions Yes in dsm.opt. ***** WARNING ***** If this
option is used, NT permissions will not be restored/retrieved when
the files are restored/retrieved.
3) Perform work from the System account.
4) If run from a scheduler, running as a service, and the schedule
references a UNC name directly then the service must be running
under a domain authorised account. Running under the Local System
account (which is the default) won't work because this account
doesn't have any access to domain resources. This could explain
why backup can work from the GUI but not the scheduler. Try
logging the service in as a domain admin account.
5) The file may be one which is always open, like NetWare print queues,
and thus you cannot back it up.
(Also seen as message:
ANE4007 (Sessio: ___, Nod: ______) E Error processing
'D:\labfiles\PHCT_32\OTS\49399900.OLT': access to the object is
denied.
or in Novell:
ANE14007 (Sessio: 1370, Nod: NOV_BLK_EDV_PROD) E Error processing
'SYS:/QUEUES/7702001.QDR/Q_0277.SRV' : access to the object is denied)
ANS4010E Error processing '<SOME_FILE_SYSTEM>': stale NFS handle
What this is *supposed to mean*:
SOMETHING attempted to mount this file system in an "NFS manner" at some
earlier time in this opsys uptime; but the mount failed, and remains
pending, hence the staleness. One way for this to have happened is via
an implied mount request, by virtue of being defined as an NFS file
system in /etc/filesystems or equivalent: at machine start the NFS
mounter would try to mount the remote filesystem, fail, and go on.
(Eliminating
the unnecessary stanza in the /etc/filesystems will prevent recurrence.)
Another means of it happening is someone having done a manual mount
specifying "System:Filesystem". Or some facility might have issued a
system call to do it. But in any case the mount could not complete, and
so the stale handle.
The associated errno label is ESTALE, which would usually be returned by
statfs() or stat().
What this can mean due to faulty ADSM programming:
It is issued any time that ADSM makes a timed stat() system call on
any file system and the stat system call does not return in the allotted
time, as governed by the ADSM NFSTIMEOUT value. (In ADSMv3 PTF 7 you
can reportedly code "NFSTIMEOUT 0" for indefinite wait.)
One circumvention is said to be to remove the 'dsmstat' module.
Another circumvention (particularly with HSM) is to put the undocumented
NFSTIMEOUT operand into dsm.sys, with a 120-second timeout:
NFSTIMEOUT 120
You can also try the more extreme 'fuser -k <filesystem_name>', which
kills any NFS process associated with the file system.
Some PMR info about this:
The issues I was referring to are that a stale NFS error can cause the
client backup to fail instead of skipping the affected filespace, and,
that the stale NFS error should really be a stale FS error. The APAR
which contains these issues is IX86323.
The fix, however, is a bit more complex. The old way clients dealt
with the Stale NFS handle issue would cause file data to be
expired. There was a fix which caused ADSM to stop processing to avoid
that expiration, but now clients fail to complete backups. The planned
fix will be to skip these filesystems so that the backup can complete.
Work is still going on in this area and it looks like the fix will be
in 3.1.0.7, but there is still work that needs to be done to ensure
the safety of the fix so it may be delayed. A workaround is to try
and make the NFSTIMEOUT value larger to give the filesystem a chance
to return to the call.
The condition has also been seen when a CD-ROM is mounted in the
operating system, but the CD itself is physically removed from the
drive. That is, the device cannot respond.
ANS4014E Error processing '/some/file': unknown system error (157) encountered.
Program ending.
See: ANS1078S
ANS4017E Session rejected: TCP/IP connection failure [Same as ANS1017E]
This is what the client sees and reports, but has no idea why.
The cause is best sought in the ADSM server Activity Log for that time.
Could be a real datacomm problem; or...
Grossest problem: the TSM server is down.
If you get this condition after supposedly changing the client and
server to use a different port number (e.g., 1502), and the Activity Log
has no significant information about the attempted session, use
'netstat' or 'lsof' or similar utility in the server operating system to
verify that the *SM server is actually serving the port number that you
believe it should be. (You *did* code the port numbers into both the
client and server options files, right?)
An administrator may have done a 'CANcel SEssion'.
If during a Backup, likely the server cancelling it due to higher
priority task like DB Backup starting and needing a tape
drive...particularly when there is a drive shortage. Look in the
server Activity Log around that time and you will likely see
"ANR0492I All drives in use. Session 22668 for node ________ (AIX)
being preempted by higher priority operation.".
Or look in the Activity Log for a "ANR0481W Session NNN for node
<NodeName> (<NodeType>) terminated - client did not respond within NN
seconds." message, which reflects a server COMMTimeout value that is
too low. Message "ANR0482W Session <SessionNumber> for node <NodeName>
(<ClientPlatform>) terminated - idle for more than N minutes." is
telling you that the server IDLETimeout value is too low. Remember that
longstanding clients may take considerable time to rummage around in
their file systems looking for new files to back up.
Another problem is in starting your client scheduler process from
/etc/inittab, but failing to specify redirection - you need:
dsmc::once:/usr/bin/dsmc sched > /dev/null 2>&1 # TSM scheduler
An unusual cause is in having the client and server defined to use the
same port number!
Might also be a firewall rejecting the TSM client as it tries to reach
the server through that firewall.
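The 'netstat'/'lsof' check can also be approximated from the client side. A minimal sketch, assuming a Python interpreter is available (the host name and the common TSM default port 1500 in the usage comment are only placeholders):

```python
import socket

def port_is_listening(host, port, timeout=5.0):
    """Return True if a TCP listener answers at host:port.

    A quick sanity check, analogous to verifying with netstat/lsof on
    the server, that the dsmserv process is actually serving the
    expected port number.
    """
    try:
        # create_connection performs name resolution plus connect
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (host and port are illustrative):
# print(port_is_listening("your.tsm.server.address", 1500))
```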
ANS4024E Error processing '<SomeFileName>': file write error
Usually a Rights issue when doing a restoral.
ANS4025E Error processing filespace ________: file ____ exceeds user or system
file limit
Check your login filesize limit.
ANS4042S Invalid option 'NODENAME' found in options file ____________
You coded a NODename which is the same as the system hostname, or the
NODename definition is not within a SErvername stanza.
ANS4028E Session rejected: Authentication failure
This message appears all over the console, usually accompanied by
dsmrecalld and similar processes seemingly looping. It signifies an
ADSM defect in having obliterated the client password entry in
/etc/security/adsm/<SRVRNAME> in the face of high activity.
At the client, as root, perform 'dsmc q sch' to trigger a prompt to
enter the password for the client, which will most likely re-establish
things. You should not have to perform an 'UPDate Node' command at
the server to re-establish the password, but be prepared to.
ANS4031S Error processing 'FILESPACE_NAMEPATH_NAMEFILE_NAME': destination
directory path length exceeds system maximum
Can be caused by too long a file name/path name. In NT, one can have
shared directories. If such a file is then given the maximum possible
pathlength (255 chars), that in conjunction with the real NTFS on the
disk causes the path that leads to the shared directory to be longer
than the 255 char max.
In Unix, this may be a recursive directory symlink, which would be
apparent in the reported object name.
ANS4035W File '____________' currently unavailable on server.
This message is usually seen when the tape volume the required files are
on has suffered an I/O error such that the tape has gone at least
'read-only' (message ANR8830E), if not 'unavailable'. Refer to the
server Activity Log for the issue. Look for a corresponding ANE4035W
message therein, as well as perhaps ANR8359E and ANR0541W.
ANS4036E An error occurred saving the registry key.
Can be that the user attempting the backup is not authorized to back up
the registry. Or the C: drive was full: *SM requires space to therein
make a copy of the Registry (adsm.sys directory), to then back up that
copy. Sometimes, deleting the adsm.sys directory and trying again will
allow a successful operation. See also: ANS5166E
ANS4071E Invalid domain name entered: '/some/directory'
Typically means that what you entered was not a file system name, but
rather a subdirectory of a file system; or it is an arbitrary manual
mount point which is not one defined in /etc/filesystems.
If you really need to backup via subdirectory, consider using the
VIRTUALMountpoint option of the Client System Options file.
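For example, a dsm.sys entry, placed within the appropriate server stanza (the directory name here is illustrative):

```
VIRTUALMountpoint /home/projects
```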
ANS4078E *** Directory path not found ***
See ANS1076E
ANS4089E File not found
Probably due to a link to a non-existent file.
ANS4090E Access to the specified file or directory is denied
DFS: The DFS ACL prevents access from Root or cell_admin.
ANS4095E No files matching search criteria were found. [same as ANS1092E]
ARCHIVING/RETRIEVAL: Possible problems...
- You forgot to put a hyphen (-) before a command line option such as
DIRMc.
- You attempted to archive a named pipe (FIFO) or special file.
- You are attempting the operation across nodes and the file system
architectures are incompatible.
- A defect in the ADSM client causes it to think, by virtue of the
file system name, that it is incompatible with the request.
- You may have to enclose the filespace portion of the file pathname in
braces {} to keep it from getting confused as it parses the pathname.
That is, if you have two filespaces, /archive and /archive/blah, how
is *SM to know which is meant when you say you want to go after
archived file /archive/blah/myfile? It's ambiguous unless you are
explicit as to which it is.
- Beware ADSM sensitivity to a slash (/) following the object name: it
basically says that the object is a directory and that the search is
to look for anything below that directory, while omitting the trailing
slash says to report only names matching that one.
In particular, when using the -dirsonly option, specifying a directory
name with a trailing slash (e.g., dsmc q ar -su=y /usr1/me/) will
fail, but leaving it off (e.g., dsmc q ar -su=y /usr1/me) will work.
Conversely, when using the -filesonly option, specifying a directory
name without a trailing slash (e.g., dsmc q ar -su=y -filesonly
/usr1/me) will fail, but adding a slash (dsmc q ar -su=y -filesonly
/usr1/me/) will cause it to work.
RESTORAL/QUERY BACKUP: Possible problems...
- Your username may not be the same as the one which backed up the
file(s). (Root will have universal access.)
- The file was erased and another backup took place, such that the file
is not Active: restore with -INActive.
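To make the braces point concrete, hypothetical dsmc commands distinguishing the two interpretations of the same path:

```
dsmc query archive "{/archive/blah}/myfile"
dsmc query archive "{/archive}/blah/myfile"
```

The first treats /archive/blah as the filespace; the second treats /archive as the filespace, with blah as a directory within it.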
ANS4103E Ran out of disk space trying to Restore <File_Name>.
Retry/skip/abort (r/s/a)? _
Can occur during a RESToremigstate=No file restoral of an HSM-managed
file system, as the restoral speed may overrun dsmmigrate's speed in
migrating files to tape to make room. Usually, by the time you ponder
the message, dsmmigrate has been able to clear space, as verified by
doing a 'df' on the file system. (Note that it is normal for the file
system to fill to 100% during RESToremigstate=No restorals, and that
dsmmigrate is usually able to keep up: you will see the restoral pause
when it is writing progress dots to the terminal, and then resume once
space becomes available.)
Note that you need to respond within the session IDLETimeout limit,
else suffer session cancellation, with manifestation message:
"ANS4017E Session rejected: TCP/IP connection failure".
ANS4105S Internal program error. Failing message value was 16.
Please see your service representative.
This is a message reflecting inadequate programming on the part of the
developers, who have failed to intercept and interpret all the error
conditions they should.
In an HSM file recall this error results from going after a file whose
size is larger than your Unix filesize limit (csh 'limit' command).
ANS4116I One or more files will be stored on offline media.
Do you wish to proceed?
Occurs when an ADSM operation will go to tape and your TAPEPrompt
client option says that you should be prompted. This message can
appear during Backup operations, and in HSM when you add data to a
file system, which in turn causes it to go to the storage pool, and
that pool's high migration threshold is exceeded such that it needs to
migrate some of its holdings to the next storage pool level, which
happens to be tape.
ANS4118I Waiting for mount of offline media.
As in backing up directly to tape and client option TAPEPrompt says to
show the mount wait message. Note that you will typically see an initial
flurry of files supposedly having already been sent to the server before
the mount message appears, then followed by Retry messages. This
reflects the communication medium (e.g., TCP/IP) having absorbed the
initial amount of data in its buffers before transmission actually
occurred; hence, the mount message did not appear after the first file.
Such a mount will also be required in the backup of migrated HSM data,
where the HSM client is in the same system as the *SM server such that
*SM will implicitly perform the backup from HSM storage pool volume to
backup storage pool volume, without recalling the data to the client
file system.
Refer also to "Network data transfer rate".
ANS4123E Unable to read commands entered from keyboard. Exiting...
You attempted to run dsmc in the background (perhaps from
/etc/inittab) but neglected to specify the "schedule" keyword, which
is the only way that command runs without a terminal.
ANS4132I Removal of file space "______' successfully completed.
The ADSM client performed a 'dsmc del filespace' operation. The above
message returns immediately - but the filespace has not actually gone
yet: it will take the server some time to delete all its file object
entries from the database.
ANS4228E Send of object 'somefile...' failed
If accompanied by: ANS4268E This file has been migrated. ...
You tried to 'dsmmigrate' a file explicitly, or perhaps ADSM tried to do
so automatically per the list of migration candidates in the
HSM-managed file system .SpaceMan/candidates file. But the file is
already migrated - you can't migrate it again. This is informational,
not a problem. If this was the result of ADSM trying to honor the
candidates list, run a 'dsmreconcile' on the file system to refresh that
list.
If accompanied by: ANS4089E File not found (during Backup, Archive or
Migrate processing) ...
Most likely, the file was in transition, as in existing during ADSM's
look at the file system repertoire, but no longer there when it came
time to perform the operation.
If accompanied by: ANS4312E Server media mount not possible ...
Typically means that some other session or process (like BAckup STGpool)
is using the tape drives. (See ANS4312E) This results in message
"ANS4638E Incremental backup of ____ finished with 1 failure" at the end
of the filespace backup, and a non-zero "failed" count in the job-end
summary statistics.
ANS4245E Format unknown [same as ANS1245E]
This message means that the data format is unexpected:
- You may be trying to backup or restore data using a client level which
is lower than was used to back up the data originally. (Note that this
can occur in a backup as the client is endeavoring to expire an older
file in the server storage pool.) As client software evolves, it
introduces new features which require changes in the format of the
data as stored on the server. Obviously, an older client cannot
understand data formatting which is beyond its programming.
- You may be trying to mix and match data handled by the API client vs.
either the command line or GUI client. They cannot be intermixed, and
the API cannot even query data stored by the "normal" clients.
See the "API" entry for further info.
ANS4251E File system/drive not ready
As seen in Backup output: Typically refers to an HSM-managed file which
HSM cannot serve, for some reason. One reason: the filespace was
imported and/or a RESToremigstate restoral was done to populate the
file system with stub files, across nodes; but that just yields the
stubs, with no file data in the HSM storage pool.
ANS4253E File input/output error
Seen on NT systems in the presence of a bad file, which will probably
be named in the dsmerror.log, like:
03/21/1998 07:47:03 TransWin32RC(): Win32 RC 1392 from
FioGetOneDirEntry(): getFileSecuritySize
03/21/1998 07:47:03 PrivIncrFileSpace: Received rc=164 from
fioGetDirEntries: E:
\NMCDATA\Images\VB4\TOOLS\GRAPHICS\ICONS\OFFICE
The return code 1392 is from Windows NT, and means that the file is
corrupt or otherwise unreadable. The RC 164 is the ADSM return code,
translated from the NT return code, that indicates a file I/O error
(i.e. same thing as the 1392). Run a SCANDISK against the E: drive
to clean up the corruption.
The Microsoft Windows NT and 95 error codes are in the WINERROR.H file,
which comes with Microsoft Visual C++ (it may come with some other
development packages like Visual Basic as well).
ADSM return code information can be found in the "Using the Application
Programming Interface" manual or the dsmrc.h file that is installed
with the ADSM API.
ANS4255E File exceeds system/user file limits
A file being restored or retrieved exceeds system set limits for this
user; so the file is skipped. Ensure that the system limits are set
properly. Seen in AIX 4.1 with a file of size 2147483640.
ANS4267E The management class for this file does not allow migration.
HSM is not activated in this MGmtclass. You need to do:
'UPDate MGmtclass ... SPACEMGTECH=AUTOmatic'.
ANS4268E This file has been migrated.
Usually follows an "ANS4228E Send of object 'somefile...' failed",
meaning that the file was *previously* migrated. (The "has been
migrated" terminology in the message misleads you to thinking that the
migration just happened.)
ANS4314E File data currently unavailable on server
As in attempt to restore from a tape whose Access value is Unavailable,
which can be due to the tape having been involved in a past error
situation, having been checked out of the library, etc.
ANS4301E Server detected system error [same as ANS1301E]
May be seen when the server tape encounters an I/O error. See the
server Activity Log for the circumstances. Has been seen with a tape
stuck in the drive, as in a failed unload operation.
One customer reports TSM Support recommending use of the option
MEMORYEFficientbackup=Yes - rather specious. Another, Windows customer
reported getting by this by renaming the 'SYSTEM OBJECT' filespace to
'SYSTEM OBJECT OLD' and then reattempting the backup, suggesting a corrupt
filespace. But look in the server Activity Log for the reason for the
problem - don't shoot in the dark.
Might be due to the type of object on the server being different than on
the client, as in having previously backed up a name which was a file,
but has since been replaced on the client with a directory.
This can also occur when the time zone information is not properly
configured: see IBM site Solution swg21153685 "ITSM Server internal
clock does not reflect change in system clock?"
ANS4312E Server media mount not possible [Same as ANS1312E]
Typically occurs when all drives are currently in use: expect to see
ANR0535W in the server Activity Log. The TSM client, particularly a
scheduled backup, will typically wait for a drive to become available,
with msg ANS4118I Waiting for mount of offline media. Check that your
DEVclass MOUNTLimit is not artificially limiting mounts to below the
number of drives actually available.
ANS4314E File data currently unavailable on server
Has been seen in Restore operations. Reinvoke restoral, adding
"REPlace=No" to avoid waste.
ANS4329S Server out of data storage space.
Typically occurs with HSM when the storage pool quota either defaults to
the size of the file system or is otherwise exceeded by an attempt to
write more data into the file system.
See also: ANS1329S Server out of data storage space
ANS4353E Session rejected: Unknown or incorrect ID entered
The node is not known to the server. At the server, perform a
'REGister Node'.
ANS4475E Insufficient authority to connect to the shared memory region
You must be root to use shared memory for client connections.
ANS4503E Valid password not available for server '________'.
The root user must run ADSM and enter the password to store it locally.
ANS4638E Incremental backup of 'FileSystemName' finished with 2 failure
Message resulting from a Backup operation which encountered problems.
(A successful backup generates message "Successful incremental backup
of 'FileSystemName'", which has no message number.) Things seen when
a backup fails:
ANS4312E Server media mount not possible
ANS4089E File not found during Backup, Archive or Migrate processing
(which can occur when a transient file, as in the .Spaceman/logdir
directory, evaporates between file identification and the actual
backup attempt)
ANS4940E File '________' changed during backup. File skipped.
ANS4776E Unable to recall file from server due to error from recall daemon.
Seen when dsmrecalld daemon processes are looping. Has been cured by
at least killing the child process; but may also have to kill the
parent and reinvoke it.
ANS4847E Scheduled event 'SOME_SCHEDULE' failed. Return code = 4.
Appears in client log to indicate that something bad happened during
the scheduled event. There should be another message in there, as in
above Backup stats, saying what the problem was. And the
/dsmerror.log and the server Activity Log should also be consulted.
ANS4928E PASSWORDAccess is GENERATE, but password needed for server.
You need to establish or renew your client system server access
password, from the client root account.
ANS4931S File space [whatever] in System Options File is invalid.
Typically, for VIRTUALMountpoint you specified a file system
subdirectory which is not present; or you perhaps implicitly attempt to
reference a virtual mount point (as via 'dsmc query filespace') and you
are not the owner and are not superuser.
One thing you should not do is code a Virtual Mount Point which will be
a subdirectory once the file system is mounted, because when it is not
mounted there will be nothing there and this error will be produced
whenever anyone on the client issues a dsmc command.
Another possibility is that you did not code the VIRTUALMountpoint
within the appropriate dsm.sys server stanza.
ANS4999E (RC2120) Unable to log message to server: message too long.
API programming message. In the dsmInit() invocation, the application
identification string exceeds DSM_MAX_PLATFORM_LENGTH (16 chars).
ANS5092S Server out of data storage space. [See also ANS1329S]
You're out of space in your storage pools. Do 'Query LIBVolume' and see
if you are out of volumes. See if your volumes are writable (versus
unavailable/read-only). Boost MAXSCRatch if appropriate.
ANS5174E A required NT privilege is not held.
To backup NTFS files, the user also needs the "Manage auditing and
Security log" user right.
If you are using the schedule service, ensure the user for the service
(System, by default) has the rights to the files.
ANS5166E An Error Occurred Saving the Registry Key
See if there is enough space on the C Drive to allow the Registry key to
be saved to the adsm.sys directory. See also: ANS4036E
ANS5503E File '/usr/lpp/adsm/bin/dsm.sys', line 32, value
'DEFAULTServer ADSM.SRV5' is not a valid option.
The names to be used on the DEFAULTServer and SErvername options are *not*
the 64 character server names used in SET SERVER commands, but instead
are stanza names, and are restricted to 8 characters. The person who
wrote the manual confused the two and said that you can use a name of up
to 64 characters on DEFAULTServer and SErvername.
ANS5628E Invalid host name.
Your dsm.sys file needs work, in terms of server identification,
TCPServeraddress.
ANS8001I Return code NN.
You used the administrative client (dsmadmc) to issue a server command,
and that command ended with the resulting return code indicated.
Refer to the Admin Ref appendix on Return Codes for possible numeric
values and symbolic names for the errors that you can use in Server
Scripts.
ANS8001I Return code 3.
In entering a continued server command, you may have neglected to leave
at least one space between operands in the way you continued the command
from one line to another.
ANS8017E Command line parameter 3: 'dataonly=yes' is not valid.
Or similar, where you are certain that, for example, the dsmadmc command
does indeed support the flagged parameter, but for some reason the
client is failing to recognize it as valid. This has been seen to be
caused by a faulty LANG (locale) being in effect for the user login.
ANS8023E Unable to establish session with server
As when you attempt to employ the dsmadmc command to conduct an
administrative session with the TSM server, where the server has either
failed or has not yet completed its initialization. There may be a fatal
condition preventing the server from coming up, such as a full Recovery
Log, in which case you need to start the server by going into its
directory and invoking 'dsmserv' (without the "quiet" option), to see
the failure message.
ANS8034E Your administrator ID is not recognized by this server.
Explanation: The administrator ID entered is not known to the requested
server.
This could also occur if you try to use SERVER_CONSOLE from an admin
client, which is prohibited because the userid is not
password-protected: as its name implies, you must use it from the server
console
ANS9003E dsmrecall: file system for ____ is not in the dsmmigfstab file.
Typically because in performing the dsmrecall you specified a full path,
which includes a symbolic link which makes the path look unlike the one
which HSM manages. Instead, use the true path, or go into the directory
where the file lives and invoke dsmrecall on just the file name.
ANS9094W dsmautomig: no candidates found in file system ________.
With MIGREQUIRESBkup=Yes in effect, data must be backed up before it can
migrate.
ANS9096E User is not the owner of file _____ so file is skipped.
Seen with HSM where some random user is trying to do the good-citizen
thing of a dsmmigrate on a file which the user previously dsmrecall'ed
to examine. Doing a dsmmigrate requires that the userid doing it be the
file owner.
ANS9101I No migrated files matching '<SomeFilename>' were found.
In HSM, you attempted an explicit 'dsmrecall' or implicit recall of a file
(migrated or not, as dsmls reports) and got this error. It can
trivially indicate just the condition that the error is saying.
Also seen in a physical (non-standard) HSM file system migration where
the stub-laden file system is imported, but there's nothing in the TSM
database reflecting any migrated files. (A wacky situation anyway.)
You might also see issues where you are attempting to use HSM on a
client but there's no HSM licensing in the server, where the file access
attempts would result in the following error message in the server
Activity Log:
ANR2812W License Audit completed - ATTENTION: Server is NOT in
compliance with license terms. (SESSION: 16)
Query LICence would show:
Number of space management clients in use: 0
Number of space management clients licensed: 0
ANS9126E dsmautomig: cannot get the state of space management for
/ssa/home04/sscphenk/tmp/exportfs: No such file or directory.
This can be due to the file name having a Newline character embedded
in it (or perhaps other binary) such that HSM takes the path preceding
the newline to be the whole file name, though there is more. Look in
the .Spaceman directory's candidates list, then do 'ls -lb' in
the actual pathnamed file system to expose any binary.
ANS9126E dsmmonitord: cannot get the state of space management for ____:
File table overflow.
ADSM defect, as in APAR IX71926, where a system has many HSM file
systems and an incremental backup causes the system inode table to be
exhausted.
ANS9178E : cannot open file /etc/adsm/SpaceMan/config/dsmmigfstab: No such file
or directory.
This is the HSM file systems control file.
ANS9183E dsmmigrate: file system / is not in the dsmmigfstab file.
You are going through a symbolic link to migrate from the file system.
Use the actual file system name.
ANS9148E dsmdu: cannot find mount point for file system ____
You issued the dsmdu command specifying a file name rather than a
directory name.
ANS9199S Cannot open /dev/fsm
Will appear if the HSM kernel extension (kext) is not loaded.
See: HSM kernel extension loaded? See also: /dev/fsm
ANS9230E Cannot unmount FSM from file system <FileSystemName>.
Message from umount command:
Invalid parameter: U
Typically occurs when you remove HSM management from a file system, as
via 'dsmmigfs remove <FileSystemName>'. Is symptomatic of a defect in
ADSM and/or AIX in performing the umount of the FSM which is mounted
over the JFS.
Otherwise, the problem can simply be that you are sitting in that
directory when you issued the 'dsmmigfs REMove' command, which makes it
impossible for the Unix 'umount' command to unmount the FSM file system
(Device busy condition). 'cd' out of that directory and repeat the cmd.
ANS9267E dsmautomig: File system _________ has exceeded its quota.
Do 'dsmdf' on the file system name: it will probably report a Mgrtd KB
value which is way over the Quota value reported by 'dsmmigfs query'.
ANS9281E Space management kernel extension is downlevel from the user program.
Encountered when a new level of the HSM software was installed over a
live HSM system - a bad thing to do. See: installfsm
(Some customers, not knowing exactly what all the components are in the
client install package, install them all. This is unhealthy, in that it
can, as in the case of HSM, result in a kernel extension being added to
the system, and additional processes running.)
ANS9283K Attempting to access remote file.
HSM has to go to a storage pool to retrieve the involved file, which may
be on disk or tape.
ANS9285K Cannot complete remote file access.
The server may be unavailable. If you recently relocated your HSM
services, you may have neglected to update your client options file to
specify the new and correct HSM server: code MIgrateserver only if the
default server is not also the HSM server.
Or, for HSM, could well mean that there is not sufficient space (dquota
or physical space) in the file system to recall the named file. Consider
doing dsmmigrate on some files to make room for the subject file. (The
dsmmonitord's design is such that it cannot detect such spurious
events.) Typical scenario:
Recalling 1,928,733 /hsm-file-system/hsm-file
ANS9285K Cannot complete remote file access.
** Unsuccessful **
ANS4227E Processing stopped; Disk full condition
Additionally, make sure that all the dsm* HSM daemons that should be
running are running, and that there are no duplicate, conflicting
processes.
Look in the server Activity Log for reasons for the failure, if not
indications that the HSM client is actually contacting the server.
If it looks like some other condition, check the usual:
Use 'dsmmigfs query' to assure that the file system is really under HSM
control. Make sure that dsmmonitord and dsmrecalld are running and that
/etc/adsm/SpaceMan/dsmmonitord.pid and /etc/adsm/SpaceMan/dsmrecalld.pid
reflect them. Consider using installfsm to query that the kernel
extensions are loaded and in effect. Check the client dsmerror.log for
problems.
Another cause is in having redefined the client environment, and
possibly restored the server, where the dsmrecalld and related daemons
are still using obsolete info.
Beyond that, check for filespace name and other consistencies.
ANS9288I File __________ size ____ is too small to qualify for migration.
In a 'dsmmigrate', the file that ADSM HSM examined is itself a "stub"
file, and thus lacks the excess size (typically, >4KB) required for it
to serve as a stub as the file itself is migrated. This is an
informational message - there is no problem.
ANS9297I File ________ is skipped for migration: No backup copy found.
ADSM defaults to requiring the condition that "migrate requires backup",
as defined in the MGmtclass (do 'dsmmigquery -M -D' to check). You can
do: 'UPDate MGmtclass .. MIGREQUIRESBkup=No' to override.
ANS9501W dsmmigfs: cannot set event disposition on session 0 for file system
_______ token = 0. Reason : No such process
Has been seen when HSM has just been installed, but its daemon processes
(dsmrecalld, dsmmonitord) have not been started by virtue of the
/etc/rc.adsmhsm shell script being run.
ANS9528W dsmscoutd: cannot read from the state
file /etc/adsm/SpaceMan/config/dmiFSGlobalState.
As when trying to access an HSM file. This situation indicates a
problem with the /etc/adsm/SpaceMan/config/dmiFSGlobalState file. Can be
fixed by recreating the file, as:
- cd /etc/adsm/SpaceMan/config
- If file dmiFSGlobalState exists, rename to some backup name, like
dmiFSGlobalState.ANS9528W
- Do 'dsmmigfs globalreactivate'
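The recovery steps above can be scripted roughly as follows (run as root on
the HSM client; the backup name is just the convention suggested above):

```shell
# Rename the possibly-corrupt HSM global state file out of the way,
# then have HSM rebuild it.
cd /etc/adsm/SpaceMan/config
if [ -f dmiFSGlobalState ]; then
    mv dmiFSGlobalState dmiFSGlobalState.ANS9528W
fi
dsmmigfs globalreactivate
```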
ANS9918E Cannot open migration candidates list for ________.
HSM file system has run out of physical space - expand the file
system. (Msg appears on console and in /dsmerror.log)
ANS9950E File: <file-spec> is not qualified for migration because the Space
Management Technique attribute is set to None.
You may lack "SPACEMGTECH=AUTOmatic" in the management class definition,
or you do but failed to activate the policy set containing it.
ANS9999E ntrc.cpp(879): Received Win32 RC 1450 (0x000005aa) from FileRead()
ANS9999E is the client equivalent of server message ANR9999E, used for
reporting debugging information where unexpected conditions occur, for
which there are no established error messages.
Seen in a Windows 2000 backup. Windows error code 1450:
ERROR_NO_SYSTEM_RESOURCES - Insufficient system resources exist to
complete the requested service. This is a Windows issue, encountered
when backing up big filesystems, or particularly large files. Windows
has a certain amount of memory pool space that it can allocate to
programs, and TSM is using the memory available from that pool such that
there is no more memory left to allocate. TSM is a victim of the Windows
architecture shortcoming. Windows 2000 and its ilk use 32-bit
addressing for memory. This only allows for 4 GB of addressable RAM,
which must be divided into various sections of virtual memory. The
kernel only has 2 GB to divide up and, in this distribution of
addresses, Windows allocates a paged-pool memory maximum size of 192
MB. (This is a good reason to avoid Windows and use a real operating
system.)
The following docs from the Microsoft Knowledge Base Articles describe
this error condition:
Q304101 - Backup Fails with Event ID 1450. [This article talks of
changing some Registry settings, which one customer reports
having resolved his backup problems.]
Q247904 - How To Configure the Paged Address Pool and System Page Table
Entry Memory Areas
Q142719 - Windows Reports Out Of Resources Error When Memory Is
Available
Q236964 - Delayed Return of Paged Pool Causes Error 1450 "Insufficient
Resources"
Q192409 - Open Files Can Cause Kernel to Report
The presence of the ANS9999E may cause the client scheduler to exit with
return code 12 (via ANS1028S).

ANU-----(TDP for Oracle)---- and ORA-nnnnn Oracle messages ---------------------


Refer to the TDP for Oracle manual. Don't overlook the dsmerror.log and
dsmsched.log files as additional sources of information.
ANU2508E Wrong write state
There's no pat answer for this problem. The message is one that is left
over from the Oracle Agent days...it should now read "Wrong state". This
message indicates that Oracle made a call to Data Protection for Oracle
that is out of sequence from their stated protocol. Because there is
nothing in the error log preceding this it is not very helpful. You will
likely have to contact TSM support for assistance in resolution, where
the effort will involve collecting traces and logs.
ANU2602E The object /mount/appl00001//c-213141136-20031104-0e was not found on
the TSM Server
Oracle 9i introduced the concept of autobackups for the control file.
During the autobackup process, Oracle dynamically generates the
backuppiece name for the control files that are being saved. During this
backup processing, a unique name is generated by Oracle prior to backing
it up and the TSM Server is then checked to ensure that this backuppiece
name does not exist. When performing this check for any existing
objects that might have this name, Oracle will first try to delete this
file regardless of whether it exists or not. The return code from the
deletion process not finding the object on the TSM Server is the
ANU2602E message. During the autobackup processing, Oracle calls the
Media Management Layer (Data Protection for Oracle/TSM Server in this
case). Oracle issues the command to attempt a deletion prior to
autobackup of the control file. Because each MML operation is a unique
and distinct session, the MML has to treat each delete the same. In
other words, Oracle gives no hints as to the type of deletion being
performed; therefore, Data Protection for Oracle just attempts the delete.
ORA-19554: error allocating device, device type: SBT_TAPE, device name:
Set the LIBPATH environment variable to include $ORACLE_HOME/lib
before /usr/lib.
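For example, in the Oracle user's shell profile (the ORACLE_HOME value here
is illustrative, not necessarily yours):

```shell
# Ensure the TDP for Oracle SBT library directory is searched before
# /usr/lib. The ORACLE_HOME path is a hypothetical example.
export ORACLE_HOME=/opt/oracle
export LIBPATH=$ORACLE_HOME/lib:/usr/lib
```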
ORA-27000: skgfqsbi: failed to initialize storage subsystem (SBT) layer
IBM AIX RISC System/6000 Error: 2512: System call error number 2512.
The 2512 is from DP for Oracle, reflecting a licensing problem.
Check that your license file (agent.lic) exists and that it can be found
by DP Oracle. If your license file is in the installation directory then
it may be a permissions problem. Otherwise, you need to set
TDPO_PSWDPATH to point to the directory of where it can find the license
file.
ORA-27000: skgfqsbi: failed to initialize storage subsystem (SBT) layer
IBM AIX RISC System/6000 Error: 2534: System call error number 2534.
The 2534 error code indicates an error in the TDP Oracle options file,
or possibly environment variables (LANG, etc.). One customer found that
their local RMAN backup script did a shell SOURCE command to absorb
variables - which caused oracle/bin to be used instead of oracle/bin64.
Double-check the environment in general, and options file(s). Review any
tdpoerror.log for indication of cause.
SBT-2175 MM/DD/YYYY hh:mm:ss send2.cpp(650): sbtbackup(): Exit - tdpoQueryObj()
failed. dsmHandle = 1, rc = 8
As seen in the sbtio.log, suggesting that the object name that Oracle is
giving to the file for that backup already exists on the TSM Server.
Usually this is a result of the backup_piece name that Oracle generates
being very long, such that the part of the name that should make it
unique gets truncated, so it looks like a duplicate name to TDP
Oracle/TSM. Refer to the Rman User's guide.

BKI-----(TDP for R/3: Backint)--------------------------------------------------


Refer to the TDP for R/3 manual. Don't overlook the dsmerror.log and
dsmsched.log files as additional sources of information.
BR266E program backint interrupted, exit status 90009,
(The status number may vary.) Address by adjusting the Unix Resource
Limits memory limit value, via ulimit and AIXTHREAD_SCOPE setting:
'ulimit -m unlimited' and 'SETENV AIXTHREAD_SCOPE S'.

BMR-----(SysBack messages)--------------------------------------------------
BMR0030E stbackup.c(805): Error from TSM API during SendData call: ANS0278S
(RC157) The transaction will be aborted.
That message doesn't tell you anything useful. If there is a dsierror.log,
check
for actual problem indications therein. Otherwise check the TSM server
Activity Log - which may show a MAXSCRatch problem.

IDS-----(TDP for Symmetrix)-----------------------------------------------------


Refer to the TDP for Symmetrix manual. Don't overlook the dsmerror.log and
dsmsched.log files as additional sources of information.

OBK-sbt, like:
(2651) OBK-sbt:<06/18/2001:15:10:22> odsmSess(): # of dsmInit retries = 1
(2650) OBK-sbt:<06/18/2001:15:10:22> sbtread(): End of file reached. oer =
7061, errno = 2505.
These are not Tivoli messages, but rather are passed back to EBU (Oracle
Enterprise Backup Utility) by the media management software. Contact your
respective Media Management Vendor for support.
BusinesSuite Module for Oracle error messages appear in the following format,
where pid is the process id and function is an internally defined function name:
(pid) OBK-sbt: <function>: <error message>
In addition, BusinesSuite Module for Oracle will write extended debugging
information in the file specified by the NSR_DEBUG_FILE environment variable.

(10613) OBK-sbt: sbtpvt_tfn: BACKUP_DIR not set.


This message is issued by the Oracle internal library - which you should
not be using if you installed properly... Have you performed the
relinking procedure? If so, did you specify the correct libobk.a? In the
$ORACLE_HOME/lib there should be a correct link to the TSM libobk.a.
See also: BACKUP_DIR

MSSQL-----(Microsoft SQL messages)-----------------------------------------


MSSQLSERVER Error (2) 17055 NT AUTHORITY\SYSTEM NAUTILUSA 18210 :
BackupVirtualDeviceSet::SetBufferParms: Request large buffers failure on backup
device 'TDPSQL-00000AF0-0000'. Operating system error -2147024888(error not
found).
Often, the problem lies with the SQL Server Virtual Device
Interface (VDI) rather than the TDP config. There may be fragmentation
of the SQL Server MemToLeave area. Use the SQL Server '-g' startup
switch to deal with that (introduced in SQL Server 7.0 SP2 - see
Microsoft knowledge base article id Q254555).
May mean not enough storage is available to process this command.
Use the 'TDPSQLC QUERY TDP' command to check the settings for:
SQLBUFFERS
SQLBUFFERSIZE
If you are seeing resource issues, try lowering these values. You can
set SQLBUFFERS to 0 to allow the SQL server to decide what value to
use.
Restore failed [Microsoft] [ODBC SQL Server Driver] [SQL Server] Could not find
database ID 65535. Database may not be activated yet or may be in transition
Did you try to perform a differential restore of the database? If so,
realize that a full restoral must be performed first.

SQL-----(DB2 messages)-----------------------------------------
SQLnnnn messages are from DB2 itself. Return Values tend to be return codes
from the TSM API. See also IBM message references like
https://aurora.vcu.edu/db2help/db2m0/frame3.htm#sql2000
SQL2025N An I/O error "_RC_" occurred on media "ADSM".
The RC values are from the API manual, Return Codes appendix.
41 means DSM_RC_ABORT_EXCEED_MAX_MP, which means that the client was
attempting to use more mountpoints for a backup or archive operation
than permitted by the server. From the Admin, run 'Query Node nodename
Format=Detailed' to determine the maximum allowed mountpoints for the
node. You may need to use UPDATE NODE to increase this value. If the
intent is for this client to back up to disk, you will need to check
other things in your configuration to understand why it is trying to go
to tape.
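The check and adjustment might look like this from an administrative client
(node name, credentials, and the mount point count are placeholders):

```shell
# Show the node's current mount point allowance (look for the
# "Maximum Mount Points Allowed" field), then raise it.
# DB2NODE and the value 2 are placeholder examples.
dsmadmc -id=admin -password=xxxxx "query node DB2NODE format=detailed"
dsmadmc -id=admin -password=xxxxx "update node DB2NODE maxnummp=2"
```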
SQL2062N An error occurred while accessing media ____. Reason code: ___.
General notes: The reason code is from *SM itself. The TSM TDPs utilize
the database's API on one side, and the TSM API on the other side, to
effect backup and restoral. Thus, you should look in the API manual for
an explanation of the reason code (API Return Code).
Note that the "media" is usually db2tadsm.dll, the DB2-to-ADSM interface
module: that is, DB2 is writing its backup data to a conveyor module
rather than a tape device.
SQL2062N An error occurred while accessing media. Reason code: "-50".
This is a TCP/IP failure of the *SM API to connect to the *SM server.
You might look in client error logs for leads; and the *SM server
Activity Log may well reveal the circumstances.
SQL2062N An error occurred while accessing media
"/home/db2pet1/sqllib/adsm/libadsm.a". Reason code: "138".
As always, determine when the backup was last run successfully, and what
changed since then. API return code 138 suggests someone diddling with
permissions, or the software being run from an inappropriate or
authority-changed account. Doing an 'ls -l', 'ls -lu', and 'ls -lc' on
the lib file is always advisable, to ascertain when the lib was last
used and, per -c, when someone changed its attributes.
SQL2062N An error occurred while accessing media
"/home/db2inst1/sqllib/adsm/libadsm.a". Reason code: "185".
May be an incorrect version of the libadsm.a library, as in the ADSMv3
client having been installed on a system where ADSMv2 had been, without
uninstalling v2 first. In AIX, do 'lslpp -l "adsm*"' to list the ADSM
program products that are installed and if you find anything at Version
2, remove it.
SQL2062N An error occurred while accessing media "C:\SQLLIB\bin\db2adsm.dll".
Reason code: "406"
The 406 indicates that the program cannot locate your API options file.
You may have DSMI_CONFIG set, but pointing at the directory in which the
file resides, rather than naming the file itself.
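A correct setting names the options file itself, as in this sketch (the
install path is a typical default, not necessarily yours):

```shell
# DSMI_CONFIG must point at the API options FILE, not its directory.
DSMI_DIR=/usr/tivoli/tsm/client/api/bin
export DSMI_CONFIG=$DSMI_DIR/dsm.opt   # right: the full file path
# Wrong (yields reason code 406):
#   export DSMI_CONFIG=$DSMI_DIR
```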
SQL2062N An error occurred while accessing media
"/home/dbadm/sqllib/adsm/libadsm.a". Reason code: "610".
Seen when that module has been deleted or moved. Replace it.

ADSM/API ERROR CODES:

106 (RC_ACCESS_DENIED).
This error code causes ADSM to skip the problem file and continue on
with the next file.
131 (RC_SYSTEM_ERROR).
This error code causes backup processing of the file system to stop.
0150 (S DSM_RC_UNKNOWN_FILE_DATA_TYPE)
Has been seen in an attempted DB2 restore where there were two copies of
the same DB2 logfile, one of them corrupt, and one of them not. The DB2
client does not offer the granularity to pick between two objects of the
same name.

DSMERROR LOG (dsmerror.log) MESSAGES:


See also the "DSIERROR LOG" section which follows. The error texts which
appear in the various error logs may reflect issues involving the TSM API,
which is at the core of the clients.

ConsoleEventHandler(): Caught Ctrl-C console event .


This is one of those annoyingly misleading, unhelpful messages which IBM
has neglectfully left as-is in its source code. The message might, on
rare occasions, actually reflect some user having done Ctrl-C at a
terminal session on the client; but it more generally means that some
operating system event occurred to cause the client process to be
interrupted (terminated). Look through your operating system error log
for indications of cause. Try to reproduce the situation by manually
invoking the operation, and watching it. Such an event is most often
seen in an Incremental Backup of a mature client with a very large
number of small files, such that the catalog of files which the server
sends to the client may be so large as to over-tax real and virtual
memory. The ultimate solution in such a case is to boost real and
virtual memory, and also look for a needless glut of files from some
careless user. Consider using MEMORYEFficientbackup.

CreateSnapshotSet(): AddToSnapshotSet() returns
hr=VSS_E_UNEXPECTED_PROVIDER_ERROR
Probably a Microsoft defect: Go to http://support.microsoft.com and
search for "833167" to find article "Time-out errors occur in Volume
Shadow Copy service writers, and shadow copies are lost during backup
and during times when there are high levels of input/output".

cuPing: Out of sequence verb: verb: 4D


Seen in a Windows Backup session, after the backup, accompanied by:
sessOpen: Session state transition error, sessState: sSignedOn.
sessOpen: Transitioning: sSignedOn state ===> sTRANSERR state
ANS1074W *** User Abort ***
ANS1029E Communications have been dropped.
My suspicion would be a data communications problem, possibly in network
hardware, but more likely in Windows having done something to the
socket as the Producer Session, now getting control after the Consumer
Session of the TSM client session has sent its data, tries to send its
results to the TSM server for ANE message logging in the server
Activity Log. Maybe there's an indication of such a problem in the
Windows Event Log?

cuPing: Out of sequence verb: verb: 61


Seen during a backup, where client-server communication had been
established. Accompanied by "sessOpen: Session state transition error,
sessState: sSignedOn." and "sessOpen: Transitioning: sSignedOn state
===> sTRANSERR state", and maybe also "ANS1312E Server media mount not
possible". Might be due to an inadequate server IDLETimeout value.

CuSignOnResp: Server rejected session; Result code:51


Accompanied by:
SessOpen: Error 51 receiving SignOnResp verb from server
ANS1351E Session Rejected: All server sessions are currently in use
The error number is the API Return Code DSM_RC_REJECT_NO_RESOURCES.
Either your "MAXSessions" server option is too low, or your server is
clogged with processes.

cuSignOnResp: Server rejected session; result code: 53


As of ADSM 3.1.2.1, whenever a Register Node is performed, a node
administrator userid is automatically defined, having the same name and
password as the node being registered. When a session is conducted,
there is an automatic secondary session involving the node admin. But
if there is no node admin (as in nodes registered before ADSMv3), the
secondary session signon fails. (This might also occur if the
associated node admin is locked, or its password is inconsistent with
that of the node itself.)
So for each client that doesn't have an admin id...
REGister Admin client_node_name client's_pswd
GRant AUTHority client_node_name CLasses=Node AUTHority=Owner -
NOde=client_node_name
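Where many pre-v3 nodes lack admin ids, that command pair can be generated into a macro for dsmadmc. A minimal sketch (node name, password, and the dsmadmc invocation shown are placeholders, not your real values):

```shell
#!/bin/sh
# Sketch: write a dsmadmc macro that creates the missing node-admin id
# for one node. NODE and PSWD are placeholders - substitute real values.
NODE=client_node_name
PSWD=clients_pswd

cat > reg.mac <<EOF
REGister Admin $NODE $PSWD
GRant AUTHority $NODE CLasses=Node AUTHority=Owner NOde=$NODE
EOF

# Would then be run as, e.g.:
#   dsmadmc -id=admin -password=xxxxx macro reg.mac
cat reg.mac
```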

file thought to be compressed was not


The product forbids backing up files with a given client software level
and then trying to restore them with a lower level client. (The product
developers could certainly do better in detecting and reporting the
cause.)

fioScanDirEntry(): Can't map object 'C:\whatever...' into local ANSI
codepage, skipping ...
The TSM 4.2.0 client for Windows NT/2000 began Unicode support and will
support the client environment where file names from various code pages
are co-mingled in a file system. This support co-req's a TSM 4.2+
server: that is, starting with 4.2, the server is Unicode sensitive, and
contains accommodations for Unicode. The message indicates that the file
in question contains characters which are not within the same code page
as the language of the operating system from which the TSM client is
being run and that you either are not running a TSM 4.2 or later server
or have not migrated the client to a Unicode-enabled file space. The
migration is documented in the Windows "Using the Backup-Archive Client"
manual (v4.2.0 or greater). Under "Considerations for migrating to the
Unicode-enabled client", note 7 describes the skipping.
You might ignore all this and simply delete all the files you find with
odd names; but it's better to find what's causing them...specifically,
to determine what is allowing them to occur. It may be that the files
arrived via an FTP from a system where they naturally had odd characters
in their names; but nevertheless, the Windows system is allowing the
creation of files with the same odd characters, so there are
circumstances to be aware of.

fsCheckAdd: received error from server query response


This is usually a TCP/IP communication issue, reflecting the inability
to contact the server. The cause may be anything from a loose ethernet
cable to TCP/IP buffer size issues. Poke around.

fsCheckAdd: received more than one response


The fsCheckAdd module on the client queried the TSM server for info on
the file system it is processing, and received info on more than one
filespace. Perform a Query Filespace, which will probably show more
than one occurrence of the filespace. Might have been caused by
something like replacement of the IDE disk which held the file system.

fsCheckAdd: unable to stat local filespace


The fsCheckAdd client module is trying to get information on the client
file system, as via a stat() or statfs() call, but that is failing for
some reason. (This msg itself does not include the errno involved: it
may appear on an accompanying message.)
This may be a permissions issue, or in some instances the file system
itself may not be responding (in Unix, try 'df' on the file system
during problem time).

fsCheckAdd: unable to update filespace on server
See description of message ANS1228E.

GatherWriterStatus(): GetWriterStatus() returns VSS_WS_FAILED_AT_FREEZE
failure for writer 'WMI Writer'. Writer error code: [0x80042319]
The VSS_WS_FAILED_AT_FREEZE is an indication that the WMI service is in
such a state that it vetoed the request by TSM to create a shadow copy
from which system state backup could be obtained. I would check the
status of the "Windows Management Instrumentation" service in the
services control panel to ensure that it is running (and that its
startup type is set to its default, Automatic). I would also examine
the Windows event
log to see if there are any WMI-related messages there.
The 'vssadmin list writers' command can give you an indication of the
status of VSS on your system.

GetBackupStreamSize(): CreateFile(): Win32 RC = 32.


This is ERROR_SHARING_VIOLATION, reflecting files that can not be opened
by the TSM client because another application has already opened them
for exclusive use (such as NTUSER.DAT files or *.pst Exchange mailbox
files). This can be considered a part of normal TSM processing: you can
safely ignore these messages.
These messages are really intended for use by development for diagnostic
purposes, and are not intended for end-user consumption. However, in
practice they aren't particularly informative (even to developers), and
they only cause confusion. Over time they will eventually be eliminated,
unless absolutely necessary to capture a run-time problem.
Also, these messages may be generated as a result of an excluded file
(TSM still examines the file, even if it is excluded), or else you would
get an error message indicating that backup for a file had failed. These
"junk" error messages are removed per APAR IC27750, which was fixed in
the 4.1.2 PTF. Using the 4.1.3 client you should no longer see such
messages.

mpDestroy: Memory Pool #23 doesn't exist.


The TSM client is attempting to dispose of a memory pool which it
previously established via mpCreate, but the pool apparently doesn't
exist. This may be a memory management problem, perhaps with the client
programming, perhaps an OS defect.

NpOpen : Named pipe error connecting to server
WaitOnPipe failed.
NpOpen: call failed with return code:121 pipe name \\.\pipe\jnl
This error occurs when running journaling in the Windows client, where
the journal daemon process attempts to send a response on a named pipe
provided by a backup client process and the pipe no longer exists or
isn't valid. This can happen if the backup process/session ends or
closes the pipe before the journal daemon sends or is finished sending
the response or in some cases when the journal daemon is shutting down
and cleaning up resources. By itself the error is innocuous.
(The "Np" module prefix stands for "Named pipe".)
Another cause: a backup session attempting to connect to the journal
daemon while another journal-based backup session is in progress. This
can happen if multiple backup client processes attempt to perform a
journal-based backup at the same time, or if the ResourceUtilization
option setting is higher than 2 and produces multiple backup sessions.
The client may wait only about 2 minutes for a connection to the
journal daemon to become free, and will then time out. A JNLINBNPTIMEOUT
testflag was implemented in the 5.1.6.2 level fixtest to allow a client
to specify how long to wait for a connection to the journal daemon to
become free (that is, for the currently running jbb session to finish).
You might also consider reducing the ResourceUtilization setting to 2
or less.

NpPeek: No data.
This error occurs when running journaling in the client, where the
backup client is trying to read a response sent from the journal daemon,
which isn't available at the moment the read is being done. This error
can happen if the journal daemon ends (obviously a problem) or possibly
if the response the backup client is looking for from the journal daemon
is still in progress, meaning that the journal daemon hasn't finished
processing/sending it. In most cases the response is ready when the
backup client goes to read it, but if it isn't the backup client will
keep trying to read the response until it either arrives or a timeout
occurs. Customers with old (4.3) clients seriously need to upgrade.
APAR IC36144 (5.2.0.1 on Windows 2000) changed the msg to the one below.

psNpPeek(): Timed out waiting for 4 bytes to arrive on pipe.


May be a problem with the journal daemon (not running, stuck). The
default Named Pipe timeout for Journal based backups is 60 seconds,
which may be too low for the given backup: boost by adding the following
to your dsm.opt file: testflag jnlinbnptimeout:600 [value is in secs]
All these timeout problems are reported to be addressed in 5.2.

Pattern compilation failed mxCompile rc=149


The message originated in the part of the TSM client which parses your
include/exclude list. This problem could be coming from a missing
directory delimiter in an include/exclude pattern, as before/after the
"..." match-directories string.

PrivIncrFileSpace: Received rc=106 from fioGetDirEntries: /SomeFilename


Probably a permissions issue, as in Unix when performing a backup of a
system directory but not being superuser. In Windows, you can often
verify by logging on to that client as Administrator and drilling down
to the problem area, getting "access denied".

PrivIncrFileSpace: Received rc=131 from fioGetDirEntries: /dsk/b3
/quad/96.1/qinstall
rc=131 means SYSTEM_ERROR, that ADSM has detected that the system
delivered an (errno) error code. This is a perfunctory message. See
previous messages which describe the inciting event.

processSysvol(): NtFrsApiGetBackupRestoreSets(): RC = 2
processSysvol(): NtFrsApiDestroyBackupRestore(): RC = 0
Windows TSM 5.1 innocuous messages indicating that File Replication
Services is not present/active on your system. The extraneous messages
can be ignored; they will be eliminated in 5.2

ReadPswdFromRegistry(): getRegValue(): Win32 RC=2 - or -
ReadPswdFromRegistry(): RegOpenPathEx(): Win32 RC=2
Occurs when TSM attempts to read the generated password from the Windows
Registry, and the password is not present.
Most commonly caused by having done SET SERVERNAME command to change the
TSM server name where clients are using PASSWORDAccess GENERATE in
clients. The name of the TSM server is part of the path in the registry
to the stored password the client uses to authenticate. When the
servername changes, the client looks in a new path in the registry for
the password key and is not able to find one. All Windows clients will
fail their backups if on a schedule and passwordaccess generate is used
until there is manual intervention on each client to reset the password
after the servername change.

sessSendVerb: Error sending Verb, rc: -50


The TSM client has tried to send some data to the TSM server and the
TCP/IP socket was closed or gone for a reason typically revealed in the
server Activity Log (expect a timeout).

sessRecvVerb(): Invalid verb received.


May be accompanied by
sessRecvVerb(): length=0000, verb=00,magic=04
ANS1026E Session rejected: Communications protocol error
Seen with a Z/OS server. Possibly caused by network access problems, or
maybe TCP/IP settings (TCPWindowsize et al?) are inconsistent between
client and server, causing packet mangling.

TcpOpen(): setsockopt(SO_SNDBUF): errno = __ [like 55]
TcpOpen(): setsockopt(SO_RCVBUF): errno = __ [like 55]
These operations set the send and receive buffer sizes.
If your ADSM client buffer sizes (e.g. TCPWindowsize) are larger than
the max for your operating system config, you need to bring them into
compatibility. Look at your /usr/include/errno.h to see what errno
indicates for your operating system.

TcpFlush: Error 32 sending data on Tcp/Ip socket 5.


sessRecvVerb: Error -50 from call to 'readRtn'.
The 32 is the Unix errno EPIPE, indicating a broken pipe, in which the
other end of the session terminated the communication. TSM is a
client-server facility: if on the client end you get no indication as to
why the session was terminated, refer to the server Activity Log for the
reason.

TcpOpen(): Warning. The TCP window size defined to ADSM is not supported by
your system. It will be to set default size - 33232
Usually, your client options file specifies a TCPWindowsize larger than
your operating system supports (see: TCPWindowsize client option).
Seen on Solaris: The session quits. Attempting to define TCPWindowsize
in dsm.sys results in:
ANS1036S Invalid option 'TCPWINDOWSIZE' found in options file
'/opt/IBMadsm-c/dsm.sys'
Was caused by a mismatch in duplex between the client and the 100Mb
ethernet switch.

TcpRead(): recv(): errno = ...


Just a note on the structure of this:
"TcpRead()" is the private name of the C function created by the
vendor. (Unix C functions are always all lower-case names.)
"recv()" is the name of the system library function which the vendor's
TcpRead function software is calling.
"errno" is the Unix error number, as in /usr/include/sys/errno.h .

TcpRead(): recv(): errno = 73


Usually accompanied by "sessRecvVerb: Error -50 from call to 'readRtn'."
The 73 is the Unix errno for ECONNRESET - Connection reset by peer:
the peer is obviously the TSM server, which terminated the TCP session
and connection. Refer to the server Activity Log for a message
indicating the cause.

TcpRead(): recv(): errno = 10054
Usually accompanied by "sessRecvVerb: Error -50 from call to 'readRtn'."
Errno numbers 100xx and up are Windows Sockets (Winsock) error codes.
10054 is "Connection reset by peer": An existing connection was
forcibly closed by the remote host. This normally results if the peer
application on the remote host is suddenly stopped, the host is
rebooted, the host or remote network interface is disabled, or the
remote host uses a hard close. This error may also result if a
connection was broken due to keep-alive activity detecting a failure
while one or more operations are in progress.
See message ANS1809E explanation herein for more info.

TcpRead(): recv(): errno = 10058


Usually accompanied by "sessRecvVerb: Error -50 from call to 'readRtn'."
The 10058 is: Cannot send after socket shutdown. A request to send or
receive data was disallowed because the socket had already been shut
down in that direction with a previous shutdown call. By calling
shutdown a partial close of a socket is requested, which is a signal
that sending or receiving or both has been discontinued.

The 103068111th code was found to be out of sequence. The code (3432) was
greater than (2259), the next available slot in the string table.
May be a defect in TDP. Another customer who had this problem with TSM
also found it to prevail for FTP, rcp, and other communication functions
- which resolved to a driver defect for the gigabit ethernet cards
10/100/1000 Base-TX PCI-X Adapter (14106902) under AIX, for which PTFs
are available. (As a stop-gap, you can set adapter attributes
chksum_offload and large_send to No.)

TransErrno: Unexpected error from open, errno = 22


The Restore is probably trying to restore something that the client
should not have backed up in the first place, in that the operating
system may not support its recreation in a restoral. This might happen
if the client code is schizophrenic, or the file system was backed up
with an older version of the client which thought it could handle
certain objects, but the newer client doing the restore just can't.
Early v3 code thought it could back up and restore sockets, for example,
but later that claim was withdrawn.

"unrecognized symbols for current locale, skipping..."


Seen with the Unix B/A client when that system is functioning as a Samba
file server for PC systems and the PC users are depositing files with
strange characters on that file server, as for example a Linux file
server suddenly having Hebrew file names. SAMBA 2.x simply writes the
filenames to the Unix disk just as it received them from the network, in
the Windows machine's local codepage.

VssQuerySystemWriters(): pAsync->QueryStatus() returns
hr=VSS_E_WRITER_INFRASTRUCTURE
The VSS_E_WRITER_INFRASTRUCTURE is a possible indication that the
Windows 2003 Volume Shadow Copy Service is not running or is in an
error state.
The 'vssadmin list writers' command can give you an indication of the
status of VSS on your system.

win32NpWrite(): Error 233 writing to named pipe


Indicates that the backup client has encountered an error sending a
journal notification to the journal service. Most likely, the error
occurred while the client was performing the initial non-journal based
backup of a drive with an active but not yet valid journal: the journal
service probably encountered an error reading a notification from the
backup client and severed the connection with the client (check
jbberror.log for a return code of 998), and any subsequent notifications
sent to the journal service by the client would result in broken
connection errors (what rc 233 indicates). Since the client can't send
any more notifications the final "Incremental Complete" notification is
never sent to the journal service and the journal for the drive never
set to the valid state, which means that the next backup of the drive
won't be journal based. See APAR IC40627.

DSIERROR LOG (dsierror.log - the API error log) MESSAGES:


Note that API messages are sometimes intended more for application
developers (TDP products or 3rd party products using the API), to whom they
make sense...rather than making sense to B/A client customers.

TcpFlush: Error 10054 sending data on Tcp/Ip socket NN.


See possible explanations under "ANS4017E" - could be a COMMTimeout
value problem.
See also: ANS1005E

DSMSCHED.LOG ERROR MESSAGES:

Scheduler has been stopped.


As in: "04/23/2003 06:38:50 Scheduler has been stopped."
See the server Activity Log. Has been seen in circumstances:
- Often, the scheduler or dsmcad was not running to accept contact from
TSM server.
- Can occur where the client is (no longer) registered on the server.
- Msg ANR0490I Canceling session NNNN for node ______
which is to say that the dsmc schedule process has reacted rather
badly by disappearing instead of resetting.
- An I/O error occurred on the HSM input volume for a backup, where the
server is internally getting data from the HSM tape volume to write
to the backup storage pool.
- Uncommon, server problem, like: ANR0530W Transaction failed for
session 32 for node SRV5 (AIX) - internal server error detected.

Server prompted scheduling not supported under your communication method.


Polling method will be used if server currently supports it.
These two messages probably indicate that your dsm.sys specifies
"COMMMethod SHAREdmem" but you also have "PASSWORDAccess Prompted",
which is incompatible. Shared Memory access requires
"PASSWORDAccess Generate".

Unknown system error


As seen in dsmsched.log, without a message number or error number. See
dsmerror.log, which may report a message number and error number.
Usually occurs because the client program was not compiled under the
level of the operating system that the customer is using, such that the
client code is unaware of recently-introduced error numbers; or it may
be that the client was not programmed to observe the full range of error
numbers. (In Unix, a C program can know the highest errno via the
sys_nerr global variable.) A later client level may help.
Some possible error causes:
- The client doesn't know how to respond to the file system type, that
it does not recognize.
- A file lock may be in effect at the time of the TSM operation - which
can explain the appearance of this error only sometimes.

JBBERROR.LOG MESSAGES:

jnlDbCntrl(): Error updating the journal for fs 'E:', dbUpdEntry() rc = -1,
last error = 27
The journal service was shut down because an I/O error occurred while
updating a journal entry. The precise meaning of return code 27 is that
a seek was attempted to an offset greater than 2 GB in the journal db
file, which essentially means that the journal db grew too large (the
supported maximum of the journal db manager is 2 GB).

jnlDbCntrl(): Restarting journal for fs 'T:' per client request.


The backup client has requested the journal daemon to reset (invalidate)
a journal because it has determined that the journal "lacks integrity"
for one of several reasons. Refer to the TSM 5.3 Client Problem
Determination Guide for reasons this may happen.

psFsMonitorThread(tid 7148): Notification buffer overrun for monitored FS
'F:\'. 06/17/2003 04:00:01 psFsMonitorThread(tid 7148): Reallocating
0x00399999 byte notification buffer.
Reported by a customer as a problem with the journaling engine being
unable to keep up with the rapidity of changes. The customer, running
a circa 5.1 client, worked with IBM but could not resolve the problem,
and removed Journaling.

ERRNO SIGNIFICANCE:

74 AIX:
ENOBUFS: No buffer space available. Usually happens when you've
specified a TCPWINDOWSIZE setting that is larger than your operating
system TCP/IP configuration is set up to handle:
- In AIX, you need to check the sb_max value (on AIX use the command
'no -a' to determine the current sb_max). sb_max is expressed in
bytes, so if you divide by 1,024, that will tell you the maximum
setting you can use for TCPWINDOWSIZE. For example, if sb_max is
65,536, then the maximum TCPWINDOWSIZE value you can use is 64.
- In HP-UX, the limit is the kernel parameter STRMSGSZ, which is
expressed in KB.
Try lowering TCPWINDOWSIZE so that it is less than or equal to sb_max,
and the messages should go away. Alternatively you can increase
sb_max. IMPORTANT NOTE: sb_max is a system-wide TCP/IP setting. You
should be familiar with tuning TCP/IP (or get help from someone who
knows how to tune TCP/IP) before changing sb_max or any other
system-wide TCP/IP settings.

132 Solaris:
Same as AIX 74.
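The sb_max arithmetic above can be sketched in shell. On AIX the real value would come from 'no -a'; a hypothetical sb_max is hard-coded here for illustration:

```shell
#!/bin/sh
# Hypothetical sb_max (bytes), as 'no -a' might report it on AIX.
sb_max=65536

# TCPWindowsize is expressed in KB, so the largest usable setting is
# sb_max divided by 1,024.
max_tcpwindowsize=$((sb_max / 1024))

echo "largest usable TCPWindowsize: $max_tcpwindowsize"
```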

GENERAL SITUATIONS:

Segmentation Fault
This is a software module failure resulting from a programming defect.
It many times manifests itself where virtual memory is constrained: the
programming assumes much, and does not account for boundary conditions
(*SM "hits its head on the ceiling"). In Unix, boosting your Resource
Limits values can circumvent the problem. If encountered in the latest
level of a given piece of software, report it to the vendor. If you can
identify the event or process whose initiation seems to cause the
failure, relaying that information to the vendor will facilitate getting
the problem corrected; and knowing what incites it may make it possible
for you to avoid the failure.

CLIENT SITUATIONS:

Client schedule stays "Pending" for some minutes before it becomes "Started"
Schedules involve a "startup window". They do not necessarily start at
the leading edge of that window. See "Randomizing Schedule Start Times"
in the Admin Guide.
This may also be an effect of the PRESchedule task running, per your
client options file.

Client schedule fails, on any platform:


POSTSchedulecmd or PRESchedulecmd may be coded with a blank or null
value.

Client schedule fails to act on a PC:


Has been seen with CPU power-saver mode active: the PC is dormant, and
won't do anything until the keyboard or mouse are used.

Client sessions not going away after nightly backups


You arrive in the morning and find numerous overnight backup clients
lingering, two sessions each. The 2 sessions would be control and data.
They should go away once they do their thing, if they are
simply-scheduled 'dsmc i' backups. Sessions not going away is a
problem, to be pursued. Do 'q sess f=d' in the server to see what their
state is, as if waiting on some server resource. If they are idle, just
sitting around for no good reason, try to isolate client type via 'q no
f=d' and what scheduling they are using. Look also in the Activity Log
for ANE messages reflecting end of session stats: the clients should
reach that point and then go away. It's possible that the client
options file specifies a Domain which happens to contain a problem file
system/Windows volume which, in its turn, might be causing sessions to
get stuck at that point. Try to get your hands on some client logs as
well, to see how far they are getting.

MSSQL backups no longer work, after TSM Backup/Archive client upgrade:


May be a password problem, resolved via 'dsmcutil updatepw'.

Scheduler / scheduler service is "Starting" - won't go further:


Check to see if you can access the server at all from that client node.
It could be that its options file has the wrong server network address.

Scheduler stops:
You should see indications of the problem in the client dsmerror.log,
and perhaps the server Activity Log. One cause is TCP Read Buffer
errors: in AIX, for example, doing 'netstat -v' may show a non-zero
value for "No Receive Pool Buffer Errors:". Proper operating system
administration will notice such issues and adjust the configuration,
in this case the "Receive Pool Buffer Size" (where a value of 2048 is
typically good).

/var (or other sub-root file system) skipped in scheduler-run backup


You expect the file system to be backed up per "DOMain ALL-LOCAL", but
it's not happening.
First, check that /var is in fact a separate file system in that Unix
host - not a subdirectory of /. And, of course, that the user doing the
/var backup is root. And that "Incremental backup of volume '/var'"
truly does not show up in the backup log.
Then check that 'dsmc show opt' does show /var in the Domain, that
'dsmc show inclexcl' does not show client or server excluding it, and
the scheduler start time is after the last options change. (Remember
that the scheduler lives with the specifications that were in effect
when it was started.) Remember that LOFS file systems and LOFS through
automounter do not participate in ALL-LOCAL. Check dsmerror.log for any
indications.

SERVER SITUATIONS:

CPU utilization high on server


Has been seen with the web client accessing the server, using Microsoft
Internet Explorer. Going to a higher level of IE resolved the problem.

Empty tapes not returning to scratch


Seen where MOVe MEDia was done to get the volume(s) out of the library;
but after all the contents on the volumes expire, and the volumes show
as empty in Query Volume, the volumes still do not return to scratch
status (REUsedelay not a factor). This is because they are still under
the control of MOVe MEDia: if you use that command to move volumes out,
you need to use it to fully reverse its effects.
Do 'MOVe MEDia WHERESTATUs=EMPty' to undo.

Files do not span volumes, as you expect


TSM is well known to span files from one volume to the next, which
nicely utilizes tape volumes to their fullest. But you see that it is
not doing that: a next backup will start fresh on a new volume. Two
obvious things to check are (1) that the volume is read/write; and (2)
that collocation policies in effect for the stgpool allow the new data
to mingle on the same volume with the old - the new data may be for a
different node or filespace.

Help command output awry


Seen after an upgrade. Can result from an incomplete upgrade. For
example, in AIX, doing it in SMIT via Latest Available Maintenance
rather than All Available Maintenance. Not using the latter will result
in the fileset containing the new help files not being installed.

LanFree path failures


Customers run afoul of LanFree tape drive connection problems, which can
be caused by things such as:
- The pre-existence of an internal tape drive of some kind (e.g., 8mm)
at the host rmt0 position.
- The LanFree system's low and high numbered HBAs are connected to
different SANs in the opposite order to the TSM server.

Server crash
Look for the file dsmserv.err in the server directory. Sometimes when
the server crashes it puts useful info in there.

Shrinking (dwindling) number of available scratch tapes ("tape leak"):


There can be many reasons for tapes being unavailable for re-use...
Your retention policies are usually the major reason, as tapes remain
committed to long-term data. New installations often express alarm at
how many tapes are being consumed and none given back. This is just the
manifestation of the retention level building towards its plateau, after
which Expiration will start yielding tape space.
Where ordinary client actions are employed (Backup, Archive), you need
to perform regular Expirations and tape Reclamations to get back
nearly-empty tapes.
Check the REUsedelay of your serial media storage pools. (Such volumes
would be shown in 'Query Volume STatus=PENding'.)
Where special clients are employed (Connect Agents, TDP, API), make sure
that those folks are regularly doing Deletes; otherwise, that stuff
remains in TSM indefinitely, as it does not participate in ordinary
expiration.
If someone is doing 'DEFine Volume' to dedicate tapes to storage pools,
they won't be Scratches.
If using HSM, make sure that dsmreconcile is being run regularly.
Make sure your TSM DBBackups are accompanied by regular executions of
'DELete VOLHistory ... Type=DBBackup': tapes used for DBBackup are not
freed until this deletion is done. (Similarly for Export tapes.) If you
happen to use volumes of devtype FILE, this will delete the files.
As an administrator, you need to perform the following regularly:
'Query Volume ACCess=UNAVailable,DESTroyed'
'Query Volume ACCess=READOnly STATus=FIlling'
'Query Volume STatus=PENding Format=Detailed'
to find tapes which TSM has given up on, as per messages like ANR1411W.
Do 'Query Volume STGpool=<non-collocated storage pool names>
STATus=FIlling' and look for occurrences of more than one volume whose
state is Filling: concurrent circumstances may cause TSM to start
writing multiple tapes (such as a Move Data output candidate volume
dismounting such that *SM uses a fresh volume instead), but thereafter
it may let one of them languish: you can reclaim the orphan with Move
Data and gain an extra tape.
Also do: 'SELECT * FROM LIBVOLUMES WHERE VOLUME_NAME NOT IN (SELECT
VOLUME_NAME FROM VOLUMES)' and investigate non-DbBackup Private volumes
in your Activity Log and Volume History, and suspiciously old DbBackup
volumes.
See also: Pending; READOnly
(Regular monitoring of your Activity Log for abnormalities will help
avoid such surprises.)
Also do:
'Query Volume STATus=FIlling Format=Detailed'
and look for filling volumes which TSM has "forgotten about": hasn't
written to in a long time, though via Query FIlespace and other means
you know that client activity has been writing to other tapes, which are
in Filling state. (This tends to happen when TSM would be inclined to
write to a given tape, but the tape is busy in another process or
session, or is dismounting: TSM reverts to a scratch tape, but may never
resume use of the orphan, which just languishes.)
In the case of questionable tapes, use 'Query CONtent Count=1' to
determine what node has been using the tape, where collocation is
active, supplemented by 'SHow VOLUMEUSAGE'.
It is also the case that the TSM administrator has to "chase people" to
dispose of old filespaces which haven't seen an incremental backup in
ages and have apparently been abandoned by their client host creators.
Unless dealt with, those will hog valuable storage pool and database
space indefinitely.

Tape drives (all) offline


Usually occurs when the vendor hardware engineer comes in and takes your
library offline, and/or reboots it.
Vaulted tape, past expiration, no data on it, but it won't delete from TSM
The simplest method: 'DELete Volume VolName DISCARDdata=Yes'. (See also
"Tape recovery procedure" entry.)
Or: Update the volume's access to READONLY instead of OFFSITE (even if
you haven't brought the tape back yet), then do 'AUDit Volume ______
Fix=Yes'. If there really is no data on it, the audit won't call for a
mount: it will just give you a message about fixing inconsistent data,
and the tape should then go to EMPTY status. You may need to change the
access from READONLY back to OFFSITE, depending on what vaulting
software you are using to check for tapes to return.

RETURN CODES, WINDOWS (see the Microsoft references for full list):

1450 Means: Insufficient system resources exist to complete the requested


service. Could it be that the partition you use for paging/swapping
is running out of space?

AIX MESSAGES:

Cannot restore -- due to write access denied


You are trying to restore over a file that is in use, as in restoring
library files (/usr/lib/XXXX.a). Such libraries are memory mapped, since
the file on disk acts as paging space for the loaded code (text, in
object-file language) that is being used by at least one, if not many,
running processes. The OS will not allow anything (even ADSM with root
privileges) to overwrite those files. The only way to restore them is
to restore them when they are not in use. That is what a mksysb does.
It boots off tape or network, creates a filesystem in memory, restores
enough files to run the restore from that RAM filesystem, then creates
and restores the OS filesystems on disk.

Could not load program dsm:


Could not load module /usr/dt/lib/libDtSvc.a(shr.o).
Dependent module libtt.a(shr.o) could not be loaded.
Could not load module libtt.a(shr.o).
Error was: No such file or directory
Occurs when invoking the dsm command - the GUI - which needs some of the
Common Desktop Environment installed on the AIX system.

Method error (/etc/methods/cfgtsmdd):


0514-051 Device to be configured does not match the physical
device at the specified connection location.
cfgtsmdd[valid]: peripheral device type is unknown
cfgtsmdd[valid]: the device is NOT supported.
cfgtsmdd[inq..]: free dds.
cfgtsmdd[main ]: error inquiry or building dds, rc=51
Seen when trying to use SMIT to configure an LTO tape drive.
May be that you have not taken the preliminary step of installing the
appropriate device driver (Atape, for LTO, 3590).

<Some file system> unreadable


When access is attempted by root, though non-root users can get at the files.
This turns out to be an NFS permissions problem, relating to the
specifics of the server /etc/exports file versus the communications
path actually used when the file system is mounted. Specifically, if
you perform the mount as "mount Sysname:Fsname MountPoint", the
path the communication actually takes is not apparent, and you can end
up with the "unreadable" situation. But if you specify the
subnet-qualified name in the mount, a la
"mount Sysname-Subnet:Fsname MountPoint", which matches a specific
subnet in the root authorization of the server's exports file, then the
client root will be able to access the files.
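As an illustration, an AIX /etc/exports entry of the following shape (file system and host names hypothetical) grants root access only to the subnet-qualified client name, which is why the mount must use that same name:

/export/home  -root=clienta-sub1,access=clienta-sub1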

SOLARIS MESSAGES:

the mt drive was successfully added to system but failed to attach.


You're trying to use a 32 bit driver under a 64 bit OS. Before 3.7.3
there were no 64 bit drivers. Try upgrading your server to 3.7.3 or
above and it should work okay.

SOLARIS SITUATIONS:

Segmentation Fault in 3.7


The 3.7 Solaris client (at least, GUI) is reported to experience a
Segmentation Fault failure due to a problem in the encrypted password
file. Removing the problem file from the /etc/adsm/ directory (or, the
whole directory) will eliminate the SegFault. (Naturally, you have to
perform a root client-server operation like 'dsmc q sch' to cause the
password file to be re-established.)

EMACS MESSAGES:

IO error reading <some filename>: Device not ready


Experienced with Emacs, trying to View an HSM file, this message appears
in the Emacs minibuffer (bottom of window) after a second or so. The
file has been migrated, but Emacs does not wait for it to be recalled.
However, by virtue of going after the file, HSM *is* recalling it. Retry
the operation in roughly 30 seconds and you should then be able to see
the recalled file. If the message persists, it can be due to client
sessions having been DISAble'd.

EXCHANGE MESSAGES:

An unknown Exchange API error has occurred.


This occurs when the Exchange Agent has called an Exchange API to get
more data from the Exchange Server and the Exchange Server has returned
an error code that is not documented.

Backup fails, RC=419


This is an "internal error". If the DP for Exchange log does not provide
any further information, try:
1. Retry the operation that failed.
2. If the problem occurred during an incremental, differential, or
database copy backup, run a full backup. If the full backup completes
successfully, retry the operation that failed.
3. If the problem still exists, close other applications, especially
those applications that interact with Exchange (anti-virus
applications, for example). Retry the operation that failed.
4. If the problem still exists:
a. Shut down the Exchange server.
b. Start the Exchange server again.
c. Retry the operation that failed.
5. If the problem still exists:
a. Shut down the entire machine.
b. Start the machine again.
c. Retry the operation that failed.
6. If the problem still exists, determine if it is occurring on other
Exchange servers, and then call TSM support.

Error 61 initializing ADSM API session


Error "61" means your NODE is locked out from the ADSM server side.
Check on the ADSM Server to find out why this might be.
After having the ADSM Administrator "unlock" the node, try your
operation again. After it is unlocked, you can also use the ADSM
Exchange Agent GUI to connect to the server and change or update an
expired or wrong password.

JAVA MESSAGES (Java console messages, as in use of web interface)

java.lang.ClassFormatError: WebConsole (Bad magic number)


Seen after upgrading Internet Explorer 6.x, in trying to use the command
line interface on the server web interface. Changing the Java V1.4 level
doesn't help. APAR IC34256 recommends turning off Java applet processing
in the browser...
"... Go to Tools --->Internet options ---> advanced tab ---> find Java
(sun) and deselect the "Use java 2 v1.4.0_01 <applet>" check box. The
checksum check on classes is something new with Java versions 1.3
and 1.4. During the installation of Java, the APPLET tag is modified
to work like an OBJECT tag. When the browser encounters an APPLET the
installed Java Virtual Machine is invoked. To correct this problem
uncheck the Java(Sun) option Use Java 2 vXXXX for <applet> under the
Advance tab in Internet Options of Internet Explorer."
IC36044 further explains:
"Installing the latest version of Java won't work either because of a
protocol issue. The web engine is based on an old version protocol."

3494 OPERATOR STATION MESSAGES:

A cartridge could not be released from Gripper 1 , Accessor A


One customer experienced this with cartridges at the far end of the last
frame of the 3494 - a suspicious location. It was found to be caused by
the robot crash bumper at the end of the track being incorrectly
positioned such that the robot could not position to the far end to
correctly align with the cells.

Query operation Error - Library is Offline to Host.


Seen in response to something like 'mtlib -l /dev/lmcp0 -qL'. May mean
that there is no access because the 3494 is itself offline. Go to its
operator station and set the Mode to Online, if found that way.
Another cause is that the 3494 lives in a subnet which is not routed,
meaning that only systems within the subnet can communicate with each
other: there is no access from outside.
For utmost assurance, check that the lmcpd is running, that your
/etc/ibmatl.conf is correct, and, if a LAN connection, that within the
3494 Library Manager you have authorized your host to access it.

MTLIB MESSAGES:
Demount operation Failed, ERPA code - 68, Library Order Sequence Check.
You requested a dismount for a drive upon which no tape is currently
mounted.

Mount operation Failed, ERPA code - 68, Library Order Sequence Check.
Means that it can't mount the tape you requested because it is already
mounted on that drive.

Mtlib: Unable to open device special file /dev/rmt1: Resource temporarily


unavailable.
This reflects the special file being "busy" within AIX itself, which
can be verified by attempting an 'rmdev -l rmt1', which in this context
will say "Device busy". We have seen this situation when something was
done to the drive outside ADSM (like a failed microcode download), which
then causes ADSM to get hung up trying to use the drive, which then is
unavailable to all other processes because of the ADSM status. The best
way out of this is to perform a drive reset and halt/restart ADSM.

Volume present in Library, but Inaccessible


For a 3494 library, this message derives from a Volume attributes value
of 80 for the involved tape (see definitions in
/usr/include/sys/mtlibio.h header).
Typically, the tape volume is stuck in a drive - failed to unload.
Or could be that a Mount request wasn't satisfied and timed out.
The 3494 may be in an Intervention Required state, but maybe not.
(Sites with a 3494 should really have some kind of monitor running
which watches for Int Req conditions so that remediation can occur
with minimal service outages.) The tape may even be outside the library.

WINDOWS MESSAGES (including in Event Log):

Could not start the ADSM Scheduler service on \\xxxxxx


Error 0193: %1 is not a valid Windows NT application.
At the client, open the registry-editor and find the ADSM keys. In those
keys there's a path to the dsmsrvc.exe. This would look like
"c:\program files\ibm\adsm\baclient\dsmsrvc.exe". The path will not be
found because of the space appearing in the directory name. This is a
setting somewhere else in the registry, regarding long filename support.
Just change "program files" to "progra~1".

Error : 4099 - Unable to journal object 'D:\.......' : Access Error 32


The 32 indicates that in a Journal-based Backup, the journal service is
unable to open the drive for monitoring because it is locked by another
process (rc 32 is a sharing violation). The drive is opened for "list
access" which is about the lowest level access that can be requested.
You'll need to investigate to uncover the contention.

GetFileShareInfo(): connectRegistry() for machine ____ failed, rc=53.


File share information cannot be obtained. Error is ignored.
The 53 is Windows error ERROR_BAD_NETPATH: The network path was not
found. The common cause is "stale Shares" on the client... When a
directory is shared and that directory is deleted from outside of that
client, the Share isn't removed from the registry. To illustrate: You
are using client A, with a directory called TEST mounted as a
Share. When you connect to the root of A from computer B and delete A's
directory TEST, the share isn't removed on A: "stale Share". You will
have to look into the registry to find which shares there are and on
which directory they are/were mapped. In Windows NT the shares are in
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\LanmanServer\Shares].
The Share can be removed by removing the key from the registry or by
recreating the missing directory. You will however have to reboot before
the directory gets shared again.

GetHostnameOrNumber(): gethostbyname(): errno = 11001.


TcpOpen: Could not resolve host name.
sessOpen: Failure in communications open call. rc: -53
The 11001 is a WinSock error number meaning "no such host is known".
The -53 is the TSM API DSM_RC_BAD_HOST_NAME, meaning that an invalid
TCP/IP host name or address was specified.
The situation indicates that your computer is not getting DNS service
in order to do lookups on TCP/IP network names to get IP addresses.

Install time message:


The wizard was interrupted before Tivoli could be completely installed.
your system has not been modified. To complete installation at another
time...
This may be due to data readability issues: if you are installing from a
CD, make sure that it is in good condition (not scratched, dirty, etc.);
or if downloaded from the IBM site, make sure that the transfer was in
BINARY mode, as specified in the README.FTP file, and after download,
make sure that the size is equal to that on the FTP site.
Beyond that, seek to eliminate the cause of interruptions which could
stop an install: disable any anti-virus or other ancillary software
which may be interfering with the install.
It may also be that the installer wants to use 8.3 name creation for
NTFS and that is disabled in your Windows system (APAR IC29085).
General note: Various vendors (Adobe, etc.) employ the technology of the
Microsoft Windows Installer (MSI), and it returns an error during
installation if the Windows Installer is corrupt or outdated. If
downlevel, get the latest Windows Installer from one of the following:
www.microsoft.com/downloads/release.asp?releaseid=32832&newlist=1
http://download.microsoft.com/download/WindowsInstaller/Install/2.0/
NT45/EN-US/InstMsiW.exe
(To see the level of your MSI, enter "msiexec" in a DOS prompt.)
I have seen mention that if the Windows Installer is not present on your
machine, or is downlevel to that which is employed by the TSM Client, it
will automatically be installed or upgraded and then require a reboot of
your machine. After reboot, the installation of the TSM software
continues unabated from where it was interrupted. That plan, however,
could be thwarted by a firewall at your site.

"Internal Error, report how you get this!"


Can occur when there is a corrupt file on disk. Use the 'scandisk'
utility to try to find the disk file problem.

Microsoft Visual C++ Runtime Library


Assertion failed
Program: C:\Program Files\Tivoli\TSM\baclient\dsm.exe
File: cubackup.c Line:612 Expression: mgmtClass !=NULL_MGMTCLASS
Seen when there are Database files on the drive being backed up, which
should be excluded, like "exclude d:\...\*.mdf" and
"exclude d:\...\*.ldf" in your dsm.opt file.

TcpOpen: Could not resolve host name.


Has appeared with TDP for Lotus Notes (Domino), accompanied by message
GetHostnameOrNumber(): gethostbyname(): errno = 11001. This indicates
that either a hostname is incorrect in a configuration file, or that DNS
service is problematic.

The object is in use by another process (ANE4987E)


Often seen with the ntuser.dat and ntuser.dat.log files. Can occur
even though the given user is not logged in, by virtue of a Service running
under that user's ID. The message is not necessarily consequential, in
that the logical contents of the NTUSER.DAT are backed up anyway as part
of the Registry backup. You can restore the user customization if you
have either a backup copy of NTUSER.dat, or a good copy of the Registry;
and you never need to restore NTUSER.log.

WINDOWS PROBLEMS/SITUATIONS:

"-2147220998: Internal error in Windows installer" when installing or


upgrading a client.
Typically, the Windows Service Pack level is too low for the new
software.

Blah, blah, blah ... Win32 RC=5


Like: WritePswdToRegistry(): RegCreatePathEx(): Win32 RC=5
This is a permissions problem; RC=5 is the Windows error code for
"Access is denied."

D: drive not backed up


The simplest cause would be that the backup was attempted via the
scheduler and the scheduler was not restarted after a change to either
the complement of drives or the client options file, so that it would
know to back up that drive.
Do 'dsmc query inclexcl' to see if the drive is being inadvertently
excluded from Backup by a client-side or server-side Exclude.
This normally is a permissions problem. Check the scheduler service via
control panel / services. See what account the service logs in as. If
it is the SYSTEM account, make sure that the drive has permissions for
the SYSTEM account. A lot of NT Administrators remove EVERYONE from the
permissions, which also prevents the SYSTEM account from being able to
do backups. Note that running the backup manually causes it to run
under your own account, so that may succeed whereas the scheduled backup
may fail.
Another reason could be that disks involved are part of the Microsoft
clustering environment. Have you included CLUSTERnode Yes in the
cluster.opt file for each logical node on the cluster?
And, the volume should really have a label.
See also: Backup skips some PC disks
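For the clustering case, a minimal cluster.opt sketch for one logical node would look like the following (node name and domain values are illustrative only):

* cluster.opt for one logical cluster node -- illustrative values
NODename        CLUSGRP1
CLUSTERnode     Yes
DOMain          q: r: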

dsmcsvc.exe looping - consuming most of the system's CPU time (high CPU
usage)
Has been seen with defective file systems. Run CHKDSK, SCANDISK, or
comparable OS utility to examine the file system and the disk containing
it.
Another possibility: A problem or conflict with Norton Anti-Virus (NAV)
running and the CreateFile() Win32 API; in other words, a problem in
Windows itself. TSM calls CreateFile() during backup, and this is where
CPU usage climbs to 100%. So consider shutting down NAV services (Alert and
Auto-Protect) during the backup. (Just disabling Auto-Protect may not
help.) See if there is a Norton upgrade which may help.

SERVER STARTUP LOG MESSAGES:


Explanation (fstat error): No such file or directory
Indicates that a file constituting the server database or recovery log
is not present. This could be due to someone having diddled with
/etc/filesystems such that a file system you had established in it is
either not present in that config file, or is not automatically mounted.
Yes, it sure would be nice if the lazy programmer had instead said in
the message what component was missing.

Explanation (fstat error): A file or directory in the path name does not
exist
The server is testing for its database and recovery log volumes, per its
dsmserv.dsk file, and cannot find them. This can be due to having
rc.adsmserv started in /etc/inittab, but being run too early in the
system start-up sequence, before the volumes containing your TSM
database and recovery log are mounted and ready. This is certainly the
case if you later have no trouble with starting the server from the
command line. Look into any untoward mount delays and/or consider
changing the position of your rc.adsmserv in the inittab or modify
rc.adsmserv to wait for resources you know it needs.
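One preventive check, sketched here under the assumption that dsmserv.dsk is a plain list of volume paths (the helper name check_dsk is made up), is to verify before server start-up that every listed volume is actually present:

```shell
# check_dsk: report any volume named in a dsmserv.dsk-style list
# that is not present in the file system.
check_dsk() {
    while read -r vol; do
        [ -e "$vol" ] || echo "missing: $vol"
    done < "$1"
}

# Demonstration against a throwaway list; in real use, point it at your
# server's dsmserv.dsk (e.g. /usr/lpp/adsmserv/bin/dsmserv.dsk).
printf '%s\n' /etc/hosts /no/such/dbvol > /tmp/dsk.sample
check_dsk /tmp/dsk.sample
```

A start-up script could refuse to launch dsmserv while check_dsk produces any output.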

Trace/BPT trap(coredump)
This is a SIGTRAP (signal 5) condition.
Has been seen when swap space (paging space) was not active on an AIX
system.
Also, assure that the file system in which server resources are located
is not full, and has sufficient elbow room for any additional space
that it needs.

CONDITIONS:

3590, newly installed, won't eject or perform properly:


Assure that it is set to "disable CU mode = yes". CU mode is for
A00-attached drives. It is step 4 in the installation manual.

Access denied...
With static serialization, the adsm client will try to obtain an
exclusive read-lock on a file before backing it up. If it fails to
obtain the lock, it will return an "Access denied..." message. This
is misleading, making it seem like a permissions problem.

ADSM Server is already running from this directory


adsmserv.lock exists in server start directory (/usr/lpp/adsmserv/bin
in the case of AIX).

Device not ready


Condition encountered when a dsmrecalld is killed when someone is
going after an HSM-stored file.
Can occur when a dsmrecall command is suspended by one process and
another is trying to access the file which the suspended process was
recalling.
See also ANS4776E.

Divide by zero failure:


Beware any file systems whose drive capacity is reported as zero:
music CDROMs and certain network-mapped drives can look this way.
Keep in mind that when the client starts, it attempts to obtain
statistics for all existing available drives, so the problem will occur
regardless of whether an attempt is made to back up from or restore to
the drive in question. So when you encounter a divide-by-zero failure,
do a 'df'.

DSMINIT FAILED RC=2220


Could not find the dsm.opt file.
If using ADSM via an API, use the DSMI_CONFIG variable to point to where
the file is.
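For example, in a Unix shell (the path is hypothetical; point it at wherever your API dsm.opt actually lives):

```shell
# Hypothetical location -- adjust to your API client install.
export DSMI_CONFIG=/usr/tivoli/tsm/client/api/bin/dsm.opt
echo "$DSMI_CONFIG"
```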

dsmreconcile fails on a very large file system


The size of the dsmreconcile process may exceed the size of one data
segment (256 MB), which is the default limit for a process. The
dsmreconcile process is in this case killed by the system.
The work-around for this is to enable the ADSM dsmreconcile module to be
able to use more than one data segment by enabling Large Program
Support (probably already in TSM), using the following commands:
cd /usr/lpp/adsm/bin
cp -p dsmreconcile dsmreconcile.orig
/usr/bin/echo '\0200\0\0\0' |
dd of=dsmreconcile bs=4 count=1 seek=19 conv=notrunc
which causes the XCOFF o_maxdata field (see <aouthdr.h>) to be updated.
This allows dsmreconcile to use the maximum of 8 data segments (2 GB).
Choose the string to use for a given number of data segments from
the following table:
# segments    vm size    string
----------    -------    --------------
8             2 GB       '\0200\0\0\0'
6             1.5 GB     '\0140\0\0\0'
4             1 GB       '\0100\0\0\0'
2             512 MB     '\0040\0\0\0'
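The dd technique can be sanity-checked on a throwaway file before touching the real binary. A sketch (the file name is made up, and printf is used here in place of AIX's /usr/bin/echo octal escapes):

```shell
# Build a 100-byte dummy file standing in for the XCOFF binary.
dd if=/dev/zero of=/tmp/fakebin bs=4 count=25 2>/dev/null

# Patch the 4-byte word at index 19 (byte offset 76) with 0x80000000,
# the same word the procedure above writes into o_maxdata.
printf '\200\0\0\0' | dd of=/tmp/fakebin bs=4 count=1 seek=19 conv=notrunc 2>/dev/null

# Read the word back to confirm the patch landed where expected.
dd if=/tmp/fakebin bs=4 count=1 skip=19 2>/dev/null | od -An -tx1
```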

Error 1246208 deleting row from table "Archive.Descriptions"


An ADSM server defect has corrupted the database. Due to a failure in
the locking scheme your archive description tables may contain errors.
So you will have to fix these tables by (undocumented) service aids,
like "audit archdescriptions <nodename> fix=yes" or "dsmserv auditdb
archd fix=yes".

Failure Executing Schedule XXX, RC=4


This is a failure result from a scheduled backup, as on WindowsNT.
It is just a generic error, meaning that SOMETHING went wrong during
the scheduled backup. To find out what, look first for the
dsmerror.log file, then for the dsmsched.log file. Both should be (be
default) in the same directory where the ADSM client is running, which
is usually the ....\baclient subdirectory.

File '____' exists


Seen in the 3.7 client when trying to restore to an alternate location,
and the location specified is a directory name - but the name was
entered as a simple name like "/tmp/area". The client is being stupid,
and is attempting to treat the directory as if it were a file, and will
repeatedly tell you that it cannot replace it. What you need to do is
instead specify the name with a slash at the end, to tell the
dim-witted client that it is a directory, like "/tmp/area/".

Filespace not ready


Typical client message when the client attempts to back up a file
system soon after having done a Delete Filespace of it: the Delete is
asynchronous and has yet to complete.
Insufficient Memory (client):
May be one or more of the following:
- The client system has too little virtual memory.
- Unix Limits values need boosting. (In AIX, see /etc/security/limits
values, which set a system ceiling on usage values.)
- If client and server are running in the same machine, Shared Memory
may be in effect, and you may not have kernel parameters set high
enough to support this.
In HP-UX: The default data segment limit is 64 MB per process, so
when dsmc runs on rather large file systems, it may hit this limit.
This limit can be increased by modifying another kernel parameter
(this procedure is going to be documented in the HP client README
that's going to be shipped with the next ptf):
1. As root user, start "sam".
2. Select "Kernel Configuration"
3. Select "Configurable Parameters"
4. Locate "maxdsize" and increase its value through the menu entry
"Actions/Modify Configurable Parameter...", e.g. set it to
268435456 for a 256 MB max size of the data segment.
5. The kernel gets rebuilt by sam after this change and the system
needs to be rebooted for the new setting to take effect.

Insufficient mount points, 3590


See: Drives, not all in library being used

No space left on device (as from 'mv' command):


Found to occur on an HSM-managed file system when attempting to add a
file. What is going on is that dsmautomig was kicked off in trying to
make room, but even after going through its migration candidates list
(from the last dsmreconcile) it could not make enough room.
One of the following conditions prevails:
- dsmautomig is sluggish in making space: it will eventually make
some, and you can retry the data movement operation.
- All of the migration candidates have been migrated and there is no
physical space for a new incoming file. Thus, the space taken up by
all the stub files plus the space required by the new incoming file
exceeds the remaining capacity of the file system....
This could occur if you had removed a bunch of files which were named
in the migration candidates list such that dsmautomig found itself
short on files to migrate. If this is the case, run dsmreconcile,
then dsmautomig, then try the move-in again.
Otherwise you will need to delete some files or expand the file
system.
HSM will know of the failed operation and will spontaneously initiate
dsmautomig, and hence a reconciliation. Give that time to complete and
try the operation again. If still no good, you will have to either
remove junk from the file system or extend the file system ('chfs').
See also 'dsmmode' command, "outofspace" parameter.

Scheduler stopped/disappeared:
See "Scheduler has been stopped." under DSMSCHED.LOG ERROR MESSAGES

Server unresponsive to dsmadmc logins after it is restarted:


It may be doing a resync of its large database or log file.

Tape won't mount (as during reclamation):


Has been seen caused by tape drive device names changing in AIX when
one or more tape drives have been added or removed, such that device
names change from what was defined to ADSM. To check, issue the AIX
command '/usr/sbin/lsdev -C -c tape -H' and compare "IBM 3590" tape
drives against what shows up in ADSM server command 'Query DRive'. As
needed, perform 'UPDate DRive LibName /dev/rmt?' to adjust.

TCP/IP connection failure


Can be caused by a bad file. Remove or exclude it.

TcpOpen: TCP/IP error connecting to server.


sessOpen: Failure in communications open call. rc: -50
Do you have the TCPServeraddress option in the client's DSM.OPT file
filled out with your server's address?
Are you using TSM's inadequate default values for timeouts?
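A dsm.opt fragment of the following shape addresses both points; the server name and values are illustrative only, and the COMMRESTARTDuration/COMMRESTARTInterval options govern how persistently the client retries a dropped connection:

* dsm.opt fragment -- illustrative values only
TCPServeraddress      tsmserver.example.com
TCPPort               1500
COMMRESTARTDuration   10
COMMRESTARTInterval   30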

Waiting for mount of output volume <VolName> (NNN seconds)


Can occur for operations such as "BAckup STGpool", as when a defined
volume needs a CHECKIn before it can be used (do "Query ACtlog" to
reveal); or a cleaning tape may be busy in the drive. Be aware that a
CHECKIn takes priority over 'BAckup STGpool'.

CLIENT TRACING:

The TSM client provides substantial tracing facilities, principally for


support purposes, but which can be used by customers to gain insights into
client problems. The information provided here is intended to help
interpret some control and output information. Refer to the TSM "Trace
Facility Guide" for authoritative info. Note that, as in any software,
tracing entails substantial overhead and should not normally be active.
Though tracing substantially slows down client runtime and thus skews timing
statistics, the relative values amongst the derived statistics are indicative
of what's going on in normal client runs.

IMPORTANT: Client tracing functions only when running under the CLI, not the
GUI!

Basically you specify an output file:

In the options file: TRACEFILE SomeFilename


On the command line: dsmc ... -tracefile=SomeFilename

Then you specify the flags:

In the options file: TRACEFLAGS flags (or perhaps TESTFLAG flags)


On the command line: dsmc ... -traceflag=_______ ...

Allowable flags are:

ALL All traceflags except INSTR, INSTR_CLIENT, DETAIL,


INSTR_VERBOSE
ADMIN Administrative component
ALLCOMM Enables COMM, 3270COMM, EHLLAPI, 3270ERROR
ALLSESS Enables SESSION, VERBINFO, SESSVERB, VERBADMIN
ALLFILE Enables DIROPS, FILEOPS, FIOATTRIBS
ALLBACK Enables INCR, TXN, POLICY
ALLPROC Enables ALLBACK, ALLFILE, ALLSESS
API API tracing
AUDIT List files backed up or restored (Macintosh and Windows)
COMM Communications interface. Detail is voluminous, like:
commtcp.c (1470): TcpWrite: 30707 bytes written of
65548 requested.
commtcp.c (1593): TcpFlush: 32768 bytes written on
socket 3.
COMMDETAIL Detailed communications
COMPRESS Compression, expansion processing
CONFIG Configuration file processing
DIROPS Directory operations
DISABLENQR This runs the Restore using the old style restore
protocol which is much faster than the No Query Restore
protocol when restoring a minor portion of a filespace
having a large number of objects. Per APAR IX87848:
PERFORMANCE PROBLEM RESTORING SMALL NUMBER OF FILES
FROM A NESTED SUBDIRECTORY
Traversing down multiple sub directories and then
issuing a restore on a small number of files can result
in a delayed start of the restore if the program
decides to go through the No Query Restore path. This
customer has seen a delay of 40 minutes until the
restore started. After enabling the TESTFLAG DISABLENQR
(and thus going through the Classic Restore path),
restore started with acceptable delay.
Option may be placed in dsm.opt as "TESTFLAG DISABLENQR".
Note that the use of this option renders the restoral
non-restartable.
EHLLAPI PC3270W V3.0 EHLLAPI tracing
ENTER Entering or exiting a major function
ERROR Severe errors tracing
FILELISTS User interface file list processing
FILEOPS File I/O operations
FIOATTRIBS File and directory attributes during backup and archive
FS File space processing
GENERAL General process flow operations
INCR Incremental process operations. Reports the progress of
the incremental backup. Output is not voluminous.
INSTR Instrumentation tracing
INSTR_CLIENT Client entry or exit and network times
INSTR_CLIENT_DETAIL For detailed process information on where the client
is spending its time. The report is sectioned: the total
of all the sections accounts for all of the run time
from the client point of view. Sections, in run order:
Client Setup, Process Dirs, Solve Tree, Compute,
Transaction, BeginTxn Verb, File I/O, Compression,
Data Verb, Confirm Verb, EndTxn Verb, Client Cleanup.
See below for details.
Note: Replaced in 5.1.5 by TSM Client Instrumentation
trace (Instrument).
INSTR_VERBOSE Print all and final time statistics
INSTRUMENT TSM Client Instrumentation Trace, new in 5.1.5, to
replace INSTR_CLIENT_DETAIL and PERFORM (which no longer
work). Provides command line Backup/Archive client
performance instrumentation statistics by thread. (Not
applicable to API or TDPs, or GUI or web client.) Only
shows threads with instrumented activities. Includes
client command, options, and summary statistics. Enable
via:
command line: -testflag=instrument:detail
options file: testflag instrument:detail
Output is appended to the dsminstr.report file. You can
of course cancel the session from the server.
For the API, rather than the B/A client, use:
INSTRUMENT:API
LINK Hard link processing
MEMDETAIL Detailed memory tracing
MEMORY Memory allocation, buffer pool
MESSAGES User interface event messages
NLS National Language Support processing
POLICY Policy management tracing: see what the client thinks it
has available for management classes, and which one it
picks for directories.
PREFIX Adds module(line number) tracing suffixes to messages
SERVICE Enables ALL -NLS -COMMDETAIL.
This is a good, single, first-choice trace flag when
starting in on a problem.
SESSION Session layer tracing
SM Space Management tracing
SMSDEBUG Storage Management Services
SMVERBOSE Space Management detailed tracing
TIMESTAMP Timestamps on trace records
TRUSTED Trusted Communications Agent
TXN Backup and Archive Transaction list processing
VERBADMIN Administrator Datastream tracing
VERBDETAIL Client-server Verb fields contents tracing.
In conjunction with VERBINFO.
VERBINFO Client-server Verb fields contents tracing
3270COMM Low-level 3270 for Windows tracing
3270ERROR Low-level 3270 error tracing (Windows)

TSM 5.1+ note: Newer TSM releases may not support some of the above flags,
such as INSTR_CLIENT_DETAIL - see above (but TRACEFLAGS itself still
works). Some alternatives:
- For TSM V5.1 (and earlier) clients:
dsmc s 20MBtest.file -password=your_pw -tracefile=trace1.out
-traceflag=perform,general > test1.txt
- For TSM V5.1.5 (and later) clients:
dsmc s 20MBtest.file -password=your_pw -testflag=instrument:detail
Or, add to dsm.opt: testflag instrument:detail
This will produce a file called dsminstr.report in the same directory
as your dsmerror.log (by default the baclient directory).

To activate all but a few flags, preface the flags to be excluded with a
dash.

Example: TRACEFLAGS ALL -COMMDETAIL -NLS

Report elements explained, in alphabetical order
(from the ADSM Problem Determination Guide and other sources):

BeginTxn Verb Summary report line in INSTR_CLIENT_DETAIL trace.
Sending a begin-transaction verb to signal the beginning
of a transaction to back up or restore a group of files.
Client Cleanup Summary report line in INSTR_CLIENT_DETAIL trace.
Reflects processing after the last EndTxn verb.
Client Setup Summary report line in INSTR_CLIENT_DETAIL trace.
As it implies, the backup is preparing to run, where the
client is doing signon, authorization, and querying the
server for policy set and file system information.
Compression Summary report line in INSTR_CLIENT_DETAIL trace.
Reflects the compression of data being sent to the
server in Backup, and uncompression during Restore, if
client option COMPRESSIon Yes is in effect.
Compute Summary report line in INSTR_CLIENT_DETAIL trace.
The client is computing throughput and transfer sizes.
Confirm Verb Summary report line in INSTR_CLIENT_DETAIL trace.
In Backup, sending a confirm verb and waiting for a
response to confirm that data is being received by the
server.
CRC When computing or comparing CRC values.
Data Verb Summary report line in INSTR_CLIENT_DETAIL trace.
Reflects sending or receiving data to/from the
communication layer (e.g., TCP/IP). Data Verb time
correlates closely with the standard "Data transfer
time" statistic. In Backup, this time may not reflect
the entire time to do data transfer, because buffering
in the communication layer of the operating system may
rapidly absorb data for transmission, and the actual
sending may take some time. The remaining time ends up
being charged to Confirm Verb or EndTxn Verb time.
Therefore, the best estimate of the actual data transfer
time is the sum of Data Verb and Confirm Verb time. A
higher than expected data transfer time seen by the
client prompts an investigation of server performance
and the communication layer (which includes elements of
both the client and the server).
Corresponds to the "Network data transfer rate" job-end
statistic in a Backup or Archive.
Delta Adaptive subfile backup processing, including
determining the changed file bytes or blocks.
Encryption Encrypting or decrypting data.
EndTxn Verb Summary report line in INSTR_CLIENT_DETAIL trace.
In Backup, sending an end-transaction verb to signal the
end of a backup transaction and waiting for the
response. Average EndTxn time for backup depends on the
size of the transaction (i.e. how many files in the
transaction). See the section on time estimates. An
excessive average time suggests a problem on the server.
EndTxn Total Time is significant for small file
processing. A savings in total EndTxn time may be
achieved with a larger transaction size, i.e. by
increasing TXNGroupmax on the server and/or TXNBytelimit
on the client.
File I/O Summary report line in INSTR_CLIENT_DETAIL trace.
In Backup, the client process is reading the file system
data, to send it to the server. In Restore, the client
is writing file system data received from the server.
Each File I/O usually represents a 32K logical request
(or the remaining data if less than 32K). File I/O may
be entered one additional time at the end of the file.
With compression on some smaller clients a File I/O
represents a request for less than 32K. A File I/O
request may require multiple physical accesses. For
small files and on systems without read-ahead, average
File I/O time for backup is generally 15ms to 40ms
depending on the platform. For large files on systems
doing read-ahead, disk overlap can significantly reduce
the average File I/O time for backup, depending on the
amount of time it takes for other processing.
Process Dirs Summary report line in INSTR_CLIENT_DETAIL trace.
The preliminary stage of obtaining file system content
information from the server prior to principal
operation. For Incremental Backup it includes querying
the server for backup status information. For ordinary
Restore it includes retrieving the file list. (Not used
for No Query Restore.) When the client has to
construct a file list to govern the operation, it does
so in client memory, which needs to be sufficient to
accommodate the operation. If the client's real memory
is undersized, paging will be involved, which will slow
the operation. However, if the client is properly sized,
it could be that the server database cache hit ratio is
not high enough.
Solve Tree Summary report line in INSTR_CLIENT_DETAIL trace.
For Selective Backup: determining if there are any new
directories in the path that need to be backed up. This
involves querying the server for backup status
information on directories. With a large number of
directories, Solve Tree time can be large.
Transaction Summary report line in INSTR_CLIENT_DETAIL trace.
A general category to capture all time not accounted for
in the other sections, including file open/close time
and other miscellaneous processing on the client. The
more files there are to process (as in a file system
with a large number of small files), the more time
consumed in this aspect. Otherwise it is generally not a
large component of elapsed time. Since transaction time
includes certain up front processing, as well as
processing during file transfer, average Transaction
time is not very meaningful. Heavily populated
directories impede speed: see "Directory performance".

For network measurement, the trace flag INSTR_CLIENT_DETAIL is the most
valuable. After a Backup, for example, examine the "Data Verb" and "Confirm
Verb", which in combination will reflect network transfer time.
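When scanning an INSTR_CLIENT_DETAIL summary, it can help to rank the
sections by Total Time to see where the run's time went. A minimal sketch in
shell/awk, assuming the whitespace-separated summary table layout shown in
the examples below (the function name is invented):

```shell
# Rank instrumentation summary sections by Total Time, descending.
# Reads "Section  Total Time(sec)  Average Time(msec)  Frequency" table
# lines on stdin; headers, dashes, and other non-data lines are ignored.
rank_sections() {
    awk 'NF >= 4 && $(NF-2) ~ /^[0-9.]+$/ && $(NF-1) ~ /^[0-9.]+$/ && $NF ~ /^[0-9]+$/ {
             name = $1                      # section name may be
             for (i = 2; i <= NF-3; i++)    # several words long
                 name = name " " $i
             printf "%10.3f  %s\n", $(NF-2), name
         }' | sort -rn
}
```

Piping the statistics portion of the first example below through this would
put EndTxn Verb at the top of the list.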

Keep in mind that tracing adds its own overhead. Refer to the Trace
Facility Guide manual.

Example of a Backup which sends a small file and waits for a tape mount:

dsmc i -traceflag=INSTR_CLIENT_DETAIL somefile

Session established with server ADSMSERV: AIX-RS/6000

Incremental backup of volume 'somefile'

Directory--> 1,536 /usr1/it [Sent]
Normal File--> 20,237 /usr1/it/rbs/somefile [Sent]
Retry # 1 Directory--> 1,536 /usr1/it [Sent]
Retry # 1 Normal File--> 20,237 /usr1/it/rbs/somefile ** Unsuccessful **
ANS1114I Waiting for mount of offline media.
Retry # 2 Normal File--> 20,237 /usr1/it/rbs/somefile [Sent]
Successful incremental backup of 'somefile'

Total number of objects inspected: 3
Total number of objects backed up: 2
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects failed: 0
Total number of bytes transferred: 59.35 KB
Data transfer time: 0.00 sec
Network data transfer rate: 35,606.16 KB/sec
Aggregate data transfer rate: 0.61 KB/sec
Objects compressed by: 0%
Elapsed processing time: 00:01:36
------------------------------------------------------------------
Final Detailed Instrumentation statistics
Elapsed time: 2.178 sec
Section Total Time(sec) Average Time(msec) Frequency used
------------------------------------------------------------------
Client Setup 0.433 433.3 1
Process Dirs 0.057 18.9 3
Solve Tree 0.016 16.2 1
Compute 0.000 0.0 3
Transaction 0.464 20.2 23
BeginTxn Verb 0.001 0.4 4
File I/O 0.011 1.8 6
Compression 0.000 0.0 0
Data Verb 0.002 0.6 3
Confirm Verb 0.000 0.0 0
EndTxn Verb 1.638 409.6 4
Client Cleanup 0.004 4.4 1
------------------------------------------------------------------

Observations on the above example:

The "Elapsed processing time" of a minute and a half reflects waiting for
the tape mount.
The "Network data transfer rate" is ridiculously large, reflecting TCP/IP
buffer absorption of the data from the client program, not the actual
sending.

Example of a Backup which sends a large file and waits for a tape mount:

dsmc i -traceflag=INSTR_CLIENT_DETAIL 10MB-file

Incremental backup of volume '10MB-file'

Directory--> 1,536 /usr1/it [Sent]
Normal File--> 10,485,760 /usr1/it/rbs/10MB-file [Sent]
Retry # 1 Directory--> 1,536 /usr1/it [Sent]
Retry # 1 Normal File--> 10,485,760 /usr1/it/rbs/10MB-file ** Unsuccessful **
ANS1114I Waiting for mount of offline media.
Retry # 2 Normal File--> 10,485,760 /usr1/it/rbs/10MB-file [Sent]
Successful incremental backup of '10MB-file'

Total number of objects inspected: 3
Total number of objects backed up: 2
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects failed: 0
Total number of bytes transferred: 30.00 MB
Data transfer time: 97.66 sec
Network data transfer rate: 314.59 KB/sec
Aggregate data transfer rate: 303.36 KB/sec
Objects compressed by: 0%
Elapsed processing time: 00:01:41
------------------------------------------------------------------
Final Detailed Instrumentation statistics
Elapsed time: 101.287 sec
Section Total Time(sec) Average Time(msec) Frequency used
------------------------------------------------------------------
Client Setup 0.250 249.7 1
Process Dirs 0.041 13.8 3
Solve Tree 0.014 14.4 1
Compute 0.014 0.0 483
Transaction 0.318 0.2 1464
BeginTxn Verb 0.001 0.3 4
File I/O 2.581 5.3 486
Compression 0.000 0.0 0
Data Verb 97.657 202.2 483
Confirm Verb 0.012 12.1 1
EndTxn Verb 0.656 163.9 4
Client Cleanup 0.006 6.4 1

Observations on the above example:

The "Elapsed processing time" is comparable to that of sending a small file,
in that tape mount and positioning time dominates.
This example clearly shows that the Data Verb time is essentially equal to
the "Data transfer time". (10 MB over 483 Data Verbs is about 21,710 bytes
per verb; but what does that indicate?)
The single file being sent is 10 MB; but "Total number of bytes transferred"
is reported as 30.00 MB, reflecting the two retries: the file was sent
three times.
Dividing "Total number of bytes transferred" (30.00 MB) by "Data transfer
time" (97.66 sec) yields about 314.6 KB/sec, matching the reported
"Network data transfer rate".

SERVER TRACING:

The following is an extract from the "Trace Facility Guide".

Basically you do:

TRace ENAble Trace_Class(es)
TRace Begin Output_File_Name

and after some time:

TRace END
TRace Disable Trace_Class(es)

Do 'Query TRace' during the tracing to check.
Do 'TRace List' to list the trace classes.

The server trace classes are:

ALL All traceflags except INSTR, INSTR_CLIENT, DETAIL,
ADMCMD Command tracing
APPCERROR APPC driver error data tracing
APPCINFO APPC driver general information tracing
BLKDISK Block oriented disk driver
DIALERROR DIAL driver error tracing (S/390)
DIALINFO DIAL driver general information tracing (S/390)
INSTrumentation Instrumentation tracing (new in 5.2.0). Provides server
or storage agent performance instrumentation statistics
by thread. Shows threads with disk, tape, or network
I/O operations.
To start: INSTrumentation Begin [Maxthread=<Number>]
(By default max 1024 threads instrumented.)
...Let run for a few minutes...
To end: INSTrumentation End [File=<Filename>]
(Default: output to console or admin display)
Should be run less than 24 hours.
IPXDATA IPX driver data (OS/2, AIX)
IPXERROR IPX driver errors (OS/2, AIX)
IPXINFO IPX driver informational data (OS/2, AIX)
IUCVERROR IUCV driver error tracing (S/390)
IUCVINFO IUCV driver general information tracing (S/390)
LVM Database/Recovery log management functions
MMSBASE Entry points into mount management services component
NETBIOSDATA Netbios driver data (OS/2, AIX)
NETBIOSERROR Netbios driver errors (OS/2, AIX)
OPER Operator interface tracing (S/390)
PID Command Process ID (UNIX)
SCHED Central Scheduling
SYSTIME System time on trace records
TCPERROR TCP/IP driver error tracing
TCPINFO TCP/IP driver general information tracing

During a client restoral you can examine the distance between files on tape
by running the following trace on the server (as when you feel that your
restore is stalled or slowed down). Enter the following two commands:
trace enable pvr as
trace begin data_set/file_name
After letting the restore run for a few minutes, enter the server command:
trace end
View the trace file and look for the following (non-contiguous) entries:
<31>pvrgts.c(3121): Positioning from block xxxx to block yyyyy
<31>pvrgts.c(3121): Positioning from block yyyyy to block xxxx

Ref: http://www.ibm.com/support/docview.wss?uid=swg21107022
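A small sketch (the function name is invented; it assumes the "Positioning
from block X to block Y" line format shown above) to total the
repositioning distance recorded in such a trace file:

```shell
# Sum the tape repositioning distance reported in a server PVR trace.
# Each matching line reads: "... Positioning from block X to block Y".
total_seek_distance() {
    awk '/Positioning from block/ {
             from = $(NF-3); to = $NF
             d = to - from; if (d < 0) d = -d   # distance is absolute
             sum += d; n++
         }
         END { printf "%d repositionings, %d blocks total\n", n, sum }' "$1"
}
```

Many large jumps back and forth suggest that the files being restored are
scattered widely over the volume.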

THROUGHPUT MEASUREMENT:

From time to time you will want to perform measurements of *SM throughput -
particularly when client people complain that they are not seeing the
performance they expect. Closely allied with that is the throughput one may
expect to a tape drive. The rate you get through an application like TSM is
dependent upon all the things that the application has to do in addition to
transferring data. The big factor in TSM is, of course, database updating
as part of file transfer (including the Versions expiration which occurs at
Backup time). The more (small) files you have, the more db updating, and
thus reduced throughput.

Here are some steps to follow, in hierarchical order, starting in the OS
environment...

- Measure the tape drive path performance in isolation:
You want to measure how all those elements in the path from host to tape
drive actually perform: the adapter card, its microcode, cabling, any
switching boxes, etc. It is possible to virtually eliminate the tape
drive from the path and almost purely measure the SCSI or FibreChannel
performance, by effectively making the tape drive a "sink". On the drive
to be used in the test, mount a non-*SM test tape (via 'mtlib' command,
manually, etc.). In Unix, do: 'dd if=/dev/zero of=/dev/rmt1' with tape
drive compression *enabled*. (You can probably realize a comparable
command in other operating systems.) This will send an endless stream of
binary zeroes to the tape drive - which are highly compressible - causing
the tape drive to have to do virtually nothing but gobble the incoming
data (little, if any, tape writing) which is almost like having a
/dev/null receptor inside the drive, allowing you to measure the path
rate. This will provide you with MB/second numbers which almost purely
reflect maximum path rates, and should be way above the maximum rate spec
for your tape drive. If inordinately lower, check interface card
parameters, OS settings, etc. before going on.

- Measure the tape drive performance:
On the drive to be used in the test, mount a non-*SM test tape (via
'mtlib' command, manually, etc.). Then use a simple command that will
transfer a large amount of data (a gigabyte would be good) of known size
to a mounted + ready tape, and time the transfer. What to send: randomized
data (like a Unix core file); or do like 'dd if=/dev/zero of=/dev/rmt1' in
Unix, with tape drive compression disabled. This will yield true write
performance numbers. Compare the figures with those in the vendor specs
for the tape drive: if inordinately lower, check drive settings.
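Such a timed transfer can be scripted along these lines (a sketch: the
function name is made up, /dev/rmt1 is only an example device name, and
'date +%s' support varies by Unix flavor):

```shell
# Time a large sequential write of zeroes to gauge write throughput.
#   $1 = output target (e.g. /dev/rmt1), $2 = megabytes to send
measure_write_rate() {
    dev=$1; mb=$2
    start=$(date +%s)
    dd if=/dev/zero of="$dev" bs=1048576 count="$mb" 2>/dev/null
    end=$(date +%s)
    elapsed=$((end - start))
    [ "$elapsed" -eq 0 ] && elapsed=1     # guard against divide-by-zero
    echo "$mb MB in $elapsed sec: $((mb / elapsed)) MB/sec"
}
```

Run it with drive compression disabled for true write rates, or with
compression enabled (zeroes compress maximally) to approximate the path
test described earlier.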

- Time a TSM client test in the same host system where the server resides:
This exercises TSM in a client-server arrangement, but eliminating
networking factors.
- Perform a Selective or Incremental backup on a single file, as large as
possible (to eliminate TSM db updating factors), containing random data
(to exercise tape drive data compression at a fairly representative
level). Consider utilizing Shared Memory as well as data communication
methods, for perspective comparison.
- Same conditions, but using a large number of small files, to gauge the
impact of TSM db updating while excluding network issues.

- Time a TSM client test in the same subnet as the server:
This exercises TSM in a client-server arrangement, with nominal networking
factors. Similar single-large and many-small files as in previous test.

- Time a TSM client test in a subnet different from the server's:
This exercises TSM in a client-server arrangement, with "worse" networking
factors (as many clients may encounter) going across a router/switch in
the networking. Similar single-large and many-small files as in previous
test.

Notes: - TSM typically optimizes tape drive attributes for best
performance: the customer need not make any adjustments.
- See "Backup performance" note regarding SCSI chains.
- Tape quality is a huge factor in throughput. The best visible
measure of tape quality is to watch (Query Process) a Backup
Stgpool operation: on some tapes you will see it struggle to make
headway, retrying multiple times to read the input tape or write
the output.
- With networking (regardless whether LAN or SAN), keep in mind that
it is not *SM which actually sends the data - the operating system
communications transport subsystem is doing that (e.g., TCP/IP).
*SM hands the data off to the comm system and considers it
transmitted, using that time in reporting data rate. Depending
upon data volume, in fact none of it may have been transmitted
yet, in that it may simply have been absorbed into the
communications buffers, which is of course a much faster operation
than is actual transmission. So when attempting to measure the
throughput capabilities of transmission facilities it is usually
necessary to send a great quantity of data, so as to statistically
average out buffering effects. See "Network data transfer rate".
- In a *SM backup timing test involving tape, perform your test in
two stages: A timing test based upon elapsed time will be skewed
by the time required to mount and position the tape which will
take the data. The way to deal with this is to perform the test
in two stages: perform an initial perfunctory Backup whose purpose
is to load and position the tape; then right afterward, perform
your Backup timing test. The tape will remain mounted and
positioned by virtue of MOUNTRetention, and the second stage will
reflect pure throughput.
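The two-stage approach can be scripted with a small timing wrapper (a
sketch: the helper name is invented, and the dsmc commands in the comments
are illustrative):

```shell
# Time a command and report elapsed wall-clock seconds.
time_cmd() {
    start=$(date +%s)
    "$@" > /dev/null
    end=$(date +%s)
    echo "Elapsed: $((end - start)) sec"
}
# Stage 1 (untimed): prime the tape mount/position, e.g.
#   dsmc selective /tmp/primer.file
# Stage 2 (timed), while the tape is still mounted and positioned:
#   time_cmd dsmc selective /usr1/big.test.file
```

Divide the known size of the test file by the stage-2 elapsed time to get
a throughput figure free of mount/position overhead.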
- Use Client Trace facilities to get detailed information. The
Client Trace Facility can provide insights into where time goes in
a client session. See "CLIENT TRACING", above.

NETWORK PERFORMANCE (ETHERNET PERFORMANCE):

The subject of performance of ethernet networks often comes up on ADSM-L.
Often, sites will complain that they have 100 Mb (100 Megabit - small 'b'
means bits, large 'B' means Bytes) but that they are not getting the kind of
throughput they should be seeing through *SM. And we often see the product
immediately blamed for the throughput problem, with no analysis on the part
of the customer technicians.

It cannot be stressed strongly enough that you MUST obtain benchmark numbers
BEFORE deploying any facility, in order to both assure that it meets vendor
performance specifications (product acceptance test) and so that you have
numbers against which to compare when issues come up in production. Far too
many customers simply put a complex facility into production without having
done any basal measurements and later are frantic for an answer to what's
wrong with throughput. That's obviously a chaotic way to operate a data
processing complex. What you need to do is, during quiet times when clear
measurements can be made, conduct unit studies of the various components
which comprise a complex and make the numbers available to all site
personnel for later reference. You may have to engage subject matter
specialists (e.g., network people) to conduct some studies. Don't hesitate
to involve others: they will be impressed that you thought to pursue this.
In the study, you may uncover anomalies which can then be addressed and
corrected, before they bite you. Once you have basal numbers for a properly
operating amalgam of components, you can much more readily analyze problems
in operating the whole.

For 10 Mb ethernet you should expect a nominal 1 MB/sec throughput.
For 100 Mb ethernet you should expect a nominal 10 MB/sec throughput.
The classic reason for not getting that is conflicting configurations, as in
the computer's ethernet card (NIC) versus the configuration of the device
through which the computer is connected to the network (router, switch,
etc.), and this is most commonly caused by naively setting the computer's
ethernet card to Auto Negotiation. One should rightly expect this to, as
its name implies, talk to the attached network device and come to an
agreement on the optimal communication parameters. In practice, however,
(particularly with 3Com cards) this doesn't happen: the two devices instead
lapse into conflicting settings, as in half- vs. full-duplex. Auto
Negotiation is best unused. Instead, manually configure per what your
network support people say.

Network load, other than TSM, is a major factor in what TSM can get out of
the network. The whole point of a SAN is that you are dedicating networking
to storage access needs, excluding other types of traffic. If your TSM
traffic is going over a LAN, you are subject to contention with all the
other stuff going through it, not the least of which is the amazingly large
amount of traffic deriving from all the port scans and probes incited by
endless Microsoft Windows security lapses, as sites throughout the world -
and infected computers in your own site - attempt to exploit security holes
at every other site. (Use firewalls!)

Verify your settings via operating system queries. In AIX, do
'lsattr -EHl ent_' on the particular ethernet adapter. If the output
includes the attribute "media_speed", then the adapter is multi-speed: if
the attribute value is "10_Half_Duplex", it's running at 10 Mb half duplex;
if "100_Full_Duplex", it's running at 100 Mb full duplex; etc.

You can also use the 'netstat -i' and 'netstat -v' commands on some Unix
systems to see ethernet statistics. If you are seeing a lot of Collisions,
your subnet may be overloaded. If there are a lot of Late Collision
Errors, you probably have a Full Duplex vs. Half Duplex configuration error
between your ethernet card and network access device, which results in
incredibly slow throughput. (Avoid Auto Negotiation.)

To validate performance, use an application outside of TSM. Possible ways:
1. Do an FTP of a large file (several megabytes at least), specifying
/dev/null for the output file so as to avoid creating a large file over
there, and to avoid disk I/O on that system as a factor. A hallmark of
FTP is its report of the data transmission rate, which is what you are
seeking.
FTP is TCP-based (session-based), like TSM.
(Aside note: The success of FTP on a given network does not necessarily
indicate that there are no network problems.)
In Unix, you can conduct a basic test, without input/output files, as:
ftp your.servername.com
ftp> bin
ftp> put "|dd if=/dev/zero bs=64k count=10000" /dev/null
ftp> quit
Inspect the rate value (e.g., 904 Kbytes/s).
2. Employ a downloadable utility like "Test TCP" (TTCP, WSTTCP)
(http://www.pcausa.com/Utilities/pcattcp.htm,
http://www.cisco.com/warp/public/471/ttcp.html, et al), which runs on a
transmitting and receiving system to test functionality and report
rates.
3. Use the Unix 'spray' command to send a given number of packets to the
remote system to gauge performance. If the remote system does not have
the sprayd daemon active, you can instead have spray use the ICMP echo
protocol, as in 'spray RemoteSysName -i -l 2082' as run from root.
Spray is UDP-based (sessionless) unlike *SM, which is TCP-based.

If using Gigabit ethernet, consider implementing Jumbo Frames, as supported
by some networking vendors (and AIX 4.3), to boost the MTU size and thus
throughput.

In some Unix systems - AIX in particular - watch out for an insidious
behavior called Path MTU Discovery. This is the utilization of ICMP probes
for dynamically discovering the maximum transmission unit (MTU) of an
arbitrary internet path so as to avoid or minimize packet fragmentation and
thus maximize transmission performance and reduce network overhead. It is
obviously worthwhile only when the amount of data to be sent far exceeds the
amount of work involved in the discovery. AIX utilizes Path MTU Discovery.
The 'netstat -s' command reports PMTU discovery statistics. Ref: RFC 1191;
http://publib16.boulder.ibm.com/pseries/en_US/aixbman/commadmn/tcp_pathmtu.htm
http://www.sendmail.org/tips/pathmtu.html

3590 TAPE DRIVE SPECIAL DEVICE FILES:

The 3590 tape drives are identifiable in /dev by their Major Device number
being 27, as revealed by 'ls -l /dev/rmt*'.

Special File Rewind Retension Bytes Trailer Unload
Name on Close? on Close? per Inch label? on Close?
/dev/rmt* Yes n/a n/a No No
/dev/rmt*.1 No n/a n/a No No
/dev/rmt*.2 Yes n/a n/a No No
/dev/rmt*.3 No n/a n/a No No
/dev/rmt*.4 Yes n/a n/a No No
/dev/rmt*.5 No n/a n/a No No
/dev/rmt*.6 Yes n/a n/a No No
/dev/rmt*.7 No n/a n/a No No
/dev/rmt*.20 Yes n/a n/a No Yes
/dev/rmt*.40 Yes n/a n/a Yes No
/dev/rmt*.41 No n/a n/a Yes No
/dev/rmt*.60 Yes n/a n/a Yes Yes
/dev/rmt*.null Yes n/a n/a No No
/dev/rmt*.smc n/a n/a n/a n/a n/a

If Rewind On Close is not chosen, the Close causes one tapemark to be
written, thus delimiting the end of the file on tape. If Rewind On Close is
chosen, the Close causes one additional tapemark to be written (a total of
two) before rewinding, thus serving to delimit the file and denote the end
of the tape.

Retension on Close and Bytes per Inch (density) are not applicable to 3590s
because the drives perform such functions automatically.

The rmt*.null file is a pseudo device to facilitate software development,
and functions in a way similar to the /dev/null AIX special file. It allows
ioctl() calls to be issued without a real device there, thus serving a dummy
function which always returns successful completion. Read and write calls
will return the requested number of bytes.

The rmt*.smc file is for controlling the SCSI Medium Changer (SMC), which
is an assembly on the front of 3590 drives in devices like the 7331 and
7336, but not on 3590 drives housed in libraries like the 3494. Note: When
running 'cfgmgr -v' to define a
3590 library, the 3590's mode has to be in "RANDOM" for the rmt_.smc file to
be created. (Note: With 3575 and 733* models, the device is /dev/smc_.)
You can issue SMC commands manually via the 'tapeutil' command. Mounts
occur by specifying that whatever tape is in a certain slot number is to be
mounted (it is not done by volser).

Reference: The device drivers manual "IBM SCSI Tape Drive, Medium Changer,
and Library Device Drivers: Installation and User's Guide",
Chapter 4, Special Files.

ADSM DATABASE STRUCTURE AND DUMPDB/LOADDB
(per David Bohm, ADSM server development, posted 19981201):

"The ADSM server data base contains different objects. Most of the objects
are b-tree tables. The cause of using more space for the LOADDB than was
actually used in the data base that was dumped with the DUMPDB command is a
result of the algorithm used to perform the DUMPDB/LOADDB and the
characteristics of a b-tree object...

When a record is to be inserted into a node in a b-tree and that record does
not fit then a split occurs. In a standard b-tree algorithm 1/2 of the data
goes in one leaf node and the other 1/2 goes into another leaf node. When
this happens randomly over time you get a tree where about 50% of the data
base is unused space. With the V2 ADSM server we added a little more
intelligence in the split process. There are many tables in the ADSM server
where a new record will always be the highest key value in that table. If
the insert is the highest key value then instead of doing a 1/2 and 1/2
split we just add a new leaf node with that single record. This results in
closer to 100% utilization in each of the leaf nodes in the ADSM server.

This now takes us to the DUMPDB/LOADDB process. One of the purposes of this
process is to recover from a corrupted data base index. What this means is
we ignore the index on the DUMPDB process and only dump the b-tree leaf
nodes (plus another type of data base object called a bitvector). These
leaf nodes are not stored physically in the data base in key order, which
means they get dumped out of key sequence. The LOADDB will take the records
from each of those leaf nodes and then perform inserts of those records into
the new data base. This means we take those pages that were nearly 100%
utilized because of the efficient b-tree split algorithm and convert them
into 50% utilized pages because of having to use the generic b-tree page
split algorithm.

We do not "compress" records in the data base. The data in the data base is
encoded to reduce space requirements. The data will always be written in
the encoded form to the data base as it is required for us to properly
interpret the data in the data base pages. This encoding is performed with
any writes of records into the ADSM data base, including the LOADDB since it
calls the same routines to perform the writes into the data base as the rest
of the server functions.

APAR IC13101 also describes this.

Note also that it is counterproductive to "pack" data in a database which is
to be generally used, which means being subject to updating as well as
retrieval, in that performance is improved in not having to perform block
splitting as soon as updating starts. All of which is to say that you need
to imbed freespace during the load."

This is to say that the reload phase of a reorganization has to proceed
according to the architecture and algorithms under which the database
operates, and so compaction cannot be expected.

TSM DATABASE AUDITING:

Here are some samples from various types of database auditing, as
experienced by TSM customers. This will give you a sense of what the output
should look like. (The Admin Ref manual sadly fails to provide such
illumination, choosing to address only command syntax.)

A full database audit, having taken the TSM server down to run the batch
command 'DSMSERV AUDITDB FIX=NO', which just identifies problem areas:

ANR4140I AUDITDB: Database audit process started.

ANR4075I AUDITDB: Auditing policy definitions.
ANR4040I AUDITDB: Auditing client node and administrator definitions.
ANR4135I AUDITDB: Auditing central scheduler definitions.
ANR3470I AUDITDB: Auditing enterprise configuration definitions.
ANR2833I AUDITDB: Auditing license definitions.
ANR4136I AUDITDB: Auditing server inventory.
ANR4138I AUDITDB: Auditing inventory backup objects.
ANR4139I AUDITDB: Auditing inventory archive objects.
ANR4307I AUDITDB: Auditing inventory external space-managed objects.
ANR4310I AUDITDB: Auditing inventory space-managed objects.
ANR4137I AUDITDB: Auditing inventory file spaces.
ANR4230I AUDITDB: Auditing data storage definitions.
ANR4264I AUDITDB: Auditing file information.
ANR4266I AUDITDB: Auditing sequential file information.
ANR4265I AUDITDB: Auditing disk file information.
ANR4256I AUDITDB: Auditing data storage definitions for disk volumes.
ANR4263I AUDITDB: Auditing data storage definitions for sequential volumes.
ANR6646I AUDITDB: Auditing disaster recovery manager definitions.
ANR4210I AUDITDB: Auditing physical volume repository definitions.
ANR4446I AUDITDB: Auditing address definitions.

ANR4141I AUDITDB: Database audit process completed.

Note that you might choose to run the audit with FIX=YES to get everything
taken care of in one execution, that being the fully supported execution
method. Alternatively, you may want to try fixing problems via partial
audits. Be advised that the partial audits are not necessarily documented
for customer use, and are appropriate only in the context of TSM Support
guidance, to knowingly correct problems fully identified to be in a given
area. You may proceed to use the following known, available partial audits,
accepting any risk:

DSMSERV AUDITDB ADMIN DETAIL=YES

ANR4075I AUDITDB: Auditing policy definitions.
ANR4040I AUDITDB: Auditing client node and administrator definitions.
ANR4135I AUDITDB: Auditing central scheduler definitions.
ANR3470I AUDITDB: Auditing enterprise configuration definitions.
ANR2833I AUDITDB: Auditing license definitions.

DSMSERV AUDITDB ARCHSTORAGE DETAIL=YES

ANR4230I AUDITDB: Auditing data storage definitions.
ANR4264I AUDITDB: Auditing file information.
ANR4266I AUDITDB: Auditing sequential file information.
ANR4263I AUDITDB: Auditing data storage definitions for sequential vols.
ANR4210I AUDITDB: Auditing physical volume repository definitions.
ANR4446I AUDITDB: Auditing address definitions.

DSMSERV AUDITDB DISKSTORAGE DETAIL=YES

ANR4230I AUDITDB: Auditing data storage definitions.
ANR4264I AUDITDB: Auditing file information.
ANR4265I AUDITDB: Auditing disk file information.
ANR4256I AUDITDB: Auditing data storage definitions for disk volumes.
ANR4210I AUDITDB: Auditing physical volume repository definitions.
ANR4446I AUDITDB: Auditing address definitions.

DSMSERV AUDITDB STORAGE DETAIL=YES

ANR4140I AUDITDB: Database audit process started.

ANR4230I AUDITDB: Auditing data storage definitions.
ANR4264I AUDITDB: Auditing file information.
ANR4265I AUDITDB: Auditing disk file information.
ANR4266I AUDITDB: Auditing sequential file information.
ANR4256I AUDITDB: Auditing data storage definitions for disk volumes.
ANR4263I AUDITDB: Auditing data storage definitions for sequential vols.
ANR4210I AUDITDB: Auditing physical volume repository definitions.
ANR4446I AUDITDB: Auditing address definitions.

DSMSERV AUDITDB INVENTORY DETAIL=YES

ANR4136I AUDITDB: Auditing server inventory.
ANR4307I AUDITDB: Auditing inventory external space-managed objects.
ANR4138I AUDITDB: Auditing inventory backup objects.
ANR4139I AUDITDB: Auditing inventory archive objects.
ANR4310I AUDITDB: Auditing inventory space-managed objects.
ANR4137I AUDITDB: Auditing inventory file spaces.

MACROS:

It is possible for a sequence of commands within a macro to create problems,
even though the same sequence is successful when the commands are issued
individually from the command line. Normally, commands within a macro are
executed within the same transaction, and this may cause problems if there
are interactions between commands in a sequence. If problems occur, try
placing a COMMIT command between the problem commands in the macro or use
the -Itemcommit option when you start the admin client. See the section on
"Controlling Command Processing in a Macro" in the Admin Reference.

Try using the -itemcommit option on the dsmadmc command, i.e.

dsmadmc -id=me -Itemcommit

Then issue the macro command.

Alternatively, place the 'commit' statement between commands in the macro,
i.e.

backup volhist file=/sys/logs/adsm/volhist.%1
COMMIT
del volhist todate=today-7 type=dbbackup
COMMIT
BAckup DB devc=atl3590 type=full scratch=no vol=%2
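
As an illustration, the macro above can be saved to a file and run through
the administrative client with positional parameters substituted for %1 and
%2. The file path, admin ID, date suffix, and volume name below are invented
for the example:

```shell
# Write the example macro to a file (same content as shown above).
cat > /tmp/dbbackup.mac <<'EOF'
backup volhist file=/sys/logs/adsm/volhist.%1
COMMIT
del volhist todate=today-7 type=dbbackup
COMMIT
BAckup DB devc=atl3590 type=full scratch=no vol=%2
EOF

# Hypothetical invocation (not run here); the two arguments replace %1 and %2:
#   dsmadmc -id=me -password=secret macro /tmp/dbbackup.mac 20050307 VOL001

grep -c '^COMMIT' /tmp/dbbackup.mac    # sanity check: two COMMIT statements
```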

MACINTOSH (UP TO OS 9) INCLUDE-EXCLUDE LIST RECOMMENDATIONS:

The Mac client manual, under "Creating an Include-Exclude List" suggests
a limited list of objects to include. A more comprehensive list as
suggested by various customers:

Exclude "...:Desktop DB"
Exclude "...:Desktop DF"
Exclude "...:Desktop"
Exclude "...:Trash:...:*"
Exclude "...:Wastebasket:...:*"
Exclude "...:VM Storage"
Exclude "...:Norton FileSaver Data"
Exclude "...:Norton VolumeSaver Data"
Exclude "...:Norton VolumeSaver Index"
Exclude.dir "...:System Folder:Preferences:cache-cache"
Exclude.dir "...:System Folder:Preferences:Netscape Users:...:Cache f"
Exclude.dir "...:System Folder:Preferences:Netscape f:Cache f"
(where each 'f' is actually the script f conventionally
identifying a folder, produced by holding down the Option key
and then pressing f)
Exclude.dir "...:System Folder:Preferences:Explorer:Temporary Files"
Exclude "...:Temporary Items:...:*"
Exclude "...:...:TheFindByContentIndex"
Exclude "...:aaaaaaa?????*"
Exclude "...:...:TSM Sched*"
Exclude "...:...:TSM Error*"

MOUNTING FILE SYSTEMS READ-ONLY FOR BACKUP:

Because ADSM updates the access time (atime) of each file that it reads to
back up, and implicitly updates the access time of every directory that it
traverses to get to files, one would like to mount file systems for backup
in read-only (R/O) mode, on the host where the file system is native, in
order to leave files undisturbed. (This is important for mail program
functionality, is an issue in user file privacy, and is needed for system
administration in knowing when a user last accessed files.) However,
read-only remounting is not a readily achievable goal...

AIX will happily remount a JFS file system R/O on its native host, via the
AIX command: 'mount -r /FSname /MountPoint'. And thereafter you can
traverse the remounted file system via AIX commands and get at all the data.
However, because the file system is still mounted read-write, via its
primary mount, all file access to the read-only version *still* results in
file access times being updated! Thus, regardless of the file system being
read-only, what AIX is going after at a low level is still the same
read-writeable data, and so it updates the inodes accordingly.

The same situation applies if you try to get around this via NFS
remounting... Let's say you export the file system to its local host
(trivial case of NFS) and then remount it via NFS as 'mount -r
ThisHost:/FSname /MountPoint'. All file accesses will still result in inode
updates. Let's say you export the file system to some other host and then
remount the file system there in read-only mode as 'mount -r
SrvrHost:/FSname /MountPoint'. In this case you would hardly expect inode
updates to occur, and yet they do, because again all accesses go back to the
original host where the file system is mounted read-write.

I additionally pursued two more ideas: making the mount point permissions
purely 'r'; and employing the -ro option on 'exportfs'. Neither helps.

ADSM does not like file systems which are remounted without NFS: attempting
to perform incremental backup on a remounted file system fails with error
message "ANS4071E Invalid domain name entered: '/MountPoint'". No
combination of VIRTUALMountpoint and/or DOMain definitions within ADSM will
get by this. You can get ADSM to back up the file system by using NFS
remounting, but the inodes will get updated, because the file system is
mounted read-write on its native system.

For completeness, note that an HSM-managed file system cannot be remounted
without NFS: attempting to perform 'mount -r /FSname /MountPoint' results in
error "mount: /FSname on /MountPoint: Invalid argument". NFS remounting
will work.

In summary: no method of remounting a file system read-only can achieve the
goal of accessing data without file access times being updated. Unless
future AIX releases provide some means of achieving this, the only current
means is for the primary mount to be read-only, which is infeasible in most
cases due to the need for data availability. As of this writing the only
avenue which I see left is to write a backup application utilizing the ADSM
API which will traverse a file system, query the ADSM database for each
file, feed the file's data to the server for backup, and then reset the file
access time via the Unix utime() function (which results in the
administrative ctime value being updated, which is fine for non-system
files). A more extreme measure would be to pursue a device driver which
would essentially duplicate file system access, reading the raw device.
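
The access-time reset idea mentioned above can be sketched at the shell
level; the equivalent of the utime() call is done here with GNU 'stat' and
'touch' syntax (the file path is just for demonstration):

```shell
# Read a file, then restore its access time so the read leaves no trace.
f=/tmp/atime_demo.txt
echo "sample data" > "$f"
before=$(stat -c %X "$f")      # remember the atime (seconds since epoch)
cat "$f" > /dev/null           # the "backup" read, which may update atime
touch -a -d "@$before" "$f"    # put the original atime back
stat -c %X "$f"                # equal to $before again
```

A backup application built on the API would do the same thing via utime()
after feeding each file's data to the server.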

ON MIXING FILE SYSTEM AND OPERATING SYSTEM ARCHITECTURES:

File system data archived or backed up with one client architecture
(e.g. Windows) usually cannot be retrieved with another client (e.g. Unix)
because their file systems have different layouts: they are incompatible.

You cannot do a DSMSERV RESTORE using an ADSM database backup tape created
on a machine with a different architecture: in such cases you must perform
an Export-Import.

TXNBYTELIMIT VS. TXNGROUPMAX AND OPERATING SYSTEM BUFFERS:

          CLIENT                             SERVER

      TXNBytelimit                       Server buffer
                                       arbitrary buf size
   ----------------------            ---------------------
   |                    |            |                   |
   ----------------------            ---------------------
             ^                                 ^
              \                               /
               \                             /
                v                           v
        Op. sys comm buf             Op. sys comm buf
       ------------------           ------------------
       |                | < - - - > |                |
       ------------------           ------------------

When the client and server first intercommunicate, they exchange and agree
upon various settings. Among them, the client learns the TXNGroupmax value
of the server and will observe that when sending data to the server: if
either the number of files accumulated to transmit to the server exceeds the
TXNGroupmax value, or the size of the data in KB exceeds the TXNBytelimit
value, the accumulated transaction is committed. That is, though TXNGroupmax is a
server option, the client knows of and operates according to it.
Note that TXNBytelimit implies that the client creates a holding area of the
specified size, which is independent of the operating system communications
buffer size, and will typically be much larger than it. When it comes time
for the client to send the data or receive data, the size disparity will
typically make for much shoveling to get the full contents of the client
holding area sent to the server, or received from it.
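
The commit decision described above can be sketched as follows; the option
values and file sizes are arbitrary, chosen only to show when a transaction
boundary is reached:

```shell
# Simulate grouping files into transactions: end the transaction when
# either the file count reaches TXNGroupmax or the KB total reaches
# TXNBytelimit (made-up values).
txngroupmax=4        # files per transaction (server option)
txnbytelimit=100     # KB per transaction (client option)
{
    files=0; kb=0
    for size_kb in 30 30 30 30 30 10; do
        files=$((files + 1)); kb=$((kb + size_kb))
        if [ "$files" -ge "$txngroupmax" ] || [ "$kb" -ge "$txnbytelimit" ]
        then
            echo "commit: $files files, ${kb}KB"
            files=0; kb=0
        fi
    done
    [ "$files" -gt 0 ] && echo "commit: $files files, ${kb}KB"
} | tee /tmp/txn_demo.out
```

With these numbers, the fourth file trips both thresholds at once, and the
leftover two files are committed at end of run.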

ON CONTEMPORARY BACKUP/RESTORAL:

The size of modern disks and their contained file systems can be
characterized as "huge". That size alone means that traditional backup and
restoral mechanisms are problematic in whole-disk recovery, partly because
of speed and largely because of the sheer volume of data that would be lost
in recovering since the most recent backup.

Guarding against disk disaster these days calls for some flavor of
mirroring, which is a form of continuous backup. With it, recovery from a
failed disk can be immediate, and in many cases transparent, with no loss of
data. The commodity pricing of today's disks makes mirroring very
practicable.

Traditional backup/restoral these days serves as a safeguard. It
principally allows for recovering pieces...files or directories
inadvertently deleted. It also serves as the only means of recovering from
insidiously corrupted data, which mirroring unknowingly propagates, and
which might not be noticed for some time. The multi-version, long-term
nature of backups allows recovering a good version of such files.

ANALYZING DISK PROBLEMS:

In TSM processing, particularly with disk storage pools, you may encounter
disk problems. Here we explore approaches to dealing with the situation.

The first thing to appreciate is that reacting to the disk problems from the
application (TSM) level is the wrong first course of action. Taking action
at the TSM level without first determining what the problem actually is can
result in inappropriate actions, wasted time, and lost data. For example,
consider a disk having intermittent electronics problems: If you react by
performing an AUDit Volume, it may unwittingly deem files to be bad when in
fact they are entirely viable on the drive. You need to approach disk
problems from the operating system level, where there is more substantive
information and diagnostics to analyze the problem.
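
As a first pass at the operating system level, examine the system's error
reporting before acting in TSM. On AIX that means the 'errpt' command; the
generic log scan below is just an illustrative stand-in, using a fabricated
log file:

```shell
# Scan a (sample) system log for disk-related error indications.
log=/tmp/sample_syslog
printf '%s\n' \
    "kernel: sd 0:0:0:0: [sda] Medium Error" \
    "kernel: eth0: link up" \
    "kernel: I/O error, dev sda, sector 12345" > "$log"
grep -Eic 'medium error|i/o error' "$log"    # count of suspect lines
```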

In the general case, consider the following elements that are involved in
access to the disk, from the computer outward, and what can go wrong with
them:

- The computer's motherboard I/O planar, or bus slot.
Dirt or oxidation can make for a bad connection. Or the electronics
serving the I/O bus may be defective. Or there can be an electrically
noisy card plugged into the adjoining slot.

- The disk adapter (e.g., SCSI card) plugged into the bus slot.
In some instances, the card may not be properly seated in the slot. In
rare instances, the adapter card may fail.

- The disk and adapter card device drivers.
These don't "break" or "wear out"; but a mismatch can occur. If you
install a new disk or adapter type and don't install the corresponding
device driver for the operating system to effectively communicate with
and through the devices, you have problems. Or perhaps unbeknownst to
you, your operating system people apply system maintenance or advance to
a new OS release, which installs new device drivers, and suddenly you
have problems. Or it could be that the same device is defined as being
accessed by multiple device drivers: you run happily for weeks, and then
some other facility goes to access the device, which causes the other
device driver to be loaded which supplants yours.

- The cable and cable connectors connecting the disk adapter card to the
disk.
This is a classic problem area. Often, computer room personnel connect
SCSI cables and don't bother to secure the connection with provided
screws or clips, and so over time it is easy for the cable to work
loose, particularly as the cables hanging from the back of a computer
system are jostled by people working behind the systems. Bent or broken
pins inside the connectors are not unknown. In the case of SCSI, people
may unknowingly make the chain too long and it suffers degraded signal
quality; or they fail to terminate the chain or use the wrong type of
terminator. (SCSI is a *very* confusing black art which makes SSA and
Fibre Channel all the more attractive.)

- The electronics collocated with the disk drive which interface it to the
cable connection and govern the actions of the disk drive.
Disk drives always have attached to them a printed circuit card
containing interface and driver electronics, plus power and signal
connectors. The electronics sometimes fail. With spare disks on hand,
you could replace the electronics portion of the drive, typically held
in place by screws.

- The power supply in the disk enclosure.
Every disk assembly contains a power supply, to convert power line
alternating current to low direct current voltages suitable for
electronics and small motors (typically, DC 5V and 12V). If you're
lucky, the power supply will fail outright: that's obvious and done
with. Worse is when the power supply degrades, resulting in power of
bad levels or bad quality getting to the disk drive motor and
electronics: that can cause erratic operation of the drive mechanicals,
poor recording of data, electrical damage to electronics, and bad signal
quality going out over the connection to the computer (and to any other
devices on that chain of devices).

- The cooling fan in the disk enclosure.
Often overlooked because it seems so trivial, failure of the lowly fan
can spell disaster for the disk drive, as overheating can cause rapid
deterioration of the electronics in the cabinet. Fortunately, it is
usually easy to detect when a fan is operating or not from outside the
enclosure.

- Within the disk drive, the drive motor, the disk arm, read/write heads at
the end of the arm, and the oxide-coated platter surfaces of the spinning
disk.
Disk drives which have seen a long life often experience their bearings
wearing out or lubricants drying out. Worn bearings can cause vibration
or wobble which makes for bad track alignment and difficulty reading
previously-written data. When lubricants dry out, the disk arm may
experience difficulty moving, and the spindle of a disk that is turned
off for some period of time may fail to spin-up when turned on. Head-
disk assemblies (HDAs) are hermetically sealed, so should never
experience problems resulting from dirty computer room air. But platter
surfaces can be ruined if the disk heads, which typically fly over their
surfaces at very high speed, come into contact with the platter
surfaces. How can this happen? Consider how many computer systems (with
internal drives) and disk cabinets are placed on desks or tables that
get bumped or jarred. Consider uneven computer room raised floor tiles
that serve as teeter-totters when people walk by adjoining equipment.
Consider carts being rolled through computer rooms and accidentally
bumping equipment, or the custodian with a vacuum cleaner. Disk drives
have relatively high G ratings, but don't push your luck.

- Dust
Sounds like a joke, right? Dust as a component of disk drives? Yes,
because it's unavoidable and pervasive. Consider that almost no computer
rooms are sealed environments: people open doors to walk in and out,
stuff is rolled through, cardboard boxes are routinely opened in
computer rooms, plumbers take down ceiling tiles to work on overhead
pipes and drill holes, etc. Take a look inside any computer or disk
drive and you'll find a disturbing amount of dust covering components
and blocking air flow. Get enough of it blanketing heat-sensitive
electronics components and you get overheating that leads to reduced
life. (Tests I have conducted reveal that ordinary dust is not
conductive, so it should not be the cause of short circuits.) The dust
problem is aggravated by equipment routinely designed to pull air across
innards with no filter of incoming air. Have your computer room people
take advantage of long downtimes to vacuum the inside of cabinets. A set
of small vacuuming tools for use with ordinary vacuums can be obtained
for about $15, and is well worth having.

Note that commercial data processing practice mandates having spares for all
disk drives in the shop, so as to minimize downtimes. Realize that it can
take hours or days to get a replacement drive from your hardware service
people or your hardware supplier. You need to have a spare ready to either
wholly take the place of a failed drive, or provide parts for repair of the
failed drive. At that point you order a new spare, when you can afford to
wait for that one.

TSM TUNING CONSIDERATIONS

Vantage point is everything when you consider performance tuning.

A common pitfall in approaching the tuning of any operating system subsystem
like TSM is looking at it only from the viewpoint of the subsystem.
In modern operating systems, applications run in a virtual environment: the
storage they use is virtual, what they perceive as contiguous run time is
actually broken up in sharing the real processor with other system
processes, etc. So if you stand inside TSM and look at things, what you are
actually seeing is "virtuality", not reality. A conspicuous case in point
is the TSM database Cache Hit Pct. value, reflecting database lookups being
satisfied from the database buffer pool (memory) rather than having to be
read from the database, with that attendant delay. For optimum performance,
the Cache Hit Pct. value should be up near 100%, as controlled by the
BUFPoolsize server option. But you can have a Cache Hit Pct. of around
100%, and be satisfied with your achievement - and still have mediocre
performance from the database. How? When your computer has insufficient
real memory to fulfill its role as a server such that far too many memory
pages have been paged out to the paging space backing store (disk) because
they can't all fit within real memory. Your 100% caching is in virtual
memory. How much of it is in real memory and truly available for instant
reference is a function of the amount of memory in your computer versus
demands upon it from the various processes in the system. (Operating system
performance monitoring and measurement tools need to be employed.)

APARS, FIXES, AND SOFTWARE LEVELS:

If you call in a problem with a supported level of the software, will you
get a fix? Maybe. When there are two levels of software being "supported"
at one time (for example, 3.7 and 4.1), you can probably get a fix for the
newer one, but not the older one, despite both being "supported". The
typical procedure is to open APARs only against the most current release of
the client, as that is where maintenance will be applied. If a problem
exists on a 3.7 client but is not reproducible on a 4.1 client, an APAR may
be opened against the 3.7 client, but this does not necessarily mean that a
fix will be made available for the 3.7 client. Depending on the nature of
the problem and how severe an issue it is, the APAR may be closed 'fixed in
next release' and the resolution would be to apply the 4.1 level of code.
If the APAR is severe enough, a fix may be provided at earlier levels, but
this is usually not done automatically. Fixtests on previous versions are
usually only made on request, and only if the APAR is deemed serious enough,
or there are compelling reasons why the customer can not upgrade to the
current release that contains the fix. This is because pursuit of a fix
takes development time, and the vendor doesn't want to put time into an
older (yet supported) release when they could be using the time to address
higher severity issues and new client functionality. If the customer has a
compelling reason (old 3.1 server that the 4.1 client is not going to be
compatible with), that needs to be known for development to consider if the
fixtest is justified.

PRODUCT USAGE GUIDELINES:

- If you're going to be using the latest version+release level of the
product, it is essential that you have a support contract: this is the
only way that you can get fixes when there are problems, and only with a
contractual access code can you look at current APARs on the IBM web site,
which specify "Registration required".
If you "hang back" at a more stabilized version+release, a service
contract may be superfluous, and then-historic APARs can be searched
without registration.

SHOULD YOU CHOOSE TSM FOR YOUR SITE?

Competing vendors are prone to denigrating TSM by telling the prospective
customer that TSM restore time takes forever. It *can* take "forever" - but
only if you want it to... What that competing vendor didn't want to tell you
is that TSM provides a spectrum of choices for your backup/restore
configuration: you choose based upon economics and the type of recoveries
you expect to encounter. At the "low" end of the scale you can use no
collocation and write to tape as cheap as 8mm, using "incremental forever":
that provides for miscellaneous file recoveries at minimal cost. At the
high end of the spectrum you can use high-performance tape drives, collocate
by file system, perform full backups every time, and make portable backup
sets which you can restore wholly using client hardware.

How you back up is dictated by your restoral objectives. As another posting
said, in modern systems you don't want to depend upon a batch-oriented
recovery of a disk: you instead use redundant hardware, mirroring, and
snapshots to keep your e-business running with minimal outages and
transaction loss. TSM is probably best regarded as your surety copy of your
data...your guarantee of having images that you can depend upon when you
eventually discover that files have been corrupted, erased files need to be
restored, and historical investigations have to be performed. If you trace
through the TSM Technical Guide redbooks, you'll see how the product is
evolving (subfile backup et al) to meaningfully compare with that competing
vendor's offering.

Make sure that your business has firmly defined objectives when it goes to
compare storage management products: don't simply look at packages and
compare them. You want a solution which will fulfill defined needs, not a
vendor's sale objectives.

SUMMARY OPINION ON THE CURRENT STATE OF THE PRODUCT

I had a Bad Feeling when IBM took the product from the hardware people
(Adstar) and gave it over to Tivoli, in that Tivoli impressed me as just a
generic, market-to-executives type of organization. I was hoping that my
impressions would be wrong, but realities proved otherwise. Customers
suffered one defective maintenance level after another, speaking to the
utilization of outmoded development and testing techniques, and some
combination of lack of interest by IBM as a corporation and lack of
technological leadership on the part of Tivoli management. Such conditions
result in stressful conditions for the staff, and a high probability that
the best people will leave for better situations. Many customers stuck with
(older) releases they knew worked, rather than spending scarce time hunting
for a newer release whose defects were few and minor enough to let it work
in their shop. In a nutshell, the product was no longer in the hands of
technologists, and the product and its customers were suffering.

In 2003, IBM folded things back into the main company, and quality improved.
The product continues to have defects, but not fiascos.

ADVICE TO NEW ADMINISTRATORS

Learn about and understand the systems for which you are responsible:
It cannot be stressed strongly enough that as part of implementing any
major system, it is absolutely essential that we become familiar with
it, which means having read the manuals and having become familiar with
both the elements of the system and where to look up information,
particularly problem handling information.

Administer the system:
Incredibly, on the ADSM-L mailing list we repeatedly find that customers
are writing in because their systems have failed...because the systems
have not received proper levels of administration, which is to say,
caretaking. The most vital element of administration is monitoring,
particularly for resource shortages. You need to watch TSM Database and
Recovery Log utilization, scratch tape availability, tape drive
availability, and the like. It is immensely frustrating to see a
customer write in with a downed TSM server, caused by a full Recovery
Log...whose usage level had been increasing over time, where the
administrator had ample time over a period of weeks to increase space or
change schedules to make more reasonable use of the space.

Keep records:
When changing any significant system file, always make a copy of it
first. (I created a 'bkupfile' command, which does a 'cp -p' to make an
image of the file, appending a .YYYYMMDD datestamp to indicate the date
of change; and that command is religiously used at our site.) Leaving
tracks like this is invaluable in both pointing out when changes have
been made, and providing something to revert to.
In some way, preserve the contents of your Activity Log to an age which
encompasses the re-use of your oldest tape. Only your Activity Log will
give you a clear picture of tape usage over time, and that is invaluable
when trying to find out what was used, when - particularly in recovery
situations.
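
The 'bkupfile' command itself is not shown in this document; a minimal
sketch of the same idea (copy preserving attributes, datestamp appended)
might be:

```shell
# Make a dated safety copy of a file before changing it.
bkupfile() {
    cp -p "$1" "$1.$(date +%Y%m%d)"
}

echo "COMMmethod TCPIP" > /tmp/dsmserv.opt    # stand-in for a real config file
bkupfile /tmp/dsmserv.opt
ls /tmp/dsmserv.opt.*                         # shows the dated copy
```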

Investigate early indications of problems:
So often we see ADSM-L postings where an administrator talks of having
seen indications of a problem weeks earlier, but took no action - and
ultimately the problem grew into a calamity. Don't let problem
indications go ignored: nature is trying to tell you something.
Better yet: look for problems before they smack you on the head. Most
people don't look in dsmerror.log unless some conspicuous failure causes
them to do so; and yet that log may contain valuable information
relating to missing files, network delays, and media issues. Likewise,
review your OS logs for problems (e.g., AIX Error Log).

Plan your TSM configuration for restorals, more so than backups:
Many novice administrators, given the task of setting up TSM "in a
vacuum", configure it for the best way to do backups. Wrong! The whole
point of the product is the rapid recovery of data. If you optimize
for backups, that will most likely result in aggravated restoral times.
For example, the novice admin will not use collocation, thus making the
fullest use of all tapes. Sure, it does that; but in the restoral of a
given node:filespace:directory, the restoral process will have to find
its data over all the tape space occupied by other nodes and filespaces.
This is not to say that recovery speed should be the only factor in
designing your system: going wholly in that direction may create a
situation where backups are prolonged, or call for more drives or tapes
than can reasonably be made available. Your design thus has to be the
best compromise which affordably optimizes restoral time.

Never take defaults:
Good administration *never* allows defaults, partly because it means no
statement of intent per codings in the config files, and partly because
you are then at the mercy of the vendor's next arbitrary change.

Avoid mixing long and short retention data on the same serial storage media:
This is another configuration design issue. You may have multiple
filespaces mingle on the same serial volumes (tapes). If the files
thereon have wildly different retention values, the volumes will
prematurely end up with a lot of "holes" where the short-period data has
expired, which in turn elevates the amount of reclamation you have to
do - which is mechanically bad for tape drives, and interferes with
other schedules.

Do not stay at base version+release level when updates are available:
Remarkably, on ADSM-L we find many customers installing the base
version+release level (e.g., 5.1.0.0) and never going beyond it - and
then writing in about problems they are having. The base level is just
that: a starting point. Maintenance levels then begin appearing, fixing
various problems and making adjustments. If you haven't applied recent
maintenance, you are in a poor position to get problems resolved. And
don't take an axiomatic stance that you can't upgrade a server without
upgrading all other clients and agents in your environment at the same
time: TSM components can often be a whole version apart and still
interoperate without problems (the manuals often specify IBM-supported
version/release mixes). General advice: never implement an environment
grounded upon base-level software. Wait a bit for a new version to
settle down and have a mature code base before adopting it.

If you have a major media library, monitor it:
If you have a substantial library, such as a 3494, upon which the
organization depends, it only makes sense to have something actively
monitoring it to assure its operational state and send alerts in a
timely manner when it has a problem. Waiting for TSM to detect a
problem can mean finding out long after the problem started and getting
indirect (and potentially obscure) indications from TSM that take costly
time to track down to the actual library cause.
You can base a monitoring facility on something as simple as a periodic
looping invocation of the 'mtlib' command, for example. (Write to me
directly for a C program example.)
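
A minimal shape for such a watchdog, with the status command parameterized
so that it could be 'mtlib -l /dev/lmcp0 -qL' (device name assumed) or
anything else; the alert action here just prints, where a real one would
page or mail:

```shell
# Run a status command quietly; emit an alert line if it fails.
check_library() {
    if ! "$@" >/dev/null 2>&1; then
        echo "ALERT: library status check failed: $*"
    fi
}

check_library true     # healthy check: prints nothing
check_library false    # failing check: prints the alert line
# In production, loop this or run it from cron, e.g.:
#   while sleep 300; do check_library mtlib -l /dev/lmcp0 -qL; done
```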

Treat problems by first finding out what the problem is!
Amazingly, some customer technicians immediately launch into arbitrary
remedies before finding out what has caused a problem. For example, the
technician finds 3590 tape volumes going Unavailable and embarks upon an
AUDit LIBRary. Wrong! 3494 libraries maintain their own consistency and
do not require such audits. Hardware problems and human meddling
(removal of tapes from libraries) cause tapes to become Unavailable.
Launching into a software-based "remedy" in the presence of a hardware
problem is at least a waste of time, and can even make things worse.
Physicians don't launch into treatments before diagnosis, and neither
should you.

Always consider your server & clients environment as a whole:
Server and clients must be able to interoperate as a wholly compatible
amalgam. When contemplating a server upgrade, it is absolutely
essential to fully consider the levels of your clients and plan to also
upgrade clients which would not operate with the new server level if
left at their (old) client level. As obvious as this advice may sound,
we still see postings from sites who have upgraded their TSM server
without considering the clients, and wonder why some of their clients no
longer function, and what those strange server messages mean. Remember
that vendors can guarantee (per their testing) only certain, limited
combinations of server and client levels: make the disparity too great
and the results are unpredictable.

When investigating session errors, look at both ends of the session:


The ADSM-L archives are rife with postings about session problems where
the site technician looked at indications on just the client end of the
attempted session. Obviously, if the TSM server cancelled the session,
it will have logged some information as to why. (The reason for session
problems is almost always found at the TSM server.) Examine indications
at both ends of the session to get the whole picture. Post session
questions to ADSM-L only after investigating the whole, and not finding
the answer.
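If you script such checks, you can gather both halves of the story in one pass: server activity-log entries via the dsmadmc administrative client, and the tail of the client's dsmerror.log. A hedged sketch (the admin ID, password handling, and log path are illustrative assumptions):

```python
"""Sketch: collect both ends of a failed session -- matching server
activity-log messages (via the dsmadmc administrative command-line
client) and the tail of the client's dsmerror.log.  Credentials and
the error-log location are illustrative assumptions."""
import subprocess

def actlog_command(search, admin_id, password):
    """Build the dsmadmc invocation that pulls matching activity-log entries."""
    return ["dsmadmc", "-id=%s" % admin_id, "-password=%s" % password,
            "-dataonly=yes", "query actlog search=%s" % search]

def server_side(search, admin_id="admin", password="secret"):
    """Server half of the picture: activity-log messages mentioning 'search'."""
    cmd = actlog_command(search, admin_id, password)
    return subprocess.run(cmd, capture_output=True, text=True).stdout

def client_side(error_log="dsmerror.log", lines=50):
    """Client half of the picture: the last few dsmerror.log lines."""
    try:
        with open(error_log) as f:
            return f.readlines()[-lines:]
    except OSError:
        return []
```

For a node named JUPITER, server_side("JUPITER") plus client_side() side by side will usually make the cause of a dropped session obvious.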

Woe unto those who run anti-virus software on file systems being backed up:
There have been innumerable problems created by anti-virus software, for
any backup product, when run at the same time on a file system which is
undergoing a backup. Performance and functionality suffer.

Don't let end users dictate technology choices:


Within an organization, end users tend to be notorious for dictating how
a system should be implemented, based upon their limited and usually
obsolete knowledge of data processing technology. End user departments
are responsible for conveying business needs, not defining technological
choices. The IT department is supposed to be fully up on technology,
and should be determining best implementation. For example, some end
user may come up with a backup scheme for daily, weekly, and monthly
backups - obviously based upon how they saw some old backup package work
in a past job. Their scheme probably has nothing to do with business
needs; and if followed would subvert the capabilities of the expensive
TSM package which the company adopted to more intelligently secure
business data.

Don't use all the drives for administrative purposes:


There are occasions when you as the TSM administrator want to do a bunch
of stuff, like backing up storage pools and reclaiming tapes, and so are
tempted to use every last tape drive to do so. Don't. In most
environments, client requests may come in at any time. Your occupancy
of all the drives will at a minimum delay sessions. It may also cause
your administrative processes to be preempted...and if you had walked
away fully expecting reclamations to create enough scratch tapes for
coming schedules, you may be rudely surprised later.

Problem analysis advice:


A few points in this area:
- On the mailing list we too often see postings about performance
problems start with "We have two identical systems...". Never believe
that any two systems are identical. Such a thing is physically
impossible, and if believed, will inhibit productive analysis of
system performance issues. Differences are healthy, and afford the
opportunity to compare the effects of differing ingredients.
- As an extension of the above, do not embark upon analysis of any
system with preconceived notions of how things are happening in that
system. Do that and you will blind yourself to actual system workings.
Put yourself into a mindset of seeking to learn what is happening, and
your analysis will be far more effective.

Don't implicitly trust vendors:


We all want to be trusting; but in matters involving money, trust needs
to be measured. While vendors are generally scrupulous, there are those
whose mindsets cause them to habitually resort to dirty tricks
(Microsoft being a notorious example), and cases where lapses occur, as
when a new executive appears. In particular, be sure that your
purchasing/legal people pore over contractual details. A prime example
of vendor games was HP in late 2002, where one of their executives came
up with the "creative business practice" of decreeing that warranties
would begin on the purchase order date - not on the date of transfer of
title, as provided by law, which may actually occur weeks or months
later.

Expect warts:
Don't expect any new release of any software to be perfect. Every
release of something has some warts. For example, upgrade to TSM 5.2.4
to solve some problems and get the minor annoyance of Query PRocess
having misaligned text for Space Reclamation processes. The watchword
in life: "It's always something."

REFERENCES AND OTHER RESOURCES:

Tivoli: http://www.ibm.com/software/tivoli/
Contacting Tivoli (TSM publications feedback):
http://www.ibm.com/software/tivoli/contact.html
Glossary: http://publib.boulder.ibm.com/tividd/glossary/termsmst04.htm
Search: http://www.ibm.com/software/sysmgmt/products/support/
Software Support downtime web page notice:
http://www.ibm.com/software/support/outages.html

Tivoli product inventory:


http://www.ibm.com/software/tivoli/products/product-matrix.html

Tivoli-specific web feedback mail address: Tivoli_eSupport_Feedback@us.ibm.com

TSM products (including client and server requirements):


http://www.ibm.com/software/tivoli/products/storage-mgr/product-links.html

TSM features list and platforms on which they are available:


http://www.ibm.com/software/tivoli/products/storage-mgr/
product-features.html

Tivoli Storage Manager:


http://www.ibm.com/software/tivoli/products/storage-mgr/
http://www.ibm.com/software/tivoli/products/storage-mgr/product-links.html
Products links:
http://www.ibm.com/software/tivoli/products/storage-mgr/product-links.html
http://www.ibm.com/software/tivoli/solutions/storage/products.html
Datasheet:
ftp://ftp.software.ibm.com/software/tivoli/datasheets/ds-tsm.pdf
Introductory stuff:
Flash-based animation ("video") overview of the product:
http://www.ibm.com/software/tivoli/library/demos/storage-mgr.html
"Tivoli Field Guide - A Brief Introduction to IBM Tivoli Storage Manager"
http://www.ibm.com/support/docview.wss?uid=swg27004975
"TSM Policies Demystified" IBM site TechNote 1052632
Supported platforms and requirements:
http://www.ibm.com/software/tivoli/products/storage-mgr/platforms.html
End-of-currency, end-of-service (product withdrawal; end of support; EOS)
dates (End of Support Matrix web page):
http://www.ibm.com/software/sysmgmt/products/support/eos.html
IBM Software Support Lifecycle:
http://www.ibm.com/software/info/supportlifecycle/
Support Technical Exchanges (STE):
http://www.ibm.com/software/sysmgmt/products/support/supp_tech_exch.html
TSM for Space Management (HSM):
http://www.ibm.com/software/tivoli/products/storage-mgr-space/
Tivoli Maintenance and Release Strategy (VRML):
http://www.ibm.com/software/sysmgmt/products/support/
Tivoli_Software_Maintenance_and_Release_Strategy.html
TSM manuals (as of 2003/02/01):
The manuals are available for download as PDFs, or online reference as
HTML. (The manuals are not provided for download as HTML bundles: that
is available only on the BOOK CD.)
In general:
1) Go to http://www.ibm.com/software/tivoli
2) On the left-hand side, click "Library".
3) On the left-hand side, click "Product manuals", which goes to:
http://publib.boulder.ibm.com/tividd/td/tdprodlist.html
4) From there, select the appropriate manual.
http://publib.boulder.ibm.com/tividd/td/tdprodlist.html
Tivoli Technical Product Documents by Marketing Category:
http://publib.boulder.ibm.com/tividd/td/tdmktlist.html
Using the API:
http://publib.boulder.ibm.com/tividd/td/TSMC/GC32-0793-00/en_US/PDF/
GC32-0793-00.pdf
Installing the Clients:
http://publib.boulder.ibm.com/tividd/td/TSMC/SH26-4119-02/en_US/PDF/
SH26-4119-02.pdf
Messages:
http://publib.boulder.ibm.com/tividd/td/StorageManagerMessages5.1.html
Client-Server Requirements, Supported Devices:
http://www.ibm.com/software/sysmgmt/products/support/
IBMTivoliStorageManager.html
Supported devices:
http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_AIXHPSUNWIN.html
TSM latest version-release READMEs:
http://www.ibm.com/software/sysmgmt/products/support/
IBMTivoliStorageManagerVersionRelease.html
TSM 5.2 Announcement: http://www.ibmlink.ibm.com/usalets&parms=H_203-095
TSM 5.2 Features:
http://www.ibm.com/software/tivoli/products/storage-mgr/
enhancements-v5.2.html
TSM 5.2 Kernel, Addressing and Filesets During Installation:
http://www.ibm.com/support/docview.wss?uid=swg21154486
TSM 5.3 features:
http://www.ibm.com/at/events/tsm/pdf/Poschke-TSM-V5_3-Update.pdf
Version-Release Documents web page:
http://www.ibm.com/software/sysmgmt/products/support/
IBMTivoliStorageManagerVersionRelease.html
Whitepapers:
http://www.ibm.com/software/tivoli/library/whitepapers/
"Beyond backup toward storage management"
(gives a good overall view of TSM's strategy for backups)
ftp://ftp.software.ibm.com/software/tivoli/whitepapers/wp-beyond-backup.pdf
http://www.research.ibm.com/journal/sj/422/kaczmarski.pdf
"Internet Protocol storage area networks":
http://www.research.ibm.com/journal/sj/422/sarkar.pdf
"Tivoli Storage Manager - Using the Archive Function"
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100254
ITSM - determining why data is not sent LAN-free:
http://www.ibm.com/support/entdocview.wss?uid=swg21155327
Files not restorable in a LAN-Free environment:
http://www.ibm.com/support/docview.wss?uid=swg21067224
Quick Steps to setup an ITSM 5.2 LAN-Free Agent:
http://www.ibm.com/support/docview.wss?uid=swg21150209
Tivoli Data Protection (TDP) for Applications and Databases:
http://www.ibm.com/software/tivoli/products/storage-mgr-db/
For databases:
http://www.tivoli.com/products/index/storage_mgr/addbase.html
For mail:
http://www.ibm.com/software/tivoli/products/storage-mgr-mail/
Requirements:
http://www.tivoli.com/support/storage_mgr/addbase.htm
MS SQL:
ftp://ftp.software.ibm.com/storage/tivoli-storage-management/
maintenance/tivoli-data-protection/ntsql/v515/nt/
TSM for Mail:
http://www.ibm.com/software/sysmgmt/products/support/
IBMTivoliStorageManagerforMail.html
Server/Client software:
ftp://service.boulder.ibm.com/storage/tivoli-storage-management
/maintenance/
(ftp.software.ibm.com is another, less reliable site)
Customer Support Handbook:
http://www.tivoli.com/support/handbook/
Storage Area Network (SAN):
http://www.tivoli.com/support/storage_mgr/san/overview.html
TSM Managed System for SAN Storage Agent User's Guide:
http://publibfp.boulder.ibm.com/epubs/pdf/c2346930.pdf
Redbooks and Redpieces of note (at www.redbooks.ibm.com):
"Getting Started with Tivoli Storage Manager: Implementation Guide"
has been renamed "Tivoli Storage Manager Implementation Guide"
(SG24-5416) (Includes performance tuning info)
http://www.redbooks.ibm.com/abstracts/sg245416.html
http://www.redbooks.ibm.com/redbooks/SG245416.html
http://www.redbooks.ibm.com/pubs/pdfs/redbooks/sg245416.pdf
http://www.redbooks.ibm.com/redpieces/abstracts/sg245416.html
http://www.redbooks.ibm.com/redpieces/pdfs/sg245416.pdf
"Tivoli Storage Management Concepts" (SG24-4877)
http://www.redbooks.ibm.com/abstracts/sg244877.html
http://www.redbooks.ibm.com/redbooks/SG244877.html
http://www.redbooks.ibm.com/pubs/pdfs/redbooks/sg244877.pdf
"Tivoli Storage Management Reporting"
http://www.redbooks.ibm.com/abstracts/sg246109.html
"ADSM Version 3 Technical Guide"
http://www.redbooks.ibm.com/abstracts/sg242236.html
http://www.redbooks.ibm.com/redbooks/SG242236.html
http://www.redbooks.ibm.com/pubs/pdfs/redbooks/sg242236.pdf
"Tivoli Storage Manager Version 3.7.3 & 4.1 Technical Guide"
http://www.redbooks.ibm.com/abstracts/sg246110.html
http://www.redbooks.ibm.com/redbooks/SG246110.html
http://www.redbooks.ibm.com/pubs/pdfs/redbooks/sg246110.pdf
"Tivoli Storage Manager Version 4.2 Technical Guide"
(also delves into 4.1 features)
http://www.redbooks.ibm.com/abstracts/sg246277.html
http://www.redbooks.ibm.com/redbooks/SG246277.html
http://www.redbooks.ibm.com/pubs/pdfs/redbooks/sg246277.pdf
"Tivoli Storage Manager Version 5.1: Technical Guide" (SG24-6554)
http://www.redbooks.ibm.com/abstracts/sg246554.html
http://www.redbooks.ibm.com/redbooks/SG246554.html
http://www.redbooks.ibm.com/pubs/pdfs/redbooks/sg246554.pdf
"IBM Tivoli Storage Area Network Manager: A Practical Introduction"
http://www.redbooks.ibm.com/redpieces/abstracts/sg246848.html
http://www.redbooks.ibm.com/redpieces/pdfs/sg246848.pdf
"Backing Up DB2 Using Tivoli Storage Manager"
http://www.redbooks.ibm.com/abstracts/sg246247.html
http://www.redbooks.ibm.com/redbooks/SG246247.html
http://www.redbooks.ibm.com/redbooks/pdfs/sg246247.pdf
TSM Performance and Tuning:
TSM Performance Tuning Guide (SC32-9101-01):
http://publib.boulder.ibm.com/tividd/td/TSMM/SC32-9101-01/en_US/HTML/
SC32-9101-01.htm
Older Tuning Guide:
http://www.ibm.com/software/tivoli/library/technicalbriefs/
ftp://ftp.software.ibm.com/software/tivoli/technical-brief/
tsm-tuning.pdf
TSM Technical Exchange: Performance Diagnosis (Dave Daun):
http://www.ibm.com/support/entdocview.wss?uid=swg21145012
SHARE:
http://www.share.org/proceedings/sh97/share.htm Session 5722
Diagnosing Performance Bottlenecks in TSM:
http://www.share.org/proceedings/sh98/data/S5723.PDF
IBM articles:
"How to determine when disk tuning is needed for your ITSM server"
(includes Backup DB and Expire Inventory performance expectations)
http://www.ibm.com/support/docview.wss?uid=swg21141810
Version Release Information (APARs, ReadMes, downloads, Technotes, doc):
http://www.ibm.com/software/sysmgmt/products/support/
IBMTivoliStorageManagerVersionRelease.html

TSM 3.7:
http://www.redbooks.ibm.com/redpieces/abstracts/sg245477.html
Manuals (clients, messages, but not server manuals):
http://ezbackup.cornell.edu/techsup-v3.7/ibmdocs/index.html
TSM 4.1:
http://www.tivoli.com/products/documents/updates/
storage_mgr_enhancements.html#4.1
Manuals (clients, messages, TDPs, but no server manuals):
http://ezbackup.cornell.edu/techsup-v4.1/ibmdocs/
TSM 5.3:
http://publib.boulder.ibm.com/infocenter/tivihelp/index.jsp

FTP sites: ftp.software.ibm.com (current)
           ftp.storsys.ibm.com (old)

TSM client and server software, fixes:


ftp://service.boulder.ibm.com/storage/tivoli-storage-management
/maintenance/
(Note: ftp.software.ibm.com is another, but less reliable site.)
Client-server compatibility (relative levels):
TSM: http://www.tivoli.com/support/storage_mgr/compatibility.html
Also in Backup/Archive Clients manual, chapter 1, under "Additional
Migration Information"
ADSM: http://www.tivoli.com/support/storage_mgt/adsm/adsercli.htm
Note the statement in appendix A.2 of the "TSM Version 3.7.3 & 4.1:
Technical Guide" redbook:
All version 3.1 clients can be used together with Tivoli Storage
Manager V4.1 servers. In this case, Version 3.7 and 4.1 client
function is not available.
ADSM manuals, version 3:
http://www.tivoli.com/support/storage_mgt/adsm/pubs/admanual.htm
http://www.tivoli.com/products/index/storage_mgt/adsm/pubs/admanual.htm
http://books.adsm.org
http://ezbackup.cornell.edu/techsup-v3.1/ibmdocs/index.html
ADSM manuals, version 2:
ftp://index.storsys.ibm.com/adsm/pubs/version2/ clients,servers
http://ezbackup.cornell.edu/techsup-v2/ibmdocs/index.html
ADSM-TSM history (Mike Kaczmarski article for Computer Technology Review):
www.plcs.nl/upload/files/nieuws/TSM%2010%20Years.pdf
www-1.ibm.com/industries/cpe/download9/19719/TSM_10_Years_CTR_Reprint.pdf
www.keyinfo.com/downloads/TSMoverview.pdf
IBM Systems Journal: http://www.research.ibm.com/journal/
http://www.leeds.ac.uk/ucs/systems/archive.html
Leeds File Archive System
Tape and Optical Storage technology publications (3490, 3494, 3590, etc.):
http://www.storage.ibm.com/hardsoft/tape/pubs/prodpubs.html
"Is it Tape *and* Disk or Tape *versus* Disk?"
http://wwpi.com/CTR_Current/June04_2.asp
Tape Technology Council: http://www.tapecouncil.org/
Barcode information: www.tharo.com web site has referenceable info
3466 Network Storage Manager:
http://www.storage.ibm.com/nsm
EC Levels and Corresponding PTF Information: see PTF II09953
3480,3490,3590 vendor:
http://www.tapedrives-3480to3590.com/ (Comco)
http://www.online-magstar-tape.com/
3494 product info (an info sheet, not a manual):
http://www.storage.ibm.com/hardsoft/tape/3494/prod_data/g225-6601.html
3494 Tape Library Dataserver bookshelf (view online):
http://www.s390.ibm.com/os390/bkserv/hw/44_srch.html
http://www.s390.ibm.com/bookmgr-cgi/bookmgr.cmd/Shelves/A06BK013
3494 manuals, downloadable (Operator Guide et al):
http://www.storage.ibm.com/hardsoft/tape/pubs/pubs3494.html
3494 redbook:
"IBM Magstar Tape Products Family: A Practical Guide" (SG24-4632)
http://www.redbooks.ibm.com/abstracts/SG244632.html
http://www.redbooks.ibm.com/redbooks/SG244632.html
http://www.redbooks.ibm.com/pubs/pdfs/redbooks/sg244632.pdf
(See also the "IBM TotalStorage Tape Device Drivers" manuals, below)
3494, 3590 microcode:
Call 1-800-IBM-SERV and request the latest microcode for your device.
3494 home page:
http://ssdweb01.storage.ibm.com/hardsoft/tape/3494/index.html
3570
3570 --> All About It
http://www.gruftie.net/ibm/tl/techlib/qna/sfam/html/FC/FC4084.htm
How to load and unload tapes in a 3570
http://www.gruftie.net/ibm/tl/techlib/qna/sfam/html/BY/BY2034L.htm
3575:
Manuals:
http://www.storage.ibm.com/hardsoft/tape/pubs/pubs3575.html
Redbook: "Storage Area Networks: Tape Future in Fabrics" (SG24-5474)
Microcode: ftp://ftp.software.ibm.com/storage/357x/
3580 publications:
http://www.storage.ibm.com/hardsoft/tape/3580/index.html
3581 Ultrium Tape Autoloader:
Description:
http://www.storage.ibm.com/hardsoft/tape/3581/prod_data/g225-6851.html
Technical support:
http://ssddom02.storage.ibm.com/techsup/webnav.nsf/support/3581
Setup and Operator Guide:
http://publibfp.boulder.ibm.com/epubs/pdf/a67sg0ct.pdf
358x (LTO/Ultrium) microcode/firmware:
ftp://service.boulder.ibm.com/storage/358x/
The following site seems to be gonzo:
http://ssddom01.storage.ibm.com/techsup/swtechsup.nsf/support/
ultriumfmr_ftp
358x (LTO/Ultrium) device driver:
ftp://ftp.software.ibm.com/storage/devdrvr/
3584
http://www.storage.ibm.com/hardsoft/tape/3584/prod_data/g225-6853.html
http://ssddom02.storage.ibm.com/techsup/webnav.nsf/support/3584
Firmware web page:
http://www.ibm.com/support/docview.wss?rs=546&org=ssg&doc=S4000043&loc=enus
3590
Microcode: ftp://service.boulder.ibm.com/storage/3590/code3590
ftp://service.boulder.ibm.com/storage/3590/code3590/index.html
(beware the index.html file being out of date!!)
Publications:
http://www.storage.ibm.com/hardsoft/tape/pubs/pubs3590.html
At www.redbooks.ibm.com:
"IBM Magstar Tape Products Family: A Practical Guide" (SG24-4632)
"Magstar and IBM 3590 High Performance Tape Subsystem Technical Guide",
"Magstar and IBM 3590 High Performance Tape Subsystem: Multiplatform
Implementation", SG24-2594-02
Terabyte cartridge:
http://www.ibm.com/storage/europe/tapenews/index.html
"The IBM 1TB Tape Roadmap" terabyte cartridge presentation:
http://www.ckzeto.com.pl/pub/IBM_CKZeto.pdf
http://ww2.keylink.pios.com/mkt/IBM.nsf/
cf2be34cd3c4be5c85256a4e00630b1c/cab4320d75adddb185256bbb004bce90/
$FILE/IBM+1TB+Tape+Roadmap+presentation.pdf
ISV matrix of supporting vendors:
http://www.storage.ibm.com/tape/conntrix/pdf/3590_isv_matrix.pdf
3590E to 3590H upgrade (available to registered customers):
http://www.ibm.com/support/docview.wss?uid=swg21112812
3590 vs. 9840 tape drives: (see follow-on paper just below this one)
http://www.storage.ibm.com/hardsoft/tape/3590/prod_data/3590perform.pdf
(G522-2508)
"IBM Tivoli Storage Manager (ITSM) AIX 3494 3590 Drive Mappings":
IBM site Technote 1064661
3592
Tape cartridge brochure:
http://www.storage.ibm.com/media/tapecartridges/prod_data/g225-6987-00.pdf
Redpaper: "3592 Presentation Guide"
http://www.redbooks.ibm.com/abstracts/redp3749.html
IBM 1/2" Tape Cartridges:
http://www.storage.ibm.com/media/tapecartridges/index.html
AIT (Sony):
"Achieving One Terabyte per Cartridge..." S-AIT:
http://www.thic.org/pdf/Oct01/sony.jwoelbern.011009.pdf
DLT: www.dlttape.com

LTO
Ultrium roadmap:
http://www.lto-technology.com/newsite/html/format_roadmap.html
http://www.qualstar.com/146252.htm
Ultrium vs. Super-DLT:
http://www.storage.ibm.com/hardsoft/tape/lto/prod_data/ltovsdlt.html
"IBM LTO Ultrium Performance Considerations"
ftp://ftp.software.ibm.com/software/tivoli/whitepapers/wp-tsm-lto.pdf
IBM Tech Support:
Updating firmware:
http://ssddom02.storage.ibm.com/techsup/webnav.nsf/support/
ltofaqs_updatefw_drivefw
LTO - A New Robust Tape Standard:
http://www.storage.ibm.com/tape/lto/white_papers/ltowhitepaper.html
LTO Data Compression:
http://www.storage.ibm.com/tape/lto/white_papers/pdf/
whitepaper_compression.pdf
LTO Ultrium cleaning issues:
http://www.t10.org/ftp/t10/document.03/03-204r1.pdf
LTO Sense Data: http://www.tuganz.org/filemgmt_data/files/SenseData_04.pdf
Ultrium tape recording method (animated overview):
http://www.ultrium.com/newsite/html/about_tech.html

Sense Data:
"Tivoli Storage Problem Determination Guide - Understanding Sense Data"
http://www.ibm.com/support/entdocview.wss?uid=swg21063859
"SCSI Sense Data Structure and Example"
http://www.ibm.com/support/docview.wss?uid=swg21063859

Tape Is Not Dead!


http://ww2.keylink.pios.com/mkt/IBM.nsf/
cf2be34cd3c4be5c85256a4e00630b1c/cab4320d75adddb185256bbb004bce90/
$FILE/Tape+is+not+dead!+(Illuminata+3-28-02).pdf

LMCPD (atldd) and 3590 (Atape) driver software (found via "Support" on the
3494 home page):
ftp://service.boulder.ibm.com/storage/devdrvr/ ...or...
ftp://index.storsys.ibm.com/devdrvr

ADSM-L mailing list, a LISTSERV-managed list:


ADSM-L@VM.MARIST.EDU via LISTSERV@VM.MARIST.EDU
Admin: Martha McConaghy <URMM@VM.MARIST.EDU>
(She is Manager of Systems, Network and Operations.)
To subscribe: Send email to LISTSERV@VM.MARIST.EDU with a blank
subject and a body of "subscribe ADSM-L your name".
OR: visit www.marist.edu/htbin/wlvindex?adsm-l
To unsubscribe: Send email to LISTSERV@VM.MARIST.EDU containing the
Listserv command: SIGNOFF ADSM-L
OR: visit www.marist.edu/htbin/wlvindex?adsm-l
*DO NOT* send an unsubscribe request to ADSM-L: the
many hundreds of people who will receive that message
can do nothing about unsubscribing you, and will just
be annoyed with your faux pas.
(Note that the process of subscribing and unsubscribing is explained in
the introductory information you received when you joined the List...
which you were supposed to save; in the TSM client manuals, under
"Online forum" or "Internet"; in the product upgrade README files; in
the Monthly TSM FAQ; on the IBM Tivoli Communities web page
www.ibm.com/software/sysmgmt/products/support/Tivoli_Communities.html.
And, of course, you can always web search on "adsm-l unsubscribe".)

To change your subscription to a daily digest of all postings for the
day, send email to LISTSERV@VM.MARIST.EDU with the mail body containing
the text: set adsm-l DIGEST
To change back to regular email: set adsm-l NODIGests
(Be aware that the digest function has had reliability problems.)
To get information on the services provided by the LISTSERV program,
send email to LISTSERV@VM.MARIST.EDU with the mail body containing the
text: info genintro

The ADSM-L list is archived by LISTSERV on a monthly basis; as the
month proceeds, the current month's file accumulates and can be
retrieved in its "thus far" state.  You can get a list of what's there
by doing:
mail LISTSERV@VM.MARIST.EDU
with the body of the mail containing "index adsm-l".
That will provide a list of the available files.

Individual files have time-sequenced names, in "ADSM-L LOGyymm"
format, such as "ADSM-L LOG9808" for August, 1998 and "ADSM-L LOG0002"
for February, 2000.
Retrieve each by doing:
mail LISTSERV@VM.MARIST.EDU
with the body of the mail containing
"get adsm-l <FileType>", as in "get adsm-l LOG9907".

You can also get them via FTP from ftp://vm.marist.edu/academ:adsm-l./
These logs are by month, beginning with Sept. 1993 when the list was
created. Some of them are pretty large, so be sure to have enough disk
space.

The archives are also searchable from the web:


www.adsm.org is the most commonly used
www.marist.edu/htbin/wlvindex?adsm-l yields a primitive file list
You can also email the list administrator from this page.
(Be aware that its web server is always slow.)
www.mail-archive.com/adsm-l@vm.marist.edu/

To suspend getting email, but remain a member of the List, you can
adjust your personal settings on the Listserver for "NOMail". Send email
to LISTSERV@VM.MARIST.EDU with the one-line body: SET ADSM-L NOMail

ADSM-L posting advice:


- First and foremost: don't immediately post a question. Make the
effort to look for the answer in available information sources
(manuals, redbooks, websites, archives, etc.). If you are having a
problem with client options, be sure to have done 'dsmc q opt' to
"compile" and validate your options. If you are having a problem with
Include-Exclude specifications, be sure to have done 'dsmc q
inclexcl' to "compile" and summarize the composite from server and
client. If it is a session problem, look in the logs on both sides of
the session. Take the initiatives which data processing professionals
do. Then you may post saying that you've done the right thing in
performing research. Remember that the people who respond to
questions are not sitting around waiting for questions to come in -
they are busy doing what their employers expect of them. Expecting
other people to look up readily available information you haven't
bothered to is very bad form. Asking questions which have been
answered many times in the past, and which can be viewed in the List
archives, is a waste of people's time and List archive space.
- Always include the V.R.M.L level numbers of the software you are
writing about. Be specific: don't specify "version 5" when you need
to specifically say 5.1.7.9.
- Post in plain text, to aid immediate readability and future archives
searching. Sending email as HTML, RTF or the like, or using
proprietary format (i.e., MS Word) attachments deters respondents.
- Include details and specifics, including pertinent error messages,
platform type, and configuration info. Too many postings omit
detail, resulting in wasted time prying the information out of the
poster (and what incentive is there to help someone like that?).
- Mention what avenues you have already pursued in investigating the
problem. There's nothing more annoying to a responder than taking the
time to formulate a recommendation - only to have the original poster
write back saying that they already considered that: thanks a lot for
saying so in the first place.
- Do not use the List to write to an individual. It's a complete waste
of Internet bandwidth and the time of 1600 people to have to process
a posting where someone asks, "Dave, could you send me a copy of your
utility?" Always be conscious of the address that your email
response is using.
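The self-help steps in the first bullet are easy to script as a pre-posting checklist. This sketch simply runs the validation commands named above ('dsmc q opt', 'dsmc q inclexcl') and collects their output for inspection; the structure is illustrative, not a supported tool.

```python
"""Sketch of a pre-posting checklist: run the client-option and
include-exclude validation commands mentioned above and gather their
output before asking ADSM-L.  Purely illustrative."""
import subprocess

CHECKS = [
    ["dsmc", "q", "opt"],        # "compile" and validate client options
    ["dsmc", "q", "inclexcl"],   # composite include-exclude picture
]

def run_checks(commands=CHECKS):
    """Return {command string: output, or an error note} for each check."""
    results = {}
    for cmd in commands:
        name = " ".join(cmd)
        try:
            proc = subprocess.run(cmd, capture_output=True, text=True,
                                  timeout=120)
            results[name] = proc.stdout + proc.stderr
        except (OSError, subprocess.SubprocessError) as exc:
            results[name] = "could not run: %s" % exc
    return results
```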

Problem situations:
- Mail back with Subject "Rejected posting to ADSM-L@VM.MARIST.EDU"
and body saying "Your message is being returned to you unprocessed
because it appears to have already been distributed to the ADSM-L
list. ..." This is because some idiot List member is rejecting his
incoming ADSM-L mail back to the listserver. Examine the expanded
mail headers to determine the offending site.

ADSM.ORG is a product-specific reference site which began as, and still
principally is, a copy of ADSM-L postings (whose primary storage site is
Marist College, which hosts ADSM-L). The ownership of the site is not
divulged.
http://my.adsm.org

TSM user groups:


http://www.ibm.com/software/sysmgmt/products/support/Tivoli_User_Groups.html

TivoliGuru.com is "an open discussion forum for Tivoli professionals", begun
around the beginning of 2003, whose ownership is not divulged. As of this
time, its value is dubious: it seems to be a very general site addressing all
Tivoli products, rather than ADSM/TSM in depth, as ADSM-L does.
http://tivoliguru.com/

ADSM Problem Determination Guide (a short aid):


http://www.ibm.com/support/techdocs/atsmastr.nsf/
2a87efd214ce1a4785256842007bb416/85256760006a08d58525663b005b190c?
OpenDocument
"It doesn't work!" http://www.chiark.greenend.org.uk/~sgtatham/bugs.html
See also part 6.2.3 of redpaper
Certification Study Guide for IBM Tivoli Storage Manager Version 5.2
(http://www.redbooks.ibm.com/abstracts/REDP3934.html)

Tivoli presentations, datasheets, articles, etc.:


http://www.ibm.com/software/tivoli/library/technicalbriefs/
http://www.ibm.com/software/tivoli/library/faqs/
http://www.ibm.com/software/tivoli/library/datasheets/

Tivoli Field Guides:


http://www.ibm.com/software/sysmgmt/products/support/Field_Guides.html
"Using the Tivoli Storage Manager Central Scheduler"
http://www.ibm.com/support/docview.wss?uid=swg27004753
"An approach to patches"
http://www.ibm.com/support/entdocview.wss?uid=swg27001633
"A Brief Introduction to IBM Tivoli Storage Manager Operations - A Plain
Language Guide on TSM Care and Feeding" (TSM Operational Reporting)
http://www.ibm.com/support/docview.wss?uid=swg27005054
"Full-Incremental Rotations Using IBM Tivoli Storage Manager":
http://www.ibm.com/support/docview.wss?uid=swg27005212

TSM vs Veritas NetBackup:


ftp://ftp.software.ibm.com/software/tivoli/whitepapers/wp-tsm-comparing.pdf
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
with archives to search through at:
http://mailman.eng.auburn.edu/pipermail/veritas-bu/
and
http://marc.theaimsgroup.com/?l=veritas-bu&r=1&w=2

UCSD's 3494:
http://www-act.ucsd.edu/act/ibm3494.html
HSM:
Redbook: "Using ADSM Hierarchical Storage Management" (SG24-4631)
Tivoli Field Guide: TSM for Space Management:
http://www.ibm.com/support/entdocview.wss?rs=0&uid=swg27002498
IBM redbooks, for online viewing and download:
http://www.redbooks.ibm.com
Send feedback email to: redbook@us.ibm.com
IBM product information, emailed to you:
http://isource.ibm.com/world/index.shtml
Lotus/Domino redbooks: http://www.lotus.com/developers/redbook.nsf
IBM Techdocs: http://www.ibm.com/support/techdocs/atsmastr.nsf/Web/Flashes
APARs, PTFs (APAR repository/APAR database):
TSM: http://www.ibm.com/software/sysmgmt/products/support/
IBMTivoliStorageManager.html
where you can enter word, or phrases without quoting
General: http://www.ibm.com/support/
Enter phrases in double quotes.
Other:
http://service.software.ibm.com/cgi-bin/support/rs6000.support/databases
http://www.tivoli.com/asktivoli/cgi-bin/cast.cgi (need userid, password)
http://www.ibm.com/software/sysmgmt/products/support/
-> select IBM Tivoli Storage Manager -> select "Solutions"
Be aware that many are the typing and spelling errors in the databases,
which can thwart searches. (There is no IBM editor assigned to review
the coherency and correctness of what technicians write therein.)
For a given TSM level, you can get a list of the APARs fixed at that level
by searching IBM for like: "APARs fixed in V5.1 PTFs".

"IBM TotalStorage Tape Device Drivers: Installation and User's Guide"
(GC35-0154) (a renaming of the earlier manual "IBM SCSI Tape Drive, Medium
Changer, and Library Device Drivers: Installation and User's Guide", of the
same publication number)
"IBM TotalStorage Tape Device Drivers: Programming Reference" (GC35-0346)
(a renaming of the earlier manual "IBM SCSI Tape Drive, Medium Changer, and
Library Device Drivers: Programming Reference" (WB2107))
Available at ftp://ftp.storsys.ibm.com/devdrvr/Doc/
(refer to the .message or README file in that directory)
or ftp://ftp.software.ibm.com/storage/devdrvr/Doc/ as files:
IBM_TotalStorage_tape_IUG.ps or IBM_TotalStorage_tape_IUG.pdf
IBM_TotalStorage_tape_PROGREF.ps or IBM_TotalStorage_tape_PROGREF.pdf
"IBM Ultrium Device Drivers, Installation and User's Guide" (GA32-0430)
"IBM Ultrium Device Drivers, Programming Reference" (GC35-0483)
Available at ftp://ftp.storsys.ibm.com/devdrvr/Doc/
(refer to the .message or README file in that directory)
or ftp://ftp.software.ibm.com/storage/devdrvr/Doc/ as files:
IBM_ultrium_tape_IUG.ps or IBM_ultrium_tape_IUG.pdf
IBM_ultrium_tape_PROGREF.ps or IBM_ultrium_tape_PROGREF.pdf
Comparison study of backup software:
http://www.networkcomputing.com/920/920r2.html

Disaster recovery:
Redbook: "Disaster Recovery Strategies with Tivoli Storage Management"
(SG24-6844)
http://www.redbooks.ibm.com/abstracts/sg246844.html
http://www.redbooks.ibm.com/redbooks/SG246844.html
http://www.redbooks.ibm.com/pubs/pdfs/redbooks/sg246844.pdf
Windows bare metal restore:
MS Knowledge Base article "How to Move a Windows 2000 Installation to
Different Hardware":
http://support.microsoft.com/default.aspx?scid=kb;EN-US;Q249694

Education/Training:
http://www.tivoli.com/services/education/courses/
http://www.rdperf.com/ (R&D Performance Group)
Media (3590 tapes):
http://www.emtec-magnetics.com/
http://www.mtc-open.net/ Magnetic Tape Cartridge technology
http://www.mtc-open.net/Infocenter/Linkpage/
http://www.thic.org/pdf/Oct00/imation.jgoins.001003.pdf
Tape Media Guide (table):
In: http://www.storage.ibm.com/pguide/SSGProductsweb0503.pdf
http://fujifilmmediasource.com/specs/new/misc/tapewip02.pdf
Oxford annual ADSM/TSM symposium: http://tsm-symposium.oucs.ox.ac.uk/
Papers/presentations/seminars:
http://tsm-symposium.oucs.ox.ac.uk/callfor.html (current)
http://tsm-symposium.oucs.ox.ac.uk/papers (papers dir.)
http://adsm-symposium.oucs.ox.ac.uk/1999/callfor.html
or http://adsm-symposium.oucs.ox.ac.uk/1999/papers/
http://adsm-symposium.oucs.ox.ac.uk/2001/callfor.html
or http://adsm-symposium.oucs.ox.ac.uk/2001/papers/
"The TSM Client - Diagnostics":
http://adsm-symposium.oucs.ox.ac.uk/2001/papers/Raibeck.Diagnostics.PDF
http://tsm-symposium.oucs.ox.ac.uk/ (TSM Symposium 2003)
HSM on Windows 2000 (NT 5):
http://www.highground.com/rsm/rsmoverview.htm
Microsoft Windows error numbers:
http://msdn.microsoft.com/library/wcedoc/wcesdkr/appendix_2.htm
http://msdn.microsoft.com/library/psdk/psdkref/errlist_9usz.htm
http://www.mvps.org/btmtz/win32errapp/ (Win32 Error Codes application)

Other storage mailing lists:
http://www.backupcentral.com/forums.html (Faq-o-matic)
(by W. Curtis Preston, author of the O'Reilly book Unix Backup &
Recovery)

RAID levels: http://www.pcguide.com/ref/hdd/perf/raid/levels/index.htm

Salary surveys:
http://adsmsalarysurvey.8m.com/ As of 2001/05/15 replaced by:
http://tsmsalarysurvey.8m.com by Mark Mooney <m.mooney@ais-nms.com>
www.salary.com

Sams Vantage product info:
http://www.cai.com/products/sams/ca_vantage_tsm.htm
SANs:
Redbook "Planning and Implementing an IBM SAN" (SG24-6116)
SAN Basics: http://www.storage.ibm.com/ibmsan/basics.htm

SQL: Admin Ref manual 'Select' command description.
TSM Technical Guide redbook, appendix A "TSM SQL".
Redbooks Technote - ITSM 5.1 SQL Interface:
http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/
tips0010.html?Open
"Sample SQL Select Statements":
http://www.ibm.com/support/entdocview.wss?uid=swg21049808
"Show drives used the last 24 hs":
http://www.ibm.com/support/docview.wss?uid=swg21155483
"SQL Workshop" presentation at Oxford 2003 TSM Symposium:
http://tsm-symposium.oucs.ox.ac.uk/papers/AndyLauraRobert.pdf
http://www.sql.org/online_resources.html
http://www.firstsql.com/tutor.htm
http://riki-lb1.vet.ohio-state.edu/mqlin/computec/tutorials/
SQLTutorial.htm
http://4guysfromrolla.com/webtech/sqlguru/
http://www.aspnetcenter.com/cliktoprogram/ (SQL Basics)
http://www.katungroup.com/ Select "Database" from lefthand panel menu
http://www.sqlcourse.com/ http://www.sqlcourse2.com/
http://www.dcs.napier.ac.uk/~andrew/sql/
http://www.geocities.com/SiliconValley/Vista/2207/sql1.html Intro to SQL
http://builder.com.com/article.jhtml?id=u00320020531dol01.htm
http://builder.com.com/article.jhtml?id=u00320020628dol01.htm
http://www.baymediax.com/portfolio/msutton/cars/basics/sqlrefer.htm
"Using the ADSM SQL Interface" (by IBMer Andy Raibeck)
http://www.uni-karlsruhe.de/~rz57/ADSM/3rd/handouts/raibeck.ps
(a PostScript file, to print or see with a utility like the free
Ghostscript or GSview)
Summarizing data with SQL:
http://www.paragoncorporation.com/ArticleDetail.aspx?ArticleID=6
Functions:
http://sybooks.sybase.com/onlinebooks/group-fs/awg0602e/dbrfen6/
@Generic__BookTextView/30162
SQL Where clause:
http://blink.ucsd.edu/Blink/External/Topics/Policy/0,1162,3000,00.html
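The summarizing-with-SQL material above comes down to GROUP BY plus aggregate
functions (SUM, COUNT, AVG...), which apply equally to TSM's Select command.
A minimal, self-contained sketch of the technique, run here against an
in-memory SQLite database rather than a TSM server; the table and column
names are hypothetical stand-ins for the kind of per-node occupancy data a
TSM Select might return:

```python
# Summarizing detail rows with GROUP BY and aggregate functions.
# The 'occupancy' table and its columns are hypothetical illustration data,
# not an actual TSM server table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE occupancy (node_name TEXT, stgpool_name TEXT, mb REAL)")
conn.executemany(
    "INSERT INTO occupancy VALUES (?, ?, ?)",
    [("NODE_A", "BACKUPPOOL", 120.0),
     ("NODE_A", "TAPEPOOL", 800.0),
     ("NODE_B", "BACKUPPOOL", 60.0)],
)

# GROUP BY collapses the detail rows into one summary row per node.
rows = conn.execute(
    "SELECT node_name, SUM(mb) AS total_mb, COUNT(*) AS pools "
    "FROM occupancy GROUP BY node_name ORDER BY node_name"
).fetchall()
for node, total_mb, pools in rows:
    print(node, total_mb, pools)
```

The same shape of query ("one row per node, totals across storage pools") is
what most of the sample Select statements in the resources above produce.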

Solution providers (third party storage hardware/software/consulting):
http://www.moregroupinc.com/

STK (StorageTek) web site: http://www.storagetek.com/

Storage Photo Album:
http://www.ibm.com/ibm/history/exhibits/storage/storage_photo.html

Tivoli Customer Support News:
http://www.tivoli.com/Tivoli_Electronic_Support/Supnews.nsf/Allnews

Tivoli Decision Support for Storage Management Analysis (TDS for SMA)
http://www.tivoli.com/products/index/decision_support_storage_mgt/
Said to help you A) analyze your current storage situation, and B) predict
your longer term storage needs. 2003/06: will be going into retirement
fairly soon, to be supplanted by Tivoli Data Warehouse and TEC.
TSM management:
IBM has a Guide that runs with 'Tivoli Decision Support' called 'Storage
Management Analysis' that is for reporting *SM data. See redbook
"Tivoli Storage Management Reporting" (SG24-6109).

User-contributed tools and aids:
*SM scripts:
http://adsm.nerdc.ufl.edu/scripts
ADSM interface movie:
ftp://ftp.lanl.gov/public/ggrider/adsmsmsaud.avi or
http://public.lanl.gov/ggrider/adsmsmsaud.avi
TSM For Perl: http://home.wtal.de/the_swordsman
Helpful scripts:
http://www.coderelief.com/depot.htm
http://nix.itss.auckland.ac.nz/adsm/

User implementations:
Cornell EZ-Backup, and fee for services:
http://www.ezbackup.cornell.edu/overview
Linux:
Supported devices:
http://www.ibm.com/software/sysmgmt/products/support/
IBM_TSM_Supported_Devices_for_Linux.html
Manuals:
http://publib.boulder.ibm.com/tividd/td/StorageManagerforLinux5.1.html
Client, 3.7:
ftp://ftp.software.ibm.com/storage/tivoli-storage-management/maintenance/
client/v3r7/Linux/LATEST/
or:
ftp://service.boulder.ibm.com/storage/tivoli-storage-management/
maintenance/client/v3r7/Linux/

Windows:
Redbook: Deploying the Tivoli Storage Manager Client in a Windows 2000
Environment (SG24-6141)
http://www.redbooks.ibm.com/redbooks/SG246141.html
http://www.redbooks.ibm.com/redbooks/pdfs/sg246141.pdf
"Microsoft Installer (MSI) Return Codes for Tivoli Storage Manager Client &
Server": http://www.ibm.com/support/docview.wss?uid=swg21050782
Backup/Archive products in general:
http://windows.about.com/cs/backupswproducts/

DLL archive (Windows): http://solo.abac.com/dllarchive/

Adabas backups:
ADINT/ADSM (http://www.ibm.com/de/entwicklung/adint_adsm/index.html)
Veritas vs. TSM (a limited comparison, sponsored by Veritas...):
http://www.keylabs.com/results/veritas/veritas.html

The long-term (archival) archiving (preservation) of electronic records:
http://www.archives.gov/publications/records_management_publications.html

Digital Archaeology: Rescuing Neglected and Damaged Data Resources
http://www.ukoln.ac.uk/services/elib/papers/supporting/pdf/p2.pdf

Disk recovery services:
www.drivesavers.com

Exchange 2000 Server Database Recovery:
http://www.microsoft.com/TechNet/exchange/dbrecovr.asp
Disaster Recovery for Microsoft Exchange 2000 Server:
http://www.microsoft.com/Exchange/techinfo/deployment/2000/E2Krecovery.asp

Hardware and software analyses: Gartner: www.gartner.com

TCP/IP communications errors:
http://publib.boulder.ibm.com/infocenter/db2help/index.jsp?topic=
/com.ibm.db2.udb.doc/core/rcommsec.htm
http://www.pdc.kth.se/doc/SP/manuals/db2-5.0/html/db2m0/db2tcp.htm

SHARE proceedings: http://www.share.org/proceedings

Enterprise Tape Storage presentation (3590, LTO; John Martin presentation):
http://www.cartagena.com/naspa/LTO1.pdf

IBM Tape Solutions (Scott Hoyle presentation, HPSS User Forum, 2000/07/26;
3590 vs. 9840, 3580 Ultrium/LTO, DLT 8000 tape drives):
http://www4.clearlake.ibm.com/hpss/Forum/2000/AdobePDF/
Freelance-Graphics-IBM-Tape-Solutions-Hoyle.pdf
3590 vs. 3580 Ultrium/LTO:
Redbook "The IBM TotalStorage Tape Selection and Differentiation Guide"
http://www.redbooks.ibm.com/redbooks/pdfs/sg246946.pdf

Torture-testing Backup and Archive Programs: Things You Ought to Know But
Probably Would Rather Not, a 1991 paper by Elizabeth D. Zwicky, SRI
International, for LISA V.
http://ftp.at.linuxfromscratch.org/utils/archivers/star/testscripts/
zwicky/testdump.doc.html

TSM for Perl (perhaps more appropriately: Perl for TSM)
Said to provide convenient access to the administrative console of the TSM
server.
http://home.wtal.de/the_swordsman/

Allen Rout's whitepaper on moving server storage pool data:
http://open-systems.ufl.edu/services/NSAM/whitepapers/50ways.html

(This ADSM/TSM Quick Facts document was made available on the web 2000/05/18.
It is known to be indexed by:
http://dir.adsm.org/FAQ/ http://dir.adsm.org/Cool/
http://www.coderelief.com/depot.htm
http://www-backup.univie.ac.at/ (Vienna University; click on "FAQs")
http://www.meduniwien.ac.at/itsc/services/backup/literatur.php
http://www.akh-wien.ac.at/medwrz/services/backup/literatur.shtml
http://adsm0.cso.uiuc.edu (University of Illinois, Urbana-Champaign
Campus Information Technologies and Educational Services)
http://folk.uio.no/kjetilk/tsmserver.html (TSM doc. at Oslo University)
http://www.tsmgg.nl/Links.htm (Netherlands TSM users group)
http://www.uni-ulm.de/urz/Dienste/ADSM.pdf (University of Ulm)
http://www.jasi.com/TSMUG/Useful_Links/useful_links.html
(The TSM User Group for Baltimore, Washington DC, and Northern Virginia)
http://revelstoke.cit.cornell.edu:8080/
http://www.living-wreck.de/bm/reinhold_htmltab.htm
http://www.autovault.nl/linksnl.html (AutoVAULT, Nederlands)
http://www.tuganz.org/links.php (Tivoli User Group/Australia, New Zealand)
http://www.lrz-muenchen.de/services/datenhaltung/adsm/sonstiges/
)

"When you can measure what you are speaking about, and express it in numbers,
you know something about it; but when you cannot measure it, when you cannot
express it in numbers, your knowledge is of a meager and unsatisfactory kind:
it may be the beginning of knowledge, but you have scarcely, in your thoughts,
advanced to the stage of science." -- William Thomson, Lord Kelvin

"Today's computers and software are like toddlers, who have to be continually
watched. What scares me is the future, when they become adolescent types and
are convinced they know more than we do..." -- me

"It's not what you know, it's knowing where to find it."
-- Andy Raibeck, Oxford 2001 seminar

"I never waste memory on things that can easily be stored and retrieved from
elsewhere." -- Albert Einstein

"Life itself is incremental." -- me

http://people.bu.edu/rbs/ADSM.QuickFacts
