| actifio.com | Sybase ASE DBA’s Guide to Actifio Copy Data Management iii
Preface
The information presented in this guide is intended for users who are familiar with basic Actifio processes and
procedures as described in Getting Started with Actifio Copy Data Management and who are qualified to
administer Sybase ASE databases.
Your Actifio appliance’s Documentation Library contains detailed, step-by-step, application-specific instructions on how
to protect and access your data. Each guide is in PDF format and may be viewed online, downloaded, or printed on
demand. The following guides will be of particular interest:
• Connecting Hosts to Actifio Appliances
• Virtualizing and Protecting Copy Data with the Application Manager
• Accessing and Recovering Copy Data with the Application Manager
• Restoring Copy Data with the Application Manager
This chapter introduces Actifio concepts and the procedures used to capture and access databases. It includes:
Actifio Data Virtualization on page 1
Capturing Data on page 2
Replicating Data on page 2
Accessing Data on page 3
Introduction to Actifio Sybase ASE Administration on page 5
Application data is captured at the block level, in application native format, according to a specified SLA. A Golden copy
of that data is created and stored once, and is then updated incrementally with only the changed blocks of data in an
“incremental forever” model. Unlimited virtual copies of the data can be made available instantly for use, without
proliferating physical copies and taking up additional storage infrastructure.
Replicating Data
Data can be replicated to a second Actifio appliance or to the cloud for recovery, disaster recovery, or test/
development purposes.
Data replication has traditionally been an inhibitor to efficient data management in a geographically distributed
environment. Actifio replication addresses these issues with a global deduplication and compression approach that:
• Drives down overall network usage.
• Eliminates the need for a dedicated WAN accelerator/optimizer.
• Does not require storage array vendor licenses as data is sent from one Actifio appliance to another.
• Is heterogeneous from any supported array to any supported array: Tier 1 to Tier 2 and/or Vendor A to
Vendor B.
• Preserves write-order, even across multiple LUNs.
• Is fully integrated with VMware Site Recovery Manager (SRM) and Actifio Resiliency Director.
• Encrypts data using the AES-256 encryption standard. Authentication between Actifio appliances is
performed using 1024-bit certificates.
Replication is controlled by Actifio Policy Template policies:
• Production to Mirror policies have several options to replicate data to a second Actifio appliance.
• Dedup Backup to Dedup DR policies use a fixed, Actifio proprietary replication engine to replicate data to a
second Actifio appliance. In addition, Dedup Backup to Dedup DR policies allow you to replicate data to two
locations.
• Production to Vault policies use a fixed, Actifio proprietary replication engine to replicate data to the cloud.
Mounts
The Actifio mount function provides instant access to data without moving data. Captured copies of databases can be
rolled forward via the Actifio user interface and mounted on any database server. Application Aware mounts are
described in Chapter 4, Accessing, Recovering, or Restoring a Sybase Database.
LiveClones
The LiveClone is an independent copy of data that can be refreshed when the source data changes. Because LiveClones are independent copies, they can be incrementally refreshed and masked before being made available to users. This allows teams such as development and test to work on the latest set of data without manually managing it, and without accessing or interfering with the production environment.
Restores
The restore function reverts the production data to a specified point in time. Restore operations actually move data.
Typically restore operations are performed to restore a database to a valid state after a massive data corruption or
storage array failure. The amount of time required to complete a restore operation depends on the amount of data
involved. Restores are described in Chapter 6, Restoring and Recovering a Sybase ASE Database.
Workflows
While SLAs govern the automated capture of a production database, Workflows automate access to the captured
database.
Workflows are built with captured data. Workflows can present data as either a direct mount or as a LiveClone:
• Direct mounts (standard or application aware) work well for data that does not need to be masked prior to
being presented. A mounted copy of data can be refreshed manually or automatically on a schedule.
Direct mounts allow you to instantly access captured data without actually moving the data.
• A LiveClone is a copy of your production data that can be updated manually or on a scheduled basis. You
can mask sensitive data in a LiveClone prior to making it available to users.
Combining Actifio’s automated data capture and access control with Workflows and their optional data masking
capabilities allows you to create self-provisioning environments. Now, instead of having to wait for DBAs to update
test and development environments, users can provision their own environments almost instantly.
For example, an Actifio administrator can create an SLA Template Policy that captures data according to a specified
schedule. Optionally, the administrator can mark the captured production data as sensitive and only accessible by
users with the proper access rights.
After access rights have been defined and data has been captured, the administrator can create a Workflow that:
• Makes the captured data available as a LiveClone or as a direct mount
• Updates the LiveClone or mountable data on a scheduled or on-demand basis
• (Optional) Automatically applies scripts to the LiveClone’s data after each update. This is useful for masking
sensitive data.
Backup: Manual and/or scheduled online backups (incremental-forever full snapshots).
Test/Dev Copy: Multiple point-in-time copies and instant Test/Dev refresh.
This section details the steps involved in preparing a Sybase database for Actifio protection and management:
Before You Begin on page 7
Enabling Linux Change Block Tracking on page 8
Note: You can also get this application id from the appliance command line; run udsinfo lsapplication.
11. Change the freeze, thaw, and configuration file script extensions from "xxx" to the application id from step 4 above (for example: 5534800).
From the command line:
mv freeze.xxx freeze.5534800
mv thaw.xxx thaw.5534800
mv act_Sybase_Src.xxx act_Sybase_Src.5534800
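The three renames above can also be scripted. A minimal sketch, assuming the template scripts live in the current directory and using the example application id 5534800; substitute your own id:

```shell
# Rename the freeze/thaw/config templates to carry the application id.
# APPID is the example value from this guide; replace it with your own.
APPID=5534800
for f in freeze thaw act_Sybase_Src; do
  if [ -f "$f.xxx" ]; then
    mv "$f.xxx" "$f.$APPID"
  fi
done
```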
12. Edit and modify the act_Sybase_Src.5534800 script for the input parameters:
vi act_Sybase_Src.5534800
Replace these lines:
OSUSER=<Sybase Binary Owner>
SRC_SYBASE_SQLD="<Sybase Software Home Location>"
SRC_BACKUP_USER="<DB Username who has Backup Privs>"
#SRC_USER_PASSWD="<SRC_BACKUP_USER password if PASSWORD IS NOT SET TO NULL>"
PROTECTED_LVM_LIST="<Comma separated List of LVM Protected under Actifio>"
SRC_DBNAME="<Comma separated List of Source DB Names>"
SRC_SERVER_NAME="<Sybase Server Name>"
MANIFEST_FILE_LOC="<Manifest file Location (Must be one of the protected LVM)>"
With
OSUSER=sybase
SRC_SYBASE_SQLD=/opt/sybase/OCS-16_0
SRC_BACKUP_USER="actuser"
PROTECTED_LVM_LIST="/actprd/data,/actprd/log"
SRC_DBNAME="primarydb,sybasedb"
SRC_SERVER_NAME=sybase01
MANIFEST_FILE_LOC="/actprd/data"
Where
OSUSER= Operating system user who owns the Sybase binaries
SRC_SYBASE_SQLD= Sybase Software Home Location
The home location can be retrieved by running ps -ef | grep sybase
SRC_BACKUP_USER=DB Username who has Backup Privileges created in Before You Begin on page 7.
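Because MANIFEST_FILE_LOC must be one of the protected LVM mount points, a quick shell check can catch a mismatch before a capture runs. A sketch using the example values above (not part of the Actifio scripts):

```shell
# Verify MANIFEST_FILE_LOC is one of the comma-separated protected LVMs.
# Values are the example ones from this guide; substitute your own.
PROTECTED_LVM_LIST="/actprd/data,/actprd/log"
MANIFEST_FILE_LOC="/actprd/data"

found=no
IFS=',' read -ra lvms <<< "$PROTECTED_LVM_LIST"
for lvm in "${lvms[@]}"; do
  if [ "$lvm" = "$MANIFEST_FILE_LOC" ]; then
    found=yes
  fi
done
echo "manifest location protected: $found"
```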
3. To restrict the log backup path, set "Start Paths" under the Advanced settings of this application.
4. Set up the scripts:
Log in to the database server as root.
Change to the /act directory (cd /act).
Note: Another way to get the application id is to log into the appliance and run udsinfo lsapplication.
6. Change the appid.xxx, freeze.xxx, and thaw.xxx script extensions from "xxx" to the appid from Step 4 (for example: 5542334).
From the command line:
mv thaw.xxx thaw.5542334
mv freeze.xxx freeze.5542334
mv appid.xxx appid.5542334
7. Edit the act_Sybase_logbackup.conf script and set the log cleanup input parameters.
vi act_Sybase_logbackup.conf
OSUSER="<Sybase Binary Owner>"
SRC_SYBASE_SQLD="<Sybase Software Home Location>"
SRC_BACKUP_USER="<DB Username who has Backup Privs>"
#SRC_USER_PASSWD="<SRC_BACKUP_USER password if PASSWORD IS NOT SET TO NULL>"
SRC_DBNAME="<Comma separated List of Source DB Names>"
SRC_SERVER_NAME="<Sybase Server Name>"
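For reference, a filled-in sketch of these parameters, reusing the hypothetical example values shown earlier in this guide; substitute the names from your own environment:

```shell
# Example act_Sybase_logbackup.conf values (illustrative only).
OSUSER="sybase"
SRC_SYBASE_SQLD="/opt/sybase/OCS-16_0"
SRC_BACKUP_USER="actuser"
#SRC_USER_PASSWD=
SRC_DBNAME="primarydb,sybasedb"
SRC_SERVER_NAME="sybase01"
```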
Use cases:
• Mount and Refresh Target Sybase Database as a Virtual Application: To present and refresh a read-write virtual copy of the Sybase database on a new target, either at a scheduled database backup point in time or with the log rolled forward to a specific point in time.
• Mount and Refresh the Target Sybase Database as Warm Standby: To add a virtual copy as a warm standby database for the source primary.
Note: TARGET_DBNAME_LIST: uncomment and set this value if the database names on the target differ from the source.
Note:
TARGET_DBNAME_LIST changes each individual database name from source to target.
NEWDBNAME_PREFIX changes all source database names by applying the prefix value.
If neither TARGET_DBNAME_LIST nor NEWDBNAME_PREFIX is set, the source database
names are retained on the target clone copy.
If both values are set, the clone errors out.
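As a sketch of how such a source:target list resolves, here is a small shell illustration. The actual renaming logic lives inside Actifio's clone scripts; map_dbname is a hypothetical helper, and the values are the example ones from this guide:

```shell
# Hypothetical helper: resolve a source db name through TARGET_DBNAME_LIST.
TARGET_DBNAME_LIST="primarydb:devprimarydb,sybasedb:devsybasedb"

map_dbname() {
  local pair
  IFS=',' read -ra pairs <<< "$TARGET_DBNAME_LIST"
  for pair in "${pairs[@]}"; do
    if [ "${pair%%:*}" = "$1" ]; then
      echo "${pair#*:}"   # mapped target name
      return
    fi
  done
  echo "$1"               # no mapping set: source name retained on the clone
}

map_dbname primarydb      # -> devprimarydb
```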
Example:
MANIFEST_FILE_LOC=/actprd/data
OSUSER=sybase
TARGET_MNT_PNT=/primary
TARGET_SYBASE_SQLD=/opt/sybase/OCS-16_0
TARGET_DB_USER="actuser"
TARGET_DBUSER_PASSWD=
TARGET_SERVER_NAME=sybase01
SRC_DBNAME="primarydb,sybasedb"
TARGET_DBNAME_LIST="primarydb:devprimarydb,sybasedb:devsybasedb"
#LOG_RECOVERY=
#LOG_BKP_LOC=
#UNTIL_TIME=
Save the file.
7. Click Mount. This mounts the data volume to the target server as /sybaselog.
8. On the target host, change to the /act/scripts directory and run the act_Post_Target_Primary.sh script. This brings the Sybase database online with roll-forward of the log.
Note: To run this script from the command line, edit act_Post_Target_Primary.sh and uncomment these two lines:
#export ACT_JOBTYPE="mount"
#export ACT_PHASE="post"
If running from a workflow that uses these scripts as pre/post, make sure these two lines are commented.
Note: If running from a workflow that uses these scripts as pre/post, make sure these two lines are commented:
#export ACT_JOBTYPE="unmount"
#export ACT_PHASE="pre"
To run this script from the command line, edit act_Pre_Target_Primary.sh and uncomment the above two lines.
7. Click Mount. This mounts the data volume to the target server as /standby.
8. On the target host, log in as root and change to the /act/scripts directory. To run the act_Post_Target_Standby.sh script from the command line, open it and uncomment these two lines:
#export ACT_JOBTYPE="mount"
#export ACT_PHASE="post"
9. Run the act_Post_Target_Standby.sh script:
./act_Post_Target_Standby.sh
Note: If running from a workflow that uses this script as pre/post, make sure these two lines are commented.