
**********************************************************************

Configuring DBFS on Oracle Exadata Database Machine (Doc ID 1054431.1)


**********************************************************************
In this Document
Goal
Solution
Configuring DBFS on Oracle Database Machine
Managing DBFS mounting via Oracle Clusterware
Steps to Perform If Grid Home or Database Home Changes
Removing DBFS configuration
Creating and Mounting Multiple DBFS Filesystems
Troubleshooting Tips
Community Discussions
References
APPLIES TO:
Oracle Exadata Hardware - Version 11.2.0.1 and later
Oracle Database - Enterprise Edition - Version 11.2.0.0 and later
Information in this document applies to any platform.
GOAL
This document describes the steps needed to configure Oracle Database Filesystem (DBFS) on Oracle Database Machine (Exadata). For platforms other than Oracle Database Machine, additional preparation steps may be required. The steps in this document apply to Oracle Database Machines running 11.2 software.
Per the documentation, 11.2.0.3 is required for this to work on Solaris. See
http://docs.oracle.com/cd/E11882_01/appdev.112/e18294/adlob_client.htm
Without it you may encounter an error such as:
"The mount option can only be used on Linux platforms"
SOLUTION
Note: This procedure applies to Linux and Solaris 11 database servers and has been verified on both Exadata Database Machine and Oracle SuperCluster.
Configuring DBFS on Oracle Database Machine
This article describes how to configure Oracle Database Filesystem (DBFS) on Oracle Database Machine. Most of the steps below are one-time setup steps, except where noted otherwise. On platforms other than Oracle Database Machine, additional setup steps may be required to install the required fuse RPM packages, which are installed by default on Oracle Database Machine database servers.
Those using DBFS should review Note 1150157.1 for recommended patches.
For nodes running Solaris, the following items should be reviewed before following the steps in this note.
Solaris 11 SRU 7 (Patch 14050126) or higher is required for Solaris to support DBFS mounting with fuse. Solaris 11 Express hosts must be upgraded to Solaris 11 (see note below).
Review Note 1021281.1 to learn about Solaris support repositories which may be needed to apply SRU updates to Solaris machines as prerequisites to configuring DBFS on Solaris database servers.
For systems running Solaris 11 Express, follow MOS note 1431284.1 to upgrade to Solaris 11. After upgrading to Solaris 11, apply Solaris 11 SRU 7 or later.
Note: All references to ORACLE_HOME in this procedure are to the RDBMS ORACLE_HOME directory (usually /u01/app/oracle/product/11.2.0/dbhome_1) except where specifically noted. All references to GI_HOME should be replaced with the ORACLE_HOME directory for the Grid Infrastructure (GI).
By convention, the dollar sign ($) prompt signifies a command run as the oracle user (or Oracle software owner account) and the hash (#) prompt signifies a command that is run as root. This is further clarified by prefixing the $ or # with (oracle)$ or (root)#.
If running Solaris 11, the system must be running Solaris 11 SRU 07 or later. Additionally, the libfuse package must be installed. Presence of the libfuse package can be verified with "pkg list libfuse" (should return one line).
To verify the SRU currently on the system, as root run "pkg info entire | grep SRU" and you will see a reference to the SRU in the output. The SRU version delivered with each Exadata release may be found in Note 888828.1. If the system is running SRU 06 or earlier, it will require an update before installing the libfuse package. If the system is running SRU 07 or later, skip to the next step to install libfuse.
After reviewing note 1021281.1 to configure repository access, run: pkg update
The system will apply the latest package updates, create a new boot environment, and set it as the default. To confirm, run: beadm list. You should see an "R" next to the boot environment that will be active upon reboot, and an "N" next to the boot environment that is active now. At this stage, these two letters should be on different lines until you reboot the system.
Reboot the server to have it boot into the updated SRU environment.
If running Solaris 11, ensure that the libfuse package is installed by running "pkg info libfuse" at the prompt. If no rows or an error are returned, then follow the steps below to install libfuse.
After reviewing note 1021281.1 to configure repository access, run this command to install libfuse: pkg install libfuse
Confirm that it installed by running: pkg verify libfuse
The pkg verify command should have no output if successful.
In the procedures listed in this note, both Solaris and Linux database servers are assumed to have user equivalence for the root and DBFS repository database (typically "oracle") users. Each of those users is assumed to have a dbs_group file in their $HOME directory that contains a list of cluster hostnames. The dcli utility is assumed to be available on both Solaris and Linux database nodes.
When non-root commands are shown, it is assumed that the proper environment variables for ORACLE_SID and ORACLE_HOME have been set and that the PATH has been modified to include $ORACLE_HOME/bin. These things may be done automatically by the oraenv script on Linux or Solaris systems.
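For example, a minimal way to set the environment on one node (assuming the DBFS repository instance on that node is named fsdb1; substitute your own SID) is:
(oracle)$ export ORACLE_SID=fsdb1
(oracle)$ . oraenv
ORACLE_SID = [fsdb1] ?    <press Enter to accept>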
For Linux database servers, there are several steps to perform as root; Solaris database servers do not require these steps and can skip them. First, add the oracle user to the fuse group on Linux. Run this command as the root user.
(root)# dcli -g ~/dbs_group -l root usermod -a -G fuse oracle
Create the /etc/fuse.conf file with the user_allow_other option and ensure proper privileges are applied to this file.
(root)# dcli -g ~/dbs_group -l root "echo user_allow_other > /etc/fuse.conf"
(root)# dcli -g ~/dbs_group -l root chmod 644 /etc/fuse.conf
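To confirm the setup on all nodes, the following optional checks (not part of the original steps) can be used; each node should show the fuse group in the oracle user's group list and user_allow_other in /etc/fuse.conf:
(root)# dcli -g ~/dbs_group -l root id oracle
(root)# dcli -g ~/dbs_group -l root cat /etc/fuse.conf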
For Solaris database servers, to enable easier debugging and troubleshooting, it is suggested to add a line to the /etc/user_attr file to give the oracle user the ability to mount filesystems directly. As root, run this on a database server:

(root)# dcli -g ~/dbs_group -l root "echo 'oracle::::type=normal;project=group.o


install;defaultpriv=basic,priv_sys_mount' >> /etc/user_attr"
After running this, log out of your oracle session and log in again to enable the additional privileges.
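As an optional check after logging back in, the standard Solaris ppriv utility can be used to confirm that the sys_mount privilege now appears in the shell's privilege sets (output formatting varies by release):
(oracle)$ ppriv $$ | grep sys_mount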
For all database servers, create an empty directory that will be used as the mount point for the DBFS filesystem.
(root)# dcli -g ~/dbs_group -l root mkdir /dbfs_direct
Change ownership on the mount point directory so oracle can access it.
(root)# dcli -g ~/dbs_group -l root chown oracle:dba /dbfs_direct
For Solaris database hosts, it is required to employ a workaround for bug 12832052. This requires an edit to the <GI_HOME>/bin/ohasd script to be made on each database server locally. First, make a copy of the current ohasd script as root:
(root)# dcli -g ~/dbs_group -l root cp -p /u01/app/11.2.0/grid/bin/ohasd /u01/app/11.2.0/grid/bin/ohasd.pre12832052
Then edit the script locally on each node (do not copy the file from one node to another) and change the original lines (at around line 231) from this:
$LOGMSG "exec $PERL /u01/app/11.2.0/grid/bin/crswrapexece.pl $ENV_FILE $ORASYM \"$@\""
exec $PERL /u01/app/11.2.0/grid/bin/crswrapexece.pl $ENV_FILE $ORASYM "$@"
To add a line before the existing ones as shown (the line starting with ppriv is added) so that the resulting section looks like this:
ppriv -s I+sys_mount $$
$LOGMSG "exec $PERL /u01/app/11.2.0/grid/bin/crswrapexece.pl $ENV_FILE $ORASYM \"$@\""
exec $PERL /u01/app/11.2.0/grid/bin/crswrapexece.pl $ENV_FILE $ORASYM "$@"
Note that this workaround will be required after each bundle patch installation on the GI_HOME until bug 12832052 is fixed and included in the bundle patch.
To pick up the additional group (fuse) membership for the oracle user on Linux or the workaround above on Solaris, Clusterware must be restarted. For example, to restart Clusterware on all nodes at the same time (non-rolling), you can use the following commands as root:
(root)# dcli -g ~/dbs_group -l root /u01/app/11.2.0/grid/bin/crsctl stop crs
(root)# dcli -g ~/dbs_group -l root /u01/app/11.2.0/grid/bin/crsctl start crs
Note that the "crsctl stop cluster -all" syntax may not be used as it leaves oha
sd running and Solaris database hosts require it to be restarted for the workaro
und to take effect.
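Once the restart completes, it is worth confirming that Clusterware is healthy on every node before continuing, for example:
(root)# dcli -g ~/dbs_group -l root /u01/app/11.2.0/grid/bin/crsctl check crs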
Create a database to hold the DBFS repository. Follow Note 1191144.1 to create the DBFS repository database.
As the RDBMS software owner, create the DBFS repository inside the repository database. To create the repository, create a new tablespace to hold the DBFS objects and a database user that will own the objects.
Use sqlplus to login to the DBFS repository database as a DBA user (i.e. SYS or SYSTEM).
In the following create tablespace statement, use any disk group (this example shows DBFS_DG, but any diskgroup with sufficient space available can be used) and size it appropriately for the intended initial capacity. Autoextend can be used, as long as the initial size can accommodate the repository without requiring autoextension. The following example statement creates a 32GB tablespace with autoextend on, allocating an additional 8GB to the tablespace as needed. You should size your tablespace according to your expected DBFS utilization. A bigfile tablespace is used in this example for convenience, but smallfile tablespaces may be used as well.
SQL> create bigfile tablespace dbfsts datafile '+DBFS_DG' size 32g autoextend on next 8g maxsize 300g NOLOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;
SQL> create user dbfs_user identified by dbfs_passwd default tablespace dbfsts quota unlimited on dbfsts;
SQL> grant create session, create table, create view, create procedure, dbfs_role to dbfs_user;
With the user created and privileges granted, create the database objects that will hold DBFS.
(oracle)$ cd $ORACLE_HOME/rdbms/admin
(oracle)$ sqlplus dbfs_user/dbfs_passwd
SQL> start dbfs_create_filesystem dbfsts FS1
This script takes two arguments:
dbfsts: tablespace for the DBFS database objects
FS1: filesystem name; this can be any string and will appear as a directory under the mount point
For more information about these arguments, see the DBFS documentation at http://download.oracle.com/docs/cd/E11882_01/appdev.112/e18294/adlob_client.htm
Check the output of the dbfs_create_filesystem script for errors.
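As an optional sanity check before configuring any mount, the dbfs_client command interface can list the new filesystem directly (this assumes a working SQL*Net connect string for the repository database, shown here as fsdb; dbfs_client prompts for the dbfs_user password):
(oracle)$ dbfs_client dbfs_user@fsdb --command ls dbfs:/
The output should include the FS1 filesystem created above.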
Perform the one-time setup steps for mounting the filesystem. The mount-dbfs.sh script attached to this note provides the logic and necessary scripting to mount DBFS as a cluster resource. The one-time setup steps required for each of the two mount methods (dbfs_client or mount) are outlined below. There are two options for mounting the DBFS filesystem and each will result in the filesystem being available at /dbfs_direct. Choose one of the two options.
The first option is to utilize the dbfs_client command directly, without using an Oracle Wallet. There are no additional setup steps required to use this option.
The second option is to use the Oracle Wallet to store the password and make use of the mount command. The wallet directory ($HOME/dbfs/wallet in the example here) may be any oracle-writable directory (creating a new, empty directory is recommended). All commands in this section should be run by the oracle user unless otherwise noted.
On Linux DB nodes, set the library path on all nodes using the commands that follow (substitute the proper RDBMS ORACLE_HOMEs):
(root)# dcli -g dbs_group -l root mkdir -p /usr/local/lib
(root)# dcli -g dbs_group -l root ln -s /u01/app/oracle/product/11.2.0/dbhome_1/lib/libnnz11.so /usr/local/lib/libnnz11.so
(root)# dcli -g dbs_group -l root ln -s /u01/app/oracle/product/11.2.0/dbhome_1/lib/libclntsh.so.11.1 /usr/local/lib/libclntsh.so.11.1
(root)# dcli -g dbs_group -l root ln -s /lib64/libfuse.so.2 /usr/local/lib/libfuse.so.2
(root)# dcli -g dbs_group -l root 'echo /usr/local/lib >> /etc/ld.so.conf.d/usr_local_lib.conf'
(root)# dcli -g dbs_group -l root ldconfig
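As an optional verification (not part of the original steps), confirm that the dynamic loader can resolve the new links on every node:
(root)# dcli -g dbs_group -l root "ldconfig -p | grep -E 'libnnz11|libclntsh|libfuse'"
Each node should list entries for these libraries, including the /usr/local/lib copies.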
Create a new TNS_ADMIN directory ($HOME/dbfs/tnsadmin) for exclusive use by the DBFS mount script.

(oracle)$ dcli -g dbs_group -l oracle mkdir -p $HOME/dbfs/tnsadmin


Create the $HOME/dbfs/tnsadmin/tnsnames.ora file with the following contents on the first node. This example presumes that the name of the DBFS repository database is fsdb and that the instance on the first node is named fsdb1. (If your RDBMS ORACLE_HOME is not /u01/app/oracle/product/11.2.0/dbhome_1, then change the PROGRAM and ENVS settings accordingly.)
fsdb.local =
 (DESCRIPTION =
   (ADDRESS =
     (PROTOCOL=BEQ)
     (PROGRAM=/u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle)
     (ARGV0=oraclefsdb1)
     (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
     (ENVS='ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1,ORACLE_SID=fsdb1')
   )
   (CONNECT_DATA=(SID=fsdb1))
 )
On other nodes, create similar entries (all using the name "fsdb.local") and change all occurrences of fsdb1 to the appropriate instance name to match the instance name running on the node where that tnsnames.ora file resides. The tnsnames.ora file on each node will be slightly different so that each tnsnames.ora file references the instance running locally on that node.
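For example, on the second database server the file would typically look like this (assuming its local instance is named fsdb2):
fsdb.local =
 (DESCRIPTION =
   (ADDRESS =
     (PROTOCOL=BEQ)
     (PROGRAM=/u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle)
     (ARGV0=oraclefsdb2)
     (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
     (ENVS='ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1,ORACLE_SID=fsdb2')
   )
   (CONNECT_DATA=(SID=fsdb2))
 )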
On each node, create the $HOME/dbfs/tnsadmin/sqlnet.ora file with the following contents, after making the proper substitution for <HOMEDIR_PATH_HERE>:
WALLET_LOCATION =
(SOURCE=(METHOD=FILE)
(METHOD_DATA=(DIRECTORY=<HOMEDIR_PATH_HERE>/dbfs/wallet))
)
SQLNET.WALLET_OVERRIDE = TRUE
Ensure you substitute the correct path for the DIRECTORY attribute. You may not
use variables in this path - it must be a literal full path.
Copy the file to all nodes using dcli:
(oracle)$ dcli -g ~/dbs_group -l oracle -d $HOME/dbfs/tnsadmin -f $HOME/dbfs/tnsadmin/sqlnet.ora
Create a wallet directory on one database server as the oracle user. For example:
(oracle)$ mkdir -p $HOME/dbfs/wallet
Create an empty auto-login wallet:
(oracle)$ mkstore -wrl $HOME/dbfs/wallet -create
Add the necessary credentials to the wallet. The credentials can be specific for
the connect string used as shown here:
(oracle)$ mkstore -wrl $HOME/dbfs/wallet -createCredential fsdb.local dbfs_user dbfs_passwd
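To confirm that the credential was stored, an optional check is to list the wallet contents (mkstore prompts for the wallet password given at creation time); the output should show a single credential mapping fsdb.local to dbfs_user:
(oracle)$ mkstore -wrl $HOME/dbfs/wallet -listCredential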
Copy the wallet files to all database nodes.
(oracle)$ dcli -g ~/dbs_group -l oracle mkdir -p $HOME/dbfs/wallet
(oracle)$ dcli -g ~/dbs_group -l oracle -d $HOME/dbfs/wallet -f $HOME/dbfs/wallet/ewallet.p12
(oracle)$ dcli -g ~/dbs_group -l oracle -d $HOME/dbfs/wallet -f $HOME/dbfs/wallet/cwallet.sso
Ensure that the TNS entry specified above (fsdb.local in the example) is defined and working properly (checking with "TNS_ADMIN=/home/oracle/dbfs/tnsadmin tnsping fsdb.local" is a good test).


(oracle)$ dcli -g ~/dbs_group -l oracle "export ORACLE_HOME=/u01/app/oracle/prod
uct/11.2.0/dbhome_1; TNS_ADMIN=$HOME/dbfs/tnsadmin /u01/app/oracle/product/11.2.
0/dbhome_1/bin/tnsping fsdb.local | grep OK"
dm01db01: OK (20 msec)
dm01db02: OK (30 msec)
Download the mount-dbfs.sh script attached to this note and place it on one database server in a temporary location (like /tmp/mount-dbfs.sh). To ensure that the file transfer didn't modify the script contents, run dos2unix against it on the database server:
For Linux, run this:
(root)# dos2unix /tmp/mount-dbfs.sh
For Solaris: run these:
(root)# dos2unix /tmp/mount-dbfs.sh /tmp/mount-dbfs.sh.new
(root)# mv /tmp/mount-dbfs.sh.new /tmp/mount-dbfs.sh
Edit the variable settings at the top of the script for your environment. Edit or confirm the settings for the following variables in the script. Comments in the script will help you to confirm the values for these variables.
DBNAME
MOUNT_POINT
DBFS_USER
ORACLE_HOME (should be the RDBMS ORACLE_HOME directory)
LOGGER_FACILITY (used by syslog to log the messages/output from this script)
MOUNT_OPTIONS
DBFS_PASSWD (used only if WALLET=false)
DBFS_PWDFILE_BASE (used only if WALLET=false)
WALLET (must be true or false)
TNS_ADMIN (used only if WALLET=true)
DBFS_LOCAL_TNSALIAS
After editing, copy the script (rename it if desired or needed) to the proper directory (GI_HOME/crs/script) on database nodes and set proper permissions on it, as the root user:
(root)# dcli -g ~/dbs_group -l root -d /u01/app/11.2.0/grid/crs/script -f /tmp/mount-dbfs.sh
(root)# dcli -g ~/dbs_group -l root chown oracle:dba /u01/app/11.2.0/grid/crs/script/mount-dbfs.sh
(root)# dcli -g ~/dbs_group -l root chmod 750 /u01/app/11.2.0/grid/crs/script/mount-dbfs.sh
With the appropriate preparation steps for one of the two mount methods complete, the Clusterware resource for DBFS mounting can now be registered. Register the Clusterware resource by executing the following as the RDBMS owner of the DBFS repository database (typically the "oracle" user). The ORACLE_HOME and DBNAME should reference your Grid Infrastructure ORACLE_HOME directory and your DBFS repository database name, respectively. If mounting multiple filesystems, you may also need to modify the ACTION_SCRIPT and RESNAME. For more information, see the section below regarding Creating and Mounting Multiple DBFS Filesystems. Create this short script and run it as the RDBMS owner (typically "oracle") on only one database server in your cluster.
##### start script add-dbfs-resource.sh
#!/bin/bash
ACTION_SCRIPT=/u01/app/11.2.0/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mount
DBNAME=fsdb
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/11.2.0/grid

PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
-type local_resource \
-attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
SCRIPT_TIMEOUT=300"
##### end script add-dbfs-resource.sh
Then run this as the Grid Infrastructure owner (typically oracle) on one database server only:
(oracle)$ sh ./add-dbfs-resource.sh
When successful, this command has no output.
It is not necessary to restart the database resource at this point; however, you should review the following note regarding restarting the database now that the dependencies have been added.
Note: After creating the $RESNAME resource, in order to stop the $DBNAME database when the $RESNAME resource is ONLINE, you will have to specify the force flag when using srvctl. For example: "srvctl stop database -d fsdb -f". If you do not specify the -f flag, you will receive an error like this:
(oracle)$ srvctl stop database -d fsdb
PRCD-1124 : Failed to stop database fsdb and its services
PRCR-1065 : Failed to stop resource (((((NAME STARTS_WITH ora.fsdb.) && (NAME ENDS_WITH .svc)) && (TYPE == ora.service.type)) && ((STATE != OFFLINE) || (TARGET != OFFLINE))) || (((NAME == ora.fsdb.db) && (TYPE == ora.database.type)) && (STATE != OFFLINE)))
CRS-2529: Unable to act on 'ora.fsdb.db' because that would require stopping or relocating 'dbfs_mount', but the force option was not specified
Using the -f flag allows a successful shutdown and results in no output.
Also note that once the $RESNAME resource is started and the database it depends on is then shut down as shown above (with the -f flag), the database will remain down. However, if Clusterware is then stopped and started, because the $RESNAME resource still has a target state of ONLINE, it will cause the database to be started automatically when normally it would have remained down. To remedy this, ensure that $RESNAME is taken offline (crsctl stop resource $RESNAME) at the same time the DBFS database is shut down.
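For example, an orderly shutdown of both components, using the names from this note, would look like this (no -f flag is needed once the resource is offline):
(oracle)$ <GI_HOME>/bin/crsctl stop resource dbfs_mount
(oracle)$ srvctl stop database -d fsdb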
Managing DBFS mounting via Oracle Clusterware
After the resource is created, you should be able to see the dbfs_mount resource by running crsctl stat res dbfs_mount, and it should show OFFLINE on all nodes. For example:
(oracle)$ <GI_HOME>/bin/crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
dbfs_mount
               OFFLINE OFFLINE      dscbac05
               OFFLINE OFFLINE      dscbac06
To bring dbfs_mount online, which will mount the filesystem on all nodes, run crsctl start resource dbfs_mount from any cluster node. This will mount DBFS on all nodes. For example:
(oracle)$ <GI_HOME>/bin/crsctl start resource dbfs_mount
CRS-2672: Attempting to start 'dbfs_mount' on 'dscbac05'
CRS-2672: Attempting to start 'dbfs_mount' on 'dscbac06'
CRS-2676: Start of 'dbfs_mount' on 'dscbac06' succeeded
CRS-2676: Start of 'dbfs_mount' on 'dscbac05' succeeded
(oracle)$ <GI_HOME>/bin/crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
dbfs_mount
               ONLINE  ONLINE       dscbac05
               ONLINE  ONLINE       dscbac06
Once the dbfs_mount Clusterware resource is online, you should be able to observe the mount point with df -h on each node. Also, the default startup for this resource is "restore", which means that if it is online before Clusterware is stopped, it will attempt to come online after Clusterware is restarted. For example:
(oracle)$ df -h /dbfs_direct
Filesystem            Size  Used Avail Use% Mounted on
dbfs                  1.5M   40K  1.4M   3% /dbfs_direct
To unmount DBFS on all nodes, run this as the oracle user:
(oracle)$ <GI_HOME>/bin/crsctl stop res dbfs_mount
Note the following regarding restarting the database now that the dependencies have been added between the dbfs_mount resource and the DBFS repository database resource.
Note: After creating the dbfs_mount resource, in order to stop the DBFS repository database when the dbfs_mount resource is ONLINE, you will have to specify the force flag when using srvctl. For example: "srvctl stop database -d fsdb -f". If you do not specify the -f flag, you will receive an error like this:
(oracle)$ srvctl stop database -d fsdb
PRCD-1124 : Failed to stop database fsdb and its services
PRCR-1065 : Failed to stop resource (((((NAME STARTS_WITH ora.fsdb.) && (NAME ENDS_WITH .svc)) && (TYPE == ora.service.type)) && ((STATE != OFFLINE) || (TARGET != OFFLINE))) || (((NAME == ora.fsdb.db) && (TYPE == ora.database.type)) && (STATE != OFFLINE)))
CRS-2529: Unable to act on 'ora.fsdb.db' because that would require stopping or relocating 'dbfs_mount', but the force option was not specified
Using the -f flag allows a successful shutdown and results in no output.
Also note that once the dbfs_mount resource is started and the database it depends on is then shut down as shown above (with the -f flag), the database will remain down. However, if Clusterware is then stopped and started, because the dbfs_mount resource still has a target state of ONLINE, it will cause the database to be started automatically when normally it would have remained down. To remedy this, ensure that dbfs_mount is taken offline (crsctl stop resource dbfs_mount) at the same time the DBFS database is shut down.
Steps to Perform If Grid Home or Database Home Changes

There are several cases where the ORACLE_HOMEs used in the management or mounting of DBFS may change. The most common case is when performing an out-of-place upgrade or doing out-of-place patching by cloning an ORACLE_HOME. When the Grid Infrastructure ORACLE_HOME or RDBMS ORACLE_HOME changes, a few changes are required. The items that require changing are:
Modifications to the mount-dbfs.sh script. This is also a good time to consider updating to the latest version of the script attached to this note.
If using the wallet-based mount on Linux hosts, the shared libraries must be reset.
For example, if the new RDBMS ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/dbhome_1 *AND* the wallet-based mounting method using /etc/fstab is chosen, then the following commands will be required as the root user. If the default method (using dbfs_client directly) is used, these steps may be skipped.
(root)# dcli -l root -g ~/dbs_group rm -f /usr/local/lib/libnnz11.so /usr/local/lib/libclntsh.so.11.1
(root)# dcli -l root -g ~/dbs_group "cd /usr/local/lib; ln -sf /u01/app/oracle/product/11.2.0.2/dbhome_1/lib/libnnz11.so"
(root)# dcli -l root -g ~/dbs_group "cd /usr/local/lib; ln -sf /u01/app/oracle/product/11.2.0.2/dbhome_1/lib/libclntsh.so.11.1"
(root)# dcli -l root -g ~/dbs_group ldconfig
(root)# dcli -l root -g ~/dbs_group rm -f /sbin/mount.dbfs ### remove this, new deployments don't use it any longer
For all deployments, the mount-dbfs.sh script must be located in the new Grid Infrastructure ORACLE_HOME (<GI_HOME>/crs/script/mount-dbfs.sh). When the ORACLE_HOMEs change, the latest mount-dbfs.sh script should be downloaded from this note's attachments and deployed using the steps detailed earlier in this note (steps 14-16). Since the custom resource is already registered, it does not need to be registered again.
With the new script deployed into the correct location in the new ORACLE_HOME, the next step is to modify the cluster resource to change the location of the mount-dbfs.sh script. Also, if not already configured, take the opportunity to set RESTART_ATTEMPTS=10. Use these commands, which should be run from any cluster node (replace <NEW_GI_HOME> with the full path as appropriate):
(oracle)$ crsctl modify resource dbfs_mount -attr "ACTION_SCRIPT=<NEW_GI_HOME>/c
rs/script/mount-dbfs.sh"
(oracle)$ crsctl modify resource dbfs_mount -attr "RESTART_ATTEMPTS=10"
After these changes are complete, verify that the status of the resources is still online. This concludes the changes required when the ORACLE_HOMEs change.
Removing DBFS configuration
The steps in this section deconfigure only the components that were configured by the procedure above.
Stop the dbfs_mount resource in Clusterware as the oracle user.
(oracle)$ <GI_HOME>/bin/crsctl stop resource dbfs_mount
CRS-2673: Attempting to stop 'dbfs_mount' on 'dadzab06'
CRS-2673: Attempting to stop 'dbfs_mount' on 'dadzab05'
CRS-2677: Stop of 'dbfs_mount' on 'dadzab05' succeeded
CRS-2677: Stop of 'dbfs_mount' on 'dadzab06' succeeded
Confirm that the resource is stopped and then remove the Clusterware resource for dbfs_mount as the oracle (or Grid Infrastructure owner) user.

(oracle)$ <GI_HOME>/bin/crsctl stat resource dbfs_mount -t


--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
dbfs_mount
               OFFLINE OFFLINE      dadzab05
               OFFLINE OFFLINE      dadzab06

(oracle)$ <GI_HOME>/bin/crsctl delete resource dbfs_mount


If a wallet was used, remove the /home/oracle/dbfs directory and subdirectories
as the oracle user.
(oracle)$ dcli -g dbs_group -l oracle rm -rf $HOME/dbfs
Remove the custom action script that supported the resource and the /etc/fuse.conf file as the root user.
(root)# dcli -g dbs_group -l root rm -f /u01/app/11.2.0/grid/crs/script/mount-dbfs.sh /etc/fuse.conf
Remove the mount point directory as the root user.
(root)# dcli -g dbs_group -l root rmdir /dbfs_direct
On Linux servers only, modify the group memberships for the oracle user account. This assumes that you have not added any other group memberships for the oracle account (other than the defaults configured during deployment). The following is an example showing that the group memberships that remain do not include the fuse group. In this case, the oracle user was a member of the oinstall, dba, oper, and asmdba groups in addition to the fuse group. The group memberships for the oracle user may vary, so some modification to this example command may be required.
(root)# dcli -g dbs_group -l root usermod -G oinstall,dba,oper,asmdba oracle
On Linux servers only, if a wallet was used, follow these steps to remove wallet-specific configuration changes:
(root)# dcli -g ~/dbs_group -l root 'sed -i "/^\/sbin\/mount.dbfs/d" /etc/fstab'
(root)# dcli -g ~/dbs_group -l root rm -f /sbin/mount.dbfs
(root)# dcli -g ~/dbs_group -l root 'cd /usr/local/lib; rm -f libclntsh.so.11.1 libfuse.so.2 libnnz11.so'
(root)# dcli -g ~/dbs_group -l root 'sed -i "/^\/usr\/local\/lib$/d" /etc/ld.so.conf.d/usr_local_lib.conf'
(root)# dcli -g ~/dbs_group -l root ldconfig
On Solaris servers only, remove the line from /etc/user_attr that was added in this procedure by executing the following command as root:
(root)# dcli -g ~/dbs_group -l root 'sed "/^oracle::::/d" /etc/user_attr > /tmp/user_attr.new ; cp /tmp/user_attr.new /etc/user_attr ; rm -f /tmp/user_attr.new'
The DBFS repository objects remain. You may either:
Delete the DBFS repository database using DBCA once the steps above are completed.
Remove the DBFS repository by connecting to the database as the repository owner using SQL*Plus and running @?/rdbms/admin/dbfs_drop_filesystem <filesystem-name>. In the example in this note, the filesystem-name is FS1, so the command would be @?/rdbms/admin/dbfs_drop_filesystem FS1
SQL> connect dbfs_user/dbfs_passwd

SQL> @?/rdbms/admin/dbfs_drop_filesystem FS1


SQL> connect / as sysdba
SQL> drop user dbfs_user cascade;
Creating and Mounting Multiple DBFS Filesystems
There are several ways to create additional DBFS filesystems. Some environments may wish to have more than one DBFS filesystem to support non-direct_io. DBFS filesystems may always hold shell script files or binary files, but if mounted with the direct_io option, the files on DBFS will not be executable. In such cases, a second DBFS filesystem may be used since it can be mounted without the direct_io option to support executable files or scripts.
There is nothing "inside" the DBFS filesystem that makes it direct_io or non-dir
ect. Instead, to change from one type of access to the other, the filesystem sho
uld be unmounted (using the CRS resource), mount options changed in the mount-db
fs.sh script on all nodes, and then the filesystem mounted again (using the CRS
resource).
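For illustration only, the difference typically comes down to the MOUNT_OPTIONS variable in mount-dbfs.sh; the exact values below are examples and should be confirmed against the comments in your copy of the script:
MOUNT_OPTIONS=allow_other,direct_io    # suited to data transfer; files are not executable
MOUNT_OPTIONS=allow_other              # no direct_io; scripts/binaries on DBFS can be executed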
Let's review some high-level points related to multiple DBFS filesystems.
Create additional filesystems under the same DBFS repository owner (database user).
The additional filesystems will show as sub-directories named after the filesystem names given during creation of the filesystem (the second argument to the dbfs_create_filesystem_advanced script).
There is only one mount point for all filesystems created in this way.
Only one mount-dbfs.sh script needs to be configured.
All filesystems owned by the same DBFS repository owner will share the same mount options (i.e. direct_io).
Create another DBFS repository owner (database user) and new filesystems under that owner.
Can be in the same database with other DBFS repository owners or in a completely separate database.
Completely separate: can use different tablespaces (which could be in different diskgroups), separate mount points, possibly different mount options (direct_io versus non-direct_io).
One DBFS filesystem has no impact on others in terms of administration or dbfs_client start/stop.
Requires a new mount point to be created and used.
Requires a second mount-dbfs.sh to be created and configured in Clusterware.
Also supports having completely separate ORACLE_HOMEs with possibly different software owner (Linux/Solaris) accounts managing the repository databases.
To configure option #1 above, follow these steps:
It is recommended (but optional) to create a new tablespace for the new DBFS filesystem you are creating.
Connect to the DBFS repository as the current owner (dbfs_user is the example owner used in this note) and then run the dbfs_create_filesystem (or dbfs_create_filesystem_advanced) script again using a different filesystem name (the 2nd argument); see the example after these steps.
The filesystem will appear as another subdirectory just below the chosen mount point.
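For example, to add a second filesystem named FS2 in the existing dbfsts tablespace (names are illustrative), reconnect as the repository owner and run the creation script again:
(oracle)$ cd $ORACLE_HOME/rdbms/admin
(oracle)$ sqlplus dbfs_user/dbfs_passwd
SQL> start dbfs_create_filesystem dbfsts FS2
Once mounted, FS2 appears as /dbfs_direct/FS2 alongside /dbfs_direct/FS1.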
To configure option #2 above, follow these steps:
Optionally create a second DBFS repository database.
Create a new tablespace and a DBFS repository owner account (database user) for
the new DBFS filesystem as shown in step 4 above.
Create the new filesystem using the procedure shown in step 5 above, substituting the proper values for the tablespace name and desired filesystem name.
If using a wallet, you must create a separate TNS_ADMIN directory and a separate wallet. Be sure to use the proper ORACLE_HOME, ORACLE_SID, username and password when setting up those components.


Ensure you use the latest mount-dbfs.sh script attached to this note. Updates were made on 7-Oct-2010 to support multiple filesystems. If you are using a previous version of this script, download the new version and, after applying the necessary configuration modifications in it, replace your current version.
To have Clusterware manage a second filesystem mount, use a second copy of the mount-dbfs.sh script. Rename it to a unique file name like mount-dbfs2.sh and place it in the proper directory as shown in step 16 above. Once mount-dbfs2.sh has been properly modified with the proper configuration information, a second Clusterware resource (with a unique name) should be created. The procedure for this is outlined in step 17 above.
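As a sketch, only a few variables at the top of the registration script from step 17 need to change for the second resource; the names dbfs_mount2, mount-dbfs2.sh, and fsdb2 below are assumptions to adapt to your environment (if the second filesystem lives in the same repository database, leave DBNAME unchanged):
ACTION_SCRIPT=/u01/app/11.2.0/grid/crs/script/mount-dbfs2.sh
RESNAME=dbfs_mount2
DBNAME=fsdb2
The crsctl add resource command and its dependency attributes remain the same as in add-dbfs-resource.sh.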
The remaining steps in this note (configuration, starting, stopping) relate to a single DBFS filesystem resource. If you create additional DBFS filesystem resources, you will need to start each of them individually at startup time (i.e. when the database is restarted). Starting or stopping one DBFS resource does not have any effect on other DBFS resources you have configured.
Troubleshooting Tips
When configuring DBFS, if the Clusterware resource(s) do not mount the DBFS filesystem successfully, it may be useful to run the mount-dbfs.sh script directly from the command line on one node. You should run this as the RDBMS owner user, specifying one of the three arguments shown:
<GI_HOME>/crs/script/mount-dbfs.sh [ start | stop | check ]
Your script may have a slightly different name, especially if you are deploying multiple filesystems. Often, running the script in this way will display errors that may otherwise not be reported by Clusterware.
Also, starting with the 28-Jan-2011 version, you will find messages related to DBFS in the /var/log/messages (on Linux) or /var/adm/messages (on Solaris) file, tagged with the string DBFS_<mountpoint> for easy identification.
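For example, on a Linux node using the /dbfs_direct mount point from this note, the relevant messages could be extracted with something like:
(root)# grep "DBFS_/dbfs_direct" /var/log/messages | tail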
If you encounter issues when mounting the DBFS filesystem, it may be necessary to unmount the filesystem manually. To unmount the filesystem, run "fusermount -u /dbfs_direct" (on Linux) on the node(s) having problems and then make sure that the dbfs_client (or mount.dbfs if using the second mounting option) process is not running. When using "fusermount -u /dbfs_direct" to unmount the filesystem, if the client (dbfs_client or mount.dbfs) is still running, that process should be killed. On Solaris, use "umount /dbfs_direct" instead of "fusermount -u /dbfs_direct".
Normally, if no mounts are present, there should be an empty directory at /dbfs_direct (ls -l /dbfs_direct). When running "ls -l /dbfs_direct", if an error message like "Transport Endpoint is not connected" is observed, this may indicate that the DBFS client (dbfs_client) is no longer running but fuse still has a record of the mount. Often, this will be resolved by using "fusermount -u /dbfs_direct" and ensuring that the client process is no longer running before re-attempting the mount operation using one of the methods outlined in this note.
Other items that can lead to the "Transport Endpoint is not connected" error:
- the sqlnet.ora file is missing
- the dbfs_user password contains a dollar sign, so the password supplied to the mkstore wallet configuration command must be enclosed in single quotes
- TNS_ADMIN was not defined prior to running the mkstore command; it must be set because the connection to the database is made via the SQL*Net connect string fsdb.local
If attempting to umount the filesystem results in errors saying "device busy", then you may try using "fusermount -u -z /dbfs_direct" (on Linux) and then identify the dbfs_client and/or mount.dbfs programs that are still running and kill them. Then, you should be able to mount the filesystem again.

On Solaris hosts, if you want to inspect the arguments for dbfs_client to see what options it was invoked with, you will want to identify the process ID using "ps -ef | grep dbfs_client", but then you'll need to use the "pargs <PID>" command to see the complete options. The Solaris output for the ps command truncates the command line at 80 characters, which is typically not enough to display all options.
If you receive an error saying "File system already present at specified mount point <mountpoint>", then ensure that the mount point directory is empty. If there are any contents in the mount point directory, this error will prevent the filesystem mount from succeeding. Seasoned system administrators will note that this behavior differs from typical filesystem mounts, where mount point directories can have contents and those contents are hidden while the mounted filesystem remains mounted; in other words, the new mount "overlays" them. With fuse-mounted filesystems, the mount point directory must be empty prior to mounting the fuse (in this case, DBFS) filesystem.
