
Add disk in VCS shared disk group

Created: 26 Mar 2010 | Updated: 25 Sep 2010 | 25 comments


Texas78155
This issue has been solved. See solution.
We want to migrate data from array1 to array2. We are using AIX 5.3.
Could you please confirm the following steps for a cluster environment?
1. Add the new disks to the disk group.
2. Mirror the disk group.
3. Check the mirrors.
4. Remove the old mirrors.
5. Remove the old disks.
If you need any other info, please let me know.
Discussion Filed Under: Storage and Clustering, Cluster Server, Application monitoring, Clustering, Disaster recovery, Failover, High availability, Service group
Marianne van den Berg Trusted Advisor Partner Accredited
26 Mar 2010
The procedure looks fine. Just verify that the new array is visible on all cluster nodes.
Test failover after mirror synchronization has completed.
Remove the mirrors and the old disks only after all tests have completed successfully.
Texas78155
26 Mar 2010
Could you please give me the steps, with commands and an example?

Texas78155
26 Mar 2010
Do we also need to freeze the service group in VCS?
Texas78155
26 Mar 2010
Please check my procedure and let me know whether I also need to freeze the cluster group. It's a VCS environment; do I need to update anything in the cluster?
# vxdmpadm listenclosure all
It will show the arrays.
# vxdmpadm listctlr all
It will show the controllers.
# vxdisk -o alldgs -e list
It will show all disks.
Find the new LUNs, initialize them in VxVM, and add them to the dg. Initialize the chosen LUNs:
# vxdisksetup -i diskname
# vxdg -g dg_name adddisk dg_name03=diskname
Check the status of the new LUNs:
# vxdisk -o alldgs list | egrep "new_disks"
# vxprint -thg dg_name
# vxdisk -g dg_name list
Mirror the volume:
# vxassist -g dg_name mirror volume1 mirror=new_enclr
Check the mirror:
# vxprint -thg dg_name
Remove the mirrors:
# vxassist -g dg_name remove mirror volume1 !enclr:old_encl_name
# vxprint -thg whitedg (check volume1)
Remove the old disk from the disk group:
# vxdg -g dg_name rmdisk whitedg01
# vxdisk -g dg_name list
# vxdisk -o alldgs list
# vxdmpadm listenclosure all
Marianne van den Berg Trusted Advisor Partner Accredited
29 Mar 2010
Nothing needs to be updated in the VCS config. Only the disk group, volumes and mount points are cluster resources, not the actual disks.
Your procedure looks good - I would just add 'vxdctl enable' before listing disks on all nodes.
Freeze the SG while the mirrors are being synchronized. Once done, unfreeze and test failover/switch of the SG to all nodes.
Alternative method to remove a mirror, as per the TN posted in the Storage Foundation Forum:
vxplex -g mydg -o rm dis data04-01
Test failover again once the original array is removed.
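For illustration, a minimal sketch of that freeze/rescan/unfreeze sequence (the service group, volume and node names below are placeholders, not taken from this cluster):
# hagrp -freeze service_group                 <- non-persistent freeze; cleared if VCS restarts
# vxdctl enable                               <- rescan devices (run on every cluster node)
# vxassist -g dg_name mirror volume1 mirror=new_enclr
# vxtask list                                 <- the ATCOPY task shows mirror resync progress
# hagrp -unfreeze service_group
# hagrp -switch service_group -to other_node  <- test switch of the SG to each node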
Texas78155
11 May 2010
Could you please let me know how to migrate a log-only plex?
pl abclv-02 abclv ENABLED ACTIVE LOGONLY CONCAT - RW
sd xyzdg05-12 abclv-02 xyzdg05 34767152 448 LOG EMC0_76 ENA
How can I migrate the above plex to a new disk?
Texas78155
29 Mar 2010
What about fencedg? How do I migrate that one? I need steps.
g_lee Trusted Advisor
29 Mar 2010
What version(s) of SF + VCS are you using?
The following link gives the procedure for replacing I/O fencing coordinator disks when the cluster is online, for SF 5.1:
http://sfdoccentral.symantec.com/sf/5.1/aix/html/s...
If you are using a version earlier than SF 5.1, the replacement has to be done with the cluster offline, and may require a reboot for the changes to take effect.
eg: for SF 5.0 - see Adding or removing coordinator disks, pp 416-417
http://sfdoccentral.symantec.com/sf/5.0/aix/pdf/sf...
edit: in one of your duplicate posts you've indicated you're using VCS 5.0MP3 - so refer to this link for further details:
http://sfdoccentral.symantec.com/sf/5.0MP3/aix/htm...
RIshi Agrawal
23 Aug 2010
Hi g_lee,
Do we have similar articles for the Linux platform as well? Kindly guide me to the links.
g_lee Trusted Advisor
24 Aug 2010
As you haven't mentioned the version you're using, the links below are for SF 5.1 - if you are using a different version, refer to https://vos.symantec.com/documents to find the relevant documents for your version.
Replacing I/O fencing coordinator disks when the cluster is online, for SF 5.1:
https://vos.symantec.com/public/documents/sf/5.1/l...
If you are using a version earlier than SF 5.1, the replacement has to be done with the cluster offline, and requires a reboot for the changes to take effect.
5.0MP3 - Adding or removing coordinator disks:
https://vos.symantec.com/public/documents/sf/5.0MP3/linux/html/sfcfs_admin/ch05s01s04.htm
Texas78155
29 Mar 2010
We have the following VxVM products installed:
VRTSaa 5.0.28.574 COMMITTED Veritas Enterprise
VRTSacclib.rte 5.0.3.0 COMMITTED Veritas Cluster Server 5.0MP3
VRTSalloc 5.0.3.0 COMMITTED Veritas Storage Foundation
VRTSat.client 4.3.34.0 COMMITTED Symantec Product
VRTSat.server 4.3.34.0 COMMITTED Symantec Product
VRTSccg 5.0.28.574 COMMITTED Veritas Enterprise
VRTScmccc.rte 5.0.1.0 COMMITTED Veritas Cluster Management
VRTScmcs.rte 5.0.1.0 COMMITTED Veritas Cluster Management
VRTScscm.rte 5.0.3.0 COMMITTED Veritas Cluster Manager - Java
VRTScscw.rte 5.0.0.0 COMMITTED Veritas Cluster Server
VRTScssim.rte 5.0.3.0 COMMITTED Veritas Cluster Server 5.0MP3
VRTScutil.rte 5.0.0.0 COMMITTED Symantec Veritas Cluster
VRTSdcli 5.0.3.0 COMMITTED Storage Foundation Distributed
VRTSddlpr 5.0.3.0 COMMITTED Veritas Device Discovery
VRTSdsa 5.0.0.0 COMMITTED Veritas Datacenter Storage
VRTSfsman 5.0.3.0 COMMITTED Veritas File System Manual
VRTSfsmnd 5.0.3.0 COMMITTED Veritas File System SDK Manual
VRTSfspro 5.0.3.0 COMMITTED Veritas File System Services
VRTSfssdk 5.0.3.0 COMMITTED Veritas Libraries and Header
VRTSgab.rte 5.0.3.0 COMMITTED Veritas Group Membership and
VRTSgapms.VRTSgapms 4.4.14.0 COMMITTED Veritas Generic Array Plugin
VRTSicsco 1.3.18.4 COMMITTED Symantec Infrastructure Core
VRTSjre15.rte 1.5.4.1 COMMITTED Symantec JRE Redistribution
VRTSllt.rte 5.0.3.0 COMMITTED Veritas Low Latency Transport
VRTSmapro 5.0.0.0 COMMITTED Veritas Storage Foundation
VRTSmh 5.0.28.499 COMMITTED Veritas Enterprise
VRTSob 3.3.721.481 COMMITTED Veritas Enterprise
VRTSobc33 3.3.721.481 COMMITTED Veritas Enterprise
VRTSobgui 3.3.721.481 COMMITTED Veritas Enterprise
VRTSpbx 1.3.17.10 COMMITTED Symantec Private Branch
VRTSperl.rte 5.0.2.0 COMMITTED Perl 5.8.8 for Veritas
VRTSspt 5.0.3.0 COMMITTED Veritas Support Tools by
VRTSvail.VRTSvail 4.4.56.0 COMMITTED Veritas Array Providers
VRTSvcs.man 5.0.3.0 COMMITTED Manual Pages for Veritas
VRTSvcs.msg.en_US 5.0.3.0 COMMITTED Veritas Cluster Server 5.0MP3
VRTSvcs.rte 5.0.3.0 COMMITTED Veritas Cluster Server 5.0MP3
VRTSvcsag.rte 5.0.3.0 COMMITTED Veritas Cluster Server 5.0MP3
VRTSvcsvr 5.0.0.0 COMMITTED Veritas Cluster Server Volume
VRTSvdid.rte 1.2.206.0 COMMITTED Veritas Device Identifier
VRTSveki 5.0.3.0 COMMITTED Veritas Kernel Interface by
VRTSvlic 3.2.33.0 COMMITTED VRTSvlic Symantec License
VRTSvmman 5.0.3.0 COMMITTED manual pages for Veritas
VRTSvmpro 5.0.3.0 COMMITTED Veritas Volume Manager Servs
VRTSvrpro 5.0.3.0 COMMITTED VERITAS Volume Replicator
VRTSvrw 5.0.1.0 COMMITTED Veritas Volume Replicator Web
VRTSvxfen.rte 5.0.3.0 COMMITTED Veritas I/O Fencing 5.0MP3 by
VRTSvxfs 5.0.3.0 COMMITTED Veritas File System by
VRTSvxmsa 4.4.0.10 COMMITTED VERITAS - VxMS Mapping
VRTSvxvm 5.0.3.0 COMMITTED Veritas Volume Manager by
VRTSweb.rte 5.0.1.0 COMMITTED Symantec Web Server
Texas78155
29 Mar 2010
When I use this command, I get this output:
[pbgxap00006][/root]>vxfenadm -d
VXFEN vxfenadm ERROR V-11-2-1115 Local node is not a member of cluster!
[pbgxap00006][/root]>hastatus -sum
-- SYSTEM STATE
-- System State Frozen
A pbgxap00006 RUNNING 0
A pbgxap00007 RUNNING 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
B ClusterService pbgxap00006 Y N ONLINE
B ClusterService pbgxap00007 Y N OFFLINE
B grp_dev_genesys pbgxap00006 Y N PARTIAL
B grp_dev_genesys pbgxap00007 Y N OFFLINE
B grp_fit_genesys pbgxap00006 Y N OFFLINE
B grp_fit_genesys pbgxap00007 Y N PARTIAL
B mnicb_grp pbgxap00006 Y N ONLINE
B mnicb_grp pbgxap00007 Y N ONLINE
-- RESOURCES FAILED
-- Group Type Resource System
C grp_dev_genesys Application app_dev_gen pbgxap00006
C grp_fit_genesys Application app_fit_gen pbgxap00006
C grp_fit_genesys Application app_fit_gen pbgxap00007
But failover is working fine; I tested it. I didn't install this cluster, but I have to support it, and it's very confusing.

g_lee Trusted Advisor
29 Mar 2010
The vxfenadm output appears to indicate fencing may not be configured/started.
Check the GAB membership to see whether fencing is actually configured/started and what is actually running:
# gabconfig -a
edit: also, what is configured in main.cf?
# cat /etc/VRTSvcs/conf/config/main.cf
SOLUTION
Texas78155
29 Mar 2010
gabconfig -a
GAB Port Memberships
===============================================================
Port a gen 26fa05 membership 01
Port h gen 26fa0b membership 01
What should I see in the main.cf file?
g_lee Trusted Advisor
29 Mar 2010
gabconfig shows no fencing membership (no port b membership) - which would be why vxfenadm -d indicated the node was not a member.
If your cluster is configured to use fencing, you would see the following line in the cluster definition in main.cf:
UseFence = SCSI3
as per: Modifying VCS configuration to use I/O fencing - Setting up I/O fencing ( http://sfdoccentral.symantec.com/sf/5.0MP3/aix/htm... )
Look at the info under Setting up I/O fencing - http://sfdoccentral.symantec.com/sf/5.0MP3/aix/htm...
If your configuration doesn't have the details shown in that section, that's probably a good indication your cluster isn't using fencing.
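For illustration, a fencing-enabled cluster definition in main.cf would look roughly like this (the cluster name, user name and password hash below are made-up placeholders):
cluster clus1 (
        UserNames = { admin = <encrypted-password> }
        Administrators = { admin }
        UseFence = SCSI3
        )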
Texas78155
29 Mar 2010
But I can see the vxfenmode file:
#
# scsi3_disk_policy determines the way in which I/O Fencing communicates with
# the coordination disks.
#
# available options:
# dmp - use dynamic multipathing
# raw - connect to disks using the native interface
#
scsi3_disk_policy=raw
#
# vxfen_mode determines in what mode VCS I/O Fencing should work.
#
# available options:
# scsi3 - use scsi3 persistent reservation disks
# customized - use script based customized fencing
# disabled - run the driver but don't do any actual fencing
#
vxfen_mode=scsi3
And see the output below:
vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
Disk_0 auto:LVM - - LVM
Disk_1 auto:LVM - - LVM
hdiskpower0 auto:cdsdisk EMC0_12 devgenesysdg online
hdiskpower1 auto:cdsdisk EMC0_11 devgenesysdg online
hdiskpower2 auto:LVM - - LVM
hdiskpower3 auto:LVM - - LVM
hdiskpower4 auto - - LVM
hdiskpower5 auto:LVM - - LVM
hdiskpower6 auto:cdsdisk - (fitgenesysdg) online
hdiskpower7 auto:cdsdisk - (fitgenesysdg) online
hdiskpower8 auto:cdsdisk - (fendg) online
hdiskpower9 auto:cdsdisk - (fendg) online
hdiskpower10 auto:cdsdisk - (fendg) online
hdiskpower11 auto:cdsdisk - - online
hdiskpower12 auto:cdsdisk - - online
I am not able to understand what fendg is if we are not using fencing.

Marianne van den Berg Trusted Advisor Partner Accredited
29 Mar 2010
It seems someone started to configure fencing (by creating the disk group) but never completed the setup. g_lee has posted links to help you understand fencing.
Texas78155
30 Mar 2010
So that means I don't need to migrate fencedg; I can just remove it, and just mirror the data (disk group). One more thing:
if I use the command below,
vxplex -g mydg -o rm dis data04-01
can I then remove the disk from the dg?
g_lee Trusted Advisor
31 Mar 2010
Texas78155,
As Marianne has pointed out, it appears whoever set up the cluster started to set up fencing but never completed it, so fencing is not being used in your cluster.
To verify nothing is using these disks:
# vxdisk list <disk_in_fendg>
eg:
# vxdisk list hdiskpower8
Device: hdiskpower8
devicetag: hdiskpower8
type: auto
hostid:
[...]
^^^^^^ Check the hostid field. If the dg is not imported anywhere, it should be blank. If it lists a hostname, the dg is imported on that host, so you need to check that host to see whether the dg needs migrating.
If nothing is using fendg, do you still need it? If you still want the dg migrated, you will need to import it and add the new disks / remove the old disks from the group (if fendg is/was actually used for fencing, it should have no data / there should be nothing to mirror).
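If you do decide to migrate fendg, a minimal sketch (the disk media and device names below are placeholders; -t makes the import temporary so the group is not auto-imported at boot):
# vxdg -t import fendg
# vxdisksetup -i hdiskpowerN                  <- initialize a replacement LUN on the new array
# vxdg -g fendg adddisk fendg04=hdiskpowerN
# vxdg -g fendg rmdisk fendg01
# vxdg deport fendg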
Kimberley Admin
31 Mar 2010
This thread is repeated, with comments from community members, in two other posts in the forum. Thanks to g_lee for pointing this out. To view all comments, please check out the other threads listed below.
For those posting forum topics, please make sure that you post only one forum post per topic. This ensures that anyone in future can follow the discussion, see everyone's responses, comments and suggestions in one thread, and, if there's a solution, that it's clearly identifiable.
http://www.symantec.com/connect/forums/add-disk-vc...
http://www.symantec.com/connect/forums/data-move-a...
Best,
Kimberley
Community Manager
Thanks for participating in the community!
Jaeha Symantec Employee
31 Mar 2010
It seems like your VCS setup doesn't have any cluster volume manager or filesystem; it's just simple VCS with failover service groups. So vxfen is optional, and not required unless you want additional protection from split-brain.
Mirroring itself does not require a service group freeze, as mirroring does not create any additional load for vxconfigd. But I would recommend freezing the service group if you need to run "vxdctl enable", which forces rediscovery of all devices; that makes vxconfigd busy, which delays all vx commands and can cause monitor timeouts.
Jaeha Symantec Employee
31 Mar 2010
Just one more thing: 'vxassist mirror' will not mirror a log plex, so if any volume has DRL/DCO logs, you have to add the log on a new disk using the command below:
# vxassist -g <DG> addlog <VOLUME> logtype=<dcm or drl or dco> alloc=<new disk>
If none of the volumes have any mirrors, replication or snapshots, then you don't have to worry about the logs.
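For example, with the disk group, volume and new-disk names from the vxprint/vxdg output posted later in this thread (and assuming those LOGONLY plexes are DRL logs), the command would look like:
# vxassist -g ecgora01dg addlog ora250lv logtype=drl alloc=EMC2_22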
Texas78155
11 May 2010
How do I see what type of log it is? See below, and let me know how to create the log.
Do I need a separate disk for the log plex? And how do I create a LOG plex on a new disk with the same size as the existing one? See the specification below:
pl ora250lv-01 ora250lv ENABLED ACTIVE LOGONLY CONCAT - RW
sd ecgora01dg03-04 ora250lv-01 ecgora01dg03 25166368 512 LOG EMC0_65 ENA
pl ora250lv-02 ora250lv ENABLED ACTIVE LOGONLY CONCAT - RW
sd ecgora01dg04-04 ora250lv-02 ecgora01dg04 34157152 512 LOG EMC0_66 ENA
pl ora250lv-04 ora250lv ENABLED ACTIVE 44040192 CONCAT - RW
sd EMC2_22-01 ora250lv-04 EMC2_22 0 44040192 0 EMC2_22 ENA
vxdg -g ecgora01dg free
DISK DEVICE TAG OFFSET LENGTH FLAGS
EMC2_20 EMC2_20 EMC2_20 104857600 36551936 - new
EMC2_21 EMC2_21 EMC2_21 63507456 77902080 - new
EMC2_22 EMC2_22 EMC2_22 44040192 97369344 - new
EMC2_23 EMC2_23 EMC2_23 62914560 78494976 - new
EMC2_24 EMC2_24 EMC2_24 27269088 114140448 - new
ecgora01dg03 EMC0_65 EMC0_65 0 25165824 - old
ecgora01dg03 EMC0_65 EMC0_65 25166880 10183776 - old
ecgora01dg04 EMC0_66 EMC0_66 0 25165824 - old
ecgora01dg04 EMC0_66 EMC0_66 25166368 8990784 - old
ecgora01dg04 EMC0_66 EMC0_66 34157664 1192992 - old
And I have plenty of space on the above disks.

g_lee Trusted Advisor
12 May 2010
https://www-secure.symantec.com/connect/forums/how... << new duplicate thread
https://www-secure.symantec.com/connect/forums/how... << an old thread which you have bumped with similar info
From the output extract, it appears these are traditional DRL log plexes (ie: not DCO) - look at the vxprint -htrg <dg> output to confirm.
re: how to create the log, Jaeha has already told you this - from the reply above:
add the log on a new disk using the command below:
# vxassist -g <DG> addlog <VOLUME> logtype=<dcm or drl or dco> alloc=<new disk>
For the post you created in the old thread, re: why log plex creation was failing - as the error indicated, there was overlap with an existing subdisk:
--------------------
>vxdg -g ecgora01dg free
DISK DEVICE TAG OFFSET LENGTH FLAGS
EMC2_20 EMC2_20 EMC2_20 104857600 36551936 -
EMC2_21 EMC2_21 EMC2_21 63507456 77902080 -
EMC2_22 EMC2_22 EMC2_22 44040192 97369344 -
EMC2_23 EMC2_23 EMC2_23 62914560 78494976 -
EMC2_24 EMC2_24 EMC2_24 27269088 114140448 -
ecgora01dg01 EMC0_58 EMC0_58 0 35350656 -
ecgora01dg02 EMC0_60 EMC0_60 0 35350656 -
ecgora01dg03 EMC0_65 EMC0_65 0 25165824 -
ecgora01dg03 EMC0_65 EMC0_65 25166880 10183776 -
ecgora01dg04 EMC0_66 EMC0_66 0 25165824 -
ecgora01dg04 EMC0_66 EMC0_66 25166368 8990784 -
ecgora01dg04 EMC0_66 EMC0_66 34157664 1192992 -
[pbsxcm00024][/]>
[pbsxcm00024][/]>vxmake -g ecgora01dg sd EMC2_24-07 EMC2_24,25166368,512
VxVM vxmake ERROR V-5-1-10127 creating subdisk EMC2_24-07:
Subdisk EMC2_24-07 would overlap subdisk EMC2_24-06
But I am getting the above error.
--------------------
You are trying to create a subdisk on EMC2_24, where free space starts at offset 27269088.
The syntax expected by vxmake is:
medianame,offset[,len]
Thus, in the command above, the offset used is 25166368 - which is less than 27269088; ie: it's not free space, it's used by another subdisk, as the error indicated.
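Using the free-space offset from the vxdg free output above (free space on EMC2_24 starts at 27269088) and the same 512-sector length, a corrected command would be:
# vxmake -g ecgora01dg sd EMC2_24-07 EMC2_24,27269088,512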
Jack Lee
31 Mar 2010
I think whoever installed VCS tried to configure fencing, but it does not seem to be working.
Port "b" is not visible with "gabconfig -a"; port "b" is associated with fencing, but it can't be seen.
You need to execute a few more commands to check the fencing status.
Can you check the following files?
/etc/vxfendg <-- this file contains the DG name for fencing
/etc/vxfentab <-- this file contains the disks of the DG for fencing
Additionally, you can check the fencing keys with the following command:
# vxfenadm -g all -f /etc/vxfentab <-- checks the value of the key on all disks in the file
