
Storage D1000 Installation with Dual Hosts

SCTIMST, Trivandrum
Scope of the work
1. Connectivity of the D1000 with two V480 servers. To achieve this we need to install
the HBA cards in both servers.
2. Creation of 6 partitions in the storage: /cache1, /cache2, /cache3, /cache4, /dbase,
/dbase/arch. The /dbase and /dbase/arch partitions will have a capacity of 100 GB each,
and /cache1, /cache2, /cache3 and /cache4 will have 200 GB each.
3. Disconnection of the existing Storage (D220)
4. Mounting of /cache1, /cache2, /cache3 and /cache4 on the V480 server with
hostname NWG1. Please take a backup of the old vfstab before making any
changes.
5. Mounting of /dbase and /dbase/arch on the V480 server with hostname OS300 and
editing the vfstab entries. Please take a backup of the old vfstab before making
any changes.
6. Performance testing

Installation steps.
Install the StorEdge D1000 with dual host SF V480. One V480 hostname is NWG1
(172.16.105.1) and the other V480 hostname is OS300 (172.16.105.2). The D1000,
having 8 x 146 GB HDDs, has to be configured to both hosts as follows.

Steps:
1. Both V480s are configured with the HVD SCSI card, part no. 375-0006 (375-0005 is
the LVD SCSI card).
2. On one of the V480s (NWG1), the SCSI initiator id is changed from the default 7 to 6.
Procedure for changing the SCSI initiator id from the default 7 to 6 (at the OK prompt):
a)

OK show-devs (will show the devices.)

Here we connected the SCSI card to the 5th PCI slot of both servers, so the
instance names come up as /pci@8,700000/scsi@5 and /pci@8,700000/scsi@5,1.
To change the SCSI initiator id to 6 for this particular card, the
following steps have to be done.
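An illustrative excerpt of the show-devs output on this hardware (abridged; the real list is much longer):

OK show-devs
...
/pci@8,700000/scsi@5
/pci@8,700000/scsi@5,1
...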
b)

Edit the nvramrc script to change the scsi-initiator-id for the host. (In this
scenario a PCI-based system is used; if it is SBus, the device paths in the
script will change.)

(Note: Insert exactly one space after the first double quote and before scsi-initiator-id;
in Forth the double quote is a separate word, and the space after it delimits it from the
string that follows.)
{0} ok nvedit
0: probe-all install-console banner
1: cd /pci@8,700000/scsi@5
2: 6 " scsi-initiator-id" integer-property
3: device-end
4: cd /pci@8,700000/scsi@5,1
5: 6 " scsi-initiator-id" integer-property
6: device-end
7: banner
8: [Control-C]
{0} ok

c)
Store the changes.

The changes you make through the nvedit command are recorded on a
temporary copy of the nvramrc script. You can continue to edit this
copy without risk. After you complete your edits, save the changes.
If you are not sure about the changes, discard them.

To store the changes, at the ok prompt, type:

{0} ok nvstore
{0} ok
To discard the changes, type:

{0} ok nvquit
{0} ok
d)

Verify the contents of the nvramrc

Verify the contents of the nvramrc script you created, as shown below.


If the contents of the nvramrc script are incorrect, use the nvedit
command to make corrections.
{0} ok printenv nvramrc
nvramrc =
0: probe-all install-console banner
1: cd /pci@8,700000/scsi@5
2: 6 " scsi-initiator-id" integer-property
3: device-end
4: cd /pci@8,700000/scsi@5,1
5: 6 " scsi-initiator-id" integer-property
6: device-end
7: banner
{0} ok

In the real scenario at SCTIMST, the 0th line of the nvramrc was already used for a
disk alias, so we started the nvramrc script from the 1st line.
e)

Instruct the OpenBoot PROM (OBP) Monitor to use the nvramrc script

After the nvramrc script has been verified, instruct the OpenBoot
PROM (OBP) Monitor to use it, as shown.

{0} ok setenv use-nvramrc? true
use-nvramrc? = true
{0} ok
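The nvramrc script only runs at the next power cycle or reset, so a reset is needed
before the new initiator id takes effect. The usual command (assumed here, as the
document does not show it) is:

{0} ok reset-all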

f)

Check the status of the SCSI initiator id

The SCSI initiator id can also be checked with the following method:

OK cd /pci@8,700000/scsi@5,1
OK .properties
At this point we can see the value of scsi-initiator-id
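An illustrative fragment of the .properties output after the change (expected values, not a captured transcript):

scsi-initiator-id        00000006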

At this step, we are ready to boot the dual-hosted Sun StorEdge D1000.
Switch ON the D1000.
Power ON both V480 servers and give the following command at the OK
prompt:
OK probe-scsi-all
At this step we will be able to see all the HDDs in the D1000; it also
shows the D1000 itself.
Boot the system into the OE and give the following commands:
#format
This will show the existing HDDs. It may not yet show the HDDs in the D1000.
#devfsadm
#format
At this step we will be able to see all the HDDs in the D1000 as well. Each
HDD is detected with 136 GB of space.
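As a convenience, format can also print the disk list non-interactively, a common Solaris idiom:
#format </dev/null (prints the disk selection list and exits)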

From the Host OS300


#format
Select the 2nd and 3rd HDDs and create one slice in each HDD with the full space of the
HDD. (In the format utility, use $ to give the full size of the HDD for a slice.)
c2t3d0s0 and c2t4d0s0 are the two slices with 136 GB each.
#metainit d100 1 1 c2t3d0s0 (creating the concatenation volume d100)
#metattach d100 c2t4d0s0 (attaching c2t4d0s0 to d100; d100 grows to a size of 272 GB)
#metainit d101 -p d100 100g (creating the soft partition d101 in d100 with a
size of 100 GB)
#metainit d102 -p d100 100g (creating the soft partition d102 with a size of
100 GB)
#newfs /dev/md/rdsk/d101 (creating a file system on d101)
#newfs /dev/md/rdsk/d102 (creating a file system on d102)
Editing /etc/vfstab:
#vi /etc/vfstab
/dev/md/dsk/d101  /dev/md/rdsk/d101  /dbase       ufs  2  yes  -
/dev/md/dsk/d102  /dev/md/rdsk/d102  /dbase/arch  ufs  2  yes  -
#mount -a
Now the host OS300 is ready to use with the mount points /dbase and
/dbase/arch, each with a size of 100 GB.
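A quick verification sketch (standard SVM and Solaris commands; the exact output is not reproduced here):
#metastat (lists d100 with its two components and the soft partitions d101 and d102)
#df -k | grep dbase (shows /dbase and /dbase/arch with roughly 100 GB each)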

From the Host NWG1


#format
Select HDDs 4 to 9 (c5t5d0, c5t8d0, c5t9d0, c5t10d0, c5t11d0 and c5t12d0) one
after another and create one slice with the whole disk capacity on each.
#metainit d110 1 1 c5t5d0s0 (creating the concatenation volume d110)
#metattach d110 c5t8d0s0 (attaching c5t8d0s0 to d110)
#metattach d110 c5t9d0s0 (attaching c5t9d0s0 to d110)
#metattach d110 c5t10d0s0 (attaching c5t10d0s0 to d110)
#metattach d110 c5t11d0s0 (attaching c5t11d0s0 to d110)
#metattach d110 c5t12d0s0 (attaching c5t12d0s0 to d110)
#metainit d111 -p d110 200g (creating the soft partition d111 with 200 GB of space)
#metainit d112 -p d110 200g (creating the soft partition d112 with 200 GB of space)
#metainit d113 -p d110 200g (creating the soft partition d113 with 200 GB of space)
#metainit d114 -p d110 200g (creating the soft partition d114 with 200 GB of space)
#newfs /dev/md/rdsk/d111 (creating a file system on d111)
#newfs /dev/md/rdsk/d112 (creating a file system on d112)
#newfs /dev/md/rdsk/d113 (creating a file system on d113)
#newfs /dev/md/rdsk/d114 (creating a file system on d114)
Editing /etc/vfstab:
#vi /etc/vfstab
/dev/md/dsk/d111  /dev/md/rdsk/d111  /cache1  ufs  2  yes  -
/dev/md/dsk/d112  /dev/md/rdsk/d112  /cache2  ufs  2  yes  -
/dev/md/dsk/d113  /dev/md/rdsk/d113  /cache3  ufs  2  yes  -
/dev/md/dsk/d114  /dev/md/rdsk/d114  /cache4  ufs  2  yes  -

#mount -a
Now the host NWG1 is ready to use with the mount points
/cache1, /cache2, /cache3 and /cache4, each with a size of 200 GB.
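A quick verification sketch for NWG1, along the same lines:
#metastat (lists d110 with its six components and the soft partitions d111 to d114)
#df -k | grep cache (shows /cache1 to /cache4 with roughly 200 GB each)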

At this moment the customer asked for redundancy, so we started over with
RAID 1 in OS300 with two HDDs and RAID 5 in NWG1 with 6 HDDs. The customer
is ready to compromise on the partition sizes.
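(The compromise follows from the arithmetic: the RAID 1 mirror on OS300 yields only
136 GB usable, less than the 2 x 100 GB planned for /dbase and /dbase/arch, and the
6-disk RAID 5 on NWG1 yields (6 - 1) x 136 GB = 680 GB, less than the 4 x 200 GB =
800 GB planned for the cache partitions.)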

In OS300
Unmounting both mount points:

#umount /dbase/arch
#umount /dbase
Clearing both soft partitions d101 and d102:
#metaclear d101
#metaclear d102
Clearing the concat stripe:
#metaclear d100
Both c2t3d0s0 and c2t4d0s0 have the same size of 136 GB, so we go
straight to mirroring without any modification.
Mirroring starts:
#metainit d101 1 1 c2t3d0s0 (creating the submirror d101)
#metainit d102 1 1 c2t4d0s0 (creating the submirror d102)
#metainit d100 -m d101 (creating the main mirror d100 and adding the submirror
d101 to it)
#metattach d100 d102 (attaching the submirror d102 to the main mirror d100)
#metastat (shows the resyncing status of the mirror)
#while true
>do
>metastat | grep %
>sleep 1
>done
#
This will give the continuous resync status.
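The loop typically prints lines such as "Resync in progress: 12 % done" until the
resync completes. Note that metaclear removed the earlier soft partitions and file
systems, so they have to be recreated on top of the mirror. A minimal sketch,
assuming new soft-partition names d103 and d104 (d101/d102 are now taken by the
submirrors) and a reduced size of 60 GB each, since the mirror offers only 136 GB
in total:

#metainit d103 -p d100 60g (soft partition for /dbase on the mirror)
#metainit d104 -p d100 60g (soft partition for /dbase/arch on the mirror)
#newfs /dev/md/rdsk/d103
#newfs /dev/md/rdsk/d104
Update the /dbase and /dbase/arch lines in /etc/vfstab to reference d103 and d104, then:
#mount -a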

In the HOST NWG1


Unmounting all the mount points:
#umount /cache1
#umount /cache2
#umount /cache3
#umount /cache4

Clearing all the soft partitions d111, d112, d113 and d114:


#metaclear d111
#metaclear d112
#metaclear d113
#metaclear d114
Clearing the concat stripe:
#metaclear d110
RAID 5 creation starts with the following slices, each having the full capacity
of its HDD:
c5t5d0s0
c5t8d0s0
c5t9d0s0
c5t10d0s0
c5t11d0s0
c5t12d0s0
#metainit d110 -r c5t5d0s0 c5t8d0s0 c5t9d0s0 c5t10d0s0 c5t11d0s0 c5t12d0s0
(creates the RAID 5 with 6 HDDs)
#metastat (will show the initialising status)
#while true
>do
>metastat | grep %
>sleep 1
>done
#
This will give the continuous initialising status.
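As on OS300, the cache soft partitions and file systems have to be recreated once
the RAID 5 initialisation finishes. A minimal sketch, assuming the names d111 to
d114 are reused and the size is reduced to 160 GB each, so that 4 x 160 GB = 640 GB
fits inside the roughly 680 GB RAID 5 volume:

#metainit d111 -p d110 160g
#metainit d112 -p d110 160g
#metainit d113 -p d110 160g
#metainit d114 -p d110 160g
#newfs /dev/md/rdsk/d111
#newfs /dev/md/rdsk/d112
#newfs /dev/md/rdsk/d113
#newfs /dev/md/rdsk/d114
#mount -a (the existing /etc/vfstab entries for /cache1 to /cache4 can be reused)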

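For the performance testing in the scope (item 6), the document does not record the
procedure used; a minimal sequential-throughput sketch with standard Solaris tools,
using a hypothetical test file, could be:

#time dd if=/dev/zero of=/cache1/testfile bs=1024k count=1024 (writes 1 GB and
reports the elapsed time)
#time dd if=/cache1/testfile of=/dev/null bs=1024k (reads it back)
#rm /cache1/testfile
Running iostat -xn 5 in another terminal during the test shows the per-device
throughput.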