NOTE: Common Solaris network interface (driver) names:
nge - NVIDIA Gigabit Ethernet
bge - Broadcom Gigabit Ethernet
rtls - Realtek Ethernet
hme - Happy Meal Ethernet
qfe - Quad Fast Ethernet
To view the MAC address:
ok banner (from the OpenBoot PROM prompt)
# ifconfig -a
# ifconfig -a will provide the following:
a. IP address of the machine
b. MAC address of the machine
c. status flags of the interface
d. instance name of the interface
e. broadcast address
# ifconfig -a
lo0: flags=2001000849&lt;UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL&gt; mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
nge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.0.120 netmask ffffff00 broadcast 192.168.0.255
ether 0:1b:24:5b:d8:d6
bge1: flags=1000803<UP,BROADCAST,MULTICAST,IPv4> mtu 1500 index 3
inet 192.168.0.145 netmask ff000000 broadcast 192.255.255.255
ether 0:1b:24:5b:d8:d5
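The netmask values that ifconfig prints (ff000000, ffffff00) are hexadecimal. A minimal portable sh sketch (the function names are my own, not Solaris commands) that converts a hex netmask to dotted decimal and derives the broadcast address the same way ifconfig reports it:

```shell
#!/bin/sh
# Convert an ifconfig-style hex netmask (e.g. ffffff00) to dotted decimal.
hexmask_to_dotted() {
    h=$1
    printf '%d.%d.%d.%d\n' "0x$(echo $h | cut -c1-2)" "0x$(echo $h | cut -c3-4)" \
        "0x$(echo $h | cut -c5-6)" "0x$(echo $h | cut -c7-8)"
}
# broadcast = ip OR (NOT mask), computed octet by octet
broadcast() {
    ip=$1; mask=$2
    set -- $(echo "$ip" | tr '.' ' ')
    i1=$1; i2=$2; i3=$3; i4=$4
    set -- $(echo "$mask" | tr '.' ' ')
    echo "$(( $i1 | (255 - $1) )).$(( $i2 | (255 - $2) )).$(( $i3 | (255 - $3) )).$(( $i4 | (255 - $4) ))"
}
hexmask_to_dotted ffffff00                 # 255.255.255.0
broadcast 192.168.0.120 255.255.255.0      # 192.168.0.255
```

This reproduces the nge0 line above: netmask ffffff00 with 192.168.0.120 gives broadcast 192.168.0.255.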
Ravi Mishra
# ifconfig
1. Is used to assign and view the IP address of the system.
2. An IP address assigned using the ifconfig command persists only for the current session.
Once the system is restarted, the IP address assigned to the interface is lost.
To assign the IP address permanently to the interface:
Edit the file
/etc/hostname.XXn where
XXn - logical name of the interface
For eg:
# cat > /etc/hostname.nge0
192.168.0.120
Save this file.
This file may have the hostname of the system or the ip.
To assign a virtual IP to the interface - what to do (WTD):
1. Plumb the interface
2. Assign the ip to the interface
3. Create a file /etc/hostname.XXn and add entry to the file
1. # ifconfig nge0:1 plumb
2. # ifconfig nge0:1 192.168.0.170 up
3. # cat > /etc/hostname.nge0:1
192.168.0.170
Ctrl+d => to save
# ifconfig nge0:1
nge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.0.0.10 netmask ffc00000 broadcast 10.63.255.255
# cat /etc/inet/hosts
/etc/nodename
This file will have the node name. It will be referred at the time of every boot/reboot and accordingly
the hostname will be taken.
# hostname <new_name>
For eg:
# hostname sun10 -- will change the hostname only for the current session; once the system is
rebooted, the hostname will not persist.
To make the hostname permanent, edit the file /etc/nodename
# cat > /etc/nodename
sun10
/etc/services
/etc/inet/services
Both files are linked. They provide information about the services &amp; their corresponding static port numbers.
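The services file maps a service name to a "port/protocol" pair, one per line. A self-contained awk sketch of the lookup (the inline entries are standard well-known ports, not taken from any particular system; lookup_service is my own helper name):

```shell
#!/bin/sh
# Look up a service's port/protocol in /etc/services format.
lookup_service() {
    name=$1
    awk -v s="$name" '$1 == s { print $2 }' <<'EOF'
ftp     21/tcp
ssh     22/tcp
telnet  23/tcp
smtp    25/tcp mail
EOF
}
lookup_service telnet    # 23/tcp
```

Against the real file, the same awk program works with `< /etc/services` in place of the here-document.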
To configure DHCP in Solaris-10: Client side configuration:
# touch /etc/dhcp.nge0
where
nge0 = name of the physical interface
# touch /etc/hostname.nge0
# touch /etc/notrouter
# cp /dev/null /etc/defaultrouter
# cp /etc/nsswitch.dns /etc/nsswitch.conf
# cp /dev/null /etc/resolv.conf
# ifconfig -a
# vi /etc/resolv.conf
nameserver 192.163.0.1
# svcadm restart physical
# svcadm restart network
# touch /etc/dhcp.nge0
# touch /etc/hostname.nge0
# ifconfig nge0 dhcp drop
# snoop -d <interface>
For eg:
# snoop -d nge0
fire1 -> accel TELNET C port=32890
accel -> fire1 TELNET R port=32890 \r\n-bash-3.00#
fire1 -> accel TELNET C port=32890
fire1 -> accel TELNET C port=32890 c
accel -> fire1 TELNET R port=32890 c
fire1 -> accel TELNET C port=32890
# snoop -D -d nge0
where
-D = used to monitor the dropped packet information
-d = used to monitor for the specified interface
# snoop -D -d nge0
fire1 -> 224.0.0.22 drops: 0 IGMP v3 membership report
fire1 -> 192.168.0.255 drops: 0 RIP C (1 destinations)
fire1 -> 224.0.0.2 drops: 0 ICMP Router solicitation
fire1 -> 224.0.0.22 drops: 0 IGMP v3 membership report
fire1 -> 192.168.0.255 drops: 0 RIP C (1 destinations)
fire1 -> 224.0.0.2 drops: 0 ICMP Router solicitation
100.0.0.2 -> (broadcast) drops: 0 ARP C Who is 100.0.0.2, 100.0.0.2 ?
fire1 -> 224.0.0.2 drops: 0 ICMP Router solicitation
fire1 -> 224.0.0.2 drops: 0 ICMP Router solicitation
fire1 -> 224.0.0.22 drops: 0 IGMP v3 membership report
fire1 -> 224.0.0.22 drops: 0 IGMP v3 membership report
fire1 -> (broadcast) drops: 0 ARP C Who is 192.168.0.120, accel ?
accel -> fire1 drops: 0 ARP R 192.168.0.120, accel is 0:1b:24:5b:d8:d6
fire1 -> accel drops: 0 TELNET C port=32890
accel -> fire1 drops: 0 TELNET R port=32890
fire1 -> accel drops: 0 TELNET C port=32890 swap - l\r\0s\3swassssss
accel -> fire1 drops: 0 TELNET R port=32890 ^Cswap -l\r\nsswasssss
fire1 -> accel drops: 0 TELNET C port=32890
accel -> fire1 drops: 0 TELNET R port=32890 \r\n\r\n-bash-3.00#
# snoop -S -d nge0
-S = to monitor the size of the packets
Using device /dev/nge0 (promiscuous mode)
fire1 -> accel length: 60 TELNET C port=32891
accel -> fire1 length: 67 TELNET R port=32891
fire1 -> accel length: 60 TELNET C port=32891
fire1 -> accel length: 60 TELNET C port=32891
accel -> fire1 length: 55 TELNET R port=32891
fire1 -> accel length: 60 TELNET C port=32891
fire1 -> accel length: 60 TELNET C port=32891
accel -> fire1 length: 55 TELNET R port=32891 \33[A
fire1 -> accel length: 60 TELNET C port=32891 cd /class_doc
fire1 -> accel length: 60 TELNET C port=32891 \33[D
accel -> fire1 length: 55 TELNET R port=32891 \33[D
fire1 -> accel length: 60 TELNET C port=32891 \33[D
fire1 -> accel length: 60 TELNET C port=32891 \33[D
# snoop -a
Plays the received packets through the audio device.
# snoop accel fire1
will monitor the transmission only between the specified machine
# snoop accel fire1
Using device /dev/nge0 (promiscuous mode)
&lt;Output Truncated&gt;
# snoop -V
Displays the information in verbose summary mode.
# snoop -V -d nge0
# snoop -v
Displays each packet in full verbose detail.
# snoop -i /Desktop/snoop_test
- used to read the entries (captured packets) of the file
NOTE:
Format of the file is different, hence we used # snoop -i to read the entries of the file.
# file /Desktop/snoop_test
/Desktop/snoop_test: Snoop capture file - version 2
SWAP CONFIGURATION
Swap is virtual memory space added from the hard disk drive to extend the physical memory of the
system. In Solaris, swap space can be added either permanently or temporarily.
At the same time, the swap space can be a file or a dedicated slice.
By default the swap slice will be slice 1.
# swap -s
Will display a summary of the swap space: total allocated, used and free.
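A sketch of adding swap as a file for the current session (Solaris-specific commands, not verified here; the path /export/swapfile and the 512 MB size are placeholders):

```shell
# Create a 512 MB file and activate it as swap for this session.
mkfile 512m /export/swapfile
swap -a /export/swapfile
swap -l                     # list swap devices to verify the addition
# To make it permanent, add an entry to /etc/vfstab:
#   /export/swapfile  -  -  swap  -  no  -
```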
/etc/vfstab (recovered columns):
mount point          FS type   fsck pass   mount at boot   mount options
/usr                 ufs       1           no              -
/var                 ufs       1           no              -
/opt                 ufs       2           yes             -
/devices             devfs     -           no              -
/etc/dfs/sharetab    sharefs   -           no              -
/system/contract     ctfs      -           no              -
/system/object       objfs     -           no              -
/tmp                 tmpfs     -           yes             -
# dumpadm -c all
This will ask the system to dump all the pages from the physical memory.
The default dump contents are kernel pages.
Dump content: all pages
Dump device: /dev/dsk/c0d1s5 (dedicated)
Savecore directory: /var/crash/Unix/
Savecore enabled: yes
If the directory defined in the global core file does not exist, it has to be created manually.
The configuration file is /etc/coreadm.conf
This file is not recommended to edit.
But updates to this file can be performed by using the command
# coreadm -- reads the entries of the file /etc/coreadm.conf and the configuration is displayed.
coreadm patterns:
%m = machine name (uname -m)
%n = system node name (uname -n)
%p = process ID
%t = decimal value of time
%u = effective user ID
%z = name of the zone in which the process executed
%g = effective group ID
%f = executable file name
-d = disable
-e = enable
# coreadm option argument
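A sketch combining the patterns above into a global core-file configuration (Solaris coreadm, not verified here; the /var/core directory is a placeholder and must exist beforehand, as noted above):

```shell
# Name global core files by executable, PID and time, then enable global dumps.
coreadm -g /var/core/core.%f.%p.%t -e global
coreadm          # display the resulting configuration from /etc/coreadm.conf
```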
MISCELLANEOUS COMMANDS:
Troubleshooting information will be available at
# cat /lib/svc/share/README
To mount the read only slice as read/write:
# mount -o rw,remount /
To view the release of the operating system:
# cat /etc/release
# cat /var/sadm/softinfo/INST_RELEASE
To assign the gateway:
# route add default <ip>
# route add default 192.168.0.150
To view the assigned gateway:
# netstat -rn
Routing Table: IPv4
Destination          Gateway              Flags  Ref   Use    Interface
-------------------- -------------------- ----- ----- ------ ---------
192.168.0.0          192.168.0.120        U      1     20     nge0
192.168.0.0          192.168.0.121        U      1     0      nge0:1
192.168.0.0          192.168.0.122        U      1     0      nge0:2
192.168.0.0          192.168.0.170        U      1     0      bge1
224.0.0.0            192.168.0.120        U      1     0      nge0
default              192.168.0.150        UG     1     0
127.0.0.1            127.0.0.1            UH     4     1110   lo0
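The routing table sends 192.168.0.x traffic straight out nge0 and everything else via the default gateway. The kernel decides by ANDing the destination with the interface netmask; the same check in a portable sh sketch (net_addr and same_subnet are my own helper names):

```shell
#!/bin/sh
# network address = address AND netmask, computed octet by octet
net_addr() {
    set -- $(echo "$1" | tr '.' ' ') $(echo "$2" | tr '.' ' ')
    echo "$(($1 & $5)).$(($2 & $6)).$(($3 & $7)).$(($4 & $8))"
}
# prints "yes" when both addresses are on the same subnet (direct route),
# "no" when traffic must go via the default gateway
same_subnet() {
    if [ "$(net_addr "$1" "$3")" = "$(net_addr "$2" "$3")" ]; then echo yes; else echo no; fi
}
same_subnet 192.168.0.120 192.168.0.150 255.255.255.0   # yes -> direct route
same_subnet 192.168.0.120 10.0.0.10     255.255.255.0   # no  -> via default gateway
```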
# getfacl -d new -- will display only the owner/group of the file specified
bash-3.00# getfacl -d new
# file: new
# owner: root
# group: root
Syntax:
# setfacl -s u::&lt;perm&gt;,g::&lt;perm&gt;,o:&lt;perm&gt;,m:&lt;perm&gt;,u:&lt;name&gt;:&lt;perm&gt;,g:&lt;name&gt;:&lt;perm&gt;
<name_of_file_dir>
where
u = user
g =group
o = other
m = ACL mask
Note:
u,g,o can be replaced with user, group,others respectively
m can be replaced with mask
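A sketch of the syntax above filled in (Solaris setfacl, not verified here; the user name "user1" and the path are placeholders):

```shell
# Base entries for user/group/other, an ACL mask, and an extra user entry.
setfacl -s u::rwx,g::r-x,o:r--,m:rwx,u:user1:rw- /export/testfile
getfacl /export/testfile      # verify the new ACL entries
```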
Advantages of NFS:
Allows multiple computers to use the same files, so all users on the network can access the same
data (based on the permissions).
Reduces storage costs by sharing applications over the network instead of allocating local disk
space for each user.
Provides data reliability &amp; consistency.
Reduces system administration activity.
Note:
1. In Solaris-10 NFS version 4 is used by default.
2. /etc/dfs/sharetab
- Not recommended to edit
- File will be updated through "share" , "shareall" , "unshare", "unshareall" commands
- lists the locally and currently shared resources in the system
Output: (With manually edited entries)
bash-3.00# cat /etc/dfs/sharetab
/Desktop/ppt - nfs rw
/export/home - nfs rw
/share - nfs rw
/nfs/share_test - nfs ro
/source/open - nfs rw=natra,ro=solaris test
/unix_share - nfs rw=natra,ro=192.168.0.0/32
3. /etc/dfs/fstypes
- lists the default file system types for remote file systems.
Output:
bash-3.00# cat /etc/dfs/fstypes
nfs NFS Utilities
autofs AUTOFS Utilities
cachefs CACHEFS Utilities
Here,
nfs - used to share the resources across the network
autofs - used to mount the shared resource at client side on demand
cachefs - used to sync the updates performed to the shared resources.
(This is responsible for maintaining the reliability & consistency)
4. /etc/rmtab
- lists file systems remotely mounted by NFS clients.
- do not edit this file
- contains a table of file systems remotely mounted by NFS clients
6. /etc/default/nfslogd
- lists configuration information describing the behaviour of the nfslogd daemon for NFS v2 and v3.
Output:
bash-3.00# cat /etc/default/nfslogd
#
#ident "@(#)nfslogd.dfl 1.8 99/02/27 SMI"
#
# Copyright (c) 1999 by Sun Microsystems, Inc.
# All rights reserved.
#
# Specify the maximum number of logs to preserve.
#
# MAX_LOGS_PRESERVE=10
# Minimum size buffer should reach before processing.
#
# MIN_PROCESSING_SIZE=524288
# Number of seconds the daemon should sleep waiting for more work.
#
# IDLE_TIME=300
# CYCLE_FREQUENCY specifies the frequency (in hours) with which the
# log buffers should be cycled.
#
# CYCLE_FREQUENCY=24
# Use UMASK for the creation of logs and file handle mapping tables.
#
# UMASK=0137
7. /etc/default/nfs
- contains parameter values for NFS protocols & NFS daemons.
Output: (Only selected parameters are displayed)
#NFSD_MAX_CONNECTIONS=
NFSD_LISTEN_BACKLOG=32
#NFS_CLIENT_VERSMIN=2
8. /etc/nfssec.conf
- to enable the necessary security mode.
- can be performed through # nfssec
Output:
# cat /etc/nfssec.conf
#
#ident "@(#)nfssec.conf 1.11 01/09/30 SMI"
#
# The NFS Security Service Configuration File.
#
# Each entry is of the form:
#
# &lt;NFS_security_mode_name&gt; &lt;NFS_security_mode_number&gt; \
# &lt;GSS_mechanism_name&gt; &lt;GSS_quality_of_protection&gt; &lt;GSS_services&gt;
#
# The "-" in &lt;GSS_mechanism_name&gt; signifies that this is not a GSS mechanism.
# A string entry in &lt;GSS_mechanism_name&gt; is required for using RPCSEC_GSS
# services. &lt;GSS_quality_of_protection&gt; and &lt;GSS_services&gt; are optional.
# White space is not an acceptable value.
#
# default security mode is defined at the end. It should be one of
# the flavor numbers defined above it.
#
none    0                                        # AUTH_NONE
sys     1                                        # AUTH_SYS
dh      3                                        # AUTH_DH
#
# Uncomment the following lines to use Kerberos V5 with NFS
#
#krb5   390003  kerberos_v5  default             # RPCSEC_GSS
#krb5i  390004  kerberos_v5  default integrity   # RPCSEC_GSS
#krb5p  390005  kerberos_v5  default privacy     # RPCSEC_GSS
default 1                                        # default is AUTH_SYS
Note:
1. If the svc:/network/nfs/server service does not find any 'share' commands in the /etc/dfs/dfstab
file, it does not start the NFS server daemons.
2. The features provided by the mountd and lockd daemons are integrated into the NFS v4 protocol.
3. In NFSv2 and NFSv3, the mount protocol is implemented by the separate mountd daemon, which
did not use an assigned, well-known port number, making it hard to use NFS through a firewall.
4. The nfsd and mountd daemons are started if there is an uncommented share statement in the
system's /etc/dfs/dfstab file.
5. Manually create the /var/nfs/public directory before starting NFS server logging. (Refer to the file
/etc/nfs/nfslog.conf)
To start/stop the nfs-server:
Solaris-10:
To start/enable:
# svcadm enable nfs/server
bash# svcadm -v enable nfs/server
svc:/network/nfs/server:default enabled.
To stop/disable:
# svcadm disable nfs/server
bash# svcadm -v disable nfs/server
svc:/network/nfs/server:default disabled.
6. nfsmapid:
- implemented in NFSv4
- maps the owner and group identifications that both the NFSv4 client &amp; server use
- started by: svc:/network/nfs/mapid
- there is no interface to the daemon, but parameters can be assigned in the file /etc/default/nfs
Commands:
# share
- makes a local directory on an NFS server available for mounting
- also displays the contents of the file /etc/dfs/sharetab
# share --displays the shared contents in the local system
Output:
bash-3.00# share
-
/export/home rw ""
/share rw ""
/nfs/share_test ro ""
/source/open rw=natra,ro=solaris "test"
/unix_share rw=natra,ro=192.168.0.0/32 ""
Options-1:
# share -F nfs -d "Comment-description" /data_share
here
-F = specifies the file system
-d = description or comment about the shared directory
Output:
# share -F nfs -d "Comment-description" /data_share/
# share
- /export/home rw ""
/share rw ""
/nfs/share_test ro ""
/source/open rw=natra,ro=solaris "test"
/unix_share rw=natra,ro=192.168.0.0/32 ""
/data_share rw "Comment-description"
Options-2:
# share -F nfs -d "comment" -o rw=solaris,ro=fire2 /data_share
here
-o = specifies the option
ro = read only to the listed clients
rw = read write to the listed clients
# share -F nfs -d "comment" -o rw=solaris,ro=fire2:192.168.0.14 /data_share
Note: Client names or IPs can be given in an access list, separated by : (colons).
Output:
# share -F nfs -d "comment" -o rw=solaris,ro=fire1 /data_share/
# share
-
/export/home rw ""
/share rw ""
/nfs/share_test ro ""
/source/open rw=natra,ro=solaris "test"
/unix_share rw=natra,ro=192.168.0.0/32 ""
/data_share rw=solaris,ro=fire1 "comment"
/export/home rw ""
/share rw ""
/nfs/share_test ro ""
/source/open rw=natra,ro=solaris "test"
/unix_share rw=natra,ro=192.168.0.0/32 ""
/data_share rw=solaris,ro=fire1:192.168.0.14 "comment"
Option-3:
# share -F nfs -d "comment" -o root=solaris,rw=fire2,ro=192.168.0.14 /data_share
Output:
# share -F nfs -d "comment" -o root=solaris,rw=fire2,ro=192.168.0.14 /data_share
# share
- /export/home rw ""
- /share rw ""
- /nfs/share_test ro ""
- /source/open rw=natra,ro=solaris "test"
- /unix_share rw=natra,ro=192.168.0.0/32 ""
- /data_share root=solaris,rw=fire2,ro=192.168.0.14 "comment"
here
root=<client_name_or_ip> root=solaris
- informs the client that the root user on the specified client system or systems can perform superuser
privilege requests on the shared resource
Option-4:
# share -F nfs -d "comment" -o ro=@192.168.0.* /data_share
Output:
# share -F nfs -d "comment" -o rw=@192.168.0.* /data_share/
# share
/export/home rw ""
/share rw ""
/nfs/share_test ro ""
/source/open rw=natra,ro=solaris "test"
/unix_share rw=natra,ro=192.168.0.0/32 ""
/data_share rw=@192.168.0.* "comment"
/export/home rw ""
/share rw ""
/nfs/share_test ro ""
/source/open rw=natra,ro=solaris "test"
/unix_share rw=natra,ro=192.168.0.0/32 ""
/data_share ro=aita.com "comment"
/export/home rw ""
/share rw ""
/nfs/share_test ro ""
/source/open rw=natra,ro=solaris "test"
/unix_share rw=natra,ro=192.168.0.0/32 ""
/data_share rw "Comment-description"
# unshare /data_share/
# share
-
/export/home rw ""
/share rw ""
/nfs/share_test ro ""
/source/open rw=natra,ro=solaris "test"
/unix_share rw=natra,ro=192.168.0.0/32 ""
3. # shareall
- reads &amp; executes the share statements from the file /etc/dfs/dfstab
NOTE: All the above-discussed share options can be added to the file /etc/dfs/dfstab; the syntax
remains the same.
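A sketch of /etc/dfs/dfstab entries matching the shares above (the hosts "natra" and "solaris" come from the earlier examples; "shareall" executes each line, as does the NFS server service at boot):

```shell
# /etc/dfs/dfstab -- one share command per line:
share -F nfs -o rw /export/home
share -F nfs -o ro /nfs/share_test
share -F nfs -d "test" -o rw=natra,ro=solaris /source/open
```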
4. # unshareall
- makes previously shared resources unavailable
Output:
# share
- /export/home rw ""
- /share rw ""
- /nfs/share_test ro ""
# unshareall
# share
#
5. # dfshares
- lists available shared resources from the remote/local NFS server
# dfshares 192.168.0.252
Output:
# dfshares 192.168.0.252
RESOURCE                     SERVER          ACCESS   TRANSPORT
192.168.0.252:/export/home   192.168.0.252   -        -
# dfmounts
- displays a list of NFS server directories that are currently mounted at the clients
- reads the entry from the file /etc/rmtab
At client side:
To make the resource permanently available, edit the file /etc/vfstab.
Example entries at the client:
fire2:/nfs/share_test - /mnt/point3 nfs - yes ro,nosuid
fire2:/share - /mnt/point1 nfs - yes -
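The same shares can also be mounted once by hand before committing them to /etc/vfstab; a sketch using the Solaris mount syntax (not verified here; the mount points must already exist):

```shell
# One-off NFS mounts of the entries above:
mount -F nfs -o ro,nosuid fire2:/nfs/share_test /mnt/point3
mount -F nfs fire2:/share /mnt/point1
```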
AUTOFS
- It's a client-side service that makes shared resources available at the client on demand.
- autofs is initialized by the
/lib/svc/method/svc-autofs script, which runs the automount command and starts the automountd daemon.
NOTE:
The automountd daemon is completely independent from the automount command. Because of this
separation, we can add/modify/delete map information without having to stop and start the
automountd daemon process.
Autofs types:
1. Master map
2. Direct map
3. Indirect map
4. Special map
Master map:
1. Lists the other maps used for establishing the autofs file system.
2. The automount command reads this map at boot time.
/etc/auto_master is the configuration file which has the list of directly &amp; indirectly automounted
resources.
Output: (With default entry to the file /etc/auto_master)
Ravi Mishra
20
Direct map:
Lists the mount points as ABSOLUTE PATH names.
This map explicitly indicates the mount point on the client.
Usually the /usr/share/man directory is a good example for direct mapping.
The /- mount point is a pointer that informs the automount facility that full path names are defined in the
file specified by MAP_NAME (for eg: here it is /etc/direct_map).
NOTE:
1. /- is NOT an entry in the default master map file (/etc/auto_master)
2. The automount facility by default automatically searches for all map-related files in the /etc directory.
Output: ( After adding a manual entry to the file)
Note-1:
Here
1. "direct" is the file name, which must reside under the /etc directory (since no path is given).
This file will have the absolute path of the shared resource &amp; the mount point at the client.
2. This file has to be manually created.
3. The name of the file can be anything.
# cat /etc/direct
/usr/share/man 192.168.0.150:/usr/share/man
Note-2:
Here
1. "/direct" is the file name residing under the / directory.
If the direct mapping file is NOT residing under the /etc dir, the full path of the file has to be specified.
2. This file will have the absolute path of the shared resources &amp; the mount point at the client.
3. Again, the name of the file can be anything.
The entry of the file /direct:
# cat /direct
/usr/share/man 192.168.0.150:/usr/share/man
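Putting the two pieces together, a sketch of the master map line (the /- pointer discussed above) plus the direct map file; the server IP is the one from the example above:

```
# /etc/auto_master -- the /- entry activates the direct map:
/-    /etc/direct

# /etc/direct -- absolute mount point -> server:resource:
/usr/share/man    192.168.0.150:/usr/share/man
```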
Indirect map:
Indirect maps are the simplest and most useful autofs maps.
roles - returns information about which roles the user is authorized to assume
profiles - returns information about which profiles the user is authorized to execute
profiles -l - returns detailed information about the permitted commands that can be executed by the user
auths - returns information about the authorizations granted to the user account
In order for NIS to operate successfully, we have to construct a list of the NIS servers. Please continue
to add the names for YP servers in order of preference, one per line. When you are done with the list,
type a <control D> or a return on a line by itself.
next host to add: sunfire1
next host to add: ^D
The current list of yp servers looks like this:
sunfire1
Is this correct? [y/n: y] y
Installing the YP database will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n]
OK, please remember to go back and redo manually whatever fails. If you don't, some part of the system
(perhaps the yp itself) won't work.
The yp domain directory is /var/yp/solaris.com
There will be no further questions. The remainder of the procedure should take 5 to 10 minutes.
Building /var/yp/solaris.com/ypservers...
Running /var/yp/Makefile...
updated passwd
updated group
updated hosts
updated ipnodes
updated ethers
updated networks
updated rpc
updated services
updated protocols
updated netgroup
updated bootparams
updated publickey
updated netid
/usr/sbin/makedbm /etc/netmasks /var/yp/`domainname`/netmasks.byaddr;
updated netmasks
updated timezone
updated auto.master
updated auto.home
updated ageing
updated auth_attr
updated exec_attr
updated prof_attr
updated user_attr
updated audit_user
sunfire1 has been set up as a yp master server with errors. Please remember to figure out what went
wrong, and fix it.
If there are running slave yp servers, run yppush now for any databases which have been changed. If there
are no running slaves, run ypinit on those hosts which are to be slave servers.
# cp /etc/nsswitch.nis /etc/nsswitch.conf
# svcadm enable nis/server
# svcs nis/server
online 12:30:21 svc:/network/nis/server:default
Configuring NIS Slave Server: (sunfire2)
# hostname
sunfire2
# ypinit -s sunfire1
In order for NIS to operate successfully, we have to construct a list of the NIS servers. Please continue
to add the names for YP servers in order of preference, one per line. When you are done with the list,
type a <control D> or a return on a line by itself.
next host to add: sunfire1
next host to add: ^D
The current list of yp servers looks like this:
sunfire1
Is this correct? [y/n: y]
Installing the YP database will require that you answer a few questions. Questions will all be asked at
the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n]
OK, please remember to go back and redo manually whatever fails. If you don't, some part of the system
(perhaps the yp itself) won't work.
The yp domain directory is /var/yp/solaris.com
There will be no further questions. The remainder of the procedure should take
a few minutes, to copy the data bases from sunfire1.
sunfire2's nis data base has been set up
# ypwhich
sunfire1
Configuring NIS Client: (sunfire3)
# hostname
sunfire3
# ypinit -c
In order for NIS to operate successfully, we have to construct a list of the NIS servers. Please continue
to add the names for YP servers in order of preference, one per line. When you are done with the list,
type a <control D> or a return on a line by itself.
next host to add: sunfire1
next host to add: sunfire2
next host to add: ^D
The current list of yp servers looks like this:
sunfire1
sunfire2
Is this correct? [y/n: y]
Zone daemons:
Zones use two daemons to control their operation:
a. zoneadmd
b. zsched
Note:
The zoneadmd daemon is the primary process for managing the zone's virtual platform. There is one
zoneadmd process running for each active (ready, running or shutting down) zone on the system.
Unless the zoneadmd daemon is already running, it is automatically started by the zoneadm
command.
zoneadmd:
Responsible for:
1. Managing zone booting and shutting down
2. Allocating the zone ID and starting the zsched system process
3. Setting zone-wide resource control (rctl)
4. Preparing the zones devices as specified in the zone configuration
5. Plumbing virtual network interfaces
0:global:running:/::native:shared
-:zones1:configured:/etc/zones/zonepractice::native:shared
0:global:running:/::native:shared
1:zones1:running:/etc/zones/zonepractice:f84ec383-bfe3-c890-8a7f-f74970d40c96:native:shared
Mirror:
bash-3.00# zpool create testmirrorpool mirror c2d0s3 c2d0s4
bash-3.00# df -h
swap 3.1G 32K 3.1G 1% /var/run
testpool 2.0G 25K 1.9G 1% /testpool
testpool/homedir 2.0G 100M 1.9G 5% /testpool/homedir
testmirrorpool 4.9G 24K 4.9G 1% /testmirrorpool
DESTROYING A POOL:
bash-3.00# zpool destroy testmirrorpool
bash-3.00# zpool list
RAID-Z POOL:
bash-3.00# zpool create testraid5pool raidz c2d0s3 c2d0s4 c2d0s5
bash-3.00# zpool list
bash-3.00# df -h
testmirrorpool
  mirror
    c2d0s3
    c2d0s4
Note: Here the -n option is used not to create a zpool but just to check whether it is
possible to create it. If it is possible, it'll give the above output; otherwise
it'll give the error which is expected to occur when creating the zpool.
LISTING THE POOLS AND ZFS:
bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
testmirrorpool 4.97G 52.5K 4.97G 0% ONLINE -
testpool       2G    100M  1.90G 4% ONLINE -
NAME                  USED   AVAIL  REFER  MOUNTPOINT
testmirrorpool        75.5K  4.89G  24.5K  /testmirrorpool
testpool              114K   1.97G  26.5K  /testpool
testpool/homedir_old  24.5K  1.97G  24.5K  /testpool/homedir_old
DESTROYING A ZFS:
bash-3.00# zfs list
bash-3.00# ls -l testpool/homedir/
total 4
drwxr-xr-x 2 root root 2 Nov 13 11:36 newdir
-rw-r--r-- 1 root root 0 Nov 13 11:36 newfile
bash-3.00# pwd
/testpool/homedir/newdir
bash-3.00# zfs destroy testpool/homedir
cannot unmount '/testpool/homedir': Device busy
bash-3.00# zfs destroy -f testpool/homedir
bash-3.00# zfs list
Note:
1. The zfs mount -a command doesn't mount legacy filesystems.
2. To force a mount on top of a non-empty directory, use the option -O.
3. To specify options like ro, rw use the option -o.
PROPERTIES OF SNAPSHOTS:
bash-3.00# zfs get all testpool/homedir_old@snap1
NAME                        PROPERTY       VALUE                  SOURCE
testpool/homedir_old@snap1  type           snapshot               -
testpool/homedir_old@snap1  creation       Fri Nov 13 16:26 2009  -
testpool/homedir_old@snap1  used           0                      -
testpool/homedir_old@snap1  referenced     27.5K                  -
testpool/homedir_old@snap1  compressratio  1.00x                  -
<Output Truncated>
bash-3.00# zfs set sharenfs=on testpool/additionaldir/testclone
bash-3.00# zfs set quota=500m testpool/additionaldir/testclone
bash-3.00# zfs get sharenfs,quota testpool/additionaldir/testclone
NAME PROPERTY VALUE SOURCE
testpool/additionaldir/testclone sharenfs on local
testpool/additionaldir/testclone quota 500M local
Note:
512 bytes = 1 sector
1 block = 1 sector
1 block = 1/2 kb
32 block = 16kb
64 block = 32kb
128 block = 64kb
By default interlace = 64kb
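The block arithmetic above (512-byte blocks, as used in SVM interlace specs like "-i 32b") can be checked with shell arithmetic; blocks_to_kb is a helper name of my own:

```shell
#!/bin/sh
# Convert a count of 512-byte blocks to kilobytes.
blocks_to_kb() { echo $(( $1 * 512 / 1024 )); }
blocks_to_kb 32    # 16 (matches "32 block = 16kb" above)
blocks_to_kb 128   # 64 (the default 64kb interlace)
```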
3.c. /etc/lvm/mddb.cf
i. Created whenever the 'metadb' command is run, and used by the 'metainit' command to find the
locations of the meta device state databases.
ii. Never edit this file.
iii. Each meta device state database replica has a unique entry in the file.
# cat /etc/lvm/mddb.cf
#metadevice database location file do not hand edit
#driver minor_t daddr_t device id checksum
cmdk 7 16 id1,cmdk@ASEAGATE_ST32500NSSUN250G_0732B4B31T=5QE4B31T/h-4269
cmdk 7 8208 id1,cmdk@ASEAGATE_ST32500NSSUN250G_0732B4B31T=5QE4B31T/h-12461
cmdk 7 16400 id1,cmdk@ASEAGATE_ST32500NSSUN250G_0732B4B31T=5QE4B31T/h-20653
4. /kernel/drv/md.conf
a. Used by the metadisk driver when it is initially loaded.
b. The only field commonly modified in the file is "nmd".
nmd = represents the number of meta devices supported by the driver
c. If the field is modified, perform a reconfiguration boot to build the meta devices. (ok boot -r)
first blk   block count
16          8192          /dev/dsk/c1t1d0s5
8208        8192          /dev/dsk/c1t1d0s5
16          8192          /dev/dsk/c1t1d0s4
8208        8192          /dev/dsk/c1t1d0s4
bash-3.00# metastat -p
d0 1 1 c1d0s3
bash-3.00# metastat -p
d10 1 3 c1d0s4 c1d0s5 c1d0s6 -i 32b
d0 1 1 c1d0s3
Creating a mirror:
1. A mirror is a meta device composed of one or more sub-mirrors.
Sub-mirror:
a. Is made up of one or more striped or concatenated meta devices.
b. Each meta device within a mirror is called a sub-mirror.
2. Mirroring data provides maximum data availability by maintaining multiple copies of the
data.
3. The system must contain at least 3 state database replicas before creating mirrors.
4. Any file system, including / (root), swap and /usr, or any application such as a database, can use a
mirror.
Soft partition:
1. Dividing one logical component (meta device) into many soft partitions. It can be laid out over
physical disk/slices.
# metainit d5 1 1 c0t11d0s6
Consider the size of the c0t11d0s6 size is 10gb.
Then the size of the meta device d5 is of 10gb.
# metainit d61 -p d5 1g
metainit = to create the meta device
-p = to create a soft partition
d61 = the new meta device to be created
d5 = the existing meta device, 10 GB in size
1g = the size of the new soft partition d61
# metainit d62 -p d5 1g
# metaclear d61
Removes the soft partition d61 only.
# metaclear -p d5
Will remove all soft partitions from d5.
A soft partition is a means of dividing a disk or volume into as many partitions as needed, overcoming
the current limitation of 7 slices. This is done by creating logical partitions within physical disk slices or
logical volumes. Soft partitions differ from hard disk slices created using the 'format' command in that
soft partitions can be non-contiguous, whereas hard disk slices are contiguous. Therefore soft
partitions can cause I/O performance degradation.
Note:
1. No automatic problem detection is available in SVM.
2. The SVM software does not detect problems with the state database/replicas until there is a change
to an existing SVM configuration and an update to the database replicas is required. If insufficient state
database replicas are available, you'll need to boot to single-user mode and delete/replace enough of
the corrupted/missing database replicas to achieve a quorum.
bash-3.00# cat /etc/lvm/md.cf
Raid-5:
# metainit <meta_device_name> -r <component1> <component2> <component3>
# metainit d100 -r c0t0d0s4 c0t1d0s4 c0t2d0s4
-r = specifies that the configuration is RAID level 5.
# metastat | grep %
bash-3.00# pwd
/mnt/concat_grow
bash-3.00# growfs -M /mnt/concat_grow /dev/md/rdsk/d50
/dev/md/rdsk/d50: Unable to find Media type. Proceeding with system determined parameters.
/dev/md/rdsk/d50: 3047424 sectors in 93 cylinders of 128 tracks, 256 sectors
1488.0MB in 47 cyl groups (2 c/g, 32.00MB/g, 15040 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 65824, 131616, 197408, 263200, 328992, 394784, 460576, 526368, 592160,
2434336, 2500128, 2565920, 2631712, 2697504, 2763296, 2829088, 2894880,
2960672, 3026464
bash-3.00# pwd
/mnt/concat_grow
Growing the size for raid-5 component while it's mounted, w/o loss of data:
bash-3.00# df -h
# metastat -p
d5 -m d0 d10 1
d0 1 1 c2t2d0s0
d10 1 1 c2t14d0s0
/dev/md/rdsk/d1000: Unable to find Media type. Proceeding with system determined parameters.
/dev/md/rdsk/d1000: 3047424 sectors in 93 cylinders of 128 tracks, 256 sectors
1488.0MB in 47 cyl groups (2 c/g, 32.00MB/g, 15040 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 65824, 131616, 197408, 263200, 328992, 394784, 460576, 526368, 592160,
2434336, 2500128, 2565920, 2631712, 2697504, 2763296, 2829088, 2894880,
2960672, 3026464
bash-3.00# df -h
bash-3.00# metastat -p
d5 -m d0 d10 1
d0 1 1 c2t2d0s0
d10 1 1 c2t14d0s0
d1000 -r c2t4d0s1 c2t14d0s1 c2t12d0s1 c2t11d0s1 -k -i 32b -o 3
d20 -r c2t4d0s0 c2t14d0s3 c2t12d0s0 -k -i 32b
d50 3 1 c2t4d0s3 \
1 c2t4d0s4 \
1 c2t4d0s5
d502 -p d500 -o 2097216 -b 204800
d500 1 7 c2t2d0s1 c2t2d0s3 c2t2d0s4 c2t2d0s5 c2t14d0s4 c2t14d0s5 c2t14d0s6 -i32b
d501 -p d500 -o 32 -b 2097152
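The soft-partition offsets above are internally consistent: d501 starts at block 32 and runs for 2097152 blocks, and d502 begins 32 blocks after d501 ends (the gap appears to hold the next extent's watermark). Checking the arithmetic:

```shell
d501_offset=32
d501_blocks=2097152
gap=32                 # gap between extents (appears to hold the watermark)
d502_offset=$(( d501_offset + d501_blocks + gap ))
echo "$d502_offset"    # 2097216, matching the -o value shown for d502
```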
# metattach d0 c2t11d0s3
d0: component is attached
# metattach d10 c2t11d0s4
d10: component is attached
# growfs -M /mnt/mirror /dev/md/rdsk/d5
/dev/md/rdsk/d5: Unable to find Media type. Proceeding with system determined parameters.
/dev/md/rdsk/d5: 2031616 sectors in 62 cylinders of 128 tracks, 256 sectors
992.0MB in 31 cyl groups (2 c/g, 32.00MB/g, 15040 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 65824, 131616, 197408, 263200, 328992, 394784, 460576, 526368, 592160,
1381664, 1447456, 1513248, 1579040, 1644832, 1710624, 1776416, 1842208,
1908000, 1973792
bash-3.00# df -h
ROOT MIRRORING:
WHAT TO DO?
1. Ensure that the alternate disk has equal geometry & size.
2. Take backup of /etc/system and /etc/vfstab file.
3. Copy VTOC from root (booting) disk to the alternate disk.
4. Ensure that the state database is created.
5. Convert the root slice as a logical component forcefully.
6. Create another metadevice for duplicating root slice.
7. Convert the swap slice as a logical component forcefully.
8. Create another metadevice for duplicating the swap slice.
9. Associate first sub-mirror (for root) to mirror root.
10. Associate first sub-mirror (for swap) to mirror swap.
11. Update the system & vfstab file by running 'metaroot' command.
12. Reboot the system.
13. Associate the second sub-mirror to mirror root.
14. Associate the second sub-mirror to mirror swap.
15. Install boot block or GRUB on the alternate root slice.
16. See the physical path for the alternate disk.
17. Set the alias name at the OK prompt.
18. Set the boot sequence at the OK prompt.
How to do?
1. # format
To create the slices manually.
2. # cp /etc/system /etc/system.orig
# cp /etc/vfstab /etc/vfstab.orig
3. # prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t12d0s2
Note: fmthard -> populates the label on the new hard disk drive
4. # metadb -afc3 c0t8d0s7 c0t10d0s7 c0t12d0s7
(if the replicas are existing, this step can be avoided)
5. # metainit -f d5 1 1 c0t0d0s0
Forcefully converting the root slice into a metadevice (primary root slice)
6. # metainit d10 1 1 c0t12d0s0
Creating another metadevice for root (secondary)
7. # metainit -f d25 1 1 c0t0d0s1
Forcefully converting the swap slice into a metadevice (primary swap slice)
8. # metainit d30 1 1 c0t12d0s1
Creating another metadevice for swap (secondary)
9. # metainit d15 -m d5
Associating d5 with d15
Here d15 = main mirror for root
10. # metainit d35 -m d25
Associating d25 with d35
Here d35 = main mirror for swap
11. # metaroot d15
a) 'metaroot' edits the file /etc/system and /etc/vfstab so that the system may be booted with the
root filesystem on a meta device.
b) 'metaroot' may also be used to edit the files so that the system may be booted with root file
system on a conventional disk device.
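The remaining steps (12-18) of the procedure, sketched with the same device names as above; the physical device path and alias in steps 16-18 are placeholders, so substitute the path reported on your own system:

```shell
init 6                                    # 12. reboot onto the mirrored root
metattach d15 d10                         # 13. attach the second submirror to mirror root
metattach d35 d30                         # 14. attach the second submirror to mirror swap
installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk \
  /dev/rdsk/c0t12d0s0                     # 15. boot block on the alternate root slice (SPARC)
ls -l /dev/dsk/c0t12d0s0                  # 16. note the physical device path it links to
# 17-18. at the OK prompt (the path below is a placeholder):
#   ok nvalias altboot /pci@1f,0/pci@1/scsi@8/disk@c,0:a
#   ok setenv boot-device disk altboot
```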
bash-3.00# metastat -p
d5 -m d0 d10 1
d0 1 1 c0t8d0s0
d10 1 1 c0t10d0s0
d15 1 1 c0t12d0s0
hsp001 c0t9d0s0 c0t9d0s1 c0t9d0s3
hsp001: 4 hot spares
Device Status Length Reloc
c0t9d0s0 Available 1027216 blocks Yes
c0t9d0s1 Available 1027216 blocks Yes
c0t9d0s3 Available 1027216 blocks Yes
c0t9d0s4 Available 1027216 blocks Yes
d5 -m d0 d10 1
d0 1 1 c0t8d0s0
d10 1 1 c0t10d0s0
d15 1 1 c0t12d0s0
hsp001 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4 c0t9d0s5
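The pool shown above is managed with 'metahs' and 'metaparam'; a sketch of the usual operations (the device and pool names are taken from the output above):

```shell
metahs -a hsp001 c0t9d0s5    # add another slice to the hot spare pool
metaparam -h hsp001 d0       # associate the pool with submirror d0
metahs -e c0t9d0s1           # re-enable a spare after its slice has been repaired
```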
d5 -m d0 d10 1
d0 1 1 c0t8d0s0 -h hsp005
d10 1 1 c0t10d0s0 -h hsp005
d100 -r c0t8d0s1 c0t10d0s1 c0t12d0s1 -k -i 32b -h hsp005
d15 1 1 c0t12d0s0
hsp005 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4
d5: Mirror
Submirror 0: d0
State: Okay
Submirror 1: d10
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
d0: Submirror of d5
State: Okay
Hot spare pool: hsp005
Size: 1015808 blocks (496 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t8d0s0 0 No Okay Yes
d10: Submirror of d5
State: Okay
Hot spare pool: hsp005
Size: 1015808 blocks (496 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t10d0s0 0 No Okay Yes
d100: RAID
State: Okay
Hot spare pool: hsp005
Interlace: 32 blocks
Size: 2031616 blocks (992 MB)
(Output truncated)
# metastat
d0: Submirror of d5
State: Resyncing
Hot spare pool: hsp005
Size: 1015808 blocks (496 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t8d0s0 0 No Resyncing Yes c0t9d0s1
d10: Submirror of d5
State: Okay
Hot spare pool: hsp005
Size: 1015808 blocks (496 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t10d0s0 0 No Okay Yes
Device Relocation Information:
Device Reloc Device ID
c0t8d0 Yes id1,sd@SSEAGATE_ST318203LSUN18G_LR901376000010210UDS
c0t9d0 Yes id1,sd@SSEAGATE_ST318203LSUN18G_LRA609240000W0270ZT6
bash-3.00# vi /etc/lvm/md.cf
"/etc/lvm/md.cf" 7 lines, 140 characters
# metadevice configuration file
# do not hand edit
# create a replica
mddb01 -c6 c0t8d0s0
# creating a metadevice
d0 1 1 c0t8d0s3
d10 1 1 c0t9d0s3
Disk Set
The disk set feature lets us set up groups of host machines and disk drives in which all of the hosts in
the set are connected to all of the drives in the set.
Types of diskset:
a. Local diskset
b. Shared diskset
Local Diskset:
1. Each host in a disk set must have a local disk set.
2. Local disk set for a host consists of all drives which are not part of a shared diskset.
3. The host's local metadevice configuration is contained within this local diskset in the local
metadevice state database/replica.
Shared Diskset:
1. Is a grouping of 2 hosts and disk drives in which all the drives are accessible by both hosts.
Condition:
DiskSuite requires that the device names be identical on each host in the disk set.
2. There is one metadevice state database per shared diskset.
Note:
1. Drives in a shared diskset must not be in any other diskset.
2. None of the partitions on any of the drives in a diskset can be mounted on, swapped on or part of a
local metadevice.
3. All the drives in a shared diskset must be accessible by both hosts in the diskset.
4. Metadevices & hot spare pools in any diskset must consist of drives within that diskset.
Likewise, metadevices & hot spare pools in the local diskset must be made up of drives from within
the local diskset.
Naming convention:
1. Metadevices within the local diskset use the standard disk suite naming conventions.
2. Metadevices within the shared diskset use the following conventions.
/dev/md/setname/(r)dsk/dnumber
(usually 0 to 127)
Eg:
/dev/md/dataset/(r)dsk/d10
3. Hotspare:
setname/hsp000
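Putting the conventions together, a shared diskset might be created and used along these lines (the host and drive names are hypothetical; 'dataset' is the set name from the example above):

```shell
metaset -s dataset -a -h host1 host2    # create set 'dataset' with two hosts
metaset -s dataset -a c2t4d0 c2t5d0     # add whole drives to the set
metaset -s dataset -t                   # take ownership of the set on this host
metainit -s dataset d10 1 1 c2t4d0s0    # create a metadevice inside the set
```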