SUN SOLARIS
SA-I
Vi Editor Commands
OK Prompt Commands
Abbreviation
Important Locations
/usr/bin -> for all users
/usr/sbin -> only for root
SUN SOLARIS
SA-II
Introduction To IP - Version 4 (IPv4)
/etc/rcS.d/S30network.sh -> Solaris 8 | /etc/rcS.d/S30rootusr.sh -> Solaris 7 & below
During booting of the system these files get executed. They use the ifconfig utility & search
/etc/hostname.xx# to identify the interface instance | xx -> hme/NIC type
/etc/inet/hosts -> this can be used instead of DNS, NIS, NIS+. It contains the IP address,
hostname, nickname and comments.
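A minimal /etc/inet/hosts might look like the sketch below; the addresses and names are only
illustrative, not part of any real setup.
----------------------example entries-------------------
127.0.0.1     localhost
192.168.1.10  sunbox    sb    # this host, nickname sb
192.168.1.20  nfsserver nfs1  # a file server
--------------------------------------------------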
SWAP Expansion
#swap -s -> to display the virtual swap area
#swap -l -> to display the physical swap areas
Adding swap space by slice:
_ create a slice of the required size on the hard disk
_ add an entry for that slice in /etc/vfstab for a permanent effect
_ #swap -a /dev/dsk/c#t#d#s# | to activate the swap space
Adding swap space by file:
_ #mkfile <size#> <file name> | create the swap file
_ #swap -a /<path of above file> | to activate the swap file
_ also add an entry in /etc/vfstab for a permanent effect
e.g: /export/swapfile - - swap - no -
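As a worked example of the steps above (the size and path are only assumptions), a 512 MB swap
file could be added like this:
#mkfile 512m /export/swapfile | create a 512 MB swap file
#swap -a /export/swapfile | activate it immediately
#swap -l | verify that the new swap area is listed
and add this /etc/vfstab line so it survives a reboot:
/export/swapfile - - swap - no -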
Removing a swap space:
_ #swap -d <path of swap area> -> this is to de-activate the swap area
_ remove the entry from the /etc/vfstab
_ delete the slice or file to free the utilized space
Crash Dump
#dumpadm -> to configure the crash dump device for any forthcoming fault
Procedure to read the crash message:
_ #cd /var/crash/`uname -n`/ -> change into this location
_ #mdb unix.# vmcore.# | mdb or adb can be used
when you execute the above command the prompt will change to ">"
(if there is no ">" prompt, type $p to get it)
> ::status | to display the panic/crash status message
> $c | to display the stack trace
> $q | to quit the prompt
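A sample read-out, assuming the latest dump pair is unix.0 and vmcore.0 (use whatever numbers
exist on your system):
#cd /var/crash/`uname -n`
#mdb unix.0 vmcore.0
> ::status | shows the panic string, OS release and dump time
> $c | stack back-trace of the panicking thread
> $q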
Auto FS
_ It provides automatic mounting using the NFS protocol
_ It is a client-side service
Components of auto-mount facility:
AutoFS file system, automountd daemon and the automount command. The AutoFS file system
mount-point is defined in the automount maps on the client system.
Configuring Master Map:
For all types of automount sharing methods the entry must be made here; hence this remains the
base map for all sharing methods. Edit /etc/auto_master
----------------------entries-------------------
+auto_master # + -> include the auto_master map from the name service (NIS/NIS+)
/home auto_home -browse (or) -nobrowse
<mount point> <map name> <mount options>
# -browse -> allow the users to view the mount-points
--------------------------------------------------
#automount -> to re-read the above map file
Then the shared resources get mounted below /home/<mount point given in the auto_home map>
Direct map entries:
This map is only for general directories and commonly shared files (e.g. software). In
case one server is down, the data can be obtained from another. Edit /etc/auto_direct
----------------------entries-------------------
/usr/share/man -ro,soft ser1,ser2,ser3:/usr/share/man
<local mnt pt> <mount options> <hostname(s)>:<path>
/- auto_direct -ro -> put this entry in auto_master
--------------------------------------------------
#automount -> to re-read the above map file
Indirect Map Entries:
This mapping is for user home directories and for their files.
Edit /etc/auto_home
----------------------entries-------------------
magi server_primary:/export/home/babu
<local mnt pt> <hostname>:<home dir path>
/home auto_home -nobrowse -> this entry in auto_master
--------------------------------------------------
Procedure:
_ Create the account for the user in both servers with the same UID & GID
_ Put the entry in auto_home as shown above for the desired user, e.g. babu
_ #passwd -h babu | in the secondary server, change the home dir as desired:
(/export/home/babu) -> /home/magi
_ #cd /home/magi | in the secondary server, to enter the primary server's home dir
After all these files are edited just type #automount -t <time#> -v to activate the desired
mount-point. If a time is given with -t then the FS stays mounted for up to that time when it is
not in use (see the example below).
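For example, with the babu/magi names used above (the 600-second timeout is only an assumption):
#automount -t 600 -v | re-read the maps; idle file systems unmount after 600 seconds
#ls /home/magi | the first access triggers the NFS mount
#df -k /home/magi | should show server_primary:/export/home/babu mounted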
Name Server
Name service        Switching file location
Local files         /etc/nsswitch.files
DNS                 /etc/nsswitch.dns
NIS                 /etc/nsswitch.nis
NIS+                /etc/nsswitch.nisplus
LDAP                /etc/nsswitch.ldap
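For example, to switch name resolution to DNS the matching template is simply copied over the
active file (a sketch; keeping a backup first is assumed good practice):
#cp /etc/nsswitch.conf /etc/nsswitch.conf.orig | keep a backup of the current file
#cp /etc/nsswitch.dns /etc/nsswitch.conf | the hosts line then typically reads: hosts: files dns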
NIS
The NIS maps are located at /var/yp/<domainname>/<map>.byname.pag & .dir (e.g. hosts.byname)
Similarly /var/yp/<domainname>/<map>.byaddr.pag & .dir (e.g. hosts.byaddr)
Daemons used: ypserv, ypbind, rpc.yppasswdd, ypxfrd, rpc.ypupdated
All five are utilized by the server and only the first two by clients. Through the NIS service a
centralized user account can be provided.
Configuring a machine as the NIS master server :
_ #cp /etc/nsswitch.nis /etc/nsswitch.conf
_ #domainname accel.com
_ #touch /etc/defaultdomain
_ #domainname > /etc/defaultdomain
_ #cd /etc
_ #touch ethers bootparams locale timezone netgroup netmasks
_ #ypinit -m -> To initialize the master server
_ #/usr/lib/netsvc/yp/ypstart -> To start the NIS daemons
Configuring a machine as the NIS client :
The steps are the same as above, but instead of ypinit -m give ypinit -c to start the client
service, and also put the server's IP entry in /etc/hosts.
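A minimal client-side sequence, assuming the domain accel.com from above and a master server
called nisserver at 192.168.1.5 (the server name and IP are only placeholders):
#cp /etc/nsswitch.nis /etc/nsswitch.conf
#domainname accel.com
#domainname > /etc/defaultdomain
#echo "192.168.1.5 nisserver" >> /etc/hosts | the client must know the server's IP
#ypinit -c | give nisserver when asked for the server list
#/usr/lib/netsvc/yp/ypstart | start ypbind on the client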
Root Mirroring
_ create a slice of 50 MB for the state database || let it be slice 7, i.e. s7
_ #prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c#t#d0s2 || copy the VTOC to the second disk
_ #metadb -a -f -c 3 c0t0d0s7 c0t#d0s7 c#t#d0s7 || To create 3 replicas on each state-db slice | 3 hard
disks because of the (n/2)+1 replica rule
_ #metadb -i ||To check for status
_ #metainit -f d10 1 1 c0t0d0s0 ||To create primary sub-mirror
_ #metastat
_ #metainit -f d20 1 1 c0t#d0s0 ||To create secondary sub-mirror
_ #metainit d50 -m d10 || mapping to main mirror
_ #metaroot d50 ||To put an auto entry in /etc/vfstab (see the example after this procedure)
_ reboot the system now # init 6
_ #metattach d50 d20 ||resynchronization starts now
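As a rough illustration (device names follow the example above), after #metaroot d50 the root line
in /etc/vfstab should read:
/dev/md/dsk/d50 /dev/md/rdsk/d50 / ufs 1 no -
and #metastat d50 can be used to watch the resynchronization; the mirror shows a resync-in-progress
state until the copy completes.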
Replacing the defective disk in case of failure :
# metareplace d50 <defective slice c#t#d#s#> <newly attached slice c#t#d#s#>
Breaking the Mirror :
_ #metadetach d50 d20 ||breaking secondary sub-mirror
_ #metaroot /dev/dsk/c0t0d0s0 ||roll back the vfstab entry
_ #init 6
_ #metaclear -r d50 ||removing main mirror
_ #metaclear d20 ||removing secondary mirror
_ #metadb -d -f c0t0d0s7 c0t#d0s7 c#t#d0s7 ||removing the replicas
Remote Access
Put a "+" or "<hostname> <user name>" in the ".rhosts" file in the home directory, or else in
/etc/hosts.equiv, so that users from other systems can log in to the local system remotely.
Command                                   Function
rcp <host name>:<path> <destination>      Remote copy
rlogin <host name>                        Remote login, similar to telnet
rsh <host name> mt status                 To utilize the other system's h/w | e.g. tape drive

Ports                                     Range
Well-known ports                          Up to 1024 -> located at /etc/inet/services
Dynamic ports                             From 1025 to 65535 -> at /etc/inet/services
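A small illustration, with clienthost and user babu as placeholder names:
#echo "clienthost babu" >> ~/.rhosts | allow babu on clienthost to come in without a password
#rcp clienthost:/etc/motd /tmp/motd.copy | pull a file from the remote machine
#rsh clienthost uname -a | run a single command remotely
#rlogin clienthost | interactive remote login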
SUN CLUSTER
Required Components:
Terminal Concentrator, Cluster hosts, Storage array, Public network, Administrative workstation
(with the Sun Cluster console software SUNWccon). It won't support multiple connections from the
same server to a single storage box.
Topologies: Clustered Pair Topology, N+1 Topology, Pair + N Topology.
Clustered Pair Topology : Two or more pairs of nodes operating under a single cluster
administrative framework. The nodes are configured in pairs (two, four, six or eight nodes).
N+1 Topology : One system acts as the back-up for the other systems in the group.
Pair + N Topology : A pair of nodes is added to an existing cluster pair only for the purpose of
storage sharing; there is no direct connection between them.
NOTE:
Once the cluster is installed the init command won't work to shut down the system; use
# scshutdown -y -g <grace period in seconds ##>
Configuration steps of sun cluster 3.0
Steps involved :
_ Install DiskSuite
_ Change SCSI Initiator ID
_ Initial Installation - Sun Cluster Software Installation
_ Configuring Quorum device
_ Disk Set using DiskSuite
_ Data Service Configuration
Changing SCSI Initiator ID : { do these steps only on any one server/node }
Ok: probe-scsi-all | To view the physical disk path of the hard disk | note this for disk set
Ok: nvedit
0: probe-all install-console banner
1: cd < physical path of the external hard disk, e.g. /pci@1f,2000/scsi@1 >
2: 6 “ scsi-initiator-id” integer-property
3: device-end
4: cd < physical path of the external hard disk, e.g. /pci@1f,2000/scsi@1 >
5: 6 “ scsi-initiator-id” integer-property
6: device-end
7: banner | < to save, press Ctrl-C >
Ok: nvstore
Ok: setenv use-nvramrc? true
Ok: setenv auto-boot? true
Ok: reset-all
Ok: cd < physical path of the external hard disk, e.g. /pci@1f,2000/scsi@1 >
Ok: .properties | This is to view the changed SCSI initiator ID
Creating Slices : { do this step in all the servers/nodes }
For a Sun Cluster, 4 slices are important: root, swap, /globaldevices, and 50 MB for the replica.
Edit /etc/hosts : { do this step in all the servers/nodes }
-----------------------entry-----------------------
<server ip address> <host name>
<server ip address> <host name>
<service/logical ip address> <service/logical name, e.g. nfs-server> | this is the virtual IP
----------------------------------------------------
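For instance (all addresses and names are placeholders only):
----------------------example entries-------------------
192.168.1.11 clusternode1
192.168.1.12 clusternode2
192.168.1.50 nfs-server | virtual/logical IP shared by the cluster service
--------------------------------------------------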
Assigning Logical or Virtual IP Address For the Service : { do in all servers/nodes }
# ifconfig hme0:1 plumb
# ifconfig hme0:1 <service ip address> netmask <netmask> up
# init 6 | restart all the nodes at the same instant
Cluster Installation : { do in all servers/nodes }
# cd /cdrom/cdrom0/SunCluster_3.0/Tools
# ./scinstall | An installation menu will appear; provide all the details in it
Add the root user to the sysadmin group, i.e. /etc/group { do in all servers/nodes }
Entry : sysadmin::14:root | add this entry in /etc/group
Configure the Quorum Device : { do in any one server/node } | for voting purpose
# scdidadm -L | to see the DID instance name on the storage box
# scsetup | interactive menu
continue (y/n) : y
Quorum device : <DID instance name>
Add another quorum device (y/n) : n
Reset installmode (y/n) : y
Or
# scconf -a -q globaldev=d9
# scconf -c -q reset
# scstat -q | to see status
Data Service Configuration : Services like Apache, Oracle, DNS, iWS etc. Insert the Sun
Cluster 2nd CD & give ./scinstall, then choose option 4 to add support for a data service.
In case of NFS select it and quit, then add the required patches for the service, e.g. 111555-
07, and then restart the system with scshutdown -y -g0
FLASH Installation
Flash installation is taking and saving the configuration & settings of your system to an
existing extra disk as an archive. In case of any failure of your system, you put the extra disk
in some other machine and restore the configuration by pointing to the archive while
re-installing the OS on your system.
#flarcreate -n <archive_name> -a <author name> -R / -x </var/tmp>
</flashdisk/file.archive - path on the destination disk> -> To create a flash archive.
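For instance (the archive name, author and paths are only placeholders):
#flarcreate -n websrv_image -a admin -R / -x /var/tmp /flashdisk/websrv.archive
#flar info /flashdisk/websrv.archive | display the archive identification section to verify it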