UNIX Tips - 1
This is a series of pages covering the UNIX commands that Middleware admins, developers, and architects use regularly. The main focus is on putting high-performance tricks in your hands and templatising your work so you can succeed faster. We cover both Solaris and Linux, noting where their commands differ, to make it more comfortable to play your role.
Here I assume the old string is 'one', the new string is 'two', and all the files are in the same directory. If the files are spread across different directories, you need one more loop to traverse the directory tree.
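The exact command is not shown above, so here is a minimal sketch of such a replacement loop. The file names and the *.txt extension are assumptions for the demo; Solaris sed has no -i option, hence the temp-file step:

```shell
# Hypothetical demo files in a scratch directory.
mkdir -p /tmp/sed-demo && cd /tmp/sed-demo
printf 'one file\n' > a.txt
printf 'number one\n' > b.txt

# Replace every 'one' with 'two' in each file of the current directory.
# Solaris sed lacks -i, so write to a temp file and move it back.
for f in *.txt; do
  sed 's/one/two/g' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done

cat a.txt    # -> two file
```

For files spread over a directory tree, drive the same loop body with find . -name '*.txt' instead of the shell glob.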
11. How do you find the number of processors on a SPARC machine?
$ psrinfo -p
4
14. How to split the CLASSPATH contents line by line on the path separator (:)?
Ans: The command is a combination of grep and awk with a loop, as follows:
grep CLASSPATH anylogfile.log | awk -F":" '{for (i = 1; i <= NF; i++) print $i}'
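To see what the awk part does on its own, here is the same field loop run on a literal, invented classpath value instead of a log file:

```shell
# The jar paths here are made up for the demo.
echo "/opt/lib/a.jar:/opt/lib/b.jar:." | awk -F":" '{for (i = 1; i <= NF; i++) print $i}'
# prints:
# /opt/lib/a.jar
# /opt/lib/b.jar
# .
```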
16. How to find the processor type (32 bit or 64 bit) on a SPARC machine?
/usr/bin/isainfo -kv
17. How to check whether the installed Java supports 32 bit or 64 bit?
truss -t exec java -d64 -version
This will tell you which JDK is supported on your Solaris machine.

How to debug what is happening internally, at the system-call level, while starting a new service? strace (truss on Solaris) is a wonderful debugging tool that every middleware architect/admin/developer should know. While installing WebLogic or any other software, it helps you find out what is happening internally, which system calls are being made, and where the process got stuck. A sample on Linux is as follows:
$ strace -fv -o output -p <WL PID>
Disk Usage commands
A production environment is critical to work in, so double-checking every action we perform is the best way.
1. Check the file size after combining.
2. Check that the transferred file is the same size as the source.
First, verify that the combined file is correct:
du -k [path|.]
This gives you the size of each folder and also the total of the folder contents together.
If the disk usage of the source files is approximately equal to the size of the combined file, it is fine; otherwise, troubleshoot the combining process.
cksum filename
checksum bytecount filename

The first number is the CRC checksum of the file; the second number is the file size in bytes. The checksum is the significant part: after transferring the file to the other box, run cksum there as well and verify that the checksum and size match the source side.
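This verification is easy to script: compute the checksum on both copies and compare the first two fields. A small sketch, with invented file names and a local cp standing in for the real transfer:

```shell
src=/tmp/cksum-src.dat
dst=/tmp/cksum-dst.dat
printf 'payload data\n' > "$src"
cp "$src" "$dst"          # stands in for the real transfer

# cksum prints: <crc> <bytes> <name>; compare crc and bytes only.
a=$(cksum "$src" | awk '{print $1, $2}')
b=$(cksum "$dst" | awk '{print $1, $2}')
if [ "$a" = "$b" ]; then echo MATCH; else echo MISMATCH; fi   # -> MATCH
```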
Compression commands
If the file is very large, compress it in two steps with the tar and gzip commands.
tar cvf tarfile.tar source-file(s)
This creates a new .tar file, combining the contents if the source is a folder. Note that tar on its own only archives; the compression comes next.
gzip tarfile
gzip (GNU zip) is a powerful compressor: it typically shrinks the file to a fraction of its original size and generates a .gz file.
gunzip zippedfile.gz
Here the gunzip command extracts (unzips) the compressed file's content. The two commands above can be combined with a pipe as follows:
tar cvf - [source|.] | gzip > ~/backup-120908.tar.gz
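The same pipe in reverse unpacks such an archive. An end-to-end sketch, with paths invented for the demo:

```shell
# Build a demo folder to archive.
mkdir -p /tmp/tar-demo/src
echo data > /tmp/tar-demo/src/file1
cd /tmp/tar-demo

# Archive and compress in one pipe, as above.
tar cf - src | gzip > backup.tar.gz

# List the archive contents without extracting.
gunzip -c backup.tar.gz | tar tf -
# To actually extract: gunzip -c backup.tar.gz | tar xf -
```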
The -r option copies directories recursively. The -p option preserves the modification times and permissions of the source, so the destination files get the same permissions too. Another form of scp:
scp -rp sourcepath user@host:destinationAbsolutePath
When you are copying between different remote machines but under the same user, the command goes like this:
scp -rp sourcepath @host:~/relativePath
There is no need to give a user name before the @ symbol, and you can use a relative path for the destination. This trick saves you time.

Conclusion
In my experiments I found that scp is much faster than sftp. The status of the copy is clear from the progress bar shown below the scp command, which also shows what percentage of the original file size has been copied to the target. This lets you make wise decisions about transferring files.
If a user issues the startWebLogic.sh command to start the WebLogic server and the SSH or PuTTY window is closed, the process stops too. Here I suggest you create a shell script that starts up your WebLogic Server and keeps it alive. Name the script 'startAdmin.sh' or whatever is convenient:

clear
nohup /export/home/wluser/domains/wlscldom/startWebLogic.sh >>$HOME/$USERNAME.log 2>&1 &
echo
tail -f $HOME/$USERNAME.log

You can replace USERNAME with your server name as well. Now let us understand what the above script does for us.
1. nohup runs the Unix process on the server side without a terminal window; even if we close the SSH or PuTTY window, the WebLogic server keeps running.
2. The console log is redirected with >>, and you can specify the desired log location here. Standard error (2) is redirected (>) to standard output (&1). Some users give this log a .out extension, because it is the standard output of the console, while others use .log.
3. The '&' runs the startWebLogic.sh script in the background.
4. Then tail the redirected log file.
Some situations demand that you run a bash/sh shell script on every machine. If you have passwordless SSH connectivity, you can simply do the following trick:

ssh user@remotehost 'bash -s' < commonScript.sh

or, to execute a few commands on a remote machine:

ssh user@host <<'ENDSH'
# commands to run on remote host
ENDSH

A critical demand on the above command is that you may need to pass arguments. How to do it? No worries! Just set the ARG(s) you wish to pass and use an unquoted heredoc delimiter, so the local shell expands them before the script is sent. (A quoted delimiter like <<'ENDSH' suppresses that expansion, and an exported variable is not carried across the ssh session by itself.)

ARG1='/my/home/path'
ssh user@remotehost 'bash -s' <<ENDSH
# commands to run on remote host
echo $ARG1
ENDSH
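The quoting of the heredoc delimiter is the whole trick. This local demo needs no ssh: bash -s stands in for the remote shell, and it shows what each form of the delimiter produces:

```shell
ARG1='/my/home/path'   # deliberately not exported

# Unquoted delimiter: the local shell expands $ARG1 before the
# script reaches bash -s, so the value travels with the script.
bash -s <<ENDSH
echo "expanded: $ARG1"
ENDSH

# Quoted delimiter: no local expansion; the inner shell sees a
# literal $ARG1, which is unset there, so nothing follows the colon.
bash -s <<'ENDSH'
echo "literal: $ARG1"
ENDSH
```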
In Oracle Enterprise Linux 5 there are many options for viewing RAM statistics, but the easiest is the free command.
$ free -m
             total       used       free     shared    buffers     cached
Mem:          3949       2822       1126          0        479       1949
-/+ buffers/cache:        393       3555
Swap:         4095          0       4095
Another option on Linux is top, which also shows memory and swap statistics.
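When a script needs just a single figure rather than the whole table, the same numbers can be read straight from /proc/meminfo on Linux. A small sketch (the MB conversion is mine; the kernel reports these values in kB):

```shell
# Total RAM in MB, read from /proc/meminfo (values there are in kB).
awk '/^MemTotal:/ { printf "%d MB total\n", $2 / 1024 }' /proc/meminfo

# Free RAM in MB.
awk '/^MemFree:/ { printf "%d MB free\n", $2 / 1024 }' /proc/meminfo
```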
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
varrun                393M  144k  393M   1% /var/run
varlock               393M     0  393M   0% /var/lock
procbususb            393M  123k  393M   1% /proc/bus/usb
udev                  393M  123k  393M   1% /dev
devshm                393M     0  393M   0% /dev/shm
lrm                   393M   35M  359M   9% /lib/modules/2.6.20-15-generic/volatile
/dev/sdb5              29G  5.4G   22G  20% /media/docs
To display the files/directories that use large amounts of disk space at the top, which makes an administrator's life easy, filter the megabyte-sized entries and sort them largest first:

$ du -ah | grep M | sort -rn | head
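A reproducible sketch of the same idea, with demo files whose names and sizes are invented; using -k keeps every size in KB so the numeric sort is unambiguous:

```shell
mkdir -p /tmp/du-demo && cd /tmp/du-demo
# Create one ~2 MB file and one tiny file.
dd if=/dev/zero of=big.bin bs=1024 count=2048 2>/dev/null
echo small > small.txt

# -k reports sizes in KB, so sort -rn orders them largest first.
du -k big.bin small.txt | sort -rn
```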
To summarise total disk usage across all filesystems, feed df -k into awk, summing the total (column 2) and used (column 3) kilobytes while skipping the header line:

$ df -k | awk 'NR > 1 { t += $2; u += $3 } \
  END { GB2 = 1024*1024; printf "%d of %d GB in use.\n", u/GB2, t/GB2 }'
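To check the arithmetic without a real df, the same per-filesystem totalling can be fed a fabricated df-style listing (numbers invented: one filesystem of 1 GB, half used):

```shell
# Fabricated single-filesystem df -k output: header line, then
# 1048576 KB total (1 GB) with 524288 KB (0.5 GB) used.
printf 'Filesystem 1K-blocks Used Available Use%% Mounted on\n/dev/sda1 1048576 524288 524288 50%% /\n' |
  awk 'NR > 1 { t += $2; u += $3 } END { GB2 = 1024*1024; printf "%.1f of %.1f GB in use.\n", u/GB2, t/GB2 }'
# -> 0.5 of 1.0 GB in use.
```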