
Installation Manual for Cluster Computing Analysis by Solving a Facility Location Problem

1. First, assemble the hardware.

2. Next, install a Linux distribution on each computer in the cluster. During the installation, assign a sensible hostname and a unique IP address to each node. Usually one node is designated as the master node (where the cluster is controlled and programs are written and run), and all the other nodes are used as computational slaves. In this study the nodes are named sanju, komok, sushi and sang, with sanju serving as the master node. Our cluster is private, so in principle any valid IP addresses could be assigned as long as each node gets a unique one. We used IP address 172.22.75.1 for the master node and one address for each slave node (namely 172.22.75.2, 172.22.75.3 and 172.22.75.4).

3. Then configure rsh on each node in the cluster. We used rsh for two reasons: first, rsh appeared to be easier to configure than ssh, and because we have a private network with trusted users, security is not an issue; second, rsh does not have the added overhead of encryption, so its cluster performance should be a bit better than ssh.

4. We wanted to install MPICH from both user and root perspectives, so we configured rsh to allow both user and root access. Our method, however repugnant to Linux security experts, was to create .rhosts files in the user and root home directories. Our .rhosts file for the root user is as follows:

   sang root
   sanju root
   sushi root
   komok root
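The .rhosts file for the ordinary cluster user has the same host-per-line layout, with the user name following each hostname. A minimal sketch, assuming the shared account is the "beowulf" user mentioned in the MPICH section below:

   sang beowulf
   sanju beowulf
   sushi beowulf
   komok beowulf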

5. Edit the file /etc/hosts on each node (master and slaves); make sure you are logged in as root. In this study, the following entries are used:

   127.0.0.1    localhost
   172.22.75.1  sanju
   172.22.75.2  komok
   172.22.75.3  sushi
   172.22.75.4  sang

After completing the above edits, go to the directory "/etc/init.d" and enter the command "./network restart" to restart the network service.
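The same restart step as it would be typed at the root prompt (both the directory and the script name are taken from the text above):

   cd /etc/init.d
   ./network restart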

6. To allow root users to use rsh, we had to add the following lines to the /etc/securetty file:

   rsh
   rlogin
   rexec
   pts/0
   pts/1

7. We modified the /etc/pam.d/rsh file as follows:

   #%PAM-1.0
   # For root login to succeed here with pam_securetty, "rsh" must be
   # listed in /etc/securetty.
   auth       sufficient   /lib/security/pam_nologin.so
   auth       optional     /lib/security/pam_securetty.so
   auth       sufficient   /lib/security/pam_env.so
   auth       sufficient   /lib/security/pam_rhosts_auth.so
   account    sufficient   /lib/security/pam_stack.so service=system-auth
   session    sufficient   /lib/security/pam_stack.so service=system-auth

8. rsh, rlogin, telnet and rexec are disabled in Red Hat by default. To change this, we navigated to the /etc/xinetd.d directory and modified each of the service files (rsh, rlogin, telnet and rexec), changing the disable = yes line to disable = no. 9. We then restarted xinetd (for example, with service xinetd restart) to enable rsh, rlogin, etc. After that we were good to go, with no more rsh password prompts.
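A compact way to make the same change from the shell is sketched below; it assumes a GNU sed new enough to support the -i option (otherwise edit each file by hand as described above):

   # enable rsh, rlogin, telnet and rexec in their xinetd service files
   for svc in rsh rlogin telnet rexec; do
       sed -i 's/disable[[:space:]]*=[[:space:]]*yes/disable = no/' /etc/xinetd.d/$svc
   done
   # make xinetd pick up the new settings
   service xinetd restart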

10. NFS Server Setup on the Master Node

1. After configuring the network of your cluster correctly, you can set up NFS. Let's see how to set up the NFS server on our master node. The objective is to configure the master node so that it allows the slave nodes to share its file system.

2. Log on to the master node as the root user. In the Red Hat Linux console window, enter the command "setup", which brings up a setup window. We chose the item "System Services" to activate the following daemons: network, nfs, nfslock, portmap, rsh, rlogin, sshd and xinetd. (Note that you can select/unselect an item by pressing the Space key.)

3. Edit the /etc/exports file to specify the file systems to be shared, the hosts to be allowed and the type of permissions (ro or rw). In this study, our exports file has the following entry:

   /home 172.22.75.1/4(rw,no_root_squash)

[Note: 1) 172.22.75.1/4 is intended to cover the four cluster addresses, 172.22.75.1 through 172.22.75.4, that are allowed to access the exported file system; 2) no space is allowed between 172.22.75.1/4 and the option list, nor after the comma inside the parentheses.]

4. Reboot the master node.

5. If you don't want to reboot at this moment, you can restart the NFS service manually to export the shared directory. First, check the configuration file /etc/exports with the command "/usr/sbin/exportfs -a"; if there is something wrong in the file, it will be reported. Otherwise, run the second command "/sbin/service nfs reload", which restarts the NFS service and exports the shared directory. Whenever you modify /etc/exports to share a new directory or drive, you need to execute these commands again.
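The two commands from step 5, as they would be typed at the root prompt on the master node:

   # export everything listed in /etc/exports; syntax errors are reported here
   /usr/sbin/exportfs -a
   # re-read /etc/exports and restart the NFS service without a reboot
   /sbin/service nfs reload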

11. NFS Client Setup on Slave Nodes

Similarly, we set up the NFS client on each slave node. The objective is to enable each slave node to mount the file system shared by the master node. 1. Create the directory "/home" on each slave node (if this directory already exists, you don't need to create it). This directory serves as the mount point for the shared file system exported by the master.

2. Log on to each slave node as the root user. In the console window, enter the command "setup", which brings up a setup window. In the setup window, choose the item "System Services" to activate the following daemons: network, nfs, portmap, rsh, rlogin, sshd and xinetd. (Note that you can select/unselect an item by pressing the Space key.)

3. Edit the file "/etc/fstab". In this study, we append an extra line at the end of the file. This line mounts (links) the home directory of the master node onto the /home directory of the current slave node. For example, we added the following line:

   sanju:/home /home nfs

This approach lets the slave nodes automatically and statically mount the shared directory of the master node after you reboot the machines. Alternatively, you can use an explicit "mount" command, which dynamically mounts the master's shared directory on the slave node without rebooting. For example:

   # mount -t nfs sanju:/home /home

Or, if the file "/etc/fstab" is already configured, you can restart the NFS client from the /etc/init.d directory:

   ./netfs restart
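The fstab entry above omits the option and dump/pass fields; a fuller form that many setups use is sketched below (the "defaults" option and the two trailing zeros are an assumption, not part of the original entry):

   sanju:/home   /home   nfs   defaults   0 0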

4. Check whether the NFS mount succeeded. We can use the command "df" in the Linux console window. On success, the output will look like the following; for example, on one of the slave nodes you may see:

   Filesystem           1k-blocks      Used Available Use% Mounted on
   /dev/hda2              3581536    933536   2466064  28% /
   /dev/hda1                46636      8846     35382  21% /boot
   none                    127956         0    127956   0% /dev/shm
   sanju:/home            3023760    755264   2114896  27% /home

6. Reboot this slave node by entering the command "reboot".

12. Install the Parallel Library MPI on the Master Node

1. We obtained the MPICH package (UNIX, all flavors) from www-unix.mcs.anl.gov/mpi/mpich/download.html. Untar the file in either the common user directory (the identical user, "beowulf", established on all nodes of our cluster) or in the root directory (if you want to run the cluster as root). Issue the command tar xvzf mpich.tar.gz (or whatever the name of the tar file is for your version of MPICH), and the mpich directory will be created with all subdirectories in place.

2. Type ./configure, and when the configuration is complete and you have a command prompt, type make (a consolidated command sketch appears after step 3).

3. Configure the MPI environment after the binaries have been built. Go to the directory "/home/cluster/mpich-1.2.4/util/machines" and edit the file "machines.LINUX" to list the compute nodes that should take part in MPI parallel computation on the cluster. In this study, the file contains the following entries, one hostname per line:

   sanju
   komok
   sushi
   sang
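The build commands from step 2, shown in order. The source directory /home/cluster/mpich-1.2.4 is taken from step 3, and running configure with no options is just the default described above:

   cd /home/cluster/mpich-1.2.4
   # generate the Makefiles for this machine
   ./configure
   # compile the MPICH libraries and tools
   make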

4. Configure the cluster system to prepare for running MPI programs. You need to edit the hidden file .bash_profile to add the MPI directories to the PATH:

   PATH=$PATH:$HOME/bin:/home/cluster/mpich-1.2.4/bin:/home/cluster/mpich-1.2.4/util

5. Set up the server environment by adding the following lines to the .cshrc file, which make MPICH the default:

   setenv MPIRUN_HOME /home/cluster/mpich-1.2.4/bin

   set path = (/home/cluster/mpich-1.2.4/bin $path)
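To check that the new path is picked up, open a new login shell and run "which mpirun"; with the install path used in this manual it should print something like:

   /home/cluster/mpich-1.2.4/bin/mpirun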

13. Running the Test Programs

Once the files have been copied, we type the following from the top directory of the master node to test the cluster:

   mpirun -np 1 cpilog

This runs the cpilog program on the master node to check that the program works correctly. Some MPI programs require at least two processors (-np 2), but cpilog will work with only one. The output looks like the following:

   pi is approximately 3.1415926535899406, Error is 0.0000000000001474
   Process 0 is running on sanju
   wall clock time = 0.360909
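To exercise all four nodes listed in machines.LINUX rather than just the master, one would raise the process count; this is a sketch of the obvious next step, not output we observed:

   mpirun -np 4 cpilog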
