Node0 1.1.1.10
Node1 1.1.1.20
Step 2: Create a new user on both nodes; let us call this new user mpiuser. You can create the user through the GUI by going to System -> Administration -> Users and Groups and clicking "Add User". Create a new user called mpiuser, give it a password, and grant it administrative privileges. Make sure that you create the same user on all nodes. Although using the same password on all nodes is not strictly necessary, it is recommended, because it eliminates the need to remember a different password for every node. A command-line alternative is sketched below.
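If you prefer the terminal, the same user can be created on each node with something like the following (a minimal sketch; granting administrative privileges by adding the user to the sudo group is an assumption that holds on stock Ubuntu):
sudo adduser mpiuser        # creates the user and prompts for a password
sudo adduser mpiuser sudo   # adds mpiuser to the sudo group for administrative privileges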
Step 3: Now install the SSH server on every node by executing the command sudo apt-get install openssh-server on every machine.
Step 4: Now log out of your session and log in as mpiuser.
Step 5: Open a terminal and type ssh-keygen -t dsa. This command will generate a new SSH key. When executed, it will ask for a passphrase. Leave it blank, as we want passwordless SSH (assuming that you have a trusted LAN with no security issues).
Step 6: A hidden folder called .ssh will be created in your home directory. This folder contains a file id_dsa.pub that holds your public key. Now append this key to another file called authorized_keys in the same directory by executing the following in the terminal:
cd /home/mpiuser/.ssh
cat id_dsa.pub >> authorized_keys
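For passwordless SSH to work between the nodes, the same public key also has to end up in mpiuser's authorized_keys file on the other nodes. One way to copy it over and verify the result (a sketch assuming node1 resolves to 1.1.1.20, e.g. via an /etc/hosts entry; ssh-copy-id ships with OpenSSH on Ubuntu):
ssh-copy-id mpiuser@node1     # appends your public key to node1's authorized_keys
ssh mpiuser@node1 hostname    # should print node1 without asking for a password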
Step 7: Now download MPICH from the following website (MPICH1). Please download the MPICH 1.x version from the website; do not download the MPICH 2 version. I was unable to get MPICH 2 to work in the cluster.
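Step 8 below assumes the archive has already been extracted; a quick sketch of that part, assuming the downloaded archive is named mpich-1.2.7p1.tar.gz (the exact file name depends on the version you download):
tar -xzf mpich-1.2.7p1.tar.gz
cd mpich-1.2.7p1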
Step 8: Untar the archive and navigate into the extracted directory in the terminal. Execute the following commands:
mkdir /home/mpiuser/mpich1
./configure --prefix=/home/mpiuser/mpich1
make
make install
Step 9: Open the file .bashrc in your home directory. If the file does not exist, create it. Add the following lines to that file:
export PATH=/home/mpiuser/mpich1/bin:$PATH
LD_LIBRARY_PATH="/home/mpiuser/mpich1/lib:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH
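After saving the file, you can load it into the current shell and confirm that the MPICH binaries are found (a quick sanity check, not part of the original steps):
source ~/.bashrc
which mpirun    # should print /home/mpiuser/mpich1/bin/mpirun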
Step 10: Now we'll define the path to MPICH for SSH by appending it to /etc/environment. Note that a plain sudo echo ... >> /etc/environment fails, because the redirection is performed by your unprivileged shell, so run the append inside a root shell instead:
sudo sh -c 'echo /home/mpiuser/mpich1/bin >> /etc/environment'
Step 11: Now log out and log back in as mpiuser.
Step 12: In the folder mpich1, within the sub-directory share/ or util/machines/, you will find a file called machines.LINUX. Open that file and add the hostnames of all nodes except the home node, i.e. if you are editing the machines.LINUX file of node0, then that file will contain the hostnames of all nodes except node0. By default MPICH executes one copy of the program on the home node. The machines.LINUX file for the machine node0 would be as follows.
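For the two-node setup described above (node0 and node1), applying the rule from Step 12, node0's machines.LINUX simply lists the other node:
node1
Once this is in place, a program compiled with mpicc can be launched across both nodes with, for example, mpirun -np 2 ./a.out (./a.out being a placeholder binary name).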
The following are the high-level steps involved in configuring a Linux cluster on RedHat or CentOS, each of which is covered in detail below.
You also need to assign a password for the ricci user on both nodes:
passwd ricci
Changing password for user ricci.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Also, if you are running an iptables firewall, keep in mind that you need appropriate firewall rules on both nodes so that they can talk to each other.
Also keep in mind that we are running these commands from only one node of the cluster; we are not yet ready to propagate the changes to the other node of the cluster.
Next, add the second node rh2 to the cluster as shown below.
ccs -h rh1.mydomain.net --addnode rh2.mydomain.net
Node rh2.mydomain.net added
Once the nodes are created, you can use the following command to view all the available nodes in the cluster. This will also display the node id of each node.
ccs -h rh1 --lsnodes
rh1.mydomain.net: nodeid=1
rh2.mydomain.net: nodeid=2
The above will also add the nodes to the cluster.conf file, as shown below.
cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="3" name="mycluster">
<fence_daemon/>
<clusternodes>
<clusternode name="rh1.mydomain.net" nodeid="1"/>
<clusternode name="rh2.mydomain.net" nodeid="2"/>
</clusternodes>
<cman/>
<fencedevices/>
<rm>
<failoverdomains/>
<resources/>
</rm>
</cluster>
Next, add a fence device. There are different types of fencing devices available. If you are using virtual machines to build the cluster, use the fence_virt device as shown below.
[root@rh1 ~]# ccs -h rh1 --addfencedev myfence agent=fence_virt
Next, add a fencing method. After creating the fencing device, you need to create a fencing method and add the hosts to that method.
[root@rh1 ~]# ccs -h rh1 --addmethod mthd1 rh1.mydomain.net
Method mthd1 added to rh1.mydomain.net.
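The fence instance for rh2 created below requires the same method to exist on the second node as well; mirroring the command above, that would look like:
[root@rh1 ~]# ccs -h rh1 --addmethod mthd1 rh2.mydomain.net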
Finally, associate the fence device with the method created above, as shown below:
[root@rh1 ~]# ccs -h rh1 --addfenceinst myfence rh1.mydomain.net mthd1
[root@rh1 ~]# ccs -h rh1 --addfenceinst myfence rh2.mydomain.net mthd1
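At this point you can double-check the fencing setup by listing the fence devices; with the device added above, the output should look roughly like this:
[root@rh1 ~]# ccs -h rh1 --lsfencedev
myfence: agent=fence_virt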
Ordered domain: Nodes in an ordered domain are assigned a priority level from 1 to 100, with 1 being the highest priority and 100 the lowest. The node with the highest priority runs the resource group; if the resource is running on node 2, it will migrate to node 1 when node 1 comes back online.
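The failover domain itself has to be created before nodes can be added to it; a sketch of that command, using the name and the ordered flag that appear in the listing further below:
[root@rh1 ~]# ccs -h rh1 --addfailoverdomain webserverdomain ordered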
Once the failover domain is created, add both the nodes to the failover
domain as shown below:
[root@rh1 ~]# ccs -h rh1 --addfailoverdomainnode webserverdomain rh1.mydomain.net priority=1
[root@rh1 ~]# ccs -h rh1 --addfailoverdomainnode webserverdomain rh2.mydomain.net priority=2
You can view all the nodes in the failover domain using the following
command.
[root@rh1 ~]# ccs -h rh1 --lsfailoverdomain
webserverdomain: restricted=0, ordered=1, nofailback=0
rh1.mydomain.net: 1
rh2.mydomain.net: 2
To add a service to the cluster, create a service and add the resource to
the service.
[root@rh1 ~]# ccs -h rh1 --addservice webservice1 domain=webserverdomain recovery=relocate autostart=1
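The <fs ref="web_fs"/> reference below assumes a filesystem resource named web_fs has already been defined for the cluster. If it has not, it could be added with something along these lines (the device path and mount point are purely illustrative placeholders):
[root@rh1 ~]# ccs -h rh1 --addresource fs name=web_fs device=/dev/mapper/vg-web mountpoint=/var/www fstype=ext4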
Now add the following lines inside the service definition in cluster.conf to add the resource references to the service. In this example, we also add a failover IP address to our service.
<fs ref="web_fs"/>
<ip address="192.168.1.12" monitor_link="yes" sleeptime="10"/>