
ClusterKnoppix

Master on the hard disk

1. Boot the LiveCD
2. Open a terminal
3. su
4. knoppix-installer
5. Create a 1 GB swap partition and an ext3 partition on the rest
6. Reboot
7. Open a terminal
8. su
9. knoppix-terminalopenmosixserver
   a. DHCP configuration
   b. Do not select any card

Drone:

1. Boot the CD
2. Open a terminal
3. netcardconfig
4. Verify connectivity (ping)

Test program:


Building the cluster

For this I used two boxes. The master node was a Pentium III 1.7-GHz box with 384 MB of RAM, which it shares with the on-board graphics. The drone is a Pentium III 997-MHz machine with 256 MB of dedicated RAM. Both have CD-ROM drives. They are connected by a standard crossover network cable, with RealTek 10/100 Mbps LAN cards on both ends. If you have a home network with two (or more) computers wired together, your setup is similar to mine. You will also need ClusterKnoppix (the latest version as of this writing is clusterKNOPPIX V3.4-2004-05-10-EN-cl). ClusterKnoppix has the ability to sniff out drones as they boot on the network, but you need special LAN cards and a BIOS that supports booting over the network. Because the cost of replicating CDs is minimal and you want X running on all the nodes, it's easiest to use as many ClusterKnoppix CDs as there are nodes in the cluster. You can use the following network settings for the various nodes on the cluster:

Network --
Netmask --
Default Gateway --
IP address of Master --
IP address of Drone #1 --

I won't go into detail on networking in Linux; there's a lot of information available (see Resources below).

Initializing the master node

openMosix doesn't require the first node initiated to be a master node, but just to keep things straight, set up the master node first.

1. Put the ClusterKnoppix CD in the drive and boot from it.
2. At the boot: prompt, press Enter. Give ClusterKnoppix time to detect your hardware and boot. By default it boots into KDE.
3. Once in, open a root shell. You'll find it inside the second item on the task bar.
4. Now configure the local network interface. First, give your network card, eth0, an IP address: ifconfig eth0
5. Next, specify the route it must follow to the gateway: route add -net gw
   That sets up your network.
6. Next, initiate the openMosix system: tyd -f init
7. Last, announce this node as the master node in the cluster: tyd

Next, you'll initialize the drone node.

Initializing the drone node

Setting up a drone is not very different from setting up a master. Repeat the first three steps above for initializing the master node. Then:

1. Try configuring the network card on the drone yourself, with the values mentioned previously: ifconfig eth0
2. Now, initialize the openMosix system. Same as last time: tyd -f init
3. Last, insert this node into the cluster: tyd -m

That's it! Your cluster's up and running.

Getting familiar with tracking tools

You'll need to check the status of the cluster; ClusterKnoppix packs the following tools for tracking status:

openMosixview

Bring up the utility by typing its name in the root shell. It will detect the number of nodes in the cluster and present you with a nice, geeky-looking interface. At a glance, you can see the efficiency of the cluster, the load on the cluster, the memory available to the cluster, the percentage of memory used, and other information. You won't see much activity at this point, since the cluster is hardly being used. Spend some time familiarizing yourself with this application.


openMosixmigmon

This application shows the processes that have been migrated from the master node to the drone. Move your mouse over the square surrounding the circle in the center, and you'll be shown the name of the process and its ID. To migrate a particular process from the master, you can drag a square and drop it on the smaller circle (the drone).

openMosixanalyzer

This simple application charts the load of the cluster, as well as of the individual nodes, from initialization for as long as the cluster is up.

mosmon

This command-line monitor shows you the load on the cluster, the memory available, the memory being used, and other things, in real time. Review its man page to learn how to tailor the view.

mtop

This tool is of interest to people who are familiar with top. top keeps track of every process running on the computer. mtop, a cluster-aware variant of top, also displays every process, but with the additional information of the node on which each process is running.

Testing the cluster

Now that the cluster is up and running, it's time to overload it. To do this, you'll borrow a test program written by the good people of the CHAOS distribution:

Listing 1. Cluster test program

    // testapp.c -- program for testing load-balancing clusters
    #include <stdio.h>
    #include <stdlib.h>     /* exit() */
    #include <unistd.h>     /* fork() */

    int main()
    {
        unsigned int o = 0;
        unsigned int i = 0;
        unsigned int max = 255 * 255 * 255 * 128;

        // daemonize code (flogged from thttpd)
        switch ( fork() )
        {
            case 0:
                break;
            case -1:
                // syslog( 1, "fork - %m" );
                exit( 1 );
            default:
                exit( 0 );
        }

        // incrementing counters is like walking to the moon --
        // it's slow, and if you don't stop, you'll crash.
        while (o < max) {
            o++;
            i = 0;
            while (i < max) {
                i++;
            }
        }

        return 0;
    }

Open a text editor, copy this program, and save it as testapp.c. Make the file available on all the nodes in the cluster. Then, on each node, compile it: gcc testapp.c -o testapp

Then, execute ./testapp. Run it at least once on every node; I ran three instances on both nodes. After starting each instance, toggle back to the applications described above and notice the burst of activity. Enjoy watching your drawing-room cluster migrate processes from one node to another. Look Ma, it's balancing loads!
Building a Cluster with ClusterKnoppix

You need a network of two (or more) computers, each with a CD-ROM drive, and two (or more) ClusterKnoppix CDs. ClusterKnoppix can discover nodes as they appear on the network, but for that you need a LAN card and a BIOS that support booting over the network.

Initializing the master node

openMosix does not require the first node initialized to be the master node, but to keep things orderly, install the master node first.

1. Put the ClusterKnoppix CD in the CD-ROM drive and boot from it.
2. At the boot prompt, press Enter. Give ClusterKnoppix enough time to detect the hardware and start. By default it starts in KDE.
3. Once in, open a console and become root: type su and press Enter.
4. Configure the local network interface: ifconfig eth0
5. Next, start the openMosix system: tyd -f init
6. Announce this node as the master node of the cluster: tyd

tyd is the auto-discovery daemon: a small application that runs continuously to detect each new node as it connects.

Initializing a slave node

Setting up a slave node is not very different from setting up the master node. Repeat the first three steps above for initializing the master node, and configure the network card with the values mentioned previously.

4. Configure the local network interface: ifconfig eth0
5. Initialize the openMosix system: tyd -f init
6. Last, insert this node into the cluster: tyd -m

Tool Family

To check the status of the cluster, ClusterKnoppix provides the following tools:

openMosixview

It detects the number of nodes in the cluster and presents them in a pleasant interface. You can see the cluster's efficiency, its load, the memory available to it, the percentage of memory used, and other information. You won't see much activity at this point, since the cluster is barely being used. Spend some time getting familiar with this application.

openMosixmigmon

This application shows the processes that have migrated from the master node to the cluster. Hovering the mouse over a node shows the name of the process and its ID. You can migrate a particular process off the master node.

openMosixanalyzer

This simple application logs the load of the cluster, as well as of the individual nodes, from initialization for as long as the cluster is active.

mosmon

This tool monitors the cluster's load, the available memory, the memory in use, and other things, in real time.

mtop

This tool is of interest to people familiar with top. top keeps track of every process running on the computer. mtop also displays every process, with the additional information of the node on which the process is running.

3. High-performance cluster: openMosix

The steps for setting up a high-performance cluster with openMosix are very simple: patch, configure, and compile the Linux kernel with openMosix. One way to do this is to install the openMosix kernel patch with apt-get. To do so, type:

# apt-get install kernel-patch-openmosix

Once the kernel is installed, reboot the system.

The second step is to install the openMosix administration utilities:

# apt-get install openmosix

Optionally, you can also install the openMosixview program:

# apt-get install openmosixview

Once we have all the necessary applications, all that remains is to configure the cluster to suit our needs. Since the steps for this task are very simple and perfectly detailed in The openMosix HOWTO, I will not repeat them here; I refer you to that manual.

3.1. ClusterKnoppix, a cluster from a Live CD

Even though installing openMosix is extremely simple, it can be made even easier thanks to Wim Vandersmissen. Wim has adapted a KNOPPIX distribution [8] so that, right after booting, it configures and brings up an openMosix cluster transparently to the user. The distribution is called ClusterKnoppix. To use it, all we have to do is burn the ISO image to a CD and boot the computer with that CD in the CD-ROM drive. After a while, the distribution will have detected and configured the computer's hardware and started the openMosix cluster, immediately and transparently searching for new nodes, which are added to the cluster as they are found.

Warning: I recommend that, right after booting the distribution, you set a password for the knoppix user. ClusterKnoppix uses SSH for communication between nodes, and if no password has been set for that user, communication will be impossible. To do this, open a console and type:

$ sudo bash
# passwd knoppix

Then type a new password for that user. Repeat this procedure on every node.

Note

To see the status of the cluster, you can use openMosixview, an application that lets you configure and administer the cluster very comfortably (see the openMosix clustering manual for more information). Start the application from a console, so you can see the messages it prints and type in the knoppix user's password for the SSH connections it makes.

[8] KNOPPIX is a GNU/Linux distribution that runs entirely from a CD-ROM. It is based on Debian GNU/Linux and contains applications such as The GIMP, Mozilla, KDE, and thousands of other free applications. All of this (about 2 GB) fits on a 700 MB CD thanks to compression.

The Tools (Linux Version)

We are going to start with the Linux version first (I have not tested the Windows version yet).

1) Live CD ClusterKnoppix. Right now the latest version is 3.6. Download:
2) At least two systems. Could be any combination of PCs and/or laptops. The only requirement here is that each should have at least one network card.

And that's about it to get your basic cluster going.

Building the Cluster

Now comes the exciting part. There are four steps to the process:

1) Getting the tools.
2) Starting the master and the node.
3) Networking the master and the node.
4) Testing your cluster.

Now we will discuss these steps in detail.

1) Getting the tools:

This is the easy part. All you have to do is download the ClusterKnoppix live CD from any of the mirrors provided in the links. It will be in ISO format. Use your favorite CD burning application and burn the ISO to a CD. Since we will be using only two machines in this project, burn the ISO twice: one CD for the master, one for the node.

2) Starting the master and the node:

Once you have burned the ISO, put the CD in the drive and boot your system. If it boots into your existing OS, change the boot sequence in the BIOS settings so it boots from the CD. Do the same for your node. Once both systems are up, open a shell so we can configure the networking.

3) Networking the master and the node:

Assuming these systems are connected either in a peer-to-peer fashion or through a switch, this is how we will configure the network and start the openMosix service:

Master:

sudo -s
ifconfig eth0 up
ifconfig eth0
route add -net gw
/etc/init.d/openmosix restart
omdiscd -i eth0
openmosixview &
Just a little explanation:

sudo -s => change to root
ifconfig eth0 up => make sure the eth0 interface is up
ifconfig eth0 => configure the IP address on eth0
route add -net gw => configure the default gateway
/etc/init.d/openmosix restart => restart the openMosix service to reflect the changes
omdiscd -i eth0 => broadcast and listen for nodes
openmosixview & => open the openMosixview utility to monitor the cluster

Node:

sudo -s
ifconfig eth0 up
ifconfig eth0
route add -net gw
tyd -f init
tyd -m
omdiscd -i eth0
A little explanation:

The first four lines configure the network.
tyd -f init => initialize the tyd service on the openMosix node
tyd -m => look for the 1.10 node and associate with it

And that's where everything comes to a halt for me. Both systems are up and running, configured with IPs; I even pinged them to check that they could reach each other, and they could. But I was not able to form a cluster. So I am asking this nice community of experts to please find some time to give my project a try and see if it works for you, or tell me if I am doing something wrong. It's Dec 27th and I only have three days. I will be around. Thanks in advance.