
Faculty of Automation, Computers and Electronics, Craiova

INFORMATION SYSTEMS MANAGEMENT

Professor: Mihail Buricea

Student: Duicu Adrian 10404B

LAB NO. 1: ASPECTS OF MANAGING A LOCAL SYSTEM (WINDOWS XP)

1. Computer Management

2. Event Viewer. Microsoft Windows 7 now offers the ability to monitor activity on the computer. Windows users must open a program called Event Viewer, which can record almost all actions recently performed on that computer. In practice, you can see everything that happened on your computer during, say, an interval when you were away from home (see the sketch after this list).

3. Local security policy
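
For item 2 above, the same log can also be read programmatically with the third-party pywin32 package. A minimal sketch, assuming pywin32 is installed ("pip install pywin32") and the script runs on Windows with sufficient privileges:

# List the ten most recent System-log events, newest first.
import win32evtlog

log = win32evtlog.OpenEventLog(None, "System")  # None = local machine
flags = (win32evtlog.EVENTLOG_BACKWARDS_READ |
         win32evtlog.EVENTLOG_SEQUENTIAL_READ)

events = win32evtlog.ReadEventLog(log, flags, 0)  # one batch of records
for ev in events[:10]:
    # Timestamp, source, and event ID: the same fields Event Viewer shows.
    print(ev.TimeGenerated, ev.SourceName, ev.EventID & 0xFFFF)
win32evtlog.CloseEventLog(log)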

MODULE 1. INSTALLING SUSE LINUX ENTERPRISE SERVER

1. BASIC NOTIONS ON INSTALLATION PROCEDURES

General matters

System Start-Up for Installation
Insert the first SUSE Linux Enterprise CD or the DVD into the drive. Then reboot the computer to start the installation program from the medium in the drive.

1.1. Installing from the SUSE Linux Enterprise Media
This method is used to install a single, disconnected workstation.

1.1.1. The boot screen

1.2. Installing with YaST

A few seconds after starting the installation, SUSE LINUX loads a minimal Linux system to run the installation procedure. At the end of the loading process, the YaST installation program starts and controls the actual installation from that point on.

1.3. Selecting the Installation Mode

1.1.2. Installation settings
After a thorough system analysis, YaST presents reasonable suggestions for all installation settings. The options that sometimes need manual intervention in common installation situations are presented under the Overview tab.

1.5. Possible Options for Windows Partitions

To shrink the Windows partition, one must interrupt the installation and boot Windows to prepare the partition from there. Although this step is not strictly required for FAT partitions, it speeds up the resizing process and also makes it safer.

1.6. Resizing the Windows Partition

1.1.4. Software
The default desktop of SUSE Linux Enterprise is GNOME. To install KDE, click Software and select KDE Desktop Environment from Graphical Environments.

1.2. Installing from a Network Server Using SLP (Service Location Protocol)
This option is recommended for a single workstation or a small number of workstations if a network installation server announced via SLP is available. Set up an installation server as described below in Setting Up the Server Holding the Installation Sources. Insert the first CD of the media kit into the CD-ROM drive and reboot the machine. At the boot screen, select Installation, press F4, then select SLP. The installation program retrieves the location of the network installation source using OpenSLP and configures the network connection with DHCP. If the DHCP network configuration fails, you are prompted to enter the appropriate parameters manually. The installation then proceeds normally; finish it as if you had chosen to install from physical media.

1.2.1. Setting Up an Installation Server Using YaST
YaST offers a graphical tool for creating network installation sources. It supports HTTP, FTP, and NFS network installation servers.
1. Log in as root to the machine that should act as installation server.
2. Start YaST and select Miscellaneous, then Installation Server.
3. Select Server Configuration.
4. Select the server type (HTTP, FTP, or NFS). The selected server service is started automatically every time the system starts. If a service of the selected type is already running on your system and you want to configure it manually for the server, deactivate the automatic configuration of the server service with Do Not Configure Any Network Services. In both cases, define the directory in which the installation data should be made available on the server.
5. Configure the required server type. This step relates to the automatic configuration of server services; it is skipped when automatic configuration is deactivated. Define an alias for the root directory of the FTP or HTTP server on which the installation data should be found. The installation source will later be located under:
- ftp://Server-IP/Alias/Name (FTP)
- http://Server-IP/Alias/Name (HTTP)
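
To verify from a client that the HTTP variant of the source answers, a small probe can help. A sketch only; the server IP, alias, and name below are made-up placeholders following the URL pattern from step 5:

# Build the installation-source URL and check that the server responds.
from urllib.request import urlopen
from urllib.error import URLError

server_ip, alias, name = "192.168.1.10", "install", "sles10"  # hypothetical
url = f"http://{server_ip}/{alias}/{name}/"

try:
    with urlopen(url, timeout=5) as resp:
        print(url, "->", resp.status)   # 200 means the source is reachable
except URLError as exc:
    print(url, "unreachable:", exc)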

Lab no. 3: SUSE LINUX Enterprise Server Installation and Administration


CHAPTER 1. INSTALLATION WITH YAST

1.3. The Boot Screen

To install the system, select Installation with the arrow keys. This loads YaST and starts the installation.

Selecting the Language

Partitioning with YaST

When you select the partitioning item in the suggestion window for the first time, YaST displays a dialog listing the partition settings as currently proposed. This way it is possible to accept/change the current settings. Alternatively, discard all the settings and start over from scratch.

The YaST Partitioner in Expert Mode

Creating a Partition

Select New. If several hard disks are connected, a selection dialog appears in which to select a hard disk for the new partition. Then specify the partition type (primary or extended). Create up to four primary partitions, or up to three primary partitions and one extended partition. Within the extended partition, create several logical partitions.
Select the file system to use to format the partition and a mount point, if necessary. YaST suggests a mount point for each partition created. (A sketch of the partition-count limits follows.)
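
The primary/extended limits above are easy to encode. A small illustrative check, not YaST code:

# At most four primary partitions, or three primary plus one extended;
# logical partitions live only inside the extended one.
def valid_msdos_scheme(n_primary: int, has_extended: bool, n_logical: int) -> bool:
    if has_extended:
        if n_primary > 3:            # the extended partition takes one slot
            return False
    else:
        if n_primary > 4 or n_logical > 0:
            return False             # logicals need an extended partition
    return True

assert valid_msdos_scheme(4, False, 0)
assert valid_msdos_scheme(3, True, 5)
assert not valid_msdos_scheme(4, True, 0)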

1.7.5.2. Partitioning Parameters
If you create a new partition or modify an existing partition, various parameters can be set in the partitioning tool. For new partitions, suitable parameters are set by YaST and usually do not require any modification. To perform manual settings, proceed as follows:
1. Select the partition.
2. Edit the partition and set the parameters.

Software
SUSE LINUX contains a number of software packages for various application purposes and offers several predefined system types with various installation scopes. Depending on the available disk space, YaST selects one of these predefined systems and displays it in the suggestion window.
Minimal System (only recommended for special purposes)

This basically includes the core operating system with various services, but without any graphical user interface. The machine can only be operated using ASCII consoles. It is especially suitable for server scenarios that require little direct user interaction.
Minimal Graphical System (without KDE)

If you do not want the KDE desktop or if there is insufficient disk space, install this system type. The installed system includes the X Window System and a basic window manager. You can use all programs that have their own graphical user interface.
Default System (with KDE)

This system type includes the KDE desktop together with most of the KDE programs and the CUPS print server. If possible, YaST selects this system type.
Full Installation

This system type is the largest one and includes all packages coming with SUSE LINUX, except those that would result in dependency conflicts. Click Software Selection in the suggestion window to open a dialog in which to select one of the predefined systems. To start the software installation module (package manager) and modify the installation scope, click Detailed Selection.

Lab no. 4: SUSE LINUX Enterprise Server. YaST handles both the installation and the configuration of your system.

The Filter Window
The package manager offers various filter methods for arranging the packages in categories and limiting the number of displayed packages.

The Individual Package Window
The content of this list of individual packages is determined by the currently selected filter. If, for example, the Selection filter is selected, the individual package window displays all packages of the current selection. In the package manager, each package has a status that determines what to do with the package (Install or Delete). This status is shown by means of a symbol in a status box at the beginning of the line. Depending on the current situation, some of the possible status flags may not be available for selection. For example, a package that has not yet been installed cannot be set to Delete. View the available status flags with Help+Symbols.

Packages
Click Packages to start the package manager and (de)select individual packages for update. Any package conflicts should be resolved with the consistency check.

Configuration with YaST
To configure the printer, select Hardware+Printer in the YaST control center. This opens the main printer configuration window. The lower part lists any queues configured so far. If your printer was not autodetected, you can configure it manually.

2.4.3.2. Automatic Configuration
YaST is able to configure the printer automatically if the parallel or USB port can be set up automatically and the connected printer can be autodetected. Additionally, the ID string of the printer, as supplied to YaST during hardware autodetection, must be included in the printer database. Given that this ID may differ from the actual name of the model, you may need to select the model manually. To make sure everything works properly, each configuration should be checked with the print test function of YaST. The YaST test page also provides important information about the configuration that is being tested.

2.4.3.3. Manual Configuration
If the requirements for automatic configuration are not met or if you want a custom setup, configure the printer manually.

Hard Disk Controller
Normally, YaST configures the hard disk controller of your system during the installation. If you add controllers, integrate these into the system with this YaST module. You can also modify the existing configuration, but this is generally not necessary.

Graphics Card and Monitor (SaX2)
The graphical user interface, or X server, handles the communication between hardware and software. Desktops, like KDE and GNOME, and the wide variety of window managers use the X server for interaction with the user.

Multihead
If you have installed more than one graphics card in your computer, or a graphics card with multiple outputs, you can connect more than one screen to your system. Operation with two screens is called dualhead; with more, multihead. SaX2 automatically detects multiple graphics cards in the system and prepares the configuration accordingly. Set the multihead mode and the arrangement of the screens in the multihead dialog. Three modes are offered: Traditional (default), One screen (Xinerama), and Clone mode. Traditional multihead: each monitor represents an individual unit, and the mouse pointer can switch between the screens. Cloned multihead: all monitors display the same contents, and the mouse is only visible on the main screen. Xinerama multihead: all screens combine to form a single large screen; program windows can be positioned freely on all screens or scaled to a size that fills more than one monitor.

Sound
When the sound configuration tool is started, YaST tries to detect your sound card automatically. Configure one or multiple sound cards. To use multiple sound cards, start by selecting one of the cards to configure. Press Configure to enter the Setup dialog. Edit opens a dialog in which to edit previously configured sound cards. If YaST is unable to detect your sound card automatically, press Add Sound Card in Sound Configuration. Use Delete to remove a sound card. Existing entries of configured sound cards are deactivated in the file /etc/modprobe.d/sound. Click Options to open a dialog in which to customize the sound module options manually. In Volume, configure the individual settings for the input and output of each sound card. Next saves the new values and Back restores the default configuration. Under Add Sound Card..., configure additional sound cards. If YaST detects another sound card, continue with Configure a Sound Card. If YaST does not detect a sound card, you are automatically directed to Manual Sound Card Selection.
Creating a Boot, Rescue, or Module Disk

Use this YaST module to create boot disks, rescue disks, and module disks. These floppy disks are helpful if the boot configuration of your system is damaged. The rescue disk is especially necessary if the file system of the root partition is damaged. In this case, you might also need the module disk with various drivers to be able to access the system (e.g., to access a RAID system).

Sysconfig Editor

The directory /etc/sysconfig contains the files with the most important settings for SUSE LINUX. The sysconfig editor displays all settings in a well-arranged form. The values can be modified and saved to the individual configuration files. Generally, manual editing is not necessary, as the files are automatically adapted when a package is installed or a service is configured. Do not edit the files in /etc/sysconfig if you do not know exactly what you are doing, as this could seriously impair the operability of your system.

The Software Installation Module

Lab no. 4 (continued)

Special Installation Procedures


Loading Modules
If the user wants to select the modules (drivers) needed, linuxrc offers the available drivers in a list: the name of the module is displayed on the left and a brief description of the hardware supported by the driver on the right. For some components, linuxrc offers several drivers or newer alpha versions of them.

Figure 3.4. Loading Modules

Figure 3.5. Selecting SCSI Drivers

3.1.4. Entering Parameters
After the hardware driver is selected, press Enter. This opens a dialog in which to enter additional parameters for the module. Separate multiple parameters for one module with spaces.

Figure 3.6. Entering Parameters for a Module

Start Installation or System
After setting up hardware support via modules, proceed to Start Installation or System. From this menu, a number of procedures can be started: Start Installation or Update, Boot Installed System (the root partition must be known), Start Rescue System, and Eject CD.

Starting SUSE LINUX
Following the installation, decide how to boot Linux for daily operations. The following are various alternatives for booting Linux; the most suitable method depends on the intended purpose.

Boot Disk
You can boot Linux from a boot disk (this always works and is easy). The boot disk can be created with YaST.

Installation from a Network Source
No installation support is available for this approach, so the following procedure should only be attempted by experienced computer users. Two steps are necessary: a) the data required for the installation (CDs, DVD) must be made available on a machine that will serve as the installation source; b) the system to install must be booted from floppy disk or CD, and the network must be configured.

Partitioning for Experts


This information is mainly of interest to those who want to optimize a system for security and speed and who are prepared to reinstall the entire existing system if necessary.

Size of the Swap Partition
Many sources state the rule that the swap size should be at least twice the size of the main memory. Modern applications require even more memory. For normal users, 512 MB of virtual memory is a reasonable value (see the sketch after this section).

Optimization
The hard disks are normally the limiting factor. Three possibilities can be combined: distribute the load evenly across multiple disks; use an optimized file system, such as reiserfs; equip your file server with a sufficient amount of memory (at least 256 MB).

Speed and Main Memory
In Linux, the size of main memory is often more important than the processor speed, especially due to the ability of Linux to create dynamic buffers containing hard disk data.
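
Both sizing rules quoted above can be written down as arithmetic. A purely illustrative sketch, not a YaST formula:

def swap_twice_ram_mb(ram_mb: int) -> int:
    """Classic rule: swap at least twice the size of main memory."""
    return 2 * ram_mb

def swap_for_512mb_virtual(ram_mb: int) -> int:
    """Alternative reading: RAM + swap (virtual memory) of about 512 MB."""
    return max(0, 512 - ram_mb)

print(swap_twice_ram_mb(256))        # 512 MB under the classic rule
print(swap_for_512mb_virtual(256))   # 256 MB to reach 512 MB virtual memory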

LVM Configuration
The LVM configuration dialog is a professional partitioning tool that enables you to edit or delete existing partitions and create new ones. The Soft RAID and LVM configuration can be accessed from here.

LVM Configuration with YaST
Prepare the LVM configuration in YaST by creating an LVM partition during installation. Steps:
- Click Partitioning in the suggestion window, then Discard or Change in the screen that follows.
- Next, create a partition for LVM by first clicking Add+Do not format in the partitioner, then selecting 0x8e Linux LVM.
- Continue partitioning with LVM immediately afterwards, or wait until after the system is completely installed. To do this, highlight the LVM partition in the partitioner, then click LVM....

Figure 3.9. Activating LVM during Installation

COURSE 4: STORAGE AND STORAGE VIRTUALIZATION IN INFORMATION SYSTEMS

DAS - DIRECT-ATTACHED STORAGE
Direct-attached storage (DAS) is storage attached directly to a server or workstation, without a storage network in between. The main protocols used in DAS are:
a) ATA
b) SATA (Serial ATA)
c) SCSI

SCSI is a set of standards for physically connecting and transferring data between computers and peripheral devices. The SCSI standards define commands, protocols, and electrical and optical interfaces. SCSI is commonly used for hard disks and disk drives, but it can connect a wide range of other devices, including scanners and CD drives. The SCSI standard defines command sets for particular types of peripheral devices.

Comparison: SAS vs. parallel SCSI
- SAS operates point-to-point, whereas the SCSI bus is multidrop, an advantage for SAS. Each SAS device is connected to the initiator through a dedicated link (unless an expander is used). When an initiator is connected to a target, no conflicts can occur, whereas on parallel SCSI even the connection itself can cause one.
- SAS requires no bus terminator (unlike SCSI; see fig. 3).
- SAS eliminates clock skew (clock skew: the clock signal reaches different components at different times).
- On a single channel, SAS supports up to 16,384 devices (when expanders are used), whereas SCSI supports at most 8 or 16.
- SAS offers higher transfer speeds (1.5 or 3.0 Gbit/s; reaching 6 Gbit/s was estimated for February 2009) than most SCSI standards. The speed is achieved on each initiator-target connection, so performance is better, because a device connected via parallel SCSI shares the bus speed with all the other devices on the bus (see the sketch after this list).
- SAS controllers can also support SATA devices, which can be attached either directly using the SATA protocol or through SAS expanders using the SATA Tunneled Protocol (STP).
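
The bandwidth-sharing point can be made concrete with rough numbers. The figures below are illustrative (an Ultra320 parallel SCSI bus against 3 Gbit/s SAS links), not taken from the course:

# Parallel SCSI devices share one bus; each SAS link runs at full speed.
def per_device_mb_s(total_mb_s: float, devices: int, shared: bool) -> float:
    return total_mb_s / devices if shared else total_mb_s

scsi_bus_mb_s = 320.0          # Ultra320 aggregate bus bandwidth
sas_link_mb_s = 300.0          # ~3 Gbit/s per link after 8b/10b encoding

for n in (1, 4, 8):
    print(n, "devices:",
          per_device_mb_s(scsi_bus_mb_s, n, shared=True), "MB/s shared SCSI vs",
          per_device_mb_s(sas_link_mb_s, n, shared=False), "MB/s per SAS link")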

Comparison: SAS vs. SATA
- Systems identify SATA devices by the number of the port through which they are connected to the HBA, whereas SAS devices are uniquely identified by their WWN (World Wide Name).
- The SAS protocol supports multiple initiators in a SAS domain; SATA does not.
- Most SAS drives provide tagged command queuing, while the newest SATA drives provide native command queuing, each with its own advantages and disadvantages (the sketch after this list illustrates the reordering idea). Tagged command queuing (TCQ) is a technology found on certain ATA and SCSI hard disks; it allows the operating system to send several read/write requests to a hard disk simultaneously. Native Command Queuing (NCQ) is a technology designed to increase the performance of SATA hard disks in certain situations by letting the drive optimize its internal operations, choosing the order in which read/write operations are executed and thus reducing disk head movement.
- SATA uses the ATA command set, so it supports only hard disks and CD/DVD drives. In theory, SAS also supports numerous other devices (including scanners and printers). This advantage is debatable, though, because many such devices use alternative paths over buses like USB, IEEE 1394 (FireWire), and Ethernet.
- SAS hardware allows multipath I/O to devices, whereas SATA before SATA II does not. SATA II uses port multipliers for port expansion, and some port-multiplier manufacturers have implemented multipath I/O in the port-multiplier hardware.
- SATA was positioned on the market as the general-purpose successor to parallel ATA and is used extensively, while the more expensive SAS is used mainly in critical server applications.
- SAS uses SCSI commands for error recovery and reporting, and these commands have more functionality than the ATA SMART commands used by SATA drives.
- SAS uses higher signal voltages than SATA, which increases its potential for use in server backplanes.
- SAS can use cables of up to 8 m; SATA at most 1 m.
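
A toy illustration of the reordering idea behind TCQ/NCQ mentioned above: serving queued requests in LBA order (a one-pass elevator) instead of arrival order reduces head movement. This is a sketch of the concept, not a drive's actual firmware algorithm:

def elevator_order(pending_lbas, head_lba):
    # Sweep forward from the current head position, then come back.
    ahead = sorted(lba for lba in pending_lbas if lba >= head_lba)
    behind = sorted((lba for lba in pending_lbas if lba < head_lba), reverse=True)
    return ahead + behind

print(elevator_order([900, 10, 400, 120, 750], head_lba=300))
# [400, 750, 900, 120, 10]: far fewer direction changes than FIFO order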

Direct routing allows a device to identify the devices connected directly to it (in fig. 6, expander 2 directly accesses device A). Table routing identifies the devices connected to the expanders attached to the device's own PHY (in fig. 6, expander 2 accesses device B through table routing). Subtractive routing is used when the desired devices are not in our own sub-branch; it passes the request on to another branch.
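
The three routing methods can be summarized as a decision function. Device and expander names below mirror the fig. 6 example but are otherwise invented:

def route(target, direct_phys, route_table):
    if target in direct_phys:
        return "direct"        # attached to one of our own PHYs
    if target in route_table:
        return "table"         # reachable via a downstream expander
    return "subtractive"       # hand the request to the other branch

phys = {"A"}                   # device A hangs off this expander
table = {"B": "expander3"}     # device B learned through table routing
print(route("A", phys, table), route("B", phys, table), route("C", phys, table))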

J.B.O.D. (Just a Bunch of Disks) is a widely used method for combining several physical disk drives into a single virtual one (the inverse of partitioning). The controller treats each drive of this kind as an independent disk, so each drive is an independent logical drive. Concatenation does not provide data redundancy.
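
Concatenation can be illustrated by the address arithmetic it implies: a logical block number maps to a (disk, offset) pair by walking through the member disks. A minimal sketch with invented disk sizes:

def locate(lblock: int, disk_sizes: list[int]) -> tuple[int, int]:
    # Walk the member disks until the logical block falls inside one.
    for i, size in enumerate(disk_sizes):
        if lblock < size:
            return i, lblock
        lblock -= size
    raise ValueError("logical block beyond the concatenated capacity")

disks = [1000, 500, 2000]          # three member disks, sizes in blocks
print(locate(0, disks))            # (0, 0)
print(locate(1200, disks))         # (1, 200)
print(locate(1700, disks))         # (2, 200)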

Storage virtualization

Generalities. A LOGICAL DISK is a device that provides an area of usable storage capacity on one or more physical disk drive components in a computer system. Other terms are also used for a logical disk: partition, logical volume, and sometimes virtual disk (vdisk). The disk is described as logical because it does not exist as a standalone physical entity. There are several ways to define a logical disk (or volume). Most modern operating systems provide some form of logical volume management, which allows the creation and management of logical volumes.

A LOGICAL VOLUME MANAGER (LVM) is a method of allocating space on mass-storage devices that is far more flexible than conventional partitioning schemes.

ADDRESS SPACE REMAPPING. Storage virtualization helps achieve location independence by abstracting the physical location of the data. The virtualization system presents the user with a logical space for data storage and itself handles the process of mapping it to the actual physical location. The exact form of the mapping depends on the chosen implementation. Some implementations may limit the granularity of the mapping, which in turn may limit the capabilities of the device.

The LOGICAL UNIT NUMBER MASKING (LUN masking) technique is an authorization process that makes a LUN available to certain hosts and unavailable to others.

LOGICAL BLOCK ADDRESSING (LBA) is a common scheme used to specify the location of blocks of data stored on storage devices (usually hard disks).

LBA MAPPING AND LUN VIRTUALIZATION. In more complex cases (especially RAID devices and SANs, where LUNs are composed by virtualizing and aggregating other LUNs), LBAs are translated from the application's model of the disk into the ones actually used by the storage device.
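
LUN masking, as defined above, is essentially an authorization table. A minimal sketch with invented WWNs:

# A LUN is exposed only to the host WWNs listed for it.
masking = {
    "lun0": {"10:00:00:00:c9:12:34:56"},                  # one host only
    "lun1": {"10:00:00:00:c9:12:34:56", "10:00:00:00:c9:ab:cd:ef"},
}

def visible(lun: str, host_wwn: str) -> bool:
    return host_wwn in masking.get(lun, set())

print(visible("lun0", "10:00:00:00:c9:ab:cd:ef"))  # False: masked for this host
print(visible("lun1", "10:00:00:00:c9:ab:cd:ef"))  # True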

LAB NO. 5: IBM Tivoli Storage Area Network (SAN) Manager

There are many similarities with NetView. Tivoli SAN Manager uses NetView to display the user's SAN together with its IP network. The Device Centric View allows retrieving information about devices. This example shows a Crossroads router and an IBM 3552 (FAStT 500) storage device.

The Host Centric View displays all host systems and their logical relationships with local or SAN-attached storage devices.

SAN View

The SAN View displays one symbol for each SAN.


a) The Zone View displays information about the zoning grouping. This example displays information about three zones that have been defined on the FC switch: Compaq, FastT, and Tape.

The first information presented when the topology view is expanded concerns the interconnect elements, which show the connections between the switches.

Drilling down into the information about the Brocade shows which devices and systems are attached directly to it. Configuring the manager is very simple. Polling intervals can be set, which determine how frequently the agents determine the state of the SAN. These intervals can take values on the order of minutes, hours, weeks, etc. Specific days on which polling takes place can be set, and instantaneous polling can be triggered (via the poll now button). The clear history button clears the state changes of an object that previously had a problem but whose operation has since been restored, so that the color symbolizing its state changes from yellow to green. Tivoli SAN Manager can also launch so-called Element Managers. An Element Manager is an application that vendors use to configure their hardware.

COURSE 6
SAN management levels
The network level is made up of the various components that provide connectivity, such as cables, switches, inter-switch links, gateways, routers, and HBAs. This level becomes closely tied to inter-networking infrastructures such as LAN and SAN solutions. The enterprise systems level mainly provides the ability to have a single console and a single management view. Fabric Manager provides a summary of high-level information about all the switches in a fabric and automatically launches the Web Tools interface when more detailed information is needed.

SAN multipathing software
In a well-designed SAN, it is often desirable for a device to be reachable by the host application over several paths, both to obtain potentially better performance and to help recovery if an adapter, cable, switch, or transceiver should fail (a path-selection sketch follows at the end of this section).
1. IBM Data Path Optimizer (DPO) makes dynamic path switching possible (once the paths are assigned) in Windows/UNIX environments, ensuring a high degree of availability; it also provides a load-balancing algorithm that can considerably improve performance and throughput. (Wikipedia: throughput is the rate at which a computer or network sends/receives data. It is therefore a good way to evaluate the channel capacity of a communication link.)
2. IBM Subsystem Device Driver (SDD) is a pseudo device driver designed to support multipath configurations on the IBM ESS. It offers functions such as: (a) better data availability; (b) dynamic balancing of the I/O load over several paths; (c) automatic protection against path failure; (d) path selection policies.

3. IBM Redundant Disk Array Controller (RDAC) is an FC driver that implements both recovery after a path failure and load balancing, on several platforms. It is used together with the IBM DS and FAStT Server families.
4. The IBM Multi-Path Proxy driver (also known as the MPP driver) is a new driver architecture used for the new versions of the multipath drivers for Windows 2000/2003 and Linux.

The SAN design toolbox
This section gives only a general description of SAN applications and of the types of components needed to implement them; the subject is in fact much more complex. A high-level representation of a SAN often takes the form of a cloud, with connection lines from the cloud to the servers and to the storage devices. The toolbox can be used to represent the components the cloud is made of.

Adding storage capacity
Adding storage capacity to one or more servers can be made easier when the storage device is connected to a SAN. Depending on the SAN configuration and the server's operating system, it may be possible to add or remove a device without having to stop or restart the server.

The disk pooling technique
With disk pooling, several servers can share a common pool of disk-type storage devices attached to a SAN. The disk storage resources are pooled inside a single disk subsystem, but the pool can also be hosted by several disk subsystems.
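
Returning to the multipathing drivers above (DPO, SDD, RDAC, MPP): they all combine load balancing over several paths with failover when a path dies. A minimal round-robin sketch of that idea, with invented path names; the real drivers are far more sophisticated:

from itertools import cycle

class MultipathDevice:
    def __init__(self, paths):
        self.paths, self.failed = list(paths), set()
        self._rr = cycle(self.paths)

    def pick_path(self):
        for _ in range(len(self.paths)):        # at most one full sweep
            p = next(self._rr)
            if p not in self.failed:
                return p
        raise IOError("all paths failed")

dev = MultipathDevice(["hba0->sw1->ctrlA", "hba1->sw2->ctrlB"])
print(dev.pick_path()); print(dev.pick_path())   # alternates (load balancing)
dev.failed.add("hba0->sw1->ctrlA")
print(dev.pick_path())                           # failover to the healthy path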

The tape pooling technique
Tape pooling solves a problem currently encountered in open-system environments, where several servers cannot share tape resources when multiple hosts are involved.

The server clustering technique
The SAN architecture allows scalable clustering in share-all sharing situations, because a cluster of homogeneous servers can see a single system image of the data.

The technique of using SCSI devices with multiple paths falls short in terms of scalability because of the distance constraints imposed by SCSI: SCSI allows distances of at most 25 m, and the size of SCSI connectors limits the number of connections available to the servers in a system.

Consolidating islands using a SAN
Today, scalability, security, and ease of administration can be improved by giving devices connected through separate SAN fabrics the ability to communicate, without having to merge those fabrics into a single one.

The file pooling technique
With file pooling, disk space is allocated with exactly the size needed to hold the file about to be created. Instead of assigning disk capacity to individual servers using logical or physical disks, or using operating system functions (e.g., of z/OS) to manage capacity, file pooling presents the application servers with a mountable name space. This technique resembles the way the NFS file system behaves today. The difference is that there is a direct access channel between the application servers and the disk(s) when the file is stored, not a network channel as with NFS. Disk capacity is assigned only when the file is created and is freed when the file is deleted. Files can be shared between servers in the same way (operating system support, locking, security, etc.) as if they were stored on a shared physical or logical disk.

Copy services
1. Traditional copying. One of the most frequent tasks executed by a space manager is moving data using various tools. One entity that frequently uses the traditional copying method is TSM, the Tivoli Space Manager. Using a SAN, traditional copying can be done in a server-free manner, simplifying scheduling and increasing execution speed.
2. The T-0 copy technique. Another outboard data movement copy service made possible by FC technology is time-zero (T-0) copying. It consists of taking an instant copy (snapshot) of the data, or freezing it (data meaning databases, files, or volumes), at a specified moment in time, and then allowing applications to update the original data while the frozen data is duplicated (a copy-on-write sketch follows below). The flexibility and extensibility offered by FC allow these snapshots to be made both on local devices and on remote ones. The technique appeared in order to give systems with critical databases 24x7 availability.
3. Remote copying. Remote copying is a requirement driven by business interests, with the goal of protecting data against disasters or of migrating it from a given location to avoid application downtime during planned inactivity periods, such as those for hardware or software maintenance. Current remote copy solutions are either synchronous or asynchronous.
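
The T-0 copy described above can be sketched as copy-on-write: the first write to a block after the snapshot preserves the old contents, so the frozen image stays consistent while applications keep updating. A toy in-memory model, not a product implementation:

class Snapshot:
    def __init__(self, volume: dict):
        self.volume, self.preserved = volume, {}

    def write(self, block: int, data: bytes):
        if block in self.volume and block not in self.preserved:
            self.preserved[block] = self.volume[block]   # save old copy first
        self.volume[block] = data

    def read_frozen(self, block: int) -> bytes:
        return self.preserved.get(block, self.volume.get(block))

vol = {0: b"jan", 1: b"feb"}
snap = Snapshot(vol)
snap.write(1, b"mar")                 # the application updates the live data
print(snap.read_frozen(1))            # b'feb': the T-0 image is intact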

Lab no. 6: SAN Storage Provisioning Planner

1. Introduction


Storage provisioning in SAN environments has long been done manually. Beginning with the basic capacity requirement, an administrator decides how many volumes to create and what their individual sizes should be. Next comes the question of whether there is enough space available in the storage systems to accommodate the new volumes. If the administrator is familiar with the internal structure of the storage system, he can use storage management tools such as TPC to see how much space is available in each pool of the storage system. Sometimes the available pool space may not match the chosen volume sizes. The SAN Storage Provisioning Planner functionality in TPC 3.3 aims to assist administrators in this process. It comprises three modules: the Volume Planner, the Path Planner, and the Zone Planner.

2. Volume Planner
The Volume Planner recommends where to create storage volumes based on the workload's space and performance requirements and the current resource utilization of the various components in the storage systems.
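
At its core, the placement decision is a constrained choice. A simplified sketch that picks, for each requested volume, a pool with enough free space and the lowest utilization; the pool data is invented, and the real planner reads it from the TPC database and also weighs performance:

def place(volume_gb: int, pools: list[dict]) -> str:
    candidates = [p for p in pools if p["free_gb"] >= volume_gb]
    if not candidates:
        raise ValueError("no pool can hold the volume")
    best = min(candidates, key=lambda p: p["util"])   # least-loaded pool
    best["free_gb"] -= volume_gb                      # reserve the space
    return best["name"]

pools = [{"name": "pool1", "free_gb": 400, "util": 0.70},
         {"name": "pool2", "free_gb": 900, "util": 0.35}]
print(place(300, pools))   # pool2: enough space, least loaded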

Workload Profile
In addition to the basic capacity requirement, several attributes of the workload's nature influence its performance behavior and where it can be allocated. As mentioned earlier, some workloads are read-intensive with few writes, while others, such as backups, are more write-dominated. Some workloads have sequential access patterns while others are more random. Sequential accesses require fewer disk seeks and hence can support greater bandwidths. TPC 3.3 provides five predefined template workload profiles, since the majority of workloads usually fall into one or more of these categories:
- OLTP Standard: for typical online transaction processing
- OLTP High: for very active online transaction processing
- Batch Sequential: for batch applications involving large volumes of data
- Data Warehouse: for applications with inquiries into large data warehouses
- Document Archival: for document archival applications

Unassigned Volumes: Often there may be volumes in the system that are not currently being used for any workload, for example because the workload for which they were created has since gone away but the corresponding volume has not been deleted.

Storage system components and pre-existing workloads: The Volume Planner interacts with the TPC database to obtain the current configuration and performance information from the storage systems.

3. Path Planner
The next step for the administrator, once the volumes are determined, is to set up a suitable number of paths from the hosts to these volumes. The number of paths depends on whether the host driver supports multipathing, the number of host ports, the SAN architecture connecting the host to the subsystems, how much redundancy is required, and the I/O rate desired from the hosts to the volumes. Once the number of paths is determined, the administrator then has to choose suitable port pairs on the host and subsystem, ones that are not already carrying a lot of other traffic.
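
The port-pair choice at the end of this step can be sketched as picking the least-loaded host/subsystem combinations. The loads below are invented, and a real planner would also enforce redundancy (e.g., spreading paths across fabrics):

from itertools import product

host_ports = {"h0": 120.0, "h1": 40.0}        # MB/s already flowing
subsys_ports = {"s0": 200.0, "s1": 90.0}

def best_pairs(n_paths: int):
    # Rank every (host port, subsystem port) pair by combined current load.
    pairs = sorted(product(host_ports, subsys_ports),
                   key=lambda hp: host_ports[hp[0]] + subsys_ports[hp[1]])
    return pairs[:n_paths]

print(best_pairs(2))   # [('h1', 's1'), ('h0', 's1')]: least-loaded combinations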

4. Zone Planner
Once the volume has been created, and the paths and port-pairs are determined, the next step for the administrator is to set up zones to enable the hosts to access the volumes.

5. Executing the Planner Recommendations


The figure above also shows the final output from the planners. It shows the pool and storage system where the volumes will be created or reused, the multipath options and data paths from the hosts to those volumes, and the new zones that will be created, along with any new members that will be added to new or existing zones.
