
IBM AIX - VIO Made Easy - A Complete Overview!!

PowerVM: PowerVM increases the utilization of servers. It includes logical partitioning, Micro-Partitioning, system virtualization, VIO, the hypervisor, and so on.
Simultaneous Multi-Threading: SMT is an IBM microprocessor technology that allows 2 separate hardware instruction
streams to run concurrently on the same physical processor.
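On a running AIX partition you can inspect and toggle SMT with the standard smtctl command; a minimal sketch:
smtctl                  # show the current SMT mode and thread count
smtctl -m off -w now    # turn SMT off immediately (-w boot applies at the next restart)
smtctl -m on -w now     # turn SMT back on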
Virtual Ethernet: Virtual Ethernet allows secure communication between logical partitions without the need for a physical IO
adapter or cabling. The ability to securely share Ethernet bandwidth across multiple partitions increases hardware
utilization.
Virtual SCSI: VSCSI provides secure communication between the partitions and the VIO server. The combination of
VSCSI and VIO capabilities allows you to share storage adapter bandwidth and to subdivide single large disks into
smaller segments. The adapters and disks can be shared across multiple partitions, increasing utilization.
VIO server: The VIO server allows a group of partitions to share physical resources. It can use both virtualized
storage and network adapters, making use of VSCSI and virtual Ethernet.
Redundant VIO server: An AIX or Linux partition can be a client of one or more VIO servers at the same time. A good
strategy to improve availability for sets of client partitions is to connect them to 2 VIO servers. The reason for
redundancy is the ability to upgrade to the latest technologies without affecting production workloads.
Micro-Partitioning: Sharing the processing capacity of one or more physical processors among logical partitions. The benefit of Micro-Partitioning is that it allows significantly increased overall utilization of processor resources. A micro-partition
must have at least 0.1 processing units. The maximum number of partitions on any System p server is 254.
Uncapped Mode: The processing capacity can exceed the entitled capacity when resources are available in the
shared processor pool and the micro-partition is eligible to run.
Capped Mode: The processing capacity can never exceed the entitled capacity.
Virtual Processors: A virtual processor is a representation of a physical processor that is presented to the operating
system running in a micro partition.
If a micro-partition has 1.60 processing units and 2 virtual processors, each virtual processor will have 0.80
processing units.
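On a live micro-partition, the entitlement and virtual processor count can be verified with the standard lparstat command; a minimal sketch (the grep pattern is illustrative):
lparstat -i | grep -Ei 'entitled capacity|online virtual cpus|mode'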
Dedicated processors: Dedicated processors are whole processors that are assigned to dedicated LPARs. The
minimum processor allocation for an LPAR is one.
IVM (Integrated Virtualization Manager): IVM is a hardware management solution that performs a subset of the HMC
features for a single server, avoiding the need for a dedicated HMC server.
Live Partition Mobility: Allows you to move running AIX or Linux partitions from one physical POWER6 server to
another without disruption.
VIO
Version for VIO: 1.5
The VIO command line interface is IOSCLI.
oem_setup_env escapes from the restricted IOSCLI to the root OEM setup environment.
The command for configuration through SMIT is cfgassist.
The initial login to the VIO server is padmin.
Help for VIO commands, e.g.: help errlog
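A minimal first-login sketch from the padmin restricted shell (the session flow is illustrative; the commands are standard IOSCLI):
help errlog      # usage help for a single IOSCLI command
cfgassist        # SMIT-style configuration menus
oem_setup_env    # escape to the root OEM shell
exit             # return to the restricted IOSCLI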

Hardware requirements for creating a VIO server:
1. POWER5 or POWER6 server
2. HMC
3. At least one storage adapter
4. If you want to share a physical disk, then one big physical disk
5. Ethernet adapter
6. At least 512 MB of memory

The latest VIO version is 2.1 Fix Pack 23.
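The installed level can be confirmed from the padmin shell with the standard ioslevel command (output shown is illustrative):
ioslevel
2.1.0.23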


Copying the virtual IO server DVD media to a NIM server:
mount /cdrom
cd /cdrom
cp /cdrom/bosinst.data /nim/resources
Execute the smitty installios command.
Using smitty installios you can install the VIO software.
The topas -cecdisp flag shows detailed cross-partition disk statistics.
The viostat -extdisk flag shows detailed disk statistics.
wkldmgr and wkldagent handle the workload manager; they can be used to record performance data, which can then
be viewed with wkldout.
The chtcpip command changes TCP/IP parameters.
The viosecure command handles security settings.
mksp: creates a storage pool
chsp: adds or removes physical volumes from a storage pool
lssp: lists information about storage pools
mkbdsp: attaches storage from a storage pool to a virtual SCSI adapter
rmbdsp: removes storage from a virtual SCSI adapter and returns it to the storage pool
The default storage pool is rootvg.
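A minimal end-to-end storage pool sketch, assuming a free disk hdisk3 and an existing server adapter vhost0 (the pool name clientsp and backing device lv_client1 are illustrative):
mksp clientsp hdisk3                                      # create storage pool clientsp on hdisk3
lssp                                                      # verify the pool and its free space
mkbdsp -sp clientsp 10G -bd lv_client1 -vadapter vhost0   # carve 10 GB and map it to vhost0
rmbdsp -sp clientsp -bd lv_client1                        # unmap and return the space to the pool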
Creation of a VIO server using HMC version 7:
Select the managed system -> Configuration -> Create Logical Partition -> VIO server
Enter the partition name and ID.
Check the mover service box if the VIO server partition to be created will be supporting partition mobility.
Give a partition profile name, e.g. default.
Processors: You can assign entire processors to your partition for dedicated use, or you can assign partial
processor units from the shared processor pool. Select shared.
Specify the minimum, desired and maximum processing units.
Specify minimum, desired and maximum virtual processors, and select uncapped with a weight of 191.
The system will try to allocate the desired values
The partition will not start if the managed system cannot provide the minimum amount of processing units.
You cannot dynamically increase the number of processing units beyond the maximum.
Assign memory the same way, with minimum, desired and maximum values.
The ratio between the minimum and maximum amount of memory cannot be more than 1/64.
Under I/O, select the physical I/O adapters for the partition. Required means the partition will not be able to start unless
these adapters are available; desired means the partition can start without them. A
required adapter cannot be moved in a dynamic LPAR operation.
The VIO server partition requires a Fibre Channel adapter to attach SAN disks for the client partitions. It also requires an
Ethernet adapter for shared Ethernet adapter bridging to external networks.
VIO requires a minimum of 30 GB of disk space.
Create virtual Ethernet and SCSI adapters: increase the maximum number of virtual adapters to 100.
The maximum number of adapters must not be set to more than 1024.
Under Actions, select Create -> Ethernet adapter and give the adapter ID and VLAN ID.
Select the Access External Network check box to use this adapter as a gateway between the internal and external networks.

Also create the virtual SCSI server adapter in the same way.


VIO server software installation:
1. Place the CD/DVD in the POWER5 box.
2. Activate the VIO server by clicking Activate and select the default partition profile.
3. Check the "Open a terminal window or console session" box, click Advanced, and OK.
4. Under the boot mode drop-down list, select SMS.
After the installation is complete, log in as padmin and press "a" to accept the software maintenance agreement terms.
Run license -accept to accept the license.
Creating a shared Ethernet adapter:
1. lsdev -virtual (check the virtual Ethernet adapter)
2. lsdev -type adapter (check the physical Ethernet adapter)
3. Use the lsmap -all -net command to check the slot numbers of the virtual Ethernet adapter.
4. mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
5. lsmap -all -net
6. Use the cfgassist or mktcpip command to configure TCP/IP, for example:
7. mktcpip -hostname vio_server1 -inetaddr 9.3.5.196 -interface ent3 -netmask 255.255.254.0 -gateway 9.3.4.1
Defining virtual disks
Virtual disks can be whole physical disks, logical volumes or files. The physical disks can be local or SAN disks.
Create the virtual disks:
1. Log in as padmin and run the cfgdev command to rebuild the list of visible devices.
2. lsdev -virtual (make sure the virtual SCSI server adapters are available, e.g. vhost0)
3. lsmap -all (check the slot numbers and vhost adapter numbers)
4. mkvg -f -vg rootvg_clients hdisk2 (create the rootvg_clients VG)
5. mklv -lv dbsrv_rvg rootvg_clients 10G
Creating virtual device mappings:
1. lsdev -vpd | grep vhost
2. mkvdev -vdev dbsrv_rvg -vadapter vhost2 -dev dbsrv_rvg
3. lsdev -virtual
4. lsmap -all

The fget_config -Av command, provided on the IBM DS4000 series, gives a listing of LUN names.
Virtual SCSI optical devices:
A DVD or CD device can be virtualized and assigned to client partitions. Only one VIO client can access the device at a
time.
Steps:
1. Let the DVD drive be assigned to the VIO server.
2. Create a server SCSI adapter using the HMC.
3. Run the cfgdev command to get the new vhost adapter. Check using lsdev -virtual.
4. Create the virtual device for the DVD drive (mkvdev -vdev cd0 -vadapter vhost3 -dev vcd).
5. Create a client SCSI adapter in each LPAR using the HMC.
6. Run cfgmgr.
Moving the drive:
1. Find the vscsi adapter using lscfg | grep Cn (n is the slot number).
2. rmdev -Rl vscsiN
3. Run cfgmgr in the target LPAR.
Use the dsh command to find which LPAR is currently holding the drive.
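A hypothetical way to locate the drive with dsh, assuming dsh is configured and the client LPARs answer as lpar1 and lpar2:
dsh -n lpar1,lpar2 "lsdev -Cc cdrom"    # the LPAR reporting cd0 Available currently holds the drive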
Unconfiguring the DVD drive:
1. rmdev -dev vcd -ucfg
2. lsdev -slots
3. rmdev -dev pci5 -recursive -ucfg
4. cfgdev
5. lsdev -virtual

Mirroring the VIO rootvg:
1. chvg -factor 6 rootvg (rootvg can now include up to 5 PVs with 6096 PPs each)
2. extendvg -f rootvg hdisk2
3. lspv
4. mirrorios -f hdisk2
5. lsvg -lv rootvg
6. bootlist -mode normal -ls

Creating partitions:
1. Create a new partition using the HMC with the AIX/Linux environment.
2. Give the partition ID and partition name.
3. Give proper memory settings (min/desired/max).
4. Skip the physical I/O.
5. Give proper processing units (min/desired/max).
6. Create a virtual Ethernet adapter (give the adapter ID and VLAN ID).
7. Create a virtual SCSI adapter.
8. In optional settings, check:
   Enable connection monitoring
   Automatically start with the managed system
   Enable redundant error path reporting
9. Under boot modes, select normal.


Advanced Virtualization:
Providing continuous availability of VIO servers: use multiple VIO servers to provide highly available virtual SCSI
and shared Ethernet services. Note that IVM supports only a single VIO server.


Virtual SCSI redundancy can be achieved by using MPIO and LVM mirroring at the client partition and VIO server level.
Continuous availability options for VIO:
- Shared Ethernet adapter failover
- Network interface backup in the client
- MPIO in the client with SAN
- LVM mirroring

Virtual SCSI redundancy:
Virtual SCSI redundancy can be achieved using MPIO and LVM mirroring; for example, a client can use MPIO to
access a SAN disk and LVM mirroring to access 2 SCSI disks.
MPIO: MPIO provides a highly available virtual SCSI configuration. The disks on the storage subsystem are assigned
to both virtual IO servers. MPIO for virtual SCSI devices only supports failover mode.
Configuring MPIO:
1. Create 2 virtual IO server partitions.
2. Install both VIO servers.
3. Change fc_err_recov to fast_fail and dyntrk (lets AIX tolerate cabling changes) to yes:
   chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm
4. Reboot the VIO servers.
5. Create the client partitions; add virtual Ethernet adapters.
6. Use the fget_config command (fget_config -Av) to get the LUN-to-hdisk mappings.
7. Use the lsdev -dev hdiskN -vpd command to retrieve the information.
8. The reserve_policy for each disk must be set to no_reserve:
   chdev -dev hdisk2 -attr reserve_policy=no_reserve
9. Map the hdisks to vhost adapters:
   mkvdev -vdev hdisk2 -vadapter vhost0 -dev app_server
10. Install the client partitions.
11. Configure the client partitions.
12. Test MPIO.

Configure the client partitions:
1. Check the MPIO configuration (lspv, lsdev -Cc disk).
2. Run lspath.
3. Enable the health check mode: chdev -l hdisk0 -a hcheck_interval=50 -P
4. Enable the vscsi client adapter path timeout: chdev -l vscsi0 -a vscsi_path_to=30 -P
5. Change the priority of a path: chpath -l hdisk0 -p vscsi0 -a priority=2

Testing MPIO:
1. lspath
2. Shut down VIO2.
3. lspath
4. Start VIO2.
5. lspath
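A minimal verification sketch on the client, assuming hdisk0 has one path through each vscsi adapter (lspath field names are standard AIX):
lspath -l hdisk0 -F "status parent"    # expect: Enabled vscsi0 and Enabled vscsi1
# while VIO2 is down, its path should show Failed; it returns to Enabled after the restart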

LVM mirroring: This sets up a highly available virtual SCSI configuration. The client partitions are configured
with 2 virtual SCSI adapters. Each of these virtual SCSI adapters is connected to a different VIO server and provides
one disk to the client partition.
Configuring LVM mirroring:
1. Create 2 virtual IO partitions; select one Ethernet adapter and one storage adapter.
2. Install both VIO servers.
3. Configure the virtual SCSI adapters on both servers.
4. Create the client partitions. Each client partition needs to be configured with 2 virtual SCSI adapters.
5. Add one or two virtual Ethernet adapters.
6. Create the volume group and logical volumes on VIO1 and VIO2.
7. Map a logical volume from the rootvg_clients VG to each of the 4 vhost devices:
   mkvdev -vdev nimsrv_rvg -vadapter vhost0 -dev vnimsrv_rvg
8. lsmap -all
9. When you bring up the client partitions you should have hdisk0 and hdisk1. Mirror the rootvg:
   lspv
   lsdev -Cc disk
   extendvg rootvg hdisk1
   mirrorvg -m rootvg hdisk1
10. Test LVM mirroring.

Testing LVM mirroring:
1. lsvg -l rootvg
2. Shut down VIO2.
3. lspv hdisk1 (check the PV state and stale partitions)
4. Reactivate VIO2 and run varyonvg rootvg.
5. lspv hdisk1
6. lsvg -l rootvg
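A minimal sketch of checking mirror health on the client during the test (standard AIX LVM commands, following the example names above):
lspv hdisk1 | grep -i stale    # STALE PARTITIONS is non-zero while VIO2 is down
varyonvg rootvg                # after VIO2 is back, resynchronizes the stale copies
lsvg rootvg | grep -i stale    # STALE PPs should drop back to 0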

Shared Ethernet adapter: It can be used to connect a physical network to a virtual Ethernet network, allowing several
client partitions to share one physical adapter.
Shared Ethernet redundancy: This protects against temporary failure of communication with external networks. Approaches
to achieve continuous availability:
- Shared Ethernet adapter failover
- Network interface backup
Shared Ethernet adapter failover: It offers Ethernet redundancy. In a SEA failover configuration, 2 VIO servers have
the bridging functionality of the SEA. They use a control channel to determine which of them is supplying the
Ethernet service to the client. The client partition gets one virtual Ethernet adapter bridged by 2 VIO servers.
Requirements for configuring SEA failover:
- One SEA on one VIOS acts as the primary adapter and the second SEA on the second VIOS acts as a backup adapter.
- Each SEA must have at least one virtual Ethernet adapter with the "access external network" flag (trunk flag)
  checked. This enables the SEA to provide bridging functionality between the 2 VIO servers.
- This adapter on both SEAs has the same PVID.
- A priority value defines which of the 2 SEAs will be the primary and which the secondary. An adapter with
  priority 1 has the highest priority.

Procedure for configuring SEA failover:
1. Configure a virtual Ethernet adapter via DLPAR (ent2):
   o Select the VIOS --> click the task button --> choose DLPAR --> Virtual adapters
   o Click Actions --> Create --> Ethernet adapter
   o Enter the slot number for the virtual Ethernet adapter into the adapter ID
   o Enter the port virtual LAN ID (PVID). The PVID allows the virtual Ethernet adapter to communicate with
     other virtual Ethernet adapters that have the same PVID.
   o Select IEEE 802.1Q
   o Check the "access external network" box
   o Give the virtual adapter a low trunk priority
   o Click OK
2. Create another virtual adapter on VIOS1 to be used as a control channel: give it another VLAN ID and do not
   check the "access external network" box (ent3).
3. Create the SEA on VIO1 with the failover attribute:
   mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3 (creates, e.g., ent4)
4. Create a VLAN Ethernet adapter on the SEA to communicate with the external VLAN-tagged network:
   mkvdev -vlan ent4 -tagid 222 (creates, e.g., ent5)
5. Assign an IP address to the SEA VLAN adapter on VIOS1 using mktcpip.
6. Apply the same steps to VIO2 (give it the higher trunk priority: 2).

Client LPAR procedure: create the client LPAR the same way as above.

Network interface backup: NIB can be used to provide redundant access to external networks when 2 VIO servers are
used.
Configuring NIB:
1. Create 2 VIO server partitions.
2. Install both VIO servers.
3. Configure each VIO server with one virtual Ethernet adapter. Each VIO server needs to be on a different VLAN.
4. Define the SEA with the correct VLAN ID.
5. Add virtual SCSI adapters.
6. Create the client partitions.
7. Define the EtherChannel using smitty etherchannel (see the sketch below).
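In the client, NIB is an EtherChannel in backup mode; besides smitty etherchannel it can also be created from the command line. A minimal sketch, assuming ent0 and ent1 are the client's two virtual Ethernet adapters (attribute names as in AIX EtherChannel):
mkdev -c adapter -s pseudo -t ibm_ech -a adapter_names=ent0 -a backup_adapter=ent1    # creates ent2
# then configure the IP address on the resulting en2 interface, e.g. via smitty tcpip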

Configuring multiple shared processor pools:
Configuration --> Shared processor pool management --> select the pool name

VIOS security:
Enable basic firewall settings: viosecure -firewall on
View all open ports in the firewall configuration: viosecure -firewall view
View the current security settings: viosecure -view -nonint
Change system security settings to the default: viosecure -level default
List all failed logins: lsfailedlogin
Dump the global command log: lsgcl
Backup:
Create a mksysb file of the system on an NFS mount: backupios -file /mnt/vios.mksysb -mksysb
Create a backup of all structures of VGs and/or storage pools: savevgstruct vdiskvg (data will be stored under
/home/ios/vgbackups)
List all backups made with savevgstruct: restorevgstruct -ls
Back up the system to an NFS-mounted file system: backupios -file /mnt
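On VIOS 2.1 the virtual and logical device configuration (as opposed to the full mksysb image) can also be saved with viosbr; a minimal sketch, assuming the file name viocfg is free:
viosbr -backup -file viocfg         # writes viocfg.tar.gz with the current mappings
viosbr -view -file viocfg.tar.gz    # list the contents of the backup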
Performance monitoring:
Retrieve statistics for ent0: entstat -all ent0
Reset the statistics for ent0: entstat -reset ent0
View disk statistics: viostat 2
Show a summary for the system: viostat -sys 2
Show disk stats by adapter: viostat -adapter 2
Turn on disk performance counters: chdev -dev sys0 -attr iostat=true
Cross-partition view: topas -cecdisp
Link aggregation on the VIO server:
Link aggregation means you can give one IP address to two network cards connected to two different switches for
redundancy; only one network card is active at a time.
Devices --> Communication --> EtherChannel / IEEE 802.3ad Link Aggregation --> Add an EtherChannel / Link
Aggregation
Select ent0 and mode 8023ad.
Select a backup adapter for redundancy, e.g. ent1.
A virtual adapter named ent2 will be created automatically.
Then put the IP address: smitty tcpip --> Minimum configuration and startup --> select ent2 --> put the IP address.
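The same aggregation can be created directly from the IOSCLI with mkvdev -lnagg; a minimal sketch, assuming ent0 is the primary physical adapter and ent1 the backup:
mkvdev -lnagg ent0 -attr backup_adapter=ent1 mode=8023ad    # creates ent2
lsdev -dev ent2 -attr                                       # verify the mode and adapters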
