
Deployment Guide

Informatica Enterprise Grid on VMware

Applicable to PowerCenter Versions 9.5 and higher


AUTHOR
Charles W. McDonald Jr.
Systems Architect
Informatica Core Product Specialists Team
Contributions from VMware & Symantec

Informatica & VMware Proprietary

Table of Contents
Table of Contents
Table of Figures & Tables
Overview
Background
Virtualization Support Statement
Informatica's Role
   High Availability Center of Excellence
Symantec's Role
   Coordination Point Server
VMware's Role
   HACOE Setup Assistance / Deployment Guide
Informatica's Technical Recommendations
   The Shared File System
   I/O Fencing
   The Virtual Network
   The Virtual Machine
   The Cluster-Ware for the Shared File System
   CP Server Setup
   Resource Sharing
   Effective Load Balancing
HACOE VMware vSphere Build (Current State)
   Environment Information
   Native Software Architecture
   Virtualized Software Architecture
   The Shared File System (Shared VMFS with multi-writer flag)
HACOE VMware (Certification Environment)
   Hardware & Firmware Information
   Virtualized Environment
   Distributed Switch Configuration
Informatica Guest Setup/Configuration (Certification Environment)
   VMFS Multi-Writer Flag for Shared Access for Informatica_shared location
Where can I find more information?
   High Availability/Grid
Glossary of Terms
Sources

Table of Figures & Tables


Figure 1 - Dom_hacoe_primary Logical
Figure 2 - HACOE Logical Architecture (At the time of Certification)
Figure 3 - VMware Distributed Switch (dvSwitch) Network Architecture
Figure 4 - VMFS vMotion
Figure 5 - VMware ESXi Hosts Fencing
Figure 6 - Heartbeat Architecture


Overview

The purpose of this document is to provide a detailed technical deployment guide for System & Enterprise
Architects, System Administrators, DBAs, VMware administrators, VMware certified professionals, and other
IT implementers. It is not meant as a step-by-step "you must do everything in this document this way"
cookbook, but as a prescriptive solution that clearly articulates what to do in order to have the best possible
out-of-box experience with Informatica Enterprise Grid/HA on VMware. One example where a deviation might be
less problematic is the operating system version used at a client deployment versus what is discussed in this
document: as long as the operating system version in use at the client is a supported version, it should not
be an issue. However, advanced parameter settings should be followed precisely as prescribed in this
document for support.
The intended audience of this document should be readily familiar with terms such as I/O fencing, cluster
file system, cluster volumes, trusted connection, etc.

Background

As a brief reminder, PowerCenter HA is designed to provide the following functionality:

1. Resilience between clients and Informatica servers, between Informatica servers and other Informatica
   servers, and between Informatica servers and repository servers, as well as between sources and targets
2. Failover of Informatica Application Services and Informatica nodes between and among
   Informatica servers
3. Recovery of Informatica Application Services and PowerCenter Session Tasks

In order to properly provide the above-mentioned recovery features, Informatica has always required a high
performance, highly available, fully POSIX-compliant shared file system. Informatica highly recommends
Veritas Storage Foundation Suite Cluster File System Enterprise HA for this shared file system, especially in
vSphere, because it most closely follows the Informatica certification environment and will provide the best
performance and availability both on bare metal and on VMware vSphere.

Virtualization Support Statement


Virtualization support for PowerCenter Enterprise Grid/HA begins at version 9.5, and covers only
VMware vSphere 4.1U1 and later versions in Enterprise Plus VMware HA Clusters. The prescribed solution
contained within describes Informatica's recommended solution using Veritas Storage Foundation Suite
Cluster File System Enterprise HA (or other SFHA variants thereof, such as VxSFRAC). Informatica will support
the same NAS/NFS shared file systems for PowerCenter HA/Grid in VMware that it supports on bare metal,
but does not recommend them because of the caveats described in the PowerCenter HA file system bare metal
certification at the link below. For more information, see:
Informatica Support Statement for Virtualization: https://communities.informatica.com/docs/DOC-7306
PowerCenter HA file system bare metal certification: https://communities.informatica.com/docs/DOC-3019


Informatica's Role
High Availability Center of Excellence
Informatica has invested in building its own High Availability Center of Excellence (HACOE), which contains
both native HW and VMware guests in an Informatica Platform domain in order to provide realistic
performance metrics, best practices, and comparisons between native hardware and VMware guests. It is also
the source of this deployment guide and of the PowerCenter HA/Grid Pre-Install Checklist.

Symantec's Role
Coordination Point Server
In recent versions of the SFHA stack, Symantec/Veritas has added a new HW-based I/O fencing capability they
call a CP Server, which essentially acts as another HW voting device that can be used with SCSI-3 PGR drives or
in place of SCSI-3 PGR drives, allowing more flexibility for SFHA package usage in virtualized clusters.
Symantec has also provided licenses and SME assistance in helping plan and configure the HACOE with
VMware + VxSFCFSHA and the CP Server setup. We thank them for their partnership and assistance in this matter.

VMware's Role
HACOE Setup Assistance / Deployment Guide
VMware has provided SME assistance in helping with the VMware HACOE setup, troubleshooting, and
deployment guide assistance.

Informatica's Technical Recommendations


The Shared File System
It should be noted that Informatica's HACOE setup has changed significantly since its certification of
PowerCenter HA on VMware. Specifically, the following items have changed:

VMware ESXi 4.1U1 upgraded to 5.0U1
RHEL 5.6 guests rebuilt using RHEL 6.2
BETA PureStorage Array upgraded to GA PureStorage Array
Veritas SFCFS Enterprise HA 5.1SP1 upgraded to Veritas SFRAC 6.0RP1
Single CP Server upgraded to a 3-guest CP Server cluster (Veritas SFVCS 6.0RP1P2)

For more information on the technical differences between the current state and the certification state, see
the Certification Environment and Current State Architecture sections of this document.


Informatica strongly recommends using VMFS with the VMware multi-writer flag advanced option to allow
simultaneous write access to the same vmdk from multiple guests within the same VMware HA cluster. You
can find more information on the multi-writer flag at:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1034165
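
For reference, the multi-writer flag is applied as an advanced configuration parameter on each shared virtual
disk. The following is a minimal sketch of the relevant .vmx entries, assuming a hypothetical dedicated SCSI
controller (scsi2) for the shared disks and an illustrative vmdk name; the VMware KB article above remains the
authoritative procedure.

scsi2.present = "TRUE"
scsi2.virtualDev = "lsilogic"
scsi2.sharedBus = "none"
scsi2:0.present = "TRUE"
scsi2:0.fileName = "Informatica_shared_01.vmdk"
scsi2:0.sharing = "multi-writer"

The operating system disk stays on its own controller (scsi0) without the sharing entry, which mirrors the
controller layout described in the Informatica Guest Setup section later in this document.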

The multi-writer flag configuration cannot yet be used for Veritas coordinator disks, but can and should be
used for creating clustered diskgroups, clustered volumes, and clustered file systems inside VMware HA
Clusters. It is now mutually supported by VMware and Informatica. It is conditionally supported by Symantec
for the moment while an issue (described at the bottom of the following section) is worked to resolution. This
recommended method is considered best practice because it is stable, it performs well, it is mutually
supported, and it still allows for both storage vMotion and host vMotion while Informatica sessions are
running. This is the method with which Informatica certified PowerCenter HA on VMware HA Clusters using
vSphere 4.1U1, and that support extends to vSphere 5.x due to backward-compatibility support from VMware.

I/O Fencing
Currently, the true UUID you would see on your SAN, or as the naa ID in the vCenter Storage Configuration
Devices view, is not passed all the way down to the guest level unchanged/unmasked. Informatica
recommends a cluster of 3 CP Servers for configuring I/O fencing if using Veritas SFCFS Enterprise HA. When
configuring I/O fencing, Informatica specifically recommends, and considers best practice, the use of
/opt/VRTS/install/installvcs -fencing to configure I/O fencing first in disabled mode, then, immediately after
a reboot of all guests, to configure it again in enabled mode for CP Server fencing.
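
A minimal sketch of that sequence is shown below; the installer is interactive, so the comments only describe
the choices to make at its prompts:

# Pass 1: on the INFA PWC HA cluster, configure fencing in disabled mode
/opt/VRTS/install/installvcs -fencing
# Reboot every guest in the cluster, then:
# Pass 2: configure fencing in enabled mode, selecting CP Server based fencing
/opt/VRTS/install/installvcs -fencing
# Verify that all nodes are joined on all ports
gabconfig -a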

Informatica has seen an issue whereby a SCSI-3 error was reported during the creation of a clustered
diskgroup when using VMFS with the scsi.sharing = multi-writer advanced parameter. This is most likely
caused by misconfigured CP Server fencing, and can sometimes be caused by using any install script other
than installvcs with the -fencing switch. In the HACOE, this issue appears to have been caused by a
reportedly successful configuration of CP Server fencing in enabled mode that was performed with the
`/opt/VRTS/install/installsfrac -fencing` command.

Most clients will likely use VxSFCFS Enterprise HA, and will therefore likely use `installsfcfs` to lay down the
binaries and do the initial cluster configuration. After the initial cluster configuration (which will likely have
configured I/O fencing in disabled mode), run installvcs with the -fencing switch first to configure fencing
again in disabled mode. Each guest in the INFA PWC HA cluster should then be rebooted; then run installvcs
with the -fencing switch again to configure fencing in enabled mode with CP Server fencing. Afterwards, you
should be able to create a clustered disk group, clustered volumes, and a clustered file system on the VMFS
vmdks that have scsi.sharing set to multi-writer. If you cannot, and you get the same SCSI-3 error we noticed,
Informatica recommends you try the following steps to correct it:
1. Back up main.cf.
2. Remove the UseFence=SCSI3 line from the current main.cf and copy it to the /etc/VRTSvcs/conf/config
   directory on all other nodes in the INFA PWC HA cluster.


3. Edit the /etc/vxfenmode file to have only one line in it: vxfen_mode=disabled. Copy it to all other
   nodes in the INFA PWC cluster.
4. Reboot each node in the INFA PWC cluster.
5. `gabconfig -a` should show all nodes joined to the cluster and CVM started, though active/active
   access of any configured CFS may not be tolerated. You are doing steps 1-4 because configuring
   fencing requires VCS to be running, and VCS requires fencing to be configured. It is important to
   understand the distinction: configured in disabled mode is "configured enough" to start VCS.
pslxhacoe07.informatica.com:/u02#gabconfig -a
GAB Port Memberships
===============================================================
Port a gen d2dc0d membership 0123
Port b gen d2dc27 membership 0123
Port d gen d2dc13 membership 0123
Port f gen d2dc36 membership 0123
Port h gen d2dc2c membership 0123
Port o gen d2dc10 membership 0123
Port u gen d2dc33 membership 0123
Port v gen d2dc2f membership 0123
Port w gen d2dc31 membership 0123
Port y gen d2dc2e membership 0123


6. Run /opt/VRTS/install/installvcs with the -fencing switch first to configure fencing again in disabled
   mode.
7. Re-run /opt/VRTS/install/installvcs with the -fencing switch to configure fencing in enabled mode
   with CP Server fencing.
8. `gabconfig -a` should show all clustered nodes joined on all ports. Whether it does or does not,
   Informatica recommends you perform an `hastop -all`, reboot each guest in the INFA PWC cluster,
   and let everything come up and join the cluster gracefully (a short fencing verification sketch also
   follows the note after these outputs). When all is said and done, you should see something similar
   to the following:
pslxhacoe07.informatica.com:/u02#gabconfig -a
GAB Port Memberships
===============================================================
Port a gen d2dc0d membership 0123
Port b gen d2dc27 membership 0123
Port d gen d2dc13 membership 0123
Port f gen d2dc36 membership 0123
Port h gen d2dc2c membership 0123
Port o gen d2dc10 membership 0123
Port u gen d2dc33 membership 0123
Port v gen d2dc2f membership 0123
Port w gen d2dc31 membership 0123
Port y gen d2dc2e membership 0123
pslxhacoe07.informatica.com:/u02#vxclustadm -v nidmap
Name          CVM Nid   CM Nid   State
pslxhacoe05   0         0        Joined: Master
pslxhacoe06   1         1        Joined: Slave
pslxhacoe07   2         2        Joined: Slave
pslxhacoe08   3         3        Joined: Slave
NOTE: The HACOE uses VxSFRAC so we may show more ports in a gabconfig than what you may see
in your environment.
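
As an additional sanity check after the final reboot, the fencing state can be confirmed from any node. This
is a minimal sketch; exact output formatting varies by SFHA release:

# The reported fencing mode should no longer be disabled; with CP Server fencing
# it should show the customized mode with the cps mechanism
vxfenadm -d
# Port b (the fencing driver) should show a membership entry for every node
gabconfig -a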
Symantec is working on verifying whether the differences in fencing behavior between the scripts contributed
to the issue and should have an official support statement shortly thereafter. Until then, they are providing
conditional support for VMFS with the multi-writer flag for shared CFS data drives, with the condition that
any issues relating to fencing may require being replicated in a native environment or on VMware with
RDM-P drives. For more information, contact your Symantec account representative.

The Virtual Network


The distributed network switches set up for the certification environment were essentially just upgraded and
re-used as-is, except for adding a new ESXi node into the distributed switches.

Informatica strongly recommends the use of distributed switches (as opposed to Standard vSwitches) among
all ESXi hosts hosting Informatica guests participating in the Enterprise Grid/HA environment. Using this
configuration without channel bonding at the guest level, and instead allowing failover to happen at the
distributed-switch level among the multiple involved NICs/LOMs (Logical Onboard Modules) on each ESXi
host, proves to be far more stable and effective. Informatica left in place the one standard vSwitch that is
created at the time you configure the management network from the ESXi interface text menu. FT logging
traffic was allowed to run on that network, while all other traffic (especially guest traffic) runs exclusively
on the distributed switches.

The Virtual Machine


Virtual Hardware version 7 guests were used in the certification environment. Informatica strongly
recommends the use of Virtual Hardware version 7 or higher. Virtual Hardware version 8 on vSphere 5.x is
considered best practice, especially if using SSD enterprise storage for Informatica guests.
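
As a quick way to confirm the hardware version of an existing guest, the value is recorded in the guest's .vmx
file; the datastore path below is an example only:

# Run from the ESXi shell; a vSphere 5.x best-practice guest should report virtualHW.version = "8"
grep virtualHW.version /vmfs/volumes/<datastore>/<vm_name>/<vm_name>.vmx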

The Cluster-Ware for the Shared File System


Informatica used VxSFCFS Enterprise HA 5.1SP1 in the certification environment and strongly recommends at
least that code level or higher in customer environments. If customers wish to use the v6.x code level of the
VxSFCFS Enterprise HA product, Informatica recommends SFHA 6.0RP1 with VCS patched to at least 6.0RP1P2.

CP Server Setup
Informatica used SFHA 5.1SP1 to create the CP Server in the certification environment. Customers using this
version may experience the CP Server guest becoming unreachable over the network because the routing
table is removed. If this happens to you, the likely culprit is the NIC resource "cpsnic", which you can remove
entirely from the main.cf (/etc/VRTSvcs/conf/config/main.cf) along with its associated dependencies, then
save the edited main.cf. Run `hastop -all -force`, then reboot the CP Server guest. Ensure that the network
parameters described above are set up correctly before you issue the `hastart` command.
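
A minimal sketch of that workaround, assuming the cpsnic resource is the culprit as described above:

# Back up the configuration first
cp /etc/VRTSvcs/conf/config/main.cf /etc/VRTSvcs/conf/config/main.cf.bak
# Edit main.cf to remove the cpsnic resource and its dependency links, then
# verify the edited configuration before stopping the cluster
hacf -verify /etc/VRTSvcs/conf/config
hastop -all -force
reboot
# After the guest is back up and the network parameters are confirmed:
hastart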

When using SFHA 6.0 and higher, these problems were not experienced in the HACOE. The CP Server install
and configuration went smoothly and was easily repeatable as many times as necessary. For this reason,
Informatica recommends you use SFHA 6.0 or higher when configuring your CP Server(s).

Resource Sharing
While resource pooling was available in the certification environment, it was not leveraged, and Informatica
strongly recommends against resource over-sharing or over-committing on ESXi hosts where Informatica
Enterprise Grid/HA guests reside or could reside. This recommendation applies to all I/O, CPU, and memory
resources.

Effective Load Balancing


Informatica Enterprise Grid comes with three different load-balancing algorithms, two of which derive
system-level metrics that act as seed values in formulas designed to determine which node to assign workload
to for maximum runtime efficiency. With virtualization, the system-level metrics are now guest-level metrics.
If the hosts/hypervisors are over-committed, this can create false or inaccurate seed-value information for
Informatica's load-balancing algorithms, leading to unanticipated performance issues. Informatica
recommends not implementing shares for CPU, I/O, or RAM between Informatica guests and other application
guests; if it is done, give Informatica a percentage of those shares significant enough to meet your business
requirements. Informatica also recommends not over-committing the ESXi host unless there is no other
alternative, and even then it should be done only temporarily.

HACOE VMware vSphere Build (Current State)


Environment Information

QTY 1 HP c7000 Blade Chassis
QTY 2 HP Onboard Administrators, Firmware 3.31
NETWORK = QTY 2 HP BLc VC Flex-10 Ethernet Module, Firmware 3.31 (40Gb backbone to Cisco Nexus Series)
SAN = QTY 2 HP BLc VC 8Gb FC 20-Port, Firmware 1.42
SAN = QTY 2 HP StorageWorks 8/16 FC Switches, Brocade 300 series
Storage Array = PureStorage FA-320 26TB SSD Flash Array
QTY 4 HP BL465c G7 CTO Blade (AMD 6172 x 2 (12 cores each)), 24 cores x 64GB RAM
o HP BLc QLogic QMH2562 8Gb FC HBA Opt
o HP Smart Array BL465c/685c G7 FIO Controller
o HP 146GB 6G SAS 15K 2.5in DP ENT HDD
o Firmware A19
o iLO3 Firmware 1.25
QTY 2 HP BL685c G7 CTO Blade (AMD 6172 x 4 (12 cores each)), 48 cores x 96GB RAM
o HP BLc QLogic QMH2562 8Gb FC HBA Opt
o HP Smart Array BL465c/685c G7 FIO Controller
o HP 146GB 6G SAS 15K 2.5in DP ENT HDD
o Firmware A20
o iLO3 Firmware 1.25
QTY 1 HP BL465c G7 CTO Blade (AMD 6176 x 2 (12 cores each)), 24 cores x 128GB RAM
o HP BLc QLogic QMH2562 8Gb FC HBA Opt
o HP Smart Array BL465c/685c G7 FIO Controller
o HP 146GB 6G SAS 15K 2.5in DP ENT HDD
o Firmware A19
o iLO3 Firmware 1.25

Native Software Architecture

RHEL 5.5
Veritas Storage Foundation Suite for Oracle RAC 5.1P3


Oracle RAC 11gR2


Informatica Platform 9.0.1HF2
Informatica B2BDT 9.0.1HF1
Informatica B2BDX-HA 9.0.1HF1

Virtualized Software Architecture

VMware ESXi 5.0.1 HP OEM Specific


vCenter Server 5.0.1 Standard running on Windows 2008R2 Standard x64 Server
RHEL 6.2
Veritas Storage Foundation Suite Cluster File System Enterprise HA 6.0.1
Oracle RAC 11.2.0.3 running inside VxSFRAC inside VMware HA
Informatica Platform 9.5 GA (all country content accelerators supported)
Informatica B2BDT 9.5 GA
Informatica B2BDX-HA 9.5 GA
Ultra Messaging UMQ 5.0

Current State Architecture (as of September 2012)


Figure 1 - Dom_hacoe_primary Logical


NOTE: The diagram above illustrates a best practice of 3 CP Servers in a single cluster, separate from both the
VMware and Veritas Informatica Enterprise Grid cluster. A production VMware/Informatica Enterprise Grid
environment would need at least 3 CP Servers, or a cluster of 3 CP Servers, if using VxSFCFSHA for the required
high performance, highly available shared file system.

The Shared File System (Shared VMFS with multi-writer flag)


The current state HACOE uses VMFS with the multi-writer flag with VxSFCFSHA/VxSFRAC providing the high
performance, highly available, fully POSIX compliant shared file system required by Informatica PowerCenter
HA.

HACOE VMware (Certification Environment)


Hardware & Firmware Information

QTY 1 HP c7000 Blade Chassis
QTY 2 HP Onboard Administrators, Firmware 3.31
NETWORK = QTY 2 HP BLc VC Flex-10 Ethernet Module, Firmware 3.31 (40Gb backbone to Cisco Nexus Series)
SAN = QTY 2 HP BLc VC 8Gb FC 20-Port, Firmware 1.42
SAN = QTY 2 HP StorageWorks 8/16 FC Switches, Brocade 300 series
Storage Array = PureStorage FA-320 26TB SSD Flash Array
QTY 2 HP BL685c G7 CTO Blade (AMD 6172 x 4 (12 cores each)), 48 cores x 96GB RAM
o HP BLc QLogic QMH2562 8Gb FC HBA Opt
o HP Smart Array BL465c/685c G7 FIO Controller
o HP 146GB 6G SAS 15K 2.5in DP ENT HDD
o Firmware A20
o iLO3 Firmware 1.25

Virtualized Environment

VMware ESXi 4.1U1 HP OEM Specific (4.1_U1_Feb_2011_ESXi_HD_USB_SDImgeInstlr_Z7550_00031)
vCenter Server on Windows 2008 R2 Standard x64
vCenter Server 4.1 (VMware-vpx-all-4.1.0-258902)
RHEL 5.6
Veritas Storage Foundation Suite Cluster File System Enterprise HA 5.1SP1
Oracle RAC 11gR2 used from the Native HW side
Informatica Platform 9.5


Figure 2 - HACOE Logical Architecture (At the time of Certification)


Distributed Switch Configuration


Figure 3 - VMware Distributed Switch (dvSwitch) Network Architecture

[Diagram: HP Virtual Connect Flex-10 switch server profiles for each ESXi BL465c G7 blade (psesxihacoe01 and
psesxihacoe02), showing the vCenter guest (pswivchacoe01, 4 cores x 8GB RAM), the single VCS Coordination
Point Server guest (pslxhacoecpp01, 2 cores x 2GB RAM), and the four Informatica application guests
(pslxhacoe05 through pslxhacoe08, 8 cores x 24GB RAM each), with their eth0/eth1 interfaces mapped through
the distributed switches.]

Notes from the diagram:

Notice eth channel bonding is not in use. Instead, I let the distributed switch handle NIC teaming and failover.
Under native HW conditions, I would channel-bond eth0 and eth1 together via LACP; here eth1 is only
configured as a passive standby.

Both vmnic0 and vmnic1 have multiple networks assigned to them. It's very important to understand that when
assigning these to either a distributed switch or a standard vSwitch, VLAN tagging must be used or that
network connection is dead. Furthermore, specifically when using distributed switches, you cannot use a vmnic
with multiple networks and one without multiple networks in the same distributed switch. This is because a
dvSwitch without multiple networks will REQUIRE that no VLAN tagging be used, while one with multiple
networks REQUIRES that VLAN tagging is used. You find yourself in a catch-22 and none of the networks on
that dvSwitch will work. This is why I have separated my dvSwitches by network affiliation.

One of the most complex concepts to grasp is how the physical network maps to the virtual. The purpose of
this diagram is to show how those relationships map down to the guest level. Notice how there are 2 x 10Gb
Emulex LOMs (Logical Onboard Modules) for each server profile, and each network is divided up over those 2
LOMs such that, if you add up all the networks on each LOM, each will be 10Gb or less, since each LOM is a
10Gb physical channel.


Figure 4 - VMFS vMotion


Figure 5 - VMware ESXi Hosts Fencing

[Diagram: the two HP BL465c blades (24 cores x 64GB RAM each, AMD 6172 chipset, running ESXi 4.1U1),
psesxihacoe01 and psesxihacoe02, hosting the vCenter guest (pswivchacoe01, 4 cores x 8GB RAM), the single
VCS Coordination Point Server guest (pslxhacoecpp01, 2 cores x 2GB RAM), and the four x64 Informatica
application guests (pslxhacoe05 through pslxhacoe08, 8 cores x 24GB RAM each), with the HW-based I/O
fencing traffic paths between the clustered guests and the CP Server.]


Figure 6 - Heartbeat Architecture


Notice that these MAC addresses don't map to the VC Flex-10 MAC addresses, but instead to the VMware guest vmxnet3 MAC addresses.
The same is true for all network interfaces, but it is very critical that they match up perfectly for the heartbeat channels or the cluster
will never be stable.

pslxhacoe05.informatica.com:/root#lltconfig -a list
Link 0 (eth2):
Node 0 pslxhacoe05: 00:50:56:B6:00:06 permanent
Node 1 pslxhacoe06: 00:50:56:B6:00:0A
Node 2 pslxhacoe07: 00:50:56:B6:00:0E
Node 3 pslxhacoe08: 00:50:56:B6:00:12
Link 1 (eth3):
Node 0 pslxhacoe05: 00:50:56:B6:00:07 permanent
Node 1 pslxhacoe06: 00:50:56:B6:00:0B
Node 2 pslxhacoe07: 00:50:56:B6:00:0F
Node 3 pslxhacoe08: 00:50:56:B6:00:13


Informatica Guest Setup/Configuration (Certification Environment)


NOTE: See Informatica's Technical Recommendations (earlier in this document) for additional information.
1. Present shared LUNs for the Informatica guests' vmdk and working location. In the HACOE, a special
   datastore called "vmotion_datastore_pure" was created and used as a single working location for all
   Informatica VMs. You may wish to do something similar.
2. Format the shared LUN with vmfs3 or vmfs5 (recommended) while creating the Informatica guest's root
   volume vmdk.
3. Install the Informatica guests' operating system. In the HACOE, we used RHEL 5.6, since RHEL 5.5 failed
   immediately on the first install attempt with a kernel panic during CPU check/validation. Here is the bug
   report on this issue: https://bugzilla.redhat.com/show_bug.cgi?id=607947. IMPORTANT NOTE: DO NOT USE
   RHEL LVM unless you have no other choice! We noticed several instances of the LVM root volume suddenly
   going into a read-only state, which eventually crashed the node. This happened repeatedly on CP Server
   and Informatica guests.
4. Install the VMware Tools package on the guest operating system for the Informatica virtual machine.
5. Verify that the MAC addresses for each eth device match those prescribed inside VMware (edit settings).
   Correct them if they do not.
6. Install any other rpms that may be required in your environment.
7. Install VxSFCFS Enterprise HA 5.1SP1 (or a higher version). Configure everything except I/O fencing. This
   will be done after the CP Server is successfully set up and running and trusted .ssh connections are
   established between the Informatica VxSFCFSHA VMware guest cluster and the CP Server.
8. Un-install VMware Tools. Reboot the CP Server guest. If you don't use LVM, you don't have to execute this
   step, but it doesn't hurt to go ahead and do it just to be safe. We used LVMs in our CP Servers, so this
   step absolutely helped in those guests. In the HACOE Informatica clustered guests we used a larger
   300GB LUN divided by the operating system physically into standard Linux partitions, so this step was not
   necessary on those guests.
9. Re-install VMware Tools. Why do this? It has proven successful in mitigating the issue of the LVM root
   volume going to a read-only state. It is important that you do this AFTER you install the required Veritas
   products. If you don't use LVM, you don't have to execute this step, but it doesn't hurt to go ahead and
   do it just to be safe.
10. Re-check all eth devices in /etc/sysconfig/network-scripts and verify again that they match the MAC
    addresses defined in the VMware edit settings for that VM guest (a minimal check is sketched after step 11).
11. Follow the steps listed under "Coordination Point Server and CP I/O Fencing Setup" to set up I/O fencing
    for this cluster. Informatica recommends using `/opt/VRTS/install/installvcs -fencing` when configuring CP
    Servers. First configure in disabled mode, then configure CP Server fencing in enabled mode. This
    has proven most reliable.
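
A minimal sketch of the MAC address check referenced in steps 5 and 10, assuming standard RHEL ifcfg files
(interface names are examples):

# MAC addresses recorded in the RHEL network scripts
grep -i HWADDR /etc/sysconfig/network-scripts/ifcfg-eth*
# MAC addresses the guest actually sees on each interface
ip link show eth0 | grep link/ether
ip link show eth1 | grep link/ether
# Both should match the MAC shown in the VMware "edit settings" dialog for that vNIC.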

VMFS Multi-Writer Flag for Shared Access for Informatica_shared location


12. Present shared LUNs for the Informatica guest vmdk for the Informatica_shared CFS mount (Harddisk2
    through Harddiskn).
13. Format the shared LUN with vmfs3 or vmfs5 (recommended) while creating the vmdk for the first
    Informatica guest (this should be node 0 in the VxSFCFSHA cluster), Harddisk2 through Harddiskn.
14. After you have created the vmdks (representative of each shared device you intend to use for the
    Veritas clustered volume and clustered file system) on the first Informatica guest, you can apply the
    multi-writer flag to each shared LUN/vmdk per the VMware KB (http://kb.VMware.com/kb/1034165). It is
    best if you use the exact same SCSI device channels on subsequent VM guest configurations. For example,
    we use a separate SCSI LSI Parallel controller from that of Harddisk1 (the operating system drive), such
    that the SCSI LSI Parallel controller for the operating system hard disk uses 0:0, while the LSI Parallel
    controller for the CFS devices with the multi-writer flag uses 2:0 through 2:n. Both should have SCSI Bus
    Sharing set to NONE.
15. At this point you should be ready to create the clustered volume (with host-side striping) and the
    clustered file system required for Informatica Enterprise Grid (a minimal command sketch follows). For
    more on how to set this up, consult the PowerCenter_HAGrid_Pre-Install_Checklist.
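
The following is a minimal sketch of what step 15 typically looks like with Veritas commands run from the
CVM master node; the diskgroup, volume, disk, and mount point names are hypothetical, and the
PowerCenter_HAGrid_Pre-Install_Checklist and Veritas documentation remain the authoritative references:

# Create a shared (clustered) diskgroup from the multi-writer vmdk devices
vxdg -s init infa_dg infa_disk01 infa_disk02 infa_disk03 infa_disk04
# Create a clustered volume with host-side striping across those disks
vxassist -g infa_dg make infa_vol 500g layout=stripe ncol=4
# Lay down the VxFS file system on the new volume
mkfs -t vxfs /dev/vx/rdsk/infa_dg/infa_vol
# Register the cluster mount and mount it on every node as the shared location
cfsmntadm add infa_dg infa_vol /Informatica_shared all=rw
cfsmount /Informatica_shared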

Where can I find more information?


High Availability/Grid
The PowerCenter_HAGrid_Pre-Install_Checklist is a client-facing build guide that contains extremely specific
information related to requirements for implementing Informatica Platform HA or PowerCenter HA. The
document also goes into detail on related topics (ICC, operating system profiles, and 3rd-party requirements).
The H2L library on my.informatica.com also contains a higher-level client-facing document that addresses
minimum requirements and supported configurations for Informatica Platform HA and PowerCenter HA:
How to Achieve Greater Availability in Enterprise Data Integration Systems
https://communities.informatica.com/docs/DOC-2813

Glossary of Terms
i. CP = Coordination Point (an alternative HW-based I/O fencing mechanism)
ii. LVM = Logical Volume Manager (usually an operating system feature providing simplified management of
    raw volumes)
iii. SonG = Session on Grid
iv. KB = Knowledge Base
v. HA = High Availability, defined as an environment built with all the required hardware and software
   ensuring there are no single points of failure
vi. HACOE = High Availability Center of Excellence, the Informatica internal HA environment used to
    demonstrate best practices and prove performance between virtual and native environment foundations
vii. ICC = Integration Competency Center
viii. LUN = Logical Unit Number, used to identify storage presented from storage array units to hosts as devices
ix. MAC Address = an address used in the Media Access Control protocol sub-layer of the OSI reference model
x. OSI = Open Systems Interconnection, a logical standards-based network architecture model
xi. POSIX = Portable Operating System Interface [for Unix]
xii. SCSI = Small Computer System Interface, a standards-based bus used to access local and remote host
     devices
xiii. SPOF = Single Point of Failure
xiv. Grid = a cluster of smaller computing entities engineered for coordinated computing and greater
     horizontal scaling
xv. VxSFCFSHA / VxSFCFS Enterprise HA = Veritas Storage Foundation Suite Cluster File System with the VCS
    (Veritas Cluster Server) components
xvi. SFHA / VxSFHA = Veritas Storage Foundation Suite for High Availability, a superset of the VxSF code
     bases including SFRAC, SFCFSHA, SFCFS, etc.


Sources
Symantec/Veritas SFHA product guides available at http://vos.symantec.com/
VMware product guides
VMware vSphere 4.1 Install, Configure, Manage Student Manuals vol1&2
